https://en.wikipedia.org/wiki/Photon
Photon
A photon () is an elementary particle that is a quantum of the electromagnetic field, including electromagnetic radiation such as light and radio waves, and the force carrier for the electromagnetic force. Photons are massless particles that always move at the speed of light measured in vacuum. The photon belongs to the class of boson particles. As with other elementary particles, photons are best explained by quantum mechanics and exhibit wave–particle duality, their behavior featuring properties of both waves and particles. The modern photon concept originated during the first two decades of the 20th century with the work of Albert Einstein, who built upon the research of Max Planck. While Planck was trying to explain how matter and electromagnetic radiation could be in thermal equilibrium with one another, he proposed that the energy stored within a material object should be regarded as composed of an integer number of discrete, equal-sized parts. To explain the photoelectric effect, Einstein introduced the idea that light itself is made of discrete units of energy. In 1926, Gilbert N. Lewis popularized the term photon for these energy units. Subsequently, many other experiments validated Einstein's approach. In the Standard Model of particle physics, photons and other elementary particles are described as a necessary consequence of physical laws having a certain symmetry at every point in spacetime. The intrinsic properties of particles, such as charge, mass, and spin, are determined by gauge symmetry. The photon concept has led to momentous advances in experimental and theoretical physics, including lasers, Bose–Einstein condensation, quantum field theory, and the probabilistic interpretation of quantum mechanics. It has been applied to photochemistry, high-resolution microscopy, and measurements of molecular distances. Moreover, photons have been studied as elements of quantum computers, and for applications in optical imaging and optical communication such as quantum cryptography. Nomenclature The word quanta (singular quantum, Latin for how much) was used before 1900 to mean particles or amounts of different quantities, including electricity. In 1900, the German physicist Max Planck was studying black-body radiation, and he suggested that the experimental observations, specifically at shorter wavelengths, would be explained if the energy stored within a molecule was a "discrete quantity composed of an integral number of finite equal parts", which he called "energy elements". In 1905, Albert Einstein published a paper in which he proposed that many light-related phenomena—including black-body radiation and the photoelectric effect—would be better explained by modelling electromagnetic waves as consisting of spatially localized, discrete wave-packets. He called such a wave-packet a light quantum (German: ein Lichtquant). The name photon derives from the Greek word for light, (transliterated phôs). Arthur Compton used photon in 1928, referring to Gilbert N. Lewis, who coined the term in a letter to Nature on 18 December 1926. The same name was used earlier but was never widely adopted before Lewis: in 1916 by the American physicist and psychologist Leonard T. Troland, in 1921 by the Irish physicist John Joly, in 1924 by the French physiologist René Wurmser (1890–1993), and in 1926 by the French physicist Frithiof Wolfers (1891–1971). 
The name was suggested initially as a unit related to the illumination of the eye and the resulting sensation of light and was used later in a physiological context. Although Wolfers's and Lewis's theories were contradicted by many experiments and never accepted, the new name was adopted by most physicists very soon after Compton used it. In physics, a photon is usually denoted by the symbol γ (the Greek letter gamma). This symbol for the photon probably derives from gamma rays, which were discovered in 1900 by Paul Villard, named by Ernest Rutherford in 1903, and shown to be a form of electromagnetic radiation in 1914 by Rutherford and Edward Andrade. In chemistry and optical engineering, photons are usually symbolized by hν, which is the photon energy, where h is the Planck constant and the Greek letter ν (nu) is the photon's frequency. Physical properties The photon has no electric charge, is generally considered to have zero rest mass and is a stable particle. The experimental upper limit on the photon mass is very small, on the order of 10^−50 kg; its lifetime would be more than 10^18 years. For comparison the age of the universe is about 1.38 × 10^10 years. In a vacuum, a photon has two possible polarization states. The photon is the gauge boson for electromagnetism, and therefore all other quantum numbers of the photon (such as lepton number, baryon number, and flavour quantum numbers) are zero. Also, the photon obeys Bose–Einstein statistics, and not Fermi–Dirac statistics; that is, photons do not obey the Pauli exclusion principle, and more than one can occupy the same bound quantum state. Photons are emitted in many natural processes. For example, when a charge is accelerated it emits synchrotron radiation. During a molecular, atomic or nuclear transition to a lower energy level, photons of various energy will be emitted, ranging from radio waves to gamma rays. Photons can also be emitted when a particle and its corresponding antiparticle are annihilated (for example, electron–positron annihilation). Relativistic energy and momentum In empty space, the photon moves at c (the speed of light) and its energy and momentum are related by E = pc, where p is the magnitude of the momentum vector p. This derives from the following relativistic relation, with m = 0: E² = p²c² + m²c⁴. The energy and momentum of a photon depend only on its frequency (ν) or inversely, its wavelength (λ): E = ℏω = hν = hc/λ and p = ℏk, where k is the wave vector, whose magnitude k = 2π/λ is the wave number, ω = 2πν is the angular frequency, and ℏ = h/2π is the reduced Planck constant. Since k points in the direction of the photon's propagation, the magnitude of its momentum is p = ℏk = hν/c = h/λ. Polarization and spin angular momentum The photon also carries spin angular momentum, which is related to photon polarization. (Beams of light also exhibit properties described as orbital angular momentum of light). The angular momentum of the photon has two possible values, either +ℏ or −ℏ. These two possible values correspond to the two possible pure states of circular polarization. Collections of photons in a light beam may have mixtures of these two values; a linearly polarized light beam will act as if it were composed of equal numbers of the two possible angular momenta. The spin angular momentum of light does not depend on its frequency, and was experimentally verified by C. V. Raman and S. Bhagavantam in 1931. Antiparticle annihilation The collision of a particle with its antiparticle can create photons.
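As a quick numerical illustration of the relations E = hν = hc/λ and p = h/λ above, the following minimal Python sketch (my own illustrative example, not from the article; the 532 nm wavelength is an assumed green-laser value) computes a photon's frequency, energy and momentum and confirms E = pc.

```python
# Photon energy and momentum from wavelength: E = h*c/lambda, p = h/lambda, and E = p*c.
h = 6.62607015e-34   # Planck constant, J*s (exact SI value)
c = 299792458.0      # speed of light, m/s (exact SI value)
eV = 1.602176634e-19 # joules per electronvolt (exact SI value)

wavelength = 532e-9            # illustrative green-laser wavelength, m (assumed example)
nu = c / wavelength            # frequency, Hz
E = h * nu                     # photon energy, J
p = h / wavelength             # photon momentum, kg*m/s

print(f"frequency = {nu:.3e} Hz")
print(f"energy    = {E:.3e} J = {E/eV:.3f} eV")
print(f"momentum  = {p:.3e} kg m/s")
print(f"E - p*c   = {E - p*c:.1e} J (zero up to rounding)")
```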
In free space at least two photons must be created since, in the center of momentum frame, the colliding antiparticles have no net momentum, whereas a single photon always has momentum (determined by the photon's frequency or wavelength, which cannot be zero). Hence, conservation of momentum (or equivalently, translational invariance) requires that at least two photons are created, with zero net momentum. The energy of the two photons, or, equivalently, their frequency, may be determined from conservation of four-momentum. Seen another way, the photon can be considered as its own antiparticle (thus an "antiphoton" is simply a normal photon with opposite momentum, equal polarization, and 180° out of phase). The reverse process, pair production, is the dominant mechanism by which high-energy photons such as gamma rays lose energy while passing through matter. That process is the reverse of "annihilation to one photon" allowed in the electric field of an atomic nucleus. The classical formulae for the energy and momentum of electromagnetic radiation can be re-expressed in terms of photon events. For example, the pressure of electromagnetic radiation on an object derives from the transfer of photon momentum per unit time and unit area to that object, since pressure is force per unit area and force is the change in momentum per unit time. Experimental checks on photon mass Current commonly accepted physical theories imply or assume the photon to be strictly massless. If photons were not purely massless, their speeds would vary with frequency, with lower-energy (redder) photons moving slightly slower than higher-energy photons. Relativity would be unaffected by this; the so-called speed of light, c, would then not be the actual speed at which light moves, but a constant of nature which is the upper bound on speed that any object could theoretically attain in spacetime. Thus, it would still be the speed of spacetime ripples (gravitational waves and gravitons), but it would not be the speed of photons. If a photon did have non-zero mass, there would be other effects as well. Coulomb's law would be modified and the electromagnetic field would have an extra physical degree of freedom. These effects yield more sensitive experimental probes of the photon mass than the frequency dependence of the speed of light. If Coulomb's law is not exactly valid, then that would allow the presence of an electric field to exist within a hollow conductor when it is subjected to an external electric field. This provides a means for precision tests of Coulomb's law. A null result of such an experiment has set a limit of . Sharper upper limits on the mass of light have been obtained in experiments designed to detect effects caused by the galactic vector potential. Although the galactic vector potential is large because the galactic magnetic field exists on great length scales, only the magnetic field would be observable if the photon is massless. In the case that the photon has mass, the mass term mAA would affect the galactic plasma. The fact that no such effects are seen implies an upper bound on the photon mass of . The galactic vector potential can also be probed directly by measuring the torque exerted on a magnetized ring. Such methods were used to obtain the sharper upper limit of (the equivalent of ) given by the Particle Data Group. These sharp limits from the non-observation of the effects caused by the galactic vector potential have been shown to be model-dependent. 
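The points above about annihilation kinematics and radiation pressure can be made concrete with a short, hedged sketch. The numbers below are standard constants plus assumed illustrative inputs (the solar irradiance figure is only an example); the code is not from the article.

```python
# Two consequences of photon momentum, as order-of-magnitude checks.
c = 299792458.0               # speed of light, m/s
m_e = 9.1093837015e-31        # electron mass, kg
eV = 1.602176634e-19          # joules per electronvolt

# 1) Electron-positron annihilation at rest: zero net momentum forces (at least)
#    two photons, emitted back to back, each carrying one particle's rest energy.
E_photon = m_e * c**2                       # energy per photon, J
p_photon = E_photon / c                     # momentum per photon, kg m/s
print(f"each photon: {E_photon/eV/1e3:.1f} keV, momentum {p_photon:.2e} kg m/s")

# 2) Radiation pressure = photon momentum delivered per unit time and area.
#    For a perfectly absorbing surface P = I/c; a perfect mirror doubles it.
I = 1361.0                                  # solar irradiance at Earth, W/m^2 (illustrative)
print(f"absorbing surface: {I/c:.2e} Pa, mirror: {2*I/c:.2e} Pa")
```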
If the photon mass is generated via the Higgs mechanism then the upper limit of from the test of Coulomb's law is valid. Historical development In most theories up to the eighteenth century, light was pictured as being made of particles. Since particle models cannot easily account for the refraction, diffraction and birefringence of light, wave theories of light were proposed by René Descartes (1637), Robert Hooke (1665), and Christiaan Huygens (1678); however, particle models remained dominant, chiefly due to the influence of Isaac Newton. In the early 19th century, Thomas Young and August Fresnel clearly demonstrated the interference and diffraction of light, and by 1850 wave models were generally accepted. James Clerk Maxwell's 1865 prediction that light was an electromagnetic wave – which was confirmed experimentally in 1888 by Heinrich Hertz's detection of radio waves – seemed to be the final blow to particle models of light. The Maxwell wave theory, however, does not account for all properties of light. The Maxwell theory predicts that the energy of a light wave depends only on its intensity, not on its frequency; nevertheless, several independent types of experiments show that the energy imparted by light to atoms depends only on the light's frequency, not on its intensity. For example, some chemical reactions are provoked only by light of frequency higher than a certain threshold; light of frequency lower than the threshold, no matter how intense, does not initiate the reaction. Similarly, electrons can be ejected from a metal plate by shining light of sufficiently high frequency on it (the photoelectric effect); the energy of the ejected electron is related only to the light's frequency, not to its intensity. At the same time, investigations of black-body radiation carried out over four decades (1860–1900) by various researchers culminated in Max Planck's hypothesis that the energy of any system that absorbs or emits electromagnetic radiation of frequency is an integer multiple of an energy quantum As shown by Albert Einstein, some form of energy quantization must be assumed to account for the thermal equilibrium observed between matter and electromagnetic radiation; for this explanation of the photoelectric effect, Einstein received the 1921 Nobel Prize in physics. Since the Maxwell theory of light allows for all possible energies of electromagnetic radiation, most physicists assumed initially that the energy quantization resulted from some unknown constraint on the matter that absorbs or emits the radiation. In 1905, Einstein was the first to propose that energy quantization was a property of electromagnetic radiation itself. Although he accepted the validity of Maxwell's theory, Einstein pointed out that many anomalous experiments could be explained if the energy of a Maxwellian light wave were localized into point-like quanta that move independently of one another, even if the wave itself is spread continuously over space. In 1909 and 1916, Einstein showed that, if Planck's law regarding black-body radiation is accepted, the energy quanta must also carry momentum making them full-fledged particles. This photon momentum was observed experimentally by Arthur Compton, for which he received the Nobel Prize in 1927. The pivotal question then, was how to unify Maxwell's wave theory of light with its experimentally observed particle nature. 
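Einstein's photoelectric relation, K_max = hν − φ with φ the metal's work function, is the quantitative form of the frequency threshold described above. A minimal sketch, assuming an illustrative work function of 2.3 eV (not a value given in the article):

```python
# Photoelectric effect: the ejected electron's maximum kinetic energy depends on
# frequency only, K_max = h*nu - phi; below the threshold nu_0 = phi/h nothing is emitted.
h = 6.62607015e-34
c = 299792458.0
eV = 1.602176634e-19

phi = 2.3 * eV                 # assumed work function of a generic metal surface, J
nu_threshold = phi / h
print(f"threshold frequency ~ {nu_threshold:.2e} Hz "
      f"(wavelength ~ {c/nu_threshold*1e9:.0f} nm)")

for wavelength_nm in (700, 500, 400, 250):          # red ... ultraviolet, illustrative
    nu = c / (wavelength_nm * 1e-9)
    K_max = h * nu - phi
    status = f"K_max = {K_max/eV:.2f} eV" if K_max > 0 else "no emission"
    print(f"{wavelength_nm:4d} nm: {status}")
```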
The answer to this question occupied Albert Einstein for the rest of his life, and was solved in quantum electrodynamics and its successor, the Standard Model. (See and , below.) Einstein's 1905 predictions were verified experimentally in several ways in the first two decades of the 20th century, as recounted in Robert Millikan's Nobel lecture. However, before Compton's experiment showed that photons carried momentum proportional to their wave number (1922), most physicists were reluctant to believe that electromagnetic radiation itself might be particulate. (See, for example, the Nobel lectures of Wien, Planck and Millikan.) Instead, there was a widespread belief that energy quantization resulted from some unknown constraint on the matter that absorbed or emitted radiation. Attitudes changed over time. In part, the change can be traced to experiments such as those revealing Compton scattering, where it was much more difficult not to ascribe quantization to light itself to explain the observed results. Even after Compton's experiment, Niels Bohr, Hendrik Kramers and John Slater made one last attempt to preserve the Maxwellian continuous electromagnetic field model of light, the so-called BKS theory. An important feature of the BKS theory is how it treated the conservation of energy and the conservation of momentum. In the BKS theory, energy and momentum are only conserved on the average across many interactions between matter and radiation. However, refined Compton experiments showed that the conservation laws hold for individual interactions. Accordingly, Bohr and his co-workers gave their model "as honorable a funeral as possible". Nevertheless, the failures of the BKS model inspired Werner Heisenberg in his development of matrix mechanics. A few physicists persisted in developing semiclassical models in which electromagnetic radiation is not quantized, but matter appears to obey the laws of quantum mechanics. Although the evidence from chemical and physical experiments for the existence of photons was overwhelming by the 1970s, this evidence could not be considered as absolutely definitive; since it relied on the interaction of light with matter, and a sufficiently complete theory of matter could in principle account for the evidence. Nevertheless, all semiclassical theories were refuted definitively in the 1970s and 1980s by photon-correlation experiments. Hence, Einstein's hypothesis that quantization is a property of light itself is considered to be proven. Wave–particle duality and uncertainty principles Photons obey the laws of quantum mechanics, and so their behavior has both wave-like and particle-like aspects. When a photon is detected by a measuring instrument, it is registered as a single, particulate unit. However, the probability of detecting a photon is calculated by equations that describe waves. This combination of aspects is known as wave–particle duality. For example, the probability distribution for the location at which a photon might be detected displays clearly wave-like phenomena such as diffraction and interference. A single photon passing through a double slit has its energy received at a point on the screen with a probability distribution given by its interference pattern determined by Maxwell's wave equations. However, experiments confirm that the photon is not a short pulse of electromagnetic radiation; a photon's Maxwell waves will diffract, but photon energy does not spread out as it propagates, nor does this energy divide when it encounters a beam splitter. 
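The statement that wave equations give only the probability of detecting each whole photon can be illustrated with a toy simulation: single-photon detection positions drawn from the classical two-slit interference intensity. This is an idealized sketch (point-like slits, small-angle approximation, no single-slit envelope; the slit separation and distances are assumed values), not a model of any specific experiment.

```python
# Single photons at a double slit: each detection is one whole photon, but the
# probability of where it lands follows the classical interference intensity.
import numpy as np

rng = np.random.default_rng(0)
wavelength = 650e-9      # m, illustrative red laser
d = 250e-6               # slit separation, m (assumed)
L = 1.0                  # slit-to-screen distance, m (assumed)

x = np.linspace(-6.5e-3, 6.5e-3, 2001)             # screen coordinate, m
intensity = np.cos(np.pi * d * x / (wavelength * L)) ** 2
prob = intensity / intensity.sum()                 # normalized detection probability

hits = rng.choice(x, size=20000, p=prob)           # 20,000 single-photon detections
counts, _ = np.histogram(hits, bins=50)
print("photon counts per screen bin (fringes appear as alternating high/low):")
print(counts)
```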
Rather, the received photon acts like a point-like particle since it is absorbed or emitted as a whole by arbitrarily small systems, including systems much smaller than its wavelength, such as an atomic nucleus (≈10−15 m across) or even the point-like electron. While many introductory texts treat photons using the mathematical techniques of non-relativistic quantum mechanics, this is in some ways an awkward oversimplification, as photons are by nature intrinsically relativistic. Because photons have zero rest mass, no wave function defined for a photon can have all the properties familiar from wave functions in non-relativistic quantum mechanics. In order to avoid these difficulties, physicists employ the second-quantized theory of photons described below, quantum electrodynamics, in which photons are quantized excitations of electromagnetic modes. Another difficulty is finding the proper analogue for the uncertainty principle, an idea frequently attributed to Heisenberg, who introduced the concept in analyzing a thought experiment involving an electron and a high-energy photon. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant. The precise mathematical statement of the position–momentum uncertainty principle is due to Kennard, Pauli, and Weyl. The uncertainty principle applies to situations where an experimenter has a choice of measuring either one of two "canonically conjugate" quantities, like the position and the momentum of a particle. According to the uncertainty principle, no matter how the particle is prepared, it is not possible to make a precise prediction for both of the two alternative measurements: if the outcome of the position measurement is made more certain, the outcome of the momentum measurement becomes less so, and vice versa. A coherent state minimizes the overall uncertainty as far as quantum mechanics allows. Quantum optics makes use of coherent states for modes of the electromagnetic field. There is a tradeoff, reminiscent of the position–momentum uncertainty relation, between measurements of an electromagnetic wave's amplitude and its phase. This is sometimes informally expressed in terms of the uncertainty in the number of photons present in the electromagnetic wave, , and the uncertainty in the phase of the wave, . However, this cannot be an uncertainty relation of the Kennard–Pauli–Weyl type, since unlike position and momentum, the phase cannot be represented by a Hermitian operator. Bose–Einstein model of a photon gas In 1924, Satyendra Nath Bose derived Planck's law of black-body radiation without using any electromagnetism, but rather by using a modification of coarse-grained counting of phase space. Einstein showed that this modification is equivalent to assuming that photons are rigorously identical and that it implied a "mysterious non-local interaction", now understood as the requirement for a symmetric quantum mechanical state. This work led to the concept of coherent states and the development of the laser. In the same papers, Einstein extended Bose's formalism to material particles (bosons) and predicted that they would condense into their lowest quantum state at low enough temperatures; this Bose–Einstein condensation was observed experimentally in 1995. It was later used by Lene Hau to slow, and then completely stop, light in 1999 and 2001. The modern view on this is that photons are, by virtue of their integer spin, bosons (as opposed to fermions with half-integer spin). 
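The number–phase tradeoff mentioned above can be made tangible with a heuristic estimate for a coherent state: Poissonian photon-number spread Δn = √n̄ against a phase spread of roughly 1/(2√n̄). The sketch below is only this back-of-the-envelope estimate with assumed mean photon numbers, not a rigorous uncertainty relation of the Kennard–Pauli–Weyl type.

```python
# Heuristic number-phase tradeoff for a coherent state: Poissonian photon-number
# spread (delta_n = sqrt(nbar)) versus a phase spread of roughly 1/(2*sqrt(nbar)).
# The product stays near 1/2, echoing the position-momentum relation.
import math

for nbar in (1, 100, 1e6, 1e12):        # illustrative mean photon numbers
    delta_n = math.sqrt(nbar)           # photon-number uncertainty (Poissonian)
    delta_phi = 1.0 / (2.0 * delta_n)   # approximate phase uncertainty (heuristic)
    print(f"nbar = {nbar:>13.0f}: delta_n = {delta_n:.1e}, "
          f"delta_phi ~ {delta_phi:.1e} rad, product = {delta_n * delta_phi:.2f}")
```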
By the spin-statistics theorem, all bosons obey Bose–Einstein statistics (whereas all fermions obey Fermi–Dirac statistics). Stimulated and spontaneous emission In 1916, Albert Einstein showed that Planck's radiation law could be derived from a semi-classical, statistical treatment of photons and atoms, which implies a link between the rates at which atoms emit and absorb photons. The condition follows from the assumption that the emission and absorption of radiation by the atoms are independent processes, and that thermal equilibrium is made by way of the radiation's interaction with the atoms. Consider a cavity in thermal equilibrium with all parts of itself, filled with electromagnetic radiation and with atoms that can emit and absorb that radiation. Thermal equilibrium requires that the energy density ρ(ν) of photons with frequency ν (which is proportional to their number density) is, on average, constant in time; hence, the rate at which photons of any particular frequency are emitted must equal the rate at which they are absorbed. Einstein began by postulating simple proportionality relations for the different reaction rates involved. In his model, the rate R₁₂ for a system to absorb a photon of frequency ν and transition from a lower energy E₁ to a higher energy E₂ is proportional to the number N₁ of atoms with energy E₁ and to the energy density ρ(ν) of ambient photons of that frequency, R₁₂ = N₁B₁₂ρ(ν), where B₁₂ is the rate constant for absorption. For the reverse process, there are two possibilities: spontaneous emission of a photon, or the emission of a photon initiated by the interaction of the atom with a passing photon and the return of the atom to the lower-energy state. Following Einstein's approach, the corresponding rate R₂₁ for the emission of photons of frequency ν and transition from a higher energy E₂ to a lower energy E₁ is R₂₁ = N₂A₂₁ + N₂B₂₁ρ(ν), where A₂₁ is the rate constant for emitting a photon spontaneously, and B₂₁ is the rate constant for emissions in response to ambient photons (induced or stimulated emission). In thermodynamic equilibrium, the number of atoms in state 1 and those in state 2 must, on average, be constant; hence, the rates R₁₂ and R₂₁ must be equal. Also, by arguments analogous to the derivation of Boltzmann statistics, the ratio of N₁ and N₂ is N₁/N₂ = (g₁/g₂) exp((E₂ − E₁)/kT), where g₁ and g₂ are the degeneracy of state 1 and that of state 2, respectively, E₁ and E₂ their energies, k the Boltzmann constant and T the system's temperature. From this, it is readily derived that g₁B₁₂ = g₂B₂₁ and A₂₁ = (8πhν³/c³)B₂₁. The A₂₁, B₂₁ and B₁₂ are collectively known as the Einstein coefficients. Einstein could not fully justify his rate equations, but claimed that it should be possible to calculate the coefficients A₂₁, B₂₁ and B₁₂ once physicists had obtained "mechanics and electrodynamics modified to accommodate the quantum hypothesis". Not long thereafter, in 1926, Paul Dirac derived the B₁₂ and B₂₁ rate constants by using a semiclassical approach, and, in 1927, succeeded in deriving all the rate constants from first principles within the framework of quantum theory. Dirac's work was the foundation of quantum electrodynamics, i.e., the quantization of the electromagnetic field itself. Dirac's approach is also called second quantization or quantum field theory; earlier quantum mechanical treatments only treat material particles as quantum mechanical, not the electromagnetic field. Einstein was troubled by the fact that his theory seemed incomplete, since it did not determine the direction of a spontaneously emitted photon.
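The two Einstein relations quoted above, g₁B₁₂ = g₂B₂₁ and A₂₁ = (8πhν³/c³)B₂₁, combine with the Boltzmann population ratio to reproduce Planck's spectral energy density. A minimal numeric check follows; the frequency, temperature and the normalization of B₂₁ are arbitrary assumed values, and the result is independent of B₂₁.

```python
# Detailed balance with the Einstein coefficients reproduces Planck's law:
# solving N1*B12*rho = N2*(A21 + B21*rho) with N1/N2 = (g1/g2)*exp(h*nu/(k*T))
# and the two Einstein relations gives the Planck spectral energy density.
import math

h = 6.62607015e-34
c = 299792458.0
k = 1.380649e-23

def rho_from_balance(nu, T, B21=1.0, g1=1, g2=1):
    """Energy density at which absorption and emission rates balance (B21 cancels)."""
    A21 = 8 * math.pi * h * nu**3 / c**3 * B21            # A21 = (8*pi*h*nu^3/c^3) * B21
    B12 = (g2 / g1) * B21                                 # g1*B12 = g2*B21
    N1_over_N2 = (g1 / g2) * math.exp(h * nu / (k * T))   # Boltzmann population ratio
    return A21 / (N1_over_N2 * B12 - B21)

def rho_planck(nu, T):
    return (8 * math.pi * h * nu**3 / c**3) / (math.exp(h * nu / (k * T)) - 1.0)

nu, T = 5e14, 3000.0      # illustrative optical frequency and temperature
print(rho_from_balance(nu, T), rho_planck(nu, T))   # the two values agree
```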
A probabilistic nature of light-particle motion was first considered by Newton in his treatment of birefringence and, more generally, of the splitting of light beams at interfaces into a transmitted beam and a reflected beam. Newton hypothesized that hidden variables in the light particle determined which of the two paths a single photon would take. Similarly, Einstein hoped for a more complete theory that would leave nothing to chance, beginning his separation from quantum mechanics. Ironically, Max Born's probabilistic interpretation of the wave function was inspired by Einstein's later work searching for a more complete theory. Quantum field theory Quantization of the electromagnetic field In 1910, Peter Debye derived Planck's law of black-body radiation from a relatively simple assumption. He decomposed the electromagnetic field in a cavity into its Fourier modes, and assumed that the energy in any mode was an integer multiple of , where is the frequency of the electromagnetic mode. Planck's law of black-body radiation follows immediately as a geometric sum. However, Debye's approach failed to give the correct formula for the energy fluctuations of black-body radiation, which were derived by Einstein in 1909. In 1925, Born, Heisenberg and Jordan reinterpreted Debye's concept in a key way. As may be shown classically, the Fourier modes of the electromagnetic field—a complete set of electromagnetic plane waves indexed by their wave vector k and polarization state—are equivalent to a set of uncoupled simple harmonic oscillators. Treated quantum mechanically, the energy levels of such oscillators are known to be , where is the oscillator frequency. The key new step was to identify an electromagnetic mode with energy as a state with photons, each of energy . This approach gives the correct energy fluctuation formula. Dirac took this one step further. He treated the interaction between a charge and an electromagnetic field as a small perturbation that induces transitions in the photon states, changing the numbers of photons in the modes, while conserving energy and momentum overall. Dirac was able to derive Einstein's and coefficients from first principles, and showed that the Bose–Einstein statistics of photons is a natural consequence of quantizing the electromagnetic field correctly (Bose's reasoning went in the opposite direction; he derived Planck's law of black-body radiation by assuming B–E statistics). In Dirac's time, it was not yet known that all bosons, including photons, must obey Bose–Einstein statistics. Dirac's second-order perturbation theory can involve virtual photons, transient intermediate states of the electromagnetic field; the static electric and magnetic interactions are mediated by such virtual photons. In such quantum field theories, the probability amplitude of observable events is calculated by summing over all possible intermediate steps, even ones that are unphysical; hence, virtual photons are not constrained to satisfy , and may have extra polarization states; depending on the gauge used, virtual photons may have three or four polarization states, instead of the two states of real photons. Although these transient virtual photons can never be observed, they contribute measurably to the probabilities of observable events. Indeed, such second-order and higher-order perturbation calculations can give apparently infinite contributions to the sum. Such unphysical results are corrected for using the technique of renormalization. 
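The remark above that Planck's law "follows immediately as a geometric sum" can be verified directly: if a mode of frequency ν can only hold energies nhν, its thermal mean energy is a ratio of geometric-type sums and equals hν/(e^(hν/kT) − 1). A small sketch with assumed ν and T:

```python
# Mean energy of one electromagnetic mode when its energy is restricted to n*h*nu:
# <E> = sum(n*h*nu*exp(-n*h*nu/kT)) / sum(exp(-n*h*nu/kT)) = h*nu / (exp(h*nu/kT) - 1).
import math

h = 6.62607015e-34
k = 1.380649e-23

def mean_energy_sum(nu, T, n_max=2000):
    x = h * nu / (k * T)
    weights = [math.exp(-n * x) for n in range(n_max)]       # Boltzmann factors
    energies = [n * h * nu for n in range(n_max)]            # allowed mode energies
    return sum(e * w for e, w in zip(energies, weights)) / sum(weights)

def mean_energy_closed(nu, T):
    return h * nu / (math.exp(h * nu / (k * T)) - 1.0)

nu, T = 1e13, 300.0          # illustrative infrared frequency at room temperature
print(mean_energy_sum(nu, T), mean_energy_closed(nu, T))     # the two values agree
```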
Other virtual particles may contribute to the summation as well; for example, two photons may interact indirectly through virtual electron–positron pairs. Such photon–photon scattering (see two-photon physics), as well as electron–photon scattering, is meant to be one of the modes of operations of the planned particle accelerator, the International Linear Collider. In modern physics notation, the quantum state of the electromagnetic field is written as a Fock state, a tensor product of the states for each electromagnetic mode where represents the state in which photons are in the mode . In this notation, the creation of a new photon in mode (e.g., emitted from an atomic transition) is written as . This notation merely expresses the concept of Born, Heisenberg and Jordan described above, and does not add any physics. As a gauge boson The electromagnetic field can be understood as a gauge field, i.e., as a field that results from requiring that a gauge symmetry holds independently at every position in spacetime. For the electromagnetic field, this gauge symmetry is the Abelian U(1) symmetry of complex numbers of absolute value 1, which reflects the ability to vary the phase of a complex field without affecting observables or real valued functions made from it, such as the energy or the Lagrangian. The quanta of an Abelian gauge field must be massless, uncharged bosons, as long as the symmetry is not broken; hence, the photon is predicted to be massless, and to have zero electric charge and integer spin. The particular form of the electromagnetic interaction specifies that the photon must have spin ±1; thus, its helicity must be . These two spin components correspond to the classical concepts of right-handed and left-handed circularly polarized light. However, the transient virtual photons of quantum electrodynamics may also adopt unphysical polarization states. In the prevailing Standard Model of physics, the photon is one of four gauge bosons in the electroweak interaction; the other three are denoted W+, W− and Z0 and are responsible for the weak interaction. Unlike the photon, these gauge bosons have mass, owing to a mechanism that breaks their SU(2) gauge symmetry. The unification of the photon with W and Z gauge bosons in the electroweak interaction was accomplished by Sheldon Glashow, Abdus Salam and Steven Weinberg, for which they were awarded the 1979 Nobel Prize in physics. Physicists continue to hypothesize grand unified theories that connect these four gauge bosons with the eight gluon gauge bosons of quantum chromodynamics; however, key predictions of these theories, such as proton decay, have not been observed experimentally. Hadronic properties Measurements of the interaction between energetic photons and hadrons show that the interaction is much more intense than expected by the interaction of merely photons with the hadron's electric charge. Furthermore, the interaction of energetic photons with protons is similar to the interaction of photons with neutrons in spite of the fact that the electric charge structures of protons and neutrons are substantially different. A theory called Vector Meson Dominance (VMD) was developed to explain this effect. According to VMD, the photon is a superposition of the pure electromagnetic photon which interacts only with electric charges and vector mesons. 
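The Fock-state bookkeeping described here can be sketched with truncated matrices for a single mode: the creation operator adds a photon with the usual √(n+1) factor, and the number operator counts photons. The truncation dimension below is an arbitrary choice for illustration; this is a toy sketch of the notation, not of quantum electrodynamics proper.

```python
# Single-mode Fock-space bookkeeping with truncated matrices: the creation operator
# a_dag adds one photon (a_dag|n> = sqrt(n+1)|n+1>) and n_op = a_dag @ a counts photons.
import numpy as np

dim = 6                                          # keep photon numbers 0..5 (truncation)
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)     # annihilation operator: a|n> = sqrt(n)|n-1>
a_dag = a.conj().T                               # creation operator
n_op = a_dag @ a                                 # photon-number operator

vac = np.zeros(dim)
vac[0] = 1.0                                     # |0>, the vacuum state
one_photon = a_dag @ vac                         # "emit" one photon into the mode
two_photon = a_dag @ one_photon / np.sqrt(2)     # normalize a_dag|1> = sqrt(2)|2>

print("photon number in |0>, |1>, |2>:",
      [float(v @ n_op @ v) for v in (vac, one_photon, two_photon)])
print("[a, a_dag] acts as the identity on the kept levels?",
      np.allclose((a @ a_dag - a_dag @ a)[:dim-1, :dim-1], np.eye(dim-1)))
```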
However, if experimentally probed at very short distances, the intrinsic structure of the photon is recognized as a flux of quark and gluon components, quasi-free according to asymptotic freedom in QCD and described by the photon structure function. A comprehensive comparison of data with theoretical predictions was presented in a review in 2000. Contributions to the mass of a system The energy of a system that emits a photon is decreased by the energy of the photon as measured in the rest frame of the emitting system, which may result in a reduction in mass in the amount . Similarly, the mass of a system that absorbs a photon is increased by a corresponding amount. As an application, the energy balance of nuclear reactions involving photons is commonly written in terms of the masses of the nuclei involved, and terms of the form for the gamma photons (and for other relevant energies, such as the recoil energy of nuclei). This concept is applied in key predictions of quantum electrodynamics (QED, see above). In that theory, the mass of electrons (or, more generally, leptons) is modified by including the mass contributions of virtual photons, in a technique known as renormalization. Such "radiative corrections" contribute to a number of predictions of QED, such as the magnetic dipole moment of leptons, the Lamb shift, and the hyperfine structure of bound lepton pairs, such as muonium and positronium. Since photons contribute to the stress–energy tensor, they exert a gravitational attraction on other objects, according to the theory of general relativity. Conversely, photons are themselves affected by gravity; their normally straight trajectories may be bent by warped spacetime, as in gravitational lensing, and their frequencies may be lowered by moving to a higher gravitational potential, as in the Pound–Rebka experiment. However, these effects are not specific to photons; exactly the same effects would be predicted for classical electromagnetic waves. In matter Light that travels through transparent matter does so at a lower speed than c, the speed of light in vacuum. The factor by which the speed is decreased is called the refractive index of the material. In a classical wave picture, the slowing can be explained by the light inducing electric polarization in the matter, the polarized matter radiating new light, and that new light interfering with the original light wave to form a delayed wave. In a particle picture, the slowing can instead be described as a blending of the photon with quantum excitations of the matter to produce quasi-particles known as polariton (see this list for some other quasi-particles); this polariton has a nonzero effective mass, which means that it cannot travel at c. Light of different frequencies may travel through matter at different speeds; this is called dispersion (not to be confused with scattering). In some cases, it can result in extremely slow speeds of light in matter. The effects of photon interactions with other quasi-particles may be observed directly in Raman scattering and Brillouin scattering. Photons can be scattered by matter. For example, photons engage in so many collisions on the way from the core of the Sun that radiant energy can take about a million years to reach the surface; however, once in open space, a photon takes only 8.3 minutes to reach Earth. Photons can also be absorbed by nuclei, atoms or molecules, provoking transitions between their energy levels. 
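Two of the statements in this passage are easy to check numerically: the mass equivalent Δm = E/c² carried away by an emitted photon, and the roughly 8.3-minute light travel time from the Sun to Earth. The 1.33 MeV gamma energy below is an assumed example value, not a figure from the article.

```python
# (1) Mass equivalent carried away by an emitted photon, delta_m = E / c^2.
# (2) Light travel time over the mean Sun-Earth distance (~8.3 minutes).
c = 299792458.0
eV = 1.602176634e-19

E_gamma = 1.33e6 * eV                 # assumed example: a 1.33 MeV nuclear gamma ray
print(f"mass carried away: {E_gamma / c**2:.2e} kg")

au = 1.495978707e11                   # mean Sun-Earth distance, m
print(f"light travel time Sun -> Earth: {au / c / 60:.1f} minutes")
```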
A classic example is the molecular transition of retinal (C20H28O), which is responsible for vision, as discovered in 1958 by Nobel laureate biochemist George Wald and co-workers. The absorption provokes a cis–trans isomerization that, in combination with other such transitions, is transduced into nerve impulses. The absorption of photons can even break chemical bonds, as in the photodissociation of chlorine; this is the subject of photochemistry. Technological applications Photons have many applications in technology. These examples are chosen to illustrate applications of photons per se, rather than general optical devices such as lenses, etc. that could operate under a classical theory of light. The laser is an important application and is discussed above under stimulated emission. Individual photons can be detected by several methods. The classic photomultiplier tube exploits the photoelectric effect: a photon of sufficient energy strikes a metal plate and knocks free an electron, initiating an ever-amplifying avalanche of electrons. Semiconductor charge-coupled device chips use a similar effect: an incident photon generates a charge on a microscopic capacitor that can be detected. Other detectors such as Geiger counters use the ability of photons to ionize gas molecules contained in the device, causing a detectable change of conductivity of the gas. Planck's energy formula is often used by engineers and chemists in design, both to compute the change in energy resulting from a photon absorption and to determine the frequency of the light emitted from a given photon emission. For example, the emission spectrum of a gas-discharge lamp can be altered by filling it with (mixtures of) gases with different electronic energy level configurations. Under some conditions, an energy transition can be excited by "two" photons that individually would be insufficient. This allows for higher resolution microscopy, because the sample absorbs energy only in the spectrum where two beams of different colors overlap significantly, which can be made much smaller than the excitation volume of a single beam (see two-photon excitation microscopy). Moreover, these photons cause less damage to the sample, since they are of lower energy. In some cases, two energy transitions can be coupled so that, as one system absorbs a photon, another nearby system "steals" its energy and re-emits a photon of a different frequency. This is the basis of fluorescence resonance energy transfer, a technique that is used in molecular biology to study the interaction of suitable proteins. Several different kinds of hardware random number generators involve the detection of single photons. In one example, for each bit in the random sequence that is to be produced, a photon is sent to a beam-splitter. In such a situation, there are two possible outcomes of equal probability. The actual outcome is used to determine whether the next bit in the sequence is "0" or "1". Quantum optics and computation Much research has been devoted to applications of photons in the field of quantum optics. Photons seem well-suited to be elements of an extremely fast quantum computer, and the quantum entanglement of photons is a focus of research. Nonlinear optical processes are another active research area, with topics such as two-photon absorption, self-phase modulation, modulational instability and optical parametric oscillators. 
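The beam-splitter random-number scheme described above can be sketched as an idealized simulation: one photon per bit, a 50/50 splitter, and two perfect detectors (losses, dark counts and bias correction are ignored). The pseudorandom generator here merely stands in for the physical randomness of the detector clicks.

```python
# Idealized quantum random-bit generator: one photon per bit at a 50/50 beam splitter;
# detector "0" or detector "1" clicks with equal probability for each photon.
import numpy as np

rng = np.random.default_rng()                 # stand-in for the physical randomness
n_bits = 32
bits = rng.integers(0, 2, size=n_bits)        # each entry records which detector clicked
value = int("".join(map(str, bits)), 2)       # pack the bit stream into an integer

print("bits:", "".join(map(str, bits)))
print("as a 32-bit integer:", value)
```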
However, such processes generally do not require the assumption of photons per se; they may often be modeled by treating atoms as nonlinear oscillators. The nonlinear process of spontaneous parametric down conversion is often used to produce single-photon states. Finally, photons are essential in some aspects of optical communication, especially for quantum cryptography. Two-photon physics studies interactions between photons, which are rare. In 2018, Massachusetts Institute of Technology researchers announced the discovery of bound photon triplets, which may involve polaritons.
https://en.wikipedia.org/wiki/Philipp%20Franz%20von%20Siebold
Philipp Franz von Siebold
Philipp Franz Balthasar von Siebold (17 February 1796 – 18 October 1866) was a German physician, botanist and traveller. He achieved prominence by his studies of Japanese flora and fauna and the introduction of Western medicine in Japan. He was the father of the first female Japanese doctor educated in Western medicine, Kusumoto Ine. Career Early life Born into a family of doctors and professors of medicine in Würzburg (then in the Prince-Bishopric of Würzburg, later part of Bavaria), Siebold initially studied medicine at the University of Würzburg from November 1815, where he became a member of the Corps Moenania Würzburg. One of his professors was Franz Xaver Heller (1775–1840), author of the ("Flora of the Grand Duchy of Würzburg", 1810–1811). Ignaz Döllinger (1770–1841), his professor of anatomy and physiology, however, most influenced him. Döllinger was one of the first professors to understand and treat medicine as a natural science. Siebold stayed with Döllinger, where he came in regular contact with other scientists. He read the books of Humboldt, a famous naturalist and explorer, which probably raised his desire to travel to distant lands. Philipp Franz von Siebold became a physician by earning his M.D. degree in 1820. He initially practiced medicine in Heidingsfeld, in the Kingdom of Bavaria, now part of Würzburg. Invited to Holland by an acquaintance of his family, Siebold applied for a position as a military physician, which would enable him to travel to the Dutch colonies. He entered the Dutch military service on 19 June 1822, and was appointed as ship's surgeon on the frigate Adriana, sailing from Rotterdam to Batavia (present-day Jakarta) in the Dutch East Indies (now called Indonesia). On his trip to Batavia on the frigate Adriana, Siebold practiced his knowledge of the Dutch language and also rapidly learned Malay. During the long voyage he also began a collection of marine fauna. He arrived in Batavia on 18 February 1823. As an army medical officer, Siebold was posted to an artillery unit. However, he was given a room for a few weeks at the residence of the Governor-General of the Dutch East Indies, Baron Godert van der Capellen, to recover from an illness. With his erudition, he impressed the Governor-General, and also the director of the botanical garden at Buitenzorg (now Bogor), Caspar Georg Carl Reinwardt. These men sensed in Siebold a worthy successor to Engelbert Kaempfer and Carl Peter Thunberg, two former resident physicians at Dejima, a Dutch trading post in Japan, the former of whom was the author of . The Batavian Academy of Arts and Sciences soon elected Siebold as a member. Arrival in Japan On 28 June 1823, after only a few months in the Dutch East Indies, Siebold was posted as resident physician and scientist to Dejima, a small artificial island and trading post at Nagasaki, and arrived there on 11 August 1823. During an eventful voyage to Japan he only just escaped drowning during a typhoon in the East China Sea. As only a very small number of Dutch personnel were allowed to live on this island, the posts of physician and scientist had to be combined. Dejima had been in the possession of the Dutch East India Company (known as the VOC) since the 17th century, but the Company had gone bankrupt in 1798, after which a trading post was operated there by the Dutch state for political considerations, with notable benefits to the Japanese. The European tradition of sending doctors with botanical training to Japan was a long one. 
Sent on a mission by the Dutch East India Company, Engelbert Kaempfer (1651–1716), a German physician and botanist who lived in Japan from 1690 until 1692, ushered in this tradition of a combination of physician and botanist. The Dutch East India Company did not, however, actually employ the Swedish botanist and physician Carl Peter Thunberg (1743–1828), who had arrived in Japan in 1775. Medical practice Japanese scientists invited Siebold to show them the marvels of western science, and he learned in return through them much about the Japanese and their customs. After curing an influential local officer, Siebold gained the permission to leave the trade post. He used this opportunity to treat Japanese patients in the greater area around the trade post. Siebold is credited with the introduction of vaccination and pathological anatomy for the first time in Japan. In 1824, Siebold started a medical school in Nagasaki, the Narutaki-juku, that grew into a meeting place for around fifty students. They helped him in his botanical and naturalistic studies. The Dutch language became the lingua franca (common spoken language) for these academic and scholarly contacts for a generation, until the Meiji Restoration. His patients paid him in kind with a variety of objects and artifacts that would later gain historical significance. These everyday objects later became the basis of his large ethnographic collection, which consisted of everyday household goods, woodblock prints, tools and hand-crafted objects used by the Japanese people. Japanese family During his stay in Japan, Siebold "lived together" with Kusumoto Taki (楠本滝), who gave birth to their daughter Kusumoto (O-)Ine in 1827. Siebold used to call his wife "Otakusa" (probably derived from O-Taki-san) and named a Hydrangea after her. Kusumoto Ine eventually became the first Japanese woman known to have received a physician's training and became a highly regarded practicing physician and court physician to the Empress in 1882. She died at court in 1903. Studies of Japanese fauna and flora His main interest, however, focused on the study of Japanese fauna and flora. He collected as much material as he could. Starting a small botanical garden behind his home (there was not much room on the small island) Siebold amassed over 1,000 native plants. In a specially built glasshouse he cultivated the Japanese plants to endure the Dutch climate. Local Japanese artists like Kawahara Keiga drew and painted images of these plants, creating botanical illustrations but also images of the daily life in Japan, which complemented his ethnographic collection. He hired Japanese hunters to track rare animals and collect specimens. Many specimens were collected with the help of his Japanese collaborators Keisuke Ito (1803–1901), Mizutani Sugeroku (1779–1833), Ōkochi Zonshin (1796–1882) and Katsuragawa Hoken (1797–1844), a physician to the shōgun. As well, Siebold's assistant and later successor, Heinrich Bürger (1806–1858), proved to be indispensable in carrying on Siebold's work in Japan. Siebold first introduced to Europe such familiar garden-plants as the Hosta and the Hydrangea otaksa. Unknown to the Japanese, he was also able to smuggle out germinative seeds of tea plants to the botanical garden in Batavia. Through this single act, he started the tea culture in Java, a Dutch colony at the time. Until then Japan had strictly guarded the trade in tea plants. Remarkably, in 1833, Java already could boast a half million tea plants. 
He also introduced Japanese knotweed (Reynoutria japonica, syn. Fallopia japonica), which has become a highly invasive weed in Europe and North America. All derive from a single female plant collected by Siebold. During his stay at Dejima, Siebold sent three shipments with an unknown number of herbarium specimens to Leiden, Ghent, Brussels and Antwerp. The shipment to Leiden contained the first specimens of the Japanese giant salamander (Andrias japonicus) to be sent to Europe. In 1825 the government of the Dutch-Indies provided him with two assistants: apothecary and mineralogist Heinrich Bürger (his later successor) and the painter Carl Hubert de Villeneuve. Each would prove to be useful to Siebold's efforts that ranged from ethnographical to botanical to horticultural, when attempting to document the exotic Eastern Japanese experience. De Villeneuve taught Kawahara the techniques of Western painting. Reportedly, Siebold was not the easiest man to deal with. He was in continuous conflict with his Dutch superiors who felt he was arrogant. This threat of conflict resulted in his recall in July 1827 back to Batavia. But the ship, the Cornelis Houtman, sent to carry him back to Batavia, was thrown ashore by a typhoon in Nagasaki bay. The same storm badly damaged Dejima and destroyed Siebold's botanical garden. Repaired, the Cornelis Houtman was refloated. It left for Batavia with 89 crates of Siebold's salvaged botanical collection, but Siebold himself remained behind in Dejima. Siebold Incident In 1826 Siebold made the court journey to Edo. During this long trip he collected many plants and animals. But he also obtained from the court astronomer Takahashi Kageyasu several detailed maps of Japan and Korea (written by Inō Tadataka), an act strictly forbidden by the Japanese government. When the Japanese discovered, by accident, that Siebold had a map of the northern parts of Japan, the government accused him of high treason and of being a spy for Russia. The Japanese placed Siebold under house arrest and expelled him from Japan on 22 October 1829. Satisfied that his Japanese collaborators would continue his work, he journeyed back on the frigate Java to his former residence, Batavia, in possession of his enormous collection of thousands of animals and plants, his books and his maps. The botanical garden of would soon house Siebold's surviving, living flora collection of 2,000 plants. He arrived in the Netherlands on 7 July 1830. His stay in Japan and Batavia had lasted for a period of eight years. Return to Europe Philipp Franz von Siebold arrived in the Netherlands in 1830, just at a time when political troubles erupted in Brussels, leading soon to Belgian independence. Hastily he salvaged his ethnographic collections in Antwerp and his herbarium specimens in Brussels and took them to Leiden, helped by Johann Baptist Fischer. He left behind his botanical collections of living plants that were sent to the University of Ghent. The consequent expansion of this collection of rare and exotic plants led to the horticultural fame of Ghent. In gratitude the University of Ghent presented him in 1841 with specimens of every plant from his original collection. Siebold settled in Leiden, taking with him the major part of his collection. The "Philipp Franz von Siebold collection", containing many type specimens, was the earliest botanical collection from Japan. Even today, it still remains a subject of ongoing research, a testimony to the depth of work undertaken by Siebold. 
It contained about 12,000 specimens, from which he could describe only about 2,300 species. The whole collection was purchased for a handsome amount by the Dutch government. Siebold was also granted a substantial annual allowance by the Dutch King William II and was appointed Advisor to the King for Japanese Affairs. In 1842, the King even raised Siebold to the nobility as an esquire. The "Siebold collection" opened to the public in 1831. He founded a museum in his home in 1837. This small, private museum would eventually evolve into the National Museum of Ethnology in Leiden. Siebold's successor in Japan, Heinrich Bürger, sent Siebold three more shipments of herbarium specimens collected in Japan. This flora collection formed the basis of the Japanese collections of the National Herbarium of the Netherlands in Leiden, while the zoological specimens Siebold collected were kept by the Rijksmuseum van Natuurlijke Historie (National Museum of Natural History) in Leiden, which later became Naturalis. Both institutions merged into Naturalis Biodiversity Center in 2010, which now maintains the entire natural history collection that Siebold brought back to Leiden. In 1845 Siebold married Helene von Gagern (1820–1877); they had three sons and two daughters. Writings During his stay in Leiden, Siebold wrote Nippon in 1832, the first part of a richly illustrated ethnographical and geographical work on Japan. The Archiv zur Beschreibung Nippons also contained a report of his journey to the Shogunate Court at Edo. He wrote six further parts, the last ones published posthumously in 1882; his sons published an edited and lower-priced reprint in 1887. The Bibliotheca Japonica appeared between 1833 and 1841. This work was co-authored by Joseph Hoffmann and Kuo Cheng-Chang, a Javanese of Chinese extraction, who had journeyed along with Siebold from Batavia. It contained a survey of Japanese literature and a Chinese, Japanese and Korean dictionary. Siebold's writing on Japanese religion and customs notably shaped early modern European conceptions of Buddhism and Shinto; he suggested, for example, that Japanese Buddhism was a form of monotheism. The zoologists Coenraad Temminck (1777–1858), Hermann Schlegel (1804–1884), and Wilhem de Haan (1801–1855) scientifically described and documented Siebold's collection of Japanese animals. The Fauna Japonica, a series of monographs published between 1833 and 1850, was mainly based on Siebold's collection, making the Japanese fauna the best-described non-European fauna – "a remarkable feat". A significant part of the Fauna Japonica was also based on the collections of Siebold's successor on Dejima, Heinrich Bürger. Siebold wrote his Flora Japonica in collaboration with the German botanist Joseph Gerhard Zuccarini (1797–1848). It first appeared in 1835, but the work was not completed until after his death, finished in 1870 by F.A.W. Miquel (1811–1871), director of the Rijksherbarium in Leiden. This work expanded Siebold's scientific fame from Japan to Europe. From the Hortus Botanicus Leiden – the botanical garden of Leiden – many of Siebold's plants spread to Europe and from there to other countries. Hosta and Hortensia, Azalea, and the Japanese butterbur and the coltsfoot as well as the Japanese larch began to inhabit gardens across the world. International endeavours After his return to Europe, Siebold tried to exploit his knowledge of Japan.
Whilst living in Boppard, from 1852 he corresponded with Russian diplomats such as Baron von Budberg-Bönninghausen, the Russian ambassador to Prussia, which resulted in an invitation to go to St Petersburg to advise the Russian government how to open trade relations with Japan. Though still employed by the Dutch government he did not inform the Dutch of this voyage until after his return. American Naval Commodore Matthew C. Perry consulted Siebold in advance of his voyage to Japan in 1854. He notably advised Townsend Harris on how Christianity might be spread to Japan, alleging based on his time there that the Japanese "hated" Christianity. In 1858, the Japanese government lifted the banishment of Siebold. He returned to Japan in 1859 as an adviser to the Agent of the Dutch Trading Society (Nederlandsche Handel-Maatschappij) in Nagasaki, Albert Bauduin. After two years the connection with the Trading Society was severed as the advice of Siebold was considered to be of no value. In Nagasaki he fathered another child with one of his female servants. In 1861 Siebold organised his appointment as an adviser to the Japanese government and went in that function to Edo. There he tried to obtain a position between the foreign representatives and the Japanese government. As he had been specially admonished by the Dutch authorities before going to Japan that he was to abstain from all interference in politics, the Dutch Consul General in Japan, J.K. de Wit, was ordered to ask Siebold's removal. Siebold was ordered to return to Batavia and from there he returned to Europe. After his return he asked the Dutch government to employ him as Consul General in Japan but the Dutch government severed all relations with Siebold who had a huge debt because of loans given to him, except for the payment of his pension. Siebold kept trying to organise another voyage to Japan. After he did not succeed in gaining employment with the Russian government, he went to Paris in 1865 to try to interest the French government in funding another expedition to Japan, but failed. He died in Munich on 18 October 1866. Legacy Plants named after Siebold The botanical and horticultural spheres of influence have honored Philipp Franz von Siebold by naming some of the very garden-worthy plants that he studied after him. Examples include: Acer sieboldianum or Siebold's Maple: a variety of maple native to Japan Calanthe sieboldii or Siebold's Calanthe is a terrestrial evergreen orchid native to Japan, the Ryukyu Islands and Taiwan. Clematis florida var. sieboldiana (syn: C. florida 'Sieboldii' & C. 
florida 'Bicolor'): a somewhat difficult Clematis to grow "well" but a much sought after plant nevertheless : (Asian beaked hazel) is a species of nut found in northeastern Asia and Japan Dryopteris sieboldii: a fern with leathery fronds Hosta sieboldii of which a large garden may have a dozen quite distinct cultivars Magnolia sieboldii: the under-appreciated small "Oyama" magnolia Malus sieboldii: the fragrant Toringo Crab-Apple, (originally called Sorbus toringo by Siebold), whose pink buds fade to white Primula sieboldii: the Japanese woodland primula Sakurasou (Chinese/Japanese: 櫻草) Prunus sieboldii: a flowering cherry Sedum sieboldii: a succulent whose leaves form rose-like whorls Tsuga sieboldii: a Japanese hemlock Viburnum sieboldii: a deciduous large shrub that has creamy white flowers in spring and red berries that ripen to black in autumn Animals named after Siebold Enhydris sieboldii or Siebold's smooth water snake A type of abalone, Nordotis gigantea, is known as Siebold's abalone, and is prized for sushi. A genus of large gomphid dragonflies, Sieboldius Further legacy Though he is well known in Japan, where he is called "Shiboruto-san", and although mentioned in the relevant schoolbooks, Siebold is almost unknown elsewhere, except among gardeners who admire the many plants whose names incorporate sieboldii and sieboldiana. The Hortus Botanicus in Leiden has recently laid out the "Von Siebold Memorial Garden", a Japanese garden with plants sent by Siebold. The garden was laid out under a 150-year-old Zelkova serrata tree dating from Siebold's lifetime. Japanese visitors come and visit this garden, to pay their respect for him. Siebold museums Although he was disillusioned by what he perceived as a lack of appreciation for Japan and his contributions to its understanding, a testimony of the remarkable character of Siebold is found in museums that honor him. Japan Museum SieboldHuis in Leiden, Netherlands, shows highlights from the Leiden Siebold collections in the transformed, refitted, formal, first house of Siebold in Leiden Naturalis Biodiversity Center, the National Museum of Natural History in Leiden, Netherlands houses the zoological and botanical specimens Siebold collected during his first stay in Japan (1823-1829). These include 200 mammals, 900 birds, 750 fishes, 170 reptiles, over 5,000 invertebrates, 2,000 different species of plants and 12,000 herbarium specimens. The National Museum of Ethnology in Leiden, Netherlands houses the large collection which Siebold brought together during his first stay in Japan (1823–1829). The State Museum of Ethnology in Munich, Germany, houses the collection of Philipp Franz von Siebold from his second voyage to Japan (1859–1862) and a letter of Siebold to King Ludwig I in which he urged the monarch to found a museum of ethnology at Munich. Siebold's grave, in the shape of a Buddhist pagoda, is in the (Former Southern Cemetery of Munich). He is also commemorated in the name of a street and a large number of mentions in the Botanical Garden at Munich. A Siebold-Museum exists in Würzburg, Germany. Siebold-Museum on , Schlüchtern, Germany. Nagasaki, Japan, pays tribute to Siebold by housing the Siebold Memorial Museum on property adjacent to Siebold's former residence in the Narutaki neighborhood, the first museum dedicated to a non-Japanese in Japan. His collections laid the foundation for the ethnographic museums of Munich and Leiden. 
Alexander von Siebold, one of his sons by his European wife, donated much of the material left behind after Siebold's death in Würzburg to the British Museum in London. The Royal Scientific Academy of St. Petersburg purchased 600 colored plates of the . Another son, Heinrich (or Henry) von Siebold (1852–1908), continued part of his father's research. He is recognized, together with Edward S. Morse, as one of the founders of modern archaeological efforts in Japan. Published works (1832–1852) Nippon. Archiv zur Beschreibung von Japan und dessen Neben- und Schutzländern: Jezo mit den Südlichen Kurilen, Krafto, Koorai und den Liukiu-Inseln. 7 volumes, Leiden. (1838) Voyage au Japon Executé Pendant les Années 1823 a 1830 – French abridged version of Nippon – contains 72 plates from Nippon, with a slight variance in size and paper. Published in twelve "Deliveries". Each "Delivery" contains 72 lithographs (plates) and each "Delivery" varies in its lithograph contents by four or five plate variations. Revised and enlarged edition by his sons in 1897: Nippon. Archiv zur Beschreibung von Japan ..., 2. veränderte und ergänzte Auflage, hrsg. von seinen Söhnen, 2 volumes, Würzburg and Leipzig. Translation of the part of Nippon on Korea ("Kooraï"): Boudewijn Walraven (ed.), Frits Vos (transl.), Korean Studies in Early-nineteenth century Leiden, Korean Histories 2.2, 75-85, 2010 (1829) Synopsis Hydrangeae generis specierum Iaponicarum. In: Nova Acta Physico-Medica Academiae Caesareae Leopoldino-Carolina vol 14, part ii. (1835–1870) (with Zuccarini, J. G. von, editor) Flora Japonica. Leiden. (1843) (with Zuccarini, J. G. von) Plantaram, quas in Japonia collegit Dr. Ph. Fr. de Siebold genera nova, notis characteristicis delineationibusque illustrata proponunt. In: Abhandelungen der mathematisch-physikalischen Classe der Königlich Bayerischen Akademie der Wissenschaften vol.3, pp 717–750. (1845) (with Zuccarini, J. G. von) Florae Japonicae familae naturales adjectis generum et specierum exemplis selectis. Sectio prima. Plantae Dicotyledoneae polypetalae. In: Abhandelungen der mathematischphysikalischen Classe der Königlich Bayerischen Akademie der Wissenschaften vol. 4 part iii, pp 109–204. (1846) (with Zuccarini, J. G. von) Florae Japonicae familae naturales adjectis generum et specierum exemplis selectis. Sectio altera. Plantae dicotyledoneae et monocotyledonae. In: Abhandelungen der mathematischphysikalischen Classe der Königlich Bayerischen Akademie der Wissenschaften vol. 4 part iii, pp vol 4 pp 123–240. (1841) (compiled by an anonymous author, not by Siebold himself !) The standard author abbreviation Siebold is used to indicate Philipp Franz von Siebold as the author when citing a botanical name. See also :Category:Taxa named by Philipp Franz von Siebold Bunsei – Japanese era names Dejima Karl Theodor Ernst von Siebold Erwin Bälz Sakoku List of Westerners who visited Japan before 1868 Notes References and other literature Brown, Yu-jing: The von Siebold Collection from Tokugawa, Japan, pp. 1–55, British Library bl.uk Andreas W. Daum: "German Naturalists in the Pacific around 1800: Entanglement, Autonomy, and a Transnational Culture of Expertise." In Explorations and Entanglements: Germans in Pacific Worlds from the Early Modern Period to World War I, ed. Hartmut Berghoff et al. New York, Berghahn Books, 2019, 70‒102. Effert, Rudolf Antonius Hermanus Dominique: Royal Cabinets and Auxiliary Branches: Origins of the National Museum of Ethnology 1816–1883, Leiden: CNWS Publications, 2008. 
Serie: Mededelingen van het Rijksmuseum van Volkenkunde, Leiden, no. 37 Friese, Eberhard: Philipp Franz von Siebold als früher Exponent der Ostasienwissenschaften. Berliner Beiträge zur sozial- und wirtschaftswissenschaftlichen Japan-Forschung Bd. 15. Bochum 1983 Reginald Grünenberg: Die Entdeckung des Ostpols. Nippon-Trilogie, Vol. 1 Shiborto , Vol. 2 Geheime Landkarten, , Vol. 3 Der Weg in den Krieg, , Die Entdeckung des Ostpols. Nippon-Trilogie.Gesamtausgabe ('Complete Edition'), , Perlen Verlag 2014; English resume of the novel on www.east-pole.com Richtsfeld, Bruno J.: Philipp Franz von Siebolds Japansammlung im Staatlichen Museum für Völkerkunde München. In: Miscellanea der Philipp Franz von Siebold Stiftung 12, 1996, pp. 34–54. Richtsfeld, Bruno J.: Philipp Franz von Siebolds Japansammlung im Staatlichen Museum für Völkerkunde München. In: 200 Jahre Siebold, hrsg. von Josef Kreiner. Tokyo 1996, pp. 202–204. Richtsfeld, Bruno J.: Die Sammlung Siebold im Staatlichen Museum für Völkerkunde, München. In: Das alte Japan. Spuren und Objekte der Siebold-Reisen. Herausgegeben von Peter Noever. München 1997, p. 209f. Richtsfeld, Bruno J.: Philipp Franz von Siebold (1796–1866). Japanforscher, Sammler und Museumstheoretiker. In: Aus dem Herzen Japans. Kunst und Kunsthandwerk an drei Flüssen in Gifu. Herausgegeben von dem Museum für Ostasiatische Kunst Köln und dem Staatlichen Museum für Völkerkunde München. Köln, München 2004, pp. 97–102. Thijsse, Gerard: Herbarium P.F. von Siebold, 1796–1866, 1999, Brill.com Yamaguchi, T., 1997. Von Siebold and Japanese Botany. Calanus Special number I. Yamaguchi, T., 2003. How did Von Siebold accumulate botanical specimens in Japan? Calanus Special number V. External links Scanned versions of Flora Japonica and Fauna Japonica Fauna Japonica – University of Kyoto Flora Japonica – University of Kyoto Siebold University of Nagasaki Website dedicated to the German novel Die Entdeckung des Ostpols Siebold Huis – a museum in the house where Siebold lived in Leiden The Siebold Museum in Würzburg The Siebold-Museum on Brandenstein castle, Schlüchtern Siebold's Nippon, 1897 Proceedings of the symposium 'Siebold in the 21st Century' held at the University Museum, the University of Tokyo, in 2003 1796 births 1866 deaths Scientists from Würzburg People from the Prince-Bishopric of Würzburg German untitled nobility 19th-century German botanists German carcinologists Expatriates in Japan German Japanologists German male non-fiction writers Botanists active in Japan Botanists with author abbreviations
23538
https://en.wikipedia.org/wiki/Probability%20interpretations
Probability interpretations
The word probability has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical tendency of something to occur, or is it a measure of how strongly one believes it will occur, or does it draw on both these elements? In answering such questions, mathematicians interpret the probability values of probability theory. There are two broad categories of probability interpretations which can be called "physical" and "evidential" probabilities. Physical probabilities, which are also called objective or frequency probabilities, are associated with random physical systems such as roulette wheels, rolling dice and radioactive atoms. In such systems, a given type of event (such as a die yielding a six) tends to occur at a persistent rate, or "relative frequency", in a long run of trials. Physical probabilities either explain, or are invoked to explain, these stable frequencies. The two main kinds of theory of physical probability are frequentist accounts (such as those of Venn, Reichenbach and von Mises) and propensity accounts (such as those of Popper, Miller, Giere and Fetzer). Evidential probability, also called Bayesian probability, can be assigned to any statement whatsoever, even when no random process is involved, as a way to represent its subjective plausibility, or the degree to which the statement is supported by the available evidence. On most accounts, evidential probabilities are considered to be degrees of belief, defined in terms of dispositions to gamble at certain odds. The four main evidential interpretations are the classical (e.g. Laplace's) interpretation, the subjective interpretation (de Finetti and Savage), the epistemic or inductive interpretation (Ramsey, Cox) and the logical interpretation (Keynes and Carnap). There are also evidential interpretations of probability covering groups, which are often labelled as 'intersubjective' (proposed by Gillies and Rowbottom). Some interpretations of probability are associated with approaches to statistical inference, including theories of estimation and hypothesis testing. The physical interpretation, for example, is taken by followers of "frequentist" statistical methods, such as Ronald Fisher, Jerzy Neyman and Egon Pearson. Statisticians of the opposing Bayesian school typically accept the frequency interpretation when it makes sense (although not as a definition), but there is less agreement regarding physical probabilities. Bayesians consider the calculation of evidential probabilities to be both valid and necessary in statistics. This article, however, focuses on the interpretations of probability rather than theories of statistical inference. The terminology of this topic is rather confusing, in part because probabilities are studied within a variety of academic fields. The word "frequentist" is especially tricky. To philosophers it refers to a particular theory of physical probability, one that has more or less been abandoned. To scientists, on the other hand, "frequentist probability" is just another name for physical (or objective) probability. Those who promote Bayesian inference view "frequentist statistics" as an approach to statistical inference that is based on the frequency interpretation of probability, usually relying on the law of large numbers and characterized by what is called 'Null Hypothesis Significance Testing' (NHST). 
Also the word "objective", as applied to probability, sometimes means exactly what "physical" means here, but is also used of evidential probabilities that are fixed by rational constraints, such as logical and epistemic probabilities. Philosophy The philosophy of probability presents problems chiefly in matters of epistemology and the uneasy interface between mathematical concepts and ordinary language as it is used by non-mathematicians. Probability theory is an established field of study in mathematics. It has its origins in correspondence discussing the mathematics of games of chance between Blaise Pascal and Pierre de Fermat in the seventeenth century, and was formalized and rendered axiomatic as a distinct branch of mathematics by Andrey Kolmogorov in the twentieth century. In axiomatic form, mathematical statements about probability theory carry the same sort of epistemological confidence within the philosophy of mathematics as are shared by other mathematical statements. The mathematical analysis originated in observations of the behaviour of game equipment such as playing cards and dice, which are designed specifically to introduce random and equalized elements; in mathematical terms, they are subjects of indifference. This is not the only way probabilistic statements are used in ordinary human language: when people say that "it will probably rain", they typically do not mean that the outcome of rain versus not-rain is a random factor that the odds currently favor; instead, such statements are perhaps better understood as qualifying their expectation of rain with a degree of confidence. Likewise, when it is written that "the most probable explanation" of the name of Ludlow, Massachusetts "is that it was named after Roger Ludlow", what is meant here is not that Roger Ludlow is favored by a random factor, but rather that this is the most plausible explanation of the evidence, which admits other, less likely explanations. Thomas Bayes attempted to provide a logic that could handle varying degrees of confidence; as such, Bayesian probability is an attempt to recast the representation of probabilistic statements as an expression of the degree of confidence by which the beliefs they express are held. Though probability initially had somewhat mundane motivations, its modern influence and use is widespread ranging from evidence-based medicine, through six sigma, all the way to the probabilistically checkable proof and the string theory landscape. Classical definition The first attempt at mathematical rigour in the field of probability, championed by Pierre-Simon Laplace, is now known as the classical definition. Developed from studies of games of chance (such as rolling dice) it states that probability is shared equally between all the possible outcomes, provided these outcomes can be deemed equally likely. (3.1) This can be represented mathematically as follows: If a random experiment can result in N mutually exclusive and equally likely outcomes and if NA of these outcomes result in the occurrence of the event A, the probability of A is defined by There are two clear limitations to the classical definition. Firstly, it is applicable only to situations in which there is only a 'finite' number of possible outcomes. But some important random experiments, such as tossing a coin until it shows heads, give rise to an infinite set of outcomes. 
And secondly, it requires an a priori determination that all possible outcomes are equally likely, without falling into the trap of circular reasoning by relying on the notion of probability. (In using the terminology "we may be equally undecided", Laplace assumed, by what has been called the "principle of insufficient reason", that all possible outcomes are equally likely if there is no known reason to assume otherwise, for which there is no obvious justification.) Frequentism Frequentists posit that the probability of an event is its relative frequency over time, (3.4) i.e., its relative frequency of occurrence after repeating a process a large number of times under similar conditions. This is also known as aleatory probability. The events are assumed to be governed by some random physical phenomena, which are either phenomena that are predictable, in principle, with sufficient information (see determinism); or phenomena which are essentially unpredictable. Examples of the first kind include tossing dice or spinning a roulette wheel; an example of the second kind is radioactive decay. In the case of tossing a fair coin, frequentists say that the probability of getting a heads is 1/2, not because there are two equally likely outcomes but because repeated series of large numbers of trials demonstrate that the empirical frequency converges to the limit 1/2 as the number of trials goes to infinity. If we denote by n_A the number of occurrences of an event A in n trials, then if lim_{n→∞} n_A/n = p we say that P(A) = p. The frequentist view has its own problems. It is of course impossible to actually perform an infinity of repetitions of a random experiment to determine the probability of an event. But if only a finite number of repetitions of the process are performed, different relative frequencies will appear in different series of trials. If these relative frequencies are to define the probability, the probability will be slightly different every time it is measured. But the real probability should be the same every time. If we acknowledge the fact that we can only measure a probability with some error of measurement attached, we still get into problems, as the error of measurement can only be expressed as a probability, the very concept we are trying to define. This renders even the frequency definition circular; see for example "What is the Chance of an Earthquake?" Subjectivism Subjectivists, also known as Bayesians or followers of epistemic probability, give the notion of probability a subjective status by regarding it as a measure of the 'degree of belief' of the individual assessing the uncertainty of a particular situation. Epistemic or subjective probability is sometimes called credence, as opposed to the term chance for a propensity probability. Some examples of epistemic probability are to assign a probability to the proposition that a proposed law of physics is true or to determine how probable it is that a suspect committed a crime, based on the evidence presented. The use of Bayesian probability raises the philosophical debate as to whether it can contribute valid justifications of belief. Bayesians point to the work of Ramsey (p 182) and de Finetti (p 103) as proving that subjective beliefs must follow the laws of probability if they are to be coherent. Evidence, however, casts doubt on whether humans actually hold coherent beliefs. The use of Bayesian probability involves specifying a prior probability. 
This may be obtained from consideration of whether the required prior probability is greater or less than a reference probability associated with an urn model or a thought experiment. The issue is that for a given problem, multiple thought experiments could apply, and choosing one is a matter of judgement: different people may assign different prior probabilities, known as the reference class problem. The "sunrise problem" provides an example. Propensity Propensity theorists think of probability as a physical propensity, or disposition, or tendency of a given type of physical situation to yield an outcome of a certain kind or to yield a long run relative frequency of such an outcome. This kind of objective probability is sometimes called 'chance'. Propensities, or chances, are not relative frequencies, but purported causes of the observed stable relative frequencies. Propensities are invoked to explain why repeating a certain kind of experiment will generate given outcome types at persistent rates, which are known as propensities or chances. Frequentists are unable to take this approach, since relative frequencies do not exist for single tosses of a coin, but only for large ensembles or collectives. In contrast, a propensitist is able to use the law of large numbers to explain the behaviour of long-run frequencies. This law, which is a consequence of the axioms of probability, says that if (for example) a coin is tossed repeatedly many times, in such a way that its probability of landing heads is the same on each toss, and the outcomes are probabilistically independent, then the relative frequency of heads will be close to the probability of heads on each single toss. This law allows that stable long-run frequencies are a manifestation of invariant single-case probabilities. In addition to explaining the emergence of stable relative frequencies, the idea of propensity is motivated by the desire to make sense of single-case probability attributions in quantum mechanics, such as the probability of decay of a particular atom at a particular time. The main challenge facing propensity theories is to say exactly what propensity means. (And then, of course, to show that propensity thus defined has the required properties.) At present, unfortunately, none of the well-recognised accounts of propensity comes close to meeting this challenge. A propensity theory of probability was given by Charles Sanders Peirce. A later propensity theory was proposed by philosopher Karl Popper, who had only slight acquaintance with the writings of C. S. Peirce, however. Popper noted that the outcome of a physical experiment is produced by a certain set of "generating conditions". When we repeat an experiment, as the saying goes, we really perform another experiment with a (more or less) similar set of generating conditions. To say that a set of generating conditions has propensity p of producing the outcome E means that those exact conditions, if repeated indefinitely, would produce an outcome sequence in which E occurred with limiting relative frequency p. For Popper then, a deterministic experiment would have propensity 0 or 1 for each outcome, since those generating conditions would have the same outcome on each trial. In other words, non-trivial propensities (those that differ from 0 and 1) only exist for genuinely nondeterministic experiments. A number of other philosophers, including David Miller and Donald A. 
Gillies, have proposed propensity theories somewhat similar to Popper's. Other propensity theorists (e.g. Ronald Giere) do not explicitly define propensities at all, but rather see propensity as defined by the theoretical role it plays in science. They argued, for example, that physical magnitudes such as electrical charge cannot be explicitly defined either, in terms of more basic things, but only in terms of what they do (such as attracting and repelling other electrical charges). In a similar way, propensity is whatever fills the various roles that physical probability plays in science. What roles does physical probability play in science? What are its properties? One central property of chance is that, when known, it constrains rational belief to take the same numerical value. David Lewis called this the Principal Principle, (3.3 & 3.5) a term that philosophers have mostly adopted. For example, suppose you are certain that a particular biased coin has propensity 0.32 to land heads every time it is tossed. What is then the correct price for a gamble that pays $1 if the coin lands heads, and nothing otherwise? According to the Principal Principle, the fair price is 32 cents. Logical, epistemic, and inductive probability It is widely recognized that the term "probability" is sometimes used in contexts where it has nothing to do with physical randomness. Consider, for example, the claim that the extinction of the dinosaurs was probably caused by a large meteorite hitting the earth. Statements such as "Hypothesis H is probably true" have been interpreted to mean that the (presently available) empirical evidence (E, say) supports H to a high degree. This degree of support of H by E has been called the logical, or epistemic, or inductive probability of H given E. The differences between these interpretations are rather small, and may seem inconsequential. One of the main points of disagreement lies in the relation between probability and belief. Logical probabilities are conceived (for example in Keynes' Treatise on Probability) to be objective, logical relations between propositions (or sentences), and hence not to depend in any way upon belief. They are degrees of (partial) entailment, or degrees of logical consequence, not degrees of belief. (They do, nevertheless, dictate proper degrees of belief, as is discussed below.) Frank P. Ramsey, on the other hand, was skeptical about the existence of such objective logical relations and argued that (evidential) probability is "the logic of partial belief". (p 157) In other words, Ramsey held that epistemic probabilities simply are degrees of rational belief, rather than being logical relations that merely constrain degrees of rational belief. Another point of disagreement concerns the uniqueness of evidential probability, relative to a given state of knowledge. Rudolf Carnap held, for example, that logical principles always determine a unique logical probability for any statement, relative to any body of evidence. Ramsey, by contrast, thought that while degrees of belief are subject to some rational constraints (such as, but not limited to, the axioms of probability) these constraints usually do not determine a unique value. Rational people, in other words, may differ somewhat in their degrees of belief, even if they all have the same information. Prediction An alternative account of probability emphasizes the role of prediction – predicting future observations on the basis of past observations, not on unobservable parameters. 
In its modern form, it is mainly in the Bayesian vein. This was the main function of probability before the 20th century, but fell out of favor compared to the parametric approach, which modeled phenomena as a physical system that was observed with error, such as in celestial mechanics. The modern predictive approach was pioneered by Bruno de Finetti, with the central idea of exchangeability – that future observations should behave like past observations. This view came to the attention of the Anglophone world with the 1974 translation of de Finetti's book, and has since been propounded by such statisticians as Seymour Geisser. Axiomatic probability The mathematics of probability can be developed on an entirely axiomatic basis that is independent of any interpretation: see the articles on probability theory and probability axioms for a detailed treatment. See also Coverage probability Frequency (statistics) Negative probability Philosophy of mathematics Philosophy of statistics Pignistic probability Probability amplitude (quantum mechanics) Sunrise problem Bayesian epistemology References Further reading A comprehensive monograph covering the four principal current interpretations: logical, subjective, frequency, propensity. Also proposes a novel intersubjective interpretation. Paul Humphreys, ed. (1994) Patrick Suppes: Scientific Philosopher, Synthese Library, Springer-Verlag. Vol. 1: Probability and Probabilistic Causality. Vol. 2: Philosophy of Physics, Theory Structure and Measurement, and Action Theory. Jackson, Frank, and Robert Pargetter (1982) "Physical Probability as a Propensity," Noûs 16(4): 567–583. Covers mostly non-Kolmogorov probability models, particularly with respect to quantum physics. A highly accessible introduction to the interpretation of probability. Covers all the main interpretations, and proposes a novel group level (or 'intersubjective') interpretation. Also covers fallacies and applications of interpretations in the social and natural sciences. External links Probability theory Philosophy of statistics Philosophy of mathematics Philosophy of science Epistemology Interpretation (philosophy)
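The contrast between the classical count and the frequentist long-run relative frequency discussed above can be made concrete with a short simulation. The following is a minimal sketch, assuming Python and taking the event "a fair die shows a six" purely as an illustration:

```python
import random

# Classical definition: one favourable outcome out of six equally likely ones.
classical_p = 1 / 6

# Frequentist picture: the same probability viewed as a limiting relative frequency.
random.seed(0)
for n in (100, 10_000, 1_000_000):
    sixes = sum(1 for _ in range(n) if random.randint(1, 6) == 6)
    print(f"n = {n:>9}: relative frequency {sixes / n:.4f}  (classical value {classical_p:.4f})")
```

As n grows, the empirical frequency settles near 1/6, which is the limiting behaviour the frequentist definition appeals to, whereas the classical value is available without performing any trials at all.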
23539
https://en.wikipedia.org/wiki/Probability%20axioms
Probability axioms
The standard probability axioms are the foundations of probability theory introduced by Russian mathematician Andrey Kolmogorov in 1933. These axioms remain central and directly inform mathematics, the physical sciences, and real-world applications of probability. There are several other (equivalent) approaches to formalising probability. Bayesians will often motivate the Kolmogorov axioms by invoking Cox's theorem or the Dutch book arguments instead. Kolmogorov axioms The assumptions as to setting up the axioms can be summarised as follows: Let (Ω, F, P) be a measure space with P(E) being the probability of some event E, and P(Ω) = 1. Then (Ω, F, P) is a probability space, with sample space Ω, event space F and probability measure P. First axiom The probability of an event is a non-negative real number: P(E) ≥ 0 for all E ∈ F, where F is the event space. It follows (when combined with the second axiom) that P(E) is always finite, in contrast with more general measure theory. Theories which assign negative probability relax the first axiom. Second axiom This is the assumption of unit measure: that the probability that at least one of the elementary events in the entire sample space will occur is 1, that is, P(Ω) = 1. Third axiom This is the assumption of σ-additivity: Any countable sequence of disjoint sets (synonymous with mutually exclusive events) E_1, E_2, ... satisfies P(E_1 ∪ E_2 ∪ ...) = Σ_{i=1}^∞ P(E_i). Some authors consider merely finitely additive probability spaces, in which case one just needs an algebra of sets, rather than a σ-algebra. Quasiprobability distributions in general relax the third axiom. Consequences From the Kolmogorov axioms, one can deduce other useful rules for studying probabilities. The proofs of these rules are a very insightful procedure that illustrates the power of the third axiom, and its interaction with the prior two axioms. Four of the immediate corollaries and their proofs are shown below: Monotonicity If A is a subset of, or equal to, B, then the probability of A is less than, or equal to, the probability of B: if A ⊆ B then P(A) ≤ P(B). Proof of monotonicity In order to verify the monotonicity property, we set E_1 = A and E_2 = B \ A, where A ⊆ B and E_i = ∅ for i ≥ 3. From the properties of the empty set (∅), it is easy to see that the sets E_i are pairwise disjoint and E_1 ∪ E_2 ∪ E_3 ∪ ... = B. Hence, we obtain from the third axiom that P(A) + P(B \ A) + Σ_{i=3}^∞ P(E_i) = P(B). Since, by the first axiom, the left-hand side of this equation is a series of non-negative numbers, and since it converges to P(B) which is finite, we obtain both P(A) ≤ P(B) and P(∅) = 0. The probability of the empty set P(∅) = 0. In many cases, ∅ is not the only event with probability 0. Proof of the probability of the empty set P(∅ ∪ ∅) = P(∅) since ∅ ∪ ∅ = ∅, and P(∅ ∪ ∅) = P(∅) + P(∅) by applying the third axiom to the left-hand side (note ∅ is disjoint with itself); hence P(∅) = P(∅) + P(∅), and so P(∅) = 0 by subtracting P(∅) from each side of the equation. The complement rule P(A^c) = P(Ω \ A) = 1 − P(A). Proof of the complement rule Given A and A^c are mutually exclusive and that A ∪ A^c = Ω: P(A ∪ A^c) = P(A) + P(A^c) ... (by axiom 3) and, P(A ∪ A^c) = P(Ω) = 1 ... (by axiom 2), so P(A^c) = 1 − P(A). The numeric bound It immediately follows from the monotonicity property that 0 ≤ P(E) ≤ 1 for all E ∈ F. Proof of the numeric bound Given the complement rule P(E^c) = 1 − P(E) and axiom 1 P(E^c) ≥ 0: 1 − P(E) ≥ 0, hence P(E) ≤ 1. Further consequences Another important property is: P(A ∪ B) = P(A) + P(B) − P(A ∩ B). This is called the addition law of probability, or the sum rule. That is, the probability that an event in A or B will happen is the sum of the probability of an event in A and the probability of an event in B, minus the probability of an event that is in both A and B. The proof of this is as follows: Firstly, P(A ∪ B) = P(A) + P(B \ A) ... (by Axiom 3). So, P(A ∪ B) = P(A) + P(B \ (A ∩ B)) (by B \ A = B \ (A ∩ B)). Also, P(B) = P(B \ (A ∩ B)) + P(A ∩ B), and eliminating P(B \ (A ∩ B)) from both equations gives us the desired result. An extension of the addition law to any number of sets is the inclusion–exclusion principle. 
Setting B to the complement A^c of A in the addition law gives P(A^c) = P(Ω \ A) = 1 − P(A). That is, the probability that any event will not happen (or the event's complement) is 1 minus the probability that it will. Simple example: coin toss Consider a single coin-toss, and assume that the coin will either land heads (H) or tails (T) (but not both). No assumption is made as to whether the coin is fair or as to whether or not any bias depends on how the coin is tossed. We may define: Ω = {H, T} and F = {∅, {H}, {T}, {H, T}}. Kolmogorov's axioms imply that: P(∅) = 0: the probability of neither heads nor tails is 0. P({H, T}) = 1: the probability of either heads or tails is 1. P({H}) + P({T}) = 1: the sum of the probability of heads and the probability of tails is 1. See also References Further reading Formal definition of probability in the Mizar system, and the list of theorems formally proved about it. Probability theory Mathematical axioms
23542
https://en.wikipedia.org/wiki/Probability%20theory
Probability theory
Probability theory or probability calculus is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes (which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion). Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability theory describing such behaviour are the law of large numbers and the central limit theorem. As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics or sequential estimation. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. History of probability The modern mathematical theory of probability has its roots in attempts to analyze games of chance by Gerolamo Cardano in the sixteenth century, and by Pierre de Fermat and Blaise Pascal in the seventeenth century (for example the "problem of points"). Christiaan Huygens published a book on the subject in 1657. In the 19th century, what is considered the classical definition of probability was completed by Pierre Laplace. Initially, probability theory mainly considered events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of variables into the theory. This culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, and measure theory and presented his axiom system for probability theory in 1933. This became the mostly undisputed axiomatic basis for modern probability theory; but, alternatives exist, such as the adoption of finite rather than countable additivity by Bruno de Finetti. Treatment Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately. The measure theory-based treatment of probability covers the discrete, continuous, a mix of the two, and more. Motivation Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment. The power set of the sample space (or equivalently, the event space) is formed by considering all different collections of possible results. For example, rolling an honest die produces one of six possible results. One collection of possible results corresponds to getting an odd number. Thus, the subset {1,3,5} is an element of the power set of the sample space of dice rolls. These collections are called events. 
In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, that event is said to have occurred. Probability is a way of assigning every "event" a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) be assigned a value of one. To qualify as a probability distribution, the assignment of values must satisfy the requirement that if you look at a collection of mutually exclusive events (events that contain no common results, e.g., the events {1,6}, {3}, and {2,4} are all mutually exclusive), the probability that any of these events occurs is given by the sum of the probabilities of the events. The probability that any one of the events {1,6}, {3}, or {2,4} will occur is 5/6. This is the same as saying that the probability of event {1,2,3,4,6} is 5/6. This event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1,2,3,4,5,6} has a probability of 1, that is, absolute certainty. When doing calculations using the outcomes of an experiment, it is necessary that all those elementary events have a number assigned to them. This is done using a random variable. A random variable is a function that assigns to each elementary event in the sample space a real number. This function is usually denoted by a capital letter. In the case of a die, the assignment of a number to certain elementary events can be done using the identity function. This does not always work. For example, when flipping a coin the two possible outcomes are "heads" and "tails". In this example, the random variable X could assign to the outcome "heads" the number "0" (X(heads) = 0) and to the outcome "tails" the number "1" (X(tails) = 1). Discrete probability distributions deal with events that occur in countable sample spaces. Examples: Throwing dice, experiments with decks of cards, random walk, and tossing coins. Classical definition: Initially the probability of an event to occur was defined as the number of cases favorable for the event, over the number of total outcomes possible in an equiprobable sample space: see Classical definition of probability. For example, if the event is "occurrence of an even number when a die is rolled", the probability is given by 3/6 = 1/2, since 3 faces out of the 6 have even numbers and each face has the same probability of appearing. Modern definition: The modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by Ω. It is then assumed that for each element x ∈ Ω, an intrinsic "probability" value f(x) is attached, which satisfies the following properties: f(x) ∈ [0,1] for all x ∈ Ω, and Σ_{x∈Ω} f(x) = 1. That is, the probability function f(x) lies between zero and one for every value of x in the sample space Ω, and the sum of f(x) over all values x in the sample space Ω is equal to 1. An event is defined as any subset E of the sample space Ω. The probability of the event E is defined as P(E) = Σ_{x∈E} f(x). So, the probability of the entire sample space is 1, and the probability of the null event is 0. The function f mapping a point in the sample space to the "probability" value is called a probability mass function, abbreviated as pmf. Continuous probability distributions deal with events that occur in a continuous sample space. Classical definition: The classical definition breaks down when confronted with the continuous case. See Bertrand's paradox. 
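Before turning to the continuous case below, the modern discrete definition just described can be sketched in a few lines of code. This is a minimal illustration, assuming Python, with the fair die as the running example from the text:

```python
from fractions import Fraction

# Modern discrete definition: a pmf f on a countable sample space Omega, summing to 1.
omega = [1, 2, 3, 4, 5, 6]
f = {x: Fraction(1, 6) for x in omega}   # fair die
assert sum(f.values()) == 1              # normalisation requirement on f

def prob(event):
    """P(E) = sum of f(x) over the outcomes x in the event E."""
    return sum(f[x] for x in event if x in f)

print(prob({2, 4, 6}))   # "even number" -> 1/2
print(prob(set(omega)))  # whole sample space -> 1
```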
Modern definition: If the sample space of a random variable X is the set of real numbers (ℝ) or a subset thereof, then a function called the cumulative distribution function (CDF) F exists, defined by F(x) = P(X ≤ x). That is, F(x) returns the probability that X will be less than or equal to x. The CDF necessarily satisfies the following properties: F is a monotonically non-decreasing, right-continuous function; lim_{x→−∞} F(x) = 0; and lim_{x→∞} F(x) = 1. The random variable X is said to have a continuous probability distribution if the corresponding CDF F is continuous. If F is absolutely continuous, i.e., its derivative exists and integrating the derivative gives us the CDF back again, then the random variable X is said to have a probability density function (PDF) or simply density f(x) = dF(x)/dx. For a set E ⊆ ℝ, the probability of the random variable X being in E is P(X ∈ E) = ∫_E dF(x). In case the PDF exists, this can be written as P(X ∈ E) = ∫_E f(x) dx. Whereas the PDF exists only for continuous random variables, the CDF exists for all random variables (including discrete random variables) that take values in ℝ. These concepts can be generalized for multidimensional cases on ℝ^n and other continuous sample spaces. Measure-theoretic probability theory The utility of the measure-theoretic treatment of probability is that it unifies the discrete and the continuous cases, and makes the difference a question of which measure is used. Furthermore, it covers distributions that are neither discrete nor continuous nor mixtures of the two. An example of such distributions could be a mix of discrete and continuous distributions—for example, a random variable that is 0 with probability 1/2, and takes a random value from a normal distribution with probability 1/2. It can still be studied to some extent by considering it to have a PDF of (δ(x) + φ(x))/2, where δ(x) is the Dirac delta function and φ(x) is the normal density. Other distributions may not even be a mix, for example, the Cantor distribution has no positive probability for any single point, neither does it have a density. The modern approach to probability theory solves these problems using measure theory to define the probability space: Given any set Ω (also called the sample space) and a σ-algebra ℱ on it, a measure P defined on ℱ is called a probability measure if P(Ω) = 1. If ℱ is the Borel σ-algebra on the set of real numbers, then there is a unique probability measure on ℱ for any CDF, and vice versa. The measure corresponding to a CDF is said to be induced by the CDF. This measure coincides with the pmf for discrete variables and the PDF for continuous variables, making the measure-theoretic approach free of fallacies. The probability of a set E in the σ-algebra ℱ is defined as P(E) = ∫_{ω∈E} μ_F(dω), where the integration is with respect to the measure μ_F induced by the CDF F. Along with providing better understanding and unification of discrete and continuous probabilities, measure-theoretic treatment also allows us to work on probabilities outside ℝ^n, as in the theory of stochastic processes. For example, to study Brownian motion, probability is defined on a space of functions. When it is convenient to work with a dominating measure, the Radon–Nikodym theorem is used to define a density as the Radon–Nikodym derivative of the probability distribution of interest with respect to this dominating measure. Discrete densities are usually defined as this derivative with respect to a counting measure over the set of all possible outcomes. Densities for absolutely continuous distributions are usually defined as this derivative with respect to the Lebesgue measure. If a theorem can be proved in this general setting, it holds for both discrete and continuous distributions as well as others; separate proofs are not required for discrete and continuous distributions. 
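The relationship just described between a density, its CDF and the probabilities they induce can be checked numerically. The following is a minimal sketch, assuming Python; the standard normal is used only as a convenient example with a known closed-form CDF:

```python
import math

def pdf(x):
    """Standard normal density, written out explicitly."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def cdf(x):
    """Standard normal CDF via the error function: F(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# P(a <= X <= b) computed two ways: from the CDF, and by integrating the density.
a, b, n = -1.0, 2.0, 100_000
step = (b - a) / n
midpoint_integral = sum(pdf(a + (i + 0.5) * step) for i in range(n)) * step
print(cdf(b) - cdf(a), midpoint_integral)   # both are approximately 0.8186
```

The two numbers agree, reflecting that the probability measure of an interval is the integral of the density over that interval, or equivalently the increment of the CDF.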
Classical probability distributions Certain random variables occur very often in probability theory because they describe many natural or physical processes well. Their distributions, therefore, have gained special importance in probability theory. Some fundamental discrete distributions are the discrete uniform, Bernoulli, binomial, negative binomial, Poisson and geometric distributions. Important continuous distributions include the continuous uniform, normal, exponential, gamma and beta distributions. Convergence of random variables In probability theory, there are several notions of convergence for random variables. They are listed below in order of increasing strength, i.e., any subsequent notion of convergence in the list implies convergence according to all of the preceding notions. Weak convergence A sequence of random variables X_1, X_2, ..., X_n, ... converges weakly to the random variable X if their respective CDFs F_1, F_2, ..., F_n, ... converge to the CDF F of X, wherever F is continuous. Weak convergence is also called convergence in distribution. Most common shorthand notation: X_n →(d) X. Convergence in probability The sequence of random variables X_1, X_2, ..., X_n, ... is said to converge towards the random variable X in probability if lim_{n→∞} P(|X_n − X| ≥ ε) = 0 for every ε > 0. Most common shorthand notation: X_n →(P) X. Strong convergence The sequence of random variables X_1, X_2, ..., X_n, ... is said to converge towards the random variable X strongly if P(lim_{n→∞} X_n = X) = 1. Strong convergence is also known as almost sure convergence. Most common shorthand notation: X_n →(a.s.) X. As the names indicate, weak convergence is weaker than strong convergence. In fact, strong convergence implies convergence in probability, and convergence in probability implies weak convergence. The reverse statements are not always true. Law of large numbers Common intuition suggests that if a fair coin is tossed many times, then roughly half of the time it will turn up heads, and the other half it will turn up tails. Furthermore, the more often the coin is tossed, the more likely it should be that the ratio of the number of heads to the number of tails will approach unity. Modern probability theory provides a formal version of this intuitive idea, known as the law of large numbers. This law is remarkable because it is not assumed in the foundations of probability theory, but instead emerges from these foundations as a theorem. Since it links theoretically derived probabilities to their actual frequency of occurrence in the real world, the law of large numbers is considered as a pillar in the history of statistical theory and has had widespread influence. The law of large numbers (LLN) states that the sample average (X_1 + ... + X_n)/n of a sequence of independent and identically distributed random variables X_i converges towards their common expectation (expected value) μ, provided that the expectation of |X_i| is finite. The form of convergence involved is what separates the weak and the strong law of large numbers: Weak law: (X_1 + ... + X_n)/n →(P) μ as n → ∞. Strong law: (X_1 + ... + X_n)/n →(a.s.) μ as n → ∞. It follows from the LLN that if an event of probability p is observed repeatedly during independent experiments, the ratio of the observed frequency of that event to the total number of repetitions converges towards p. For example, if Y_1, Y_2, ... are independent Bernoulli random variables taking the value 1 with probability p and 0 with probability 1 − p, then E(Y_i) = p for all i, so that the average (Y_1 + ... + Y_n)/n converges to p almost surely. Central limit theorem The central limit theorem (CLT) explains the ubiquitous occurrence of the normal distribution in nature, and this theorem, according to David Williams, "is one of the great results of mathematics." 
The theorem states that the average of many independent and identically distributed random variables with finite variance tends towards a normal distribution irrespective of the distribution followed by the original random variables. Formally, let X_1, X_2, ... be independent random variables with mean μ and variance σ² > 0. Then the sequence of random variables Z_n = (X_1 + ... + X_n − nμ) / (σ√n) converges in distribution to a standard normal random variable. For some classes of random variables, the classic central limit theorem works rather fast, as illustrated in the Berry–Esseen theorem. For example, the distributions with finite first, second, and third moment from the exponential family; on the other hand, for some random variables of the heavy tail and fat tail variety, it works very slowly or may not work at all: in such cases one may use the Generalized Central Limit Theorem (GCLT). See also Lists References Citations Sources The first major treatise blending calculus with probability theory, originally in French: Théorie Analytique des Probabilités. An English translation by Nathan Morrison appeared under the title Foundations of the Theory of Probability (Chelsea, New York) in 1950, with a second edition in 1956. Olav Kallenberg; Foundations of Modern Probability, 2nd ed. Springer Series in Statistics. (2002). 650 pp. A lively introduction to probability theory for the beginner. Olav Kallenberg; Probabilistic Symmetries and Invariance Principles. Springer-Verlag, New York (2005). 510 pp.
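Both limit theorems above lend themselves to a quick simulation. The following is a minimal sketch, assuming Python with only the standard library; the biased coin with p = 0.3 is an arbitrary illustration:

```python
import math
import random

random.seed(1)
p = 0.3

# Law of large numbers: the sample mean of Bernoulli(p) draws approaches p.
n_lln = 100_000
draws = [1 if random.random() < p else 0 for _ in range(n_lln)]
print("sample mean:", sum(draws) / n_lln, "(true p = 0.3)")

# Central limit theorem: standardised sums (S_n - n*p) / (sigma * sqrt(n)),
# collected over many repetitions, should look roughly standard normal.
n, reps = 1_000, 5_000
sigma = math.sqrt(p * (1 - p))
z = [(sum(1 for _ in range(n) if random.random() < p) - n * p) / (sigma * math.sqrt(n))
     for _ in range(reps)]
mean_z = sum(z) / reps
var_z = sum((v - mean_z) ** 2 for v in z) / reps
print("standardised sums: mean ~", round(mean_z, 3), " variance ~", round(var_z, 3))
```

The sample mean lands close to p, and the standardised sums have mean near 0 and variance near 1, as the standard normal limit predicts.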
23543
https://en.wikipedia.org/wiki/Probability%20distribution
Probability distribution
In probability theory and statistics, a probability distribution is the mathematical function that gives the probabilities of occurrence of possible outcomes for an experiment. It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events (subsets of the sample space). For instance, if X is used to denote the outcome of a coin toss ("the experiment"), then the probability distribution of X would take the value 0.5 (1 in 2 or 1/2) for X = heads, and 0.5 for X = tails (assuming that the coin is fair). More commonly, probability distributions are used to compare the relative occurrence of many different random values. Probability distributions can be defined in different ways and for discrete or for continuous variables. Distributions with special properties or for especially important applications are given specific names. Introduction A probability distribution is a mathematical description of the probabilities of events, subsets of the sample space. The sample space, often represented in notation by Ω, is the set of all possible outcomes of a random phenomenon being observed. The sample space may be any set: a set of real numbers, a set of descriptive labels, a set of vectors, a set of arbitrary non-numerical values, etc. For example, the sample space of a coin flip could be Ω = {heads, tails}. To define probability distributions for the specific case of random variables (so the sample space can be seen as a numeric set), it is common to distinguish between discrete and absolutely continuous random variables. In the discrete case, it is sufficient to specify a probability mass function p assigning a probability to each possible outcome (e.g. when throwing a fair die, each of the six digits 1 to 6, corresponding to the number of dots on the die, has the probability 1/6). The probability of an event is then defined to be the sum of the probabilities of all outcomes that satisfy the event; for example, the probability of the event "the die rolls an even value" is p(2) + p(4) + p(6) = 1/6 + 1/6 + 1/6 = 1/2. In contrast, when a random variable takes values from a continuum then, by convention, any individual outcome is assigned probability zero. For such continuous random variables, only events that include infinitely many outcomes such as intervals have probability greater than 0. For example, consider measuring the weight of a piece of ham in the supermarket, and assume the scale can provide arbitrarily many digits of precision. Then, the probability that it weighs exactly 500 g must be zero because no matter how high the level of precision chosen, it cannot be assumed that there are no non-zero decimal digits in the remaining omitted digits ignored by the precision level. However, for the same use case, it is possible to meet quality control requirements such as that a package of "500 g" of ham must weigh between 490 g and 510 g with at least 98% probability. This is possible because this measurement does not require as much precision from the underlying equipment. Absolutely continuous probability distributions can be described in several ways. The probability density function describes the infinitesimal probability of any given value, and the probability that the outcome lies in a given interval can be computed by integrating the probability density function over that interval. 
An alternative description of the distribution is by means of the cumulative distribution function, which describes the probability that the random variable is no larger than a given value (i.e., P(X ≤ x) for some x). The cumulative distribution function is the area under the probability density function from −∞ to x. General probability definition A probability distribution can be described in various forms, such as by a probability mass function or a cumulative distribution function. One of the most general descriptions, which applies for absolutely continuous and discrete variables, is by means of a probability function whose input space is a σ-algebra, and gives a real number probability as its output, particularly, a number in [0, 1]. The probability function can take as argument subsets of the sample space itself, as in the coin toss example, where the function was defined so that P(heads) = 0.5 and P(tails) = 0.5. However, because of the widespread use of random variables, which transform the sample space into a set of numbers (e.g., ℝ, ℕ), it is more common to study probability distributions whose argument are subsets of these particular kinds of sets (number sets), and all probability distributions discussed in this article are of this type. It is common to denote P(X ∈ E) as the probability that a certain value of the variable X belongs to a certain event E. The above probability function only characterizes a probability distribution if it satisfies all the Kolmogorov axioms, that is: P(X ∈ E) ≥ 0, so the probability is non-negative; P(X ∈ E) ≤ 1, so no probability exceeds 1; and P(X ∈ ⋃_i E_i) = Σ_i P(X ∈ E_i) for any countable disjoint family of sets {E_i}. The concept of probability function is made more rigorous by defining it as the element of a probability space (Ω, A, P), where Ω is the set of possible outcomes, A is the set of all subsets E ⊆ Ω whose probability can be measured, and P is the probability function, or probability measure, that assigns a probability to each of these measurable subsets E ∈ A. Probability distributions usually belong to one of two classes. A discrete probability distribution is applicable to the scenarios where the set of possible outcomes is discrete (e.g. a coin toss, a roll of a die) and the probabilities are encoded by a discrete list of the probabilities of the outcomes; in this case the discrete probability distribution is known as a probability mass function. On the other hand, absolutely continuous probability distributions are applicable to scenarios where the set of possible outcomes can take on values in a continuous range (e.g. real numbers), such as the temperature on a given day. In the absolutely continuous case, probabilities are described by a probability density function, and the probability distribution is by definition the integral of the probability density function. The normal distribution is a commonly encountered absolutely continuous probability distribution. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures. A probability distribution whose sample space is one-dimensional (for example real numbers, list of labels, ordered labels or binary) is called univariate, while a distribution whose sample space is a vector space of dimension 2 or more is called multivariate. 
A univariate distribution gives the probabilities of a single random variable taking on various different values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector – a list of two or more random variables – taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. A commonly encountered multivariate distribution is the multivariate normal distribution. Besides the probability function, the cumulative distribution function, the probability mass function and the probability density function, the moment generating function and the characteristic function also serve to identify a probability distribution, as they uniquely determine an underlying cumulative distribution function. Terminology Some key concepts and terms, widely used in the literature on the topic of probability distributions, are listed below. Basic terms Random variable: takes values from a sample space; probabilities describe which values, and sets of values, are more likely to be taken. Event: set of possible values (outcomes) of a random variable that occurs with a certain probability. Probability function or probability measure: describes the probability P(X ∈ E) that the event E occurs. Cumulative distribution function: function evaluating the probability that X will take a value less than or equal to x for a random variable X (only for real-valued random variables). Quantile function: the inverse of the cumulative distribution function. Gives x such that, with probability q, X will not exceed x. Discrete probability distributions Discrete probability distribution: for many random variables with finitely or countably infinitely many values. Probability mass function (pmf): function that gives the probability that a discrete random variable is equal to some value. Frequency distribution: a table that displays the frequency of various outcomes in a sample. Relative frequency distribution: a frequency distribution where each value has been divided (normalized) by a number of outcomes in a sample (i.e. sample size). Categorical distribution: for discrete random variables with a finite set of values. Absolutely continuous probability distributions Absolutely continuous probability distribution: for many random variables with uncountably many values. Probability density function (pdf) or probability density: function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample. Related terms Support: set of values that can be assumed with non-zero probability (or probability density in the case of a continuous distribution) by the random variable. For a random variable X, it is sometimes denoted as R_X. Tail: the regions close to the bounds of the random variable, if the pmf or pdf are relatively low therein. Usually has the form X > a, X < b, or a union thereof. Head: the region where the pmf or pdf is relatively high. Usually has the form a < X < b. Expected value or mean: the weighted average of the possible values, using their probabilities as their weights; or the continuous analog thereof. Median: the value such that the set of values less than the median, and the set greater than the median, each have probabilities no greater than one-half. 
Mode: for a discrete random variable, the value with highest probability; for an absolutely continuous random variable, a location at which the probability density function has a local peak. Quantile: the q-quantile is the value x such that P(X ≤ x) = q. Variance: the second moment of the pmf or pdf about the mean; an important measure of the dispersion of the distribution. Standard deviation: the square root of the variance, and hence another measure of dispersion. Symmetry: a property of some distributions in which the portion of the distribution to the left of a specific value (usually the median) is a mirror image of the portion to its right. Skewness: a measure of the extent to which a pmf or pdf "leans" to one side of its mean. The third standardized moment of the distribution. Kurtosis: a measure of the "fatness" of the tails of a pmf or pdf. The fourth standardized moment of the distribution. Cumulative distribution function In the special case of a real-valued random variable, the probability distribution can equivalently be represented by a cumulative distribution function instead of a probability measure. The cumulative distribution function of a random variable X with regard to a probability distribution P is defined as F(x) = P(X ≤ x). The cumulative distribution function of any real-valued random variable has the properties: F is non-decreasing; F is right-continuous; 0 ≤ F(x) ≤ 1; lim_{x→−∞} F(x) = 0 and lim_{x→∞} F(x) = 1; and P(a < X ≤ b) = F(b) − F(a) for all a < b. Conversely, any function that satisfies the first four of the properties above is the cumulative distribution function of some probability distribution on the real numbers. Any probability distribution can be decomposed as the mixture of a discrete, an absolutely continuous and a singular continuous distribution, and thus any cumulative distribution function admits a decomposition as the convex sum of the three according cumulative distribution functions. Discrete probability distribution A discrete probability distribution is the probability distribution of a random variable that can take on only a countable number of values (almost surely), which means that the probability of any event E can be expressed as a (finite or countably infinite) sum: P(X ∈ E) = Σ_{ω ∈ A ∩ E} P(X = ω), where A is a countable set with P(X ∈ A) = 1. Thus the discrete random variables (i.e. random variables whose probability distribution is discrete) are exactly those with a probability mass function p(x) = P(X = x). In the case where the range of values is countably infinite, these values have to decline to zero fast enough for the probabilities to add up to 1. For example, if p(n) = 1/2^n for n = 1, 2, ..., the sum of probabilities would be 1/2 + 1/4 + 1/8 + ... = 1. Well-known discrete probability distributions used in statistical modeling include the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, the negative binomial distribution and the categorical distribution. When a sample (a set of observations) is drawn from a larger population, the sample points have an empirical distribution that is discrete, and which provides information about the population distribution. Additionally, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices. Cumulative distribution function A real-valued discrete random variable can equivalently be defined as a random variable whose cumulative distribution function increases only by jump discontinuities—that is, its cdf increases only where it "jumps" to a higher value, and is constant in intervals without jumps. The points where jumps occur are precisely the values which the random variable may take. 
Thus the cumulative distribution function has the form F(x) = P(X ≤ x) = Σ p(ω), where the sum runs over all outcomes ω with ω ≤ x. The points where the cdf jumps always form a countable set; this may be any countable set and thus may even be dense in the real numbers. Dirac delta representation A discrete probability distribution is often represented with Dirac measures, the probability distributions of deterministic random variables. For any outcome ω, let δ_ω be the Dirac measure concentrated at ω. Given a discrete probability distribution, there is a countable set A with P(X ∈ A) = 1 and a probability mass function p. If E is any event, then P(X ∈ E) = Σ p(ω) δ_ω(E), summing over ω in A, or in short, P_X = Σ p(ω) δ_ω. Similarly, discrete distributions can be represented with the Dirac delta function as a generalized probability density function f, where f(x) = Σ p(ω) δ(x − ω), which means P(X ∈ E) = ∫_E f(x) dx for any event E. Indicator-function representation For a discrete random variable X, let u_0, u_1, ... be the values it can take with non-zero probability. Denote Ω_i = {ω : X(ω) = u_i}, for i = 0, 1, 2, .... These are disjoint sets, and for such sets P(Ω_0 ∪ Ω_1 ∪ ...) = Σ P(Ω_i) = Σ P(X = u_i) = 1. It follows that the probability that X takes any value except for u_0, u_1, ... is zero, and thus one can write X as X = Σ u_i 1_{Ω_i} except on a set of probability zero, where 1_A is the indicator function of A. This may serve as an alternative definition of discrete random variables. One-point distribution A special case is the discrete distribution of a random variable that can take on only one fixed value; in other words, it is a deterministic distribution. Expressed formally, the random variable X has a one-point distribution if it has a possible outcome x such that P(X = x) = 1. All other possible outcomes then have probability 0. Its cumulative distribution function jumps immediately from 0 to 1. Absolutely continuous probability distribution An absolutely continuous probability distribution is a probability distribution on the real numbers with uncountably many possible values, such as a whole interval in the real line, and where the probability of any event can be expressed as an integral. More precisely, a real random variable X has an absolutely continuous probability distribution if there is a function f: R → [0, ∞) such that for each interval I = [a, b] the probability of X belonging to I is given by the integral of f over I: P(a ≤ X ≤ b) = ∫_a^b f(x) dx. This is the definition of a probability density function, so that absolutely continuous probability distributions are exactly those with a probability density function. In particular, the probability for X to take any single value a (that is, a ≤ X ≤ a) is zero, because an integral with coinciding upper and lower limits is always equal to zero. If the interval [a, b] is replaced by any measurable set A, the corresponding equality still holds: P(X ∈ A) = ∫_A f(x) dx. An absolutely continuous random variable is a random variable whose probability distribution is absolutely continuous. There are many examples of absolutely continuous probability distributions: normal, uniform, chi-squared, and others. Cumulative distribution function Absolutely continuous probability distributions as defined above are precisely those with an absolutely continuous cumulative distribution function. In this case, the cumulative distribution function has the form F(x) = P(X ≤ x) = ∫_{−∞}^x f(t) dt, where f is a density of the random variable X with regard to the distribution P. Note on terminology: Absolutely continuous distributions ought to be distinguished from continuous distributions, which are those having a continuous cumulative distribution function. Every absolutely continuous distribution is a continuous distribution but the converse is not true: there exist singular distributions, which are neither absolutely continuous nor discrete nor a mixture of those, and do not have a density. An example is given by the Cantor distribution. 
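As an illustration of the relation P(a ≤ X ≤ b) = ∫_a^b f(x) dx = F(b) − F(a) defined above, a minimal Python sketch for the standard normal distribution, numerically integrating the density and comparing with the difference of cumulative distribution function values (the interval endpoints and the step count are arbitrary illustrative choices):

```python
import math

def f(x):
    """Standard normal probability density function."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def F(x):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def integrate(g, a, b, n=100_000):
    """Plain trapezoidal approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return h * (g(a) / 2 + sum(g(a + i * h) for i in range(1, n)) + g(b) / 2)

a, b = -1.0, 2.0
print(integrate(f, a, b))      # P(a <= X <= b) as the integral of the density
print(F(b) - F(a))             # the same probability as F(b) - F(a)
print(integrate(f, 1.0, 1.0))  # an integral with coinciding limits is zero
```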
Some authors however use the term "continuous distribution" to denote all distributions whose cumulative distribution function is absolutely continuous, i.e. refer to absolutely continuous distributions as continuous distributions. For a more general definition of density functions and the equivalent absolutely continuous measures see absolutely continuous measure. Kolmogorov definition In the measure-theoretic formalization of probability theory, a random variable X is defined as a measurable function from a probability space (Ω, F, P) to a measurable space (E, Σ). Given that probabilities of events of the form {ω ∈ Ω : X(ω) ∈ A}, for A in Σ, satisfy Kolmogorov's probability axioms, the probability distribution of X is the image measure X_*P of X, which is a probability measure on (E, Σ) satisfying X_*P(A) = P(X^(−1)(A)). Other kinds of distributions Absolutely continuous and discrete distributions with support on R^k or N^k are extremely useful to model a myriad of phenomena, since most practical distributions are supported on relatively simple subsets, such as hypercubes or balls. However, this is not always the case, and there exist phenomena with supports that are actually complicated curves within some space R^n or similar. In these cases, the probability distribution is supported on the image of such a curve, and is likely to be determined empirically, rather than finding a closed formula for it. One example is shown in the figure to the right, which displays the evolution of a system of differential equations (commonly known as the Rabinovich–Fabrikant equations) that can be used to model the behaviour of Langmuir waves in plasma. When this phenomenon is studied, the observed states from the subset are as indicated in red. So one could ask what is the probability of observing a state in a certain position of the red subset; if such a probability exists, it is called the probability measure of the system. This kind of complicated support appears quite frequently in dynamical systems. It is not simple to establish that the system has a probability measure, and the main problem is the following. Let t_1 ≪ t_2 ≪ t_3 be instants in time and O a subset of the support; if the probability measure exists for the system, one would expect the frequency of observing states inside the set O would be equal in the intervals [t_1, t_2] and [t_2, t_3], which might not happen; for example, it could oscillate similar to a sine, sin(t), whose limit when t → ∞ does not converge. Formally, the measure exists only if the limit of the relative frequency converges when the system is observed into the infinite future. The branch of dynamical systems that studies the existence of a probability measure is ergodic theory. Note that even in these cases, the probability distribution, if it exists, might still be termed "absolutely continuous" or "discrete" depending on whether the support is uncountable or countable, respectively. Random number generation Most algorithms are based on a pseudorandom number generator that produces numbers that are uniformly distributed in the half-open interval [0, 1). These random variates are then transformed via some algorithm to create a new random variate having the required probability distribution. With this source of uniform pseudo-randomness, realizations of any random variable can be generated. For example, suppose U has a uniform distribution between 0 and 1. To construct a random Bernoulli variable X for some 0 < p < 1, we define X = 1 if U < p and X = 0 if U ≥ p, so that P(X = 1) = P(U < p) = p and P(X = 0) = P(U ≥ p) = 1 − p. This random variable X has a Bernoulli distribution with parameter p. This is a transformation of a uniform variate into a discrete random variable. 
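A minimal Python sketch of the uniform-to-Bernoulli transformation just described, together with the inverse-distribution-function construction described immediately below (the parameter values 0.3 and 2.0 and the sample sizes are arbitrary illustrative choices):

```python
import math
import random

def bernoulli(p):
    """Transform a uniform variate U on [0, 1) into a Bernoulli variate:
    X = 1 if U < p, else X = 0, so that P(X = 1) = p."""
    return 1 if random.random() < p else 0

def exponential(lam):
    """Inverse-transform sampling (see the next paragraph): for
    F(x) = 1 - exp(-lam * x), X = -ln(1 - U) / lam has that distribution."""
    return -math.log(1.0 - random.random()) / lam

bern = [bernoulli(0.3) for _ in range(100_000)]
print(sum(bern) / len(bern))   # close to p = 0.3
expo = [exponential(2.0) for _ in range(100_000)]
print(sum(expo) / len(expo))   # close to the mean 1 / lam = 0.5
```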
For a distribution function F of an absolutely continuous random variable, an absolutely continuous random variable must be constructed. F^(−1), an inverse function of F, relates to the uniform variable U: U = F(X) if and only if X = F^(−1)(U). For example, suppose a random variable X with an exponential distribution F(x) = 1 − e^(−λx) must be constructed. Then F(x) = u if and only if x = −ln(1 − u)/λ, so F^(−1)(u) = −ln(1 − u)/λ, and if U has a uniform distribution on [0, 1), then the random variable X is defined by X = F^(−1)(U) = −ln(1 − U)/λ. This has an exponential distribution with parameter λ. A frequent problem in statistical simulations (the Monte Carlo method) is the generation of pseudo-random numbers that are distributed in a given way. Common probability distributions and their applications The concept of the probability distribution and the random variables which they describe underlies the mathematical discipline of probability theory, and the science of statistics. There is spread or variability in almost any value that can be measured in a population (e.g. height of people, durability of a metal, sales growth, traffic flow, etc.); almost all measurements are made with some intrinsic error; in physics, many processes are described probabilistically, from the kinetic properties of gases to the quantum mechanical description of fundamental particles. For these and many other reasons, simple numbers are often inadequate for describing a quantity, while probability distributions are often more appropriate. The following is a list of some of the most common probability distributions, grouped by the type of process that they are related to. For a more complete list, see list of probability distributions, which groups by the nature of the outcome being considered (discrete, absolutely continuous, multivariate, etc.) All of the univariate distributions below are singly peaked; that is, it is assumed that the values cluster around a single point. In practice, actually observed quantities may cluster around multiple values. Such quantities can be modeled using a mixture distribution. Linear growth (e.g. errors, offsets) Normal distribution (Gaussian distribution), for a single such quantity; the most commonly used absolutely continuous distribution Exponential growth (e.g. prices, incomes, populations) Log-normal distribution, for a single such quantity whose log is normally distributed Pareto distribution, for a single such quantity whose log is exponentially distributed; the prototypical power law distribution Uniformly distributed quantities Discrete uniform distribution, for a finite set of values (e.g. the outcome of a fair die) Continuous uniform distribution, for absolutely continuously distributed values Bernoulli trials (yes/no events, with a given probability) Basic distributions: Bernoulli distribution, for the outcome of a single Bernoulli trial (e.g. success/failure, yes/no) Binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed total number of independent occurrences Negative binomial distribution, for binomial-type observations but where the quantity of interest is the number of failures before a given number of successes occurs Geometric distribution, for binomial-type observations but where the quantity of interest is the number of failures before the first success; a special case of the negative binomial distribution Related to sampling schemes over a finite population: Hypergeometric distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) 
given a fixed number of total occurrences, using sampling without replacement Beta-binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed number of total occurrences, sampling using a Pólya urn model (in some sense, the "opposite" of sampling without replacement) Categorical outcomes (events with K possible outcomes) Categorical distribution, for a single categorical outcome (e.g. yes/no/maybe in a survey); a generalization of the Bernoulli distribution Multinomial distribution, for the number of each type of categorical outcome, given a fixed number of total outcomes; a generalization of the binomial distribution Multivariate hypergeometric distribution, similar to the multinomial distribution, but using sampling without replacement; a generalization of the hypergeometric distribution Poisson process (events that occur independently with a given rate) Poisson distribution, for the number of occurrences of a Poisson-type event in a given period of time Exponential distribution, for the time before the next Poisson-type event occurs Gamma distribution, for the time before the next k Poisson-type events occur Absolute values of vectors with normally distributed components Rayleigh distribution, for the distribution of vector magnitudes with Gaussian distributed orthogonal components. Rayleigh distributions are found in RF signals with Gaussian real and imaginary components. Rice distribution, a generalization of the Rayleigh distributions for cases where there is a stationary background signal component. Found in Rician fading of radio signals due to multipath propagation and in MR images with noise corruption on non-zero NMR signals. Normally distributed quantities operated with sum of squares Chi-squared distribution, the distribution of a sum of squared standard normal variables; useful e.g. for inference regarding the sample variance of normally distributed samples (see chi-squared test) Student's t distribution, the distribution of the ratio of a standard normal variable and the square root of a scaled chi squared variable; useful for inference regarding the mean of normally distributed samples with unknown variance (see Student's t-test) F-distribution, the distribution of the ratio of two scaled chi squared variables; useful e.g. for inferences that involve comparing variances or involving R-squared (the squared correlation coefficient) As conjugate prior distributions in Bayesian inference Beta distribution, for a single probability (real number between 0 and 1); conjugate to the Bernoulli distribution and binomial distribution Gamma distribution, for a non-negative scaling parameter; conjugate to the rate parameter of a Poisson distribution or exponential distribution, the precision (inverse variance) of a normal distribution, etc. Dirichlet distribution, for a vector of probabilities that must sum to 1; conjugate to the categorical distribution and multinomial distribution; generalization of the beta distribution Wishart distribution, for a symmetric non-negative definite matrix; conjugate to the inverse of the covariance matrix of a multivariate normal distribution; generalization of the gamma distribution Some specialized applications of probability distributions The cache language models and other statistical language models used in natural language processing to assign probabilities to the occurrence of particular words and word sequences do so by means of probability distributions. 
In quantum mechanics, the probability density of finding the particle at a given point is proportional to the square of the magnitude of the particle's wavefunction at that point (see Born rule). Therefore, the probability distribution function of the position x of a particle is described by P(a ≤ x ≤ b) = ∫_a^b |ψ(x)|^2 dx, the probability that the particle's position x will be in the interval a ≤ x ≤ b in dimension one, and by a similar triple integral in dimension three. This is a key principle of quantum mechanics. Probabilistic load flow in power-flow study explains the uncertainties of input variables as probability distributions and provides the power flow calculation also in terms of probability distributions. Prediction of natural phenomena occurrences based on previous frequency distributions such as tropical cyclones, hail, time in between events, etc. Fitting See also Conditional probability distribution Empirical probability distribution Histogram Joint probability distribution Probability measure Quasiprobability distribution Riemann–Stieltjes integral application to probability theory Lists List of probability distributions List of statistical topics References Citations Sources External links Field Guide to Continuous Probability Distributions, Gavin E. Crooks. Distinguishing probability measure, function and distribution, Math Stack Exchange Mathematical and quantitative methods (economics)
23545
https://en.wikipedia.org/wiki/Psychological%20statistics
Psychological statistics
Psychological statistics is the application of formulas, theorems, numbers and laws to psychology. Statistical methods for psychology include the development and application of statistical theory and methods for modeling psychological data. These methods include psychometrics, factor analysis, experimental designs, and Bayesian statistics. The article also discusses journals in the same field. Psychometrics Psychometrics deals with the measurement of psychological attributes. It involves developing and applying statistical models for mental measurements. The measurement theories are divided into two major areas: (1) classical test theory; (2) item response theory. Classical test theory The classical test theory or true score theory or reliability theory in statistics is a set of statistical procedures useful for the development of psychological tests and scales. It is based on a fundamental equation, X = T + E, where X is the total score, T is the true score and E is the error of measurement. For each participant, it assumes that there exists a true score and that the obtained score (X) has to be as close to it as possible. The closeness of X to T is expressed in terms of the reliability of the obtained score. The reliability, in terms of the classical test procedure, is the correlation between the true score and the obtained score. The typical test construction procedure has the following steps: (1) Determine the construct (2) Outline the behavioral domain of the construct (3) Write 3 to 5 times more items than the desired test length (4) Get item content analyzed by experts and cull items (5) Obtain data on the initial version of the test (6) Item analysis (statistical procedure) (7) Factor analysis (statistical procedure) (8) After the second cull, make the final version (9) Use it for research. Reliability The reliability is computed in specific ways. (A) Inter-rater reliability: Inter-rater reliability is an estimate of agreement between independent raters. This is most useful for subjective responses. Cohen's kappa, Krippendorff's alpha, intra-class correlation coefficients, correlation coefficients, Kendall's coefficient of concordance, etc. are useful statistical tools. (B) Test-retest reliability: The test-retest procedure is an estimation of the temporal consistency of the test. A test is administered twice to the same sample with a time interval. The correlation between the two sets of scores is used as an estimate of reliability. Testing conditions are assumed to be identical. (C) Internal consistency reliability: Internal consistency reliability estimates the consistency of items with each other. Split-half reliability (using the Spearman–Brown prophecy formula) and Cronbach's alpha are popular estimates of this reliability. (D) Parallel form reliability: It is an estimate of consistency between two different instruments of measurement. The inter-correlation between two parallel forms of a test or scale is used as an estimate of parallel form reliability. Validity The validity of a scale or test is the ability of the instrument to measure what it purports to measure. Construct validity, content validity, and criterion validity are types of validity. Construct validity is estimated by convergent and discriminant validity and factor analysis. Convergent and discriminant validity are ascertained by correlations between similar or different constructs. Content validity: subject matter experts evaluate content validity. Criterion validity is the correlation between the test and a criterion variable (or variables) of the construct. 
Regression analysis, multiple regression analysis, and logistic regression are used as estimates of criterion validity. Software applications: The R software has the ‘psych’ package, which is useful for classical test theory analysis. Modern test theory Modern test theory is based on the latent trait model. Every item estimates the ability of the test taker. The ability parameter is called theta (θ). The difficulty parameter is called b. The two important assumptions are local independence and unidimensionality. Item response theory has three models: the one-parameter logistic model, the two-parameter logistic model and the three-parameter logistic model. In addition, polytomous IRT models are also useful. The R software has the ‘ltm’ package, which is useful for IRT analysis. Factor analysis Factor analysis is at the core of psychological statistics. It has two schools: (1) exploratory factor analysis and (2) confirmatory factor analysis. Exploratory factor analysis (EFA) Exploratory factor analysis begins without a theory or with a very tentative theory. It is a dimension reduction technique. It is useful in psychometrics, multivariate analysis of data and data analytics. Typically, a k-dimensional correlation matrix or covariance matrix of variables is reduced to a k × r factor pattern matrix, where r < k. Principal component analysis and common factor analysis are two ways of extracting factors. Principal axis factoring, maximum likelihood (ML) factor analysis, alpha factor analysis and image factor analysis are the most useful methods of EFA. It employs various factor rotation methods, which can be classified into orthogonal (resulting in uncorrelated factors) and oblique (resulting in correlated factors). The ‘psych’ package in R is useful for EFA. Confirmatory factor analysis (CFA) Confirmatory factor analysis (CFA) is a factor analytic technique that begins with a theory and tests the theory by carrying out factor analysis. CFA is also called latent structure analysis, which considers factors as latent variables causing the actual observable variables. The basic equation of CFA is X = Λξ + δ, where X is the vector of observed variables, Λ is the matrix of structural coefficients, ξ is the vector of latent variables (factors) and δ is the vector of errors. The parameters are estimated using ML methods; however, other methods of estimation are also available. The chi-square test is very sensitive, and hence various fit measures are used. The R packages ‘sem’ and ‘lavaan’ are useful for this purpose. Experimental design Experimental methods are very popular in psychology, going back more than 100 years. Experimental psychology is a sub-discipline of psychology. Statistical methods applied for designing and analyzing experimental psychological data include the t-test, ANOVA, ANCOVA, MANOVA, MANCOVA, the binomial test, the chi-square test, etc. Multivariate behavioral research Multivariate behavioral research is becoming very popular in psychology. These methods include multiple regression and prediction; moderated and mediated regression analysis; logistic regression; canonical correlations; cluster analysis; multi-level modeling; survival-failure analysis; structural equation modeling; and hierarchical linear modelling, all of which are very useful for psychological statistics. 
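As an illustration of the internal consistency estimate discussed above, the following Python sketch computes Cronbach's alpha from a small, entirely hypothetical matrix of item responses; in practice this would typically be done with functions such as alpha() in the R ‘psych’ package.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items matrix of scores:
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)."""
    k = len(scores[0])  # number of items

    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    item_vars = [variance([row[j] for row in scores]) for j in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Entirely hypothetical 5-point responses from six respondents on four items:
data = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
]
print(round(cronbach_alpha(data), 3))
```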
Journals for statistical applications for psychology There are many specialized journals that publish advances in statistical analysis for psychology: Psychometrika Educational and Psychological Measurement Assessment American Journal of Evaluation Applied Psychological Measurement Behavior Research Methods British Journal of Mathematical and Statistical Psychology Journal of Educational and Behavioral Statistics Journal of Mathematical Psychology Multivariate Behavioral Research Psychological Assessment Structural Equation Modeling Software packages for psychological research Various software packages are available for statistical methods for psychological research. They can be classified as commercial software (e.g., JMP and SPSS) and open-source (e.g., R). Among the open-source offerings, the R software is the most popular. There are many online references for R and specialized books on R for Psychologists are also being written. The "psych" package of R is very useful for psychologists. Among others, "lavaan", "sem", "ltm", "ggplot2" are some of the popular packages. PSPP and KNIME are other free packages. Commercial packages include JMP, SPSS and SAS. JMP and SPSS are commonly reported in books. See also Quantitative psychology Psychometrics Notes References Agresti, A. (1990). Categorical data analysis. Wiley: NJ. Bollen, KA. (1989). Structural Equations with Latent Variables. New York: John Wiley & Sons. Belhekar, V. M. (2016). Statistics for Psychology Using R, New Delhi: SAGE. Cohen, B.H. (2007) Explaining Psychological Statistics, 3rd Edition, Wiley. Cronbach LJ (1951). Coefficient alpha and the internal structure of tests. Psychometrika 16, 297–334. doi:10.1007/bf02310555 Hambleton, R. K., & Swaminathan H. (1985). Item Response theory: Principles and Applications. Boston: Kluwer. Harman, H. H. (1976). Modern Factor Analysis(3rd ed.). Chicago: University of Chicago Press. Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis. The Guilford Press: NY. Howell, D. (2009) Statistical Methods for Psychology, International Edition, Wadsworth. Kline, T. J. B. (2005)Psychological Testing: A Practical Approach to Design and Evaluation. Sage Publications: Thousand Oaks. Loehlin, J. E. (1992). Latent Variable Models: An Introduction to Factor, Path, and Structural Analysis (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum. Lord, F. M., and Novick, M. R. ( 1 968). Statistical theories of mental test scores. Reading, Mass. : Addison-Wesley, 1968. Menard, S. (2001). Applied logistic regression analysis. (2nd ed.). Thousand Oaks. CA: Sage Publications. Nunnally, J. & Bernstein, I. (1994). Psychometric Theory. McGraw-Hill. Raykov, T. & Marcoulides, G.A. (2010) Introduction to Psychometric Theory. New York: Routledge. Tabachnick, B. G., & Fidell, L. S. (2007). Using Multivariate Statistics, 6th ed. Boston: Pearson. Wilcox, R. (2012). Modern Statistics for the Social and Behavioral Sciences: A Practical Introduction. FL: CRC Press. External links CRAN Webpage for R Page for R functions for psychological statistics Matthew Rockloff's tutorials on t-tests, correlation and ANOVA Psychometrics Psychology experiments Applied statistics
23547
https://en.wikipedia.org/wiki/Peter%20Cook
Peter Cook
Peter Edward Cook (17 November 1937 – 9 January 1995) was an English comedian, actor, satirist, playwright and screenwriter. He was the leading figure of the British satire boom of the 1960s, and he was associated with the anti-establishment comedic movement that emerged in the United Kingdom in the late 1950s. Born in Torquay, he was educated at the University of Cambridge. There he became involved with the Footlights Club, of which he later became president. After graduating, he created the comedy stage revue Beyond the Fringe, beginning a long-running partnership with Dudley Moore. In 1961, Cook opened the comedy club The Establishment in Soho. In 1965, Cook and Moore began a television career, beginning with Not Only... But Also. Cook's deadpan monologues contrasted with Moore's buffoonery. They received the 1966 British Academy Television Award for Best Entertainment Performance. Following the success of the show, the duo appeared together in the films The Wrong Box (1966) and Bedazzled (1967). Cook and Moore returned to television projects continuing to the late 1970s, including co-presenting Saturday Night Live in the United States. From 1978 until his death in 1995, Cook no longer collaborated with Moore, apart from a few cameo appearances but continued to be a regular performer in British television and film. Referred to as "the father of modern satire" by The Guardian in 2005, Cook was ranked number one in the Comedians' Comedian, a poll of more than 300 comics, comedy writers, producers and directors in the English-speaking world. Early life Cook was born at his parents' house, "Shearbridge", in Middle Warberry Road, Torquay, Devon. He was the only son, and eldest of the three children, of Alexander Edward "Alec" Cook (1906–1984), a colonial civil servant and his wife Ethel Catherine Margaret (1908–1994), daughter of solicitor Charles Mayo. His father served as political officer and later district officer in Nigeria, then as financial secretary to the colony of Gibraltar, followed by a return to Nigeria as Permanent Secretary of the Eastern Region based at Enugu. Cook's grandfather, Edward Arthur Cook (1869–1914), had also been a colonial civil servant, traffic manager for the Federated Malay States Railway in Kuala Lumpur, Malaya. The stress he suffered in the lead-up to an interview regarding promotion led him to commit suicide. His wife, Minnie Jane (1869–1957), daughter of Thomas Wreford, of Thelbridge and Witheridge, Devon, and of Stratford-upon-Avon, of a prominent Devonshire family traced back to 1440, kept this fact secret. Peter Cook only discovered the truth when later researching his family. Cook was educated at Radley College and then went up to Pembroke College, Cambridge, where he read French and German. As a student, Cook initially intended to become a career diplomat like his father, but Britain "had run out of colonies", as he put it. Although largely apathetic politically, particularly in later life when he displayed a deep distrust of politicians of all hues, he joined the Cambridge University Liberal Club. At Pembroke, Cook performed and wrote comedy sketches as a member of the Cambridge Footlights Club, of which he became president in 1960. His hero was fellow Footlights writer and Cambridge magazine writer David Nobbs. 
While still at university, Cook wrote for Kenneth Williams, providing several sketches for Williams' hit West End comedy revue Pieces of Eight and much of the follow-up, One Over the Eight, before finding prominence in his own right in a four-man group satirical stage show, Beyond the Fringe, alongside Jonathan Miller, Alan Bennett, and Dudley Moore. Beyond the Fringe became a great success in London after being first performed at the Edinburgh Festival and included Cook impersonating the prime minister, Harold Macmillan. This was one of the first occasions satirical political mimicry had been attempted in live theatre, and it shocked audiences. During one performance, Macmillan was in the theatre and Cook departed from his script and attacked him verbally. Career 1960s In 1961, Cook opened The Establishment, a club at 18 Greek Street in Soho in central London, presenting fellow comedians in a nightclub setting, including American Lenny Bruce. Cook later joked that it was a satirical venue modelled on "those wonderful Berlin cabarets ... which did so much to stop the rise of Hitler and prevent the outbreak of the Second World War". As a members-only venue, it was outside the censorship restrictions. The Establishment's regular cabaret performers were Eleanor Bron, John Bird, and John Fortune. Cook befriended and supported Australian comedian and actor Barry Humphries, who began his British solo career at the club. Humphries said in his autobiography, My Life As Me, that he found Cook's lack of interest in art and literature off-putting. Dudley Moore's jazz trio played in the basement of the club during the early 1960s. Cook also opened an Establishment club in New York in 1963 and Lenny Bruce performed there, as well. In 1962, the BBC commissioned a pilot for a television series of satirical sketches based on the Establishment Club, but it was not immediately picked up and Cook went to New York City for a year to perform Beyond the Fringe on Broadway. When he returned, the pilot had been refashioned as That Was the Week That Was and had made a television star of David Frost, something Cook made no secret of resenting. He complained that Frost's success was based on directly copying Cook's own stage persona and Cook dubbed him "the bubonic plagiarist", and said that his only regret in life, according to Alan Bennett, had been saving Frost from drowning. This incident occurred in the summer of 1963, when the rivalry between the two men was at its height. Cook had realised that Frost's potential drowning would have looked deliberate if he had not been rescued. By the mid 1960s the satire boom was coming to an end and Cook said: "England was about to sink giggling into the sea." Around this time, Cook provided substantial financial backing for the satirical magazine Private Eye, supporting it through difficult periods, particularly in libel trials. Cook invested his own money and solicited investment from his friends. For a time, the magazine was produced from the premises of the Establishment Club. In 1963, Cook married Wendy Snowden. The couple had two daughters, Lucy and Daisy, but the marriage ended in 1970. Cook's first regular television spot was on Granada Television's On the Braden Beat with Bernard Braden, where he featured his most enduring character: the static, dour and monotonal E. L. Wisty, whom Cook had conceived for Radley College's Marionette Society. Cook's comedy partnership with Dudley Moore led to Not Only... But Also. 
This was originally intended by the BBC as a vehicle for Moore's music, but Moore invited Cook to write sketches and appear with him. Using few props, they created dry, absurd television that proved hugely popular and lasted for three series between 1965 and 1970. Cook played characters such as Sir Arthur Streeb-Greebling and the two men created their Pete and Dud alter egos. Other sketches included "Superthunderstingcar", a parody of the Gerry Anderson marionette TV shows, and Cook's pastiche of 1960s trendy arts documentaries – satirised in a parodic segment on Greta Garbo. When Cook learned a few years later that the videotapes of the series were to be wiped, a common practice at the time, he offered to buy the recordings from the BBC but was refused because of copyright issues. He suggested he could purchase new tapes so that the BBC would have no need to erase the originals, but this was also turned down. Of the original 22 programmes, only eight still survive complete. A compilation of six half-hour programmes, The Best of... What's Left of... Not Only...But Also was shown on television and has been released on both VHS and DVD. With The Wrong Box (1966) and Bedazzled (1967), Cook and Moore began to act in films together. Directed by Stanley Donen, the underlying story of Bedazzled is credited to Cook and Moore and its screenplay to Cook. A comic parody of Faust, it stars Cook as George Spigott (the Devil) who tempts Stanley Moon (Moore), a frustrated, short-order chef, with the promise of gaining his heart's desire – the unattainable beauty and waitress at his cafe, Margaret Spencer (Eleanor Bron) – in exchange for his soul, but repeatedly tricks him. The film features cameo appearances by Barry Humphries as Envy and Raquel Welch as Lust. Moore composed the soundtrack music and co-wrote (with Cook) the songs performed in the film. His jazz trio backed Cook on the theme, a parodic anti-love song, which Cook delivered in a deadpan monotone and included his familiar put-down, "you fill me with inertia". In 1968, Cook and Moore briefly switched to ATV for four one-hour programmes titled Goodbye Again, based on the Pete and Dud characters. Cook's increasing alcoholism led him to become reliant on cue cards. The show was not a popular success, owing in part to a strike causing the suspension of the publication of the ITV listings magazine TV Times. John Cleese was also a cast member, who would become close lifelong friends with Cook and later collaborated on multiple projects together. 1970s In 1970, Cook took over a project initiated by David Frost for a satirical film about an opinion pollster who rises to become Prime Minister of Great Britain. Under Cook's guidance, the character became modelled on Frost. The film, The Rise and Rise of Michael Rimmer, was not a success, although the cast contained notable names (including Cleese and Graham Chapman, who were co-writers). Cook became a favourite of the chat show circuit but his effort at hosting such a show for the BBC in 1971, Where Do I Sit?, was said by the critics to have been a disappointment. It was axed after only three episodes and was replaced by Michael Parkinson, the start of Parkinson's career as a chat show host. Parkinson later asked Cook what his ambitions were, Cook replied jocularly "[...] in fact, my ambition is to shut you up altogether you see!" Cook and Moore fashioned sketches from Not Only....But Also and Goodbye Again with new material into the stage revue called Behind the Fridge. 
This show toured Australia in 1972 before transferring to New York City in 1973, retitled as Good Evening. Cook frequently appeared on and off stage the worse for drink. Nonetheless, the show proved very popular and it won Tony and Grammy Awards. When it finished, Moore stayed in the United States to pursue his film acting ambitions in Hollywood. Cook returned to Britain and in 1973, married the actress and model Judy Huxtable. Later, the more risqué humour of Pete and Dud went further on such LPs as "Derek and Clive". The first recording was initiated by Cook to alleviate boredom during the Broadway run of Good Evening and used material conceived years before for the two characters but considered too outrageous. One of these audio recordings was also filmed and therein tensions between the duo are seen to rise. Chris Blackwell circulated bootleg copies to friends in the music business. The popularity of the recording convinced Cook to release it commercially, although Moore was initially reluctant, fearing that his rising fame as a Hollywood star would be undermined. Two further Derek and Clive albums were released, the last accompanied by a film. Cook and Moore hosted Saturday Night Live on 24 January 1976 during the show's first season. They did a number of their classic stage routines, including "One Leg Too Few" and "Frog and Peach" among others, in addition to participating in some skits with the show's ensemble cast. In 1978, Cook appeared on the British music series Revolver as the manager of a ballroom where emerging punk and new wave acts played. For some groups, these were their first appearances on television. Cook's acerbic commentary was a distinctive aspect of the programme. In 1979, Cook recorded comedy-segments as B-sides to the Sparks 12-inch singles "Number One Song in Heaven" and "Tryouts for the Human Race". The main songwriter Ron Mael often began with a banal situation in his lyrics and then went at surreal tangents in the style of Cook and S. J. Perelman. Amnesty International performances Cook appeared at the first three fund-raising galas staged by Cleese and Martin Lewis on behalf of Amnesty International. From the third show in 1979 the benefits were dubbed The Secret Policeman's Balls. He performed on all three nights of the first show in April 1976, A Poke in the Eye (With a Sharp Stick), as an individual performer and as a member of the cast of Beyond the Fringe, which reunited for the first time since the 1960s. He also appeared in a Monty Python sketch, taking the place of Eric Idle. Cook was on the cast album of the show and in the film, Pleasure at Her Majesty's. He was in the second Amnesty gala in May 1977, An Evening Without Sir Bernard Miles. It was retitled The Mermaid Frolics for the cast album and TV special. Cook performed monologues and skits with Terry Jones. In June 1979, Cook performed all four nights of The Secret Policeman's Ball, teaming with Cleese. Cook performed a couple of solo pieces and a sketch with Eleanor Bron. He also led the ensemble in the finale – the "End of the World" sketch from Beyond the Fringe. In response to a barb in The Daily Telegraph that the show was recycled material, Cook wrote a satire of the summing-up by Justice Cantley in the trial of former Liberal Party leader Jeremy Thorpe, a summary now widely thought to show bias in favour of Thorpe. Cook performed it that same night (Friday 29 June – the third of the four nights) and the following night. 
The nine-minute opus, "Entirely a Matter for You", is considered by many fans and critics to be one of the finest works of Cook's career. Along with Cook, producer of the show Martin Lewis brought out an album on Virgin Records entitled Here Comes the Judge: Live, containing the live performance together with three studio tracks that further lampooned the Thorpe trial. Although unable to take part in the 1981 gala, Cook supplied the narration over the animated opening title sequence of the 1982 film of the show. With Lewis, he wrote and voiced radio commercials to advertise the film in the UK. He also hosted a spoof film awards ceremony that was part of the world première of the film in London in March 1982. Following Cook's 1987 stage reunion with Moore for the annual American benefit for the homeless, Comic Relief (not related to the UK Comic Relief benefits), Cook repeated the reunion for a British audience by performing with Moore at the 1989 Amnesty benefit The Secret Policeman's Biggest Ball. Consequences album Cook played multiple roles on the 1977 concept album Consequences, written and produced by former 10cc members Kevin Godley and Lol Creme. A mixture of spoken comedy and progressive rock with an environmental subtext, Consequences started as a single that Godley and Creme planned to make to demonstrate their invention, an electric guitar effect called the Gizmo, which they developed in 10cc. The project grew into a three-LP box set. The comedy sections were originally intended to be performed by a cast including Spike Milligan and Peter Ustinov, but Godley and Creme eventually settled on Cook once they realised he could perform most parts himself. The storyline centres on the impending divorce of ineffectual Englishman Walter Stapleton (Cook) and his French wife Lulu (Judy Huxtable). While meeting their lawyers – the bibulous Mr. Haig and overbearing Mr. Pepperman (both played by Cook) – the encroaching global catastrophe interrupts proceedings with bizarre and mysterious happenings, which seem to centre on Mr. Blint (Cook), a musician and composer living in the flat below Haig's office, to which it is connected by a large hole in the floor. Although it has since developed a cult following, Consequences was released as punk was sweeping the UK and proved a resounding commercial failure, savaged by critics who found the music self-indulgent. The script and story have evident connections to Cook's own life – his then-wife Judy Huxtable plays Walter's wife. Cook's struggles with alcohol are mirrored in Haig's drinking, and there is a parallel between the fictional divorce of Walter and Lulu and Cook's own divorce from his first wife. The voice and accent Cook used for the character of Stapleton are similar to those of Cook's Beyond the Fringe colleague, Alan Bennett, and a book on Cook's comedy, How Very Interesting: Peter Cook's Universe and All That Surrounds It, speculates that the characters Cook plays in Consequences are his verbal caricatures of the four Beyond the Fringe cast members – the alcoholic Haig represents Cook himself, the tremulous Stapleton is Bennett, the parodically Jewish Pepperman is Miller, and the pianist Blint represents Moore. 1980s Cook starred in the LWT special Peter Cook & Co. in 1980. The show included comedy sketches, including a Tales of the Unexpected parody "Tales of the Much As We Expected". This involved Cook as Roald Dahl, explaining his name had been Ronald before he dropped the "n". 
The cast included Cleese, Rowan Atkinson, Beryl Reid, Paula Wilcox, and Terry Jones. Partly spurred by Moore's growing film star status, Cook moved to Hollywood in that year. He then appeared as an uptight English butler to a wealthy American woman in a short-lived United States television sitcom, The Two of Us, with Mimi Kennedy and Dana Hill. Cook also made some cameo appearances in a few undistinguished films. In 1983, Cook played the role of Richard III in the first episode of Blackadder, "The Foretelling", which parodies Laurence Olivier's portrayal. In 1984, he played the role of Nigel, the mathematics teacher, in Jeannot Szwarc's film Supergirl, working alongside the evil Selena played by Faye Dunaway. He then narrated the short film Diplomatix by Norwegian comedy trio Kirkvaag, Lystad, and Mjøen, which won the "Special Prize of the City of Montreux" at the Montreux Comedy Festival in 1985. In 1986, he partnered Joan Rivers on her UK talk show. He appeared as Mr Jolly in 1987 in The Comic Strip Presents... episode "Mr. Jolly Lives Next Door", playing an assassin who covers the sound of his murders by playing Tom Jones records. That same year, Cook appeared in The Princess Bride as the "Impressive Clergyman" who officiates at the wedding ceremony between Buttercup and Prince Humperdinck. Also that year, he spent time working with humourist Martin Lewis on a political satire about the 1988 US presidential elections for HBO, but the script went unproduced. Lewis suggested that Cook team with Moore for the US Comic Relief telethon for the homeless. The duo reunited and performed their "One Leg Too Few" sketch. Cook again collaborated with Moore for the 1989 Amnesty International benefit show, The Secret Policeman's Biggest Ball. A 1984 commercial for John Harvey & Sons showed Cook at a poolside party drinking Harvey's Bristol Cream sherry. He then says to "throw away those silly little glasses" whereupon the other party guests toss their sunglasses in the swimming pool. In 1988, Cook appeared as a contestant on the improvisation comedy show Whose Line Is It Anyway? He was declared the winner, his prize being to read the credits in the style of a New York cab driver – a character he had portrayed in Peter Cook & Co. Cook occasionally called in to Clive Bull's night-time phone-in radio show on LBC in London. Using the name "Sven from Swiss Cottage", he mused on love, loneliness, and herrings in a mock Norwegian accent. Jokes included Sven's attempts to find his estranged wife, in which he often claimed to be telephoning the show from all over the world, and his dislike of his fellow Norwegians' obsession with fish. While Bull was clearly aware that Sven was fictional and was happy to play along with the joke, he did not learn of the caller's real identity until later. Revival In late 1989, Cook married for the third time, to Malaysian-born property developer Chiew Lin Chong in Torbay, Devon. She provided him with some stability in his personal life, and he reduced his drinking to the extent that for a time he was teetotal. He lived alone in a small 18th-century house in Perrins Walk, Hampstead, while she kept her own property just away. Cook returned to the BBC as Sir Arthur Streeb-Greebling for an appearance with Ludovic Kennedy in A Life in Pieces. The 12 interviews saw Sir Arthur recount his life, based on the song "Twelve Days of Christmas". Unscripted interviews with Cook as Streeb-Greebling and satirist Chris Morris were recorded in late 1993 and broadcast as Why Bother? 
on BBC Radio 3 in 1994. On 17 December 1993, Cook appeared on Clive Anderson Talks Back as four characters – biscuit tester and alien abductee Norman House, football manager and motivational speaker Alan Latchley, judge Sir James Beauchamp, and rock legend Eric Daley. The following day, he appeared on BBC2 performing links for Arena's "Radio Night". He also appeared in the 1993 Christmas special of One Foot in the Grave ("One Foot in the Algarve"), playing a muckraking tabloid photographer. Before the end of the following year, his mother died, and a grief-stricken Cook returned to heavy drinking. He made his last television appearance on the show Pebble Mill at One in November 1994. Personal life Cook was married three times. He was first married to Wendy Snowden, whom he met at university, in 1963. They had two daughters, Lucy and Daisy. They divorced in 1971. Cook then married his second wife, model and actress Judy Huxtable, in 1973, the marriage ending in 1989 after they had been separated for some years. He married his third and final wife, Chiew Lin Chong, in 1989, to whom he remained married until his death. Cook became stepfather to Chong's daughter, Nina. Following Cook's death, Chong suffered from depression, deriving both from her loss and from the difficulties arising from raising Nina, who had learning difficulties. Chong died at the age of 71 in November 2016. Cook was an avid spectator of most sports (except rugby league) and was a supporter of Tottenham Hotspur football club, though he also maintained support for his hometown team Torquay United. Cook was an admitted heavy smoker. As a regular interviewee on his friend Michael Parkinson's show, he was usually to be seen with a lighted cigarette in his hand or mouth during their broadcast interviews. Death Cook died in a coma on 9 January 1995 at age 57 at the Royal Free Hospital in Hampstead, London, from a gastrointestinal haemorrhage, a complication resulting from years of heavy drinking. His body was cremated at Golders Green Crematorium, and his ashes were buried in an unmarked plot behind St John-at-Hampstead, not far from his home in Perrins Walk. Dudley Moore attended Cook's memorial service at St John-at-Hampstead on 1 May 1995. He and Martin Lewis presented a two-night memorial for Cook at The Improv in Los Angeles, on 15 and 16 November 1995, to mark what would have been Cook's 58th birthday. Legacy Cook is widely acknowledged as a strong influence on the many British comedians who followed him from the amateur dramatic clubs of British universities to the Edinburgh Festival Fringe, and then to radio and television. On his death, some critics chose to see Cook's life as tragic, insofar as the brilliance of his youth had not been sustained in his later years. However, Cook maintained he was "comfortable with limited ambition", not necessarily striving for the sustained international success that Dudley Moore achieved. He assessed happiness by his friendships and his enjoyment of life. Eric Idle said Cook had not wasted his talent, but rather that the newspapers had tried to waste him. In 1995, Play Wisty For Me – The Life of Peter Cook, an original play paying tribute to Cook, premiered. Several friends honoured him with a dedication in the closing credits of Fierce Creatures (1997), a comedy film written by John Cleese about a zoo in peril of being closed. It starred Cleese alongside Jamie Lee Curtis, Kevin Kline, and Michael Palin. 
The dedication displays photos and the lifespan dates of Cook and of naturalist and humorist Gerald Durrell. In 1999, the minor planet 20468 Petercook, in the main asteroid belt, was named after Cook. Channel 4 broadcast Not Only But Always, a television film dramatising the relationship between Cook and Moore, with Rhys Ifans portraying Cook. At the 2005 Edinburgh Festival Fringe, a play, Pete and Dud: Come Again written by Chris Bartlett and Nick Awde, examined the relationship from Moore's view. The play was transferred to London's West End at The Venue in 2006 and toured the UK the following year. During the West End run, Tom Goodman-Hill starred as Cook, with Kevin Bishop as Moore. A green plaque to honour Cook was unveiled by the Westminster City Council and the Heritage Foundation at the site of the Establishment Club, at 18 Greek Street, on 15 February 2009. A blue plaque was unveiled by the Torbay Civic Society on 17 November 2014 at Cook's place of birth, "Shearbridge", Middle Warberry Road, Torquay, with his widow Lin and other members of the family in attendance. A further blue plaque was commissioned and erected at the home of Torquay United, Plainmoor, Torquay, in 2015. Filmography Film Bachelor of Hearts (1958) – Pedestrian in Street (uncredited) Ten Thousand Talents (short film, 1960) – voice What's Going on Here (TV film, 1963) The Wrong Box (1966) – Morris Finsbury Alice in Wonderland (TV film, 1966) – Mad Hatter Bedazzled (1967) – George Spiggott / The Devil A Dandy in Aspic (1968) – Prentiss Monte Carlo or Bust! (released in the US as Those Daring Young Men in Their Jaunty Jalopies) (1969) – Maj. Digby Dawlish The Bed Sitting Room (1969) – Inspector The Rise and Rise of Michael Rimmer (1970) – Michael Rimmer Behind the Fridge (TV film, 1971) – Various Characters An Apple a Day (TV film, 1971) – Mr Elwood Sr. The Adventures of Barry McKenzie (1972) – Dominic Saturday Night at the Baths (1975) – Himself, in theatre audience (uncredited) Find the Lady (1976) – Lewenhak Eric Sykes Shows a Few of Our Favourite Things (TV film, 1977) – Stagehand The Hound of the Baskervilles (1978) – Sherlock Holmes Derek and Clive Get the Horn (1979) – Clive Peter Cook & Co. (TV Special, 1980) – Various Characters Yellowbeard (1983) – Lord Percy Lambourn Supergirl (1984) – Nigel Kenny Everett's Christmas Carol (TV movie, 1985) – Ghost of Christmas Yet To Come The Myth (1986) – Himself The Princess Bride (1987) – The Impressive Clergyman Whoops Apocalypse (1988) – Sir Mortimer Chris Without a Clue (1988) – Norman Greenhough Jake's Journey (TV movie, 1988) – King Getting It Right (1989) – Mr Adrian Great Balls of Fire! (1989) – First English Reporter The Craig Ferguson Story (TV film, 1991) – Fergus Ferguson Roger Mellie (1991) - Roger Mellie (voice) One Foot in the Algarve (1993 episode of One Foot in the Grave) – Martin Trout Black Beauty (1994) – Lord Wexmire (final film role) Peter Cook Talks Golf Balls (video, 1994) – played four characters: Alec Dunroonie / Dieter Liedbetter / Major Titherly Glibble / Bill Rossi Television Chronicle (1964) – presenter (one episode) A Series of Bird's (1967) – (1 episode) Not Only... But Also (1965–70) – Various Characters (22 episodes) Not Only But Also. 
Peter Cook and Dudley Moore in Australia (miniseries, 1971) Thirty-Minute Theatre (1972) – Peter Trilby (1 episode) Revolver (1978) (8 episodes) The Two of Us (1981–1982) – Robert Brentwood (20 episodes) The Black Adder (1983) – Richard III (first episode, "The Foretelling") Diplomatix (TV Short, 1985) – Narrator (voice) The Comic Strip Presents... (1988) – Mr Jolly (one episode) The Best of... What's Left of... Not Only... But Also (1990) – Pete / Himself / other characters (one episode) A Life in Pieces (TV Short, 1990) – Sir Arthur Streeb-Greebling (12 episodes) Roger Mellie: The Man on the Telly (1991) – Roger Mellie (voice) Gone to Seed (1992) – Wesley Willis (six episodes) Arena (1993) – himself (two episodes) Other works Amnesty International performances Pleasure at Her Majesty's (1976) The Mermaid Frolics (1977) The Secret Policeman's Ball (1979) The Secret Policeman's Private Parts (1981) - Intro narrator The Secret Policeman's Biggest Ball (1989) The Best of Amnesty: Featuring the Stars of Monty Python (1999) Discography UK chart singles: "The Ballad of Spotty Muldoon" (1965) "Goodbye-ee" (1965) both with Dudley Moore Albums: Bridge on the River Wye (1962) The Misty Mr. Wisty (Decca, 1965) Not Only Peter Cook... But Also Dudley Moore (Decca, 1965) Once Moore with Cook (with Dudley Moore) (Decca, 1966) Peter Cook and Dudley Moore Cordially Invite You to Go to Hell! (1967) Goodbye Again (with Dudley Moore) (Decca, 1968) Behind the Fridge (with Dudley Moore) (1972) Aus #35 The World of Pete & Dud (Decca, 1974) Derek and Clive (Live) (with Dudley Moore) (1976) Derek and Clive Come Again (with Dudley Moore) (1977) Derek and Clive Ad Nauseam'' (with Dudley Moore) (1978) References Further reading Richard Mills, (2010). Pop half-cocked: a history of "Revolver". In Inglis, Ian, (ed). Popular Music and Television in Britain. Ashgate, Farnham, pp. 149 - 160. External links The Establishment Lengthy 1988 KCRW radio interview in 3 parts "Bob Claster's Funny Stuff" including many excerpts. Mr Blint's Attic Tribute to Peter Cook, with texts and commentary Good Evening, a Peter Cook Fansite incl. Gallery The BBC Guide to Comedy: Not Only...But Also Missing-Episodes.com One Leg Too Few, script for one of Cook and Moore's most famous and oft-performed sketches. 1937 births 1995 deaths Alcohol-related deaths in England Alumni of Pembroke College, Cambridge Comedians from Devon Deaths from gastrointestinal hemorrhage English male comedians English male film actors English male television actors English satirists English television writers Grammy Award winners Male actors from Devon People educated at Radley College Writers from Torquay Private Eye contributors English male writers Decca Records artists British male television writers Special Tony Award recipients Actors from Torquay Best Entertainment Performance BAFTA Award (television) winners 20th-century English male actors 20th-century English screenwriters
23549
https://en.wikipedia.org/wiki/Psychedelic%20rock
Psychedelic rock
Psychedelic rock is a rock music genre that is inspired, influenced, or representative of psychedelic culture, which is centered on perception-altering hallucinogenic drugs. The music incorporated new electronic sound effects and recording techniques, extended instrumental solos, and improvisation. Many psychedelic groups differ in style, and the label is often applied spuriously. Originating in the mid-1960s among British and American musicians, the sound of psychedelic rock invokes three core effects of LSD: depersonalization, dechronicization (the bending of time), and dynamization (when fixed, ordinary objects dissolve into moving, dancing structures), all of which detach the user from everyday reality. Musically, the effects may be represented via novelty studio tricks, electronic or non-Western instrumentation, disjunctive song structures, and extended instrumental segments. Some of the earlier 1960s psychedelic rock musicians were based in folk, jazz, and the blues, while others showcased an explicit Indian classical influence called "raga rock". In the 1960s, there existed two main variants of the genre: the more whimsical, surrealist British psychedelia and the harder American West Coast "acid rock". While "acid rock" is sometimes deployed interchangeably with the term "psychedelic rock", it also refers more specifically to the heavier, harder, and more extreme ends of the genre. The peak years of psychedelic rock were between 1967 and 1969, with milestone events including the 1967 Summer of Love and the 1969 Woodstock Festival, becoming an international musical movement associated with a widespread counterculture before declining as changing attitudes, the loss of some key individuals, and a back-to-basics movement led surviving performers to move into new musical areas. The genre bridged the transition from early blues and folk-based rock to progressive rock and hard rock, and as a result contributed to the development of sub-genres such as heavy metal. Since the late 1970s it has been revived in various forms of neo-psychedelia. Definition As a musical style, psychedelic rock incorporated new electronic sound effects and recording effects, extended solos, and improvisation. Features mentioned in relation to the genre include: electric guitars, often used with feedback, wah-wah and fuzzbox effects units; certain studio effects (principally in British psychedelia), such as backwards tapes, panning, phasing, long delay loops, and extreme reverb; elements of Indian music and other Eastern music, including Middle Eastern modalities; non-Western instruments (especially in British psychedelia), specifically those originally used in Indian classical music, such as sitar, tambura and tabla; elements of free-form jazz; a strong keyboard presence, especially electronic organs, harpsichords, or the Mellotron (an early tape-driven sampler); extended instrumental segments, especially guitar solos, or jams; disjunctive song structures, occasional key and time signature changes, modal melodies and drones; droning quality in vocals; electronic instruments such as synthesizers and the theremin; lyrics that made direct or indirect reference to hallucinogenic drugs; surreal, whimsical, esoterically or literary-inspired lyrics with (especially in British psychedelia) references to childhood; Victorian-era antiquation (exclusive to British psychedelia), drawing on items such as music boxes, music hall nostalgia and circus sounds. 
The term "psychedelic" was coined in 1956 by psychiatrist Humphry Osmond in a letter to LSD exponent Aldous Huxley and used as an alternative descriptor for hallucinogenic drugs in the context of psychedelic psychotherapy. As the countercultural scene developed in San Francisco, the terms acid rock and psychedelic rock were used in 1966 to describe the new drug-influenced music and were being widely used by 1967. The two terms are often used interchangeably, but acid rock may be distinguished as a more extreme variation that was heavier, louder, relied on long jams, focused more directly on LSD, and made greater use of distortion.
Original psychedelic era
1960–65: Precursors and influences
Music critic Richie Unterberger says that attempts to "pin down" the first psychedelic record are "nearly as elusive as trying to name the first rock & roll record". Some of the "far-fetched claims" include the instrumental "Telstar" (produced by Joe Meek for the Tornados in 1962) and the Dave Clark Five's "massively reverb-laden" "Any Way You Want It" (1964). The first mention of LSD on a rock record was the Gamblers' 1960 surf instrumental "LSD 25". A 1962 single by the Ventures, "The 2000 Pound Bee", issued forth the buzz of a distorted, "fuzztone" guitar, and the quest into "the possibilities of heavy, transistorised distortion" and other effects, like improved reverb and echo, began in earnest on London's fertile rock 'n' roll scene. By 1964 fuzztone could be heard on singles by P.J. Proby, and the Beatles had employed feedback in "I Feel Fine", their sixth consecutive number 1 hit in the UK. According to AllMusic, the emergence of psychedelic rock in the mid-1960s resulted from British groups who made up the British Invasion of the US market and folk rock bands seeking to broaden "the sonic possibilities of their music". Writing in his 1969 book The Rock Revolution, Arnold Shaw said the genre in its American form represented generational escapism, which he identified as a development of youth culture's "protest against the sexual taboos, racism, violence, hypocrisy and materialism of adult life". American folk singer Bob Dylan's influence was central to the creation of the folk rock movement in 1965, and his lyrics remained a touchstone for the psychedelic songwriters of the late 1960s. Virtuoso sitarist Ravi Shankar had begun in 1956 a mission to bring Indian classical music to the West, inspiring jazz, classical and folk musicians. By the mid-1960s, his influence extended to a generation of young rock musicians who soon made raga rock part of the psychedelic rock aesthetic and one of the many intersecting cultural motifs of the era. In the British folk scene, blues, drugs, jazz and Eastern influences blended in the early 1960s work of Davy Graham, who adopted modal guitar tunings to transpose Indian ragas and Celtic reels. Graham was highly influential on Scottish folk virtuoso Bert Jansch and other pioneering guitarists across a spectrum of styles and genres in the mid-1960s. Jazz saxophonist and composer John Coltrane had a similar impact, as the exotic sounds on his albums My Favorite Things (1960) and A Love Supreme (1965), the latter influenced by the ragas of Shankar, were source material for guitar players and others looking to improvise or "jam". One of the first musical uses of the term "psychedelic" in the folk scene was by the New York-based folk group The Holy Modal Rounders on their version of Lead Belly's 'Hesitation Blues' in 1964.
Folk/avant-garde guitarist John Fahey recorded several songs in the early 1960s that experimented with unusual recording techniques, including backwards tapes, and with novel instrumental accompaniment including flute and sitar. His nineteen-minute "The Great San Bernardino Birthday Party" "anticipated elements of psychedelia with its nervy improvisations and odd guitar tunings". Similarly, folk guitarist Sandy Bull's early work "incorporated elements of folk, jazz, and Indian and Arabic-influenced dronish modes". His 1963 album Fantasias for Guitar and Banjo explores various styles and "could also be accurately described as one of the very first psychedelic records".
1965: Formative psychedelic scenes and sounds
Barry Miles, a leading figure in the 1960s UK underground, says that "Hippies didn't just pop up overnight" and that "1965 was the first year in which a discernible youth movement began to emerge [in the US]. Many of the key 'psychedelic' rock bands formed this year." On the US West Coast, underground chemist Augustus Owsley Stanley III and Ken Kesey (along with his followers known as the Merry Pranksters) helped thousands of people take uncontrolled trips at Kesey's Acid Tests and in the new psychedelic dance halls. In Britain, Michael Hollingshead opened the World Psychedelic Centre and Beat Generation poets Allen Ginsberg, Lawrence Ferlinghetti and Gregory Corso read at the Royal Albert Hall. Miles adds: "The readings acted as a catalyst for underground activity in London, as people suddenly realized just how many like-minded people there were around. This was also the year that London began to blossom into colour with the opening of the Granny Takes a Trip and Hung On You clothes shops." Thanks to media coverage, use of LSD became widespread. According to music critic Jim DeRogatis, writing in his book on psychedelic rock, Turn on Your Mind, the Beatles are seen as the "Acid Apostles of the New Age". Producer George Martin, who was initially known as a specialist in comedy and novelty records, responded to the Beatles' requests by providing a range of studio tricks that ensured the group played a leading role in the development of psychedelic effects. Anticipating their overtly psychedelic work, "Ticket to Ride" (April 1965) introduced a subtle, drug-inspired drone suggestive of India, played on rhythm guitar. Musicologist William Echard writes that the Beatles employed several techniques in the years up to 1965 that soon became elements of psychedelic music, an approach he describes as "cognate" and reflective of how they, like the Yardbirds, were early pioneers in psychedelia. As important aspects the group brought to the genre, Echard cites the Beatles' rhythmic originality and unpredictability; "true" tonal ambiguity; leadership in incorporating elements from Indian music and studio techniques such as vari-speed, tape loops and reverse tape sounds; and their embrace of the avant-garde. In Unterberger's opinion, the Byrds, emerging from the Los Angeles folk rock scene, and the Yardbirds, from England's blues scene, were more responsible than the Beatles for "sounding the psychedelic siren". Drug use and attempts at psychedelic music moved out of acoustic folk-based music towards rock soon after the Byrds, inspired by the Beatles' 1964 film A Hard Day's Night, adopted electric instruments to produce a chart-topping version of Dylan's "Mr. Tambourine Man" in the summer of 1965.
On the Yardbirds, Unterberger identifies lead guitarist Jeff Beck as having "laid the blueprint for psychedelic guitar", and says that their "ominous minor key melodies, hyperactive instrumental breaks (called rave-ups), unpredictable tempo changes, and use of Gregorian chants" helped to define the "manic eclecticism" typical of early psychedelic rock. The band's "Heart Full of Soul" (June 1965), which includes a distorted guitar riff that replicates the sound of a sitar, peaked at number 2 in the UK and number 9 in the US. In Echard's description, the song "carried the energy of a new scene" as the guitar-hero phenomenon emerged in rock, and it heralded the arrival of new Eastern sounds. The Kinks provided the first example of sustained Indian-style drone in rock when they used open-tuned guitars to mimic the tambura on "See My Friends" (July 1965), which became a top 10 hit in the UK. The Beatles' "Norwegian Wood" from the December 1965 album Rubber Soul marked the first released recording on which a member of a Western rock group played the sitar. The song sparked a craze for the sitar and other Indian instrumentation – a trend that fueled the growth of raga rock as the India exotic became part of the essence of psychedelic rock. Music historian George Case recognises Rubber Soul as the first of two Beatles albums that "marked the authentic beginning of the psychedelic era", while music critic Robert Christgau similarly wrote that "Psychedelia starts here". San Francisco historian Charles Perry recalled the album being "the soundtrack of the Haight-Ashbury, Berkeley and the whole circuit", as pre-hippie youths suspected that the songs were inspired by drugs. Although psychedelia was introduced in Los Angeles through the Byrds, according to Shaw, San Francisco emerged as the movement's capital on the West Coast. Several California-based folk acts followed the Byrds into folk rock, bringing their psychedelic influences with them, to produce the "San Francisco Sound". Music historian Simon Philo writes that although some commentators would state that the centre of influence had moved from London to California by 1967, it was British acts like the Beatles and the Rolling Stones that helped inspire and "nourish" the new American music in the mid-1960s, especially in the formative San Francisco scene. The music scene there developed in the city's Haight-Ashbury neighborhood in 1965 at basement shows organised by Chet Helms of the Family Dog, and at The Matrix nightclub, which Jefferson Airplane founder Marty Balin and investors opened that summer and where they began booking his and other local bands such as the Grateful Dead, the Steve Miller Band and Country Joe & the Fish. In the fall of 1965, Helms and San Francisco Mime Troupe manager Bill Graham organised larger-scale multi-media community events and benefits featuring the Airplane, the Diggers and Allen Ginsberg. By early 1966 Graham had secured booking at The Fillmore, and Helms at the Avalon Ballroom, where in-house psychedelic-themed light shows replicated the visual effects of the psychedelic experience. Graham became a major figure in the growth of psychedelic rock, attracting most of the major psychedelic rock bands of the day to The Fillmore. According to author Kevin McEneaney, the Grateful Dead "invented" acid rock in front of a crowd of concertgoers in San Jose, California on 4 December 1965, the date of the second Acid Test held by novelist Ken Kesey and the Merry Pranksters.
Their stage performance involved the use of strobe lights to reproduce LSD's "surrealistic fragmenting" or "vivid isolating of caught moments". The Acid Test experiments subsequently launched the entire psychedelic subculture.
1966: Growth and early popularity
Echard writes that in 1966, "the psychedelic implications" advanced by recent rock experiments "became fully explicit and much more widely distributed", and by the end of the year, "most of the key elements of psychedelic topicality had been at least broached." DeRogatis says the start of psychedelic (or acid) rock is "best listed at 1966". Music journalists Pete Prown and Harvey P. Newquist locate the "peak years" of psychedelic rock between 1966 and 1969. In 1966, media coverage of rock music changed considerably as the music came to be reevaluated as a new form of art in tandem with the growing psychedelic community. In February and March, two singles were released that later achieved recognition as the first psychedelic hits: the Yardbirds' "Shapes of Things" and the Byrds' "Eight Miles High". The former reached number 3 in the UK and number 11 in the US, and continued the Yardbirds' exploration of guitar effects, Eastern-sounding scales, and shifting rhythms. By overdubbing guitar parts, Beck layered multiple takes for his solo, which included extensive use of fuzz tone and harmonic feedback. The song's lyrics, which Unterberger describes as "stream-of-consciousness", have been interpreted as pro-environmental or anti-war. The Yardbirds became the first British band to have the term "psychedelic" applied to one of its songs. On "Eight Miles High", Roger McGuinn's 12-string Rickenbacker guitar provided a psychedelic interpretation of free jazz and Indian raga, channelling Coltrane and Shankar, respectively. The song's lyrics were widely taken to refer to drug use, although the Byrds denied it at the time. "Eight Miles High" peaked at number 14 in the US and reached the top 30 in the UK. Contributing to psychedelia's emergence into the pop mainstream was the release of the Beach Boys' Pet Sounds (May 1966) and the Beatles' Revolver (August 1966). Often considered one of the earliest albums in the canon of psychedelic rock, Pet Sounds contained many elements that would be incorporated into psychedelia, with its artful experiments, psychedelic lyrics based on emotional longings and self-doubts, elaborate sound effects and new sounds on both conventional and unconventional instruments. The album track "I Just Wasn't Made for These Times" contained the first use of theremin sounds on a rock record. Scholar Philip Auslander says that even though psychedelic music is not normally associated with the Beach Boys, the "odd directions" and experiments in Pet Sounds "put it all on the map. ... basically that sort of opened the door – not for groups to be formed or to start to make music, but certainly to become as visible as say Jefferson Airplane or somebody like that." DeRogatis views Revolver as another of "the first psychedelic rock masterpieces", along with Pet Sounds. The Beatles' May 1966 B-side "Rain", recorded during the Revolver sessions, was the first pop recording to contain reversed sounds. Together with further studio tricks such as varispeed, the song includes a droning melody that reflected the band's growing interest in non-Western musical form and lyrics conveying the division between an enlightened psychedelic outlook and conformism.
Philo cites "Rain" as "the birth of British psychedelic rock" and describes Revolver as "[the] most sustained deployment of Indian instruments, musical form and even religious philosophy" heard in popular music up to that time. Author Steve Turner recognises the Beatles' success in conveying an LSD-inspired worldview on Revolver, particularly with "Tomorrow Never Knows", as having "opened the doors to psychedelic rock (or acid rock)". In author Shawn Levy's description, it was "the first true drug album, not [just] a pop record with some druggy insinuations", while musicologists Russell Reising and Jim LeBlanc credit the Beatles with "set[ting] the stage for an important subgenre of psychedelic music, that of the messianic pronouncement". Echard highlights early records by the 13th Floor Elevators and Love among the key psychedelic releases of 1966, along with "Shapes of Things", "Eight Miles High", "Rain" and Revolver. Originating from Austin, Texas, the first of these new bands came to the genre via the garage scene before releasing their debut album, The Psychedelic Sounds of the 13th Floor Elevators, in October that year. It was one of the first rock albums to include the adjective in its title, although the LP was released on an independent label and was little noticed at the time. Two other bands also used the word in titles of LPs released in November 1966: The Blues Magoos' Psychedelic Lollipop, and the Deep's Psychedelic Moods. Having formed in late 1965 with the aim of spreading LSD consciousness, the Elevators commissioned business cards containing an image of the third eye and the caption "Psychedelic rock". Rolling Stone highlights the 13th Floor Elevators as arguably "the most important early progenitors of psychedelic garage rock". Donovan's July 1966 single "Sunshine Superman" became one of the first psychedelic pop/rock singles to top the Billboard charts in the US. Influenced by Aldous Huxley's The Doors of Perception, and with lyrics referencing LSD, it contributed to bringing psychedelia to the mainstream. The Beach Boys' October 1966 single "Good Vibrations" was another early pop song to incorporate psychedelic lyrics and sounds. The single's success prompted an unexpected revival in theremins and increased the awareness of analog synthesizers. As psychedelia gained prominence, Beach Boys-style harmonies would be ingrained into the newer psychedelic pop.
1967–69: Continued development
Peak era
In 1967, psychedelic rock received widespread media attention and a larger audience beyond local psychedelic communities. From 1967 to 1968, it was the prevailing sound of rock music, either in the more whimsical British variant, or the harder American West Coast acid rock. Music historian David Simonelli says the genre's commercial peak lasted "a brief year", with San Francisco and London recognised as the two key cultural centres. Compared with the American form, British psychedelic music was often more arty in its experimentation, and it tended to stick within pop song structures. Music journalist Mark Prendergast writes that it was only in US garage-band psychedelia that the often whimsical traits of UK psychedelic music were found.
He says that aside from the work of the Byrds, Love and the Doors, there were three categories of US psychedelia: the "acid jams" of the San Francisco bands, who favoured albums over singles; pop psychedelia typified by groups such as the Beach Boys and Buffalo Springfield; and the "wigged-out" music of bands following the example of the Beatles and the Yardbirds, such as the Electric Prunes, the Nazz, the Chocolate Watchband and the Seeds. The Doors' self-titled debut album (January 1967) is notable for possessing a darker sound and subject matter than many contemporary psychedelic albums, a quality that would prove very influential on the later Gothic rock movement. Aided by the No. 1 single, "Light My Fire", the album became very successful, reaching number 2 on the Billboard chart. In February 1967, the Beatles released the double A-side single "Strawberry Fields Forever" / "Penny Lane", which Ian MacDonald says launched both the "English pop-pastoral mood" typified by bands such as Pink Floyd, Family, Traffic and Fairport Convention, and English psychedelia's LSD-inspired preoccupation with "nostalgia for the innocent vision of a child". The Mellotron parts on "Strawberry Fields Forever" remain the most celebrated example of the instrument on a pop or rock recording. According to Simonelli, the two songs heralded the Beatles' brand of Romanticism as a central tenet of psychedelic rock. Jefferson Airplane's Surrealistic Pillow (February 1967) was one of the first albums to come out of San Francisco that sold well enough to bring national attention to the city's music scene. The LP tracks "White Rabbit" and "Somebody to Love" subsequently became top 10 hits in the US. The Hollies' psychedelic B-side "All the World Is Love" (February 1967) was released as the flipside to the hit single "On a Carousel". Pink Floyd's "Arnold Layne" (March 1967) and "See Emily Play" (June 1967), both written by Syd Barrett, helped set the pattern for pop-psychedelia in the UK. There, "underground" venues like the UFO Club, Middle Earth Club, The Roundhouse, the Country Club and the Art Lab drew capacity audiences with psychedelic rock and ground-breaking liquid light shows. A major figure in the development of British psychedelia was the American promoter and record producer Joe Boyd, who moved to London in 1966. He co-founded venues including the UFO Club, produced Pink Floyd's "Arnold Layne", and went on to manage folk and folk rock acts including Nick Drake, the Incredible String Band and Fairport Convention. Psychedelic rock's popularity accelerated following the release of the Beatles' album Sgt. Pepper's Lonely Hearts Club Band (May 1967) and the staging of the Monterey Pop Festival in June. Sgt. Pepper was the first commercially successful work that critics recognised as a landmark of psychedelia, and the Beatles' mass appeal meant that the record was played virtually everywhere. The album was highly influential on bands in the US psychedelic rock scene and its elevation of the LP format benefited the San Francisco bands. Among many changes brought about by its success, artists sought to imitate its psychedelic effects and devoted more time to creating their albums; the counterculture was scrutinised by musicians; and acts adopted its non-conformist sentiments. The 1967 Summer of Love saw a huge number of young people from across America and the world travel to Haight-Ashbury, boosting the area's population from 15,000 to around 100,000.
It was prefaced by the Human Be-In event in January and reached its peak at the Monterey Pop Festival in June, the latter helping to make major American stars of Janis Joplin, lead singer of Big Brother and the Holding Company, Jimi Hendrix, and the Who. Several established British acts joined the psychedelic revolution, including Eric Burdon (previously of the Animals) and the Who, whose The Who Sell Out (December 1967) included the psychedelic-influenced "I Can See for Miles" and "Armenia City in the Sky". Other major British Invasion acts who absorbed psychedelia in 1967 include the Hollies, with the album Butterfly, and the Rolling Stones, with the album Their Satanic Majesties Request. The Incredible String Band's The 5000 Spirits or the Layers of the Onion (July 1967) developed their folk music into a pastoral form of psychedelia. Many famous established recording artists from the early rock era also fell under psychedelia's influence and recorded psychedelic-inspired tracks, including Del Shannon's "Color Flashing Hair", Bobby Vee's "I May Be Gone", The Four Seasons' "Watch the Flowers Grow", Roy Orbison's "Southbound Jericho Parkway" and The Everly Brothers' "Mary Jane". According to author Edward Macan, there ultimately existed three distinct branches of British psychedelic music. The first, dominated by Cream, the Yardbirds and Hendrix, was founded on a heavy, electric adaptation of the blues played by the Rolling Stones, adding elements such as the Who's power chord style and feedback. The second, a considerably more complex form, drew strongly from jazz sources and was typified by Traffic, Colosseum, If, and Canterbury scene bands such as Soft Machine and Caravan. The third branch, represented by the Moody Blues, Pink Floyd, Procol Harum and the Nice, was influenced by the later music of the Beatles. Several of the post-Sgt. Pepper English psychedelic groups developed the Beatles' classical influences further than either the Beatles or contemporaneous West Coast psychedelic bands. Among such groups, the Pretty Things abandoned their R&B roots to create S.F. Sorrow (December 1968), the first example of a psychedelic rock opera.
International variants
The US and UK were the major centres of psychedelic music, but in the late 1960s scenes developed across the world, including continental Europe, Australasia, Asia and South and Central America. In the later 1960s psychedelic scenes developed in a large number of countries in continental Europe, including the Netherlands with bands like The Outsiders, Denmark, where it was pioneered by Steppeulvene, Yugoslavia, with bands like Kameleoni, Dogovor iz 1804., Pop Mašina and Igra Staklenih Perli, and Germany, where musicians fused psychedelic music with the electronic avant-garde. 1968 saw the first major German rock festival, held in Essen, and the foundation of the Zodiak Free Arts Lab in Berlin by Hans-Joachim Roedelius and Conrad Schnitzler, which helped bands like Tangerine Dream and Amon Düül achieve cult status. A thriving psychedelic music scene in Cambodia, influenced by psychedelic rock and soul broadcast by US forces radio in Vietnam, was pioneered by artists such as Sinn Sisamouth and Ros Serey Sothea. In South Korea, Shin Jung-Hyeon, often considered the godfather of Korean rock, played psychedelic-influenced music for the American soldiers stationed in the country. Following Shin Jung-Hyeon, the band San Ul Lim (Mountain Echo) often combined psychedelic rock with a more folk sound.
In Turkey, Anatolian rock artist Erkin Koray blended classic Turkish music and Middle Eastern themes into his psychedelic-driven rock, helping, alongside artists such as Cem Karaca, Mogollar and Barış Manço, to found the Turkish rock scene. In Brazil, the Tropicalia movement merged Brazilian and African rhythms with psychedelic rock. Musicians who were part of the movement include Caetano Veloso, Gilberto Gil, Os Mutantes, Gal Costa, Tom Zé, and the poet/lyricist Torquato Neto, all of whom participated in the 1968 album Tropicália: ou Panis et Circencis, which served as a musical manifesto.
1969–71: Decline
By the end of the 1960s, psychedelic rock was in retreat. Psychedelic trends climaxed in the 1969 Woodstock Festival, which saw performances by most of the major psychedelic acts, including Jimi Hendrix, Jefferson Airplane and the Grateful Dead. LSD had been made illegal in the United Kingdom in September 1966 and in California in October; by 1967, it was outlawed throughout the United States. In 1969, the murders of Sharon Tate and Leno and Rosemary LaBianca by Charles Manson and his cult of followers, who claimed to have been inspired by Beatles songs such as "Helter Skelter", were seen as contributing to an anti-hippie backlash. At the end of the same year, the Altamont Free Concert in California, headlined by the Rolling Stones, became notorious for the fatal stabbing of black teenager Meredith Hunter by Hells Angels security guards. Brian Wilson of the Beach Boys, Brian Jones of the Rolling Stones, Peter Green and Danny Kirwan of Fleetwood Mac and Syd Barrett of Pink Floyd were early "acid casualties", helping to shift the focus of the respective bands of which they had been leading figures. Some groups, such as the Jimi Hendrix Experience and Cream, broke up. Hendrix died in London in September 1970, shortly after recording Band of Gypsys (1970); Janis Joplin died of a heroin overdose in October 1970; and they were closely followed by Jim Morrison of the Doors, who died in Paris in July 1971. By this point, many surviving acts had moved away from psychedelia into either more back-to-basics "roots rock", traditional-based, pastoral or whimsical folk, the wider experimentation of progressive rock, or riff-based heavy rock.
Revivals and successors
Psychedelic soul
Following the lead of Hendrix in rock, psychedelia influenced African American musicians, particularly the stars of the Motown label. This psychedelic soul was influenced by the civil rights movement, giving it a darker and more political edge than much psychedelic rock. Building on the funk sound of James Brown, it was pioneered from about 1968 by Sly and the Family Stone and The Temptations. Acts that followed them into this territory included Edwin Starr and the Undisputed Truth. George Clinton's interdependent Funkadelic and Parliament ensembles and their various spin-offs took the genre to its most extreme lengths, making funk almost a religion in the 1970s, producing over forty singles, including three in the US top ten, and three platinum albums. While psychedelic rock wavered at the end of the 1960s, psychedelic soul continued into the 1970s, peaking in popularity in the early years of the decade, and only disappearing in the late 1970s as tastes changed.
Songwriter Norman Whitfield wrote psychedelic soul songs for The Temptations and Marvin Gaye.
Prog, heavy metal, and krautrock
Many of the British musicians and bands that had embraced psychedelia went on to create progressive rock in the 1970s, including Pink Floyd, Soft Machine and members of Yes. The Moody Blues album In Search of the Lost Chord (1968), which is steeped in psychedelia, including prominent use of Indian instruments, is noted as an early predecessor to and influence on the emerging progressive movement. King Crimson's album In the Court of the Crimson King (1969) has been seen as an important link between psychedelia and progressive rock. While bands such as Hawkwind maintained an explicitly psychedelic course into the 1970s, most dropped the psychedelic elements in favour of wider experimentation. The incorporation of jazz into the music of bands like Soft Machine and Can also contributed to the development of the jazz rock of bands like Colosseum. As they moved away from their psychedelic roots and placed increasing emphasis on electronic experimentation, German bands like Kraftwerk, Tangerine Dream, Can, Neu! and Faust developed a distinctive brand of electronic rock, known as kosmische musik, or in the British press as "Kraut rock". The adoption of electronic synthesisers, pioneered by Popol Vuh from 1970, together with the work of figures like Brian Eno (for a time the keyboard player with Roxy Music), would be a major influence on subsequent electronic rock. Psychedelic rock, with its distorted guitar sound, extended solos and adventurous compositions, has been seen as an important bridge between blues-oriented rock and later heavy metal. American bands whose loud, repetitive psychedelic rock emerged as early heavy metal included the Amboy Dukes and Steppenwolf. From England, two former guitarists with the Yardbirds, Jeff Beck and Jimmy Page, moved on to form key acts in the genre, The Jeff Beck Group and Led Zeppelin respectively. Other major pioneers of the genre had begun as blues-based psychedelic bands, including Black Sabbath, Deep Purple, Judas Priest and UFO. Psychedelic music also contributed to the origins of glam rock, with Marc Bolan changing his psychedelic folk duo into rock band T. Rex and becoming the first glam rock star from 1970. From 1971 David Bowie moved on from his early psychedelic work to develop his Ziggy Stardust persona, incorporating elements of professional make up, mime and performance into his act. The jam band movement, which began in the late 1980s, was influenced by the Grateful Dead's improvisational and psychedelic musical style. The Vermont band Phish developed a sizable and devoted fan following during the 1990s, and were described as "heirs" to the Grateful Dead after the death of Jerry Garcia in 1995. Emerging in the 1990s, stoner rock combined elements of psychedelic rock and doom metal. Typically using a slow-to-mid tempo and featuring low-tuned guitars in a bass-heavy sound, with melodic vocals, and 'retro' production, it was pioneered by the Californian bands Kyuss and Sleep. Modern festivals focusing on psychedelic music include Austin Psych Fest in Texas, founded in 2008, Liverpool Psych Fest, and Desert Daze in Southern California.
Neo-psychedelia
There were occasional mainstream acts that dabbled in neo-psychedelia, a style of music which emerged in late 1970s post-punk circles.
Although it has mainly been an influence on alternative and indie rock bands, neo-psychedelia sometimes updated the approach of 1960s psychedelic rock. Neo-psychedelia may include forays into psychedelic pop, jangly guitar rock, heavily distorted free-form jams, or recording experiments. Some of the scene's bands, including the Soft Boys, the Teardrop Explodes, Wah! and Echo & the Bunnymen, became major figures of neo-psychedelia. In the US in the early 1980s it was joined by the Paisley Underground movement, based in Los Angeles and fronted by acts such as Dream Syndicate, the Bangles and Rain Parade. In the late '80s in the UK, the genre of Madchester emerged in the Manchester area, in which artists merged alternative rock with acid house and dance culture as well as other sources, including psychedelic music and 1960s pop. The label was popularised by the British music press in the early 1990s. Echard describes it as part of a "thread of 80s psychedelic rock" and lists the Stone Roses, Happy Mondays and Inspiral Carpets as its main bands. The rave-influenced scene is widely seen as heavily influenced by drugs, especially ecstasy (MDMA), and it is seen by Echard as central to a wider phenomenon of what he calls a "rock rave crossover" in the late '80s and early '90s UK indie scene, which also included the Screamadelica album by the Scottish band Primal Scream. In the 1990s, Elephant 6 collective bands such as The Olivia Tremor Control and The Apples in Stereo mixed the genre with lo-fi influences. Later, according to Treblezine's Jeff Terich: "Primal Scream made [neo-psychedelia] dancefloor ready. The Flaming Lips and Spiritualized took it to orchestral realms. And Animal Collective—well, they kinda did their own thing."
See also
List of electric blues musicians
List of psychedelic rock artists
23550
https://en.wikipedia.org/wiki/Philips
Philips
Koninklijke Philips N.V., commonly shortened to Philips, is a Dutch multinational conglomerate corporation that was founded in Eindhoven in 1891. Since 1997, its world headquarters have been situated in Amsterdam, though the Benelux headquarters is still in Eindhoven. Philips was once one of the largest consumer electronics companies in the world, but later focused on health technology, having divested its other divisions. The company was founded in 1891 by Gerard Philips and his father Frederik, with their first products being light bulbs. Philips employs around 80,000 people across 100 countries. The company gained its royal honorary title (hence the Koninklijke) in 1998 and dropped the "Electronics" in its name in 2013, due to its refocusing on healthcare technology. Philips is organized into three main divisions: Personal Health (formerly Philips Consumer Electronics and Philips Domestic Appliances and Personal Care), Connected Care, and Diagnosis & Treatment (formerly Philips Medical Systems). The lighting division was spun off as a separate company, Signify N.V. The company started making electric shavers in 1939 under the Philishave and Norelco brands, and post-war it developed the Compact Cassette, an audiotape format, and co-developed the compact disc format with Sony, as well as numerous other technologies. Philips was the largest manufacturer of lighting in the world as measured by revenue. Philips has a primary listing on the Euronext Amsterdam stock exchange and is a component of the Euro Stoxx 50 stock market index. It has a secondary listing on the New York Stock Exchange. Acquisitions included Signetics and Magnavox. It also founded a multidisciplinary sports club called PSV Eindhoven in 1913.
History
The Philips Company was founded in 1891 by Dutch entrepreneur Gerard Philips and his father Frederik Philips. Frederik, a banker based in Zaltbommel, financed the purchase and setup of an empty factory building in Eindhoven, where the company started the production of carbon-filament lamps and other electro-technical products in 1892. This first factory has since been adapted and is used as a museum. In 1895, after a difficult first few years and near-bankruptcy, the Philips family brought in Anton, Gerard's younger brother by sixteen years. Though he had earned a degree in engineering, Anton started work as a sales representative; soon, however, he began to contribute many important business ideas. With Anton's arrival, the family business began to expand rapidly, resulting in the founding of Philips Metaalgloeilampfabriek N.V. (Philips Metal Filament Lamp Factory Ltd.) in Eindhoven in 1908, followed in 1912 by the foundation of Philips Gloeilampenfabrieken N.V. (Philips Lightbulb Factories Ltd.). After Gerard and Anton Philips changed their family business by founding the Philips corporation, they laid the foundations for the later multinational. In the 1920s, the company started to manufacture other products, such as vacuum tubes. For this purpose, the Van Arkel–De Boer process was invented. In 1924, Philips joined with German lamp trust Osram to form the Phoebus cartel.
Radio
On 11 March 1927, Philips went on the air, inaugurating the shortwave radio station PCJJ (later PCJ), which was joined in 1929 by a sister station, PHOHI (Philips Omroep Holland-Indië, later PHI).
PHOHI broadcast in Dutch to the Dutch East Indies (now Indonesia), and later PHI broadcast in English and other languages to the Eastern hemisphere, while PCJJ broadcast in English, Spanish and German to the rest of the world. International programming commenced on Sundays in 1928, with Eddie Startz hosting the Happy Station show, which became the world's longest-running shortwave program. Broadcasts from the Netherlands were interrupted by the German invasion in May 1940. The Germans commandeered the transmitters in Huizen to use for pro-Nazi broadcasts, some originating from Germany, others concerts from Dutch broadcasters under German control. In the early 1930s, Philips introduced the "Chapel", a radio with a built-in loudspeaker. Philips Radio was absorbed shortly after liberation when its two shortwave stations were nationalised in 1947 and renamed Radio Netherlands Worldwide, the Dutch International Service. Some PCJ programs, such as Happy Station, continued on the new station.
Stirling engine
Philips was instrumental in the revival of the Stirling engine when, in the early 1930s, the management decided that offering a low-power portable generator would assist in expanding sales of its radios into parts of the world where mains electricity was unavailable and the supply of batteries uncertain. Engineers at the company's research lab carried out a systematic comparison of various power sources and determined that the almost forgotten Stirling engine would be most suitable, citing its quiet operation (both audibly and in terms of radio interference) and ability to run on a variety of heat sources (common lamp oil – "cheap and available everywhere" – was favoured). They were also aware that, unlike steam and internal combustion engines, virtually no serious development work had been carried out on the Stirling engine for many years and asserted that modern materials and know-how should enable great improvements. Encouraged by their first experimental engine, which produced 16 W of shaft power, the engineers produced various development models in a program which continued throughout World War II. By the late 1940s, the 'Type 10' was ready to be handed over to Philips' subsidiary Johan de Witt in Dordrecht to be produced and incorporated into a generator set as originally planned. The result, rated at 180/200 W electrical output, was designated MP1002CA (known as the "Bungalow set"). Production of an initial batch of 250 began in 1951, but it became clear that the sets could not be made at a competitive price; besides, the advent of transistor radios, with their much lower power requirements, meant that the original rationale for the set was disappearing. Approximately 150 of these sets were eventually produced. In parallel with the generator set, Philips developed experimental Stirling engines for a wide variety of applications and continued to work in the field until the late 1970s, though the only commercial success was the 'reversed Stirling engine' cryocooler. However, they filed a large number of patents and amassed a wealth of information, which they later licensed to other companies.
Shavers
The first Philips shaver was introduced in 1939, and was simply called Philishave. In the US, it was called Norelco. The Philishave has remained part of the Philips product line-up to the present day.
World War II
On 9 May 1940, the Philips directors learned that the German invasion of the Netherlands was to take place the following day.
Having prepared for this, Anton Philips and his son-in-law Frans Otten, as well as other Philips family members, fled to the United States, taking a large amount of the company capital with them. Operating from the US as the North American Philips Company, they managed to run the company throughout the war. At the same time, the company was moved (on paper) to the Netherlands Antilles to keep it out of German hands. On 6 December 1942, the British No. 2 Group RAF undertook Operation Oyster, which heavily damaged the Philips Radio factory in Eindhoven with few casualties among the Dutch workers and civilians. The Philips works in Eindhoven was bombed again by the RAF on 30 March 1943. Frits Philips, the son of Anton, was the only Philips family member to stay in the Netherlands. He saved the lives of 382 Jews by convincing the Nazis that they were indispensable for the production process at Philips. In 1943, he was held at the internment camp for political prisoners at Vught for several months because a strike at his factory reduced production. For his actions in saving the hundreds of Jews, he was recognized by Yad Vashem in 1995 as a "Righteous Among the Nations".
1945–1999
After the war, the company was moved back to the Netherlands, with its headquarters in Eindhoven. In 1949, the company began selling television sets. In 1950, it formed Philips Records, which eventually formed part of PolyGram in 1962. In 1954, Cor Dillen (director, and later CFO and CEO) put Philips on the map in South America, introducing colour television to most countries of the Americas, such as Brazil, although Bolivia, Chile, Paraguay, Peru and Uruguay continued to broadcast in black and white until the early 1980s, when Dillen was named the company's CEO. Philips introduced the Compact Cassette audio tape format in 1963, and it was wildly successful. Cassettes were initially used in dictation machines for office typists and professional journalists. As their sound quality improved, cassettes would also be used to record sound and became the second mass medium, alongside vinyl records, used to sell recorded music. Philips introduced the first combination portable radio and cassette recorder, which was marketed as the "radio recorder", and is now better known as the boom box. Later, the cassette was used in telephone answering machines, including a special form of cassette where the tape was wound on an endless loop. The C-cassette was used as the first mass storage device for early personal computers in the 1970s and 1980s. Philips reduced the cassette size for professional needs with the Mini-Cassette, although it would not be as successful as the Olympus Microcassette. This became the predominant dictation medium up to the advent of fully digital dictation machines. Philips continued with computers through the early 1990s (see separate article: Philips Computers). In 1972, Philips launched the world's first home video cassette recorder, the N1500, in the UK. Its relatively bulky video cassettes could record 30 minutes or 45 minutes. Later one-hour tapes were also offered. As competition arrived from Sony's Betamax and the VHS group of manufacturers, Philips introduced the N1700 system, which allowed double-length recording. For the first time, a 2-hour movie could fit onto one video cassette.
In 1977, the company unveiled a special promotional film for this system in the UK, featuring comedy writer and presenter Denis Norden. The concept was quickly copied by the Japanese makers, whose tapes were significantly cheaper. Philips made one last attempt at a new standard for video recorders with the Video 2000 system, with tapes that could be used on both sides and had 8 hours of total recording time. As Philips only sold its systems on the PAL standard and in Europe, and the Japanese makers sold globally, the scale advantages of the Japanese proved insurmountable and Philips withdrew the V2000 system and joined the VHS coalition. Philips had developed a LaserDisc early on for selling movies, but delayed its commercial launch for fear of cannibalizing its video recorder sales. Later, Philips joined with MCA to launch the first commercial LaserDisc standard and players. In 1982, Philips teamed with Sony to launch the compact disc; this format evolved into the CD-R, CD-RW, DVD and later Blu-ray, which Philips launched with Sony in 1997 and 2006 respectively. In 1984, the Dutch Philips Group acquired nearly a one-third share in the German company Grundig and took over its management. In 1984, Philips split off its activities in the field of photolithographic integrated circuit production equipment, the so-called wafer steppers, into a joint venture with ASM International, located in Veldhoven under the name ASML. Over the years, this new company has evolved into the world's leading manufacturer of chip production machines at the expense of competitors like Nikon and Canon. Philips partnered with Sony again later to develop a new "interactive" disc format called CD-i, described by them as a "new way of interacting with a television set". Philips created the majority of CD-i compatible players. After low sales, Philips repositioned the format as a video game console, but it was soon discontinued after being heavily criticized by the gaming community. In the 1980s, Philips's profit margin dropped below 1 percent, and in 1990 the company lost more than US$2 billion (the biggest corporate loss in Dutch history). Troubles for the company continued into the 1990s as its status as a leading electronics company was swiftly lost. In 1985, Philips was the largest founding investor in TSMC, which was established as a joint venture between Philips, the Taiwan government and other private investors. In 1990, the newly appointed CEO, Jan Timmer, decided to sell off all businesses that dealt with computers. This meant the end of Philips Data Systems as well as other computer activities. The year after, those businesses were acquired by Digital Equipment Corporation. In 1991, the company's name was changed from N.V. Philips Gloeilampenfabrieken to Philips Electronics N.V. At the same time, North American Philips was formally dissolved, and a new corporate division was formed in the US with the name Philips Electronics North America Corp. In 1997, the company officers decided to move the headquarters from Eindhoven to Amsterdam along with the corporate name change to Koninklijke Philips Electronics N.V., the latter of which was finalized on 16 March 1998. In 1997, Philips introduced at CES and CeBIT the first large (42-inch) commercially available flat-panel TV, using Fujitsu plasma displays. In 1998, looking to spur innovation, Philips created an Emerging Businesses group for its Semiconductors unit, based in Silicon Valley.
The group was designed to be an incubator where promising technologies and products could be developed.
2000s
The move of the headquarters to Amsterdam was completed in 2001. Initially, the company was housed in the Rembrandt Tower. In 2002, it moved again, this time to the Breitner Tower. Philips Lighting, Philips Research, Philips Semiconductors (spun off as NXP in September 2006) and Philips Design are still based in Eindhoven. Philips Healthcare is headquartered in both Best, Netherlands (near Eindhoven) and Andover, Massachusetts, United States (near Boston). In 2000, Philips bought Optiva Corporation, the maker of Sonicare electric toothbrushes. The company was renamed Philips Oral Healthcare and made a subsidiary of Philips DAP. In 2001, Philips acquired Agilent Technologies' Healthcare Solutions Group (HSG) for €2 billion. Philips created a computer monitors joint venture with LG called LG.Philips Displays in 2001. In 2001, after the unit's Emerging Businesses group had grown to nearly $1 billion in revenue, Scott A. McGregor was named the new president and CEO of Philips Semiconductors. McGregor's appointment completed the company's shift to having dedicated CEOs for all five of the company's product divisions, which would in turn leave the Board of Management to concentrate on issues confronting the Philips Group as a whole. In February 2001, Philips sold its remaining interest in battery manufacturing to its then partner Matsushita (which itself became Panasonic in 2008). In 2004, Philips abandoned the slogan "Let's make things better" in favour of a new one: "Sense and Simplicity". In December 2005, Philips announced its intention to sell or demerge its semiconductor division. On 1 September 2006, it was announced in Berlin that the name of the new company formed by the division would be NXP Semiconductors. On 2 August 2006, Philips completed an agreement to sell a controlling 80.1% stake in NXP Semiconductors to a consortium of private equity investors consisting of Kohlberg Kravis Roberts & Co. (KKR), Silver Lake Partners and AlpInvest Partners. On 21 August 2006, Bain Capital and Apax Partners announced that they had signed definitive commitments to join the acquiring consortium, a process which was completed on 1 October 2006. In 2006, Philips bought out the company Lifeline Systems, headquartered in Framingham, Massachusetts, in a deal valued at $750 million, its biggest move yet to expand its consumer-health business. In August 2007, Philips acquired the company Ximis, Inc., headquartered in El Paso, Texas, for their Medical Informatics Division. In October 2007, it purchased a Moore Microprocessor Patent (MPP) Portfolio license from The TPL Group. On 21 December 2007, Philips and Respironics, Inc. announced a definitive agreement pursuant to which Philips acquired all of the outstanding shares of Respironics for US$66 per share, or a total purchase price of approximately €3.6 billion (US$5.1 billion) in cash. On 21 February 2008, Philips completed the acquisition of VISICU in Baltimore, Maryland, through the merger of its indirect wholly-owned subsidiary into VISICU. As a result of that merger, VISICU became an indirect wholly-owned subsidiary of Philips. VISICU was the creator of the eICU concept, the use of telemedicine from a centralized facility to monitor and care for ICU patients. The Philips physics laboratory was scaled down in the early 21st century, as the company ceased trying to be innovative in consumer electronics through fundamental research.
2010s
In 2010, Philips introduced the Airfryer brand of convection oven at the IFA Berlin consumer electronics fair. Philips announced the sale of its Assembléon subsidiary, which made pick-and-place machines for the electronics industry. Philips made several acquisitions during 2011, announcing on 5 January 2011 that it had acquired Optimum Lighting, a manufacturer of LED-based luminaires. In January 2011, Philips agreed to acquire the assets of Preethi, a leading India-based kitchen appliances company. On 27 June 2011, Philips acquired Sectra Mamea AB, the mammography division of Sectra AB. After net profit slumped 85 percent in Q3 2011, Philips announced a cut of 4,500 jobs as part of an €800 million ($1.1 billion) cost-cutting scheme to boost profits and meet its financial targets. In 2011, the company posted a loss of €1.3 billion, but it returned to a net profit in Q1 and Q2 2012; management nevertheless raised the cost-cutting target from €800 million to €1.1 billion and planned to cut another 2,200 jobs by the end of 2014. In March 2012, Philips announced its intention to sell or demerge its television manufacturing operations to TPV Technology. Following two decades of decline, Philips went through a major restructuring, shifting its focus from electronics to healthcare, particularly from 2011, when a new CEO, Frans van Houten, was appointed. The new health and medical strategy helped Philips to thrive again in the 2010s. On 5 December 2012, the antitrust regulators of the European Union fined Philips and several other major companies for fixing prices of TV cathode-ray tubes in two cartels lasting nearly a decade. On 29 January 2013, it was announced that Philips had agreed to sell its audio and video operations to the Japan-based Funai Electric for €150 million, with the audio business planned to transfer to Funai in the latter half of 2013, and the video business in 2017. As part of the transaction, Funai was to pay a regular licensing fee to Philips for the use of the Philips brand. The purchase agreement was terminated by Philips in October 2013 because of a breach of contract, and the consumer electronics operations remained under Philips. Philips said it would seek damages for breach of contract in the US$200-million sale. In April 2016, the International Court of Arbitration ruled in favour of Philips, awarding compensation of €135 million in the process. In April 2013, Philips announced a collaboration with Paradox Engineering for the realization and implementation of a "pilot project" on network-connected street-lighting management solutions. This project was endorsed by the San Francisco Public Utilities Commission (SFPUC). In 2013, Philips removed the word "Electronics" from its name, becoming Royal Philips N.V. On 13 November 2013, Philips unveiled its new brand line "Innovation and You" and a new design of its shield mark. The new brand positioning is cited by Philips to signify the company's evolution and emphasize that innovation is only meaningful if it is based on an understanding of people's needs and desires. On 28 April 2014, Philips agreed to sell its Woox Innovations subsidiary (consumer electronics) to Gibson Brands for US$135 million. On 23 September 2014, Philips announced a plan to split the company into two, separating the lighting business from the healthcare and consumer lifestyle divisions. As a step toward the split, in March 2015 it agreed to sell a majority stake in its combined LED components and automotive lighting operations to an investment group for $3.3 billion.
In February 2015, Philips acquired Volcano Corporation to strengthen its position in non-invasive surgery and imaging. In June 2016, Philips spun off its lighting division to focus on the healthcare division. In June 2017, Philips announced it would acquire US-based Spectranetics Corp, a manufacturer of devices to treat heart disease, for €1.9 billion (£1.68 billion), expanding its image-guided therapy business. In May 2016, Philips' lighting division went through a spin-off process and became an independent public company named Philips Lighting N.V. In 2017, Philips launched Philips Ventures, with a health technology venture fund as its main focus. Philips Ventures invested in companies including Mytonomy (2017) and DEARhealth (2019). On 18 July 2017, Philips announced its acquisition of TomTec Imaging Systems GmbH. In 2018, the independent Philips Lighting N.V. was renamed Signify N.V. However, it continues to produce and market Philips-branded products such as Philips Hue color-changing LED light bulbs.
2020s
In 2021, Philips Domestic Appliances was purchased by Hillhouse Capital for $4.4 billion. In 2022, Philips announced that Frans van Houten, who had served as CEO for 12 years, would be stepping down, after a key product recall cut the company's market value by more than half over the previous year. He was to be replaced by Philips's EVP and Chief Business Leader of Connected Care, Roy Jakobs, effective 15 October 2022. In 2023, the company announced that it would be cutting 6,000 jobs worldwide over the next two years, after reporting €1.6 billion in losses during the 2022 financial year. The cuts came in addition to a 4,000-staff reduction announced in October 2022. In August 2023, Exor N.V., the holding company owned by the Agnelli family, took a 15% stake in Philips. The transaction was worth roughly €2.6 billion.
Corporate affairs
CEOs
Past and present CEOs:
1891–1922: Gerard Philips
1922–1939: Anton Philips
1939–1961: Frans Otten
1961–1971: Frits Philips
1971–1977: 
1977–1981: Nico Rodenburg
1981–1982: Cor Dillen
1982–1986: Wisse Dekker
1986–1990: Cor van der Klugt
1990–1996: Jan Timmer
1996–2001: Cor Boonstra
2001–2011: Gerard Kleisterlee
2011–2022: Frans van Houten
2022–present: Roy Jakobs
CEOs lighting:
2003–2008: Theo van Deursen
2012–present: Eric Rondolat
CFOs
Past and present CFOs (chief financial officers):
1960–1968: Cor Dillen
–1997: Dudley Eustace
1997–2005: Jan Hommen
2008–2015: Ron Wirahadiraksa
2015–present: Abhijit Bhattacharya
Executive Committee
CEO: Roy Jakobs
CFO: Abhijit Bhattacharya
COO: Willem Appelo
Chief ESG & Legal Officer: Marnix van Ginneken
Chief Patient Safety and Quality Officer: Steve C da Baca
Chief Business Leader (Connected Care): Roy Jakobs
Chief Business Leader (Personal Health): Deeptha Khanna
Chief Business Leader (Image Guided Therapy): Bert van Meurs
Chief Business Leader (Precision Diagnosis): Bert van Meurs (ad interim)
CEO Philips Domestic Appliances: Henk Siebren de Jong
Chief of International Markets: Edwin Paalvast
Chief Medical, Innovation & Strategy Officer: Shez Partovi
Chief Market Leader (Greater China): Andy Ho
Chief Market Leader (North America): Jeff DiLullo
Chief Human Resources Officer: Daniela Seabrook
Acquisitions
Companies acquired by Philips through the years include ADAC Laboratories, Agilent Healthcare Solutions Group, Amperex, ATL Ultrasound, EKCO, Lifeline Systems, Magnavox, Marconi Medical Systems, Mullard, Optiva, Preethi, Pye, Respironics, Inc., Sectra Mamea AB, Signetics, VISICU, Volcano, VLSI, Ximis, portions of Westinghouse and the consumer electronics operations of Philco and Sylvania. Philips abandoned the Sylvania trademark, which is now owned by Havells Sylvania, except in Australia, Canada, Mexico, New Zealand, Puerto Rico and the US, where it is owned by Osram. Formed in November 1999 as an equal joint venture between Philips and Agilent Technologies, the light-emitting diode manufacturer Lumileds became a subsidiary of Philips Lighting in August 2005 and a fully owned subsidiary in December 2006. An 80.1 percent stake in Lumileds was sold to Apollo Global Management in 2017. On 18 July 2017, Philips announced its acquisition of TomTec Imaging Systems GmbH. On 19 September 2018, Philips reported that it had acquired Canada-based Blue Willow Systems, a developer of a cloud-based senior living community resident safety platform. On 7 March 2019, Philips announced that it was acquiring the Healthcare Information Systems business of Carestream Health Inc., a US-based provider of medical imaging and healthcare IT solutions for hospitals, imaging centers, and specialty medical clinics. On 18 July 2019, Philips announced that it had expanded its patient management solutions in the US with the acquisition of Boston-based start-up company Medumo. On 27 August 2020, Philips announced the acquisition of Intact Vascular, Inc., a U.S.-based developer of medical devices for minimally invasive peripheral vascular procedures. On 18 December 2020, Philips and BioTelemetry, Inc., a leading U.S.-based provider of remote cardiac diagnostics and monitoring, announced that they had entered into a definitive merger agreement. On 19 January 2021, Philips announced the acquisition of Capsule Technologies, Inc., a provider of medical device integration and data technologies for hospitals and healthcare organizations.
On 9 November 2021, Philips announced the acquisition of Cardiologs, an AI-powered cardiac diagnostic technology developer, to expand its cardiac diagnostics and monitoring portfolio. Operations Philips is registered in the Netherlands as a naamloze vennootschap (public corporation) and has its global headquarters in Amsterdam. At the end of 2013, Philips had 111 manufacturing facilities and 59 R&D facilities across 26 countries, and sales and service operations in around 100 countries. The company is organized into three main divisions: Philips Consumer Lifestyle (formerly Philips Consumer Electronics and Philips Domestic Appliances and Personal Care), Philips Healthcare (formerly Philips Medical Systems), and Philips Lighting (former division). Philips achieved total revenues of €22.579 billion in 2011, of which €8.852 billion were generated by Philips Healthcare, €7.638 billion by Philips Lighting, €5.823 billion by Philips Consumer Lifestyle and €266 million from group activities. At the end of 2011, Philips had a total of 121,888 employees, of whom around 44% were employed in Philips Lighting, 31% in Philips Healthcare and 15% in Philips Consumer Lifestyle. The lighting division was spun out as a new company called Signify, which uses the Philips brand under license. Philips invested a total of €1.61 billion in research and development in 2011, equivalent to 7.10% of sales. Philips Intellectual Property and Standards is the group-wide division responsible for licensing, trademark protection and patenting. Philips holds around 54,000 patent rights, 39,000 trademarks, 70,000 design rights and 4,400 domain name registrations. In the 2021 review of WIPO's annual World Intellectual Property Indicators, Philips ranked 5th in the world for its 95 industrial design registrations published under the Hague System during 2020, down from its previous 4th-place ranking for 85 industrial design registrations published in 2019. Asia Thailand Philips Thailand was established in 1952. It is a subsidiary that produces healthcare, lifestyle, and lighting products. Philips started manufacturing in Thailand in 1960 with an incandescent lamp factory and has since diversified its production facilities to include a fluorescent lamp factory and a luminaires factory, serving Thai and worldwide markets. Hong Kong Philips Hong Kong began operations in 1948. Philips Hong Kong houses the global headquarters of Philips' Audio Business Unit. It also houses Philips' Asia Pacific regional office and the headquarters of its Design Division, Domestic Appliances & Personal Care Products Division, Lighting Products Division and Medical System Products Division. In 1974, Philips opened a lamp factory in Hong Kong. It has a capacity of 200 million pieces a year and is certified to ISO 9001:2000 and ISO 14001. Its product portfolio includes prefocus, lens-end and E10 miniature light bulbs. China Philips established its first 50/50 joint venture company, Beijing Philips Audio/Video Corporation (北京飞利浦有限公司), with Beijing Radio Factory (北京无线电厂) to manufacture audio consumer electronic products in Beijing in 1987. In 1990, a factory was set up in Zhuhai, Guangdong, which mainly manufactures Philishave shavers and healthcare products. In early 2008, Philips Lighting, a division of Royal Philips Electronics, opened a small engineering center in Shanghai to adapt the company's products to vehicles in Asia. Today Philips has 27 wholly foreign-owned enterprises and joint ventures in China, employing more than 17,500 people. China is its second-largest market.
India Philips began operations in India in 1930, with the establishment of Philips Electrical Co. (India) Pvt Ltd in Kolkata (then Calcutta) as a sales outlet for imported Philips lamps. In 1938, Philips established its first Indian lamp manufacturing factory in Kolkata. In 1948, Philips started manufacturing radios in Kolkata. In 1959, a second radio factory was established near Pune; it was closed and sold around 2006. In 1957, the company converted into a public limited company, renamed "Philips India Ltd". In 1970, a new consumer electronics factory began operations in Pimpri near Pune; it is now called the 'Philips Healthcare Innovation Centre'. A manufacturing facility, the 'Philips Centre for Manufacturing Excellence', was also set up in Chakan, Pune, in 2012. In 1996, the Philips Software Centre was established in Bangalore, later renamed the Philips Innovation Campus. In 2008, Philips India entered the water purifier market. In 2014, Philips was ranked 12th among India's most trusted brands according to the Brand Trust Report, a study conducted by Trust Research Advisory. Philips India is now one of the country's most diversified healthcare companies, focusing broadly on imaging, ultrasound, MA & TC products, and sleep and respiratory care products, and aims to reach 40 million patients in India within the next two years. In 2020, Philips introduced mobile ICUs to help clinicians meet the rising demand for ICU beds during the COVID-19 pandemic. Israel Philips has been active in Israel since 1948 and in 1998 set up a wholly owned subsidiary, Philips Electronics (Israel) Ltd. The company has over 700 employees in Israel and generated sales of over $300 million in 2007. Philips Medical Systems Technologies Ltd. (Haifa) is a developer and manufacturer of computed tomography (CT) and other diagnostic medical imaging systems. The company was founded in 1969 as Elscint by Elron Electronic Industries and was acquired by Marconi Medical Systems in 1998, which was itself acquired by Philips in 2001. Philips Semiconductors formerly had major operations in Israel; these now form part of NXP Semiconductors. On 1 August 2019, Philips acquired the Carestream HCIS division from Onex Corporation. As part of the acquisition, Algotec Systems Ltd (Carestream HCIS R&D), located in Raanana, Israel, changed ownership in a share deal; Algotec also changed its name to Philips Algotec and is part of Philips HCIS, a provider of medical imaging systems. Pakistan Philips has been active in Pakistan since 1948 and has a wholly owned subsidiary, Philips Pakistan Limited (formerly Philips Electrical Industries of Pakistan Limited). The head office is in Karachi, with regional sales offices in Lahore and Rawalpindi. Singapore Philips began operations in Singapore in 1951, initially as a local distributor of imported Philips products. Philips later established manufacturing sites at Boon Keng Road and Jurong Industrial Estate in 1968 and 1970 respectively. Since 1972, its regional headquarters has been based in the central HDB town of Toa Payoh, where, from the 1990s until the early 2010s, its campus consisted of four interconnected buildings housing offices and factory spaces. In 2016, a new Philips APAC HQ building was opened on the site of one of the former 1972 buildings. Europe Denmark Philips Denmark was founded in Copenhagen in 1927 and is now headquartered in Frederiksberg.
In 1963, Philips established the Philips TV & Test Equipment laboratory in Amager (moved to Brøndby Municipality in 1989), where engineers including Erik Helmer Nielsen created and developed some of Philips' most iconic television test cards, such as the monochrome PM5540 and the colour PM5544 and TVE test cards. In 1998, Philips TV & Test Equipment was spun off as ProTeleVision Technologies A/S and sold to PANTA Electronics B.V., which was owned by a consortium of investors led by Advent International. ProTeleVision Technologies A/S was dissolved in 2001, with products transferring to ProTelevision Technologies Corp A/S, DK-Audio A/S (dissolved 2018) and AREPA Test & Calibration. France Philips France has its headquarters in Suresnes. The company employs over 3,600 people nationwide. Philips Lighting had manufacturing facilities in Chalon-sur-Saône (fluorescent lamps), Chartres (automotive lighting), Lamotte-Beuvron (architectural lighting by LEDs and professional indoor lighting), Longvic (lamps), Miribel (outdoor lighting) and Nevers (professional indoor lighting); all manufacturing in France was sold or discontinued before the lighting spin-off in 2016. Germany Philips Germany was founded in 1926 in Berlin; its headquarters is now located in Hamburg. Over 4,900 people are employed in Germany. Hamburg hosts a distribution center for the Healthcare, Consumer Lifestyle and Lighting divisions, Philips Medical Systems DMC, and the Philips Innovative Technologies research laboratories. Aachen hosts Philips Innovative Technologies and Philips Innovation Services; Böblingen, Philips Medical Systems (patient monitoring systems); Herrsching, Philips Respironics; and Ulm, Philips Photonics, which develops and manufactures vertical-cavity surface-emitting laser diodes (VCSELs) and photodiodes for sensing and data communication. Greece Philips Greece is headquartered in Halandri, Attica. As of 2012, Philips has no manufacturing plants in Greece, although previously there were audio, lighting and telecommunications factories. Italy Philips founded its Italian subsidiary in 1923, basing it in Milan, where it still operates. After the closure of the company's industrial operations, mainly manufacturing TVs in Monza and conventional lightbulbs near Turin, Philips Italia exists for commercial activities only. Hungary Philips founded PACH (Philips Assembly Centre Hungary) in 1992, producing televisions and consumer electronics in Székesfehérvár. After TPV entered the Philips TV business, the factory was moved under TP Vision, the new joint-venture company, in 2011; production was transferred to Poland and China, and the factory was closed in 2013. With Philips' acquisition of PLI in 2007, another Hungarian Philips factory was added in Tamási, producing lamps under the name Philips IPSC Tamási, later Philips Lighting. The factory was renamed Signify in 2017 and still produces Philips lighting products. Poland Philips' operations in Poland include a European financial and accounting centre in Łódź; Philips Lighting facilities in Bielsko-Biała, Piła, and Kętrzyn; and a Philips Domestic Appliances facility in Białystok. Portugal Philips started business in Portugal in 1927 as "Philips Portuguesa S.A.R.L.". Philips Portuguesa S.A. is headquartered in Oeiras, near Lisbon. There were three Philips factories in Portugal: the FAPAE lamp factory in Lisbon; the Carnaxide magnetic-core memory factory near Lisbon, where the Philips Service organization was also based; and the Ovar factory in northern Portugal, making camera components and remote control devices.
The company still operates in Portugal with divisions for commercial lighting, medical systems and domestic appliances. Sweden Philips Sweden has two main sites: Kista, Stockholm County, with regional sales, marketing and a customer support organization, and Solna, Stockholm County, with the main office of the mammography division. United Kingdom Philips UK has its headquarters in Guildford. The company employs over 2,500 people nationwide. Its operations include:
Philips Healthcare Informatics, Belfast, which develops healthcare software products.
Philips Consumer Products, Guildford, which provides sales and marketing for televisions (including high-definition televisions), DVD recorders, hi-fi and portable audio, CD recorders, PC peripherals, cordless telephones, home and kitchen appliances, and personal care products (shavers, hair dryers, body beauty and oral hygiene).
Philips Dictation Systems, Colchester.
Philips Lighting: sales from Guildford and manufacture in Hamilton.
Philips Healthcare, Guildford: sales and technical support for X-ray, ultrasound, nuclear medicine, patient monitoring, magnetic resonance, computed tomography, and resuscitation products.
Philips Research Laboratories, Cambridge (until 2008 based in Redhill, Surrey; originally these were the Mullard Research Laboratories).
In the past, Philips UK also included:
Consumer product manufacturing in Croydon.
Television tube manufacturing at Mullard Simonstone.
Philips Business Communications, Cambridge, which offered voice and data communications products, specialising in Customer Relationship Management (CRM) applications, IP telephony, data networking, voice processing, command and control systems, and cordless and mobile telephony. In 2006 the business was placed into a 60/40 joint venture with NEC. NEC later acquired 100 per cent ownership and the business was renamed NEC Unified Solutions.
Philips Electronics Blackburn: vacuum tubes, capacitors, delay lines, LaserDiscs, CDs.
Philips Domestic Appliances Hastings: design and production of electric kettles and fan heaters, plus the former EKCO-brand "Thermotube" tubular heaters and "Hostess" domestic food-warming trolleys.
Mullard Southampton and Hazel Grove, Stockport: originally brought together as a joint venture between Mullard and GEC as Associated Semiconductor Manufacturers, they developed and manufactured rectifiers, diodes, transistors, integrated circuits and electro-optical devices. These became Philips Semiconductors before becoming part of NXP.
London Carriers, the logistics and transport division.
Mullard Equipment Limited (MEL), which produced products for the military.
Ada (Halifax) Ltd, maker of washing machines, spin driers and refrigerators.
Pye TVT Ltd of Cambridge.
Pye Telecommunications Ltd of Cambridge.
TMC Limited of Malmesbury.
North America Canada Philips Canada was founded in 1941 when it acquired Small Electric Motors Limited. It is well known in medical systems for diagnosis and therapy, lighting technologies, shavers, and consumer electronics. The Canadian headquarters are located in Markham, Ontario. For several years, Philips manufactured lighting products in two Canadian factories. The London, Ontario, plant opened in 1971. It produced A19 lamps (including the "Royale" long-life bulbs), PAR38 lamps and T19 lamps (originally a Westinghouse lamp shape). Philips closed the factory in May 2003. The Trois-Rivières, Quebec, plant was a Westinghouse facility which Philips continued to run after buying Westinghouse's lamp division in 1983. Philips closed this factory a few years later, in the late 1980s.
Mexico Philips Mexico Commercial SA de CV is headquartered in Mexico City. This entity was incorporated in FY2016 to sell the consumer lifestyle and healthcare portfolios in the market. United States The North American headquarters of Philips Electronics is in Cambridge, Massachusetts. Philips Lighting has its corporate office in Somerset, New Jersey, with manufacturing plants in Danville, Kentucky; Salina, Kansas; and Dallas and Paris, Texas, and distribution centers in Mountain Top, Pennsylvania; El Paso, Texas; Ontario, California; and Memphis, Tennessee. Philips Healthcare is headquartered in Cambridge, Massachusetts, and operates a health-tech hub in Nashville, Tennessee, with over 1,000 jobs. The North American sales organization is based in Bothell, Washington. There are also manufacturing facilities in Bothell, Washington; Baltimore, Maryland; Cleveland, Ohio; Foster City, California; Gainesville, Florida; Milpitas, California; and Reedsville, Pennsylvania. Philips Healthcare also formerly had a factory in Knoxville, Tennessee. Philips Consumer Lifestyle has its corporate office in Stamford, Connecticut. Philips Lighting has a Color Kinetics office in Burlington, Massachusetts. Philips Research's North American headquarters is in Cambridge, Massachusetts. In 2007, Philips entered into a definitive merger agreement with the North American luminaires company Genlyte Group Incorporated, which provided the company with a leading position in North American luminaires (also known as "lighting fixtures"), controls and related products for a wide variety of applications, including solid-state lighting. The company also acquired Respironics, which was a significant gain for its healthcare sector. On 21 February 2008, Philips completed the acquisition of Baltimore, Maryland-based VISICU. VISICU was the creator of the eICU concept, the use of telemedicine from a centralized facility to monitor and care for ICU patients. In April 2020, the United States Department of Health & Human Services (HHS) entered into a contract with Philips Respironics for 43,000 bundled Trilogy Evo Universal ventilator (EV300) hospital ventilators. This included the production and delivery of ventilators to the Strategic National Stockpile—about 156,000 by the end of August 2020 and 187,000 more by the end of 2020. During the COVID-19 pandemic, beginning in March 2020 and in response to international demand, Philips increased production of the ventilators fourfold within five months. Production lines were added in the United States, with employees working around the clock in factories producing ventilators in Western Pennsylvania and California, for example. In March 2020, ProPublica published a series of articles on the Philips ventilator contract as negotiated by trade adviser Peter Navarro. In response to the ProPublica series, in August the United States House of Representatives undertook a congressional investigation into the acquisition of the Philips ventilators. The lawmakers' investigation found "evidence of fraud, waste and abuse"; the deal negotiated by Navarro had resulted in an overpayment to Philips by the US government of "hundreds of millions". Oceania Australia and New Zealand Philips Australia was founded in 1927 and is headquartered in North Ryde, New South Wales, from where it also manages the New Zealand operation. The company employs around 800 people. Regional sales and support offices are located in Melbourne, Brisbane, Adelaide, Perth and Auckland.
Activities include: Philips Healthcare (also responsible for New Zealand operations); Philips Lighting (also responsible for New Zealand operations); Philips Oral Healthcare; Philips Professional Dictation Solutions; Philips Professional Display Solutions; Philips AVENT Professional; Philips Consumer Lifestyle (also responsible for New Zealand operations); Philips Sleep & Respiratory Care (formerly Respironics), with its national network of Sleepeasy Centres; Philips Dynalite (lighting control systems, acquired in 2009, global design and manufacturing centre); and Philips Selecon NZ (lighting entertainment product design and manufacture). South America Brazil Philips do Brasil was founded in 1924 in Rio de Janeiro. In 1929, Philips started to sell radio receivers, and in the 1930s it was making its light bulbs and radio receivers in Brazil. From 1939 to 1945, World War II forced the Brazilian branch of Philips to sell bicycles, refrigerators and insecticides. After the war, Philips underwent a major industrial expansion in Brazil and was among the first groups to establish operations in the Manaus Free Zone. In the 1970s, Philips Records was a major player in the Brazilian recording industry. Today, Philips do Brasil is one of the largest foreign-owned companies in Brazil. Philips uses the brand Walita for domestic appliances in Brazil. Color television Color television was introduced in South America by then-CEO Cor Dillen, first in Brazil in 1952 and then across the entire continent in the early 1980s. Former operations Philips' subsidiary Duphar manufactured pharmaceuticals for human and veterinary use and products for crop protection. Duphar was sold to Solvay in 1990. In subsequent years, Solvay sold off all of its divisions to other companies (crop protection to UniRoyal, now Chemtura; the veterinary division to Fort Dodge, a division of Wyeth; and the pharmaceutical division to Abbott Laboratories). PolyGram, Philips' music, television and film division, was sold to Seagram in 1998 and merged into Universal Music Group. Philips Records continues to operate as a record label of UMG, its name licensed from its former parent. In 1980, Philips acquired Marantz, a company renowned for high-end audio and video products, based in Kanagawa, Japan. In 2002, Marantz Japan merged with Denon to form D&M Holdings, and Philips sold its remaining stake in D&M Holdings in 2008. Origin, now part of Atos Origin, is a former division of Philips. ASM Lithography is a spin-off from a division of Philips. Hollandse Signaalapparaten was a manufacturer of military electronics; the business was sold to Thomson-CSF in 1990 and is now Thales Nederland. NXP Semiconductors, formerly known as Philips Semiconductors, was sold to a consortium of private equity investors in 2006. On 6 August 2010, NXP completed its IPO, with shares trading on NASDAQ. Ignis, of Comerio in the province of Varese, Italy, produced washing machines, dishwashers and microwave ovens, and was one of the leading companies in the domestic appliance market, holding a 38% share in 1960. In 1970, 50% of the company's capital was taken over by Philips, which acquired full control in 1972. Ignis was in those years the second-largest domestic appliance manufacturer after Zanussi, and in 1973 its factories employed over 10,000 people in Italy alone. With the transfer of ownership to the Dutch multinational, the company was renamed "IRE SpA" (Industrie Riunite Eurodomestici).
Thereafter, Philips sold major household appliances (white goods) under the Philips name. After Philips sold its major domestic appliances division to Whirlpool Corporation, the brand changed from Philips Whirlpool to Whirlpool Philips and finally to just Whirlpool. Whirlpool bought a 53% stake in Philips' major appliance operations to form Whirlpool International, and bought Philips' remaining interest in Whirlpool International in 1991. Philips Cryogenics was split off in 1990 to form Stirling Cryogenics BV in the Netherlands. This company is still active in the development and manufacture of Stirling cryocoolers and cryogenic cooling systems. North American Philips distributed AKG Acoustics products under the AKG of America, Philips Audio/Video, Norelco and AKG Acoustics Inc. branding until AKG set up its North American division in San Leandro, California, in 1985. (AKG's North American division has since moved to Northridge, California.) Polymer Vision was a Philips spin-off that manufactured flexible e-ink display screens. The company was acquired by the Taiwanese contract electronics manufacturer Wistron in 2009 and was shut down in 2012, after repeated failed attempts to find a buyer. Products Philips' core products are consumer electronics and electrical products (including small domestic appliances, shavers, beauty appliances, mother and childcare appliances, electric toothbrushes and coffee makers; products such as smartphones, audio equipment, Blu-ray players, computer accessories and televisions are sold under license) and healthcare products (including CT scanners, ECG equipment, mammography equipment, monitoring equipment, MRI scanners, radiography equipment, resuscitation equipment, ultrasound equipment and X-ray equipment). In January 2020, Philips announced that it was looking to sell its domestic appliances division, which includes products such as coffee machines, air purifiers and airfryers.
Lighting products
Professional indoor luminaires
Professional outdoor luminaires
Professional lamps
Lighting controls and control systems
Digital projection lights
Horticulture lighting
Solar LED lights
Smart office lighting systems
Smart retail lighting systems
Smart city lighting systems
Home lamps
Home fixtures
Home systems (branded as Philips Hue)
Automotive lighting
Audio products
Hi-fi systems
Wireless speakers
Radio systems
Docking stations
Headphones
DJ mixers
Alarm clocks
Healthcare products Philips healthcare products include:
Clinical informatics
Cardiology informatics (IntelliSpace Cardiovascular, Xcelera)
Enterprise Imaging Informatics (IntelliSpace PACS, XIRIS)
IntelliSpace family of solutions
Imaging systems
Cardio/vascular X-ray
Wires and catheters (Verrata)
Computed tomography (CT)
Fluoroscopy
Magnetic resonance imaging (MRI)
Mammography
Mobile C-arms
Nuclear medicine
PET (positron emission tomography)
PET/CT
Radiography
Radiation oncology systems
Ultrasound
Diagnostic monitoring
Diagnostic ECG
Defibrillators
Automated external defibrillators
Portable monitor/defibrillators
Accessories
Equipment
Software
Consumer
Philips AVENT
Patient care and clinical informatics
Anesthetic gas monitoring
Blood pressure
Capnography
D.M.E.
Diagnostic sleep testing
ECG
Enterprise patient informatics solutions
OB TraceVue
Compurecord
ICIP
eICU program
Emergin
Hemodynamic
IntelliSpace Cardiovascular
IntelliSpace PACS
IntelliSpace portal
Multi-measurement servers
Neurophedeoiles
Pulse oximetry
Tasy
Temperature
Transcutaneous gases
Ventilation
ViewForum
Xcelera
XIRIS
Xper Information Management
Logo evolution The famous Philips logo with the stars and waves was designed by the Dutch architect Louis Kalff (1897–1976), who stated that the emblem had come about by coincidence, as he did not know how a radio system worked.
Slogans
Trust In Philips Is Worldwide (1960–1974)
Simply Years Ahead (1974–1981)
We Want You To Have The Best (1981–1985)
Take a Closer Look (1985–1995)
Let's Make Things Better (1995–2004)
Sense & Simplicity (2004–2013)
Innovation & You (2013–present)
Sponsorships In 1913, in celebration of the 100th anniversary of the liberation of the Netherlands, Philips founded Philips Sports Vereniging (Philips Sports Club, now commonly known as PSV). The club is active in numerous sports but is now best known for its football team, PSV Eindhoven, and its swimming team. Philips owns the naming rights to Philips Stadium in Eindhoven, which is the home ground of PSV Eindhoven. Outside of the Netherlands, Philips sponsors and has sponsored numerous sports clubs, sports facilities and events. In November 2008, Philips renewed and extended its F1 partnership with AT&T Williams. Philips owns the naming rights to the Philips Championship, the premier basketball league in Australia, traditionally known as the National Basketball League. From 1988 to 1993, Philips was the principal sponsor of the Australian rugby league club the Balmain Tigers and the Indonesian football club Persiba Balikpapan. From 1998 to 2000, Philips sponsored the Winston Cup No. 7 entry for Geoff Bodine Racing, later Ultra Motorsports, for drivers Geoff Bodine and Michael Waltrip. From 1999 to 2018, Philips held the naming rights to Philips Arena in Atlanta, home of the Atlanta Hawks of the National Basketball Association and former home of the defunct Atlanta Thrashers of the National Hockey League. In 2024, Philips became a sponsor of La Liga team FC Barcelona. Outside of sports, Philips sponsors the international Philips Monsters of Rock festival. Respironics recall In June 2021, Philips announced a voluntary recall of several of its Respironics ventilators, BiPAP, and CPAP machines due to potential health risks. Gradual degradation of foam in the devices, intended to reduce noise and vibrations during operation, could result in patients inhaling particulates or certain chemicals. The recall involved around 3 to 4 million machines, which, in addition to the COVID-19 pandemic, contributed to a supply chain crisis impeding the availability of these devices to patients. Originally, Philips described the risks as potentially "life-threatening" but said that there had been no reports of death as a result of the issues. Since then, the FDA has received 385 reports of death allegedly caused by the foam issue. In 2023, ProPublica and the Pittsburgh Post-Gazette reported that Philips had received thousands of patient reports and returned machines affected by the degrading foam as far back as 2010, and that many of these reports were not disclosed to the FDA as Philips was legally obligated to do. In October 2022, dozens of lawsuits against Philips related to the safety concerns were consolidated into one class-action lawsuit.
Philips settled this lawsuit in September 2023 for at least $479 million. In January 2024, Philips agreed to halt the sale of any new sleep apnea devices in the U.S. as part of an agreement with the FDA. As part of the deal, Philips would need to meet certain conditions in its U.S. manufacturing plants, a process that Philips CEO Roy Jakobs said could take five to seven years. Environmental record Circular economy Philips and its former CEO Frans van Houten have held several global leadership positions in advancing the circular economy, including as a founding member and co-chair of the board of directors of the Platform for Accelerating the Circular Economy (PACE), by applying circular approaches in its capital equipment business, and as a global partner of the Ellen MacArthur Foundation. Planned obsolescence Philips was a member of the 1925 Phoebus cartel along with Osram, Tungsram, Associated Electrical Industries, Compagnie des Lampes, International General Electric, and the GE Overseas Group, holding shares in the Swiss corporation proportional to their lamp sales. The cartel lowered operational costs and worked to standardize the life expectancy of light bulbs at 1,000 hours (down from 2,500 hours), while raising prices without fear of competition. The cartel tested its members' bulbs and fined manufacturers for bulbs that lasted more than 1,000 hours. Green initiatives Philips also runs the EcoVision initiative, which commits the company to a number of environmentally positive improvements, focusing on energy efficiency. Philips also marks its "green" products with the Philips Green Logo, identifying them as products with significantly better environmental performance than their competitors or predecessors. L-Prize competition In 2011, Philips won a $10 million cash prize from the US Department of Energy by winning its L-Prize competition to produce a high-efficiency, long-life replacement for a standard 60 W incandescent lightbulb. The winning LED lightbulb, which was made available to consumers in April 2012, produces slightly more than 900 lumens at an input power of 10 W. Greenpeace ranking In Greenpeace's 2012 Guide to Greener Electronics, which ranks electronics manufacturers on sustainability, climate and energy, and how green their products are, Philips ranked 10th of 16 companies with a score of 3.8/10. The company was the top scorer in the Energy section due to its energy advocacy work calling upon the EU to adopt a 30% reduction in greenhouse gas emissions by 2020. It was also praised for its new products, which are free from PVC plastic and BFRs. However, the guide criticized Philips' sourcing of fibres for paper, arguing that it must develop a paper procurement policy that excludes suppliers involved in deforestation and illegal logging. Philips has made some progress since 2007 (when it was first ranked in the guide, which started in 2006), in particular by supporting the Individual Producer Responsibility principle, which means that the company accepts responsibility for the toxic impacts of its products on e-waste dumps around the world. Dubai Lamp In 2016, Philips introduced a series of LED lamps with a luminous efficacy of up to 200 lm/W; the Dubai Lamp produces 600 lumens at an input power of 3 W. Publications A. Heerding: The origin of the Dutch incandescent lamp industry. (Vol. 1 of The history of N.V. Philips' Gloeilampenfabrieken). Cambridge, Cambridge University Press, 1986. A. Heerding: A company of many parts. (Vol. 2 of The history of N.V. Philips' Gloeilampenfabrieken).
Cambridge, Cambridge University Press, 1988. I.J. Blanken: The development of N.V. Philips' Gloeilampenfabrieken into a major electrical group. Zaltbommel, European Library, 1999. (Vol. 3 of The history of Philips Electronics N.V.). I.J. Blanken: Under German rule. Zaltbommel, European Library, 1999. (Vol. 4 of The history of Philips Electronics N.V.). References External links Electronics companies established in 1891 Companies listed on Euronext Amsterdam Companies in the AEX index Consumer electronics brands Display technology companies Dutch brands Eindhoven Guitar amplification tubes Headphones manufacturers Home appliance manufacturers of the Netherlands Kitchenware brands Light-emitting diode manufacturers Lighting brands Medical device manufacturers Medical technology companies of the Netherlands Mobile phone manufacturers Personal care brands Portable audio player manufacturers Vacuum tubes Videotelephony Dutch companies established in 1891 Radio manufacturers
23551
https://en.wikipedia.org/wiki/Perciformes
Perciformes
Perciformes, also called the Acanthopteri, is an order or superorder of ray-finned fish in the clade Percomorpha. Perciformes means "perch-like". Among the well-known members of this group are the perches and darters (Percidae) and the sea basses and groupers (Serranidae). Taxonomy Formerly, this group was thought to be even more diverse than it is now considered to be, containing about 41% of all bony fish (about 10,000 species) and about 160 families, the most of any order within the vertebrates. However, many of these families have since been reclassified into their own orders within the clade Percomorpha, significantly reducing the size of the group. In contrast to this splitting, other groups formerly considered distinct, such as the Scorpaeniformes, are now classified in the Perciformes. Evolution The earliest fossil perciform is the extinct serranid Paleoserranus from the Early Paleocene of Mexico, but potential records of "percoids" are known from the Maastrichtian, including Eoserranus, Indiaichthys, and Prolates, although their exact taxonomic identity remains uncertain. Characteristics The dorsal and anal fins are divided into anterior spiny and posterior soft-rayed portions, which may be partially or completely separated. The pelvic fins usually have one spine and up to five soft rays, and are positioned unusually far forward, under the chin or under the belly. Scales are usually ctenoid (rough to the touch), although sometimes they are cycloid (smooth to the touch) or otherwise modified. Taxonomy Classification of this group is controversial. As traditionally defined before the introduction of cladistics, the Perciformes are almost certainly paraphyletic. Other orders that should possibly be included as suborders are the Scorpaeniformes, Tetraodontiformes, and Pleuronectiformes. Of the presently recognized suborders, several may be paraphyletic as well. The suborders are grouped by suborder/superfamily, generally following the text Fishes of the World. References Extant Late Cretaceous first appearances Ray-finned fish orders Taxa named by Pieter Bleeker
23553
https://en.wikipedia.org/wiki/Asimina
Asimina
Asimina is a genus of small trees or shrubs first described in 1763. Asimina is the only temperate genus in the tropical and subtropical flowering plant family Annonaceae. Asimina species have large, simple leaves and large fruit. The genus is native to eastern North America, and its species are collectively referred to as pawpaw. The genus includes the widespread common pawpaw, Asimina triloba, which bears the largest edible fruit indigenous to the United States. Pawpaws are native to 26 states of the U.S. and to Ontario in Canada. The common pawpaw is a patch-forming (clonal) understory tree found in well-drained, deep, fertile bottomland and hilly upland habitat. Pawpaws are in the same plant family (Annonaceae) as the custard apple, cherimoya, sweetsop, soursop, and ylang-ylang; the genus is the only member of that family not confined to the tropics. Fossils date to the Cretaceous. Names The genus Asimina was first described and named by Michel Adanson, a French naturalist of Scottish descent. The name is adapted from a Native American term of unknown origin, assimin, through the French colonial asiminier. The common name (American) pawpaw, also spelled paw paw, paw-paw, and papaw, probably derives from the Spanish papaya, perhaps because of the superficial similarity of their fruits. Description Pawpaws are shrubs or small trees. The northern, cold-tolerant common pawpaw (A. triloba) is deciduous, while the southern species are often evergreen. The leaves are alternate, obovate, and entire. The flowers of pawpaws are produced singly or in clusters of up to eight together; they are large, 4–6 cm across, perfect, with three sepals and six petals (three large outer petals, three smaller inner petals). The petal color varies from white to purple or red-brown. The fruit of the common pawpaw is a large, edible berry with numerous seeds; it is green when unripe, maturing to yellow or brown. It has a flavor somewhat similar to both banana and mango, varying significantly by cultivar, and has more protein than most fruits. Species and their distributions 11 species and several natural interspecies hybrids are accepted:
Asimina angustifolia Raf. 1840 not A. Gray 1886 – Florida, Georgia, Alabama, South Carolina. Regarded as a synonym of A. longifolia by some authorities.
Asimina × bethanyensis
Asimina × colorata
Asimina incana – woolly pawpaw. Florida and Georgia. (Annona incana W. Bartram)
Asimina longifolia – slimleaf pawpaw. Florida, Georgia, and Alabama.
Asimina × kralii
Asimina manasota DeLaney – Manasota pawpaw, native to two counties in Florida (Manatee and Sarasota); first described in 2010. Not recognized by some authorities.
Asimina × nashii
Asimina × oboreticulata
Asimina obovata (Willd.) Nash (Annona obovata Willd.) – flag pawpaw or bigflower pawpaw. Florida.
Asimina parviflora – smallflower pawpaw. Southern states from Texas to Virginia.
Asimina × peninsularis
Asimina × piedmontana
Asimina pulchella – white squirrel banana. Endemic to 3 counties in Florida. (endangered)
Asimina pygmaea – dwarf pawpaw. Florida and Georgia.
Asimina reticulata – netted pawpaw. Florida and Georgia.
Asimina rugelii – yellow squirrel banana. Endemic to Volusia County, Florida. (endangered)
Asimina spatulata (Kral) D.B.Ward – slimleaf pawpaw. Florida and Alabama. Regarded as a synonym by some authorities.
Asimina tetramera – fourpetal pawpaw. Florida. (endangered)
Asimina triloba – common pawpaw.
Extreme southern Ontario, Canada, and the eastern United States from New York west to southeast Nebraska, and south to northern Florida and eastern Texas. (Annona triloba L.) Ecology The common pawpaw is native to shady, rich bottom lands, where it often forms a dense undergrowth in the forest, often appearing as a patch or thicket of individual, small, slender trees. Pawpaw flowers are insect-pollinated, but fruit production is limited since few if any pollinators are attracted to the flower's faint, or sometimes nonexistent scent. The flowers produce an odor similar to that of rotting meat to attract blowflies or carrion beetles for cross pollination. Other insects that are attracted to pawpaw plants include scavenging fruit flies, carrion flies and beetles. Because of difficult pollination, some believe the flowers are self-incompatible. Pawpaw fruit may be eaten by foxes, opossums, squirrels, and raccoons. Pawpaw leaves and twigs are seldom consumed by rabbits or deer. The leaves, twigs, and bark of the common pawpaw tree contain natural insecticides known as acetogenins. Larvae of the zebra swallowtail butterfly feed exclusively on young leaves of the various pawpaw species, but never occur in great numbers on the plants. The pawpaw is considered an evolutionary anachronism, where a now-extinct evolutionary partner, such as a Pleistocene megafauna species, formerly consumed the fruit and assisted in seed dispersal. Cultivation and uses Wild-collected fruits of the common pawpaw (A. triloba) have long been a favorite treat throughout the tree's extensive native range in eastern North America. Pawpaws have never been widely cultivated for fruit, but interest in pawpaw cultivation has increased in recent decades. Fresh pawpaw fruits are commonly eaten raw; however, once ripe they store only a few days at room temperature and do not ship well unless frozen. Other methods of preservation include dehydration, production of jams or jellies, and pressure canning. The fruit pulp is also often used locally in baked dessert recipes, with pawpaw often substituted in many banana-based recipes. The common pawpaw is of interest in ecological restoration plantings, since this tree grows well in wet soil and has a strong tendency to form well-rooted clonal thickets. History The earliest documentation of pawpaws is in the 1541 report of the Spanish de Soto expedition, who found Native Americans cultivating it east of the Mississippi River. Chilled pawpaw fruit was a favorite dessert of George Washington, and Thomas Jefferson planted it at his home in Virginia, Monticello. The Lewis and Clark Expedition sometimes subsisted on pawpaws during their travels. Daniel Boone was also a consumer and fan of the pawpaw. The common pawpaw was designated as the Ohio state native fruit in 2009. Numerous pawpaw festivals have celebrated the plant and its fruit. References External links USDA distribution of Pawpaw Pawpaw Information from Kentucky State University Asimina Genetic Resources - Pawpaw Clark's September 18, 1806 journal entry about pawpaws Asimina triloba - Brooklyn Botanical Garden Pawpaw Wines Pawpaw Festival, Athens, Ohio Annonaceae genera Trees of Northern America Cuisine of the Southern United States Taxa named by Michel Adanson Fruit trees Crops originating from indigenous Americans
23555
https://en.wikipedia.org/wiki/Pentecostalism
Pentecostalism
Pentecostalism or classical Pentecostalism is a Protestant Charismatic Christian movement that emphasizes direct personal experience of God through baptism with the Holy Spirit. The term Pentecostal is derived from Pentecost, an event that commemorates the descent of the Holy Spirit upon the Apostles and other followers of Jesus Christ while they were in Jerusalem celebrating the Feast of Weeks, as described in the Acts of the Apostles (Acts 2:1–31). Like other forms of evangelical Protestantism, Pentecostalism adheres to the inerrancy of the Bible and the necessity of the New Birth: an individual repenting of their sin and "accepting Jesus Christ as their personal Lord and Savior". It is distinguished by belief in both the "baptism in the Holy Spirit" and baptism by water, which enables a Christian to "live a Spirit-filled and empowered life". This empowerment includes the use of spiritual gifts such as speaking in tongues and divine healing. Because of their commitment to biblical authority, spiritual gifts, and the miraculous, Pentecostals see their movement as reflecting the same kind of spiritual power and teachings that were found in the Apostolic Age of the Early Church. For this reason, some Pentecostals also use the term "Apostolic" or "Full Gospel" to describe their movement. Holiness Pentecostalism emerged in the early 20th century among radical adherents of the Wesleyan-Holiness movement, who were energized by Christian revivalism and expectation of the imminent Second Coming of Christ. Believing that they were living in the end times, they expected God to spiritually renew the Christian Church and bring to pass the restoration of spiritual gifts and the evangelization of the world. In 1900, Charles Parham, an American evangelist and faith healer, began teaching that speaking in tongues was the Biblical evidence of Spirit baptism. Along with William J. Seymour, a Wesleyan-Holiness preacher, he taught that this was the third work of grace. The three-year-long Azusa Street Revival, founded and led by Seymour in Los Angeles, California, resulted in the growth of Pentecostalism throughout the United States and the rest of the world. Visitors carried the Pentecostal experience back to their home churches or felt called to the mission field. While virtually all Pentecostal denominations trace their origins to Azusa Street, the movement has had several divisions and controversies. Early disputes centered on challenges to the doctrine of entire sanctification, and later on, the Holy Trinity. As a result, the Pentecostal movement is divided between Holiness Pentecostals, who affirm three definite works of grace, and Finished Work Pentecostals, who are partitioned into trinitarian and non-trinitarian branches, the latter giving rise to Oneness Pentecostalism. Comprising over 700 denominations and many independent churches, Pentecostalism is highly decentralized. No central authority exists, but many denominations are affiliated with the Pentecostal World Fellowship. With over 279 million classical Pentecostals worldwide, the movement is growing in many parts of the world, especially the Global South and Third World countries. Since the 1960s, Pentecostalism has increasingly gained acceptance from other Christian traditions, and Pentecostal beliefs concerning the baptism of the Holy Spirit and spiritual gifts have been embraced by non-Pentecostal Christians in Protestant and Catholic churches through their adherence to the Charismatic movement.
Together, worldwide Pentecostal and Charismatic Christianity numbers over 644 million adherents. While the movement originally attracted mostly lower classes in the global South, there is a new appeal to middle classes. Middle-class congregations tend to have fewer members. Pentecostalism is believed to be the fastest-growing religious movement in the world. History Background Early Pentecostals have considered the movement a latter-day restoration of the church's apostolic power, and historians such as Cecil M. Robeck Jr. and Edith Blumhofer write that the movement emerged from late 19th-century radical evangelical revival movements in America and in Great Britain. Within this radical evangelicalism, expressed most strongly in the Wesleyan–holiness and Higher Life movements, themes of restorationism, premillennialism, faith healing, and greater attention on the person and work of the Holy Spirit were central to emerging Pentecostalism. Believing that the second coming of Christ was imminent, these Christians expected an endtime revival of apostolic power, spiritual gifts, and miracle-working. Figures such as Dwight L. Moody and R. A. Torrey began to speak of an experience available to all Christians which would empower believers to evangelize the world, often termed baptism with the Holy Spirit. Certain Christian leaders and movements had important influences on early Pentecostals. The essentially universal belief in the continuation of all the spiritual gifts in the Keswick and Higher Life movements constituted a crucial historical background for the rise of Pentecostalism. Albert Benjamin Simpson (1843–1919) and his Christian and Missionary Alliance (founded in 1887) was very influential in the early years of Pentecostalism, especially on the development of the Assemblies of God. Another early influence on Pentecostals was John Alexander Dowie (1847–1907) and his Christian Catholic Apostolic Church (founded in 1896). Pentecostals embraced the teachings of Simpson, Dowie, Adoniram Judson Gordon (1836–1895) and Maria Woodworth-Etter (1844–1924; she later joined the Pentecostal movement) on healing. Edward Irving's Catholic Apostolic Church (founded c. 1831) also displayed many characteristics later found in the Pentecostal revival. Isolated Christian groups were experiencing charismatic phenomena such as divine healing and speaking in tongues. The Holiness Pentecostal movement provided a theological explanation for what was happening to these Christians, and they adapted a modified form of Wesleyan soteriology to accommodate their new understanding. Early revivals: 1900–1929 Charles Fox Parham, an independent holiness evangelist who believed strongly in divine healing, was an important figure to the emergence of Pentecostalism as a distinct Christian movement. Parham, who was raised as a Methodist, started a spiritual school near Topeka, Kansas in 1900, which he named Bethel Bible School. There he taught that speaking in tongues was the scriptural evidence for the reception of the baptism with the Holy Spirit. On January 1, 1901, after a watch night service, the students prayed for and received the baptism with the Holy Spirit with the evidence of speaking in tongues. Parham received this same experience sometime later and began preaching it in all his services. Parham believed this was xenoglossia and that missionaries would no longer need to study foreign languages. Parham closed his Topeka school after 1901 and began a four-year revival tour throughout Kansas and Missouri. 
He taught that the baptism with the Holy Spirit was a third experience, subsequent to conversion and sanctification. Sanctification cleansed the believer, but Spirit baptism empowered the believer for service. At about the same time that Parham was spreading his doctrine of initial evidence in the Midwestern United States, news of the Welsh Revival of 1904–1905 ignited intense speculation among radical evangelicals around the world, and particularly in the US, about a coming move of the Spirit which would renew the entire Christian Church. This revival saw thousands of conversions and also exhibited speaking in tongues. Parham moved to Houston, Texas, in 1905, where he started a Bible training school. One of his students was William J. Seymour, a one-eyed black preacher. Seymour traveled to Los Angeles, where his preaching sparked the three-year-long Azusa Street Revival in 1906. The revival first broke out on Monday, April 9, 1906, at 214 Bonnie Brae Street and then moved to 312 Azusa Street on Friday, April 14, 1906. Worship at the racially integrated Azusa Mission featured an absence of any order of service. People preached and testified as moved by the Spirit, spoke and sang in tongues, and fell (were slain) in the Spirit. The revival attracted both religious and secular media attention, and thousands of visitors flocked to the mission, carrying the "fire" back to their home churches. Despite the work of various Wesleyan groups such as Parham's and D. L. Moody's revivals, the widespread Pentecostal movement in the US is generally considered to have begun with Seymour's Azusa Street Revival. The crowds of African-Americans and whites worshiping together at William Seymour's Azusa Street Mission set the tone for much of the early Pentecostal movement. During the period 1906–1924, Pentecostals defied social, cultural and political norms of the time that called for racial segregation and the enactment of Jim Crow laws. The Church of God in Christ, the Church of God (Cleveland), the Pentecostal Holiness Church, and the Pentecostal Assemblies of the World were all interracial denominations before the 1920s. These groups, especially in the Jim Crow South, were under great pressure to conform to segregation. Ultimately, North American Pentecostalism would divide into white and African-American branches. Though it never entirely disappeared, interracial worship within Pentecostalism would not reemerge as a widespread practice until after the civil rights movement. Women were vital to the early Pentecostal movement. Believing that whoever received the Pentecostal experience had the responsibility to use it towards the preparation for Christ's second coming, Pentecostal women held that the baptism in the Holy Spirit gave them empowerment and justification to engage in activities traditionally denied to them. The first person at Parham's Bible college to receive Spirit baptism with the evidence of speaking in tongues was a woman, Agnes Ozman. Women such as Florence Crawford, Ida Robinson, and Aimee Semple McPherson founded new denominations, and many women served as pastors, co-pastors, and missionaries. Women wrote religious songs, edited Pentecostal papers, and taught and ran Bible schools. The unconventionally intense and emotional environment generated in Pentecostal meetings both promoted, and was itself created by, other forms of participation such as personal testimony and spontaneous prayer and singing.
Women did not shy away from engaging in this forum, and in the early movement the majority of converts and church-goers were female. Nevertheless, there was considerable ambiguity surrounding the role of women in the church. The subsiding of the early Pentecostal movement allowed a socially more conservative approach to women to settle in, and, as a result, female participation was channeled into more supportive and traditionally accepted roles. Auxiliary women's organizations were created to focus women's talents on more traditional activities. Women also became much more likely to be evangelists and missionaries than pastors. When they were pastors, they often co-pastored with their husbands. The majority of early Pentecostal denominations taught Christian pacifism and adopted military service articles that advocated conscientious objection. Spread and opposition Azusa participants returned to their homes carrying their new experience with them. In many cases, whole churches were converted to the Pentecostal faith, but many times Pentecostals were forced to establish new religious communities when their experience was rejected by the established churches. One of the first areas of involvement was the African continent, where, by 1907, American missionaries were established in Liberia, as well as in South Africa by 1908. Because speaking in tongues was initially believed to always be actual foreign languages, it was believed that missionaries would no longer have to learn the languages of the peoples they evangelized because the Holy Spirit would provide whatever foreign language was required. (When the majority of missionaries, to their disappointment, learned that tongues speech was unintelligible on the mission field, Pentecostal leaders were forced to modify their understanding of tongues.) Thus, as the experience of speaking in tongues spread, a sense of the immediacy of Christ's return took hold, and that energy would be directed into missionary and evangelistic activity. Early Pentecostals saw themselves as outsiders from mainstream society, dedicated solely to preparing the way for Christ's return. An associate of Seymour's, Florence Crawford, brought the message to the Northwest, forming what would become the Apostolic Faith Church—a Holiness Pentecostal denomination—by 1908. After 1907, Azusa participant William Howard Durham, pastor of the North Avenue Mission in Chicago, returned to the Midwest to lay the groundwork for the movement in that region. It was from Durham's church that future leaders of the Pentecostal Assemblies of Canada would hear the Pentecostal message. One of the most well known Pentecostal pioneers was Gaston B. Cashwell (the "Apostle of Pentecost" to the South), whose evangelistic work led three Southeastern holiness denominations into the new movement. The Pentecostal movement, especially in its early stages, was typically associated with the impoverished and marginalized of America, especially African Americans and Southern Whites. With the help of many healing evangelists such as Oral Roberts, Pentecostalism spread across America by the 1950s. International visitors and Pentecostal missionaries would eventually export the revival to other nations. The first foreign Pentecostal missionaries were Alfred G. Garr and his wife, who were Spirit baptized at Azusa and traveled to India and later Hong Kong. 
On being Spirit baptized, Garr spoke in Bengali, a language he did not know; convinced of his call to serve in India, he came to Calcutta with his wife Lilian and began ministering at the Bow Bazar Baptist Church. The Norwegian Methodist pastor T. B. Barratt was influenced by Seymour during a tour of the United States. By December 1906, he had returned to Europe, and he is credited with beginning the Pentecostal movement in Sweden, Norway, Denmark, Germany, France and England. A notable convert of Barratt was Alexander Boddy, the Anglican vicar of All Saints' in Sunderland, England, who became a founder of British Pentecostalism. Other important converts of Barratt were the German minister Jonathan Paul, who founded the first German Pentecostal denomination (the Mülheim Association), and Lewi Pethrus, the Swedish Baptist minister who founded the Swedish Pentecostal movement. Through Durham's ministry, Italian immigrant Luigi Francescon received the Pentecostal experience in 1907 and established Italian Pentecostal congregations in the US, Argentina (Christian Assembly in Argentina), and Brazil (Christian Congregation of Brazil). In 1908, Giacomo Lombardi led the first Pentecostal services in Italy. In November 1910, two Swedish Pentecostal missionaries arrived in Belém, Brazil, and established what would become the Assembleias de Deus (Assemblies of God of Brazil). In 1908, John G. Lake, a follower of Alexander Dowie who had experienced Pentecostal Spirit baptism, traveled to South Africa and founded what would become the Apostolic Faith Mission of South Africa and the Zion Christian Church. As a result of this missionary zeal, practically all Pentecostal denominations today trace their historical roots to the Azusa Street Revival. Eventually, the first missionaries realized that they needed to learn the local language and culture, raise financial support, and develop long-term strategies for the development of indigenous churches. The first generation of Pentecostal believers faced immense criticism and ostracism from other Christians, most vehemently from the Holiness movement from which they originated. Alma White, leader of the Pillar of Fire Church (a Holiness Methodist denomination), wrote a book against the movement titled Demons and Tongues in 1910. She called Pentecostal tongues "satanic gibberish" and Pentecostal services "the climax of demon worship". Famous Holiness Methodist preacher W. B. Godbey characterized those at Azusa Street as "Satan's preachers, jugglers, necromancers, enchanters, magicians, and all sorts of mendicants". To Dr. G. Campbell Morgan, Pentecostalism was "the last vomit of Satan", while Dr. R. A. Torrey thought it was "emphatically not of God, and founded by a Sodomite". The Pentecostal Church of the Nazarene, one of the largest holiness groups, was strongly opposed to the new Pentecostal movement. To avoid confusion, the church changed its name in 1919 to the Church of the Nazarene. A. B. Simpson's Christian and Missionary Alliance—a Keswickian denomination—negotiated a compromise position unique for the time. Simpson believed that Pentecostal tongues speaking was a legitimate manifestation of the Holy Spirit, but he did not believe it was a necessary evidence of Spirit baptism. This view on speaking in tongues ultimately led to what became known as the "Alliance position" articulated by A. W. Tozer as "seek not—forbid not".
Early controversies
The first Pentecostal converts were mainly derived from the Holiness movement and adhered to a Wesleyan understanding of sanctification as a definite, instantaneous experience and second work of grace. Problems with this view arose when large numbers of converts entered the movement from non-Wesleyan backgrounds, especially from Baptist churches. In 1910, William Durham of Chicago first articulated the Finished Work, a doctrine which located sanctification at the moment of salvation and held that after conversion the Christian would progressively grow in grace in a lifelong process. This teaching polarized the Pentecostal movement into two factions: Holiness Pentecostalism and Finished Work Pentecostalism. The Wesleyan doctrine was strongest in the Apostolic Faith Church, which views itself as the successor of the Azusa Street Revival, as well as in the Calvary Holiness Association, Congregational Holiness Church, Church of God (Cleveland), Church of God in Christ, Free Gospel Church and the Pentecostal Holiness Church; these bodies are classed as Holiness Pentecostal denominations. The Finished Work, however, would ultimately gain ascendancy among Pentecostals, in denominations such as the Assemblies of God, which was the first Finished Work Pentecostal denomination. After 1911, most new Pentecostal denominations would adhere to Finished Work sanctification.

In 1914, a group of 300 predominantly white Pentecostal ministers and laymen from all regions of the United States gathered in Hot Springs, Arkansas, to create a new, national Pentecostal fellowship—the General Council of the Assemblies of God. By 1911, many of these white ministers were distancing themselves from an existing arrangement under an African-American leader. Many of these white ministers had been licensed by the African-American leader C. H. Mason under the auspices of the Church of God in Christ, one of the few legally chartered Pentecostal organizations then credentialing and licensing ordained Pentecostal clergy. To further such distance, Bishop Mason and other African-American Pentecostal leaders were not invited to the initial 1914 fellowship of Pentecostal ministers. These predominantly white ministers adopted a congregational polity, whereas the COGIC and other Southern groups remained largely episcopal and rejected a Finished Work understanding of sanctification. Thus, the creation of the Assemblies of God marked an official end of Pentecostal doctrinal unity and racial integration.

Among these Finished Work Pentecostals, the new Assemblies of God would soon face a "new issue" which first emerged at a 1913 camp meeting. During a baptism service, the speaker, R. E. McAlister, mentioned that the Apostles baptized converts once in the name of Jesus Christ, and the words "Father, Son, and Holy Ghost" were never used in baptism. This inspired Frank Ewart, who claimed to have received a divine prophecy revealing a nontrinitarian conception of God. Ewart believed that there was only one personality in the Godhead—Jesus Christ. The terms "Father" and "Holy Ghost" were titles designating different aspects of Christ. Those who had been baptized in the Trinitarian fashion needed to submit to rebaptism in Jesus' name. Furthermore, Ewart believed that Jesus' name baptism and the gift of tongues were essential for salvation.
Ewart and those who adopted his belief, which is known as Oneness Pentecostalism, called themselves "oneness" or "Jesus' Name" Pentecostals, but their opponents called them "Jesus Only". Amid great controversy, the Assemblies of God rejected the Oneness teaching, and many of its churches and pastors were forced to withdraw from the denomination in 1916. They organized their own Oneness groups. Most of these joined Garfield T. Haywood, an African-American preacher from Indianapolis, to form the Pentecostal Assemblies of the World. This church maintained an interracial identity until 1924, when the white ministers withdrew to form the Pentecostal Church, Incorporated. This church later merged with another group, forming the United Pentecostal Church International. This controversy among the Finished Work Pentecostals caused Holiness Pentecostals to further distance themselves from Finished Work Pentecostals, whom they viewed as heretical.

1930–1959
While Pentecostals shared many basic assumptions with conservative Protestants, the earliest Pentecostals were rejected by Fundamentalist Christians who adhered to cessationism. In 1928, the World Christian Fundamentals Association labeled Pentecostalism "fanatical" and "unscriptural". By the early 1940s, this rejection of Pentecostals was giving way to a new cooperation between them and leaders of the "new evangelicalism", and American Pentecostals were involved in the founding of the 1942 National Association of Evangelicals. Pentecostal denominations also began to interact with each other on both national and international levels through the Pentecostal World Fellowship, which was founded in 1947. During the war, some Pentecostal churches in Europe, especially in Italy and Germany, were also victims of the Holocaust. Because of their speaking in tongues, their members were considered mentally ill, and many pastors were sent either to confinement or to concentration camps.

Though Pentecostals began to find acceptance among evangelicals in the 1940s, the previous decade was widely viewed as a time of spiritual dryness, when healings and other miraculous phenomena were perceived as being less prevalent than in earlier decades of the movement. It was in this environment that the Latter Rain Movement, the most important controversy to affect Pentecostalism since World War II, began in North America and spread around the world in the late 1940s. Latter Rain leaders taught the restoration of the fivefold ministry led by apostles. These apostles were believed capable of imparting spiritual gifts through the laying on of hands. Prominent participants in the early Pentecostal revivals, such as Stanley Frodsham and Lewi Pethrus, endorsed the movement, citing similarities to early Pentecostalism. However, Pentecostal denominations were critical of the movement and condemned many of its practices as unscriptural. One reason for the conflict with the denominations was the sectarianism of Latter Rain adherents. Many autonomous churches were birthed out of the revival.

A simultaneous development within Pentecostalism was the postwar Healing Revival. Led by healing evangelists William Branham, Oral Roberts, Gordon Lindsay, and T. L. Osborn, the Healing Revival developed a following among non-Pentecostals as well as Pentecostals. Many of these non-Pentecostals were baptized in the Holy Spirit through these ministries. The Latter Rain and the Healing Revival influenced many leaders of the charismatic movement of the 1960s and 1970s.
1960–present
Before the 1960s, most non-Pentecostal Christians who experienced the Pentecostal baptism in the Holy Spirit typically kept their experience a private matter or joined a Pentecostal church afterward. The 1960s saw a new pattern develop where large numbers of Spirit baptized Christians from mainline churches in the US, Europe, and other parts of the world chose to remain and work for spiritual renewal within their traditional churches. This initially became known as New or Neo-Pentecostalism (in contrast to the older classical Pentecostalism) but eventually became known as the Charismatic Movement. Classical Pentecostals were cautiously supportive of the Charismatic Movement, but the failure of Charismatics to embrace traditional Pentecostal teachings, such as the prohibition of dancing, abstinence from alcohol and other drugs such as tobacco, and restrictions on dress and appearance following the doctrine of outward holiness, initiated an identity crisis for classical Pentecostals, who were forced to reexamine long-held assumptions about what it meant to be Spirit filled. The liberalizing influence of the Charismatic Movement on classical Pentecostalism can be seen in the disappearance of many of these taboos since the 1960s, apart from certain Holiness Pentecostal denominations, such as the Apostolic Faith Church, which maintain these standards of outward holiness. Because of this, the cultural differences between classical Pentecostals and charismatics have lessened over time. The global renewal movements manifest many of these tensions as inherent characteristics of Pentecostalism and as representative of the character of global Christianity.

Beliefs
Pentecostalism is an evangelical faith, emphasizing the reliability of the Bible and the need for the transformation of an individual's life through faith in Jesus. Like other evangelicals, Pentecostals generally adhere to the Bible's divine inspiration and inerrancy—the belief that the Bible, in the original manuscripts in which it was written, is without error. Pentecostals emphasize the teaching of the "full gospel" or "foursquare gospel". The term foursquare refers to the four fundamental beliefs of Pentecostalism: Jesus saves according to John 3:16; baptizes with the Holy Spirit according to Acts 2:4; heals bodily according to James 5:15; and is coming again to receive those who are saved according to 1 Thessalonians 4:16–17.

Salvation
The central belief of classical Pentecostalism is that through the death, burial, and resurrection of Jesus Christ, sins can be forgiven and humanity reconciled with God. This is the Gospel or "good news". The fundamental requirement of Pentecostalism is that one be born again. The new birth is received by the grace of God through faith in Christ as Lord and Savior. In being born again, the believer is regenerated, justified, adopted into the family of God, and the Holy Spirit's work of sanctification is initiated. Classical Pentecostal soteriology is generally Arminian rather than Calvinist. The security of the believer is a doctrine held within Pentecostalism; nevertheless, this security is conditional upon continual faith and repentance. Pentecostals believe in both a literal heaven and hell, the former for those who have accepted God's gift of salvation and the latter for those who have rejected it. For most Pentecostals there is no other requirement to receive salvation.
Baptism with the Holy Spirit and speaking in tongues are not generally required, though Pentecostal converts are usually encouraged to seek these experiences. A notable exception is Jesus' Name Pentecostalism, most adherents of which believe both water baptism and Spirit baptism are integral components of salvation.

Baptism with the Holy Spirit
Pentecostals identify three distinct uses of the word "baptism" in the New Testament:
Baptism into the body of Christ: This refers to salvation. Every believer in Christ is made a part of his body, the Church, through baptism. The Holy Spirit is the agent, and the body of Christ is the medium.
Water baptism: Symbolic of dying to the world and living in Christ, water baptism is an outward symbolic expression of that which has already been accomplished by the Holy Spirit, namely baptism into the body of Christ.
Baptism with the Holy Spirit: This is an experience distinct from baptism into the body of Christ. In this baptism, Christ is the agent and the Holy Spirit is the medium.

While the figure of Jesus Christ and his redemptive work are at the center of Pentecostal theology, that redemptive work is believed to provide for a fullness of the Holy Spirit of which believers in Christ may take advantage. The majority of Pentecostals believe that at the moment a person is born again, the new believer has the presence (indwelling) of the Holy Spirit. While the Spirit dwells in every Christian, Pentecostals believe that all Christians should seek to be filled with him. The Spirit's "filling", "falling upon", "coming upon", or being "poured out upon" believers is called the baptism with the Holy Spirit. Pentecostals define it as a definite experience occurring after salvation whereby the Holy Spirit comes upon the believer to anoint and empower them for special service. It has also been described as "a baptism into the love of God". The main purpose of the experience is to grant power for Christian service. Other purposes include power for spiritual warfare (the Christian struggles against spiritual enemies and thus requires spiritual power), power for overflow (the believer's experience of the presence and power of God in their life flows out into the lives of others), and power for ability (to follow divine direction, to face persecution, to exercise spiritual gifts for the edification of the church, etc.).

Pentecostals believe that the baptism with the Holy Spirit is available to all Christians. Repentance from sin and being born again are fundamental requirements to receive it. There must also be in the believer a deep conviction of needing more of God in their life, and a measure of consecration by which the believer yields themself to the will of God. Citing instances in the Book of Acts where believers were Spirit baptized before they were baptized with water, most Pentecostals believe a Christian need not have been baptized in water to receive Spirit baptism. However, Pentecostals do believe that the biblical pattern is "repentance, regeneration, water baptism, and then the baptism with the Holy Ghost". There are Pentecostal believers who have claimed to receive their baptism with the Holy Spirit while being water baptized. It is received by having faith in God's promise to fill the believer and in yielding the entire being to Christ. Certain conditions, if present in a believer's life, could cause delay in receiving Spirit baptism, such as "weak faith, unholy living, imperfect consecration, and egocentric motives".
In the absence of these, Pentecostals teach that seekers should maintain a persistent faith in the knowledge that God will fulfill his promise. For Pentecostals, there is no prescribed manner in which a believer will be filled with the Spirit. It could be expected or unexpected, during public or private prayer. Pentecostals expect certain results following baptism with the Holy Spirit. Some of these are immediate while others are enduring or permanent. Most Pentecostal denominations teach that speaking in tongues is an immediate or initial physical evidence that one has received the experience. Some teach that any of the gifts of the Spirit can be evidence of having received Spirit baptism. Other immediate evidences include giving God praise, having joy, and desiring to testify about Jesus. Enduring or permanent results in the believer's life include Christ glorified and revealed in a greater way, a "deeper passion for souls", greater power to witness to nonbelievers, a more effective prayer life, greater love for and insight into the Bible, and the manifestation of the gifts of the Spirit. Holiness Pentecostals, with their background in the Wesleyan-Holiness movement, historically teach that baptism with the Holy Spirit, as evidenced by glossolalia, is the third work of grace, which follows the new birth (first work of grace) and entire sanctification (second work of grace).

While the baptism with the Holy Spirit is a definite experience in a believer's life, Pentecostals view it as just the beginning of living a Spirit-filled life. Pentecostal teaching stresses the importance of continually being filled with the Spirit. There is only one baptism with the Spirit, but there should be many infillings with the Spirit throughout the believer's life.

Divine healing
Pentecostalism is a holistic faith, and the belief that Jesus is Healer is one quarter of the full gospel. Pentecostals cite four major reasons for believing in divine healing: 1) it is reported in the Bible, 2) Jesus' healing ministry is included in his atonement (thus divine healing is part of salvation), 3) "the whole gospel is for the whole person"—spirit, soul, and body, 4) sickness is a consequence of the Fall of Man and salvation is ultimately the restoration of the fallen world. In the words of Pentecostal scholar Vernon L. Purdy, "Because sin leads to human suffering, it was only natural for the Early Church to understand the ministry of Christ as the alleviation of human suffering, since he was God's answer to sin ... The restoration of fellowship with God is the most important thing, but this restoration not only results in spiritual healing but many times in physical healing as well." In the book In Pursuit of Wholeness: Experiencing God's Salvation for the Total Person, Pentecostal writer and Church historian Wilfred Graves Jr. describes the healing of the body as a physical expression of salvation. For Pentecostals, spiritual and physical healing serves as a reminder and testimony to Christ's future return when his people will be completely delivered from all the consequences of the fall. However, not everyone receives healing when they pray. It is God in his sovereign wisdom who either grants or withholds healing.
Common reasons that are given in answer to the question as to why all are not healed include: God teaches through suffering, healing is not always immediate, lack of faith on the part of the person needing healing, and personal sin in one's life (however, this does not mean that all illness is caused by personal sin). Regarding healing and prayer, Purdy states: Pentecostals believe that prayer and faith are central in receiving healing. Pentecostals look to scriptures such as James 5:13–16 for direction regarding healing prayer. One can pray for one's own healing (verse 13) and for the healing of others (verse 16); no special gift or clerical status is necessary. Verses 14–16 supply the framework for congregational healing prayer. The sick person expresses their faith by calling for the elders of the church, who pray over and anoint the sick with olive oil. The oil is a symbol of the Holy Spirit.

Besides prayer, there are other ways in which Pentecostals believe healing can be received. One way is based on Mark 16:17–18 and involves believers laying hands on the sick. This is done in imitation of Jesus, who often healed in this manner. Another method that is found in some Pentecostal churches is based on the account in Acts 19:11–12, where people were healed when given handkerchiefs or aprons worn by the Apostle Paul. This practice is described by Duffield and Van Cleave in Foundations of Pentecostal Theology.

During the initial decades of the movement, Pentecostals thought it was sinful to take medicine or receive care from doctors. Over time, Pentecostals moderated their views concerning medicine and doctor visits; however, a minority of Pentecostal churches continues to rely exclusively on prayer and divine healing. For example, doctors in the United Kingdom reported that a minority of Pentecostal HIV patients were encouraged to stop taking their medicines and parents were told to stop giving medicine to their children, trends that placed lives at risk.

Eschatology
The last element of the gospel is that Jesus is the "Soon Coming King". For Pentecostals, "every moment is eschatological" since at any time Christ may return. This "personal and imminent" Second Coming is for Pentecostals the motivation for practical Christian living including: personal holiness, meeting together for worship, faithful Christian service, and evangelism (both personal and worldwide). Globally, Pentecostal attitudes to the End Times range from enthusiastic participation in the prophecy subculture to a complete lack of interest through to the more recent, optimistic belief in the coming restoration of God's kingdom. Historically, however, they have been premillennial dispensationalists believing in a pretribulation rapture. Pre-tribulation rapture theology was popularized extensively in the 1830s by John Nelson Darby, and further popularized in the United States in the early 20th century by the wide circulation of the Scofield Reference Bible.

Spiritual gifts
Pentecostals are continuationists, meaning they believe that all of the spiritual gifts, including the miraculous or "sign gifts", found in 1 Corinthians 12:4–11, 12:27–31, Romans 12:3–8, and Ephesians 4:7–16 continue to operate within the Church in the present time. Pentecostals place the gifts of the Spirit in context with the fruit of the Spirit. The fruit of the Spirit is the result of the new birth and continuing to abide in Christ. It is by the fruit exhibited that spiritual character is assessed.
Spiritual gifts are received as a result of the baptism with the Holy Spirit. As gifts freely given by the Holy Spirit, they cannot be earned or merited, and they are not appropriate criteria with which to evaluate one's spiritual life or maturity. Pentecostals see in the biblical writings of Paul an emphasis on having both character and power, exercising the gifts in love. Just as fruit should be evident in the life of every Christian, Pentecostals believe that every Spirit-filled believer is given some capacity for the manifestation of the Spirit. The exercise of a gift is considered to be a manifestation of the Spirit, not of the gifted person, and though the gifts operate through people, they are primarily gifts given to the Church. They are valuable only when they minister spiritual profit and edification to the body of Christ. Pentecostal writers point out that the lists of spiritual gifts in the New Testament do not seem to be exhaustive. It is generally believed that there are as many gifts as there are useful ministries and functions in the Church. A spiritual gift is often exercised in partnership with another gift. For example, in a Pentecostal church service, the gift of tongues might be exercised followed by the operation of the gift of interpretation.

According to Pentecostals, all manifestations of the Spirit are to be judged by the church. This is made possible, in part, by the gift of discerning of spirits, which is the capacity for discerning the source of a spiritual manifestation—whether from the Holy Spirit, an evil spirit, or from the human spirit. While Pentecostals believe in the current operation of all the spiritual gifts within the church, their teaching on some of these gifts has generated more controversy and interest than others. There are different ways in which the gifts have been grouped. W. R. Jones suggests three categories: illumination (word of wisdom, word of knowledge, discerning of spirits), action (faith, working of miracles, and gifts of healings), and communication (prophecy, tongues, and interpretation of tongues). Duffield and Van Cleave use two categories: the vocal and the power gifts.

Vocal gifts
The gifts of prophecy, tongues, interpretation of tongues, and words of wisdom and knowledge are called the vocal gifts. Pentecostals look to 1 Corinthians 14 for instructions on the proper use of the spiritual gifts, especially the vocal ones. Pentecostals believe that prophecy is the vocal gift of preference, a view derived from 1 Corinthians 14. Some teach that the gift of tongues is equal to the gift of prophecy when tongues are interpreted. Prophetic and glossolalic utterances are not to replace the preaching of the Word of God nor to be considered as equal to or superseding the written Word of God, which is the final authority for determining teaching and doctrine.

Word of wisdom and word of knowledge
Pentecostals understand the word of wisdom and the word of knowledge to be supernatural revelations of wisdom and knowledge by the Holy Spirit. The word of wisdom is defined as a revelation of the Holy Spirit that applies scriptural wisdom to a specific situation that a Christian community faces. The word of knowledge is often defined as the ability of one person to know what God is currently doing or intends to do in the life of another person.

Prophecy
Pentecostals agree with the Protestant principle of sola Scriptura. The Bible is the "all sufficient rule for faith and practice"; it is "fixed, finished, and objective revelation".
Alongside this high regard for the authority of scripture is a belief that the gift of prophecy continues to operate within the Church. Pentecostal theologians Duffield and Van Cleave described the gift of prophecy in the following manner: "Normally, in the operation of the gift of prophecy, the Spirit heavily anoints the believer to speak forth to the body not premeditated words, but words the Spirit supplies spontaneously in order to uplift and encourage, incite to faithful obedience and service, and to bring comfort and consolation." Any Spirit-filled Christian, according to Pentecostal theology, has the potential, as with all the gifts, to prophesy. Sometimes, prophecy can overlap with preaching "where great unpremeditated truth or application is provided by the Spirit, or where special revelation is given beforehand in prayer and is empowered in the delivery".

While a prophetic utterance at times might foretell future events, this is not the primary purpose of Pentecostal prophecy, and it is never to be used for personal guidance. For Pentecostals, prophetic utterances are fallible, i.e. subject to error. Pentecostals teach that believers must discern whether the utterance has edifying value for themselves and the local church. Because prophecies are subject to the judgement and discernment of other Christians, most Pentecostals teach that prophetic utterances should never be spoken in the first person (e.g. "I, the Lord") but always in the third person (e.g. "Thus saith the Lord" or "The Lord would have...").

Tongues and interpretation
A Pentecostal believer in a spiritual experience may vocalize fluent, unintelligible utterances (glossolalia) or articulate a natural language previously unknown to them (xenoglossy). Commonly termed "speaking in tongues", this vocal phenomenon is believed by Pentecostals to include an endless variety of languages. According to Pentecostal theology, the language spoken (1) may be an unlearned human language, such as the Bible claims happened on the Day of Pentecost, or (2) it might be of heavenly (angelic) origin. In the first case, tongues could work as a sign by which witness is given to the unsaved. In the second case, tongues are used for praise and prayer when the mind is superseded and "the speaker in tongues speaks to God, speaks mysteries, and ... no one understands him".

Within Pentecostalism, there is a belief that speaking in tongues serves two functions. Tongues as the initial evidence of the third work of grace, baptism with the Holy Spirit, and in individual prayer serves a different purpose than tongues as a spiritual gift. All Spirit-filled believers, according to initial evidence proponents, will speak in tongues when baptized in the Spirit and, thereafter, will be able to express prayer and praise to God in an unknown tongue. This type of tongue speaking forms an important part of many Pentecostals' personal daily devotions. When used in this way, it is referred to as a "prayer language" as the believer is speaking unknown languages not for the purpose of communicating with others but for "communication between the soul and God". Its purpose is for the spiritual edification of the individual. Pentecostals believe the private use of tongues in prayer (i.e. "prayer in the Spirit") "promotes a deepening of the prayer life and the spiritual development of the personality".
From Romans 8:26–27, Pentecostals believe that the Spirit intercedes for believers through tongues; in other words, when a believer prays in an unknown tongue, the Holy Spirit is supernaturally directing the believer's prayer.

Besides acting as a prayer language, tongues also function as the gift of tongues. Not all Spirit-filled believers possess the gift of tongues. Its purpose is for gifted persons to publicly "speak with God in praise, to pray or sing in the Spirit, or to speak forth in the congregation". There is a division among Pentecostals on the relationship between the gifts of tongues and prophecy. One school of thought believes that the gift of tongues is always directed from man to God, in which case it is always prayer or praise spoken to God but in the hearing of the entire congregation for encouragement and consolation. Another school of thought believes that the gift of tongues can be prophetic, in which case the believer delivers a "message in tongues"—a prophetic utterance given under the influence of the Holy Spirit—to a congregation.

Whether prophetic or not, however, Pentecostals are agreed that all public utterances in an unknown tongue must be interpreted in the language of the gathered Christians. This is accomplished by the gift of interpretation, and this gift can be exercised by the same individual who first delivered the message (if he or she possesses the gift of interpretation) or by another individual who possesses the required gift. If a person with the gift of tongues is not sure that a person with the gift of interpretation is present and is unable to interpret the utterance themself, then the person should not speak. Pentecostals teach that those with the gift of tongues should pray for the gift of interpretation. Pentecostals do not require that an interpretation be a literal word-for-word translation of a glossolalic utterance. Rather, as the word "interpretation" implies, Pentecostals expect only an accurate explanation of the utterance's meaning.

Besides the gift of tongues, Pentecostals may also use glossolalia as a form of praise and worship in corporate settings. Pentecostals in a church service may pray aloud in tongues while others pray simultaneously in the common language of the gathered Christians. This use of glossolalia is seen as an acceptable form of prayer and therefore requires no interpretation. Congregations may also corporately sing in tongues, a phenomenon known as singing in the Spirit.

Speaking in tongues is not universal among Pentecostal Christians. In 2006, a ten-country survey by the Pew Forum on Religion and Public Life found that 49 percent of Pentecostals in the US, 50 percent in Brazil, 41 percent in South Africa, and 54 percent in India said they "never" speak or pray in tongues.

Power gifts
The gifts of power are distinct from the vocal gifts in that they do not involve utterance. Included in this category are the gift of faith, gifts of healing, and the gift of miracles. The gift of faith (sometimes called "special" faith) is different from "saving faith" and normal Christian faith in its degree and application. This type of faith is a manifestation of the Spirit granted only to certain individuals "in times of special crisis or opportunity" and endues them with "a divine certainty ... that triumphs over everything". It is sometimes called the "faith of miracles" and is fundamental to the operation of the other two power gifts.
Trinitarianism and Oneness
During the 1910s, the Finished Work Pentecostal movement split over the nature of the Godhead into two camps – Trinitarian and Oneness. The Oneness doctrine viewed the doctrine of the Trinity as polytheistic. The majority of Pentecostal denominations believe in the doctrine of the Trinity, which is considered by them to be Christian orthodoxy; these include Holiness Pentecostals and Finished Work Pentecostals. Oneness Pentecostals are nontrinitarian Christians, believing in the Oneness theology about God.

In Oneness theology, the Godhead is not three persons united by one substance, but one God who reveals himself in three different modes. Thus, God relates himself to humanity as our Father within creation, he manifests himself in human form as the Son by virtue of his incarnation as Jesus Christ (1 Timothy 3:16), and he is the Holy Spirit (John 4:24) by way of his activity in the life of the believer. Oneness Pentecostals believe that Jesus is the name of God and therefore baptize in the name of Jesus Christ, as performed by the apostles (Acts 2:38), fulfilling the instructions left by Jesus Christ in the Great Commission (Matthew 28:19). They believe that Jesus is the only name given to mankind by which we must be saved (Acts 4:12). The Oneness doctrine may be considered a form of Modalism, an ancient teaching considered heresy by the Roman Catholic Church and other trinitarian denominations.

In contrast, Trinitarian Pentecostals hold to the doctrine of the Trinity; that is, the Godhead is not seen as simply three modes or titles of God manifest at different points in history, but is constituted of three completely distinct persons who are co-eternal with each other and united as one substance. The Son exists from all eternity and became incarnate as Jesus; likewise, the Holy Spirit exists from all eternity, and both are with the eternal Father from all eternity.

Worship
Traditional Pentecostal worship has been described as a "gestalt made up of prayer, singing, sermon, the operation of the gifts of the Spirit, altar intercession, offering, announcements, testimonies, musical specials, Scripture reading, and occasionally the Lord's supper". Russell P. Spittler identified five values that govern Pentecostal spirituality. The first was individual experience, which emphasizes the Holy Spirit's personal work in the life of the believer. Second was orality, a feature that might explain Pentecostalism's success in evangelizing nonliterate cultures. The third was spontaneity; members of Pentecostal congregations are expected to follow the leading of the Holy Spirit, sometimes resulting in unpredictable services. The fourth value governing Pentecostal spirituality was "otherworldliness" or asceticism, which was partly informed by Pentecostal eschatology. The fifth and final value was a commitment to biblical authority, and many of the distinctive practices of Pentecostals are derived from a literal reading of scripture.

Spontaneity is a characteristic element of Pentecostal worship. This was especially true in the movement's earlier history, when anyone could initiate a song, chorus, or spiritual gift. Even as Pentecostalism has become more organized and formal, with more control exerted over services, the concept of spontaneity has retained an important place within the movement and continues to inform stereotypical imagery, such as the derogatory "holy roller".
The phrase "Quench not the Spirit", derived from 1 Thessalonians 5:19, is used commonly and captures the thought behind Pentecostal spontaneity. Prayer plays an important role in Pentecostal worship. Collective oral prayer, whether glossolalic or in the vernacular or a mix of both, is common. While praying, individuals may lay hands on a person in need of prayer, or they may raise their hands in response to biblical commands (1 Timothy 2:8). The raising of hands (which itself is a revival of the ancient orans posture) is an example of some Pentecostal worship practices that have been widely adopted by the larger Christian world. Pentecostal musical and liturgical practice have also played an influential role in shaping contemporary worship trends, popularized by the leading producers of Christian music from artists such as Chris Tomlin, Michael W. Smith, Zach Williams, Darlene Zschech, Matt Maher, Phil Wickham, Grace Larson, Don Moen and bands such as Hillsong Worship, Bethel Worship, Jesus Culture and Sovereign Grace Music. Several spontaneous practices have become characteristic of Pentecostal worship. Being "slain in the Spirit" or "falling under the power" is a form of prostration in which a person falls backwards, as if fainting, while being prayed over. It is at times accompanied by glossolalic prayer; at other times, the person is silent. It is believed by Pentecostals to be caused by "an overwhelming experience of the presence of God", and Pentecostals sometimes receive the baptism in the Holy Spirit in this posture. Another spontaneous practice is "dancing in the Spirit". This is when a person leaves their seat "spontaneously 'dancing' with eyes closed without bumping into nearby persons or objects". It is explained as the worshipper becoming "so enraptured with God's presence that the Spirit takes control of physical motions as well as the spiritual and emotional being". Pentecostals derive biblical precedent for dancing in worship from 2 Samuel 6, where David danced before the Lord. A similar occurrence is often called "running the aisles". The "Jericho march" (inspired by Book of Joshua 6:1–27) is a celebratory practice occurring at times of high enthusiasm. Members of a congregation began to spontaneously leave their seats and walk in the aisles inviting other members as they go. Eventually, a full column forms around the perimeter of the meeting space as worshipers march with singing and loud shouts of praise and jubilation. Another spontaneous manifestation found in some Pentecostal churches is holy laughter, in which worshippers uncontrollably laugh. In some Pentecostal churches, these spontaneous expressions are primarily found in revival services (especially those that occur at tent revivals and camp meetings) or special prayer meetings, being rare or non-existent in the main services. Ordinances Like other Christian churches, Pentecostals believe that certain rituals or ceremonies were instituted as a pattern and command by Jesus in the New Testament. Pentecostals commonly call these ceremonies ordinances. Many Christians call these sacraments, but this term is not generally used by Pentecostals and certain other Protestants as they do not see ordinances as imparting grace. Instead the term sacerdotal ordinance is used to denote the distinctive belief that grace is received directly from God by the congregant with the officiant serving only to facilitate rather than acting as a conduit or vicar. 
The ordinance of water baptism is an outward symbol of an inner conversion that has already taken place. Therefore, most Pentecostal groups practice believer's baptism by immersion. The majority of Pentecostals do not view baptism as essential for salvation, and likewise, most Pentecostals are Trinitarian and use the traditional Trinitarian baptismal formula. However, Oneness Pentecostals view baptism as an essential and necessary part of the salvation experience and, as non-Trinitarians, reject the use of the traditional baptismal formula. For more information on Oneness Pentecostal baptismal beliefs, see the following section on Statistics and denominations.

The ordinance of Holy Communion, or the Lord's Supper, is seen as a direct command given by Jesus at the Last Supper, to be done in remembrance of him. Pentecostal denominations, which traditionally support the temperance movement, reject the use of wine as part of communion, using grape juice instead.

Certain Pentecostal denominations observe the ordinance of women's headcovering in obedience to 1 Corinthians 11. Foot washing is also held as an ordinance by some Pentecostals. It is considered an "ordinance of humility" because Jesus showed humility when washing his disciples' feet in John 13:14–17. Other Pentecostals do not consider it an ordinance; however, they may still recognize spiritual value in the practice.

Statistics and denominations
According to various scholars and sources, Pentecostalism is the fastest-growing religious movement in the world; this growth is primarily due to religious conversion to Pentecostal and Charismatic Christianity. According to the Pulitzer Center, 35,000 people become Pentecostal or "born again" every day. According to scholar Keith Smith of Georgia State University, "many scholars claim that Pentecostalism is the fastest growing religious phenomenon in human history", and according to scholar Peter L. Berger of Boston University, "the spread of Pentecostal Christianity may be the fastest growing movement in the history of religion".

In 1995, David Barrett estimated there were 217 million "Denominational Pentecostals" throughout the world. In 2011, a Pew Forum study of global Christianity found that there were an estimated 279 million classical Pentecostals, making 4 percent of the total world population and 12.8 percent of the world's Christian population Pentecostal. The study found "Historically Pentecostal denominations" (a category that did not include independent Pentecostal churches) to be the largest Protestant denominational family. The largest percentage of Pentecostals are found in Sub-Saharan Africa (44 percent), followed by the Americas (37 percent) and Asia and the Pacific (16 percent). The movement is witnessing its greatest surge today in the global South, which includes Africa, Central and Latin America, and most of Asia.

There are 740 recognized Pentecostal denominations, but the movement also has a significant number of independent churches that are not organized into denominations. Among the over 700 Pentecostal denominations, 240 are classified as part of Wesleyan, Holiness, or "Methodistic" Pentecostalism. Until 1910, Pentecostalism was universally Wesleyan in doctrine, and Holiness Pentecostalism continues to predominate in the Southern United States. Wesleyan Pentecostals teach that there are three crisis experiences within a Christian's life: conversion, sanctification, and Spirit baptism. They inherited the holiness movement's belief in entire sanctification.
According to Wesleyan Pentecostals, entire sanctification is a definite event that occurs after salvation but before Spirit baptism. This inward experience cleanses and enables the believer to live a life of outward holiness. This personal cleansing prepares the believer to receive the baptism in the Holy Spirit. Holiness Pentecostal denominations include the Apostolic Faith Church, Calvary Holiness Association, Congregational Holiness Church, Free Gospel Church, Church of God in Christ, Church of God (Cleveland, Tennessee), and the Pentecostal Holiness Church. In the United States, many Holiness Pentecostal clergy are educated at the Heritage Bible College in Savannah, Georgia and the Free Gospel Bible Institute in Murrysville, Pennsylvania. After William H. Durham began preaching his Finished Work doctrine in 1910, many Pentecostals rejected the Wesleyan doctrine of entire sanctification and began to teach that there were only two definite crisis experiences in the life of a Christian: conversion and Spirit baptism. These Finished Work Pentecostals (also known as "Baptistic" or "Reformed" Pentecostals because many converts were originally drawn from Baptist and Presbyterian backgrounds) teach that a person is initially sanctified at the moment of conversion. After conversion, the believer grows in grace through a lifelong process of progressive sanctification. There are 390 denominations that adhere to the finished work position. They include the Assemblies of God, the Foursquare Gospel Church, the Pentecostal Church of God, and the Open Bible Churches. The 1904–1905 Welsh Revival laid the foundation for British Pentecostalism including a distinct family of denominations known as Apostolic Pentecostalism (not to be confused with Oneness Pentecostalism). These Pentecostals are led by a hierarchy of living apostles, prophets, and other charismatic offices. Apostolic Pentecostals are found worldwide in 30 denominations, including the Apostolic Church based in the United Kingdom. There are 80 Pentecostal denominations that are classified as Jesus' Name or Oneness Pentecostalism (often self identifying as "Apostolic Pentecostals"). These differ from the rest of Pentecostalism in several significant ways. Oneness Pentecostals reject the doctrine of the Trinity. They do not describe God as three persons but rather as three manifestations of the one living God. Oneness Pentecostals practice Jesus' Name Baptism—water baptisms performed in the name of Jesus Christ, rather than that of the Trinity. Oneness Pentecostal adherents believe repentance, baptism in Jesus' name, and Spirit baptism are all essential elements of the conversion experience. Oneness Pentecostals hold that repentance is necessary before baptism to make the ordinance valid, and receipt of the Holy Spirit manifested by speaking in other tongues is necessary afterwards, to complete the work of baptism. This differs from other Pentecostals, along with evangelical Christians in general, who see only repentance and faith in Christ as essential to salvation. This has resulted in Oneness believers being accused by some (including other Pentecostals) of a "works-salvation" soteriology, a charge they vehemently deny. Oneness Pentecostals insist that salvation comes by grace through faith in Christ, coupled with obedience to his command to be "born of water and of the Spirit"; hence, no good works or obedience to laws or rules can save anyone. 
For them, baptism is not seen as a "work" but rather the indispensable means that Jesus himself provided to come into his kingdom. The major Oneness churches include the United Pentecostal Church International and the Pentecostal Assemblies of the World.

In addition to the denominational Pentecostal churches, there are many Pentecostal churches that choose to exist independently of denominational oversight. Some of these churches may be doctrinally identical to the various Pentecostal denominations, while others may adopt beliefs and practices that differ considerably from classical Pentecostalism, such as Word of Faith teachings or Kingdom Now theology. Some of these groups have been successful in utilizing the mass media, especially television and radio, to spread their message.

According to a denominational census in 2022, the Assemblies of God, the largest Pentecostal denomination in the world, has 367,398 churches and 53,700,000 members worldwide. The other major international Pentecostal denominations are the Apostolic Church with 15,000,000 members, the Church of God (Cleveland) with 36,000 churches and 7,000,000 members, and The Foursquare Church with 67,500 churches and 8,800,000 members. Among the censuses published by Pentecostal denominations in 2020, the denominations claiming the most members on each continent were:
In Africa, the Redeemed Christian Church of God, with 14,000 churches and 5 million members.
In North America, the Assemblies of God USA with 12,986 churches and 1,810,093 members.
In South America, the General Convention of the Assemblies of God in Brazil with 12,000,000 members.
In Asia, the Indonesian Bethel Church with 5,000 churches and 3,000,000 members.
In Europe, the Assemblies of God of France with 658 churches and 40,000 members.
In Oceania, the Australian Christian Churches with 1,000 churches and 375,000 members.

Assessment from the social sciences

Zora Neale Hurston
Zora Neale Hurston performed anthropological and sociological studies examining the spread of Pentecostalism, published posthumously in a collection of essays called The Sanctified Church. According to scholar of religion Ashon Crawley, Hurston's analysis is important because she understood the class struggle that this seemingly new religiocultural movement articulated: "The Sanctified Church is a protest against the high-brow tendency in Negro Protestant congregations as the Negroes gain more education and wealth." She stated that this sect was "a revitalizing element in Negro music and religion" and that this collection of groups was "putting back into Negro religion those elements which were brought over from Africa and grafted onto Christianity." Crawley would go on to argue that the shouting that Hurston documented was evidence of what Martinican psychoanalyst Frantz Fanon called the refusal of positionality, wherein "no strategic position is given preference" as the creation of, and the grounds for, social form.

Rural Pentecostalism
Pentecostalism is a religious phenomenon more visible in the cities. However, it has attracted significant rural populations in Latin America, Africa, and Eastern Europe. Sociologist David Martin has called attention to rural Protestantism in Latin America in an overview focusing on indigenous and peasant conversion to Pentecostalism. The cultural change resulting from the modernization of the countryside has been reflected in the peasant way of life.
Consequently, many peasants – especially in Latin America – have experienced collective conversion to different forms of Pentecostalism, which has been interpreted as a response to modernization in the countryside. Rather than a mere religious shift from folk Catholicism to Pentecostalism, peasant Pentecostals have exercised agency, employing many of their cultural resources to respond to development projects within a modernization framework.

Researching Guatemalan peasants and indigenous communities, Sheldon Annis argued that conversion to Pentecostalism was a way to quit the burdensome obligations of the cargo system. Mayan folk Catholicism has many fiestas with a rotating leadership that must pay the costs and organize the yearly patron-saint festivities. One of the socially accepted ways to opt out of those obligations was to convert to Pentecostalism. By doing so, the Pentecostal peasant engages in a "penny capitalism". Along the same lines of moral obligation, but with a different mechanism of economic self-help, Paul Chandler has compared the differences between Catholic and Pentecostal peasants and has found a web of reciprocity among Catholic compadres which the Pentecostals lacked. However, Alves has found that the different Pentecostal congregations replace the compadrazgo system and still provide channels to exercise the reciprocal obligations that the peasant moral economy demands.

Conversion to Pentecostalism provides a rupture with a socially disrupted past while allowing converts to maintain elements of the peasant ethos. Brazil has provided many cases to evaluate this thesis. Hoekstra has found that rural Pentecostalism functions more as a continuity of the traditional past, though with some ruptures. Anthropologist Brandão sees small-town and rural Pentecostalism as another face of folk religiosity rather than a path to modernization. With similar findings, Abumanssur regards Pentecostalism as an attempt to reconcile traditional worldviews of folk religion with modernity.

Identity shift has been noticed among rural converts to Pentecostalism. Indigenous and peasant communities have found in the Pentecostal religion a new identity that helps them navigate the challenges posed by modernity. This identity shift corroborates the thesis that peasant Pentecostals pave their own ways when facing modernization.

Controversies and criticism
Various Christian groups have criticized the Pentecostal and charismatic movement for paying too much attention to mystical manifestations, such as glossolalia (which, for a believer, would be the obligatory sign of a baptism with the Holy Spirit), along with falls to the ground, moans and cries during worship services, as well as for anti-intellectualism.

A particularly controversial doctrine in the Evangelical Churches is that of prosperity theology, which spread in the 1970s and 1980s in the United States, mainly through Pentecostal and charismatic televangelists. This doctrine is centered on the teaching of Christian faith as a means to enrich oneself financially and materially through a "positive confession" and a contribution to Christian ministries. Promises of divine healing and prosperity are guaranteed in exchange for certain amounts of donation. Some pastors threaten those who do not tithe with curses, attacks from the devil and poverty. The collections of offerings are multiple or separated into various baskets or envelopes to stimulate the contributions of the faithful. The offerings and the tithe occupy a lot of time in some worship services.
Often associated with the mandatory tithe, this doctrine is sometimes compared to a religious business. In 2012, the National Council of Evangelicals of France published a document denouncing this doctrine, mentioning that prosperity was indeed possible for a believer, but that this theology taken to the extreme leads to materialism and to idolatry, which is not the purpose of the gospel. Pentecostal pastors adhering to prosperity theology have been criticized by journalists for their lavish lifestyles (luxury clothes, big houses, high-end cars, private aircraft, etc.).

In Pentecostalism, rifts accompanied the teaching of faith healing. In some churches, the charging of fees for prayers promising healing has been observed. Some pastors and evangelists have been charged with claiming false healings. Some churches have advised their members against vaccination or other medicine, stating that it is for those weak in the faith and that with a positive confession, they would be immune from the disease. Pentecostal churches that discourage the use of medicine have caused preventable deaths, sometimes leading to parents being sentenced to prison for the deaths of their children. This position is not representative of most Pentecostal churches. "The Miraculous Healing", published in 2015 by the National Council of Evangelicals of France, describes medicine as one of the gifts given by God to humanity. Churches and certain evangelical humanitarian organizations are also involved in medical health programs.

In Pentecostalism, health and sickness are understood holistically: they are not only physical but also psychological and spiritual, with disease and despair attributed to evil forces such as the devil and sin. Through spiritual intimacy with the divine, any illness can be healed. Today's Pentecostals accept biomedicine but believe that divine intervention heals even where biomedicine fails. Critics argue that Pentecostal healers induce merely a placebo effect. In the anthropological view, the placebo effect reflects that humans are a "socio-psycho-physical entity", so that symbolic manipulation, i.e. strong belief in divinity, rituals, and miracles, may induce an actual healing effect on the physical level.

In Pentecostalism, healing is mainly a spiritual experience. Being worthy of intimacy with God (i.e. the ability to confess and to forgive) produces a cathartic effect that leads to inner healing: acceptance of self and healing of memories, which cleanse the roots of emotional wounds and suffering. Divine healing also addresses socially born distresses, such as gender inequalities and racial hatreds, which often emerge as physical ailments, by empowering patients.

Pentecostalism's divine healing practice is frequently dismissed as emotional or illogical, yet anthropological research reveals its profound cultural and social significance. Hefner demonstrates how deeply rooted Pentecostal healing is in the social structures of countries like Indonesia. Healing techniques are not superstitious; rather, they are an expression of, and response to, local power structures and social hierarchies. By functioning as tools for negotiating social roles and upholding communal ties, these activities address genuine social issues and are not isolated from ordinary life. Meyer shows how Pentecostal healing is similarly entangled with broader cultural frameworks; by giving participants a sense of empowerment, it dispels the notion that these activities are only a means of escape and group support.
The focus on divine healing frequently reflects cultural perspectives on health and wellbeing, in which social and spiritual dimensions are vital to comprehending and regulating health. Even when Pentecostals occasionally choose not to receive conventional medical care, this decision is influenced by a larger cultural framework that places a high importance on spiritual guidance and communal support.

Pentecostalism's doctrinal belief in faith healing has been criticized for more than just its aversion to modern medicine; the tendency to attribute mental health conditions to demonic activity has also been questioned. Faith healing has been the subject of numerous studies, with many scholars aiming either to debunk or to prove the effects of its use. One study, however, aimed instead to explore faith healing as a social construction. The study tracked the healing progress of over 100 participants. Throughout the study, a significant number of participants went on to redefine their problem; for example, one woman's original problem was severe back pain, but at the conclusion of the study she claimed that her troublesome relationship with her son had been healed. Further, the study found a correlation between those who redefined their issue and those who were more likely to claim they were healed. These findings suggest that these healings may not be the result of divine intervention. Glik stated that "redefinitions may mirror therapeutic processes of self-discovery", meaning these acts of faith healing may simply be the result of self-growth or of the restructuring of one's worldview as one progresses into a religious worldview.

People

Forerunners
William Boardman (1810–1886)
Alexander Boddy (1854–1930)
John Alexander Dowie (1848–1907)
Henry Drummond (1786–1860)
Edward Irving (1792–1834)
Andrew Murray (1828–1917)
Phoebe Palmer (1807–1874)
Jessie Penn-Lewis (1861–1927)
Evan Roberts (1878–1951)
Albert Benjamin Simpson (1843–1919)
Richard Green Spurling, father (1810–1891) and son (1857–1935)
James Haldane Stewart (1778–1854)

Leaders
A. A. Allen (1911–1970) – Healing tent evangelist of the 1950s and 1960s
Yiye Ávila (1925–2013) – Puerto Rican Pentecostal evangelist of the late 20th century
Joseph Ayo Babalola (1904–1959) – Oke-Ooye, Ilesa revivalist in 1930, and spiritual founder of Christ Apostolic Church
Reinhard Bonnke (1940–2019) – Evangelist
William M. Branham (1909–1965) – American healing evangelist of the mid-20th century, generally acknowledged as initiating the post-World War II healing revival
David Yonggi Cho (1936–2021) – Senior pastor and founder of the Yoido Full Gospel Church (Assemblies of God) in Seoul, South Korea, the world's largest congregation
Jack Coe (1918–1956) – Healing tent evangelist of the 1950s
Donnie Copeland (born 1961) – Pastor of Apostolic Church of North Little Rock, Arkansas, and Republican member of the Arkansas House of Representatives
Margaret Court (born 1942) – Tennis champion in the 1960s and 1970s and founder of Victory Life Centre in Perth, Australia; became a pastor in 1991
Luigi Francescon (1866–1964) – Missionary and pioneer of the Italian Pentecostal Movement
Donald Gee (1891–1966) – Early Pentecostal bible teacher in the UK; "the apostle of balance"
Benny Hinn (born 1952) – Evangelist
Rex Humbard (1919–2007) – TV evangelist, 1950s–1970s
George Jeffreys (1889–1962) – Founder of the Elim Foursquare Gospel Alliance and the Bible-Pattern Church Fellowship (UK)
E. W. Kenyon (1867–1948) – A major leader in what became the Word of Faith movement; had a particularly strong influence on Kenneth Hagin's theology and ministry
Kathryn Kuhlman (1907–1976) – Evangelist who brought Pentecostalism into the mainstream denominations
Gerald Archie Mangun (1919–2010) – American evangelist and pastor who built one of the largest churches within the United Pentecostal Church International
Charles Harrison Mason (1864–1961) – Founder of the Church of God in Christ
James McKeown (1937–1982) – Irish missionary in Ghana, founder of The Church of Pentecost
Aimee Semple McPherson (1890–1944) – Evangelist, pastor, and organizer of the International Church of the Foursquare Gospel
Charles Fox Parham (1873–1929) – Father of the Apostolic Faith movement
David du Plessis (1905–1987) – South African Pentecostal church leader, one of the founders of the Charismatic movement
Oral Roberts (1918–2009) – Healing tent evangelist who made the transition to televangelism
Bishop Ida Robinson (1891–1946) – Founder of the Mount Sinai Holy Church of America
William J. Seymour (1870–1922) – Father of Global and Modern Pentecostalism, Azusa Street Mission founder (Azusa Street Revival)
Jimmy Swaggart (born 1935) – TV evangelist, pastor, musician
Ambrose Jessup ("AJ") Tomlinson (1865–1943) – Leader of the "Church of God" movement from 1903 until 1923, and of a minority grouping (now called Church of God of Prophecy) from 1923 until his death in 1943
Smith Wigglesworth (1859–1947) – British evangelist
Maria Woodworth-Etter (1844–1924) – Healing evangelist
See also
Cessationism versus Continuationism
Direct revelation
List of Pentecostal and Full Gospel Churches
Redemption Hymnal
Renewal theologian
Snake handling in Christianity
Worship
References
Bibliography
Ross, Thomas D. "The Doctrine of Sanctification." Ph.D. diss., Great Plains Baptist Divinity School, 2015.
Further reading
Alexander, Paul. Peace to War: Shifting Allegiances in the Assemblies of God. Telford, Pennsylvania: Cascadia Publishing/Herald Press, 2009.
Alexander, Paul. Signs and Wonders: Why Pentecostalism is the World's Fastest Growing Faith. San Francisco, California: Jossey-Bass, 2009.
Blanton, Anderson. Hittin' the Prayer Bones: Materiality of Spirit in the Pentecostal South. (U of North Carolina Press, 2015) 222 pp
Brewster, P. S. Pentecostal Doctrine. Grenehurst Press, United Kingdom, May 1976.
Campbell, Marne L. "'The Newest Religious Sect Has Started in Los Angeles': Race, Class, Ethnicity, and the Origins of the Pentecostal Movement, 1906–1913", The Journal of African American History 95#1 (2010), pp. 1–25 in JSTOR
Clement, Arthur J. Pentecost or Pretense?: an Examination of the Pentecostal and Charismatic Movements. Milwaukee, Wis.: Northwestern Publishing House, 1981. 255 [1] p.
Clifton, Shane Jack. "An Analysis of the Developing Ecclesiology of the Assemblies of God in Australia". PhD thesis, Australian Catholic University, 2005.
Cruz, Samuel. Masked Africanisms: Puerto Rican Pentecostalism. Kendall/Hunt Publishing Company, 2005.
Hollenweger, Walter. The Pentecostals: The Charismatic Movement in the Churches. Minneapolis: Augsburg Publishing House, 1972. 255, [1] p.
Hollenweger, Walter. Pentecostalism: Origins and Developments Worldwide. Peabody, Massachusetts: Hendrickson Publishers, 1997.
Knox, Ronald. Enthusiasm: a Chapter in the History of Religion, with Special Reference to the XVII and XVIII Centuries. Oxford, Eng.: Oxford University Press, 1950. viii, 622 pp.
Lewis, Meharry H. Mary Lena Lewis Tate: Vision!, A Biography of the Founder and History of the Church of the Living God, the Pillar and Ground of the Truth, Inc. Nashville, Tennessee: The New and Living Way Publishing Company, 2005.
Malcomson, Keith. Pentecostal Pioneers Remembered: British and Irish Pioneers of Pentecost. 2008.
Mendiola, Kelly Willis. The Hand of a Woman: Four Holiness-Pentecostal Evangelists and American Culture, 1840–1930. PhD thesis, University of Texas at Austin, 2002. OCLC 56818195
Miller, Donald E. and Tetsunao Yamamori. Global Pentecostalism: The New Face of Christian Social Engagement. Berkeley, California: University of California Press, 2007.
Olowe, Abi Olowe. Great Revivals, Great Revivalist – Joseph Ayo Babalola. Omega Publishers, 2007.
Ramírez, Daniel. Migrating Faith: Pentecostalism in the United States and Mexico in the Twentieth Century (2015)
Robins, R. G. A. J. Tomlinson: Plainfolk Modernist. New York, NY: Oxford University Press, 2004.
Robins, R. G. Pentecostalism in America. Santa Barbara, CA: Praeger/ABC-CLIO, 2010.
Steel, Matthew. "Pentecostalism in Zambia: Power, Authority and the Overcomers". MSc dissertation, University of Wales, 2005.
Woodberry, Robert. "Pentecostalism and Economic Development", in Markets, Morals and Religion, ed. Jonathan B. Imber, 157–177. New Brunswick, New Jersey: Transaction Publishers, 2008.
External links
"The Rise of Pentecostalism", Christian History 58 (1998) special issue. Two earlier special issues of this magazine addressed Pentecostalism's roots: "Spiritual Awakenings in North America" (issue 23, 1989) and "Camp Meetings & Circuit Riders: Frontier Revivals" (issue 45, 1995)
The European Research Network on Global Pentecostalism – Multi-user academic website providing reliable information about Pentecostalism and networking current interdisciplinary research; hosts a dedicated web search engine for Pentecostal studies
Flower Pentecostal Heritage Center – One of the largest collections of materials documenting the global Pentecostal movement, including searchable databases of periodicals, photographs, and other items
The Holiness Messenger: a Holiness Pentecostal periodical
Holiness Pentecostal church directory
Pentecostal History 20th-century Protestantism 21st-century Protestantism Christian terminology Religious belief systems founded in the United States
23558
https://en.wikipedia.org/wiki/Pangenesis
Pangenesis
Pangenesis was Charles Darwin's hypothetical mechanism for heredity, in which he proposed that each part of the body continually emitted its own type of small organic particles called gemmules that aggregated in the gonads, contributing heritable information to the gametes. He presented this 'provisional hypothesis' in his 1868 work The Variation of Animals and Plants Under Domestication, intending it to fill what he perceived as a major gap in evolutionary theory at the time. The etymology of the word comes from the Greek words pan (a prefix meaning "whole", "encompassing") and genesis ("birth") or genos ("origin"). Pangenesis mirrored ideas originally formulated by Hippocrates and other pre-Darwinian scientists, but made use of new concepts such as cell theory, explaining cell development as beginning with gemmules, which were held to be necessary for the occurrence of new growths in an organism, both in initial development and in regeneration. It also accounted for regeneration and the Lamarckian concept of the inheritance of acquired characteristics, as a body part altered by the environment would produce altered gemmules. This made pangenesis popular among the neo-Lamarckian school of evolutionary thought. The hypothesis was made effectively obsolete after the 1900 rediscovery among biologists of Gregor Mendel's theory of the particulate nature of inheritance.
Early history
Pangenesis was similar to ideas put forth by Hippocrates, Democritus and other pre-Darwinian scientists in proposing that the whole of parental organisms participate in heredity (thus the prefix pan). Darwin wrote that Hippocrates' pangenesis was "almost identical with mine—merely a change of terms—and an application of them to classes of facts necessarily unknown to the old philosopher." The historian of science Conway Zirkle demonstrated that the idea of inheritance of acquired characteristics had become fully accepted by the 16th century and remained immensely popular through to the time of Lamarck's work, at which point it began to draw more criticism due to lack of hard evidence. He also stated that pangenesis was the only scientific explanation ever offered for this concept, developing from Hippocrates' belief that "the semen was derived from the whole body."
In the 13th century, pangenesis was commonly accepted on the principle that semen was a refined version of food unused by the body, and by the 15th and 16th centuries this had translated into widespread use of pangenetic principles in medical literature, especially in gynecology. Other important pre-Darwinian applications of the idea included hypotheses about the origin of the differentiation of races. A theory put forth by Pierre Louis Maupertuis in 1745 called for particles from both parents governing the attributes of the child, although some historians have called his remarks on the subject cursory and vague. In 1749, the French naturalist Georges-Louis Leclerc, Comte de Buffon developed a hypothetical system of heredity much like Darwin's pangenesis, wherein 'organic molecules' were transferred to offspring during reproduction and stored in the body during development. Commenting on Buffon's views, Darwin stated, "If Buffon had assumed that his organic molecules had been formed by each separate unit throughout the body, his view and mine would have been very closely similar." In 1801, Erasmus Darwin advocated a hypothesis of pangenesis in the third edition of his book Zoonomia.
In 1809, Jean-Baptiste Lamarck in his Philosophie Zoologique put forth evidence for the idea that characteristics acquired during the lifetime of an organism, from either environmental or behavioural effects, may be passed on to the offspring. Charles Darwin first had significant contact with Lamarckism during his time at the University of Edinburgh Medical School in the late 1820s, both through Robert Edmond Grant, whom he assisted in research, and in Erasmus Darwin's journals. Darwin's first known writings on the topic of Lamarckian ideas as they related to inheritance are found in a notebook he opened in 1837, also entitled Zoonomia. The historian Jonathan Hodge states that the theory of pangenesis itself first appeared in Darwin's notebooks in 1841. In 1861, the Irish physician Henry Freke developed a variant of pangenesis in his book Origin of Species by Means of Organic Affinity. Freke proposed that all life was developed from microscopic organic agents which he named granules, which existed as 'distinct species of organizing matter' and would develop into different biological structures. Four years before the publication of Variation, in his 1864 book Principles of Biology, Herbert Spencer proposed a theory of "physiological units" similar to Darwin's gemmules, which likewise were said to be related to specific body parts and responsible for the transmission of characteristics of those body parts to offspring. He supported the Lamarckian idea of transmission of acquired characteristics. Darwin had debated for an extended period of time whether to publish a theory of heredity, due to its highly speculative nature. He decided to include pangenesis in Variation after sending a 30-page manuscript to his close friend and supporter Thomas Huxley in May 1865, which was met by significant criticism from Huxley that made Darwin even more hesitant. However, Huxley eventually advised Darwin to publish, writing: "Somebody rummaging among your papers half a century hence will find Pangenesis & say 'See this wonderful anticipation of our modern Theories—and that stupid ass, Huxley, prevented his publishing them'". Darwin's initial version of pangenesis appeared in the first edition of Variation in 1868, and was later reworked for the publication of a second edition in 1875.
Theory
Darwin
Darwin's pangenesis theory attempted to explain the process of sexual reproduction, inheritance of traits, and complex developmental phenomena such as cellular regeneration in a unified mechanistic structure. Yongsheng Liu wrote that in modern terms, pangenesis deals with issues of "dominance inheritance, graft hybridization, reversion, xenia, telegony, the inheritance of acquired characters, regeneration and many groups of facts pertaining to variation, inheritance and development." Mechanistically, Darwin proposed pangenesis to occur through the transfer of organic particles which he named 'gemmules.' Gemmules, which he also sometimes referred to as pangenes, granules, or germs, were supposed to be shed by the organs of the body and carried in the bloodstream to the reproductive organs, where they accumulated in the germ cells or gametes. Their accumulation was thought to occur by some sort of 'mutual affinity.' Each gemmule was said to be specifically related to a certain body part; as described, gemmules did not contain information about the entire organism. The different types were assumed to be dispersed through the whole body, and capable of self-replication given 'proper nutriment'.
When passed on to offspring via the reproductive process, gemmules were thought to be responsible for developing into each part of an organism and expressing characteristics inherited from both parents. Darwin thought this to occur in a literal sense: he explained cell proliferation as progressing by gemmules binding to more developed cells of the same character and then maturing. In this sense, the uniqueness of each individual would be due to their unique mixture of their parents' gemmules, and therefore characters. Similarity to one parent over the other could be explained by a quantitative superiority of one parent's gemmules. Yongsheng Liu points out that Darwin knew of cells' ability to multiply by self-division, so it is unclear how Darwin supposed the two proliferation mechanisms to relate to each other. He did clarify in a later statement that he had always supposed gemmules to only bind to and proliferate from developing cells, not mature ones. In a letter to J. D. Hooker in 1870, Darwin hypothesized that gemmules might be able to survive and multiply outside of the body.
Some gemmules were thought to remain dormant for generations, whereas others were routinely expressed by all offspring. Every child was built up from selective expression of the mixture of the parents' and grandparents' gemmules coming from either side. Darwin likened this to gardening: a flowerbed could be sprinkled with seeds "most of which soon germinate, some lie for a period dormant, whilst others perish." He did not claim gemmules were in the blood, although his theory was often interpreted in this way. Responding to Fleeming Jenkin's review of On the Origin of Species, he argued that pangenesis would permit the preservation of some favourable variations in a population so that they would not die out through blending. Darwin thought that environmental effects that caused altered characteristics would lead to altered gemmules for the affected body part. The altered gemmules would then have a chance of being transferred to offspring, since they were assumed to be produced throughout an organism's life. Thus, pangenesis theory allowed for the Lamarckian idea of transmission of characteristics acquired through use and disuse. Accidental gemmule development in incorrect parts of the body could explain deformations and the 'monstrosities' Darwin cited in Variation.
De Vries
Hugo de Vries characterized his own version of pangenesis theory in his 1889 book Intracellular Pangenesis with two propositions, of which he only accepted the first:
I. In the cells there are numberless particles which differ from each other, and represent the individual cells, organs, functions and qualities of the whole individual. These particles are much larger than the chemical molecules and smaller than the smallest known organisms; yet they are for the most part comparable to the latter, because, like them, they can divide and multiply through nutrition and growth. They are transmitted, during cell-division, to the daughter-cells: this is the ordinary process of heredity.
II. In addition to this, the cells of the organism, at every stage of development, throw off such particles, which are conducted to the germ-cells and transmit to them those characters which the respective cells may have acquired during development.
Other variants
The historian of science Janet Browne points out that while Spencer and Carl von Nägeli also put forth ideas for systems of inheritance involving gemmules, their version of gemmules differed from Darwin's in that it contained "a complete microscopic blueprint for an entire creature." Spencer published his theory of "physiological units" three years prior to Darwin's publication of Variation. Browne adds that Darwin believed specifically in gemmules from each body part because they might explain how environmental effects could be passed on as characteristics to offspring. Interpretations and applications of pangenesis continued to appear frequently in medical literature up until Weismann's experiments and subsequent publication on germ-plasm theory in 1892. For instance, an address by Huxley spurred on substantial work by Dr. James Ross in linking ideas found in Darwin's pangenesis to the germ theory of disease. Ross cited the work of both Darwin and Spencer as key to his application of pangenetic theory.
Collapse
Galton's experiments on rabbits
Darwin's half-cousin Francis Galton conducted wide-ranging inquiries into heredity which led him to refute Charles Darwin's hypothetical theory of pangenesis. In consultation with Darwin, he set out to see if gemmules were transported in the blood. In a long series of experiments from 1869 to 1871, he transfused blood between dissimilar breeds of rabbits and examined the features of their offspring. He found no evidence of characters transmitted in the transfused blood. Galton was troubled by this result, as he had begun the work in good faith, intending to prove Darwin right, and had praised pangenesis in Hereditary Genius in 1869. Cautiously, he criticized his cousin's theory, although qualifying his remarks by saying that Darwin's gemmules, which he called "pangenes", might be temporary inhabitants of the blood that his experiments had failed to pick up. Darwin challenged the validity of Galton's experiment, giving his reasons in an article published in Nature. After the circulation of Galton's results, the perception of pangenesis quickly changed to severe skepticism if not outright disbelief.
Weismann
August Weismann's idea, set out in his 1892 book Das Keimplasma: eine Theorie der Vererbung (The Germ Plasm: a Theory of Inheritance), was that the hereditary material, which he called the germ plasm, and the rest of the body (the soma) had a one-way relationship: the germ-plasm formed the body, but the body did not influence the germ-plasm, except indirectly in its participation in a population subject to natural selection. This distinction is commonly referred to as the Weismann barrier. If correct, this made Darwin's pangenesis wrong and Lamarckian inheritance impossible. His experiment on mice, cutting off their tails and showing that their offspring had normal tails across multiple generations, was proposed as a proof of the non-existence of Lamarckian inheritance, although Peter Gauthier has argued that Weismann's experiment showed only that injury did not affect the germ plasm and neglected to test the effect of Lamarckian use and disuse. Weismann argued strongly and dogmatically for Darwinism and against neo-Lamarckism, polarising opinions among other scientists. This increased anti-Darwinian feeling, contributing to the eclipse of Darwinism.
After pangenesis
Darwin's pangenesis theory was widely criticised, in part for its Lamarckian premise that parents could pass on traits acquired in their lifetime.
Conversely, the neo-Lamarckians of the time seized upon pangenesis as evidence to support their case. Italian Botanist Federico Delpino's objection that gemmules' ability to self-divide is contrary to their supposedly innate nature gained considerable traction; however, Darwin was dismissive of this criticism, remarking that the particulate agents of smallpox and scarlet fever seem to have such characteristics. Lamarckism fell from favour after August Weismann's research in the 1880s indicated that changes from use (such as lifting weights to increase muscle mass) and disuse (such as being lazy and becoming weak) were not heritable. However, some scientists continued to voice their support in spite of Galton's and Weismann's results: notably, in 1900 Karl Pearson wrote that pangenesis "is no more disproved by the statement that 'gemmules have not been found in the blood,' than the atomic theory is disproved by the fact that no atoms have been found in the air." Finally, the rediscovery of Mendel's Laws of Inheritance in 1900 led to pangenesis being fully set aside. Julian Huxley has observed that the later discovery of chromosomes and the research of T. H. Morgan also made pangenesis untenable. Some of Darwin's pangenesis principles do relate to heritable aspects of phenotypic plasticity, although the status of gemmules as a distinct class of organic particles has been firmly rejected. However, starting in the 1950s, many research groups in revisiting Galton's experiments found that heritable characteristics could indeed arise in rabbits and chickens following DNA injection or blood transfusion. This type of research originated in the Soviet Union in the late 1940s in the work of Sopikov and others, and was later corroborated by researchers in Switzerland as it was being further developed by the Soviet scientists. Notably, this work was supported in the USSR in part due to its conformation with the ideas of Trofim Lysenko, who espoused a version of neo-Lamarckism as part of Lysenkoism. Further research of this heritability of acquired characteristics developed into, in part, the modern field of epigenetics. Darwin himself had noted that "the existence of free gemmules is a gratuitous assumption"; by some accounts in modern interpretation, gemmules may be considered a prescient mix of DNA, RNA, proteins, prions, and other mobile elements that are heritable in a non-Mendelian manner at the molecular level. Liu points out that Darwin's ideas about gemmules replicating outside of the body are predictive of in vitro gene replication used, for instance, in PCR. See also Modern synthesis References External links On-line Facsimile Edition of The Variation of Animals and Plants Under Domestication from Electronic Scholarly Publishing Variation under Domestication, From: Freeman, R. B. 1977. The Works of Charles Darwin: An Annotated Bibliographical Handlist. 2nd edn. Dawson: Folkestone, at DarwinOnline, with links to online versions of the 1st. edition, first and second issues, and the 2nd. edition. Charles Darwin Developmental biology Evolutionary biology History of biology History of genetics Obsolete biology theories Lamarckism
23560
https://en.wikipedia.org/wiki/Proboscidea
Proboscidea
Proboscidean diversity: Indian elephant (Elephas maximus indicus), African bush elephant (Loxodonta africana) and African forest elephant (Loxodonta cyclotis); skeleton of Moeritherium.
Proboscidea is a taxonomic order of afrotherian mammals containing one living family (Elephantidae) and several extinct families. First described by J. Illiger in 1811, it encompasses the elephants and their close relatives. Three species of elephant are currently recognised: the African bush elephant, the African forest elephant, and the Asian elephant. Extinct members of Proboscidea include the deinotheres, mastodons, gomphotheres and stegodonts. The family Elephantidae also contains several extinct groups, including mammoths and Palaeoloxodon. Proboscideans include some of the largest known land mammals, with the elephant Palaeoloxodon namadicus and the mastodon "Mammut" borsoni suggested to have body masses surpassing , rivalling or exceeding paraceratheres (the otherwise largest known land mammals) in size. The largest extant proboscidean is the African bush elephant, with a world record size of at the shoulder and . In addition to their enormous size, later proboscideans are distinguished by tusks and long, muscular trunks, which were less developed or absent in early proboscideans.
Evolution
Over 180 extinct members of Proboscidea have been described. The earliest proboscideans, Eritherium and Phosphatherium, are known from the late Paleocene of Africa. The Eocene included Numidotherium, Moeritherium and Barytherium from Africa. These animals were relatively small and some, like Moeritherium and Barytherium, were probably amphibious. A major event in proboscidean evolution was the collision of Afro-Arabia with Eurasia during the Early Miocene, around 18–19 million years ago, allowing proboscideans to disperse from their African homeland across Eurasia and later, around 16–15 million years ago, into North America across the Bering Land Bridge. Proboscidean groups prominent during the Miocene include the deinotheres, along with the more advanced elephantimorphs, including mammutids (mastodons), gomphotheres, amebelodontids (which include the "shovel tuskers" like Platybelodon), choerolophodontids and stegodontids. Around 10 million years ago, the earliest members of the family Elephantidae emerged in Africa, having originated from gomphotheres. The Late Miocene saw major climatic changes, which resulted in the decline and extinction of many proboscidean groups such as amebelodontids and choerolophodontids. The earliest members of modern genera of Elephantidae appeared during the latest Miocene–early Pliocene, around 6–5 million years ago. The elephantid genera Elephas (which includes the living Asian elephant) and Mammuthus (mammoths) migrated out of Africa during the late Pliocene, around 3.6 to 3.2 million years ago. Over the course of the Early Pleistocene, all non-elephantid proboscideans outside of the Americas became extinct (including mammutids, gomphotheres and deinotheres), with the exception of Stegodon.
Gomphotheres dispersed into South America during this era as part of the Great American interchange, and mammoths migrated into North America around 1.5 million years ago. At the end of the Early Pleistocene, around 800,000 years ago, the elephantid genus Palaeoloxodon dispersed outside of Africa, becoming widely distributed in Eurasia. By the beginning of the Late Pleistocene, proboscideans were represented by around 23 species. Proboscideans underwent a dramatic decline during the Late Pleistocene as part of the Late Pleistocene megafauna extinctions, with all remaining non-elephantid proboscideans (including Stegodon, mastodons, and the American gomphotheres Cuvieronius and Notiomastodon) and Palaeoloxodon becoming extinct; mammoths survived only in relict populations on islands around the Bering Strait into the Holocene, with their latest survival being on Wrangel Island around 4,000 years ago. A cladogram of the order has been reconstructed based on endocasts.
Morphology
Over the course of their evolution, proboscideans experienced a significant increase in body size. Some members of the families Deinotheriidae, Mammutidae, Stegodontidae and Elephantidae are thought to have exceeded modern elephants in size, with shoulder heights over and masses over . Average fully grown males of the mammutid "Mammut" borsoni had an estimated body mass of , making it one of the largest and perhaps the largest land mammal ever; a fragmentary specimen of the Indian species Palaeoloxodon namadicus, known only from a partial femur, was speculatively estimated in the same study to have possibly reached a body mass of . As with other megaherbivores, including the extinct sauropod dinosaurs, the large size of proboscideans likely developed to allow them to survive on vegetation with low nutritional value. Their limbs grew longer and the feet shorter and broader. The feet were originally plantigrade and developed into a digitigrade stance with cushion pads and the sesamoid bone providing support, with this change developing around the common ancestor of Deinotheriidae and Elephantiformes. Members of Elephantiformes have retracted nasal regions of the skull, indicating the development of a trunk, as well as well-developed tusks on the upper and lower jaws. The skull grew larger, especially the cranium, while the neck shortened to provide better support for the skull. The increase in size led to the development and elongation of the mobile trunk to provide reach. The number of premolars, incisors and canines decreased. The cheek teeth (molars and premolars) became larger and more specialised. In Elephantiformes, the second upper incisor and lower incisor were transformed into ever-growing tusks. The tusks are proportionally heavy for their size, being primarily composed of dentine. In primitive proboscideans, a band of enamel covers part of the tusk surface, though in many later groups, including modern elephants, the band is lost, with elephants only having enamel on the tusk tips of juveniles. The upper tusks were initially modest in size, but from the Late Miocene onwards proboscideans developed increasingly large tusks, with the longest tusk ever recorded, belonging to the mammutid "Mammut" borsoni found in Greece, being long, and with some mammoth tusks likely weighing over .
The lower tusks are generally smaller than the upper tusks, but could grow to large sizes in some species: in Deinotherium (which lacks upper tusks) they could grow over long; the amebelodontid Konobelodon had lower tusks long; and the longest lower tusks ever recorded, from the primitive elephantid Stegotetrabelodon, are around long. The molar teeth changed from being replaced vertically, as in other mammals, to being replaced horizontally in the clade Elephantimorpha. While early Elephantimorpha generally had lower jaws with an elongated mandibular symphysis at the front of the jaw with well-developed lower tusks/incisors, from the Late Miocene onwards many groups convergently developed brevirostrine (shortened) lower jaws with vestigial or no lower tusks. Elephantids are distinguished from other proboscideans by a major shift in molar morphology to parallel lophs rather than the cusps of earlier proboscideans, allowing them to become higher-crowned (hypsodont) and more efficient in consuming grass.
Dwarfism
Several species of proboscideans lived on islands and experienced insular dwarfism. This occurred primarily during the Pleistocene, when some elephant populations became isolated by fluctuating sea levels, although dwarf elephants did exist earlier in the Pliocene. These elephants likely grew smaller on islands due to a lack of large or viable predator populations and limited resources. By contrast, small mammals such as rodents develop gigantism in these conditions. Dwarf proboscideans are known to have lived in Indonesia, the Channel Islands of California, and several islands of the Mediterranean. Elephas celebensis of Sulawesi is believed to have descended from Elephas planifrons. Elephas falconeri of Malta and Sicily was only , and had probably evolved from the straight-tusked elephant. Other descendants of the straight-tusked elephant existed in Cyprus. Dwarf elephants of uncertain descent lived in Crete, the Cyclades and the Dodecanese, while dwarf mammoths are known to have lived in Sardinia. The Columbian mammoth colonised the Channel Islands and evolved into the pygmy mammoth. This species reached a height of and weighed . A population of small woolly mammoths survived on Wrangel Island as recently as 4,000 years ago. After their discovery in 1993, they were considered dwarf mammoths. This classification has been re-evaluated and, since the Second International Mammoth Conference in 1999, these animals are no longer considered to be true "dwarf mammoths".
Ecology
It has been suggested that members of Elephantimorpha, including mammutids, gomphotheres, and stegodontids, lived in herds like modern elephants. Analysis of remains of the American mastodon (Mammut americanum) suggests that, like modern elephants, herds consisted of females and juveniles, that adult males lived solitarily or in small groups, and that adult males periodically engaged in fights with other males during periods similar to the musth found in living elephants. These traits are suggested to have been inherited from the last common ancestor of elephantimorphs, with musth-like behaviour also suggested to have occurred in gomphotheres. All elephantimorphs are suggested to have been capable of communication via infrasound, as found in living elephants. Deinotheres may have also lived in herds, based on tracks found in the Late Miocene of Romania. Over the course of the Neogene and Pleistocene, various members of Elephantida shifted from a browse-dominated diet towards mixed feeding or grazing.
Classification
Below is an unranked taxonomy of proboscidean genera as of 2019.
Order Proboscidea Illiger, 1811
 †Eritherium Gheerbrant, 2009
 †Moeritherium Andrews, 1901
 †Saloumia Tabuce et al., 2019
 †Family Numidotheriidae Shoshani & Tassy, 1992
  †Phosphatherium Gheerbrant et al., 1996
  †Arcanotherium Delmer, 2009
  †Daouitherium Gheerbrant & Sudre, 2002
  †Numidotherium Mahboubi et al., 1986
 †Family Barytheriidae Andrews, 1906
  †Omanitherium Seiffert et al., 2012
  †Barytherium Andrews, 1901
 †Family Deinotheriidae Bonaparte, 1845
  †Chilgatherium Sanders et al., 2004
  †Prodeinotherium Ehik, 1930
  †Deinotherium Kaup, 1829
 Suborder Elephantiformes Tassy, 1988
  †Eritreum Shoshani et al., 2006
  †Hemimastodon Pilgrim, 1912
  †Palaeomastodon Andrews, 1901
  †Phiomia Andrews & Beadnell, 1902
  Infraorder Elephantimorpha Tassy & Shoshani, 1997
   †Family Mammutidae Hay, 1922
    †Losodokodon Rasmussen & Gutierrez, 2009
    †Eozygodon Tassy & Pickford, 1983
    †Zygolophodon Vacek, 1877
    †Sinomammut Mothé et al., 2016
    †Mammut Blumenbach, 1799
   Parvorder Elephantida Tassy & Shoshani, 1997
    †Family Choerolophodontidae Gaziry, 1976
     †Afrochoerodon Pickford, 2001
     †Choerolophodon Schlesinger, 1917
    †Family Amebelodontidae Barbour, 1927
     †Afromastodon Pickford, 2003
     †Progomphotherium Pickford, 2003
     †Eurybelodon Lambert, 2016
     †Serbelodon Frick, 1933
     †Archaeobelodon Tassy, 1984
     †Protanancus Arambourg, 1945
     †Amebelodon Barbour, 1927
     †Konobelodon Lambert, 1990
     †Torynobelodon Barbour, 1929
     †Aphanobelodon Wang et al., 2016
     †Platybelodon Borissiak, 1928
    †Family Gomphotheriidae Hay, 1922 (paraphyletic)
     "trilophodont gomphotheres"
      †Gomphotherium Burmeister, 1837
      †Blancotherium May, 2019
      †Gnathabelodon Barbour & Sternberg, 1935
      †Eubelodon Barbour, 1914
      †Stegomastodon Pohlig, 1912
      †Sinomastodon Tobien et al., 1986
      †Notiomastodon Cabrera, 1929
      †Rhynchotherium Falconer, 1868
      †Cuvieronius Osborn, 1923
     "tetralophodont gomphotheres"
      †Anancus Aymard, 1855
      †Paratetralophodon Tassy, 1983
      †Pediolophodon Lambert, 2007
      †Tetralophodon Falconer, 1857
    Superfamily Elephantoidea Gray, 1821
     †Family Stegodontidae Osborn, 1918
      †Stegolophodon Schlesinger, 1917
      †Stegodon Falconer, 1857
     Family Elephantidae Gray, 1821
      †Stegodibelodon Coppens, 1972
      †Stegotetrabelodon Petrocchi, 1941
      †Selenotherium Mackaye, Brunet & Tassy, 2005
      †Primelephas Maglio, 1970
      Loxodonta Anonymous, 1827
      †Palaeoloxodon Matsumoto, 1924
      †Mammuthus Brookes, 1828
      Elephas Linnaeus, 1758
References
Bibliography
Mammal orders Selandian first appearances Taxa named by Johann Karl Wilhelm Illiger Extant Selandian first appearances
23561
https://en.wikipedia.org/wiki/Paranthropus
Paranthropus
Paranthropus is a genus of extinct hominin which contains two widely accepted species: P. robustus and P. boisei. However, the validity of Paranthropus is contested, and it is sometimes considered to be synonymous with Australopithecus. They are also referred to as the robust australopithecines. They lived between approximately 2.9 and 1.2 million years ago (mya) from the end of the Pliocene to the Middle Pleistocene. Paranthropus is characterised by robust skulls, with a prominent gorilla-like sagittal crest along the midline—which suggest strong chewing muscles—and broad, herbivorous teeth used for grinding. However, they likely preferred soft food over tough and hard food. Typically, Paranthropus species were generalist feeders, but while P. robustus was likely an omnivore, P. boisei seems to have been herbivorous, possibly preferring abundant bulbotubers. Paranthropoids were bipeds. Despite their robust heads, they had comparatively small bodies. Average weight and height are estimated to be at for P. robustus males, at for P. boisei males, at for P. robustus females, and at for P. boisei females. They were possibly polygamous and patrilocal, but there are no modern analogues for australopithecine societies. They are associated with bone tools and contested as the earliest evidence of fire usage. They typically inhabited woodlands, and coexisted with some early human species, namely A. africanus, H. habilis and H. erectus. They were preyed upon by the large carnivores of the time, specifically crocodiles, leopards, sabertoothed cats and hyenas. Taxonomy Species P. robustus The genus Paranthropus was first erected by Scottish-South African palaeontologist Robert Broom in 1938, with the type species P. robustus. "Paranthropus" derives from Ancient Greek παρα para beside or alongside; and άνθρωπος ánthropos man. The type specimen, a male braincase, TM 1517, was discovered by schoolboy Gert Terblanche at the Kromdraai fossil site, about southwest of Pretoria, South Africa. By 1988, at least six individuals were unearthed in around the same area, now known as the Cradle of Humankind. In 1948, at Swartkrans Cave, in about the same vicinity as Kromdraai, Broom and South African palaeontologist John Talbot Robinson described P. crassidens based on a subadult jaw, SK 6. He believed later Paranthropus were morphologically distinct from earlier Paranthropus in the cave—that is, the Swartkrans Paranthropus were reproductively isolated from Kromdraai Paranthropus and the former eventually speciated. By 1988, several specimens from Swartkrans had been placed into P. crassidens. However, this has since been synonymised with P. robustus as the two populations do not seem to be very distinct. P. boisei In 1959, P. boisei was discovered by Mary Leakey at Olduvai Gorge, Tanzania (specimen OH 5). Her husband Louis named it Zinjanthropus boisei because he believed it differed greatly from Paranthropus and Australopithecus. The name derives from "Zinj", an ancient Arabic word for the coast of East Africa, and "boisei", referring to their financial benefactor Charles Watson Boise. However, this genus was rejected at Mr. Leakey's presentation before the 4th Pan-African Congress on Prehistory, as it was based on a single specimen. The discovery of the Peninj Mandible made the Leakeys reclassify their species as Australopithecus (Zinjanthropus) boisei in 1964, but in 1967, South African palaeoanthropologist Phillip V. Tobias subsumed it into Australopithecus as A. boisei. 
However, as more specimens were found, the combination Paranthropus boisei became more popular. It is debated whether the wide range of variation in jaw size indicates simply sexual dimorphism or grounds for identifying a new species. The variation could be explained as groundmass filling in cracks naturally formed after death, inflating the perceived size of the bone. P. boisei also has a notably wide range of variation in skull anatomy, but these features likely have no taxonomic bearing.
P. aethiopicus
In 1968, French palaeontologists Camille Arambourg and Yves Coppens described "Paraustralopithecus aethiopicus" based on a toothless mandible from the Shungura Formation, Ethiopia (Omo 18). In 1976, American anthropologist Francis Clark Howell and Coppens reclassified it as A. africanus. In 1986, after the discovery of the skull KNM WT 17000 by English anthropologist Alan Walker, he and Richard Leakey classified it into Paranthropus as P. aethiopicus. There is debate whether this is synonymous with P. boisei, the main argument for separation being that the skull seems less adapted for chewing tough vegetation. In 1989, palaeoartist and zoologist Walter Ferguson reclassified KNM WT 17000 into a new species, walkeri, because he considered the skull's species designation questionable, as the specimen comprised a skull whereas the holotype of P. aethiopicus comprised only a mandible. Ferguson's classification is almost universally ignored, and walkeri is considered to be synonymous with P. aethiopicus.
Others
In 2015, Ethiopian palaeoanthropologist Yohannes Haile-Selassie and colleagues described the 3.5–3.2 Ma A. deyiremeda based on three jawbones from the Afar Region, Ethiopia. They noted that, though it shares many similarities with Paranthropus, it may not have been closely related because it lacked the enlarged molars which characterize the genus. Nonetheless, in 2018, independent researcher Johan Nygren recommended moving it to Paranthropus based on dental and presumed dietary similarity.
Validity
In 1951, American anthropologists Sherwood Washburn and Bruce D. Patterson were the first to suggest that Paranthropus should be considered a junior synonym of Australopithecus, as the former was only known from fragmentary remains at the time, and dental differences were too minute to serve as justification. In the face of calls for subsumption, Leakey and Robinson continued defending its validity. Various other authors were still unsure until more complete remains were found. Paranthropus is sometimes classified as a subgenus of Australopithecus. There is currently no clear consensus on the validity of Paranthropus. The argument rests upon whether the genus is monophyletic—is composed of a common ancestor and all of its descendants—and the argument against monophyly (that the genus is paraphyletic) says that P. robustus and P. boisei evolved similar gorilla-like heads independently of each other by coincidence (convergent evolution), as chewing adaptations in hominins evolve very rapidly and multiple times at various points in the family tree (homoplasy). In 1999, a chimp-like ulna (forearm bone) was assigned to P. boisei, the first discovered ulna of the species; it was markedly different from P. robustus ulnae, which could suggest paraphyly.
Evolution
P. aethiopicus is the earliest member of the genus, with the oldest remains, from the Ethiopian Omo Kibish Formation, dated to 2.6 mya at the end of the Pliocene. It is sometimes regarded as the direct ancestor of P. boisei and P. robustus.
It is possible that P. aethiopicus evolved even earlier, up to 3.3 mya, on the expansive Kenyan floodplains of the time. The oldest P. boisei remains date to about 2.3 mya from Malema, Malawi. P. boisei changed remarkably little over its nearly one-million-year existence. Paranthropus had spread into South Africa by 2 mya with the earliest P. robustus remains. It is sometimes suggested that Paranthropus and Homo are sister taxa, both evolving from Australopithecus. This may have occurred during a drying trend 2.8–2.5 mya in the Great Rift Valley, which caused the retreat of woodland environments in favor of open savanna, with forests growing only along rivers and lakes. Homo evolved in the open savanna, while Paranthropus evolved in the riparian woodland environment. However, the classification of Australopithecus species is problematic. An evolutionary tree of the genus was proposed in a 2019 study.
Description
Skull
Paranthropus had a massively built, tall and flat skull, with a prominent gorilla-like sagittal crest along the midline which anchored large temporalis muscles used in chewing. Like other australopithecines, Paranthropus exhibited sexual dimorphism, with males notably larger than females. They had large molars with a relatively thick tooth enamel coating (post-canine megadontia), and comparatively small incisors (similar in size to modern humans), possibly adaptations to processing abrasive foods. The teeth of P. aethiopicus developed faster than those of P. boisei. Paranthropus had adaptations of the skull to resist large bite loads while feeding, namely the expansive squamosal sutures. The notably thick palate was once thought to have been an adaptation to resist a high bite force, but is better explained as a byproduct of facial lengthening and nasal anatomy. In P. boisei, the jaw hinge was adapted to grinding food side-to-side (rather than up-and-down as in modern humans), which is better at processing the starchy abrasive foods that likely made up the bulk of its diet. P. robustus may have chewed in a front-to-back direction instead, and had less exaggerated (less derived) anatomical features than P. boisei, as it perhaps did not require them with this kind of chewing strategy. This may have also allowed P. robustus to better process tougher foods. The braincase volume averaged about , comparable to gracile australopithecines, but smaller than Homo. Modern human brain volume averages for men and for women.
Limbs and locomotion
Unlike those of P. robustus, the forearms of P. boisei were heavily built, which might suggest habitual suspensory behaviour as in orangutans and gibbons. A P. boisei shoulder blade indicates long infraspinatus muscles, which is also associated with suspensory behaviour. A P. aethiopicus ulna, on the other hand, shows more similarities to Homo than to P. boisei. Paranthropus were bipeds, and their hips, legs and feet resemble those of A. afarensis and modern humans. The pelvis is similar to that of A. afarensis, but the hip joints are smaller in P. robustus. The physical similarity implies a similar walking gait. Their modern-humanlike big toe indicates a modern-humanlike foot posture and range of motion, but the more distal ankle joint would have inhibited the modern human toe-off gait cycle. By 1.8 mya, Paranthropus and H. habilis may have achieved about the same grade of bipedality.
Height and weight
In comparison to the large, robust head, the body was rather small. Average weight for P. robustus may have been for males and for females; and for P. boisei for males and for females.
At Swartkrans Cave Members 1 and 2, about 35% of the P. robustus individuals are estimated to have weighed , 22% about , and the remaining 43% bigger than the former but less than . At Member 3, all individuals were about . Female weight was about the same in contemporaneous H. erectus, but male H. erectus were on average heavier than P. robustus males. P. robustus sites are oddly dominated by small adults, which could be explained as heightened predation or mortality of the larger males of a group. The largest-known Paranthropus individual was estimated at . According to a 1991 study, based on femur length and using the dimensions of modern humans, male and female P. robustus are estimated to have stood on average , respectively, and P. boisei . However, the latter estimates are problematic as there were no positively identified male P. boisei femurs at the time. In 2013, a 1.34 Ma male P. boisei partial skeleton was estimated to be at least and . Pathology Paranthropus seems to have had notably high rates of pitting enamel hypoplasia (PEH), where tooth enamel formation is spotty instead of mostly uniform. In P. robustus, about 47% of baby teeth and 14% of adult teeth were affected, in comparison to about 6.7% and 4.3%, respectively, in any other tested hominin species. The condition of these holes covering the entire tooth is consistent with the modern human ailment amelogenesis imperfecta. However, since circular holes in enamel coverage are uniform in size, only present on the molar teeth, and have the same severity across individuals, the PEH may have been a genetic condition. It is possible that the coding-DNA concerned with thickening enamel also left them more vulnerable to PEH. There have been 10 identified cases of cavities in P. robustus, indicating a rate similar to modern humans. A molar from Drimolen, South Africa, showed a cavity on the tooth root, a rare occurrence in fossil great apes. In order for cavity-creating bacteria to reach this area, the individual would have had to have also presented either alveolar resportion, which is commonly associated with gum disease; or super-eruption of teeth which occurs when teeth become worn down and have to erupt a bit more in order to maintain a proper bite, and this exposed the root. The latter is most likely, and the exposed root seems to have caused hypercementosis to anchor the tooth in place. The cavity seems to have been healing, which may have been caused by a change in diet or mouth microbiome, or the loss of the adjacent molar. Palaeobiology Diet It was once thought P. boisei cracked open nuts with its powerful teeth, giving OH 5 the nickname "Nutcracker Man". However, like gorillas, Paranthropus likely preferred soft foods, but would consume tough or hard food during leaner times, and the powerful jaws were used only in the latter situation. In P. boisei, thick enamel was more likely used to resist abrasive gritty particles rather than to minimize chipping while eating hard foods. In fact, there is a distinct lack of tooth fractures which would have resulted from such activity. Paranthropus were generalist feeders, but diet seems to have ranged dramatically with location. The South African P. robustus appears to have been an omnivore, with a diet similar to contemporaneous Homo and nearly identical to the later H. ergaster, and subsisted on mainly C4 savanna plants and C3 forest plants, which could indicate either seasonal shifts in diet or seasonal migration from forest to savanna. 
In leaner times it may have fallen back on brittle food. It likely also consumed seeds and possibly tubers or termites. A high cavity rate could indicate honey consumption. The East African P. boisei, on the other hand, seems to have been largely herbivorous and fed on C4 plants. Its powerful jaws allowed it to consume a wide variety of different plants, though it may have largely preferred nutrient-rich bulbotubers, as these are known to thrive in the well-watered woodlands it is thought to have inhabited. Feeding on these, P. boisei may have been able to meet its daily caloric requirements of approximately 9,700 kJ after about 6 hours of foraging. Juvenile P. robustus may have relied more on tubers than adults, given the elevated levels of strontium, compared to adults, in teeth from Swartkrans Cave; in that area, strontium was most likely sourced from tubers. Dentin exposure on juvenile teeth could indicate early weaning, or a more abrasive diet than that of adults which wore away the cementum and enamel coatings, or both. It is also possible juveniles were less capable of removing grit from dug-up food rather than purposefully seeking out more abrasive foods.
Technology
Oldowan toolkits, stone tools used to pound and shape other rocks or plant materials, were uncovered at an excavation site on the Homa Peninsula in western Kenya. These tools are thought to be between 2.6 and 3 million years old and were found near Paranthropus teeth. Bone tools dating between 2.3 and 0.6 mya have been found in abundance in Swartkrans, Kromdraai and Drimolen caves, and are often associated with P. robustus. Though Homo is also known from these caves, its remains are scarce compared to those of Paranthropus, making Homo attribution unlikely. The bone tools also co-occur with the Homo-associated Oldowan and possibly Acheulian stone tool industries. The bone tools were typically sourced from the shaft of long bones from medium- to large-sized mammals, but tools sourced from mandibles, ribs and horn cores have also been found. Bone tools have also been found at Olduvai Gorge and directly associated with P. boisei, the youngest dating to 1.34 mya, though a great proportion of other bone tools from there have ambiguous attribution. Stone tools from Kromdraai could possibly be attributed to P. robustus, as no Homo remains have been found there yet. The bone tools were not manufactured or purposefully shaped for a task. However, since the bones display no weathering (and were not scavenged randomly), and there is a preference displayed for certain bones, raw materials were likely specifically hand-picked. This could indicate a similar cognitive ability to contemporary Stone Age Homo. Bone tools may have been used to cut or process vegetation, or to dig up tubers or termites. The form of P. robustus incisors appears to be intermediate between H. erectus and modern humans, which could indicate less food processing done by the teeth due to preparation with simple tools. Burnt bones were also associated with the inhabitants of Swartkrans, which could indicate some of the earliest fire usage. However, these bones were found in Member 3, where Paranthropus remains are rarer than those of H. erectus, and it is also possible the bones were burned in a wildfire and washed into the cave, as it is known the bones were not burned onsite.
Social structure
Given the marked anatomical and physical differences from modern great apes, there may be no modern analogue for australopithecine societies, so comparisons drawn with modern primates will not be entirely accurate. Paranthropus had pronounced sexual dimorphism, with males notably larger than females, which is commonly correlated with a male-dominated polygamous society. P. robustus may have had a harem society similar to modern forest-dwelling silverback gorillas, where one male has exclusive breeding rights to a group of females, as male–female size disparity is comparable to that of gorillas (based on facial dimensions), and younger males were less robust than older males (delayed maturity is also exhibited in gorillas). However, if P. robustus preferred a savanna habitat, a multi-male society would have been more productive to better defend the troop from predators in the more exposed environment, much like savanna baboons. Further, among primates, delayed maturity is also exhibited in the rhesus monkey, which has a multi-male society, and so may not be an accurate indicator of social structure. A 2011 strontium isotope study of P. robustus teeth from the dolomitic Sterkfontein Valley found that, like other hominins, but unlike other great apes, P. robustus females were more likely to leave their place of birth (patrilocality). This also discounts the plausibility of a harem society, which would have resulted in a matrilocal society due to heightened male–male competition. Males did not seem to have ventured very far from the valley, which could either indicate small home ranges, or that they preferred dolomitic landscapes perhaps due to cave abundance or factors related to vegetation growth.
Life history
Dental development seems to have followed about the same timeframe as it does in modern humans and most other hominins, but, since Paranthropus molars are markedly larger, the rate of tooth eruption would have been accelerated. Their life history may have mirrored that of gorillas, as they have the same brain volume; gorillas (depending on the subspecies) reach physical maturity at 12–18 years and have birthing intervals of 40–70 months.
Palaeoecology
Habitat
It is generally thought that Paranthropus preferred to inhabit wooded, riverine landscapes. The teeth of Paranthropus, H. habilis and H. erectus are all known from various overlapping beds in East Africa, such as at Olduvai Gorge and the Turkana Basin. P. robustus and H. erectus also appear to have coexisted. P. boisei, known from the Great Rift Valley, may have typically inhabited wetlands along lakes and rivers, wooded or arid shrublands, and semiarid woodlands, though its presence in the savanna-dominated Malawian Chiwondo Beds implies it could tolerate a range of habitats. During the Pleistocene, there seem to have been coastal and montane forests in Eastern Africa. More expansive river valleys—namely the Omo River Valley—may have served as important refuges for forest-dwelling creatures. Being cut off from the forests of Central Africa by a savanna corridor, these East African forests would have promoted high rates of endemism, especially during times of climatic volatility. The Cradle of Humankind, the only area P. robustus is known from, was mainly dominated by the springbok Antidorcas recki, but other antelope, giraffes and elephants were also seemingly abundant megafauna. Other known primates are early Homo, the hamadryas baboon, and the extinct colobine monkey Cercopithecoides williamsi.
Predators
The left foot of a P.
boisei specimen (though perhaps actually belonging to H. habilis) from Olduvai Gorge seems to have been bitten off by a crocodile, possibly Crocodylus anthropophagus, and another's leg shows evidence of leopard predation. Other likely predators of great apes at Olduvai include the hunting hyena Chasmaporthetes nitidula and the sabertoothed cats Dinofelis and Megantereon. The carnivore assemblage at the Cradle of Humankind comprises the two sabertooths and the hyena Lycyaenops silberbergi. Male P. robustus appear to have had a higher mortality rate than females. It is possible that males were more likely to be kicked out of a group, and these lone males had a higher risk of predation. Extinction It was once thought that Paranthropus had become specialist feeders and were inferior to the more adaptable tool-producing Homo, leading to their extinction, but this has been called into question. However, smaller brain size may have been a factor in their extinction, as it may have been for the gracile australopithecines. P. boisei may have died out due to an arid trend starting 1.45 mya, which caused the retreat of woodlands and increased competition with savanna baboons and Homo for alternative food resources. South African Paranthropus appear to have outlasted their East African counterparts. The youngest record of P. boisei comes from Konso, Ethiopia, about 1.4 mya; however, there are no East African sites dated between 1.4 and 1 mya, so it may have persisted until 1 mya. P. robustus, on the other hand, was recorded in Swartkrans until Member 3, dated to 1–0.6 mya (the Middle Pleistocene), though more likely the younger side of the estimate. See also Australopithecus Ardipithecus Graecopithecus Orrorin Sahelanthropus References Further reading External links Reconstructions of P. boisei by John Gurche Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016). Prehistoric primate genera Pliocene primates Pleistocene primates Pleistocene extinctions Cenozoic mammals of Africa Pleistocene genus extinctions Fossil taxa described in 1938
23562
https://en.wikipedia.org/wiki/Perissodactyla
Perissodactyla
Perissodactyla (, ) is an order of ungulates. The order includes about 17 living species divided into three families: Equidae (horses, asses, and zebras), Rhinocerotidae (rhinoceroses), and Tapiridae (tapirs). They typically have reduced the weight-bearing toes to three or one of the five original toes, though tapirs retain four toes on their front feet. The nonweight-bearing toes are either present, absent, vestigial, or positioned posteriorly. By contrast, artiodactyls (even-toed ungulates) bear most of their weight equally on four or two (an even number) of the five toes: their third and fourth toes. Another difference between the two is that perissodactyls digest plant cellulose in their intestines, rather than in one or more stomach chambers as artiodactyls, with the exception of Suina, do. The order was considerably more diverse in the past, with notable extinct groups including the brontotheres, palaeotheres, chalicotheres, and the paraceratheres, with the paraceratheres including the largest known land mammals to have ever existed. Despite their very different appearances, they were recognized as related families in the 19th century by the zoologist Richard Owen, who also coined the order's name. Anatomy The largest odd-toed ungulates are rhinoceroses, and the extinct Paraceratherium, a hornless rhino from the Oligocene, is considered one of the largest land mammals of all time. At the other extreme, an early member of the order, the prehistoric horse Eohippus, had a withers height of only . Apart from dwarf varieties of the domestic horse and donkey, living perissodactyls reach a body length of and a weight of . While rhinos have only sparse hair and exhibit a thick epidermis, tapirs and horses have dense, short coats. Most species are grey or brown, although zebras and young tapirs are striped. Limbs The main axes of both the front and rear feet pass through the third toe, which is always the largest. The remaining toes have been reduced in size to varying degrees. Tapirs, which are adapted to walking on soft ground, have four toes on their fore feet and three on their hind feet. Living rhinos have three toes on both the front and hind feet. Modern equines possess only a single toe; however, their feet are equipped with hooves, which almost completely cover the toe. Rhinos and tapirs, by contrast, have hooves covering only the leading edge of the toes, with the bottom being soft. Ungulates have stances that require them to stand on the tips of their toes. Equine ungulates with only one digit or hoof have decreased mobility in their limbs, which allows for faster running speeds and agility. Differences in limb structure and physiology between ungulates and other mammals can be seen in the shape of the humerus. For example, often shorter, thicker, bones belong to the largest and heaviest ungulates like the rhinoceros. The ulnae and fibulae are reduced in horses. A common feature that clearly distinguishes this group from other mammals is the articulation between the astragalus, the scaphoid and the cuboid, which greatly restricts the mobility of the foot. The thigh is relatively short, and the clavicle is absent. Skull and teeth Odd-toed ungulates have a long upper jaw with an extended diastema between the front and cheek teeth, giving them an elongated head. The various forms of snout between families are due to differences in the form of the premaxilla. The lacrimal bone has projecting cusps in the eye sockets and a wide contact with the nasal bone. 
The temporomandibular joint is high and the mandible is enlarged. Rhinos have one or two horns made of agglutinated keratin, unlike the horns of even-toed ungulates, which have a bony core. The number and form of the teeth vary according to diet. The incisors and canines can be very small or completely absent, as in the two African species of rhinoceros. In horses, usually only the males possess canines. The surface shape and height of the molars are heavily dependent on whether soft leaves or hard grass make up the main component of their diets. Three or four cheek teeth are present on each jaw half. The guttural pouch, a small outpocketing of the auditory tube that drains the middle ear, is a characteristic feature of Perissodactyla. The guttural pouch is of particular concern in equine veterinary practice, due to its frequent involvement in some serious infections. Aspergillosis (infection with Aspergillus mould) of the guttural pouch (also called guttural pouch mycosis) can cause serious damage to the tissues of the pouch, as well as to surrounding structures including important cranial nerves (nerves IX–XII: the glossopharyngeal, vagus, accessory and hypoglossal nerves) and the internal carotid artery. Strangles (Streptococcus equi equi infection) is a highly transmissible respiratory infection of horses that can cause pus to accumulate in the guttural pouch; horses with S. equi equi colonising their guttural pouch can continue to intermittently shed the bacteria for several months, and should be isolated from other horses during this time to prevent transmission. Due to the intermittent nature of S. equi equi shedding, prematurely reintroducing an infected horse may risk exposing other horses to the infection, even though the shedding horse appears well and may have previously returned negative samples. The function of the guttural pouch has been difficult to determine, but it is now believed to play a role in cooling blood in the internal carotid artery before it enters the brain. Gut All perissodactyls are hindgut fermenters. In contrast to ruminants, hindgut fermenters store digested food that has left the stomach in an enlarged cecum, where the food is digested by microbes. No gallbladder is present. The stomach of perissodactyls is simply built, while the cecum accommodates up to in horses. The small intestine is very long, reaching up to in horses. Extraction of nutrients from food is relatively inefficient, which probably explains why no odd-toed ungulates are small; nutritional requirements per unit of body weight are lower for large animals, as their surface-area-to-volume ratio is smaller. Lack of carotid rete Unlike artiodactyls, perissodactyls lack a carotid rete, a heat exchanger that reduces the dependence of the temperature of the brain on that of the body. As a result, perissodactyls have limited thermoregulatory flexibility compared to artiodactyls, which has restricted them to habitats of low seasonality that are rich in food and water, such as tropical forests. In contrast, artiodactyls occupy a wide range of habitats, ranging from the Arctic Circle to deserts and tropical savannahs. Distribution Most extant perissodactyl species occupy a small fraction of their original range. Members of this group are now found only in Central and South America, eastern and southern Africa, and central, southern, and southeastern Asia. 
During the peak of odd-toed ungulate existence, from the Eocene to the Oligocene, perissodactyls were distributed over much of the globe, the major exceptions being Australia and Antarctica. Horses and tapirs arrived in South America after the formation of the Isthmus of Panama around 3 million years ago in the Pliocene. Their North American counterparts died out around 10,000 years ago, leaving only Baird's tapir with a range extending to what is now southern Mexico. The tarpans were pushed to extinction in 19th-century Europe. Hunting and habitat destruction have reduced the surviving perissodactyl species to fragmented populations. In contrast, domesticated horses and donkeys have gained a worldwide distribution, and feral animals of both species are now also found in regions outside their original range, such as in Australia. Lifestyle and diet Perissodactyls inhabit a number of different habitats, leading to different lifestyles. Tapirs are solitary and inhabit mainly tropical rainforests. Rhinos tend to live alone in rather dry savannas, and in Asia, wet marsh or forest areas. Horses inhabit open areas such as grasslands, steppes, or semi-deserts, and live together in groups. Odd-toed ungulates are exclusively herbivores that feed, to varying degrees, on grass, leaves, and other plant parts. A distinction is often made between primarily grass feeders (white rhinos, equines) and leaf feeders (tapirs, other rhinos). Reproduction and development Odd-toed ungulates are characterized by a long gestation period and a small litter size, usually delivering a single young. The gestation period is 330–500 days, being longest in rhinos. Newborn perissodactyls are precocial, meaning offspring are born already quite independent: for example, young horses can begin to follow the mother after a few hours. The young are nursed for a relatively long time, often into their second year, with rhinos reaching sexual maturity around eight or ten years old, but horses and tapirs maturing at around two to four years old. Perissodactyls are long-lived, with several species, such as rhinos, reaching an age of almost 50 years in captivity. Taxonomy Outer taxonomy Traditionally, the odd-toed ungulates were classified with other mammals such as artiodactyls, hyraxes, elephants and other "ungulates". A close family relationship with hyraxes was suspected based on similarities in the construction of the ear and the course of the carotid artery. Molecular genetic studies, however, have shown the ungulates to be polyphyletic, meaning that in some cases the similarities are the result of convergent evolution rather than common ancestry. Elephants and hyraxes are now considered to belong to Afrotheria, and so are not closely related to the perissodactyls. These in turn are placed in the Laurasiatheria, a superorder that had its origin in the former supercontinent Laurasia. Molecular genetic findings suggest that the cloven-hoofed Artiodactyla (containing the cetaceans as a deeply nested subclade) are the sister taxon of the Perissodactyla; together, the two groups form the Euungulata. More distant are the bats (Chiroptera) and Ferae (a common taxon of carnivorans, Carnivora, and pangolins, Pholidota). In a discredited alternative scenario, a close relationship exists between perissodactyls, carnivores, and bats, this assembly comprising the Pegasoferae. 
According to studies published in March 2015, odd-toed ungulates are in a close family relationship with at least some of the so-called Meridiungulata, a very diverse group of mammals living from the Paleocene to the Pleistocene in South America, whose systematic unity is largely unexplained. Some of these were classified based on their paleogeographic distribution. However, a close relationship can be worked out to perissodactyls by protein sequencing and comparison with fossil collagen from remnants of phylogenetically young members of the Meridiungulata (specifically Macrauchenia from the Litopterna and Toxodon from the Notoungulata). Both kinship groups, the odd-toed ungulates and the Litopterna-Notoungulata, are now in the higher-level taxon of Panperissodactyla. This kinship group is included among the Euungulata, which also contains the even-toed ungulates and whales (Artiodactyla). The separation of the Litopterna-Notoungulata group from the perissodactyls probably took place before the Cretaceous–Paleogene extinction event. "Condylarths" can probably be considered the starting point for the development of the two groups, as they represent a heterogeneous group of primitive ungulates that mainly inhabited the northern hemisphere in the Paleogene. Modern members Odd-toed ungulates (Perissodactyla) comprise three living families with around 17 species—in horses, however, the exact count is still controversial. Rhinos and tapirs are more closely related to each other than to horses. The separation of horses from other perissodactyls took place according to molecular genetic analysis in the Paleocene some 56 million years ago, while the rhinos and tapirs split off in the lower-middle Eocene, about 47 million years ago. Order Perissodactyla Suborder Hippomorpha Family Equidae: horses and allies, seven species in one genus Equus ferus Tarpan, †Equus ferus ferus Przewalski's horse, Equus ferus przewalskii Domestic horse, Equus ferus caballus African wild ass, Equus africanus Nubian wild ass, Equus africanus africanus Somali wild ass, Equus africanus somaliensis Domesticated ass (donkey), Equus africanus asinus Atlas wild ass, †Equus africanus atlanticus Onager or Asiatic wild ass, Equus hemionus Mongolian wild ass, Equus hemionus hemionus Turkmenian kulan, Equus hemionus kulan Persian onager, Equus hemionus onager Indian wild ass, Equus hemionus khur Syrian wild ass, †Equus hemionus hemippus Kiang or Tibetan wild ass, Equus kiang Western kiang, Equus kiang kiang Eastern kiang, Equus kiang holdereri Southern kiang, Equus kiang polyodon Plains zebra, Equus quagga Quagga, †Equus quagga quagga Burchell's zebra, Equus quagga burchellii Grant's zebra, Equus quagga boehmi Maneless zebra, Equus quagga borensis Chapman's zebra, Equus quagga chapmani Crawshay's zebra, Equus quagga crawshayi Selous' zebra, Equus quagga selousi Mountain zebra, Equus zebra Cape mountain zebra, Equus zebra zebra Hartmann's mountain zebra, Equus zebra hartmannae Grévy's zebra, Equus grevyi Suborder Ceratomorpha Family Tapiridae: tapirs, five species in one genus Brazilian tapir, Tapirus terrestris Mountain tapir, Tapirus pinchaque Baird's tapir, Tapirus bairdii Malayan tapir, Tapirus indicus Kabomani tapir, Tapirus kabomani Family Rhinocerotidae: rhinoceroses, five species in four genera Black rhinoceros, Diceros bicornis Southern black rhinoceros, †Diceros bicornis bicornis North-eastern black rhinoceros, †Diceros bicornis brucii Chobe black rhinoceros, Diceros bicornis chobiensis Uganda black rhinoceros, Diceros 
bicornis ladoensis Western black rhinoceros, †Diceros bicornis longipes Eastern black rhinoceros, Diceros bicornis michaeli South-central black rhinoceros, Diceros bicornis minor South-western black rhinoceros, Diceros bicornis occidentalis White rhinoceros, Ceratotherium simum Southern white rhinoceros, Ceratotherium simum simum Northern white rhinoceros, Ceratotherium simum cottoni Indian rhinoceros, Rhinoceros unicornis Javan rhinoceros, Rhinoceros sondaicus Indonesian Javan rhinoceros, Rhinoceros sondaicus sondaicus Vietnamese Javan rhinoceros, Rhinoceros sondaicus annamiticus Indian Javan rhinoceros, †Rhinoceros sondaicus inermis Sumatran rhinoceros, Dicerorhinus sumatrensis Western Sumatran rhinoceros, Dicerorhinus sumatrensis sumatrensis Eastern Sumatran rhinoceros, Dicerorhinus sumatrensis harrissoni Northern Sumatran rhinoceros, †Dicerorhinus sumatrensis lasiotis Prehistoric members There are many perissodactyl fossils of multivariant form. The major lines of development include the following groups: Brontotherioidea were among the earliest known large mammals, consisting of the families of Brontotheriidae (synonym Titanotheriidae), the most well-known representative being Megacerops and the more basal family Lambdotheriidae. They were generally characterized in their late phase by a bony horn at the transition from the nose to the frontal bone and flat molars suitable for chewing soft plant food. The Brontotheroidea, which were almost exclusively confined to North America and Asia, died out at the beginning of the Upper Eocene. Equoidea (equines) also developed in the Eocene. Palaeotheriidae are known mainly from Europe; their most famous member is Eohippus, which became extinct in the Oligocene. In contrast, the horse family (Equidae) flourished and spread. Over time this group saw a reduction in toe number, extension of the limbs, and the progressive adjustment of the teeth for eating hard grasses. Chalicotherioidea represented another characteristic group, consisting of the families Chalicotheriidae and Lophiodontidae. The Chalicotheriidae developed claws instead of hooves and considerable extension of the forelegs. The best-known genera include Chalicotherium and Moropus. Chalicotherioidea died out in the Pleistocene. Rhinocerotoidea (rhino relatives) included a large variety of forms from the Eocene up to the Oligocene, including dog-size leaf feeders, semiaquatic animals, and also huge long-necked animals. Only a few had horns on the nose. The Amynodontidae were hippo-like, aquatic animals. Hyracodontidae developed long limbs and long necks that were most pronounced in the Paraceratherium (formerly known as Baluchitherium or Indricotherium), the second largest known land mammal ever to have lived (after Palaeoloxodon namadicus). The rhinos (Rhinocerotidae) emerged in the Middle Eocene; five species survive to the present day. Tapiroidea reached their greatest diversity in the Eocene, when several families lived in Eurasia and North America. They retained a primitive physique and were noted for developing a trunk. The extinct families within this group include the Helaletidae. Several mammal groups traditionally classified as condylarths, long-understood to be a wastebasket taxon, such as hyopsodontids and phenacodontids, are now understood to be part of the odd-toed ungulate assemblage. Phenacodontids seem to be stem-perissodactyls, while hyopsodontids are closely related to horses and brontotheres, despite their more primitive overall appearance. 
Desmostylia and Anthracobunidae have traditionally been placed among the afrotheres, but they may actually represent stem-perissodactyls. They are an early lineage of mammals that took to the water, spreading across semi-aquatic to fully marine niches in the Tethys Ocean and the northern Pacific. However, later studies have shown that, while anthracobunids are definite perissodactyls, desmostylians have enough mixed characters to suggest that a position among the Afrotheria is not out of the question. Order Perissodactyla Superfamily Brontotherioidea †Brontotheriidae Suborder Hippomorpha †Hyopsodontidae †Pachynolophidae Superfamily Equoidea †Indolophidae †Palaeotheriidae (might be a basal perissodactyl grade instead) Clade Tapiromorpha †Isectolophidae (a basal family of Tapiromorpha; from the Eocene epoch) †Suborder Ancylopoda †Lophiodontidae Superfamily Chalicotherioidea †Eomoropidae (basal grade of chalicotheroids) †Chalicotheriidae Suborder Ceratomorpha Superfamily Rhinocerotoidea †Amynodontidae †Hyracodontidae Superfamily Tapiroidea †Deperetellidae †Rhodopagidae (sometimes recognized as a subfamily of deperetellids) †Lophialetidae †Eoletidae (sometimes recognized as a subfamily of lophialetids) †Anthracobunidae (a family of stem-perissodactyls; from the Early to Middle Eocene epoch) †Phenacodontidae (a clade of stem-perissodactyls; from the Early Palaeocene to the Middle Eocene epoch) Higher classification of perissodactyls Relationships within the large group of odd-toed ungulates are not fully understood. Initially, after the establishment of "Perissodactyla" by Richard Owen in 1848, the present-day representatives were considered equal in rank. In the first half of the 20th century, a more systematic differentiation of odd-toed ungulates began, based on a consideration of fossil forms, and they were placed in two major suborders: Hippomorpha and Ceratomorpha. The Hippomorpha comprises today's horses and their extinct members (Equoidea); the Ceratomorpha consist of tapirs and rhinos plus their extinct members (Tapiroidea and Rhinocerotoidea). The names Hippomorpha and Ceratomorpha were introduced in 1937 by Horace Elmer Wood, in response to criticism of the name "Solidungula" that he proposed three years previously. It had been based on the grouping of horses and Tridactyla and on the rhinoceros/tapir complex. The extinct brontotheriidae were also classified under Hippomorpha and therefore possess a close relationship to horses. Some researchers accept this assignment because of similar dental features, but there is also the view that a very basal position within the odd-toed ungulates places them rather in the group of Titanotheriomorpha. Originally, the Chalicotheriidae were seen as members of Hippomorpha, and presented as such in 1941. William Berryman Scott thought that, as claw-bearing perissodactyls, they belong in the new suborder Ancylopoda (where Ceratomorpha and Hippomorpha as odd-toed ungulates were combined in the group of Chelopoda). The term Ancylopoda, coined by Edward Drinker Cope in 1889, had been established for chalicotheres. However, further morphological studies from the 1960s showed a middle position of Ancylopoda between Hippomorpha and Ceratomorpha. Leonard Burton Radinsky saw all three major groups of odd-toed ungulates as peers, based on the extremely long and independent phylogenetic development of the three lines. In the 1980s, Jeremy J. 
Hooker saw a general similarity between Ancylopoda and Ceratomorpha based on dentition, especially in the earliest members, leading to the unification in 1984 of the two suborders within the taxon Tapiromorpha. At the same time, he expanded the Ancylopoda to include the Lophiodontidae. The name "Tapiromorpha" goes back to Ernst Haeckel, who coined it in 1873, but it was long considered synonymous with Ceratomorpha because Wood had not considered it in 1937 when Ceratomorpha were named, since the term had been used quite differently in the past. Also in 1984, Robert M. Schoch used the conceptually similar term Moropomorpha, which today applies synonymously to Tapiromorpha. Included within the Tapiromorpha are the now extinct Isectolophidae, a sister group of the Ancylopoda-Ceratomorpha group and thus the most primitive members of this relationship complex. Evolutionary history Origins The evolutionary development of Perissodactyla is well documented in the fossil record. Numerous finds are evidence of the adaptive radiation of this group, which was once much more varied and widely dispersed. Radinskya from the late Paleocene of East Asia is often considered to be one of the oldest close relatives of the ungulates. Its 8 cm skull must have belonged to a very small and primitive animal with a π-shaped crown pattern on the enamel of its rear molars similar to that of perissodactyls and their relatives, especially the rhinos. Finds of Cambaytherium and Kalitherium in the Cambay shale of western India indicate an origin in Asia dating to the Lower Eocene roughly 54.5 million years ago. Their teeth also show similarities to Radinskya as well as to the Tethytheria clade. The saddle-shaped configuration of the navicular joints and the mesaxonic construction of the front and hind feet also indicate a close relationship to Tethytheria. However, this construction deviates from that of Cambaytherium, indicating that it is actually a member of a sister group. Ancestors of Perissodactyla may have arrived via an island bridge from the Afro-Arab landmass onto the Indian subcontinent as it drifted north towards Asia. A study on Cambaytherium suggests an origin in India prior to, or near the time of, its collision with Asia. The alignment of hyopsodontids and phenacodontids with Perissodactyla in general suggests an older Laurasian origin and distribution for the clade, dispersed across the northern continents already in the early Paleocene. These forms already show a fairly well-developed molar morphology, with no intermediate forms documenting the course of its development. The close relationship between meridiungulate mammals and perissodactyls in particular is of interest since meridiungulates appeared in South America soon after the K–T event, implying rapid ecological radiation and dispersal after the mass extinction. Phylogeny The Perissodactyla appeared relatively abruptly at the beginning of the Lower Paleocene about 63 million years ago, both in North America and Asia, in the form of phenacodontids and hyopsodontids. The oldest finds belonging to an extant group originate, among other sources, from Sifrhippus, an ancestor of the horses from the Willwood Formation in northwestern Wyoming. The distant ancestors of tapirs appeared not long after in the Ghazij Formation in Balochistan, such as Ganderalophus, as well as Litolophus from the Chalicotheriidae line, or Eotitanops from the group of brontotheres. 
Initially, the members of the different lineages looked quite similar, with an arched back and generally four toes on the front and three on the hind feet. Eohippus, which is considered a member of the horse family, outwardly resembled Hyrachyus, the first representative of the rhino and tapir line. All were small compared to later forms and lived as fruit and foliage eaters in forests. The first of the megafauna to emerge were the brontotheres, in the Middle and Upper Eocene. Megacerops, known from North America, reached a withers height of and could have weighed just over . The decline of brontotheres at the end of the Eocene is associated with competition arising from the advent of more successful herbivores. More successful lines of odd-toed ungulates, such as the chalicotheres, the rhinos, and their immediate relatives, emerged at the end of the Eocene when dense jungles gave way to steppe; their development also began with very small forms. Paraceratherium, one of the largest mammals ever to walk the earth, evolved during this era. They weighed up to and lived throughout the Oligocene in Eurasia. About 20 million years ago, at the onset of the Miocene, the perissodactyls first reached Africa when it became connected to Eurasia because of the closing of the Tethys Ocean. For the same reason, however, new animals such as the mammoths also entered the ancient settlement areas of odd-toed ungulates, creating competition that led to the extinction of some of their lines. The rise of ruminants, which occupied similar ecological niches and had a much more efficient digestive system, is also associated with the decline in diversity of odd-toed ungulates. A significant cause of the decline of perissodactyls was climate change during the Miocene, leading to a cooler and drier climate accompanied by the spread of open landscapes. However, some lines flourished, such as the horses and rhinos; anatomical adaptations made it possible for them to consume tougher grass food, giving rise to open-land forms that came to dominate the newly created landscapes. With the emergence of the Isthmus of Panama in the Pliocene, perissodactyls and other megafauna were given access to one of their last habitable continents: South America. However, many perissodactyls became extinct at the end of the ice ages, including American horses and Elasmotherium. Whether over-hunting by humans (the overkill hypothesis), climatic change, or a combination of both factors was responsible for the extinction of the ice age megafauna remains controversial. Research history In 1758, in his seminal work Systema Naturae, Linnaeus (1707–1778) classified horses (Equus) together with hippos (Hippopotamus). At that time, this category also included the tapirs (Tapirus), more precisely the lowland or South American tapir (Tapirus terrestris), the only tapir then known in Europe. Linnaeus classified this tapir as Hippopotamus terrestris and put both genera in the group of the Belluae ("beasts"). He combined the rhinos with the Glires, a group now consisting of the lagomorphs and rodents. Mathurin Jacques Brisson (1723–1806) first separated the tapirs and hippos in 1762 with the introduction of the concept le tapir. He also separated the rhinos from the rodents, but did not combine the three families now known as the odd-toed ungulates. In the transition to the 19th century, the individual perissodactyl genera were associated with various other groups, such as the proboscideans and even-toed ungulates. 
In 1795, Étienne Geoffroy Saint-Hilaire (1772–1844) and Georges Cuvier (1769–1832) introduced the term "pachyderm" (Pachydermata), including in it not only the rhinos and elephants, but also the hippos, pigs, peccaries, tapirs and hyraxes. The horses were still generally regarded as a group separate from other mammals and were often classified under the name Solidungula or Solipèdes, meaning "one-hoofed animal". In 1816, Henri Marie Ducrotay de Blainville (1777–1850) classified ungulates by the structure of their feet, differentiating those with an even number of toes from those with an odd number. He grouped the solidungulate horses together with the multungulate tapirs and rhinos and referred to all of them together as onguligrades à doigts impairs, coming close to the concept of the odd-toed ungulates as a systematic unit. Richard Owen (1804–1892) quoted Blainville in his study on fossil mammals of the Isle of Wight and introduced the name Perissodactyla. In 1884, Othniel Charles Marsh (1831–1899) came up with the concept Mesaxonia, which he used for what are today called the odd-toed ungulates, including their extinct relatives, but explicitly excluding the hyraxes. Mesaxonia is now considered a synonym of Perissodactyla, but it was sometimes also used for the true odd-toed ungulates as a subcategory (rhinos, horses, tapirs), while Perissodactyla stood for the entire order, including the hyraxes. The assumption that hyraxes were perissodactyls was held well into the 20th century. Only with the advent of molecular genetic research methods was it recognized that the hyrax is not closely related to perissodactyls but rather to elephants and manatees. Interactions with humans The domestic horse and the donkey play an important role in human history, particularly as transport, work and pack animals. The domestication of both species began several millennia BCE. Due to the motorisation of agriculture and the spread of automobile traffic, such use has declined sharply in Western industrial countries; riding is usually undertaken more as a hobby or sport. In less developed regions of the world, traditional uses for these animals are, however, still widespread. To a lesser extent, horses and donkeys are also kept for their meat and their milk. In contrast, the existence in the wild of almost all other odd-toed ungulate species has declined dramatically because of hunting and habitat destruction. The quagga is extinct, and Przewalski's horse was once eradicated in the wild. Present threat levels, according to the International Union for Conservation of Nature (2012): Four species are considered critically endangered: the Javan rhinoceros, the Sumatran rhinoceros, the black rhinoceros and the African wild ass. Five species are endangered: the mountain tapir, the Central American tapir, the Malayan tapir, the wild horse and Grévy's zebra. Three species are considered vulnerable: the Indian rhinoceros, the South American tapir and the mountain zebra. The onager, the plains zebra and the white rhinoceros are near-threatened; however, the northern subspecies, Ceratotherium simum cottoni (the northern white rhinoceros), is close to extinction. The kiang is not considered at risk (least concern). References Further reading Martin S. Fischer: Mesaxonia (Perissodactyla) Perissodactyla. In: Wilfried Westheide, Reinhard Rieger (eds.): Systematic Zoology. Part 2: Vertebrates or Craniates. Spektrum Akademischer Verlag, Heidelberg and Berlin 2004, pp. 646–655. Ronald M. Nowak: Walker's Mammals of the World. 
6th edition. Johns Hopkins University Press, Baltimore 1999. Thomas S. Kemp: The Origin & Evolution of Mammals. Oxford University Press, Oxford 2005. A. H. Müller: Textbook of Paleozoology, Volume III: Vertebrates, Part 3: Mammalia. 2nd edition. Gustav Fischer Verlag, Jena and Stuttgart 1989. Don E. Wilson, DeeAnn M. Reeder (eds.): Mammal Species of the World. 3rd edition. The Johns Hopkins University Press, Baltimore 2005. Extant Ypresian first appearances Mammal orders Taxa named by Richard Owen Panperissodactyla
23565
https://en.wikipedia.org/wiki/Pai%20gow
Pai gow
Pai gow ( ; ) is a Chinese gambling game, played with a set of 32 Chinese dominoes. It is played in major casinos in China (including Macau); the United States (including Boston, Massachusetts; Las Vegas, Nevada; Reno, Nevada; Connecticut; Atlantic City, New Jersey; Pennsylvania; Mississippi; and cardrooms in California); Canada (including Edmonton, Alberta and Calgary, Alberta); Australia; and New Zealand. The name pai gow is sometimes used to refer to a card game called pai gow poker (or "double-hand poker"), which is loosely based on pai gow. The act of playing pai gow is also colloquially known as "eating dog meat". History Pai Gow is the first documented form of dominoes, originating in China before or during the Song Dynasty. It is also the ancestor of modern, western dominoes. The name literally means "make nine" after the normal maximum hand, and the original game was modeled after both a Chinese creation myth, and military organization in China at that time (ranks one through nine). Rules Starting Tiles are shuffled on the table and are arranged into eight face-down stacks of four tiles each in an assembly known as the woodpile. Individual stacks or tiles may then be moved in specific ways to rearrange the woodpile, after which the players place their bets. Next, each player (including the dealer) is given one stack of tiles and must use them to form two hands of two tiles each. The hand with the lower value is called the front hand, and the hand with the higher value is called the rear hand. If a player's front hand beats the dealer's front hand, and the player's rear hand beats the dealer's rear hand, then that player wins the bet and is paid off at 1:1 odds (even money). If a player's front and rear hands both lose to the dealer's respective hands, the player loses the bet. If one hand wins and the other loses, the player is said to push, and gets back only the money he or she bet. Generally seven players will play, and each player's hands are compared only against the dealer's hands; comparisons are always front-front and rear-rear, never one of each. There are 35,960 possible ways to select 4 of the 32 tiles when the 32 tiles are considered distinguishable. However, there are 3,620 distinct sets of 4 tiles when the tiles of a pair are considered indistinguishable. There are 496 ways to select 2 of the 32 tiles when the 32 tiles are considered distinguishable. There are 136 distinct hands (pairs of tiles) when the tiles of a pair are considered indistinguishable. Scoring Each player groups their four tiles into two hands of two tiles each. The two hands are referred to as the "high" and "low" hands, based on their score. The highest-ranked hands are formed from the sixteen named pairs. Otherwise, the next highest-ranked hand results from creating a Gong or Wong, which are specific combinations with the Day and Teen tiles. If the four tiles drawn for the two hands do not permit the formation of a named pair, Gong, or Wong, then the total number of pips on both tiles in a hand are added using modular arithmetic (modulo 10), equivalent to how a hand in baccarat is scored. The name "pai gow" is loosely translated as "make nine" or "card nine". This reflects the fact that, with the exception of named pairs, Gong, or Wong, the maximum score for a hand of mixed tiles is nine. Named pairs The 32 tiles in a Chinese dominoes set can be arranged into 16 named pairs. 
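The counts quoted above follow from this pair structure and can be checked directly. Below is a minimal sketch in Python, modelling the set as 16 pair types with two indistinguishable tiles each; this encoding is an assumption made only for the calculation, matching the convention used for the counts:

```python
from math import comb
from itertools import combinations_with_replacement

# 32 tiles modelled as 16 named-pair types, two indistinguishable tiles per type.
PAIR_TYPES = 16

# Treating all 32 tiles as distinguishable:
assert comb(32, 4) == 35960   # ways to draw a 4-tile stack
assert comb(32, 2) == 496     # ways to draw a 2-tile hand

# Treating the two tiles of each named pair as indistinguishable, a selection
# is a multiset over the 16 types containing at most two copies of any type.
def distinct_selections(size):
    return sum(1 for m in combinations_with_replacement(range(PAIR_TYPES), size)
               if all(m.count(t) <= 2 for t in m))

assert distinct_selections(4) == 3620   # distinct 4-tile stacks
assert distinct_selections(2) == 136    # distinct 2-tile hands
```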
Eleven of these pairs have identical tiles, and five of these pairs are made up of two tiles that have the same total number of pips, but in different groupings. The latter group includes the Gee Joon tiles, which can score the same, whether as three or six. Any hand consisting of a pair outscores a non-pair, regardless of the pip counts. Named pairs are often thought of as being worth 12 points each, but there is a hierarchy within the named pairs. The pairs are considered to tell the story of creation: Gee Joon (至尊) is the highest ranked pair, and is the Supreme Creator of the universe Teen (天) is the heavens, the first thing Gee Joon created. Day (地) is the earth itself, placed under the heavens. Yun (人) is man, whom Gee Joon made to live upon the earth. Gor (鵝) is geese, made for man to eat. Mooy (梅) is plum flowers, to give the earth beauty. Each subsequent pair is another step in the story...robes (Bon) for man to wear, a hatchet (Foo) to chop wood, partitions (Ping) for a house, man's seventh (Tit) and eighth (Look) children. Only the sixteen named pairs are valid. For example, if a hand contained a Yun (4-4) and a Chop Bot (3-5 or 2-6), these would not form a pair at all, despite both tiles having eight pips each. A Yun (4-4) exclusively pairs with the other Yun, and likewise only the two Chop Bot tiles can be paired together. Likewise, tiles with six pips (Look, 1-5, pairs with another Look, not the Gee Joon 2-4) and seven pips (Tit, 1-6, pairs with another Tit, not the two Chop Chit tiles 2-5 and 3-4, which pair with each other) are subject to the same pairing restrictions. When the player and dealer both have a pair, the higher-ranked pair wins. Ranking pairs is determined not by the sum of the tiles' pips, but rather by aesthetics; the order must be memorized. The highest pairs are the Gee Joon tiles, the Teens, the Days, and the red eights. The lowest pairs are the mismatched nines, eights, sevens, and fives. Wongs and Gongs The double-one tiles and double-six tiles are known as the Day and Teen tiles, respectively. The combination of a Day or Teen with a nine (Gow, 5-4 or 6-3) creates a Wong, worth 11 points, while putting either of them with an eight (either Yun, 4-4; or Bot, 5-3 or 6-2) results in a Gong, worth 10. Gongs and Wongs formed with a Teen tile are ranked higher than those formed with a Day tile. However, if a Day or Teen is grouped in a single hand with any other tile, the standard scoring rules apply. The combination of a Day or Teen with a seven (Tit, 1-6; or Chit, 2-5 or 3-4) is sometimes referred to as a high nine, as the score is the maximum (nine) when added together, and the group contains a high-rank tile for potential tiebreaking purposes. Modular arithmetic When a hand is formed from two tiles that are not a named pair, Wong, or Gong, the total pips on both tiles are counted and any tens digit is dropped; the resulting ones digit (the sum of all pips modulo 10) gives the final score. There is one exception. The 1-2 and the 2-4 tiles which form the Gee Joon pair together, can act as limited wild cards singly. When used as part of a hand of mixed tiles, these tiles may be scored as either 3 or 6, whichever results in a higher hand value. For example, a hand of 1-2 (scored as +6 instead of the face value of +3) and 5-6 (+11) scores as seven rather than four. If the player has both the 1-2 and 2-4 tiles, those collectively form the highest-ranked named pair and should be used together to form an unbeatable rear hand. 
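The modulo-10 scoring of mixed hands, including the limited wild-card behaviour of the Gee Joon tiles, can be sketched as follows. This is a minimal illustration in Python; representing tiles as pip-count tuples is an assumption made only for the example, and named pairs, Wongs and Gongs are not handled:

```python
# Score a mixed two-tile hand (no named pair, Wong or Gong): add the pips on
# both tiles and keep only the ones digit. The 1-2 and 2-4 (Gee Joon) tiles
# may count as either 3 or 6, whichever yields the higher total.
GEE_JOON = {(1, 2), (2, 4)}

def pip_values(tile):
    """Possible pip totals for one tile, allowing the Gee Joon wildcards."""
    return (3, 6) if tile in GEE_JOON else (tile[0] + tile[1],)

def score_mixed_hand(tile_a, tile_b):
    """Best modulo-10 score over all wildcard interpretations."""
    return max((a + b) % 10 for a in pip_values(tile_a) for b in pip_values(tile_b))

# The example from the text: 1-2 counted as 6 with 5-6 (11 pips) gives 17 -> 7.
assert score_mixed_hand((1, 2), (5, 6)) == 7
# A Day tile (1-1) with a seven (1-6) is a "high nine".
assert score_mixed_hand((1, 1), (1, 6)) == 9
```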
Ties When the player and dealer display hands with the same score, the one with the highest-valued tile (based on the named pair rankings described above) is the winner. For example, a player's hand of 3-4 and 2-2 (Chit and Bon) and a dealer's hand of 5-6 and 5-5 (Foo and Mooy) would each score one point. However, since the dealer's 5-5 (Mooy) outranks the other three tiles, he would win the hand. If both have a bonus combination (Wong or Gong) or the scores are tied, and if the player and dealer each have an identical highest-ranking tile, then the dealer wins. For example, if the player held 2-2 and 1–6 (Bon and Tit), and the dealer held 2-2 and 3–4 (Bon and Chit), the dealer would win since the scores (1 each) and the highest-ranked tiles (2-2 Bon) are the same. The lower-ranked tile in each hand is never used to break a tie. There are two exceptions to the method described above. First, although the Gee Joon tiles form the highest-ranking pair when used together, when used as single tiles in a mixed hand, for tiebreaking purposes, they fall into the mixed-number ranks according to the number of pips. That is, the 2-4 ranks sequentially below the Chop Chit tiles (3-4 and 2-5), and the 1-2 ranks sequentially last overall, below the Chop Ng tiles (3-2 and 1-4). Second, any zero-zero tie is won by the dealer, regardless of the tiles in the two hands. Strategy The key element of pai gow strategy is to present the optimal front and rear hands based on the tiles dealt to the player. For any four random tiles, there are three ways to arrange them into two hands, assuming that a named pair cannot be formed. However, if there is at least one pair among the tiles, there are only two distinct ways to form two hands. The player must decide which combination is most likely to give a set of front/rear hands that can beat the dealer, or at least break a tie in the player's favor. In some cases, a player with weaker tiles may deliberately attempt to attain a push so as to avoid losing the bet outright. Many players rely on superstition or tradition to choose tile pairings. In popular culture The film Premium Rush (2012) features Pai Gow play as an integral plot element. See also Kiu kiu Tien Gow Pusoy dos References External links Scoring chart Pai gow lore at Wizard of Odds website (Michael Shackleford) Cantonese words and phrases Chinese games Chinese dominoes Gambling games
23572
https://en.wikipedia.org/wiki/Partially%20ordered%20set
Partially ordered set
In mathematics, especially order theory, a partial order on a set is an arrangement such that, for certain pairs of elements, one precedes the other. The word partial is used to indicate that not every pair of elements needs to be comparable; that is, there may be pairs for which neither element precedes the other. Partial orders thus generalize total orders, in which every pair is comparable. Formally, a partial order is a homogeneous binary relation that is reflexive, antisymmetric, and transitive. A partially ordered set (poset for short) is an ordered pair (P, ≤) consisting of a set P (called the ground set of the poset) and a partial order ≤ on P. When the meaning is clear from context and there is no ambiguity about the partial order, the set P itself is sometimes called a poset. Partial order relations The term partial order usually refers to the reflexive partial order relations, referred to in this article as non-strict partial orders. However some authors use the term for the other common type of partial order relations, the irreflexive partial order relations, also called strict partial orders. Strict and non-strict partial orders can be put into a one-to-one correspondence, so for every strict partial order there is a unique corresponding non-strict partial order, and vice versa. Partial orders A reflexive, weak, or non-strict partial order, commonly referred to simply as a partial order, is a homogeneous relation ≤ on a set P that is reflexive, antisymmetric, and transitive. That is, for all a, b, c ∈ P it must satisfy: Reflexivity: a ≤ a, i.e. every element is related to itself. Antisymmetry: if a ≤ b and b ≤ a then a = b, i.e. no two distinct elements precede each other. Transitivity: if a ≤ b and b ≤ c then a ≤ c. A non-strict partial order is also known as an antisymmetric preorder. Strict partial orders An irreflexive, strong, or strict partial order is a homogeneous relation < on a set P that is irreflexive, asymmetric, and transitive; that is, it satisfies the following conditions for all a, b, c ∈ P: Irreflexivity: not a < a, i.e. no element is related to itself (also called anti-reflexive). Asymmetry: if a < b then not b < a. Transitivity: if a < b and b < c then a < c. Irreflexivity and transitivity together imply asymmetry. Also, asymmetry implies irreflexivity. In other words, a transitive relation is asymmetric if and only if it is irreflexive. So the definition is the same if it omits either irreflexivity or asymmetry (but not both). A strict partial order is also known as an asymmetric strict preorder. Correspondence of strict and non-strict partial order relations Strict and non-strict partial orders on a set P are closely related. A non-strict partial order ≤ may be converted to a strict partial order by removing all relationships of the form a ≤ a; that is, the strict partial order is the set < := ≤ \ ΔP, where ΔP := {(p, p) : p ∈ P} is the identity relation on P and \ denotes set subtraction. Conversely, a strict partial order < on P may be converted to a non-strict partial order by adjoining all relationships of that form; that is, ≤ := ΔP ∪ < is a non-strict partial order. Thus, if ≤ is a non-strict partial order, then the corresponding strict partial order < is the irreflexive kernel given by a < b if a ≤ b and a ≠ b. Conversely, if < is a strict partial order, then the corresponding non-strict partial order ≤ is the reflexive closure given by: a ≤ b if a < b or a = b. Dual orders The dual (or opposite) of a partial order relation ≤ is defined by letting ≥ be the converse relation of ≤, i.e. a ≥ b if and only if b ≤ a. The dual of a non-strict partial order is a non-strict partial order, and the dual of a strict partial order is a strict partial order. The dual of a dual of a relation is the original relation. 
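On finite examples, the defining conditions and the passage between non-strict and strict orders can be checked mechanically. A minimal sketch in Python; representing a finite relation as a set of ordered pairs is an assumption made for illustration:

```python
from itertools import product

def is_partial_order(elements, leq):
    """Check reflexivity, antisymmetry and transitivity of a finite relation.

    `leq` is a set of pairs (a, b), read as "a <= b"."""
    reflexive = all((a, a) in leq for a in elements)
    antisymmetric = all(not ((a, b) in leq and (b, a) in leq and a != b)
                        for a, b in product(elements, repeat=2))
    transitive = all((a, c) in leq
                     for a, b, c in product(elements, repeat=3)
                     if (a, b) in leq and (b, c) in leq)
    return reflexive and antisymmetric and transitive

def irreflexive_kernel(leq):
    """Strict order obtained by removing every pair of the form (a, a)."""
    return {(a, b) for (a, b) in leq if a != b}

def reflexive_closure(lt, elements):
    """Non-strict order obtained by adjoining every pair of the form (a, a)."""
    return lt | {(a, a) for a in elements}

# Divisibility on {1, 2, 3, 4, 6, 12} is a partial order (see the examples below).
elems = {1, 2, 3, 4, 6, 12}
divides = {(a, b) for a in elems for b in elems if b % a == 0}
assert is_partial_order(elems, divides)
assert reflexive_closure(irreflexive_kernel(divides), elems) == divides
```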
Notation Given a set and a partial order relation, typically the non-strict partial order , we may uniquely extend our notation to define four partial order relations and , where is a non-strict partial order relation on , is the associated strict partial order relation on (the irreflexive kernel of ), is the dual of , and is the dual of . Strictly speaking, the term partially ordered set refers to a set with all of these relations defined appropriately. But practically, one need only consider a single relation, or , or, in rare instances, the non-strict and strict relations together, . The term ordered set is sometimes used as a shorthand for partially ordered set, as long as it is clear from the context that no other kind of order is meant. In particular, totally ordered sets can also be referred to as "ordered sets", especially in areas where these structures are more common than posets. Some authors use different symbols than such as or to distinguish partial orders from total orders. When referring to partial orders, should not be taken as the complement of . The relation is the converse of the irreflexive kernel of , which is always a subset of the complement of , but is equal to the complement of if, and only if, is a total order. Alternative definitions Another way of defining a partial order, found in computer science, is via a notion of comparison. Specifically, given as defined previously, it can be observed that two elements x and y may stand in any of four mutually exclusive relationships to each other: either , or , or , or x and y are incomparable. This can be represented by a function that returns one of four codes when given two elements. This definition is equivalent to a partial order on a setoid, where equality is taken to be a defined equivalence relation rather than set equality. Wallis defines a more general notion of a partial order relation as any homogeneous relation that is transitive and antisymmetric. This includes both reflexive and irreflexive partial orders as subtypes. A finite poset can be visualized through its Hasse diagram. Specifically, taking a strict partial order relation , a directed acyclic graph (DAG) may be constructed by taking each element of to be a node and each element of to be an edge. The transitive reduction of this DAG is then the Hasse diagram. Similarly this process can be reversed to construct strict partial orders from certain DAGs. In contrast, the graph associated to a non-strict partial order has self-loops at every node and therefore is not a DAG; when a non-strict order is said to be depicted by a Hasse diagram, actually the corresponding strict order is shown. Examples Standard examples of posets arising in mathematics include: The real numbers, or in general any totally ordered set, ordered by the standard less-than-or-equal relation ≤, is a partial order. On the real numbers , the usual less than relation < is a strict partial order. The same is also true of the usual greater than relation > on . By definition, every strict weak order is a strict partial order. The set of subsets of a given set (its power set) ordered by inclusion (see Fig. 1). Similarly, the set of sequences ordered by subsequence, and the set of strings ordered by substring. The set of natural numbers equipped with the relation of divisibility. (see Fig. 3 and Fig. 6) The vertex set of a directed acyclic graph ordered by reachability. The set of subspaces of a vector space ordered by inclusion. 
For a partially ordered set P, the sequence space containing all sequences of elements from P, where sequence a precedes sequence b if every item in a precedes the corresponding item in b. Formally, if and only if for all ; that is, a componentwise order. For a set X and a partially ordered set P, the function space containing all functions from X to P, where if and only if for all A fence, a partially ordered set defined by an alternating sequence of order relations The set of events in special relativity and, in most cases, general relativity, where for two events X and Y, if and only if Y is in the future light cone of X. An event Y can be causally affected by X only if . One familiar example of a partially ordered set is a collection of people ordered by genealogical descendancy. Some pairs of people bear the descendant-ancestor relationship, but other pairs of people are incomparable, with neither being a descendant of the other. Orders on the Cartesian product of partially ordered sets In order of increasing strength, i.e., decreasing sets of pairs, three of the possible partial orders on the Cartesian product of two partially ordered sets are (see Fig. 4): the lexicographical order:   if or ( and ); the product order:   (a, b) ≤ (c, d) if a ≤ c and b ≤ d; the reflexive closure of the direct product of the corresponding strict orders:   if ( and ) or ( and ). All three can similarly be defined for the Cartesian product of more than two sets. Applied to ordered vector spaces over the same field, the result is in each case also an ordered vector space. See also orders on the Cartesian product of totally ordered sets. Sums of partially ordered sets Another way to combine two (disjoint) posets is the ordinal sum (or linear sum), , defined on the union of the underlying sets X and Y by the order if and only if: a, b ∈ X with a ≤X b, or a, b ∈ Y with a ≤Y b, or a ∈ X and b ∈ Y. If two posets are well-ordered, then so is their ordinal sum. Series-parallel partial orders are formed from the ordinal sum operation (in this context called series composition) and another operation called parallel composition. Parallel composition is the disjoint union of two partially ordered sets, with no order relation between elements of one set and elements of the other set. Derived notions The examples use the poset consisting of the set of all subsets of a three-element set ordered by set inclusion (see Fig. 1). a is related to b when a ≤ b. This does not imply that b is also related to a, because the relation need not be symmetric. For example, is related to but not the reverse. a and b are comparable if or . Otherwise they are incomparable. For example, and are comparable, while and are not. A total order or linear order is a partial order under which every pair of elements is comparable, i.e. trichotomy holds. For example, the natural numbers with their standard order. A chain is a subset of a poset that is a totally ordered set. For example, is a chain. An antichain is a subset of a poset in which no two distinct elements are comparable. 
For example, the set of singletons An element a is said to be strictly less than an element b, if a ≤ b and For example, is strictly less than An element a is said to be covered by another element b, written a ⋖ b (or a <: b), if a is strictly less than b and no third element c fits between them; formally: if both a ≤ b and are true, and a ≤ c ≤ b is false for each c with Using the strict order <, the relation a ⋖ b can be equivalently rephrased as " but not for any c". For example, is covered by but is not covered by Extrema There are several notions of "greatest" and "least" element in a poset notably: Greatest element and least element: An element is a if for every element An element is a if for every element A poset can only have one greatest or least element. In our running example, the set is the greatest element, and is the least. Maximal elements and minimal elements: An element is a maximal element if there is no element such that Similarly, an element is a minimal element if there is no element such that If a poset has a greatest element, it must be the unique maximal element, but otherwise there can be more than one maximal element, and similarly for least elements and minimal elements. In our running example, and are the maximal and minimal elements. Removing these, there are 3 maximal elements and 3 minimal elements (see Fig. 5). Upper and lower bounds: For a subset A of P, an element x in P is an upper bound of A if a ≤ x, for each element a in A. In particular, x need not be in A to be an upper bound of A. Similarly, an element x in P is a lower bound of A if a ≥ x, for each element a in A. A greatest element of P is an upper bound of P itself, and a least element is a lower bound of P. In our example, the set is an for the collection of elements As another example, consider the positive integers, ordered by divisibility: 1 is a least element, as it divides all other elements; on the other hand this poset does not have a greatest element. This partially ordered set does not even have any maximal elements, since any g divides for instance 2g, which is distinct from it, so g is not maximal. If the number 1 is excluded, while keeping divisibility as ordering on the elements greater than 1, then the resulting poset does not have a least element, but any prime number is a minimal element for it. In this poset, 60 is an upper bound (though not a least upper bound) of the subset which does not have any lower bound (since 1 is not in the poset); on the other hand 2 is a lower bound of the subset of powers of 2, which does not have any upper bound. If the number 0 is included, this will be the greatest element, since this is a multiple of every integer (see Fig. 6). Mappings between partially ordered sets Given two partially ordered sets and , a function is called order-preserving, or monotone, or isotone, if for all implies . If is also a partially ordered set, and both and are order-preserving, their composition is order-preserving, too. A function is called order-reflecting if for all implies If is both order-preserving and order-reflecting, then it is called an order-embedding of into . In the latter case, is necessarily injective, since implies and in turn according to the antisymmetry of If an order-embedding between two posets S and T exists, one says that S can be embedded into T. If an order-embedding is bijective, it is called an order isomorphism, and the partial orders and are said to be isomorphic. Isomorphic orders have structurally similar Hasse diagrams (see Fig. 7a). 
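For finite posets, these conditions can be tested directly. The sketch below (Python; encoding a poset as a set of elements together with a comparison function is only an illustrative convention) checks the order-preserving and order-reflecting properties, anticipating the prime-divisor map discussed next:

```python
from itertools import product

def order_preserving(f, S, leq_S, leq_T):
    """f is order-preserving if x <= y in S implies f(x) <= f(y) in T."""
    return all(leq_T(f(x), f(y)) for x, y in product(S, repeat=2) if leq_S(x, y))

def order_reflecting(f, S, leq_S, leq_T):
    """f is order-reflecting if f(x) <= f(y) in T implies x <= y in S."""
    return all(leq_S(x, y) for x, y in product(S, repeat=2) if leq_T(f(x), f(y)))

def order_embedding(f, S, leq_S, leq_T):
    """An order-embedding is both order-preserving and order-reflecting."""
    return order_preserving(f, S, leq_S, leq_T) and order_reflecting(f, S, leq_S, leq_T)

# Divisors of 12 under divisibility, mapped to subsets of {2, 3} under inclusion
# by sending each number to its set of prime divisors.
S = {1, 2, 3, 4, 6, 12}
def divides(a, b): return b % a == 0
def subset_of(A, B): return A <= B
def prime_divisors(n): return frozenset(p for p in (2, 3) if n % p == 0)

print(order_preserving(prime_divisors, S, divides, subset_of))  # True
print(order_reflecting(prime_divisors, S, divides, subset_of))  # False: 6 and 12 share an image
```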
It can be shown that if order-preserving maps f : S → T and g : T → S exist such that g∘f and f∘g yield the identity function on S and T, respectively, then S and T are order-isomorphic. For example, a mapping f from the set of natural numbers (ordered by divisibility) to the power set of natural numbers (ordered by set inclusion) can be defined by taking each number to the set of its prime divisors. It is order-preserving: if x divides y, then each prime divisor of x is also a prime divisor of y. However, it is neither injective (since it maps both 12 and 6 to {2, 3}) nor order-reflecting (since 12 does not divide 6). Taking instead each number to the set of its prime power divisors defines a map g that is order-preserving, order-reflecting, and hence an order-embedding. It is not an order-isomorphism (since it, for instance, does not map any number to the set {4}), but it can be made one by restricting its codomain to g(ℕ). Fig. 7b shows a subset of ℕ and its isomorphic image under g. The construction of such an order-isomorphism into a power set can be generalized to a wide class of partial orders, called distributive lattices; see Birkhoff's representation theorem.

Number of partial orders

Sequence A001035 in OEIS gives the number of partial orders on a set of n labeled elements: 1, 1, 3, 19, 219, 4231, 130023, 6129859, 431723379, ... (beginning with n = 0). The number of strict partial orders is the same as that of partial orders. If the count is made only up to isomorphism, the sequence 1, 1, 2, 5, 16, 63, 318, ... is obtained.

Subposets

A poset P* = (X*, ≤*) is called a subposet of another poset P = (X, ≤) provided that X* is a subset of X and ≤* is a subset of ≤. The latter condition is equivalent to the requirement that for any x and y in X* (and thus also in X), if x ≤* y then x ≤ y. If P* is a subposet of P and furthermore, for all x and y in X*, whenever x ≤ y we also have x ≤* y, then we call P* the subposet of P induced by X*.

Linear extension

A partial order ≤* on a set X is called an extension of another partial order ≤ on X provided that for all elements x and y of X, whenever x ≤ y, it is also the case that x ≤* y. A linear extension is an extension that is also a linear (that is, total) order. As a classic example, the lexicographic order of totally ordered sets is a linear extension of their product order. Every partial order can be extended to a total order (order-extension principle). In computer science, algorithms for finding linear extensions of partial orders (represented as the reachability orders of directed acyclic graphs) are called topological sorting.

In category theory

Every poset (and every preordered set) may be considered as a category where, for objects x and y, there is at most one morphism from x to y. More explicitly, let hom(x, y) = {(x, y)} if x ≤ y (and otherwise the empty set) and (y, z)∘(x, y) = (x, z). Such categories are sometimes called posetal. Posets are equivalent to one another if and only if they are isomorphic. In a poset, the smallest element, if it exists, is an initial object, and the largest element, if it exists, is a terminal object. Also, every preordered set is equivalent to a poset. Finally, every subcategory of a poset is isomorphism-closed.

Partial orders in topological spaces

If P is a partially ordered set that has also been given the structure of a topological space, then it is customary to assume that {(a, b) : a ≤ b} is a closed subset of the topological product space P × P. Under this assumption partial order relations are well behaved at limits in the sense that if a_i → a, and b_i → b, and a_i ≤ b_i for all i, then a ≤ b.

Intervals

A convex set in a poset P is a subset I of P with the property that, for any x and y in I and any z in P, if x ≤ z ≤ y, then z is also in I. This definition generalizes the definition of intervals of real numbers.
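Returning to the order-embedding g described above, which sends each natural number to the set of its prime-power divisors, its two defining properties can be checked by brute force on an initial segment of the natural numbers. A minimal Python sketch; the helper names are invented here, and a finite check is of course no substitute for the general argument:

def prime_power_divisors(n):
    """All divisors of n of the form p**k, p prime, k >= 1 (for example 12 -> {2, 4, 3})."""
    divisors, remaining, p = set(), n, 2
    while p * p <= remaining:
        if remaining % p == 0:
            power = p
            while n % power == 0:
                divisors.add(power)
                power *= p
            while remaining % p == 0:
                remaining //= p
        p += 1
    if remaining > 1:                 # one prime factor is left over
        power = remaining
        while n % power == 0:
            divisors.add(power)
            power *= remaining
    return divisors

# Brute-force check of the two defining properties of an order-embedding on 1..60:
ppd = {n: prime_power_divisors(n) for n in range(1, 61)}
order_preserving = all(ppd[a] <= ppd[b] for a in ppd for b in ppd if b % a == 0)
order_reflecting = all(b % a == 0 for a in ppd for b in ppd if ppd[a] <= ppd[b])
print(order_preserving, order_reflecting)   # True True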
When there is possible confusion with convex sets of geometry, one uses order-convex instead of "convex". A convex sublattice of a lattice L is a sublattice of L that is also a convex set of L. Every nonempty convex sublattice can be uniquely represented as the intersection of a filter and an ideal of L.

An interval in a poset P is a subset that can be defined with interval notation: For a ≤ b, the closed interval [a, b] is the set of elements x satisfying a ≤ x ≤ b (that is, a ≤ x and x ≤ b). It contains at least the elements a and b. Using the corresponding strict relation "<", the open interval (a, b) is the set of elements x satisfying a < x < b (i.e. a < x and x < b). An open interval may be empty even if a < b. For example, the open interval (0, 1) on the integers is empty since there is no integer x such that 0 < x < 1. The half-open intervals [a, b) and (a, b] are defined similarly. Whenever a ≤ b does not hold, all these intervals are empty.

Every interval is a convex set, but the converse does not hold; for example, in the poset of divisors of 120, ordered by divisibility (see Fig. 7b), the set {1, 2, 4, 5, 8} is convex, but not an interval.

An interval I is bounded if there exist elements a and b of P such that I ⊆ [a, b]. Every interval that can be represented in interval notation is obviously bounded, but the converse is not true. For example, let P = (0, 1) ∪ (1, 2) ∪ (2, 3) as a subposet of the real numbers. The subset (1, 2) is a bounded interval, but it has no infimum or supremum in P, so it cannot be written in interval notation using elements of P.

A poset is called locally finite if every bounded interval is finite. For example, the integers are locally finite under their natural ordering. The lexicographical order on the cartesian product ℕ × ℕ is not locally finite, since, for instance, (1, 2) ≤ (1, 3) ≤ (1, 4) ≤ ... ≤ (2, 1). Using the interval notation, the property "a is covered by b" can be rephrased equivalently as [a, b] = {a, b}.

This concept of an interval in a partial order should not be confused with the particular class of partial orders known as the interval orders.

See also

Antimatroid, a formalization of orderings on a set that allows more general families of orderings than posets
Causal set, a poset-based approach to quantum gravity
Nested set collection
Poset topology, a kind of topological space that can be defined from any poset
Scott continuity – continuity of a function between two partial orders.
Szpilrajn extension theorem – every partial order is contained in some total order.
Strict weak ordering – strict partial order "<" in which the relation "neither a < b nor b < a" is transitive.

Notes

Citations

References

External links

Order theory
Binary relations
23574
https://en.wikipedia.org/wiki/Psyche
Psyche
Psyche (Psyché in French) is the Greek term for "soul" (ψυχή). Psyche or La Psyché may also refer to: Psychology Psyche (psychology), the totality of the human mind, conscious and unconscious Psyche, an 1846 book about the unconscious by Carl Gustav Carus Psyche, an 1890–1894 book about the ancient Greek concept of soul by Erwin Rohde Psyche (consciousness journal), a periodical on the study of consciousness Psyche, a digital magazine on psychology published by Aeon Psyche Cattell, (1893–1989), American psychologist Religion and mythology Psyche (mythology), a mortal woman in Greek mythology who became the wife of Eros and the goddess of the soul Soul in the Bible, spirit or soul in Judaic and Christian philosophy and theology Arts and media Based on Cupid and Psyche The story of Cupid and Psyche, mainly known from the Latin novel by Apuleius, and depicted in many forms: Cupid and Psyche (Capitoline Museums), a Roman statue Marlborough gem, a 1st-century carved cameo Landscape with Psyche Outside the Palace of Cupid, a 1664 painting by Claude Lorrain, National Gallery London Psyché (play), a 1671 tragedy-ballet by Molière Psyche (Locke), a semi-opera of 1675 with music by Matthew Locke Psyché (opera), a 1678 opera with music by Jean-Baptiste Lully A 1714 violin sonata by Italian composer Michele Mascitti Psyche Revived by Cupid's Kiss a sculpture of 1793 by Antonio Canova Psyche, a six-canto allegorical poem by Mary Tighe first published in 1805 Cupid and Psyche (Thorvaldsen), a sculpture of 1808, Copenhagen Love and Psyche (David), a painting of 1817, now in Cleveland Eros and Psyche (Robert Bridges), poem of 1885 An 1888 symphonic poem by Belgian composer César Franck An 1898 fairy tale by Louis Couperus A 1924 classical music composition by Manuel de Falla Music Psyche (band), a Canadian dark synthpop music group (formed 1982) Psyche (album), a 1994 album by PJ & Duncan A 2009 electronica song by Massive Attack on Splitting the Atom "Psyche-Out", a 1963 instrumental by The Original Surfaris The Psyche (Revolutionary Ensemble album), 1975 Other media A 1972 fictive anthology by Sándor Weöres Danielle Moonstar, a character in the Marvel Comics universe "Psyche" (Duckman), a 1994 episode of Duckman Science and technology Biology Psyche (entomology journal), a periodical on entomology Psyche (moth), a genus of moths in the bagworm family (Psychidae) Leptosia nina, or Psyche, a species of butterfly Other uses in science and technology 16 Psyche, an asteroid Psyche (Red Hat Linux), code name for v8.0 (2002) Vessels Psyche (spacecraft), a NASA orbiter of the metallic asteroid 16 Psyche HMS Psyche, one of various British naval ships USS Psyche V (SP-9), a United States patrol vessel , various ships of the French Navy See also Psy (disambiguation) Psych (disambiguation) Psycho (disambiguation) Psychic (disambiguation) Psychedelic (disambiguation) Soul (disambiguation) Greek
23575
https://en.wikipedia.org/wiki/Parmenides
Parmenides
Parmenides of Elea (; ; fl. late sixth or early fifth century BC) was a pre-Socratic Greek philosopher from Elea in Magna Graecia. Parmenides was born in the Greek colony of Elea, from a wealthy and illustrious family. His dates are uncertain; according to doxographer Diogenes Laërtius, he flourished just before 500 BC, which would put his year of birth near 540 BC, but in the dialogue Parmenides Plato has him visiting Athens at the age of 65, when Socrates was a young man, , which, if true, suggests a year of birth of . He is thought to have been in his prime (or "floruit") around 475 BC. The single known work by Parmenides is a poem whose original title is unknown but which is often referred to as On Nature. Only fragments of it survive. In his poem, Parmenides prescribes two views of reality. The first, the Way of "Aletheia" or truth, describes how all reality is one, change is impossible, and existence is timeless and uniform. The second view, the way of "Doxa", or opinion, describes the world of appearances, in which one's sensory faculties lead to conceptions which are false and deceitful. Parmenides has been considered the founder of ontology and has, through his influence on Plato, influenced the whole history of Western philosophy. He is also considered to be the founder of the Eleatic school of philosophy, which also included Zeno of Elea and Melissus of Samos. Zeno's paradoxes of motion were developed to defend Parmenides's views. In contemporary philosophy, Parmenides's work has remained relevant in debates about the philosophy of time. Biography Parmenides was born in Elea (called Velia in Roman times), a city located in Magna Graecia. Diogenes Laertius says that his father was Pires, and that he belonged to a rich and noble family. Laertius transmits two divergent sources regarding the teacher of the philosopher. One, dependent on Sotion, indicates that he was first a student of Xenophanes, but did not follow him, and later became associated with a Pythagorean, Aminias, whom he preferred as his teacher. Another tradition, dependent on Theophrastus, indicates that he was a disciple of Anaximander. Chronology Everything related to the chronology of Parmenides—the dates of his birth and death, and the period of his philosophical activity—is uncertain. Date of birth All conjectures regarding Parmenides's date of birth are based on two ancient sources. One comes from Apollodorus and is transmitted to us by Diogenes Laertius: this source marks the Olympiad 69th (between 504 BC and 500 BC) as the moment of maturity, placing his birth 40 years earlier (544 BC – 540 BC). The other is Plato, in his dialogue Parmenides. There Plato composes a situation in which Parmenides, 65, and Zeno, 40, travel to Athens to attend the Panathenaic Games. On that occasion they meet Socrates, who was still very young according to the Platonic text. The inaccuracy of the dating from Apollodorus is well known, who chooses the date of a historical event to make it coincide with the maturity (the floruit) of a philosopher, a maturity that he invariably reached at forty years of age. He tries to always match the maturity of a philosopher with the birth of his alleged disciple. In this case Apollodorus, according to Burnet, based his date of the foundation of Elea (540 BC) to chronologically locate the maturity of Xenophanes and thus the birth of his supposed disciple, Parmenides. 
Knowing this, Burnet and later classicists like Cornford, Raven, Guthrie, and Schofield preferred to base the calculations on the Platonic dialogue. According to the latter, the fact that Plato adds so much detail regarding ages in his text is a sign that he writes with chronological precision. Plato says that Socrates was very young, and this is interpreted to mean that he was less than twenty years old. We know the year of Socrates' death (399 BC) and his age—he was about seventy years old–making the date of his birth 469 BC. The Panathenaic games were held every four years, and of those held during Socrates' youth (454, 450, 446), the most likely is that of 450 BC, when Socrates was nineteen years old. Thus, if at this meeting Parmenides was about sixty-five years old, his birth occurred around 515 BC. However, neither Raven nor Schofield, who follows the former, finds a dating based on a late Platonic dialogue entirely satisfactory. Other scholars directly prefer not to use the Platonic testimony and propose other dates. According to a scholar of the Platonic dialogues, R. Hirzel, Conrado Eggers Lan indicates that the historical has no value for Plato. The fact that the meeting between Socrates and Parmenides is mentioned in the dialogues Theaetetus (183e) and Sophist (217c) only indicates that it is referring to the same fictional event, and this is possible because both the Theaetetus and the Sophist are considered after the Parmenides. In Soph. 217c the dialectic procedure of Socrates is attributed to Parmenides, which would confirm that this is nothing more than a reference to the fictitious dramatic situation of the dialogue. Eggers Lan proposes a correction of the traditional date of the foundation of Elea. Based on Herodotus I, 163–167, which indicates that the Phocians, after defeating the Carthaginians in naval battle, founded Elea, and adding the reference to Thucydides I, 13, where it is indicated that such a battle occurred in the time of Cambyses II, the foundation of Elea can be placed between 530 BC and 522 BC So Parmenides could not have been born before 530 BC or after 520 BC, given that it predates Empedocles. This last dating procedure is not infallible either, because it has been questioned that the fact that links the passages of Herodotus and Thucydides is the same. Nestor Luis Cordero also rejects the chronology based on the Platonic text, and the historical reality of the encounter, in favor of the traditional date of Apollodorus. He follows the traditional datum of the founding of Elea in 545 BC, pointing to it not only as terminus post quem, but as a possible date of Parmenides's birth, from which he concludes that his parents were part of the founding contingent of the city and that he was a contemporary of Heraclitus. The evidence suggests that Parmenides could not have written much after the death of Heraclitus. Timeline relative to other Presocratics Beyond the speculations and inaccuracies about his date of birth, some specialists have turned their attention to certain passages of his work to specify the relationship of Parmenides with other thinkers. It was thought to find in his poem certain controversial allusions to the doctrine of Anaximenes and the Pythagoreans (fragment B 8, verse 24, and frag. B 4), and also against Heraclitus (frag .B 6, vv.8–9), while Empedocles and Anaxagoras frequently refer to Parmenides. The reference to Heraclitus has been debated. 
Bernays's thesis that Parmenides attacks Heraclitus, to which Diels, Kranz, Gomperz, Burnet and others adhered, was discussed by Reinhardt, whom Jaeger followed. Guthrie finds it surprising that Heraclitus would not have censured Parmenides if he had known him, as he did with Xenophanes and Pythagoras. His conclusion, however, does not arise from this consideration, but points out that, due to the importance of his thought, Parmenides splits the history of pre-Socratic philosophy in two; therefore his position with respect to other thinkers is easy to determine. From this point of view, the philosophy of Heraclitus seems to him pre-Parmenidean, while those of Empedocles, Anaxagoras and Democritus are post-Parmenidean. Anecdotes Plutarch, Strabo and Diogenes—following the testimony of Speusippus—agree that Parmenides participated in the government of his city, organizing it and giving it a code of admirable laws. Archaeological discovery In 1969, the plinth of a statue dated to the 1st century AD was excavated in Velia. On the plinth were four words: ΠΑ[Ρ]ΜΕΝΕΙΔΗΣ ΠΥΡΗΤΟΣ ΟΥΛΙΑΔΗΣ ΦΥΣΙΚΟΣ. The first two clearly read "Parmenides, son of Pires." The fourth word φυσικός (fysikós, "physicist") was commonly used to designate philosophers who devoted themselves to the observation of nature. On the other hand, there is no agreement on the meaning of the third (οὐλιάδης, ouliadēs): it can simply mean "a native of Elea" (the name "Velia" is in Greek Οὐέλια), or "belonging to the Οὐλιος" (Ulios), that is, to a medical school ( the patron of which was Apollo Ulius). If this last hypothesis were true, then Parmenides would be, in addition to being a legislator, a doctor. The hypothesis is reinforced by the ideas contained in fragment 18 of his poem, which contains anatomical and physiological observations. However, other specialists believe that the only certainty we can extract from the discovery is that of the social importance of Parmenides in the life of his city, already indicated by the testimonies that indicate his activity as a legislator. Visit to Athens Plato, in his dialogue Parmenides, relates that, accompanied by his disciple Zeno of Elea, Parmenides visited Athens when he was approximately sixty-five years old and that, on that occasion, Socrates, then a young man, conversed with him. Athenaeus of Naucratis had noted that, although the ages make a dialogue between Parmenides and Socrates hardly possible, the fact that Parmenides has sustained arguments similar to those sustained in the Platonic dialogue is something that seems impossible. Most modern classicists consider the visit to Athens and the meeting and conversation with Socrates to be fictitious. Allusions to this visit in other Platonic works are only references to the same fictitious dialogue and not to a historical fact. On Nature Parmenides's sole work, which has only survived in fragments, is a poem in dactylic hexameter, later titled On Nature. Approximately 160 verses remain today from an original total that was probably near 800. The poem was originally divided into three parts: an introductory proem that contains an allegorical narrative which explains the purpose of the work, a former section known as "The Way of Truth" (aletheia, ἀλήθεια), and a latter section known as "The Way of Appearance/Opinion" (doxa, δόξα). 
Despite the poem's fragmentary nature, the general plan of both the proem and the first part, "The Way of Truth" have been ascertained by modern scholars, thanks to large excerpts made by Sextus Empiricus and Simplicius of Cilicia. Unfortunately, the second part, "The Way of Opinion", which is supposed to have been much longer than the first, only survives in small fragments and prose paraphrases. Introduction The introductory proem describes the narrator's journey to receive a revelation from an unnamed goddess on the nature of reality. The remainder of the work is then presented as the spoken revelation of the goddess without any accompanying narrative. The narrative of the poet's journey includes a variety of allegorical symbols, such as a speeding chariot with glowing axles, horses, the House of Night, Gates of the paths of Night and Day, and maidens who are "the daughters of the Sun" who escort the poet from the ordinary daytime world to a strange destination, outside our human paths. The allegorical themes in the poem have attracted a variety of different interpretations, including comparisons to Homer and Hesiod, and attempts to relate the journey towards either illumination or darkness, but there is little scholarly consensus about any interpretation, and the surviving evidence from the poem itself, as well as any other literary use of allegory from the same time period, may be too sparse to ever determine any of the intended symbolism with certainty. The Way of Truth In the Way of Truth, an estimated 90% of which has survived, Parmenides distinguishes between the unity of nature and its variety, insisting in the Way of Truth upon the reality of its unity, which is therefore the object of knowledge, and upon the unreality of its variety, which is therefore the object, not of knowledge, but of opinion. This contrasts with the argument in the section called "the way of opinion", which discusses that which is illusory. The Way of Opinion In the significantly longer, but far worse preserved latter section of the poem, Way of Opinion, Parmenides propounds a theory of the world of seeming and its development, pointing out, however, that, in accordance with the principles already laid down, these cosmological speculations do not pretend to anything more than mere appearance. The structure of the cosmos is a fundamental binary principle that governs the manifestations of all the particulars: "the Aether fire of flame" (B 8.56), which is gentle, mild, soft, thin and clear, and self-identical, and the other is "ignorant night", body thick and heavy. Cosmology originally comprised the greater part of his poem, explaining the world's origins and operations. Some idea of the sphericity of the Earth also seems to have been known to Parmenides. Legacy As the first of the Eleatics, Parmenides is generally credited with being the philosopher who first defined ontology as a separate discipline distinct from theology. His most important pupil was Zeno, who appears alongside him in Plato's Parmenides where they debate dialectic with Socrates. The pluralist theories of Empedocles and Anaxagoras and the atomist Leucippus, and Democritus have also been seen as a potential response to Parmenides's arguments and conclusions. Parmenides is also mentioned in Plato's Sophist and Theaetetus. Later Hellenistic doxographers also considered Parmenides to have been a pupil of Xenophanes. 
Eusebius of Caesarea, quoting Aristocles of Messene, says that Parmenides was part of a line of skeptical philosophy that culminated in Pyrrhonism for he, by the root, rejects the validity of perception through the senses whilst, at any rate, it is first through our five forms of senses that we become aware of things and then by faculty of reasoning. Parmenides's proto-monism of the One also influenced Plotinus and Neoplatonism. Notes Fragments Citations Bibliography Ancient testimony In the Diels–Kranz numbering for testimony and fragments of Pre-Socratic philosophy, Parmenides is catalogued as number 28. The most recent edition of this catalogue is: Life and doctrines A1. A2. A3. A4. A5. A6. A7. A8. A9. A10. A11. A12. Fragments Modern scholarship Further reading Bakalis, Nikolaos (2005), Handbook of Greek Philosophy: From Thales to the Stoics Analysis and Fragments, Trafford Publishing, Cordero, Nestor-Luis (2004), By Being, It Is: The Thesis of Parmenides. Parmenides Publishing, Cordero, Néstor-Luis (ed.), Parmenides, Venerable and Awesome (Plato, Theaetetus 183e) Las Vegas: Parmenides Publishing 2011. Proceedings of the International Symposium (Buenos Aires, 2007), Coxon, but A. H. (2009), The Fragments of Parmenides: A Critical Text With Introduction and Translation, the Ancient Testimonia and a Commentary. Las Vegas, Parmenides Publishing (new edition of Coxon 1986), Curd, Patricia (2011), A Presocratics Reader: Selected Fragments and Testimonia, Hackett Publishing, (Second edition Indianapolis/Cambridge 2011) Hermann, Arnold (2005), To Think Like God: Pythagoras and Parmenides-The Origins of Philosophy, Fully Annotated Edition, Parmenides Publishing, Hermann, Arnold (2010), Plato's Parmenides: Text, Translation & Introductory Essay, Parmenides Publishing, Mourelatos, Alexander P. D. (2008). The Route of Parmenides: A Study of Word, Image, and Argument in the Fragments. Las Vegas: Parmenides Publishing. (First edition Yale University Press 1970) Palmer, John. (2009). Parmenides and Presocratic Philosophy. Oxford: Oxford University Press. Extensive bibliography (up to 2004) by Nestor-Luis Cordero; and annotated bibliography by Raul Corazzon External links "Lecture Notes: Parmenides", S. Marc Cohen, University of Washington Parmenides and the Question of Being in Greek Thought with a selection of critical judgments Parmenides of Elea: Critical Editions and Translations – annotated list of the critical editions and of the English, German, French, Italian and Spanish translations Fragments of Parmenides – parallel Greek with links to Perseus, French, and English (Burnet) includes Parmenides article from Encyclopædia Britannica Eleventh Edition 5th-century BC Greek philosophers 5th-century BC poets 510s BC births 450s BC deaths Eleatic philosophers Ancient Greek epistemologists Lucanian Greeks Ancient Greek metaphysicians Ancient Greek physicists Ontologists Philosophers of Magna Graecia Ancient Greek philosophers of mind People from the Province of Salerno
23576
https://en.wikipedia.org/wiki/Tetraodontidae
Tetraodontidae
Tetraodontidae is a family of primarily marine and estuarine fish of the order Tetraodontiformes. The family includes many familiar species variously called pufferfish, puffers, balloonfish, blowfish, blowers, blowies, bubblefish, globefish, swellfish, toadfish, toadies, toadle, honey toads, sugar toads, and sea squab. They are morphologically similar to the closely related porcupinefish, which have large external spines (unlike the thinner, hidden spines of the Tetraodontidae, which are only visible when the fish have puffed up). The majority of pufferfish species are toxic, with some among the most poisonous vertebrates in the world. In certain species, the internal organs, such as the liver, and sometimes the skin, contain tetrodotoxin, and are highly toxic to most animals when eaten; nevertheless, the meat of some species is considered a delicacy in Japan (as 河豚, pronounced fugu), Korea (as 복, bok, or 복어, bogeo), and China (as 河豚, hétún) when prepared by specially trained chefs who know which part is safe to eat and in what quantity. Other pufferfish species with nontoxic flesh, such as the northern puffer, Sphoeroides maculatus, of Chesapeake Bay, are considered a delicacy elsewhere. The species Torquigener albomaculosus was described by David Attenborough as "the greatest artist of the animal kingdom" due to the males' unique habit of wooing females by creating nests in sand composed of complex geometric designs. Taxonomy The family name comes from the name of its type genus Tetraodon, it is traced from the Greek words tetra meaning "four" and odoús meaning "teeth" because its species has four large teeth fused into upper and lower plates used to crush the hard shells of crustaceans and mollusks, their natural prey. Genera The Tetraodontidae contain 193 species of puffers in 28 genera: Amblyrhynchotes Troschel, 1856 Arothron Müller, 1841 Auriglobus Kottelat, 1999 Canthigaster Swainson, 1839 Carinotetraodon Benl, 1957 Chelonodon Müller, 1841 Chonerhinos Bleeker, 1854 Colomesus Gill, 1884 Contusus Whitley, 1947 Dichotomyctere Duméril, 1855 Ephippion Bibron, 1855 Feroxodon Su, Hardy et Tyler, 1986 Guentheridia Gilbert et Starks, 1904 Javichthys Hardy, 1985 Leiodon Swainson, 1839 Lagocephalus Swainson, 1839 Marilyna Hardy, 1982 Omegophora Whitley, 1934 Pelagocephalus Tyler & Paxton, 1979 Polyspina Hardy, 1983 Pao Kottelat, 2013 Reicheltia Hardy, 1982 Sphoeroides Anonymous, 1798 Takifugu Abe, 1949 Tetractenos Hardy, 1983 Tetraodon Linnaeus, 1758 Torquigener Whitley, 1930 Tylerius Hardy, 1984 Morphology Pufferfish are typically small to medium in size, although a few species such as the Mbu pufferfish can reach lengths greater than . Tetraodontiformes, or pufferfish, are most significantly characterized by the beak-like four teeth – hence the name combining the Greek terms "tetra" for four and "odous" for tooth. Each of the top and bottom arches is fused together with a visible midsagittal demarcation, which are used to break apart and consume small crustaceans. The lack of ribs, a pelvis, and pectoral fins are also unique to pufferfish. The notably missing bone and fin features are due to the pufferfish' specialized defense mechanism, expanding by sucking in water through an oral cavity. Pufferfish can also have many varied structures of caltrop-like dermal spines, which account for the replacement of typical fish scales, and can range in coverage extent from the entire body, to leaving the frontal surface empty. 
Tetraodontidae typically have smaller spines than the sister family Diodontidae, with some spines not being visible until inflation. Distribution They are most diverse in the tropics, relatively uncommon in the temperate zone, and completely absent from cold waters. Ecology and life history Most pufferfish species live in marine or brackish waters, but some can enter fresh water. About 35 species spend their entire lifecycles in fresh water. These freshwater species are found in disjunct tropical regions of South America (Colomesus asellus and Colomesus tocantinensis), Africa (six Tetraodon species), and Southeast Asia (Auriglobus, Carinotetraodon, Dichotomyctere, Leiodon and Pao). Natural defenses The puffer's unique and distinctive natural defenses help compensate for its slow locomotion. It moves by combining pectoral, dorsal, anal, and caudal fin motions. This makes it highly maneuverable, but very slow, so a comparatively easy predation target. Its tail fin is mainly used as a rudder, but it can be used for a sudden evasive burst of speed that shows none of the care and precision of its usual movements. The puffer's excellent eyesight, combined with this speed burst, is the first and most important defense against predators. The pufferfish's secondary defense mechanism, used if successfully pursued, is to fill its extremely elastic stomach with water (or air when outside the water) until it is much larger and almost spherical in shape. Even if they are not visible when the puffer is not inflated, all puffers have pointed spines, so a hungry predator may suddenly find itself facing an unpalatable, pointy ball rather than a slow, easy meal. Predators that do not heed this warning (or are "lucky" enough to catch the puffer suddenly, before or during inflation) may die from choking, and predators that do manage to swallow the puffer may find their stomachs full of tetrodotoxin (TTX), making puffers an unpleasant, possibly lethal, choice of prey. This neurotoxin is found primarily in the ovaries and liver, although smaller amounts exist in the intestines and skin, as well as trace amounts in muscle. It does not always have a lethal effect on large predators, such as sharks, but it can kill humans. Larval pufferfish are chemically defended by the presence of TTX on the surface of skin, which causes predators to spit them out. Not all puffers are necessarily poisonous; the flesh of the northern puffer is not toxic (a level of poison can be found in its viscera) and it is considered a delicacy in North America. Toxin level varies widely even in fish that are poisonous. A puffer's neurotoxin is not necessarily as toxic to other animals as it is to humans, and puffers are eaten routinely by some species of fish, such as lizardfish and sharks. Puffers are able to move their eyes independently, and many species can change the color or intensity of their patterns in response to environmental changes. In these respects, they are somewhat similar to the terrestrial chameleon. Although most puffers are drab, many have bright colors and distinctive markings, and make no attempt to hide from predators. This is likely an example of honestly signaled aposematism. Dolphins have been filmed expertly handling pufferfish amongst themselves in an apparent attempt to get intoxicated or enter a trance-like state. Reproduction Many marine puffers have a pelagic, or open-ocean, life stage. Spawning occurs after males slowly push females to the water surface or join females already present. 
The eggs are spherical and buoyant. Hatching occurs after roughly four days. The fry are tiny, but under magnification have a shape usually reminiscent of a pufferfish. They have a functional mouth and eyes, and must eat within a few days. Brackish-water puffers may breed in bays in a manner similar to marine species, or may breed more similarly to the freshwater species, in cases where they have moved far enough upriver. Reproduction in freshwater species varies quite a bit. The dwarf puffers court with males following females, possibly displaying the crests and keels unique to this subgroup of species. After the female accepts his advances, she will lead the male into plants or another form of cover, where she can release eggs for fertilization. The male may help her by rubbing against her side. This has been observed in captivity, and they are the only commonly captive-spawned puffer species. Target-group puffers have also been spawned in aquaria, and follow a similar courting behavior, minus the crest/keel display. Eggs are laid, though, on a flat piece of slate or other smooth, hard material, to which they adhere. The male will guard them until they hatch, carefully blowing water over them regularly to keep the eggs healthy. His parenting is finished when the young hatch and the fry are on their own. In 2012, males of the species Torquigener albomaculosus were documented while carving large and complex geometric, circular structures in the seabed sand in Amami Ōshima, Japan. The structures serve to attract females and to provide a safe place for them to lay their eggs. Information on breeding of specific species is very limited. T. nigroviridis, the green-spotted puffer, has recently been spawned artificially under captive conditions. It is believed to spawn in bays in a similar manner to saltwater species, as their sperm was found to be motile only at full marine salinities, but wild breeding has never been observed. Xenopterus naritus has been reported to be the first bred artificially in Sarawak, Northwestern Borneo, in June 2016, and the main purpose was for development of aquaculture of the species. Diet Pufferfish diets can vary depending on their environment. Traditionally, their diet consists mostly of algae and small invertebrates. They can survive on a completely vegetarian diet if their environment is lacking resources, but prefer an omnivorous food selection. Larger species of pufferfish are able to use their beak-like front teeth to break open clams, mussels, and other shellfish. Some species of pufferfish have also been known to enact various hunting techniques ranging from ambush to open-water hunting. Evolution The tetraodontids have been estimated to have diverged from diodontids between 89 and 138 million years ago. The four major clades diverged during the Cretaceous between 80 and 101 million years ago. The oldest known pufferfish genus is Eotetraodon, from the Lutetian epoch of Middle Eocene Europe, with fossils found in Monte Bolca and the Caucasus Mountains. The Monte Bolca species, E. pygmaeus, coexisted with several other tetraodontiforms, including an extinct species of diodontid, primitive boxfish (Proaracana and Eolactoria), and other, totally extinct forms, such as Zignoichthys and the spinacanthids. The extinct genus, Archaeotetraodon is known from Miocene-aged fossils from Europe. Poisoning Pufferfish can be lethal if not served properly. 
Puffer poisoning usually results from consumption of incorrectly prepared puffer soup, fugu chiri, or occasionally from raw puffer meat, sashimi fugu. While chiri is much more likely to cause death, sashimi fugu often causes intoxication, light-headedness, and numbness of the lips. Pufferfish tetrodotoxin deadens the tongue and lips, and induces dizziness and vomiting, followed by numbness and prickling over the body, rapid heart rate, decreased blood pressure, and muscle paralysis. The toxin paralyzes the diaphragm muscle and stops the person who has ingested it from breathing. People who live longer than 24 hours typically survive, although possibly after a coma lasting several days. The source of tetrodotoxin in puffers has been a matter of debate, but it is increasingly accepted that bacteria in the fish's intestinal tract are the source. Saxitoxin, the cause of paralytic shellfish poisoning and red tide, can also be found in certain puffers. Philippines In September 2012, the Bureau of Fisheries and Aquatic Resources in the Philippines issued a warning not to eat puffer fish, after local fishermen died upon consuming puffer fish for dinner. The warning indicated that puffer fish toxin is 100 times more potent than cyanide. Thailand Pufferfish, called pakapao in Thailand, are usually consumed by mistake. They are often cheaper than other fish, and because they contain inconsistent levels of toxins between fish and season, there is little awareness or monitoring of the danger. Consumers are regularly hospitalized and some even die from the poisoning. United States Cases of neurological symptoms, including numbness and tingling of the lips and mouth, have been reported to rise after the consumption of puffers caught in the area of Titusville, Florida, US. The symptoms generally resolve within hours to days, although one affected individual required intubation for 72 hours. As a result, Florida banned the harvesting of puffers from certain bodies of water. Treatment Treatment is mainly supportive and consists of intestinal decontamination with gastric lavage and activated charcoal, and life-support until the toxin is metabolized. Case reports suggest anticholinesterases such as edrophonium may be effective. See also Shimonoseki – Japanese city known for its locally caught pufferfish Toado – common Australian name for local varieties of pufferfish References Further reading Ebert, Klaus (2001): The Puffers of Fresh and Brackish Water, Aqualog, . Commercial fish Aposematic animals Ray-finned fish families Taxa named by Charles Lucien Bonaparte Extant Lutetian first appearances
23577
https://en.wikipedia.org/wiki/Partial%20function
Partial function
In mathematics, a partial function f from a set X to a set Y is a function from a subset S of X (possibly the whole of X itself) to Y. The subset S, that is, the domain of f viewed as a function, is called the domain of definition or natural domain of f. If S equals X, that is, if f is defined on every element in X, then f is said to be a total function.

More technically, a partial function is a binary relation over two sets that associates to every element of the first set at most one element of the second set; it is thus a univalent relation. This generalizes the concept of a (total) function by not requiring every element of the first set to be associated to an element of the second set.

A partial function is often used when its exact domain of definition is not known or difficult to specify. This is the case in calculus, where, for example, the quotient of two functions is a partial function whose domain of definition cannot contain the zeros of the denominator. For this reason, in calculus, and more generally in mathematical analysis, a partial function is generally called simply a function. In computability theory, a general recursive function is a partial function from the integers to the integers; no algorithm can exist for deciding whether an arbitrary such function is in fact total.

When arrow notation is used for functions, a partial function f from X to Y is sometimes written as f : X ⇀ Y or f : X ↪ Y. However, there is no general convention, and the latter notation is more commonly used for inclusion maps or embeddings. Specifically, for a partial function f : X ⇀ Y and any x ∈ X, one has either: f(x) = y ∈ Y (it is a single element in Y), or f(x) is undefined.

For example, if f is the square root function restricted to the integers, defined by: f(n) = m if, and only if, m² = n with m and n non-negative integers, then f(n) is only defined if n is a perfect square (that is, 0, 1, 4, 9, 16, ...). So f(25) = 5, but f(26) is undefined.

Basic concepts

A partial function arises from the consideration of maps between two sets X and Y that may not be defined on the entire set X. A common example is the square root operation on the real numbers: because negative real numbers do not have real square roots, the operation can be viewed as a partial function from the reals to the reals. The domain of definition of a partial function is the subset S of X on which the partial function is defined; in this case, the partial function may also be viewed as a function from S to Y. In the example of the square root operation, the set S consists of the nonnegative real numbers.

The notion of partial function is particularly convenient when the exact domain of definition is unknown or even unknowable. For a computer-science example of the latter, see Halting problem.

In case the domain of definition S is equal to the whole set X, the partial function is said to be total. Thus, total partial functions from X to Y coincide with functions from X to Y.

Many properties of functions can be extended in an appropriate sense to partial functions. A partial function is said to be injective, surjective, or bijective when the function given by the restriction of the partial function to its domain of definition is injective, surjective, or bijective respectively. Because a function is trivially surjective when restricted to its image, the term partial bijection denotes a partial function which is injective. An injective partial function may be inverted to an injective partial function, and a partial function which is both injective and surjective has an injective function as inverse. Furthermore, a function which is injective may be inverted to a bijective partial function. The notion of transformation can be generalized to partial functions as well.
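In programming terms, a partial function such as the restricted square root above is usually modelled either by raising an exception outside the domain of definition or by returning an explicit "no value" marker. A minimal Python sketch of that example, using None as the marker (one convention among several, chosen here purely for illustration):

from math import isqrt
from typing import Optional

def f(n: int) -> Optional[int]:
    """Integer square root as a partial function: defined only when n is a perfect square.
    'Undefined' is signalled here by returning None; raising an exception is another common choice."""
    if n >= 0:
        r = isqrt(n)
        if r * r == n:
            return r
    return None   # n is outside the domain of definition

print(f(25))   # 5
print(f(26))   # None, i.e. f(26) is undefined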
A partial transformation is a function f : A → B, where both A and B are subsets of some set X.

Function spaces

For convenience, denote the set of all partial functions from a set X to a set Y by [X ⇀ Y]. This set is the union of the sets of functions defined on subsets D of X with the same codomain Y: [X ⇀ Y] = ⋃_{D ⊆ X} [D → Y], the latter also written as ⋃_{D ⊆ X} Y^D. In the finite case, its cardinality is (|Y| + 1)^|X|, because any partial function can be extended to a function by any fixed value c not contained in Y, so that the codomain becomes Y ∪ {c}, an operation which is injective (unique and invertible by restriction).

Discussion and examples

The first diagram at the top of the article represents a partial function that is not a function, since the element 1 in the left-hand set is not associated with anything in the right-hand set. Whereas, the second diagram represents a function since every element on the left-hand set is associated with exactly one element in the right-hand set.

Natural logarithm

Consider the natural logarithm function mapping the real numbers to themselves. The logarithm of a non-positive real is not a real number, so the natural logarithm function doesn't associate any real number in the codomain with any non-positive real number in the domain. Therefore, the natural logarithm function is not a function when viewed as a function from the reals to themselves, but it is a partial function. If the domain is restricted to only include the positive reals (that is, if the natural logarithm function is viewed as a function from the positive reals to the reals), then the natural logarithm is a function.

Subtraction of natural numbers

Subtraction of natural numbers (in which ℕ is the non-negative integers) is a partial function f : ℕ × ℕ ⇀ ℕ with f(x, y) = x − y. It is defined only when x ≥ y.

Bottom element

In denotational semantics a partial function is considered as returning the bottom element when it is undefined. In computer science a partial function corresponds to a subroutine that raises an exception or loops forever. The IEEE floating point standard defines a not-a-number value which is returned when a floating point operation is undefined and exceptions are suppressed, e.g. when the square root of a negative number is requested. In a programming language where function parameters are statically typed, a function may be defined as a partial function because the language's type system cannot express the exact domain of the function, so the programmer instead gives it the smallest domain which is expressible as a type and contains the domain of definition of the function.

In category theory

In category theory, when considering the operation of morphism composition in concrete categories, the composition operation is a function if and only if the category has a single object. The reason for this is that two morphisms f : X → Y and g : U → V can only be composed as g ∘ f if Y = U, that is, the codomain of f must equal the domain of g.

The category of sets and partial functions is equivalent to but not isomorphic with the category of pointed sets and point-preserving maps. One textbook notes that "This formal completion of sets and partial maps by adding “improper,” “infinite” elements was reinvented many times, in particular, in topology (one-point compactification) and in theoretical computer science." The category of sets and partial bijections is equivalent to its dual. It is the prototypical inverse category.

In abstract algebra

Partial algebra generalizes the notion of universal algebra to partial operations. An example would be a field, in which the multiplicative inversion is the only proper partial operation (because division by zero is not defined).
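The correspondence noted above between partial functions and subroutines that return a bottom value or raise an exception suggests a simple encoding in which "undefined" propagates through composition, much as morphism composition is only partially defined once a category has more than one object. A hedged Python sketch, with all names invented for illustration:

from typing import Callable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(g: Callable[[B], Optional[C]],
            f: Callable[[A], Optional[B]]) -> Callable[[A], Optional[C]]:
    """Composition g∘f of partial functions encoded as Optional-returning callables:
    if f is undefined at x, then g∘f is undefined at x as well."""
    def gf(x):
        y = f(x)
        return None if y is None else g(y)
    return gf

def minus_three(x: int) -> Optional[int]:
    """Natural-number subtraction by 3: defined only when x >= 3."""
    return x - 3 if x >= 3 else None

def half(n: int) -> Optional[int]:
    """Halving: defined only on even numbers."""
    return n // 2 if n % 2 == 0 else None

h = compose(half, minus_three)
print(h(7))   # 2     (7 - 3 = 4, then 4 // 2)
print(h(2))   # None  (2 - 3 is undefined over the naturals)
print(h(8))   # None  (8 - 3 = 5 is odd, so the second step is undefined)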
The set of all partial functions (partial transformations) on a given base set X forms a regular semigroup called the semigroup of all partial transformations (or the partial transformation semigroup on X), typically denoted by 𝒫𝒯X. The set of all partial bijections on X forms the symmetric inverse semigroup.

Charts and atlases for manifolds and fiber bundles

Charts in the atlases which specify the structure of manifolds and fiber bundles are partial functions. In the case of manifolds, the domain is the point set of the manifold. In the case of fiber bundles, the domain is the total space of the fiber bundle. In these applications, the most important construction is the transition map, which is the composite of one chart with the inverse of another. The initial classification of manifolds and fiber bundles is largely expressed in terms of constraints on these transition maps. The reason for the use of partial functions instead of functions is to permit general global topologies to be represented by stitching together local patches to describe the global structure. The "patches" are the domains where the charts are defined.

See also

References

Martin Davis (1958), Computability and Unsolvability, McGraw–Hill Book Company, Inc, New York. Republished by Dover in 1982.
Stephen Kleene (1952), Introduction to Meta-Mathematics, North-Holland Publishing Company, Amsterdam, Netherlands, 10th printing with corrections added on 7th printing (1974).
Harold S. Stone (1972), Introduction to Computer Organization and Data Structures, McGraw–Hill Book Company, New York.

Mathematical relations
Functions and mappings
Properties of binary relations
23579
https://en.wikipedia.org/wiki/Photoelectric%20effect
Photoelectric effect
The photoelectric effect is the emission of electrons from a material caused by electromagnetic radiation such as ultraviolet light. Electrons emitted in this manner are called photoelectrons. The phenomenon is studied in condensed matter physics, solid state, and quantum chemistry to draw inferences about the properties of atoms, molecules and solids. The effect has found use in electronic devices specialized for light detection and precisely timed electron emission. The experimental results disagree with classical electromagnetism, which predicts that continuous light waves transfer energy to electrons, which would then be emitted when they accumulate enough energy. An alteration in the intensity of light would theoretically change the kinetic energy of the emitted electrons, with sufficiently dim light resulting in a delayed emission. The experimental results instead show that electrons are dislodged only when the light exceeds a certain frequency—regardless of the light's intensity or duration of exposure. Because a low-frequency beam at a high intensity does not build up the energy required to produce photoelectrons, as would be the case if light's energy accumulated over time from a continuous wave, Albert Einstein proposed that a beam of light is not a wave propagating through space, but a swarm of discrete energy packets, known as photons—term coined by Gilbert N. Lewis in 1926. Emission of conduction electrons from typical metals requires a few electron-volt (eV) light quanta, corresponding to short-wavelength visible or ultraviolet light. In extreme cases, emissions are induced with photons approaching zero energy, like in systems with negative electron affinity and the emission from excited states, or a few hundred keV photons for core electrons in elements with a high atomic number. Study of the photoelectric effect led to important steps in understanding the quantum nature of light and electrons and influenced the formation of the concept of wave–particle duality. Other phenomena where light affects the movement of electric charges include the photoconductive effect, the photovoltaic effect, and the photoelectrochemical effect. Emission mechanism The photons of a light beam have a characteristic energy, called photon energy, which is proportional to the frequency of the light. In the photoemission process, when an electron within some material absorbs the energy of a photon and acquires more energy than its binding energy, it is likely to be ejected. If the photon energy is too low, the electron is unable to escape the material. Since an increase in the intensity of low-frequency light will only increase the number of low-energy photons, this change in intensity will not create any single photon with enough energy to dislodge an electron. Moreover, the energy of the emitted electrons will not depend on the intensity of the incoming light of a given frequency, but only on the energy of the individual photons. While free electrons can absorb any energy when irradiated as long as this is followed by an immediate re-emission, like in the Compton effect, in quantum systems all of the energy from one photon is absorbed—if the process is allowed by quantum mechanics—or none at all. Part of the acquired energy is used to liberate the electron from its atomic binding, and the rest contributes to the electron's kinetic energy as a free particle. 
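Because the argument hinges on the energy carried by each individual photon rather than by the beam as a whole, it can help to put rough numbers on both quantities. The short Python sketch below uses standard constant values; the function names are ours. A 1 mW red beam delivers an enormous number of photons per second, yet each photon carries under 2 eV, below the few-eV binding energies typical of metal surfaces.

H = 6.62607015e-34       # Planck constant, J*s
C = 2.99792458e8         # speed of light in vacuum, m/s
EV = 1.602176634e-19     # joules per electron-volt

def photon_energy_ev(wavelength_m):
    """Energy of a single photon of the given vacuum wavelength, in eV."""
    return H * C / wavelength_m / EV

def photons_per_second(power_w, wavelength_m):
    """Photon arrival rate in a monochromatic beam of the given power.
    Raising the intensity raises this rate, not the energy of each photon."""
    return power_w / (H * C / wavelength_m)

print(round(photon_energy_ev(700e-9), 2))          # about 1.77 eV per red photon
print(f"{photons_per_second(1e-3, 700e-9):.2e}")   # about 3.5e15 photons per second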
Because electrons in a material occupy many different quantum states with different binding energies, and because they can sustain energy losses on their way out of the material, the emitted electrons will have a range of kinetic energies. The electrons from the highest occupied states will have the highest kinetic energy. In metals, those electrons will be emitted from the Fermi level. When the photoelectron is emitted into a solid rather than into a vacuum, the term internal photoemission is often used, and emission into a vacuum is distinguished as external photoemission. Experimental observation of photoelectric emission Even though photoemission can occur from any material, it is most readily observed from metals and other conductors. This is because the process produces a charge imbalance which, if not neutralized by current flow, results in the increasing potential barrier until the emission completely ceases. The energy barrier to photoemission is usually increased by nonconductive oxide layers on metal surfaces, so most practical experiments and devices based on the photoelectric effect use clean metal surfaces in evacuated tubes. Vacuum also helps observing the electrons since it prevents gases from impeding their flow between the electrodes. As sunlight, due to atmosphere's absorption, does not provide much ultraviolet light, the light rich in ultraviolet rays used to be obtained by burning magnesium or from an arc lamp. At the present time, mercury-vapor lamps, noble-gas discharge UV lamps and radio-frequency plasma sources, ultraviolet lasers, and synchrotron insertion device light sources prevail. The classical setup to observe the photoelectric effect includes a light source, a set of filters to monochromatize the light, a vacuum tube transparent to ultraviolet light, an emitting electrode (E) exposed to the light, and a collector (C) whose voltage VC can be externally controlled. A positive external voltage is used to direct the photoemitted electrons onto the collector. If the frequency and the intensity of the incident radiation are fixed, the photoelectric current I increases with an increase in the positive voltage, as more and more electrons are directed onto the electrode. When no additional photoelectrons can be collected, the photoelectric current attains a saturation value. This current can only increase with the increase of the intensity of light. An increasing negative voltage prevents all but the highest-energy electrons from reaching the collector. When no current is observed through the tube, the negative voltage has reached the value that is high enough to slow down and stop the most energetic photoelectrons of kinetic energy Kmax. This value of the retarding voltage is called the stopping potential or cut off potential Vo. Since the work done by the retarding potential in stopping the electron of charge e is eVo, the following must hold eVo = Kmax. The current-voltage curve is sigmoidal, but its exact shape depends on the experimental geometry and the electrode material properties. For a given metal surface, there exists a certain minimum frequency of incident radiation below which no photoelectrons are emitted. This frequency is called the threshold frequency. Increasing the frequency of the incident beam increases the maximum kinetic energy of the emitted photoelectrons, and the stopping voltage has to increase. 
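The linear rise of the stopping voltage with frequency is what allows the Planck constant to be extracted from photoelectric measurements: plotting Vo against the frequency gives a straight line of slope h/e whose intercept encodes the work function. The following sketch uses synthetic data only (no real measurements; the 2.3 eV work function is an assumption made purely for illustration):

import numpy as np

E_CHARGE = 1.602176634e-19   # elementary charge, C
H_TRUE = 6.62607015e-34      # Planck constant, used only to fabricate the synthetic data, J*s
W_EV = 2.3                   # assumed work function, eV (illustrative)

nu = np.linspace(0.7e15, 1.5e15, 9)           # frequencies above threshold, Hz
v_stop = H_TRUE * nu / E_CHARGE - W_EV        # stopping voltages from eVo = h*nu - W, in volts

slope, intercept = np.polyfit(nu, v_stop, 1)  # straight-line fit Vo = (h/e)*nu - W/e
print(f"estimated h: {slope * E_CHARGE:.3e} J*s")       # recovers about 6.63e-34
print(f"estimated work function: {-intercept:.2f} eV")  # recovers about 2.30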
The number of emitted electrons may also change because the probability that each photon results in an emitted electron is a function of photon energy. An increase in the intensity of the same monochromatic light (so long as the intensity is not too high), which is proportional to the number of photons impinging on the surface in a given time, increases the rate at which electrons are ejected—the photoelectric current I—but the kinetic energy of the photoelectrons and the stopping voltage remain the same. For a given metal and frequency of incident radiation, the rate at which photoelectrons are ejected is directly proportional to the intensity of the incident light.

The time lag between the incidence of radiation and the emission of a photoelectron is very small, less than 10−9 second.

Angular distribution of the photoelectrons is highly dependent on polarization (the direction of the electric field) of the incident light, as well as the emitting material's quantum properties such as atomic and molecular orbital symmetries and the electronic band structure of crystalline solids. In materials without macroscopic order, the distribution of electrons tends to peak in the direction of polarization of linearly polarized light. The experimental technique that can measure these distributions to infer the material's properties is angle-resolved photoemission spectroscopy.

Theoretical explanation

In 1905, Einstein proposed a theory of the photoelectric effect using a concept that light consists of tiny packets of energy known as photons or light quanta. Each packet carries energy hν that is proportional to the frequency ν of the corresponding electromagnetic wave. The proportionality constant h has become known as the Planck constant. In the range of kinetic energies of the electrons that are removed from their varying atomic bindings by the absorption of a photon of energy hν, the highest kinetic energy Kmax is

Kmax = hν − W.

Here, W is the minimum energy required to remove an electron from the surface of the material. It is called the work function of the surface and is sometimes denoted W or φ. If the work function is written as

W = hνo,

the formula for the maximum kinetic energy of the ejected electrons becomes

Kmax = h(ν − νo).

Kinetic energy is positive, and ν > νo is required for the photoelectric effect to occur. The frequency νo is the threshold frequency for the given material. Above that frequency, the maximum kinetic energy of the photoelectrons as well as the stopping voltage in the experiment, Vo = (h/e)(ν − νo), rise linearly with the frequency, and have no dependence on the number of photons and the intensity of the impinging monochromatic light. Einstein's formula, however simple, explained all the phenomenology of the photoelectric effect, and had far-reaching consequences in the development of quantum mechanics.

Photoemission from atoms, molecules and solids

Electrons that are bound in atoms, molecules and solids each occupy distinct states of well-defined binding energies. When light quanta deliver more than this amount of energy to an individual electron, the electron may be emitted into free space with excess (kinetic) energy equal to the photon energy hν minus the electron's binding energy. The distribution of kinetic energies thus reflects the distribution of the binding energies of the electrons in the atomic, molecular or crystalline system: an electron emitted from the state at binding energy EB is found at kinetic energy Ekin = hν − EB. This distribution is one of the main characteristics of the quantum system, and can be used for further studies in quantum chemistry and quantum physics.
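Einstein's relation translates directly into a few lines of code. The sketch below is illustrative: the function name and the 4.3 eV work function are assumptions, not tied to any specific material. It returns the maximum kinetic energy of the photoelectrons, or None when the photon energy falls below the threshold:

H = 6.62607015e-34       # Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
EV = 1.602176634e-19     # joules per electron-volt

def max_kinetic_energy_ev(wavelength_m, work_function_ev):
    """Einstein relation Kmax = h*nu - W, in eV; returns None below the threshold frequency."""
    kmax = H * C / wavelength_m / EV - work_function_ev
    return kmax if kmax > 0 else None

print(max_kinetic_energy_ev(200e-9, 4.3))   # about 1.9 eV, i.e. a stopping potential near 1.9 V
print(max_kinetic_energy_ev(600e-9, 4.3))   # None: a roughly 2.1 eV photon is below threshold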
Models of photoemission from solids The electronic properties of ordered, crystalline solids are determined by the distribution of the electronic states with respect to energy and momentum—the electronic band structure of the solid. Theoretical models of photoemission from solids show that this distribution is, for the most part, preserved in the photoelectric effect. The phenomenological three-step model for ultraviolet and soft X-ray excitation decomposes the effect into these steps: Inner photoelectric effect in the bulk of the material that is a direct optical transition between an occupied and an unoccupied electronic state. This effect is subject to quantum-mechanical selection rules for dipole transitions. The hole left behind by the electron can give rise to secondary electron emission, or the so-called Auger effect, which may be visible even when the primary photoelectron does not leave the material. In molecular solids phonons are excited in this step and may be visible as satellite lines in the final electron energy. Electron propagation to the surface in which some electrons may be scattered because of interactions with other constituents of the solid. Electrons that originate deeper in the solid are much more likely to suffer collisions and emerge with altered energy and momentum. Their mean-free path is a universal curve dependent on the electron's energy. Electron escape through the surface barrier into free-electron-like states of the vacuum. In this step the electron loses energy in the amount of the work function of the surface, and suffers a loss of momentum in the direction perpendicular to the surface. Because the binding energy of electrons in solids is conveniently expressed with respect to the highest occupied state at the Fermi energy EF, and the difference between the Fermi energy and the free-space (vacuum) energy is the work function W of the surface, the kinetic energy of the electrons emitted from solids is usually written as Ek = hν − EB − W, where EB is the binding energy relative to the Fermi level. There are cases where the three-step model fails to explain peculiarities of the photoelectron intensity distributions. The more elaborate one-step model treats the effect as a coherent process of photoexcitation into the final state of a finite crystal for which the wave function is free-electron-like outside of the crystal, but has a decaying envelope inside. History 19th century In 1839, Alexandre Edmond Becquerel discovered the related photovoltaic effect while studying the effect of light on electrolytic cells. Though not equivalent to the photoelectric effect, his work on photovoltaics was instrumental in showing a strong relationship between light and electronic properties of materials. In 1873, Willoughby Smith discovered photoconductivity in selenium while testing the metal for its high resistance properties in conjunction with his work involving submarine telegraph cables. Johann Elster (1854–1920) and Hans Geitel (1855–1923), students in Heidelberg, investigated the effects produced by light on electrified bodies and developed the first practical photoelectric cells that could be used to measure the intensity of light. They arranged metals with respect to their power of discharging negative electricity: rubidium, potassium, alloy of potassium and sodium, sodium, lithium, magnesium, thallium and zinc; for copper, platinum, lead, iron, cadmium, carbon, and mercury the effects with ordinary light were too small to be measurable.
The order of the metals for this effect was the same as in Volta's series for contact-electricity, the most electropositive metals giving the largest photo-electric effect. In 1887, Heinrich Hertz observed the photoelectric effect and reported on the production and reception of electromagnetic waves. The receiver in his apparatus consisted of a coil with a spark gap, where a spark would be seen upon detection of electromagnetic waves. He placed the apparatus in a darkened box to see the spark better. However, he noticed that the maximum spark length was reduced when inside the box. A glass panel placed between the source of electromagnetic waves and the receiver absorbed ultraviolet radiation that assisted the electrons in jumping across the gap. When removed, the spark length would increase. He observed no decrease in spark length when he replaced the glass with quartz, as quartz does not absorb UV radiation. The discoveries by Hertz led to a series of investigations by Wilhelm Hallwachs, Hoor, Augusto Righi and Aleksander Stoletov on the effect of light, and especially of ultraviolet light, on charged bodies. Hallwachs connected a zinc plate to an electroscope. He allowed ultraviolet light to fall on a freshly cleaned zinc plate and observed that the zinc plate became uncharged if initially negatively charged, positively charged if initially uncharged, and more positively charged if initially positively charged. From these observations he concluded that some negatively charged particles were emitted by the zinc plate when exposed to ultraviolet light. With regard to the Hertz effect, the researchers from the start showed the complexity of the phenomenon of photoelectric fatigue—the progressive diminution of the effect observed upon fresh metallic surfaces. According to Hallwachs, ozone played an important part in the phenomenon, and the emission was influenced by oxidation, humidity, and the degree of polishing of the surface. It was at the time unclear whether fatigue is absent in a vacuum. In the period from 1888 until 1891, a detailed analysis of the photoeffect was performed by Aleksandr Stoletov with results reported in six publications. Stoletov invented a new experimental setup which was more suitable for a quantitative analysis of the photoeffect. He discovered a direct proportionality between the intensity of light and the induced photoelectric current (the first law of photoeffect or Stoletov's law). He measured the dependence of the intensity of the photo electric current on the gas pressure, where he found the existence of an optimal gas pressure corresponding to a maximum photocurrent; this property was used for the creation of solar cells. Many substances besides metals discharge negative electricity under the action of ultraviolet light. G. C. Schmidt and O. Knoblauch compiled a list of these substances. In 1897, J. J. Thomson investigated ultraviolet light in Crookes tubes. Thomson deduced that the ejected particles, which he called corpuscles, were of the same nature as cathode rays. These particles later became known as the electrons. Thomson enclosed a metal plate (a cathode) in a vacuum tube, and exposed it to high-frequency radiation. It was thought that the oscillating electromagnetic fields caused the atoms' field to resonate and, after reaching a certain amplitude, caused subatomic corpuscles to be emitted, and current to be detected. The amount of this current varied with the intensity and color of the radiation. 
Larger radiation intensity or frequency would produce more current. During the years 1886–1902, Wilhelm Hallwachs and Philipp Lenard investigated the phenomenon of photoelectric emission in detail. Lenard observed that a current flows through an evacuated glass tube enclosing two electrodes when ultraviolet radiation falls on one of them. As soon as ultraviolet radiation is stopped, the current also stops. This initiated the concept of photoelectric emission. The discovery of the ionization of gases by ultraviolet light was made by Philipp Lenard in 1900. As the effect was produced across several centimeters of air and yielded a greater number of positive ions than negative, it was natural to interpret the phenomenon, as J. J. Thomson did, as a Hertz effect upon the particles present in the gas. 20th century In 1902, Lenard observed that the energy of individual emitted electrons was independent of the applied light intensity. This appeared to be at odds with Maxwell's wave theory of light, which predicted that the electron energy would be proportional to the intensity of the radiation. Lenard observed the variation in electron energy with light frequency using a powerful electric arc lamp, which enabled him to investigate large changes in intensity. However, Lenard's results were qualitative rather than quantitative because of the difficulty in performing the experiments: the experiments needed to be done on freshly cut metal so that the pure metal was observed, but it oxidized in a matter of minutes even in the partial vacuums he used. The current emitted by the surface was determined by the light's intensity, or brightness: doubling the intensity of the light doubled the number of electrons emitted from the surface. Lenard's initial investigations of the photoelectric effect in gases were followed up by J. J. Thomson and then more decisively by Frederic Palmer Jr. Gas photoemission was studied and showed very different characteristics from those at first attributed to it by Lenard. In 1900, while studying black-body radiation, the German physicist Max Planck suggested in his "On the Law of Distribution of Energy in the Normal Spectrum" paper that the energy carried by electromagnetic waves could only be released in packets of energy. In 1905, Albert Einstein published a paper advancing the hypothesis that light energy is carried in discrete quantized packets to explain experimental data from the photoelectric effect. Einstein theorized that the energy in each quantum of light was equal to the frequency of light multiplied by a constant, later called the Planck constant. A photon above a threshold frequency has the required energy to eject a single electron, creating the observed effect. This was a step in the development of quantum mechanics. In 1914, Robert A. Millikan's highly accurate measurements of the Planck constant from the photoelectric effect supported Einstein's model, even though a corpuscular theory of light was for Millikan, at the time, "quite unthinkable". Einstein was awarded the 1921 Nobel Prize in Physics for "his discovery of the law of the photoelectric effect", and Millikan was awarded the Nobel Prize in 1923 for "his work on the elementary charge of electricity and on the photoelectric effect".
In quantum perturbation theory of atoms and solids acted upon by electromagnetic radiation, the photoelectric effect is still commonly analyzed in terms of waves; the two approaches are equivalent because photon or wave absorption can only happen between quantized energy levels whose energy difference is that of the energy of photon. Albert Einstein's mathematical description of how the photoelectric effect was caused by absorption of quanta of light was in one of his Annus Mirabilis papers, named "On a Heuristic Viewpoint Concerning the Production and Transformation of Light". The paper proposed a simple description of energy quanta, and showed how they explained the blackbody radiation spectrum. His explanation in terms of absorption of discrete quanta of light agreed with experimental results. It explained why the energy of photoelectrons was not dependent on incident light intensity. This was a theoretical leap, but the concept was strongly resisted at first because it contradicted the wave theory of light that followed naturally from James Clerk Maxwell's equations of electromagnetism, and more generally, the assumption of infinite divisibility of energy in physical systems. Einstein's work predicted that the energy of individual ejected electrons increases linearly with the frequency of the light. The precise relationship had not at that time been tested. By 1905 it was known that the energy of photoelectrons increases with increasing frequency of incident light and is independent of the intensity of the light. However, the manner of the increase was not experimentally determined until 1914 when Millikan showed that Einstein's prediction was correct. The photoelectric effect helped to propel the then-emerging concept of wave–particle duality in the nature of light. Light simultaneously possesses the characteristics of both waves and particles, each being manifested according to the circumstances. The effect was impossible to understand in terms of the classical wave description of light, as the energy of the emitted electrons did not depend on the intensity of the incident radiation. Classical theory predicted that the electrons would 'gather up' energy over a period of time, and then be emitted. Uses and effects Photomultipliers These are extremely light-sensitive vacuum tubes with a coated photocathode inside the envelope. The photo cathode contains combinations of materials such as cesium, rubidium, and antimony specially selected to provide a low work function, so when illuminated even by very low levels of light, the photocathode readily releases electrons. By means of a series of electrodes (dynodes) at ever-higher potentials, these electrons are accelerated and substantially increased in number through secondary emission to provide a readily detectable output current. Photomultipliers are still commonly used wherever low levels of light must be detected. Image sensors Video camera tubes in the early days of television used the photoelectric effect. For example, Philo Farnsworth's "Image dissector" used a screen charged by the photoelectric effect to transform an optical image into a scanned electronic signal. Photoelectron spectroscopy Because the kinetic energy of the emitted electrons is exactly the energy of the incident photon minus the energy of the electron's binding within an atom, molecule or solid, the binding energy can be determined by shining a monochromatic X-ray or UV light of a known energy and measuring the kinetic energies of the photoelectrons. 
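As a rough illustration of that bookkeeping, the sketch below converts hypothetical measured kinetic energies into binding energies for a fixed photon energy. The photon energy corresponds to an Al K-alpha X-ray line commonly used for such measurements; the spectrometer work function and the peak positions are assumed, illustrative numbers rather than real data.

# Illustrative conversion of measured kinetic energies to binding energies,
# following: binding energy = photon energy - kinetic energy - analyzer work function.
PHOTON_ENERGY_EV = 1486.6            # Al K-alpha X-ray line
ANALYZER_WORK_FUNCTION_EV = 4.5      # assumed instrument constant

def binding_energies(kinetic_energies_eV):
    """Map measured kinetic energies (eV) to binding energies (eV) relative to the Fermi level."""
    return [PHOTON_ENERGY_EV - ek - ANALYZER_WORK_FUNCTION_EV
            for ek in kinetic_energies_eV]

measured_peaks_eV = [1197.3, 952.1, 683.0]    # hypothetical peak positions
print(binding_energies(measured_peaks_eV))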
The distribution of electron energies is valuable for studying quantum properties of these systems. It can also be used to determine the elemental composition of the samples. For solids, the kinetic energy and emission angle distribution of the photoelectrons is measured for the complete determination of the electronic band structure in terms of the allowed binding energies and momenta of the electrons. Modern instruments for angle-resolved photoemission spectroscopy are capable of measuring these quantities with a precision better than 1 meV and 0.1°. Photoelectron spectroscopy measurements are usually performed in a high-vacuum environment, because the electrons would be scattered by gas molecules if they were present. However, some companies are now selling products that allow photoemission in air. The light source can be a laser, a discharge tube, or a synchrotron radiation source. The concentric hemispherical analyzer is a typical electron energy analyzer. It uses an electric field between two hemispheres to change (disperse) the trajectories of incident electrons depending on their kinetic energies. Night vision devices Photons hitting a thin film of alkali metal or semiconductor material such as gallium arsenide in an image intensifier tube cause the ejection of photoelectrons due to the photoelectric effect. These are accelerated by an electrostatic field where they strike a phosphor coated screen, converting the electrons back into photons. Intensification of the signal is achieved either through acceleration of the electrons or by increasing the number of electrons through secondary emissions, such as with a micro-channel plate. Sometimes a combination of both methods is used. Additional kinetic energy is required to move an electron out of the conduction band and into the vacuum level. This is known as the electron affinity of the photocathode and is another barrier to photoemission other than the forbidden band, explained by the band gap model. Some materials such as gallium arsenide have an effective electron affinity that is below the level of the conduction band. In these materials, electrons that move to the conduction band all have sufficient energy to be emitted from the material, so the film that absorbs photons can be quite thick. These materials are known as negative electron affinity materials. Spacecraft The photoelectric effect will cause spacecraft exposed to sunlight to develop a positive charge. This can be a major problem, as other parts of the spacecraft are in shadow which will result in the spacecraft developing a negative charge from nearby plasmas. The imbalance can discharge through delicate electrical components. The static charge created by the photoelectric effect is self-limiting, because a higher charged object does not give up its electrons as easily as a lower charged object does. Moon dust Light from the Sun hitting lunar dust causes it to become positively charged from the photoelectric effect. The charged dust then repels itself and lifts off the surface of the Moon by electrostatic levitation. This manifests itself almost like an "atmosphere of dust", visible as a thin haze and blurring of distant features, and visible as a dim glow after the sun has set. This was first photographed by the Surveyor program probes in the 1960s, and most recently the Chang'e 3 rover observed dust deposition on lunar rocks as high as about 28 cm. 
It is thought that the smallest particles are repelled kilometers from the surface and that the particles move in "fountains" as they charge and discharge. Competing processes and photoemission cross section When photon energies are as high as the electron rest energy of 511 keV, yet another process, Compton scattering, may occur. Above twice this energy, at 1.022 MeV, pair production is also more likely. Compton scattering and pair production are examples of two other competing mechanisms. Even if the photoelectric effect is the favoured reaction for a particular interaction of a single photon with a bound electron, the result is also subject to quantum statistics and is not guaranteed. The probability of the photoelectric effect occurring is measured by the cross section of the interaction, σ. This has been found to be a function of the atomic number of the target atom and photon energy. In a crude approximation, for photon energies above the highest atomic binding energy, the cross section is given by: σ ∝ Z^n / E^3. Here Z is the atomic number, E is the photon energy, and n is a number which varies between 4 and 5. The photoelectric effect rapidly decreases in significance in the gamma-ray region of the spectrum, with increasing photon energy. It is also more likely from elements with high atomic number. Consequently, high-Z materials make good gamma-ray shields, which is the principal reason why lead (Z = 82) is preferred and most widely used. See also Anomalous photovoltaic effect Compton scattering Dember effect Photo–Dember effect Wave–particle duality Photomagnetic effect Photochemistry Timeline of atomic and subatomic physics References External links Astronomy Cast "http://www.astronomycast.com/2014/02/ep-335-photoelectric-effect/". AstronomyCast. Nave, R., "Wave-Particle Duality". HyperPhysics. "Photoelectric effect". Physics 2000. University of Colorado, Boulder, Colorado. (page not found) ACEPT W3 Group, "The Photoelectric Effect". Department of Physics and Astronomy, Arizona State University, Tempe, AZ. Haberkern, Thomas, and N Deepak "Grains of Mystique: Quantum Physics for the Layman". Einstein Demystifies Photoelectric Effect, Chapter 3. Department of Physics, "The Photoelectric effect". Physics 320 Laboratory, Davidson College, Davidson. Fowler, Michael, "The Photoelectric Effect". Physics 252, University of Virginia. Go to "Concerning an Heuristic Point of View Toward the Emission and Transformation of Light" to read an English translation of Einstein's 1905 paper. (Retrieved: 2014 Apr 11) http://www.chemistryexplained.com/Ru-Sp/Solar-Cells.html Photo-electric transducers: http://sensorse.com/page4en.html Applets "HTML 5 JavaScript simulator" Open Source Physics project "Photoelectric Effect". The Physics Education Technology (PhET) project. (Java) Fendt, Walter, "The Photoelectric Effect". (Java) "Applet: Photo Effect". Open Source Distributed Learning Content Management and Assessment System. (Java) Quantum mechanics Electrical phenomena Albert Einstein Heinrich Hertz Energy conversion Photovoltaics Photochemistry Electrochemistry
23580
https://en.wikipedia.org/wiki/Paleogene
Paleogene
The Paleogene Period (also spelled Palaeogene or Palæogene) is a geologic period and system that spans 43 million years from the end of the Cretaceous Period 66 Ma (million years ago) to the beginning of the Neogene Period 23.03 Ma. It is the first period of the Cenozoic Era and is divided into the Paleocene, Eocene, and Oligocene epochs. The earlier term Tertiary Period was used to define the time now covered by the Paleogene Period and subsequent Neogene Period; despite no longer being recognized as a formal stratigraphic term, "Tertiary" still sometimes remains in informal use. Paleogene is often abbreviated "Pg", although the United States Geological Survey uses the abbreviation "" for the Paleogene on the Survey's geologic maps. During the Paleogene period, mammals continued to diversify from relatively small, simple forms into a large group of diverse animals in the wake of the Cretaceous–Paleogene extinction event that ended the preceding Cretaceous Period. The Period is marked by considerable changes in climate from the Paleocene–Eocene Thermal Maximum, through global cooling during the Eocene, to the first appearance of permanent ice sheets in the Antarctic at the beginning of the Oligocene. Geology Stratigraphy The Paleogene is divided into three series/epochs: the Paleocene, Eocene, and Oligocene. These stratigraphic units can be defined globally or regionally. For global stratigraphic correlation, the International Commission on Stratigraphy (ICS) ratifies global stages based on a Global Boundary Stratotype Section and Point (GSSP) from a single formation (a stratotype) identifying the lower boundary of the stage. Paleocene The Paleocene is the first series/epoch of the Paleogene and lasted from 66.0 Ma to 56.0 Ma. It is divided into three stages: the Danian 66.0 - 61.6 Ma; Selandian 61.6 - 59.2 Ma; and, Thanetian 59.2 - 56.0 Ma. The GSSP for the base of the Cenozoic, Paleogene and Paleocene is at Oued Djerfane, west of El Kef, Tunisia. It is marked by an iridium anomaly produced by an asteroid impact, and is associated with the Cretaceous–Paleogene extinction event. The boundary is defined as the rusty-colored base of a 50 cm-thick clay, which would have been deposited over only a few days. Similar layers are seen in marine and continental deposits worldwide. These layers include the iridium anomaly, microtektites, nickel-rich spinel crystals and shocked quartz, all indicators of a major extraterrestrial impact. The remains of the crater are found at Chicxulub on the Yucatan Peninsula in Mexico. The extinction of the non-avian dinosaurs, ammonites and dramatic changes in marine plankton and many other groups of organisms, are also used for correlation purposes. Eocene The Eocene is the second series/epoch of the Paleogene, and lasted from 56.0 Ma to 33.9 Ma. It is divided into four stages: the Ypresian 56.0 Ma to 47.8 Ma; Lutetian 47.8 Ma to 41.2 Ma; Bartonian 41.2 Ma to 37.71 Ma; and, Priabonian 37.71 Ma to 33.9 Ma. The GSSP for the base of the Eocene is at Dababiya, near Luxor, Egypt and is marked by the start of a significant variation in global carbon isotope ratios, produced by a major period of global warming. The change in climate was due to a rapid release of frozen methane clathrates from seafloor sediments at the beginning of the Paleocene-Eocene thermal maximum (PETM). Oligocene The Oligocene is the third and youngest series/epoch of the Paleogene, and lasted from 33.9 Ma to 23.03 Ma.
It is divided into two stages: the Rupelian 33.9 Ma to 27.82 Ma; and, Chattian 27.82 - 23.03 Ma. The GSSP for the base of the Oligocene is at Massignano, near Ancona, Italy. The extinction of the hantkeninid planktonic foraminifera is the key marker for the Eocene-Oligocene boundary, which was a time of climate cooling that led to widespread changes in fauna and flora. Palaeogeography The final stages of the breakup of Pangaea occurred during the Paleogene as Atlantic Ocean rifting and seafloor spreading extended northwards, separating the North American and Eurasian plates, and Australia and South America rifted from Antarctica, opening the Southern Ocean. Africa and India collided with Eurasia, forming the Alpine-Himalayan mountain chains, and the western margin of the Pacific Plate changed from a divergent to a convergent plate boundary. Alpine - Himalayan Orogeny Alpine Orogeny The Alpine Orogeny developed in response to the collision between the African and Eurasian plates during the closing of the Neotethys Ocean and the opening of the Central Atlantic Ocean. The result was a series of arcuate mountain ranges, from the Tell-Rif-Betic cordillera in the western Mediterranean through the Alps, Carpathians, Apennines, Dinarides and Hellenides to the Taurides in the east. From the Late Cretaceous into the early Paleocene, Africa began to converge with Eurasia. The irregular outlines of the continental margins, including the Adriatic promontory (Adria) that extended north from the African Plate, led to the development of several short subduction zones, rather than one long system. In the western Mediterranean, the European Plate was subducted southwards beneath the African Plate, whilst in the eastern Mediterranean, Africa was subducted beneath Eurasia along a northward-dipping subduction zone. Convergence between the Iberian and European plates led to the Pyrenean Orogeny and, as Adria pushed northwards, the Alps and Carpathian orogens began to develop. The collision of Adria with Eurasia in the early Palaeocene was followed by a c. 10 million year pause in the convergence of Africa and Eurasia, connected with the onset of the opening of the North Atlantic Ocean as Greenland rifted from the Eurasian Plate in the Palaeocene. Convergence rates between Africa and Eurasia increased again in the early Eocene and the remaining oceanic basins between Adria and Europe closed. Between about 40 and 30 Ma, subduction began along the western Mediterranean arc of the Tell, Rif, Betic and Apennine mountain chains. The rate of convergence was less than the subduction rate of the dense lithosphere of the western Mediterranean, and roll-back of the subducting slab led to the arcuate structure of these mountain ranges. In the eastern Mediterranean, c. 35 Ma, the Anatolide-Tauride platform (northern part of Adria) began to enter the trench, leading to the development of the Dinarides, Hellenides and Tauride mountain chains as the passive margin sediments of Adria were scraped off onto the Eurasian crust during subduction. Zagros Mountains The Zagros mountain belt stretches for c. 2000 km from the eastern border of Iraq to the Makran coast in southern Iran. It formed as a result of the convergence and collision of the Arabian and Eurasian plates as the Neotethys Ocean closed and is composed of sediments scraped from the descending Arabian Plate. From the Late Cretaceous, a volcanic arc developed on the Eurasian margin as the Neotethys crust was subducted beneath it.
A separate intra-oceanic subduction zone in the Neotethys resulted in the obduction of ocean crust onto the Arabian margin in the Late Cretaceous to Paleocene, with break-off of the subducted oceanic plate close to the Arabian margin occurring during the Eocene. Continental collision began during the Eocene c. 35 Ma and continued into the Oligocene to c. 26 Ma. Himalayan Orogeny The Indian continent rifted from Madagascar at c. 83 Ma and drifted rapidly (c. 18 cm/yr in the Paleocene) northwards towards the southern margin of Eurasia. A rapid decrease in velocity to c. 5 cm/yr in the early Eocene records the collision of the Tethyan (Tibetan) Himalayas, the leading edge of Greater India, with the Lhasa Terrane of Tibet (southern Eurasian margin), along the Indus-Yarling-Zangbo suture zone. To the south of this zone, the Himalaya are composed of metasedimentary rocks scraped off the now subducted Indian continental crust and mantle lithosphere as the collision progressed. Palaeomagnetic data place the present-day Indian continent further south at the time of collision and decrease in plate velocity, indicating the presence of a large region to the north of India that has now been subducted beneath the Eurasian Plate or incorporated into the mountain belt. This region, known as Greater India, formed by extension along the northern margin of India during the opening of the Neotethys. The Tethyan Himalaya block lay along its northern edge, with the Neotethys Ocean lying between it and southern Eurasia. Debate about the amount of deformation seen in the geological record in the India–Eurasia collision zone versus the size of Greater India, the timing and nature of the collision relative to the decrease in plate velocity, and explanations for the unusually high velocity of the Indian plate has led to several models for Greater India: 1) A Late Cretaceous to early Paleocene subduction zone may have lain between India and Eurasia in the Neotethys, dividing the region into two plates; subduction was followed by collision of India with Eurasia in the middle Eocene. In this model Greater India would have been less than 900 km wide; 2) Greater India may have formed a single plate, several thousand kilometres wide, with the Tethyan Himalaya microcontinent separated from the Indian continent by an oceanic basin. The microcontinent collided with southern Eurasia c. 58 Ma (late Paleocene), whilst the velocity of the plate did not decrease until c. 50 Ma when subduction rates dropped as young oceanic crust entered the subduction zone; 3) This model assigns older dates to parts of Greater India, which changes its paleogeographic position relative to Eurasia and creates a Greater India formed of extended continental crust 2000 - 3000 km wide. South East Asia The Alpine-Himalayan Orogenic Belt in Southeast Asia extends from the Himalayas in India through Myanmar (West Burma block), Sumatra and Java to West Sulawesi. During the Late Cretaceous to Paleogene, the northward movement of the Indian Plate led to the highly oblique subduction of the Neotethys along the edge of the West Burma block and the development of a major north-south transform fault along the margin of Southeast Asia to the south. Between c. 60 and 50 Ma, the leading northeastern edge of Greater India collided with the West Burma block, resulting in deformation and metamorphism.
During the middle Eocene, north-dipping subduction resumed along the southern edge of Southeast Asia, from west Sumatra to West Sulawesi, as the Australian Plate drifted slowly northwards. Collision between India and the West Burma block was complete by the late Oligocene. As the India-Eurasia collision continued, movement of material away from the collision zone was accommodated along, and extended, the already existing major strike-slip systems of the region. Atlantic Ocean During the Paleocene, seafloor spreading along the Mid-Atlantic Ridge propagated from the Central Atlantic northwards between North America and Greenland in the Labrador Sea (c. 62 Ma) and Baffin Bay (c. 57 Ma), and, by the early Eocene (c. 54 Ma), into the northeastern Atlantic between Greenland and Eurasia. Extension between North America and Eurasia, also in the early Eocene, led to the opening of the Eurasian Basin across the Arctic, which was linked to the Baffin Bay Ridge and Mid-Atlantic Ridge to the south via major strike-slip faults. From the Eocene and into the early Oligocene, Greenland acted as an independent plate moving northwards and rotating anticlockwise. This led to compression across the Canadian Arctic Archipelago, Svalbard and northern Greenland, resulting in the Eureka Orogeny. From c. 47 Ma, the eastern margin of Greenland was cut by the Reykjanes Ridge (the northeastern branch of the Mid-Atlantic Ridge) propagating northwards and splitting off the Jan Mayen microcontinent. After c. 33 Ma, seafloor spreading in the Labrador Sea and Baffin Bay gradually ceased and seafloor spreading focused along the northeast Atlantic. By the late Oligocene, the plate boundary between North America and Eurasia was established along the Mid-Atlantic Ridge, with Greenland attached to the North American plate again, and the Jan Mayen microcontinent part of the Eurasian Plate, where its remains now lie to the east and possibly beneath the southeast of Iceland. North Atlantic Large Igneous Province The North Atlantic Igneous Province stretches across the Greenland and northwest European margins and is associated with the proto-Icelandic mantle plume, which rose beneath the Greenland lithosphere at c. 65 Ma. There were two main phases of volcanic activity with peaks at c. 60 Ma and c. 55 Ma. Magmatism in the British and Northwest Atlantic volcanic provinces occurred mainly in the early Palaeocene, the latter associated with an increased spreading rate in the Labrador Sea, whilst northeast Atlantic magmatism occurred mainly during the early Eocene and is associated with a change in the spreading direction in the Labrador Sea and the northward drift of Greenland. The locations of the magmatism coincide with the intersection of the propagating rifts and large-scale, pre-existing lithospheric structures, which acted as channels to the surface for the magma. The arrival of the proto-Iceland plume has been considered the driving mechanism for rifting in the North Atlantic. However, the facts that rifting and initial seafloor spreading occurred prior to the arrival of the plume, that large-scale magmatism occurred at a distance from the rifting, and that rifting propagated towards, rather than away from, the plume have led to the suggestion that the plume and associated magmatism may have been a result, rather than a cause, of the plate tectonic forces that led to the propagation of rifting from the Central to the North Atlantic.
Americas North America Mountain building continued along the North American Cordillera in response to subduction of the Farallon plate beneath the North American Plate. Along the central section of the North American margin, crustal shortening of the Cretaceous to Paleocene Sevier Orogen lessened and deformation moved eastward. The decreasing dip of the subducting Farallon Plate led to a flat-slab segment that increased friction between this and the base of the North American Plate. The resulting Laramide Orogeny, which began the development of the Rocky Mountains, was a broad zone of thick-skinned deformation, with faults extending to mid-crustal depths and the uplift of basement rocks that lay to the east of the Sevier belt, and more than 700 km from the trench. With the Laramide uplift, the Western Interior Seaway was divided and then retreated. During the mid to late Eocene (50–35 Ma), plate convergence rates decreased and the dip of the Farallon slab began to steepen. Uplift ceased and the region was largely levelled by erosion. By the Oligocene, convergence gave way to extension, rifting and widespread volcanism across the Laramide belt. South America Ocean-continent convergence, accommodated by an east-dipping subduction zone of the Farallon Plate beneath the western edge of South America, continued from the Mesozoic. Over the Paleogene, changes in plate motion and episodes of regional slab shallowing and steepening resulted in variations in the magnitude of crustal shortening and amounts of magmatism along the length of the Andes. In the Northern Andes, an oceanic plateau with a volcanic arc was accreted during the latest Cretaceous and Paleocene, whilst the Central Andes were dominated by the subduction of oceanic crust and the Southern Andes were impacted by the subduction of the Farallon-East Antarctic ocean ridge. Caribbean The Caribbean Plate is largely composed of oceanic crust of the Caribbean Large Igneous Province that formed during the Late Cretaceous. During the Late Cretaceous to Paleocene, subduction of Atlantic crust was established along its northern margin, whilst to the southwest, an island arc collided with the northern Andes, forming an east-dipping subduction zone where Caribbean lithosphere was subducted beneath the South American margin. During the Eocene (c. 45 Ma), subduction of the Farallon Plate along the Central American subduction zone was (re)established. Subduction along the northern section of the Caribbean volcanic arc ceased as the Bahamas carbonate platform collided with Cuba and was replaced by strike-slip movements as a transform fault, extending from the Mid-Atlantic Ridge, connected with the northern boundary of the Caribbean Plate. Subduction now focused along the southern Caribbean arc (Lesser Antilles). By the Oligocene, the intra-oceanic Central American volcanic arc began to collide with northwestern South America. Pacific Ocean At the beginning of the Paleogene, the Pacific Ocean consisted of the Pacific, Farallon, Kula and Izanagi plates. The central Pacific Plate grew by seafloor spreading as the other three plates were subducted and broken up. In the southern Pacific, seafloor spreading continued from the Late Cretaceous across the Pacific–Antarctic, Pacific-Farallon and Farallon–Antarctic mid-ocean ridges. The Izanagi-Pacific spreading ridge lay nearly parallel to the East Asian subduction zone and between 60 and 50 Ma the spreading ridge began to be subducted. By c.
50 Ma, the Pacific Plate was no longer surrounded by spreading ridges, but had a subduction zone along its western edge. This changed the forces acting on the Pacific Plate and led to a major reorganisation of plate motions across the entire Pacific region. The resulting changes in stress between the Pacific and Philippine Sea plates initiated subduction along the Izu-Bonin-Mariana and Tonga-Kermadec arcs. Subduction of the Farallon Plate beneath the American plates continued from the Late Cretaceous. The Kula-Farallon spreading ridge lay to its north until the Eocene (c. 55 Ma), when the northern section of the plate split, forming the Vancouver/Juan de Fuca Plate. In the Oligocene (c. 28 Ma), the first segment of the Pacific–Farallon spreading ridge entered the North American subduction zone near Baja California, leading to major strike-slip movements and the formation of the San Andreas Fault. At the Paleogene-Neogene boundary, spreading ceased between the Pacific and Farallon plates and the Farallon Plate split again, forming the present-day Nazca and Cocos plates. The Kula Plate lay between the Pacific Plate and North America. To the north and northwest it was being subducted beneath the Aleutian trench. Spreading between the Kula and Pacific and Farallon plates ceased c. 40 Ma and the Kula Plate became part of the Pacific Plate. Hawaii hotspot The Hawaiian-Emperor seamount chain formed above the Hawaiian hotspot. Originally thought to be stationary within the mantle, the hotspot is now considered to have drifted south during the Paleocene to early Eocene, as the Pacific Plate moved north. At c. 47 Ma, movement of the hotspot ceased and the Pacific Plate motion changed from northward to northwestward in response to the onset of subduction along its western margin. This resulted in a 60-degree bend in the seamount chain. Other seamount chains related to hotspots in the South Pacific show a similar change in orientation at this time. Antarctica Slow seafloor spreading continued between Australia and East Antarctica. Shallow water channels probably developed south of Tasmania, opening the Tasmanian Passage in the Eocene, with deep ocean routes opening from the mid Oligocene. Rifting between the Antarctic Peninsula and the southern tip of South America formed the Drake Passage and opened the Southern Ocean also during this time, completing the breakup of Gondwana. The opening of these passages and the creation of the Southern Ocean established the Antarctic Circumpolar Current. Glaciers began to build across the Antarctic continent, which now lay isolated in the south polar region and surrounded by cold ocean waters. These changes contributed to the fall in global temperatures and the beginning of icehouse conditions. Red Sea and East Africa Extensional stresses from the subduction zone along the northern Neotethys resulted in rifting between Africa and Arabia, forming the Gulf of Aden in the late Eocene. To the west, in the early Oligocene, flood basalts erupted across Ethiopia, northeast Sudan and southwest Yemen as the Afar mantle plume began to impact the base of the African lithosphere. Rifting across the southern Red Sea began in the mid Oligocene, and across the central and northern Red Sea regions in the late Oligocene and early Miocene. Climate The global climate of the Paleogene began with the brief but intense "impact winter" caused by the Chicxulub impact. This cold period was terminated by an abrupt warming.
After temperatures stabilised, the steady cooling and drying of the Late Cretaceous-Early Paleogene Cool Interval (LKEPCI) that had spanned the last two ages of the Late Cretaceous continued. About 62.2 Mya, the Latest Danian Event, a hyperthermal event, took place. About 59 Ma, the LKEPCI was brought to an end by the Thanetian Thermal Event, a change from the relative cool of the Early and Middle Palaeocene and the beginning of an intense supergreenhouse effect. According to a study published in 2018, from about 56 to 48 Mya, annual air temperatures over land and at mid-latitude averaged about 23–29 °C (± 4.7 °C), which is 5–10 °C warmer than most previous estimates. For comparison, this was 10 to 15 °C greater than the current annual mean temperatures in these areas. At the Palaeocene-Eocene boundary, the Paleocene–Eocene Thermal Maximum (PETM) occurred, one of the warmest times of the Phanerozoic eon, during which global mean surface temperatures increased to 31.6 °C. It was followed by the less severe Eocene Thermal Maximum 2 (ETM2) about 53.69 Ma. Eocene Thermal Maximum 3 (ETM3) occurred about 53 Ma. The Early Eocene Climatic Optimum was brought to an end by the Azolla event, a change of climate about 48.5 Mya, believed to have been caused by a proliferation of aquatic ferns from the genus Azolla, resulting in the sequestering of large amounts of carbon dioxide by those plants. From this time until about 34 Mya, there was a slow cooling trend known as the Middle-Late Eocene Cooling (MLEC). Approximately 41.5 Mya, this cooling was interrupted temporarily by the Middle Eocene Climatic Optimum (MECO). Then, about 39.4 Mya, a temperature decrease termed the Late Eocene Cool Event (LECE) is detected in the oxygen isotope record. A rapid decrease of global temperatures and formation of continental glaciers on Antarctica marked the end of the Eocene. This sudden cooling was caused partly by the formation of the Antarctic Circumpolar Current, which significantly lowered oceanic water temperatures. The Early Oligocene Glacial Maximum (Oi1), which lasted for about 200,000 years, occurred during the earliest Oligocene. After Oi1, global mean surface temperature continued to decrease gradually during the Rupelian Age. Another major cooling event occurred at the end of the Rupelian; its most likely cause was extreme biological productivity in the Southern Ocean fostered by tectonic reorganisation of ocean currents and an influx of nutrients from Antarctica. In the Late Oligocene, global temperatures began to warm slightly, though they continued to be significantly lower than during the previous epochs of the Paleogene and polar ice remained. Flora and fauna Tropical taxa diversified faster than those at higher latitudes after the Cretaceous–Paleogene extinction event, resulting in the development of a significant latitudinal diversity gradient. Mammals began a rapid diversification during this period. After the Cretaceous–Paleogene extinction event, which saw the demise of the non-avian dinosaurs, mammals began to evolve from a few small and generalized forms into most of the modern varieties we see presently. Some of these mammals evolved into large forms that dominated the land, while others became capable of living in marine, specialized terrestrial, and airborne environments. Those that adapted to the oceans became modern cetaceans, while those that adapted to trees became primates, the group to which humans belong.
Birds, extant dinosaurs which were already well established by the end of the Cretaceous, also experienced adaptive radiation as they took over the skies left empty by the now extinct pterosaurs. Some flightless birds such as penguins, ratites, and terror birds also filled niches left by the hesperornithes and other extinct dinosaurs. Pronounced cooling in the Oligocene resulted in a massive floral shift, and many extant modern plants arose during this time. Grasses and herbs, such as Artemisia, began to proliferate, at the expense of tropical plants, which began to decrease. Conifer forests developed in mountainous areas. This cooling trend continued, with major fluctuations, until the end of the Pleistocene period. The evidence for this floral shift is found in the palynological record. See also References External links Paleogene Microfossils: 180+ images of Foraminifera Paleogene (chronostratigraphy scale) Geological periods
23582
https://en.wikipedia.org/wiki/Preorder
Preorder
In mathematics, especially in order theory, a preorder or quasiorder is a binary relation that is reflexive and transitive. The name is meant to suggest that preorders are almost partial orders, but not quite, as they are not necessarily antisymmetric. A natural example of a preorder is the divides relation "x divides y" between integers, polynomials, or elements of a commutative ring. For example, the divides relation is reflexive as every integer divides itself. But the divides relation is not antisymmetric, because 1 divides −1 and −1 divides 1. It is to this preorder that "greatest" and "lowest" refer in the phrases "greatest common divisor" and "lowest common multiple" (except that, for integers, the greatest common divisor is also the greatest for the natural order of the integers). Preorders are closely related to equivalence relations and (non-strict) partial orders. Both of these are special cases of a preorder: an antisymmetric preorder is a partial order, and a symmetric preorder is an equivalence relation. Moreover, a preorder on a set P can equivalently be defined as an equivalence relation on P, together with a partial order on the set of equivalence classes. Like partial orders and equivalence relations, preorders (on a nonempty set) are never asymmetric. A preorder can be visualized as a directed graph, with elements of the set corresponding to vertices, and the order relation between pairs of elements corresponding to the directed edges between vertices. The converse is not true: most directed graphs are neither reflexive nor transitive. A preorder that is antisymmetric no longer has cycles; it is a partial order, and corresponds to a directed acyclic graph. A preorder that is symmetric is an equivalence relation; it can be thought of as having lost the direction markers on the edges of the graph. In general, a preorder's corresponding directed graph may have many disconnected components. As a binary relation, a preorder may be denoted ≲ or ≤. In words, when a ≲ b, one may say that b covers a or that a precedes b, or that b reduces to a. Occasionally, the notation ← or → is also used. Definition Let ≲ be a binary relation on a set P, so that by definition, ≲ is some subset of P × P and the notation a ≲ b is used in place of (a, b) ∈ ≲. Then ≲ is called a preorder or quasiorder if it is reflexive and transitive; that is, if it satisfies: Reflexivity: a ≲ a for all a ∈ P, and Transitivity: if a ≲ b and b ≲ c then a ≲ c, for all a, b, c ∈ P. A set that is equipped with a preorder is called a preordered set (or proset). Preorders as partial orders on partitions Given a preorder ≲ on P, one may define an equivalence relation ~ on P such that a ~ b if and only if a ≲ b and b ≲ a. The resulting relation ~ is reflexive since the preorder ≲ is reflexive; transitive by applying the transitivity of ≲ twice; and symmetric by definition. Using this relation, it is possible to construct a partial order on the quotient set of the equivalence, P / ~, which is the set of all equivalence classes of ~. If the preorder arises as the reflexive transitive closure of a binary relation R, then P / ~ is the set of R-cycle equivalence classes: x ∈ [y] if and only if x = y or x is in an R-cycle with y. In any case, on P / ~ it is possible to define [x] ≤ [y] if and only if x ≲ y. That this is well-defined, meaning that its defining condition does not depend on which representatives of [x] and [y] are chosen, follows from the definition of ~. It is readily verified that this yields a partially ordered set. Conversely, from any partial order on a partition of a set P, it is possible to construct a preorder on P itself. There is a one-to-one correspondence between preorders and pairs (partition, partial order).
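A minimal sketch of these definitions on a finite set, using divisibility among a few nonzero integers as the preorder: the assertions check reflexivity, transitivity and the failure of antisymmetry, and the last lines build the equivalence classes and the induced partial order on the quotient. The particular set and the helper names are chosen only for illustration.

# Divisibility among a few nonzero integers as a concrete preorder.
P = [-6, -2, -1, 1, 2, 3, 6]

def preceq(a, b):
    """a 'divides' b; the preorder relation on this finite set."""
    return b % a == 0

# Reflexivity and transitivity (the preorder axioms) hold.
assert all(preceq(a, a) for a in P)
assert all(preceq(a, c)
           for a in P for b in P for c in P
           if preceq(a, b) and preceq(b, c))
# Antisymmetry fails: 1 divides -1 and -1 divides 1, yet 1 != -1.
assert preceq(1, -1) and preceq(-1, 1)

# Equivalence classes of a ~ b (a preceq b and b preceq a); here each class pairs x with -x.
classes = {a: frozenset(b for b in P if preceq(a, b) and preceq(b, a)) for a in P}
quotient = set(classes.values())

# Induced partial order on the quotient: [a] <= [b] iff a preceq b (independent of representatives).
quotient_leq = {(X, Y) for X in quotient for Y in quotient
                if preceq(next(iter(X)), next(iter(Y)))}

print(sorted(sorted(c) for c in quotient))    # [[-6, 6], [-2, 2], [-1, 1], [3]]
print(len(quotient_leq))                      # number of related pairs in the quotient order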
Example: Let S be a formal theory, which is a set of sentences with certain properties (details of which can be found in the article on the subject). For instance, S could be a first-order theory (like Zermelo–Fraenkel set theory) or a simpler zeroth-order theory. One of the many properties of S is that it is closed under logical consequences so that, for instance, if a sentence A ∈ S logically implies some sentence B, which will be written as A ⊢ B and also as B ⊣ A, then necessarily B ∈ S (by modus ponens). The relation ⊣ is a preorder on S because A ⊣ A always holds and whenever A ⊣ B and B ⊣ C both hold then so does A ⊣ C. Furthermore, for any A and B, A ~ B if and only if A ⊢ B and B ⊢ A; that is, two sentences are equivalent with respect to ⊣ if and only if they are logically equivalent. This particular equivalence relation is commonly denoted with its own special symbol A ⇔ B, and so this symbol ⇔ may be used instead of ~. The equivalence class of a sentence A, denoted by [A], consists of all sentences B that are logically equivalent to A (that is, all B such that A ⇔ B). The partial order on S / ~ induced by ⊣, which will also be denoted by the same symbol ⊣, is characterized by [A] ⊣ [B] if and only if A ⊣ B, where the right-hand side condition is independent of the choice of representatives A and B of the equivalence classes. All that has been said of ⊣ so far can also be said of its converse relation ⊢. The preordered set (S, ⊣) is a directed set because if A, B ∈ S and if C := A ∧ B denotes the sentence formed by logical conjunction ∧, then A ⊣ C and B ⊣ C, where C ∈ S. The partially ordered set (S / ~, ⊣) is consequently also a directed set. See Lindenbaum–Tarski algebra for a related example. Relationship to strict partial orders If reflexivity is replaced with irreflexivity (while keeping transitivity) then we get the definition of a strict partial order on P. For this reason, the term strict preorder is sometimes used for a strict partial order. That is, this is a binary relation < on P that satisfies: Irreflexivity or anti-reflexivity: not a < a for all a ∈ P; that is, a < a is false for all a ∈ P; and Transitivity: if a < b and b < c then a < c, for all a, b, c ∈ P. Strict partial order induced by a preorder Any preorder ≲ gives rise to a strict partial order defined by a < b if and only if a ≲ b and not b ≲ a. Using the equivalence relation ~ introduced above, a < b if and only if a ≲ b and not a ~ b, and so the following holds: a ≲ b if and only if a < b or a ~ b. The relation < is a strict partial order and every strict partial order can be constructed this way. If the preorder ≲ is antisymmetric (and thus a partial order) then the equivalence ~ is equality (that is, a ~ b if and only if a = b) and so in this case, the definition of < can be restated as: a < b if and only if a ≲ b and a ≠ b. But importantly, this new condition is not used as (nor is it equivalent to) the general definition of the relation < (that is, < is not defined as: a < b if and only if a ≲ b and a ≠ b) because if the preorder ≲ is not antisymmetric then the resulting relation < would not be transitive (consider how equivalent non-equal elements relate). This is the reason for using the symbol "≲" instead of the "less than or equal to" symbol "≤", which might cause confusion for a preorder that is not antisymmetric since it might misleadingly suggest that a ≤ b implies that a < b or a = b.
This gives the partial order associated with the strict partial order "<" through reflexive closure; in this case the equivalence is equality =, so the symbols ≲ and ~ are not needed. Define a ≲ b as "not b < a" (that is, take the inverse complement of the relation), which corresponds to defining a ~ b as "neither a < b nor b < a"; these relations ≲ and ~ are in general not transitive; however, if they are then ~ is an equivalence; in that case "<" is a strict weak order. The resulting preorder is connected (formerly called total); that is, a total preorder. If a ≤ b then a ≲ b. The converse holds (that is, ≤ = ≲) if and only if whenever a ≠ b then a < b or b < a. Examples Graph theory The reachability relationship in any directed graph (possibly containing cycles) gives rise to a preorder, where x ≲ y in the preorder if and only if there is a path from x to y in the directed graph. Conversely, every preorder is the reachability relationship of a directed graph (for instance, the graph that has an edge from x to y for every pair (x, y) with x ≲ y). However, many different graphs may have the same reachability preorder as each other. In the same way, reachability of directed acyclic graphs, directed graphs with no cycles, gives rise to partially ordered sets (preorders satisfying an additional antisymmetry property). The graph-minor relation is also a preorder. Computer science In computer science, one can find examples of the following preorders. Asymptotic order causes a preorder over functions. The corresponding equivalence relation is called asymptotic equivalence. Polynomial-time, many-one (mapping) and Turing reductions are preorders on complexity classes. Subtyping relations are usually preorders. Simulation preorders are preorders (hence the name). Reduction relations in abstract rewriting systems. The encompassment preorder on the set of terms, defined by s ≲ t if a subterm of t is a substitution instance of s. Theta-subsumption, which is when the literals in a disjunctive first-order formula are contained by another, after applying a substitution to the former. Category theory A category with at most one morphism from any object x to any other object y is a preorder. Such categories are called thin. Here the objects correspond to the elements of the set, and there is one morphism for objects which are related, zero otherwise. In this sense, categories "generalize" preorders by allowing more than one relation between objects: each morphism is a distinct (named) preorder relation. Alternately, a preordered set can be understood as an enriched category, enriched over the category 2. Other Further examples: Every finite topological space gives rise to a preorder on its points by defining x ≲ y if and only if x belongs to every neighborhood of y. Every finite preorder can be formed as the specialization preorder of a topological space in this way. That is, there is a one-to-one correspondence between finite topologies and finite preorders. However, the relation between infinite topological spaces and their specialization preorders is not one-to-one. A net is a directed preorder, that is, each pair of elements has an upper bound. The definition of convergence via nets is important in topology, where preorders cannot be replaced by partially ordered sets without losing important features. The relation defined by x ≲ y if f(x) ≲ f(y), where f is a function into some preorder. The relation defined by x ≲ y if there exists some injection from x to y. Injection may be replaced by surjection, or any type of structure-preserving function, such as ring homomorphism, or permutation. The embedding relation for countable total orderings.
Example of a total preorder: Preference, according to common models. Constructions Every binary relation R on a set P can be extended to a preorder on P by taking the transitive closure and reflexive closure, R⁺⁼. The transitive closure indicates path connection in R: x R⁺ y if and only if there is an R-path from x to y. Left residual preorder induced by a binary relation Given a binary relation R, the complemented composition R \ R = ¬(Rᵀ ∘ ¬R) forms a preorder called the left residual, where Rᵀ denotes the converse relation of R and ¬R denotes the complement relation of R, while ∘ denotes relation composition. Related definitions If a preorder is also antisymmetric, that is, a ≲ b and b ≲ a implies a = b, then it is a partial order. On the other hand, if it is symmetric, that is, if a ≲ b implies b ≲ a, then it is an equivalence relation. A preorder is total if a ≲ b or b ≲ a for all a, b. A preordered class is a class equipped with a preorder. Every set is a class and so every preordered set is a preordered class. Uses Preorders play a pivotal role in several situations: Every preorder can be given a topology, the Alexandrov topology; and indeed, every preorder on a set is in one-to-one correspondence with an Alexandrov topology on that set. Preorders may be used to define interior algebras. Preorders provide the Kripke semantics for certain types of modal logic. Preorders are used in forcing in set theory to prove consistency and independence results. Number of preorders As explained above, there is a 1-to-1 correspondence between preorders and pairs (partition, partial order). Thus the number of preorders is the sum of the number of partial orders on every partition. Interval For a ≲ b, the interval [a, b] is the set of points x satisfying a ≲ x and x ≲ b, also written a ≲ x ≲ b. It contains at least the points a and b. One may choose to extend the definition to all pairs (a, b). The extra intervals are all empty. Using the corresponding strict relation "<", one can also define the interval (a, b) as the set of points x satisfying a < x and x < b, also written a < x < b. An open interval may be empty even if a < b. Also [a, b) and (a, b] can be defined similarly. See also Partial order – preorder that is antisymmetric Equivalence relation – preorder that is symmetric Total preorder – preorder that is total Total order – preorder that is antisymmetric and total Directed set Category of preordered sets Prewellordering Well-quasi-ordering Notes References Schmidt, Gunther, "Relational Mathematics", Encyclopedia of Mathematics and its Applications, vol. 132, Cambridge University Press, 2011. Properties of binary relations Order theory
23585
https://en.wikipedia.org/wiki/Psychoanalysis
Psychoanalysis
Psychoanalysis is a set of theories and therapeutic techniques that deal in part with the unconscious mind, and which together form a method of treatment for mental disorders. The discipline was established in the early 1890s by Sigmund Freud, whose work stemmed partly from the clinical work of Josef Breuer and others. Freud developed and refined the theory and practice of psychoanalysis until his death in 1939. In an encyclopedic article, he identified the cornerstones of psychoanalysis as "the assumption that there are unconscious mental processes, the recognition of the theory of repression and resistance, the appreciation of the importance of sexuality and of the Oedipus complex." Freud's colleagues Alfred Adler and Carl Gustav Jung developed offshoots of psychoanalysis which they called individual psychology (Adler) and analytical psychology (Jung), although Freud himself wrote a number of criticisms of them and emphatically denied that they were forms of psychoanalysis. Psychoanalysis was later developed in different directions by neo-Freudian thinkers, such as Erich Fromm, Karen Horney, and Harry Stack Sullivan. Freud distinguished between the conscious and the unconscious mind, arguing that the unconscious mind largely determines behaviour and cognition owing to unconscious drives. Freud observed that attempts to bring such drives into awareness trigger resistance in the form of defense mechanisms, particularly repression, and that conflicts between conscious and unconscious material can result in mental disturbances. He also postulated that unconscious material can be found in dreams and unintentional acts, including mannerisms and Freudian slips. Psychoanalytic therapy, or simply analytical therapy, developed as a means to improve mental health by bringing unconscious material into consciousness. Psychoanalysts place a large emphasis on early childhood in an individual's development. During therapy, a psychoanalyst aims to induce transference, whereby patients relive their infantile conflicts by projecting onto the analyst feelings of love, dependence and anger. During psychoanalytic sessions a patient traditionally lies on a couch, and an analyst sits just behind and out of sight. The patient expresses their thoughts, including free associations, fantasies, and dreams, from which the analyst infers the unconscious conflicts causing the patient's symptoms and character problems. Through the analysis of these conflicts, which includes interpreting the transference and countertransference (the analyst's feelings for the patient), the analyst confronts the patient's pathological defence mechanisms to help patients understand themselves better. Psychoanalysis is a controversial discipline, and its effectiveness as a treatment has been contested, although it retains influence within psychiatry. Psychoanalytic concepts are also widely used outside the therapeutic arena, in areas such as psychoanalytic literary criticism and film criticism, analysis of fairy tales, philosophical perspectives such as Freudo-Marxism, and other cultural phenomena.
History
1890s
The idea of psychoanalysis first began to receive serious attention under Sigmund Freud, who formulated his own theory of psychoanalysis in Vienna in the 1890s. Freud was a neurologist trying to find an effective treatment for patients with neurotic or hysterical symptoms.
While he was employed as a neurological consultant at the Children's Hospital, Freud realised that there were mental processes that were not conscious: he noticed that many aphasic children had no apparent organic cause for their symptoms. He then wrote a monograph about this subject. In 1885, Freud obtained a grant to study with Jean-Martin Charcot, a famed neurologist, at the Salpêtrière in Paris, where he followed the clinical presentations of Charcot, particularly in the areas of hysteria, paralyses and the anaesthesias. Charcot had introduced hypnotism as an experimental research tool and developed photographic representation of clinical symptoms. Freud's first theory to explain hysterical symptoms was presented in Studies on Hysteria (1895), co-authored with his mentor, the distinguished physician Josef Breuer, which was generally seen as the birth of psychoanalysis. The work was based on Breuer's treatment of Bertha Pappenheim, referred to in case studies by the pseudonym "Anna O.", a treatment which Pappenheim herself had dubbed the "talking cure". Breuer wrote that many factors could result in such symptoms, including various types of emotional trauma, and he also credited work by others such as Pierre Janet, while Freud contended that at the root of hysterical symptoms were repressed memories of distressing occurrences, almost always having direct or indirect sexual associations. Around the same time, Freud attempted to develop a neuro-physiological theory of unconscious mental mechanisms, which he soon gave up. It remained unpublished in his lifetime. The term 'psychoanalysis' was first introduced by Freud in his essay titled "Heredity and etiology of neuroses", written and published in French in 1896. In 1896, Freud also published his seduction theory, claiming to have uncovered repressed memories of incidents of sexual abuse for all his current patients, from which he proposed that the preconditions for hysterical symptoms are sexual excitations in infancy. Though in 1896 he had reported that his patients "had no feeling of remembering the [infantile sexual] scenes", and had assured him "emphatically of their unbelief", in later accounts he claimed that they had told him that they had been sexually abused in infancy. By 1898 he had privately acknowledged to his friend and colleague Wilhelm Fliess that he no longer believed in his theory, though he did not state this publicly until 1906. Building on his claims that the patients reported infantile sexual abuse experiences, Freud subsequently contended that his clinical findings in the mid-1890s provided evidence of the occurrence of unconscious fantasies, supposedly to cover up memories of infantile masturbation. Only much later did he claim the same findings as evidence for Oedipal desires. In the latter part of the 20th century, several Freud scholars challenged Freud's perception of the patients who informed him of childhood sexual abuse, arguing that he had imposed his preconceived notions on his patients. By 1899, Freud had theorised that dreams had symbolic significance and generally were specific to the dreamer. Freud formulated his second psychological theory—that the unconscious has or is a "primary process" consisting of symbolic and condensed thoughts, and a "secondary process" of logical, conscious thoughts. This theory was published in his 1899 book, The Interpretation of Dreams, which Freud thought of as his most significant work.
Freud outlined a new topographic theory, which theorised that unacceptable sexual wishes were repressed into the "System Unconscious". These wishes were made unconscious due to society's condemnation of premarital sexual activity, and this repression created anxiety. This "topographic theory" is still popular in much of Europe, although it has fallen out of favour in much of North America, where it has been largely supplanted by structural theory. In addition, The Interpretation of Dreams contained Freud's first conceptualisation of the Oedipal complex, which asserted that young boys are sexually attracted to their mothers and envious of their fathers for being able to have sex with their mothers. Psychologist Frank Sulloway, in his book Freud, Biologist of the Mind: Beyond the Psychoanalytic Legend, argues that Freud's biological theories, such as the libido, were rooted in the biological hypotheses that accompanied the work of Charles Darwin, citing the theories of Krafft-Ebing, Molland, Havelock Ellis, Haeckel, and Wilhelm Fliess as influences on Freud.
1900–1940s
In 1905, Freud published Three Essays on the Theory of Sexuality in which he laid out his discovery of the psychosexual phases, which categorised early childhood development into five stages depending on what sexual affinity a child possessed at each stage: Oral (ages 0–2); Anal (2–4); Phallic-oedipal or First genital (3–6); Latency (6–puberty); and Mature genital (puberty–onward). His early formulation included the idea that because of societal restrictions, sexual wishes were repressed into an unconscious state, and that the energy of these unconscious wishes could result in anxiety or physical symptoms. Early treatment techniques, including hypnotism and abreaction, were designed to make the unconscious conscious in order to relieve the pressure and the apparently resulting symptoms. Freud later set this method aside, giving free association a bigger role. In On Narcissism (1915), Freud turned his attention to the titular subject of narcissism. Freud characterized the difference between energy directed at the self versus energy directed at others using a system known as cathexis. By 1917, in "Mourning and Melancholia", he suggested that certain depressions were caused by turning guilt-ridden anger on the self. In 1919, through "A Child is Being Beaten", he began to address the problems of self-destructive behavior and sexual masochism. Based on his experience with depressed and self-destructive patients, and pondering the carnage of World War I, Freud became dissatisfied with considering only oral and sexual motivations for behavior. By 1920, Freud addressed the power of identification (with the leader and with other members) in groups as a motivation for behavior in Group Psychology and the Analysis of the Ego. In that same year, Freud suggested his dual drive theory of sexuality and aggression in Beyond the Pleasure Principle, to try to begin to explain human destructiveness. Also, it was the first appearance of his "structural theory" consisting of three new concepts: id, ego, and superego. Three years later, in 1923, he summarised the ideas of id, ego, and superego in The Ego and the Id. In the book, he revised the whole theory of mental functioning, now considering that repression was only one of many defense mechanisms, and that it occurred to reduce anxiety. Hence, Freud characterised repression as both a cause and a result of anxiety.
In 1926, in "Inhibitions, Symptoms and Anxiety", Freud characterised how intrapsychic conflict among drive and superego caused anxiety, and how that anxiety could lead to an inhibition of mental functions, such as intellect and speech. In 1924, Otto Rank published The Trauma of Birth, which analysed culture and philosophy in relation to separation anxiety which occurred before the development of an Oedipal complex. Freud's theories, however, characterized no such phase. According to Freud, the Oedipus complex was at the centre of neurosis, and was the foundational source of all art, myth, religion, philosophy, therapy—indeed of all human culture and civilization. It was the first time that anyone in Freud's inner circle had characterised something other than the Oedipus complex as contributing to intrapsychic development, a notion that was rejected by Freud and his followers at the time. By 1936 the "Principle of Multiple Function" was clarified by Robert Waelder. He widened the formulation that psychological symptoms were caused by and relieved conflict simultaneously. Moreover, symptoms (such as phobias and compulsions) each represented elements of some drive wish (sexual and/or aggressive), superego, anxiety, reality, and defenses. Also in 1936, Anna Freud, Sigmund's daughter, published her seminal book, The Ego and the Mechanisms of Defense, outlining numerous ways the mind could shut upsetting things out of consciousness. 1940s–present When Hitler's power grew, the Freud family and many of their colleagues fled to London. Within a year, Sigmund Freud died. In the United States, also following the death of Freud, a new group of psychoanalysts began to explore the function of the ego. Led by Heinz Hartmann, the group built upon understandings of the synthetic function of the ego as a mediator in psychic functioning, distinguishing such from autonomous ego functions (e.g. memory and intellect). These "ego psychologists" of the 1950s paved a way to focus analytic work by attending to the defenses (mediated by the ego) before exploring the deeper roots to the unconscious conflicts. In addition, there was growing interest in child psychoanalysis. Psychoanalysis has been used as a research tool into childhood development, and is still used to treat certain mental disturbances. In the 1960s, Freud's early thoughts on the childhood development of female sexuality were challenged; this challenge led to the development of a variety of understandings of female sexual development, many of which modified the timing and normality of several of Freud's theories. Several researchers followed Karen Horney's studies of societal pressures that influence the development of women. In the first decade of the 21st century, there were approximately 35 training institutes for psychoanalysis in the United States accredited by the American Psychoanalytic Association (APsaA), which is a component organization of the International Psychoanalytical Association (IPA), and there are over 3000 graduated psychoanalysts practicing in the United States. The IPA accredits psychoanalytic training centers through such "component organisations" throughout the rest of the world, including countries such as Serbia, France, Germany, Austria, Italy, Switzerland, and many others, as well as about six institutes directly in the United States. Psychoanalysis as a movement Freud founded the Psychological Wednesday Society in 1902, which Edward Shorter argues was the beginning of psychoanalysis as a movement. 
This society became the Vienna Psychoanalytic Society in 1908 in the same year as the first international congress of psychoanalysis held in Salzburg, Austria. Alfred Adler was one of the most active members in this society in its early years. The second congress of psychoanalysis took place in Nuremberg, Germany in 1910. At this congress, Ferenczi called for the creation of an International Psychoanalytic Association with Jung as president for life. A third congress was held in Weimar in 1911. The London Psychoanalytical Society was founded in 1913 by Ernest Jones. Developments of alternative forms of psychotherapy Cognitive behavioural therapy (CBT) In the 1950s, psychoanalysis was the main modality of psychotherapy. Behavioural models of psychotherapy started to assume a more central role in psychotherapy in the 1960s. Aaron T. Beck, a psychiatrist trained in a psychoanalytic tradition, set out to test the psychoanalytic models of depression empirically and found that conscious ruminations of loss and personal failing were correlated with depression. He suggested that distorted and biased beliefs were a causal factor of depression, publishing an influential paper in 1967 after a decade of research using the construct of schemas to explain the depression. Beck developed this empirically supported hypothesis for the cause of depression into a talking therapy called cognitive behavioral therapy (CBT) in the early 1970s. Attachment theory Attachment theory was developed theoretically by John Bowlby and formalized empirically by Mary Ainsworth. Bowlby was trained psychoanalytically but was concerned about some properties of psychoanalysis; he was troubled by the dogmatism of psychoanalysis at the time, its arcane terminology, the lack of attention to environment in child behaviour, and the concepts derived from talking therapy to child behaviour. In response, he developed an alternative conceptualization of child behaviour based on principles on ethology. Bowlby's theory of attachment rejects Freud's model of psychosexual development based on the Oedipal model. For his work, Bowlby was shunned from psychoanalytical circles who did not accept his theories. Nonetheless, his conceptualization was adopted widely by mother-infant research in the 1970s. Theories The predominant psychoanalytic theories can be organised into several theoretical schools. Although these perspectives differ, most of them emphasize the influence of unconscious elements on the conscious. There has also been considerable work done on consolidating elements of conflicting theories. There are some persistent conflicts among psychoanalysts regarding specific causes of certain syndromes, and some disputes regarding the ideal treatment techniques. In the 21st century, psychoanalytic ideas have found influence in fields such as childcare, education, literary criticism, cultural studies, mental health, and particularly psychotherapy. Though most mainstream psychoanalysts subscribe to modern strains of psychoanalytical thought, there are groups who follow the precepts of a single psychoanalyst and their school of thought. Psychoanalytic ideas also play roles in some types of literary analysis such as archetypal literary criticism. Topographic theory Topographic theory was named and first described by Sigmund Freud in The Interpretation of Dreams (1899). The theory hypothesizes that the mental apparatus can be divided into the systems Conscious, Preconscious, and Unconscious. 
These systems are not anatomical structures of the brain but, rather, mental processes. Although Freud retained this theory throughout his life, he largely replaced it with the structural theory. Structural theory Structural theory divides the psyche into the id, the ego, and the super-ego. The id is present at birth as the repository of basic instincts, which Freud called "Triebe" ("drives"). Unorganized and unconscious, it operates merely on the 'pleasure principle', without realism or foresight. The ego develops slowly and gradually, being concerned with mediating between the urging of the id and the realities of the external world; it thus operates on the 'reality principle'. The super-ego is held to be the part of the ego in which self-observation, self-criticism and other reflective and judgmental faculties develop. The ego and the super-ego are both partly conscious and partly unconscious. Theoretical and clinical approaches During the twentieth century, many different clinical and theoretical models of psychoanalysis emerged. Ego psychology Ego psychology was initially suggested by Freud in Inhibitions, Symptoms and Anxiety (1926), while major steps forward would be made through Anna Freud's work on defense mechanisms, first published in her book The Ego and the Mechanisms of Defence (1936). The theory was refined by Hartmann, Loewenstein, and Kris in a series of papers and books from 1939 through the late 1960s. Leo Bellak was a later contributor. This series of constructs, paralleling some of the later developments of cognitive theory, includes the notions of autonomous ego functions: mental functions not dependent, at least in origin, on intrapsychic conflict. Such functions include: sensory perception, motor control, symbolic thought, logical thought, speech, abstraction, integration (synthesis), orientation, concentration, judgment about danger, reality testing, adaptive ability, executive decision-making, hygiene, and self-preservation. Freud noted that inhibition is one method that the mind may utilize to interfere with any of these functions in order to avoid painful emotions. Hartmann (1950s) pointed out that there may be delays or deficits in such functions. Frosch (1964) described differences in those people who demonstrated damage to their relationship to reality, but who seemed able to test it. According to ego psychology, ego strengths, later described by Otto F. Kernberg (1975), include the capacities to control oral, sexual, and destructive impulses; to tolerate painful affects without falling apart; and to prevent the eruption into consciousness of bizarre symbolic fantasy. Synthetic functions, in contrast to autonomous functions, arise from the development of the ego and serve the purpose of managing conflict processes. Defenses are synthetic functions that protect the conscious mind from awareness of forbidden impulses and thoughts. One purpose of ego psychology has been to emphasize that some mental functions can be considered to be basic, rather than derivatives of wishes, affects, or defenses. However, autonomous ego functions can be secondarily affected because of unconscious conflict. For example, a patient may have an hysterical amnesia (memory being an autonomous function) because of intrapsychic conflict (wishing not to remember because it is too painful). Taken together, the above theories present a group of metapsychological assumptions. 
Therefore, the inclusive group of the different classical theories provides a cross-sectional view of human mental processes. There are six "points of view", five described by Freud and a sixth added by Hartmann. Unconscious processes can therefore be evaluated from each of these six points of view: Topographic Dynamic (the theory of conflict) Economic (the theory of energy flow) Structural Genetic (i.e. propositions concerning origin and development of psychological functions) Adaptational (i.e. psychological phenomena as it relates to the external world) Modern conflict theory Modern conflict theory, a variation of ego psychology, is a revised version of structural theory, most notably different by altering concepts related to where repressed thoughts were stored. Modern conflict theory addresses emotional symptoms and character traits as complex solutions to mental conflict. It dispenses with the concepts of a fixed id, ego and superego, and instead posits conscious and unconscious conflict among wishes (dependent, controlling, sexual, and aggressive), guilt and shame, emotions (especially anxiety and depressive affect), and defensive operations that shut off from consciousness some aspect of the others. Moreover, healthy functioning (adaptive) is also determined, to a great extent, by resolutions of conflict. A major objective of modern conflict-theory psychoanalysis is to change the balance of conflict in a patient by making aspects of the less adaptive solutions (also called "compromise formations") conscious so that they can be rethought, and more adaptive solutions found. Current theoreticians who follow the work of Charles Brenner, especially The Mind in Conflict (1982), include Sandor Abend, Jacob Arlow, and Jerome Blackman. Object relations theory Object relations theory attempts to explain human relationships through a study of how mental representations of the self and others are organized. The clinical symptoms that suggest object relations problems (typically developmental delays throughout life) include disturbances in an individual's capacity to feel: warmth, empathy, trust, sense of security, identity stability, consistent emotional closeness, and stability in relationships with significant others. Klein discusses the concept of introjection, creating a mental representation of external objects; and projection, applying this mental representation to reality. Wilfred Bion introduced the concept of containment of projections in the mother-child relationship where a mother understands an infants projections, modifies them and returns them to the child. Concepts regarding internal representation (aka 'introspect', 'self and object representation', or 'internalization of self and other'), although often attributed to Melanie Klein, were actually first mentioned by Sigmund Freud in his early concepts of drive theory (Three Essays on the Theory of Sexuality, 1905). Freud's 1917 paper "Mourning and Melancholia", for example, hypothesized that unresolved grief was caused by the survivor's internalized image of the deceased becoming fused with that of the survivor, and then the survivor shifting unacceptable anger toward the deceased onto the now complex self-image. 
Melanie Klein's hypotheses regarding internalization during the first year of life, leading to paranoid and depressive positions, were later challenged by René Spitz (e.g., The First Year of Life, 1965), who divided the first year of life into a coenesthetic phase of the first six months, and then a diacritic phase for the second six months. Mahler, Fine, and Bergman (1975) describe distinct phases and subphases of child development leading to "separation-individuation" during the first three years of life, stressing the importance of constancy of parental figures in the face of the child's destructive aggression, internalizations, stability of affect management, and ability to develop healthy autonomy. During adolescence, Erik Erikson (1950–1960s) described the 'identity crisis', that involves identity-diffusion anxiety. In order for an adult to be able to experience "Warm-ETHICS: (warmth, Empathy, Trust, Holding environment, Identity, Closeness, and Stability) in relationships, the teenager must resolve the problems with identity and redevelop self and object constancy. Self psychology Self psychology emphasizes the development of a stable and integrated sense of self through empathic contacts with other humans, primary significant others conceived of as 'selfobjects'. Selfobjects meet the developing self's needs for mirroring, idealization, and twinship, and thereby strengthen the developing self. The process of treatment proceeds through "transmuting internalizations" in which the patient gradually internalizes the selfobject functions provided by the therapist. Self psychology was proposed originally by Heinz Kohut, and has been further developed by Arnold Goldberg, Frank Lachmann, Paul and Anna Ornstein, Marian Tolpin, and others. Lacanian psychoanalysis Lacanian psychoanalysis, which integrates psychoanalysis with structural linguistics and Hegelian philosophy, is especially popular in France and parts of Latin America. Lacanian psychoanalysis is a departure from the traditional British and American psychoanalysis. Jacques Lacan frequently used the phrase "retourner à Freud" ("return to Freud") in his seminars and writings, as he claimed that his theories were an extension of Freud's own, contrary to those of Anna Freud, the Ego Psychology, object relations and "self" theories and also claims the necessity of reading Freud's complete works, not only a part of them. Lacan's concepts concern the "mirror stage", the "Real", the "Imaginary", and the "Symbolic", and the claim that "the unconscious is structured as a language." Though a major influence on psychoanalysis in France and parts of Latin America, Lacan and his ideas have taken longer to be translated into English and he has thus had a lesser impact on psychoanalysis and psychotherapy in the English-speaking world. In the United Kingdom and the United States, his ideas are most widely used to analyze texts in literary theory. Due to his increasingly critical stance towards the deviation from Freud's thought, often singling out particular texts and readings from his colleagues, Lacan was excluded from acting as a training analyst in the IPA, thus leading him to create his own school in order to maintain an institutional structure for the many candidates who desired to continue their analysis with him. Adaptive paradigm The adaptive paradigm of psychotherapy develops out of the work of Robert Langs. The adaptive paradigm interprets psychic conflict primarily in terms of conscious and unconscious adaptation to reality. 
Langs' recent work in some measure returns to the earlier Freud, in that Langs prefers a modified version of the topographic model of the mind (conscious, preconscious, and unconscious) over the structural model (id, ego, and super-ego), including the former's emphasis on trauma (though Langs looks to death-related traumas rather than sexual traumas). At the same time, Langs' model of the mind differs from Freud's in that it understands the mind in terms of evolutionary biological principles.
Relational psychoanalysis
Relational psychoanalysis combines interpersonal psychoanalysis with object-relations theory and with intersubjective theory as critical for mental health. It was introduced by Stephen Mitchell. Relational psychoanalysis stresses how the individual's personality is shaped by both real and imagined relationships with others, and how these relationship patterns are re-enacted in the interactions between analyst and patient. Relational psychoanalysts have propounded their view of the necessity of helping certain detached, isolated patients develop the capacity for "mentalization" associated with thinking about relationships and themselves.
Psychopathology (mental disturbances)
Childhood origins
Freudian theories hold that adult problems can be traced to unresolved conflicts from certain phases of childhood and adolescence, caused by fantasies stemming from the child's own drives. Freud, based on the data gathered from his patients early in his career, suspected that neurotic disturbances occurred when children were sexually abused in childhood (i.e. seduction theory). Later, Freud came to believe that, although child abuse occurs, neurotic symptoms were not associated with this. He believed that neurotic people often had unconscious conflicts that involved incestuous fantasies deriving from different stages of development. He found the stage from about three to six years of age (preschool years, today called the "first genital stage") to be filled with fantasies of having romantic relationships with both parents. Arguments were quickly generated in early 20th-century Vienna about whether adult seduction of children, i.e. child sexual abuse, was the basis of neurotic illness. There still is no complete agreement, although nowadays professionals recognize the negative effects of child sexual abuse on mental health. The theory on the origins of pathologically dysfunctional relationships was further developed by the psychiatrist Jürg Willi (born 16 March 1934 in Zürich, died 8 April 2019) into the concept of collusion. The concept takes the observations of Sigmund Freud about the narcissistic, the oral, the anal and the phallic phases and translates them into a model of the couple relationship between two partners, with respect to dysfunctions in the relationship resulting from childhood trauma.
Oedipal conflicts
Many psychoanalysts who work with children have studied the actual effects of child abuse, which include ego and object relations deficits and severe neurotic conflicts. Much research has been done on these types of trauma in childhood, and on their adult sequelae. In studying the childhood factors that start neurotic symptom development, Freud found a constellation of factors that, for literary reasons, he termed the Oedipus complex, based on the play by Sophocles, Oedipus Rex, in which the protagonist unwittingly kills his father and marries his mother. The validity of the Oedipus complex is now widely disputed and rejected. The shorthand term, oedipal—later explicated by Joseph J.
Sandler in "On the Concept Superego" (1960) and modified by Charles Brenner in The Mind in Conflict (1982)—refers to the powerful attachments that children make to their parents in the preschool years. These attachments involve fantasies of sexual relationships with either (or both) parent, and, therefore, competitive fantasies toward either (or both) parents. Humberto Nagera (1975) has been particularly helpful in clarifying many of the complexities of the child through these years. "Positive" and "negative" oedipal conflicts have been attached to the heterosexual and homosexual aspects, respectively. Both seem to occur in development of most children. Eventually, the developing child's concessions to reality (that they will neither marry one parent nor eliminate the other) lead to identifications with parental values. These identifications generally create a new set of mental operations regarding values and guilt, subsumed under the term superego. Besides superego development, children "resolve" their preschool oedipal conflicts through channeling wishes into something their parents approve of ("sublimation") and the development, during the school-age years ("latency") of age-appropriate obsessive-compulsive defensive maneuvers (rules, repetitive games). Treatment Using the various analytic and psychological techniques to assess mental problems, some believe that there are particular constellations of problems that are especially suited for analytic treatment (see below) whereas other problems might respond better to medicines and other interpersonal interventions. To be treated with psychoanalysis, whatever the presenting problem, the person requesting help must demonstrate a desire to start an analysis. The person wishing to start an analysis must have some capacity for speech and communication. As well, they need to be able to have or develop trust and insight within the psychoanalytic session. Potential patients must undergo a preliminary stage of treatment to assess their amenability to psychoanalysis at that time, and also to enable the analyst to form a working psychological model, which the analyst will use to direct the treatment. Psychoanalysts mainly work with neurosis and hysteria in particular; however, adapted forms of psychoanalysis are used in working with schizophrenia and other forms of psychosis or mental disorder. Finally, if a prospective patient is severely suicidal a longer preliminary stage may be employed, sometimes with sessions which have a twenty-minute break in the middle. There are numerous modifications in technique under the heading of psychoanalysis due to the individualistic nature of personality in both analyst and patient. The most common problems treatable with psychoanalysis include: phobias, conversions, compulsions, obsessions, anxiety attacks, depressions, sexual dysfunctions, a wide variety of relationship problems (such as dating and marital strife), and a wide variety of character problems (for example, painful shyness, meanness, obnoxiousness, workaholism, hyperseductiveness, hyperemotionality, hyperfastidiousness). The fact that many of such patients also demonstrate deficits above makes diagnosis and treatment selection difficult. Analytical organizations such as the IPA, APsaA and the European Federation for Psychoanalytic Psychotherapy have established procedures and models for the indication and practice of psychoanalytical therapy for trainees in analysis. 
The match between the analyst and the patient can be viewed as another contributing factor for the indication and contraindication for psychoanalytic treatment. The analyst decides whether the patient is suitable for psychoanalysis. This decision, besides being based on the usual indications and pathology, also rests to a certain degree on the "fit" between analyst and patient. A person's suitability for analysis at any particular time is based on their desire to know something about where their illness has come from. Someone who is not suitable for analysis expresses no desire to know more about the root causes of their illness. An evaluation may include one or more other analysts' independent opinions and will include discussion of the patient's financial situation and insurance.
Techniques
The foundation of psychoanalysis is interpretation of the patient's unconscious conflicts that are interfering with current-day functioning – conflicts that are causing painful symptoms such as phobias, anxiety, depression, and compulsions. Strachey (1936) stressed that figuring out ways the patient distorted perceptions about the analyst led to understanding what may have been forgotten. In particular, unconscious hostile feelings toward the analyst could be found in symbolic, negative reactions to what Robert Langs later called the "frame" of the therapy—the setup that included times of the sessions, payment of fees, and necessity of talking. In patients who make mistakes, forget, or show other peculiarities regarding time, fees, and talking, the analyst can usually find various unconscious "resistances" to the flow of thoughts (also known as free association). When the patient reclines on a couch with the analyst out of view, the patient tends to remember more experiences, more resistance and transference, and is able to reorganize thoughts after the development of insight – through the interpretive work of the analyst. Although fantasy life can be understood through the examination of dreams, masturbation fantasies are also important. The analyst is interested in how the patient reacts to and avoids such fantasies. Various memories of early life are generally distorted—what Freud called screen memories—and in any case, very early experiences (before age two) cannot be remembered.
Variations in technique
There is what is known among psychoanalysts as classical technique, although Freud throughout his writings deviated from this considerably, depending on the problems of any given patient. Classical technique was summarized by Allan Compton as comprising: Instructions: telling the patient to try to say what's on their mind, including interferences; Exploration: asking questions; and Clarification: rephrasing and summarizing what the patient has been describing. As well, the analyst can also use confrontation to bring an aspect of functioning, usually a defense, to the patient's attention. The analyst then uses a variety of interpretation methods, such as: Dynamic interpretation: explaining how being too nice guards against guilt (e.g. defense vs. affect); Genetic interpretation: explaining how a past event is influencing the present; Resistance interpretation: showing the patient how they are avoiding their problems; Transference interpretation: showing the patient ways old conflicts arise in current relationships, including that with the analyst; or Dream interpretation: obtaining the patient's thoughts about their dreams and connecting this with their current problems.
Analysts can also use reconstruction to estimate what may have happened in the past that created some current issue. These techniques are primarily based on conflict theory (see above). As object relations theory evolved, supplemented by the work of John Bowlby and Mary Ainsworth, techniques with patients who had more severe problems with basic trust (Erikson, 1950) and a history of maternal deprivation (see the works of Augusta Alpert) led to new techniques with adults. These have sometimes been called interpersonal, intersubjective (cf. Stolorow), relational, or corrective object relations techniques. Ego psychological concepts of deficit in functioning led to refinements in supportive therapy. These techniques are particularly applicable to psychotic and near-psychotic (cf., Eric Marcus, "Psychosis and Near-psychosis") patients. These supportive therapy techniques include discussions of reality; encouragement to stay alive (including hospitalization); psychotropic medicines to relieve overwhelming depressive affect or overwhelming fantasies (hallucinations and delusions); and advice about the meanings of things (to counter abstraction failures). The notion of the "silent analyst" has been criticized. Actually, the analyst listens using Arlow's approach as set out in "The Genesis of Interpretation", using active intervention to interpret resistances, defenses creating pathology, and fantasies. Silence is not a technique of psychoanalysis (see also the studies and opinion papers of Owen Renik). "Analytic neutrality" is a concept that does not mean the analyst is silent. It refers to the analyst's position of not taking sides in the internal struggles of the patient. For example, if a patient feels guilty, the analyst might explore what the patient has been doing or thinking that causes the guilt, but not reassure the patient not to feel guilty. The analyst might also explore the identifications with parents and others that led to the guilt. Interpersonal–relational psychoanalysts emphasize the notion that it is impossible to be neutral. Sullivan introduced the term participant-observer to indicate the analyst inevitably interacts with the analysand, and suggested the detailed inquiry as an alternative to interpretation. The detailed inquiry involves noting where the analysand is leaving out important elements of an account and noting when the story is obfuscated, and asking careful questions to open up the dialogue. Group therapy and play therapy Although single-client sessions remain the norm, psychoanalytic theory has been used to develop other types of psychological treatment. Psychoanalytic group therapy was pioneered by Trigant Burrow, Joseph Pratt, Paul F. Schilder, Samuel R. Slavson, Harry Stack Sullivan, and Wolfe. Child-centered counseling for parents was instituted early in analytic history by Freud, and was later further developed by Irwin Marcus, Edith Schulhofer, and Gilbert Kliman. Psychoanalytically based couples therapy has been promulgated and explicated by Fred Sander. Techniques and tools developed in the first decade of the 21st century have made psychoanalysis available to patients who were not treatable by earlier techniques. This meant that the analytic situation was modified so that it would be more suitable and more likely to be helpful for these patients. Eagle (2007) believes that psychoanalysis cannot be a self-contained discipline but instead must be open to influence from and integration with findings and theory from other disciplines. 
Psychoanalytic constructs have been adapted for use with children with treatments such as play therapy, art therapy, and storytelling. Throughout her career, from the 1920s through the 1970s, Anna Freud adapted psychoanalysis for children through play. This is still used today for children, especially those who are preadolescent. Using toys and games, children are able to symbolically demonstrate their fears, fantasies, and defenses; although not identical, this technique, in children, is analogous to the aim of free association in adults. Psychoanalytic play therapy allows the child and analyst to understand children's conflicts, particularly defenses such as disobedience and withdrawal, that have been guarding against various unpleasant feelings and hostile wishes. In art therapy, the counselor may have a child draw a portrait and then tell a story about the portrait. The counselor watches for recurring themes—regardless of whether it is with art or toys.
Cultural variations
Psychoanalysis can be adapted to different cultures, as long as the therapist or counselor understands the client's culture. For example, Tori and Blimes found that defense mechanisms were valid in a normative sample of 2,624 Thais. The use of certain defense mechanisms was related to cultural values. For example, Thais value calmness and collectiveness (because of Buddhist beliefs), so they were low on regressive emotionality. Psychoanalysis also applies because Freud used techniques that allowed him to get the subjective perceptions of his patients. He took an objective approach by not facing his clients during his talk therapy sessions. He met with his patients wherever they were, such as when he used free association, in which clients would say whatever came to mind without self-censorship. His treatments had little to no structure for most cultures, especially Asian cultures. Therefore, it is more likely that Freudian constructs will be used in structured therapy. In addition, Corey postulates that it will be necessary for a therapist to help clients develop a cultural identity as well as an ego identity.
Psychodynamic therapy
Psychodynamic therapies refer to therapies that draw from psychoanalytic approaches but are designed to be shorter in duration or less intensive.
Cost and length of treatment
The cost to the patient of psychoanalytic treatment ranges widely from place to place and between practitioners. Low-fee analysis is often available in psychoanalytic training clinics and graduate schools. Otherwise, the fee set by each analyst varies with the analyst's training and experience. Since, in most locations in the United States, unlike in Ontario and Germany, classical analysis (which usually requires sessions three to five times per week) is not covered by health insurance, many analysts may negotiate their fees with patients whom they feel they can help, but who have financial difficulties. The modifications of analysis, which include psychodynamic therapy, brief therapies, and certain types of group therapy, are carried out on a less frequent basis, usually once, twice, or three times a week, and usually the patient sits facing the therapist. As a result of the defense mechanisms and the lack of access to the unfathomable elements of the unconscious, psychoanalysis can be an extensive process that involves 2 to 5 sessions per week for several years. This type of therapy relies on the belief that reducing the symptoms will not actually help with the root causes or irrational drives.
The analyst typically is a 'blank screen', disclosing very little about themselves in order that the client can use the space in the relationship to work on their unconscious without interference from outside. The psychoanalyst uses various methods to help the patient become more self-aware and insightful and to uncover the meanings of symptoms. Firstly, the psychoanalyst attempts to develop a safe and confidential atmosphere where the patient can report feelings, thoughts and fantasies. Analysands (as people in analysis are called) are asked to report whatever comes to mind without fear of reprisal. Freud called this the "fundamental rule". Analysands are asked to talk about their lives, including their early life, current life and hopes and aspirations for the future. They are encouraged to report their fantasies, "flash thoughts" and dreams. In fact, Freud believed that dreams were "the royal road to the unconscious"; he devoted an entire volume to the interpretation of dreams. Freud had his patients lie on a couch in a dimly lit room and would sit out of sight, usually directly behind them, so as not to influence the patient's thoughts by his gestures or expressions. The psychoanalyst's task, in collaboration with the analysand, is to help deepen the analysand's understanding of those factors, outside of his awareness, that drive his behaviors. In the safe environment psychoanalysis offers, the analysand becomes attached to the analyst and soon begins to experience the same conflicts with his analyst that he experiences with key figures in his life such as his parents, his boss, his significant other, etc. It is the psychoanalyst's role to point out these conflicts and to interpret them. The transferring of these internal conflicts onto the analyst is called "transference". Many studies have also been done on briefer "dynamic" treatments; these are more expedient to measure, and shed light on the therapeutic process to some extent. Brief Relational Therapy (BRT), Brief Psychodynamic Therapy (BPT), and Time-Limited Dynamic Therapy (TLDP) limit treatment to 20–30 sessions. On average, classical analysis may last 5.7 years, but for phobias and depressions uncomplicated by ego deficits or object relations deficits, analysis may run for a shorter period of time. Longer analyses are indicated for those with more serious disturbances in object relations, more symptoms, and more ingrained character pathology.
Training and research
Psychoanalysis continues to be practiced by psychiatrists, social workers, and other mental health professionals; however, its practice has declined. It was largely replaced by the similar but broader psychodynamic psychotherapy in the mid-20th century. Psychoanalytic approaches continue to be listed by the UK National Health Service as possibly helpful for depression.
United States
Psychoanalytic training in the United States tends to vary according to the program, but it involves a personal psychoanalysis for the trainee, approximately 300 to 600 hours of class instruction, with a standard curriculum, over a two- to five-year period. Typically, this psychoanalysis must be conducted by a Supervising and Training Analyst. Most institutes (but not all) within the American Psychoanalytic Association require that Supervising and Training Analysts become certified by the American Board of Psychoanalysts. Certification entails a blind review in which the psychoanalyst's work is vetted by psychoanalysts outside of their local community.
After earning certification, these psychoanalysts undergo another hurdle in which they are specially vetted by senior members of their own institute and held to the highest ethical and moral standards. Moreover, they are required to have extensive experience conducting psychoanalyses. Candidates generally have an hour of supervision each week per psychoanalytic case. The minimum number of cases varies between institutes. Candidates often have two to four cases; both male and female cases are required. Supervision extends at least a few years on one or more cases. During supervision the trainee presents material from the psychoanalytic work that week. With the supervisor, the trainee then explores the patient's unconscious conflicts with examination of transference-countertransference constellations. Many psychoanalytic training centers in the United States have been accredited by special committees of the APsaA or the IPA. Because of theoretical differences, there are independent institutes, usually founded by psychologists, who until 1987 were not permitted access to psychoanalytic training institutes of the APsaA. Currently there are between 75 and 100 independent institutes in the United States. As well, other institutes are affiliated to other organizations such as the American Academy of Psychoanalysis and Dynamic Psychiatry, and the National Association for the Advancement of Psychoanalysis. At most psychoanalytic institutes in the United States, qualifications for entry include a terminal degree in a mental health field, such as Ph.D., Psy.D., M.S.W., or M.D. A few institutes restrict applicants to those already holding an M.D. or Ph.D., and most institutes in Southern California confer a Ph.D. or Psy.D. in psychoanalysis upon graduation, which involves completion of the necessary requirements for the state boards that confer that doctoral degree. The first training institute in America to educate non-medical psychoanalysts was The National Psychological Association for Psychoanalysis (1978) in New York City. It was founded by the analyst Theodor Reik. The Contemporary Freudian (originally the New York Freudian Society) an offshoot of the National Psychological Association has a branch in Washington, DC. It is a component society/institute or the IPA. Some psychoanalytic training has been set up as a post-doctoral fellowship in university settings, such as at Duke University, Yale University, New York University, Adelphi University and Columbia University. Other psychoanalytic institutes may not be directly associated with universities, but the faculty at those institutes usually hold contemporaneous faculty positions with psychology Ph.D. programs and/or with medical school psychiatry residency programs. The IPA is the world's primary accrediting and regulatory body for psychoanalysis. Their mission is to assure the continued vigor and development of psychoanalysis for the benefit of psychoanalytic patients. It works in partnership with its 70 constituent organizations in 33 countries to support 11,500 members. In the US, there are 77 psychoanalytical organizations, institutes and associations, which are spread across the states. APsaA has 38 affiliated societies which have 10 or more active members who practice in a given geographical area. The aims of APsaA and other psychoanalytical organizations are: provide ongoing educational opportunities for its members, stimulate the development and research of psychoanalysis, provide training and organize conferences. 
There are eight affiliated study groups in the United States. A study group is the first level of integration of a psychoanalytical body within the IPA, followed by a provisional society and finally a member society. The Division of Psychoanalysis (39) of the American Psychological Association (APA) was established in the early 1980s by several psychologists. Until the establishment of the Division of Psychoanalysis, psychologists who had trained in independent institutes had no national organization. The Division of Psychoanalysis now has approximately 4,000 members and approximately 30 local chapters in the United States. The Division of Psychoanalysis holds two annual meetings or conferences and offers continuing education in theory, research and clinical technique, as do their affiliated local chapters. The European Psychoanalytical Federation (EPF) is the organization which consolidates all European psychoanalytic societies. This organization is affiliated with the IPA. In 2002, there were approximately 3,900 individual members in 22 countries, speaking 18 different languages. There are also 25 psychoanalytic societies. The American Association of Psychoanalysis in Clinical Social Work (AAPCSW) was established by Crayton Rowe in 1980 as a division of the Federation of Clinical Societies of Social Work and became an independent entity in 1990. Until 2007 it was known as the National Membership Committee on Psychoanalysis. The organization was founded because although social workers represented the larger number of people who were training to be psychoanalysts, they were underrepresented as supervisors and teachers at the institutes they attended. AAPCSW now has over 1000 members and has over 20 chapters. It holds a bi-annual national conference and numerous annual local conferences. Experiences of psychoanalysts and psychoanalytic psychotherapists and research into infant and child development have led to new insights. Theories have been further developed and the results of empirical research are now more integrated in the psychoanalytic theory. United Kingdom The London Psychoanalytical Society was founded by Ernest Jones on 30 October 1913. After World War I with the expansion of psychoanalysis in the United Kingdom, the Society was reconstituted and named the British Psychoanalytical Society in 1919. Soon after, the Institute of Psychoanalysis was established to administer the Society's activities. These include: the training of psychoanalysts, the development of the theory and practice of psychoanalysis, the provision of treatment through The London Clinic of Psychoanalysis, the publication of books in The New Library of Psychoanalysis and Psychoanalytic Ideas. The Institute of Psychoanalysis also publishes The International Journal of Psychoanalysis, maintains a library, furthers research, and holds public lectures. The society has a Code of Ethics and an Ethical Committee. The society, the institute and the clinic are all located at Byron House in West London. The Society is a constituent society of the International Psychoanalytical Association (IPA) a body with members on all five continents which safeguards professional and ethical practice. The Society is a member of the British Psychoanalytic Council (BPC); the BPC publishes a register of British psychoanalysts and psychoanalytical psychotherapists. All members of the British Psychoanalytic Council are required to undertake continuing professional development, CPD. 
Members of the Society teach and hold posts on other approved psychoanalytic courses, e.g. the British Psychotherapy Foundation, and in academic departments, e.g. University College London. Members of the Society have included: Michael Balint, Wilfred Bion, John Bowlby, Ronald Fairbairn, Anna Freud, Harry Guntrip, Melanie Klein, Donald Meltzer, Joseph J. Sandler, Hanna Segal, J. D. Sutherland and Donald Winnicott. The Institute of Psychoanalysis is the foremost publisher of psychoanalytic literature. The 24-volume Standard Edition of the Complete Psychological Works of Sigmund Freud was conceived, translated, and produced under the direction of the British Psychoanalytical Society. The Society, in conjunction with Random House, will soon publish a new, revised and expanded Standard Edition. With the New Library of Psychoanalysis the Institute continues to publish the books of leading theorists and practitioners. The International Journal of Psychoanalysis is published by the Institute of Psychoanalysis. For over 100 years, it has had one of the largest circulations of any psychoanalytic journal.
Psychoanalytic psychotherapy
There are different forms of psychoanalysis and psychotherapies in which psychoanalytic thinking is practiced. Besides classical psychoanalysis there is, for example, psychoanalytic psychotherapy, a therapeutic approach which widens "the accessibility of psychoanalytic theory and clinical practices that had evolved over 100 plus years to a larger number of individuals." Other examples of well-known therapies which also use insights of psychoanalysis are mentalization-based treatment (MBT) and transference-focused psychotherapy (TFP). There is also a continuing influence of psychoanalytic thinking in mental health care and psychiatric care.
Research
Over a hundred years of case reports and studies in the journal Modern Psychoanalysis, the Psychoanalytic Quarterly, the International Journal of Psychoanalysis and the Journal of the American Psychoanalytic Association have analyzed the efficacy of analysis in cases of neurosis and character or personality problems. Psychoanalysis modified by object relations techniques has been shown to be effective in many cases of ingrained problems of intimacy and relationship (cf. the many books of Otto Kernberg). Psychoanalytic treatment, in other situations, may run from about a year to many years, depending on the severity and complexity of the pathology. Psychoanalytic theory has, from its inception, been the subject of criticism and controversy. Freud remarked on this early in his career, when other physicians in Vienna ostracized him for his findings that hysterical conversion symptoms were not limited to women. Challenges to analytic theory began with Otto Rank and Alfred Adler (turn of the 20th century), continued with behaviorists (e.g. Wolpe) into the 1940s and '50s, and have persisted (e.g. Miller). Criticisms come from those who object to the notion that there are mechanisms, thoughts or feelings in the mind that could be unconscious. Criticisms also have been leveled against the idea of "infantile sexuality" (the recognition that children between ages two and six imagine things about procreation). Criticisms of theory have led to variations in analytic theories, such as the work of Ronald Fairbairn, Michael Balint, and John Bowlby. In the past 30 years or so, the criticisms have centered on the issue of empirical verification, as it has proved difficult to substantiate the efficacy of psychoanalytic treatments in a psychiatric context.
Psychoanalysis has been used as a research tool into childhood development (cf. the journal The Psychoanalytic Study of the Child), and has developed into a flexible, effective treatment for certain mental disturbances. In the 1960s, Freud's early (1905) thoughts on the childhood development of female sexuality were challenged; this challenge led to major research in the 1970s and 80s, and then to a reformulation of female sexual development that corrected some of Freud's concepts. Also see the various works of Eleanor Galenson, Nancy Chodorow, Karen Horney, Françoise Dolto, Melanie Klein, Selma Fraiberg, and others. Most recently, psychoanalytic researchers who have integrated attachment theory into their work, including Alicia Lieberman and Daniel Schechter, have explored the role of parental traumatization in the development of young children's mental representations of self and others. Effectiveness The psychoanalytic profession has been resistant to researching efficacy. Evaluations of effectiveness based on the interpretation of the therapist alone cannot be proven. Research results Numerous studies have shown that the efficacy of therapy is primarily related to the quality of the therapeutic alliance. Meta-analyses in 2019 found psychoanalytic and psychodynamic therapy effective at improving psychosocial wellbeing and reducing suicidality and self-harm behavior in patients at six-month follow-up. There has also been evidence for psychoanalytic psychotherapy as an effective treatment for Attention Deficit Hyperactivity Disorder (ADHD) and Conduct Disorder when compared with behavioral management treatments with or without methylphenidate. Meta-analyses in 2012 and 2013 found support for the efficacy of psychoanalytic therapy. Other meta-analyses published in recent years showed psychoanalysis and psychodynamic therapy to be effective, with outcomes comparable to or greater than other kinds of psychotherapy or antidepressant drugs, but these meta-analyses have been subjected to various criticisms. In particular, the inclusion of pre/post studies rather than randomized controlled trials, and the absence of adequate comparisons with control treatments, are serious limitations in interpreting the results. A French 2004 report from INSERM concluded that psychoanalytic therapy is less effective than other psychotherapies (including cognitive behavioral therapy) for certain diseases. In 2011, the American Psychological Association reviewed 103 RCT comparisons between psychodynamic treatment and a non-dynamic competitor, which had been published between 1974 and 2010, and among which 63 were deemed of adequate quality. Out of 39 comparisons with an active competitor, they found that 6 psychodynamic treatments were superior, 5 were inferior, and 28 showed no difference. The study found these results promising but stressed the necessity of further good-quality trials replicating the positive results for specific disorders. Meta-analyses of Short Term Psychodynamic Psychotherapy (STPP) have found effect sizes (Cohen's d) ranging from 0.34 to 0.71 compared to no treatment, and STPP was found to be slightly better than other therapies at follow-up. Other reviews have found an effect size of 0.78 to 0.91 for somatoform disorders compared to no treatment and 0.69 for treating depression. A 2012 Harvard Review of Psychiatry meta-analysis of Intensive Short-Term Dynamic Psychotherapy (ISTDP) found effect sizes ranging from 0.84 for interpersonal problems to 1.51 for depression. 
Overall ISTDP had an effect size of 1.18 compared to no treatment. A meta-analysis of Long Term Psychodynamic Psychotherapy in 2012 found an overall effect size of 0.33, which is modest. This study concluded that the recovery rate following LTPP was equal to that of control treatments, including treatment as usual, and found the evidence for the effectiveness of LTPP to be limited and at best conflicting. Others have found effect sizes of 0.44–0.68. According to a 2004 French review conducted by INSERM, psychoanalysis was presumed or proven effective at treating panic disorder, post-traumatic stress, and personality disorders, but the review did not find evidence of its effectiveness in treating schizophrenia, obsessive-compulsive disorder, specific phobia, bulimia and anorexia. A 2001 systematic review of the medical literature by the Cochrane Collaboration concluded that no data exist demonstrating that psychodynamic psychotherapy is effective in treating schizophrenia and severe mental illness, and cautioned that medication should always be used alongside any type of talk therapy in schizophrenia cases. A French review from 2004 found the same. The Schizophrenia Patient Outcomes Research Team advises against the use of psychodynamic therapy in cases of schizophrenia, arguing that more trials are necessary to verify its effectiveness. Criticism Both Freud and psychoanalysis have been criticized in extreme terms. Exchanges between critics and defenders of psychoanalysis have often been so heated that they have come to be characterized as the Freud Wars. Linguist Noam Chomsky has criticized psychoanalysis for lacking a scientific basis. Evolutionary biologist Stephen Jay Gould considered psychoanalysis influenced by pseudoscientific theories such as recapitulation theory. Psychologists Hans Eysenck, John F. Kihlstrom, and others have also criticized the field as pseudoscience. Debate over status as scientific The theoretical foundations of psychoanalysis lie in the same philosophical currents that lead to interpretive phenomenology rather than in those that lead to scientific positivism, making the theory largely incompatible with positivist approaches to the study of the mind. Early critics of psychoanalysis believed that its theories were based too little on quantitative and experimental research, and too much on the clinical case study method. Philosopher Frank Cioffi cites false claims of a sound scientific verification of the theory and its elements as the strongest basis for classifying the work of Freud and his school as pseudoscience. Karl Popper argued that psychoanalysis is a pseudoscience because its claims are not testable and cannot be refuted; that is, they are not falsifiable. In addition, Imre Lakatos wrote that "Freudians have been nonplussed by Popper's basic challenge concerning scientific honesty. Indeed, they have refused to specify experimental conditions under which they would give up their basic assumptions." In Sexual Desire (1986), philosopher Roger Scruton rejects Popper's arguments, pointing to the theory of repression as an example of a Freudian theory that does have testable consequences. Scruton nevertheless concluded that psychoanalysis is not genuinely scientific, on the grounds that it involves an unacceptable dependence on metaphor. The philosopher and physicist Mario Bunge argued that psychoanalysis is a pseudoscience because it violates the ontology and methodology inherent to science. 
According to Bunge, most psychoanalytic theories are either untestable or unsupported by evidence. Cognitive scientists, in particular, have also weighed in, among them Martin Seligman, a prominent academic in positive psychology. Adolf Grünbaum argues in Validation in the Clinical Theory of Psychoanalysis (1993) that psychoanalytically based theories are falsifiable, but that the causal claims of psychoanalysis are unsupported by the available clinical evidence. Historian Henri Ellenberger, who researched the history of Freud, Jung, Adler, and Janet while writing his book The Discovery of the Unconscious: The History and Evolution of Dynamic Psychiatry, argued that psychoanalysis was not scientific on the grounds of both its methodology and social structure. Freud Some have accused Freud of fabrication, most famously in the case of Anna O. Others have speculated that patients had conditions that are now easily identifiable and unrelated to psychoanalysis; for instance, Anna O. is thought to have had an organic impairment such as tuberculous meningitis or temporal lobe epilepsy, rather than Freud's diagnosis of hysteria. Henri Ellenberger and Frank Sulloway argue that Freud and his followers created an inaccurate legend of Freud to popularize psychoanalysis. Mikkel Borch-Jacobsen and Sonu Shamdasani argue that this legend has been adapted to different times and situations. Isabelle Stengers states that psychoanalytic circles have tried to stop historians from accessing documents about the life of Freud. Witch doctors Richard Feynman wrote off psychoanalysts as mere "witch doctors". Likewise, psychiatrist E. Fuller Torrey, in Witchdoctors and Psychiatrists (1986), agreed that psychoanalytic theories have no more scientific basis than the theories of traditional native healers, "witchdoctors" or modern "cult" alternatives such as EST. Psychologist Alice Miller charged psychoanalysis with being similar to the poisonous pedagogies she described in her book For Your Own Good. She scrutinized and rejected the validity of Freud's drive theory, including the Oedipus complex, which, according to her and Jeffrey Masson, blames the child for the abusive sexual behavior of adults. Psychologist Joel Kupfersmid investigated the validity of the Oedipus complex, examining its nature and origins. He concluded that there is little evidence to support the existence of the Oedipus complex. Critical perspectives Contemporary French philosophers Michel Foucault and Gilles Deleuze asserted that the institution of psychoanalysis has become a center of power, and that its confessional techniques resemble those of the Christian religion. French psychoanalyst Jacques Lacan criticized the emphasis of some American and British psychoanalytical traditions on what he viewed as the suggestion of imaginary "causes" for symptoms, and recommended a return to Freud. Belgian psycholinguist and psychoanalyst Luce Irigaray also criticized psychoanalysis, employing Jacques Derrida's concept of phallogocentrism to describe the exclusion of the woman from both Freudian and Lacanian psychoanalytical theories. Together with Deleuze, the French psychoanalyst and psychiatrist Félix Guattari criticized the Oedipal and schizophrenic power structure of psychoanalysis and its connivance with capitalism in Anti-Oedipus (1972) and A Thousand Plateaus (1980), the two volumes of their theoretical work Capitalism and Schizophrenia. 
Deleuze and Guattari in Anti-Oedipus take the cases of Gérard Mendel, Bela Grunberger, and Janine Chasseguet-Smirgel, prominent members of the most respected psychoanalytical associations (including the IPA), to suggest that, throughout its history, psychoanalysis had always enthusiastically embraced a police state. Freudian theory A survey of scientific research suggested that while personality traits corresponding to Freud's oral, anal, Oedipal, and genital phases can be observed, they do not necessarily manifest as stages in the development of children. These studies also have not confirmed that such traits in adults result from childhood experiences. However, these stages should not be viewed as crucial to modern psychoanalysis. What is crucial to modern psychoanalytic theory and practice is the power of the unconscious and the transference phenomenon. The idea of the "unconscious" is contested because human behavior can be observed while human mental activity has to be inferred. However, the unconscious is now a popular topic of study in the fields of experimental and social psychology (e.g., implicit attitude measures, fMRI and PET scans, and other indirect tests). The idea of the unconscious, and the transference phenomenon, have been widely researched and, it is claimed, validated in the fields of cognitive psychology and social psychology, though a Freudian interpretation of unconscious mental activity is not held by the majority of cognitive psychologists. Recent developments in neuroscience have resulted in one side arguing that it has provided a biological basis for unconscious emotional processing in line with psychoanalytic theory (i.e., neuropsychoanalysis), while the other side argues that such findings make psychoanalytic theory obsolete and irrelevant. Shlomo Kalo explains that the scientific materialism that flourished in the 19th century severely harmed religion and rejected whatever was called spiritual. The institution of the confessional priest in particular was badly damaged. The void that this institution left behind was swiftly occupied by the newborn psychoanalysis. In his writings, Kalo claims that psychoanalysis's basic approach is erroneous, resting on the mistaken assumptions that happiness is unreachable and that the natural desire of a human being is to exploit his fellow men for his own pleasure and benefit. Jacques Derrida incorporated aspects of psychoanalytic theory into his theory of deconstruction in order to question what he called the 'metaphysics of presence'. Derrida also turns some of these ideas against Freud, to reveal tensions and contradictions in his work. For example, although Freud defines religion and metaphysics as displacements of the identification with the father in the resolution of the Oedipal complex, Derrida (1987) insists that the prominence of the father in Freud's own analysis is itself indebted to the prominence given to the father in Western metaphysics and theology since Plato. See also Glossary of psychoanalysis List of schools of psychoanalysis Psychoanalytic sociology Training analysis Notes References Further reading Introductions Brenner, Charles (1954). An Elementary Textbook of Psychoanalysis. Elliott, Anthony (2002). Psychoanalytic Theory: An Introduction (2nd ed.). Duke University Press. --An introduction that explains psychoanalytic theory with interpretations of major theorists. Fine, Reuben (1990). The History of Psychoanalysis (expanded ed.). Northvale: Jason Aronson. Samuel, Lawrence R. 
(2013). Shrink: A Cultural History of Psychoanalysis in America. University of Nebraska Press. 253 pp. Freud, Sigmund (2014) [1926]. "Psychoanalysis." Encyclopædia Britannica McWilliams Nancy. Psychoanalytic psychotherapy. Practice Guide Reference works de Mijolla, Alain, ed. (2005). International dictionary of psychoanalysis [enhanced American version] 1,2,&3. Detroit: Thomson/Gale. Laplanche, Jean, and J. B. Pontalis (1974). "The Language of Psycho-Analysis". W. W. Norton & Company. Freud, Sigmund (1940). An Outline of Psychoanalysis. ePenguin.;General Edelson, Marshall (1984). Hypothesis and Evidence in Psychoanalysis. Chicago: Chicago University Press. Etchegoyen, Horacio (2005). The Fundamentals of Psychoanalytic Technique (new ed.). Karnac Books. Gellner, Ernest. The Psychoanalytic Movement: The Cunning of Unreason, . A critical view of Freudian theory. Green, André (2005). "Psychoanalysis: A Paradigm For Clinical Thinking". Free Association Books. Irigaray, Luce (2004). Key Writings. Continuum. Jacobson, Edith (1976). Depression; Comparative Studies of Normal, Neurotic, and Psychotic Conditions. International Universities Press. Kernberg, Otto (1993). Severe Personality Disorders: Psychotherapeutic Strategies. Yale University Press. Kohut, Heinz (2000). Analysis of the Self: Systematic Approach to Treatment of Narcissistic Personality Disorders. International Universities Press. Kovacevic, Filip (2007). Liberating Oedipus? Psychoanalysis as Critical Theory. Lexington Books. Kristeva, Julia (1986). The Kristeva Reader, edited by T. Moi. Columbia University Press. Meltzer, Donald (1983). Dream-Life: A Re-Examination of the Psycho-Analytical Theory and Technique. Karnac Books. — (1998). The Kleinian Development (new ed.). Karnac Books; reprint: Mitchell, S. A., and M. J. Black (1995). Freud and beyond: a history of modern psychoanalytic thought. New York: Basic Books. pp. xviii–xx. Pollock, Griselda (2006). "Beyond Oedipus. Feminist Thought, Psychoanalysis, and Mythical Figurations of the Feminine." In Laughing with Medusa, edited by V. Zajko and M. Leonard. Oxford: Oxford University Press. Spielrein, Sabina (1993). Destruction as cause of becoming. Stoller, Robert (1993). Presentations of Gender. Yale University Press. Stolorow, Robert, George Atwood, and Donna Orange (2002). Worlds of Experience: Interweaving Philosophical and Clinical Dimensions in Psychoanalysis. New York: Basic Books. Spitz, René (2006). The First Year of Life: Psychoanalytic Study of Normal and Deviant Development of Object Relations. International Universities Press. Tähkä, Veikko (1993). Mind and Its Treatment: A Psychoanalytic Approach. Madison, CT: International Universities Press. Analyses, discussions and critiques Aziz, Robert (2007). The Syndetic Paradigm: The Untrodden Path Beyond Freud and Jung, Albany: State University of New York Press. Borch-Jacobsen, Mikkel (1991). Lacan: The Absolute Master, Stanford: Stanford University Press. Borch-Jacobsen, Mikkel (1996). Remembering Anna O: A Century of Mystification, London: Routledge. Borch-Jacobsen, Mikkel and Shamdasani, Sonu (2012). The Freud Files: An Inquiry into the History of Psychoanalysis, Cambridge University Press. . Burnham, John, ed. (2012). After Freud Left: A Century of Psychoanalysis in America, University of Chicago Press. Cioffi, Frank. (1998). Freud and the Question of Pseudoscience, Open Court Publishing Company. Crews, Frederick (1986). Skeptical Engagements, New York: Oxford University Press. . 
Part I of this volume, entitled "The Freudian Temptation," includes five essays critical of psychoanalysis written between 1975 and 1986. Crews, Frederick (1995). The Memory Wars: Freud's Legacy in Dispute, New York: New York Review of Books. Crews, Frederick, ed. (1998). Unauthorized Freud: Doubters Confront a Legend, New York: Viking. Crews, Frederick (2017). Freud: The Making of an Illusion, Metropolitan Books. Dufresne, Todd (2000). Tales From the Freudian Crypt: The Death Drive in Text and Context, Stanford: Stanford University Press. — (2007). Against Freud: Critics Talk Back, Stanford: Stanford University Press. Erwin, Edward (1996), A Final Accounting: Philosophical and Empirical Issues in Freudian Psychology. Esterson, Allen (1993). Seductive Mirage: An Exploration of the Work of Sigmund Freud. Chicago: Open Court. Fisher, Seymour, and Roger P. Greenberg (1977). The Scientific Credibility of Freud's Theories and Therapy. New York: Basic Books. — (1996). Freud Scientifically Reappraised: Testing the Theories and Therapy. New York: John Wiley. Gellner, Ernest (1993), The Psychoanalytic Movement: The Cunning of Unreason. A critical view of Freudian theory. — (1985). The Foundations of Psychoanalysis: A Philosophical Critique. Macmillan, Malcolm (1997), Freud Evaluated: The Completed Arc. Roustang, Francois (1982). Dire Mastery: Discipleship from Freud to Lacan. Baltimore: Johns Hopkins University Press. Webster, Richard. (1995). Why Freud Was Wrong: Sin, Science, and Psychoanalysis, New York: Basic Books, HarperCollins. Wollheim, Richard, editor. (1974). Freud: A Collection of Critical Essays. New York: Anchor Books. Responses to critiques Köhler, Thomas 1996: Anti-Freud-Literatur von ihren Anfängen bis heute. Zur wissenschaftlichen Fundierung von Psychoanalyse-Kritik. Stuttgart: Kohlhammer Verlag. Ollinheimo, Ari — Vuorinen, Risto (1999): Metapsychology and the Suggestion Argument: A Reply to Grünbaum's Critique of Psychoanalysis. Commentationes Scientiarum Socialium, 53. Helsinki: Finnish Academy of Science and Letters. Robinson, Paul (1993). Freud and his Critics. Berkeley & Los Angeles: University of California Press. Gomez, Lavinia: The Freud Wars: An Introduction to the Philosophy of Psychoanalysis. Routledge, 2005. Review: Psychodynamic Practice 14(1):108–111. Feb., 2008. External links International Psychoanalytical Association (IPA) – world's primary regulatory body for psychoanalysis, founded by Sigmund Freud (archived 18 January 1998) Psychoanalysis – Division 39 – American Psychological Association (APA)
23587
https://en.wikipedia.org/wiki/Peking%20%28disambiguation%29
Peking (disambiguation)
Peking is an alternate and mostly obsolete romanization of Beijing, the capital city of the People's Republic of China. Peking may also refer to: Peking (ship), a German square-rigged sailing ship launched in 1911; 2045 Peking, an asteroid named for the city; a local nickname of the Swedish town Norrköping. See also Pekingese Beijing (disambiguation) Beijingese (disambiguation) Pekin (disambiguation) Pekin duck (disambiguation)
23588
https://en.wikipedia.org/wiki/Pinyin
Pinyin
Hanyu Pinyin, or simply pinyin, is the most common romanization system for Standard Chinese. In official documents, it is referred to as the Chinese Phonetic Alphabet. Hanyu literally means 'Han language'—that is, the Chinese language—while pinyin literally means 'spelled sounds'. Pinyin is the official system used in China, Singapore, Taiwan, and by the United Nations. Its use has become common for transliterating Standard Chinese in most regions, though it is less ubiquitous in Taiwan. It is used to teach Standard Chinese, normally written with Chinese characters, to students already familiar with the Latin alphabet. Pinyin is also used by various input methods on computers and to categorize entries in some Chinese dictionaries. In pinyin, each Chinese syllable is spelled in terms of an optional initial and a final, each of which is represented by one or more letters. Initials are initial consonants, whereas finals are all possible combinations of medials (semivowels coming before the vowel), a nucleus vowel, and coda (final vowel or consonant). Diacritics are used to indicate the four tones found in Standard Chinese, though these are often omitted in various contexts, such as when spelling Chinese names in non-Chinese texts. Hanyu Pinyin was developed in the 1950s by a group of Chinese linguists including Wang Li, Lu Zhiwei, Li Jinxi, Luo Changpei and Zhou Youguang, who has been called the "father of pinyin". They based their work in part on earlier romanization systems. The system was originally promulgated at the Fifth Session of the 1st National People's Congress in 1958, and has seen several rounds of revisions since. The International Organization for Standardization propagated Hanyu Pinyin as ISO 7098 in 1982, and the United Nations began using it in 1986. Taiwan adopted Hanyu Pinyin as its official romanization system in 2009, replacing Tongyong Pinyin. History Background Matteo Ricci, a Jesuit missionary in China, wrote the first book that used the Latin alphabet to write Chinese, entitled Xizi Qiji and published in Beijing in 1605. Twenty years later, fellow Jesuit Nicolas Trigault published a similar work in Hangzhou. Neither book had any influence among the contemporary Chinese literati, and the romanizations they introduced were useful primarily for Westerners. During the late Qing, the reformer Song Shu (1862–1910) proposed that China adopt a phonetic writing system. A student of the scholars Yu Yue and Zhang Taiyan, Song had observed the effect of the kana syllabaries and Western learning during his visits to Japan. While Song did not himself propose a transliteration system for Chinese, his discussion ultimately led to a proliferation of proposed schemes. The Wade–Giles system was produced by Thomas Wade in 1859, and further improved by Herbert Giles in his Chinese–English Dictionary (1892). It was popular, and was used in English-language publications outside China until 1979. In 1943, the US military tapped Yale University to develop another romanization system for Mandarin Chinese intended for pilots flying over China—much more than previous systems, the result appears very similar to modern Hanyu Pinyin. Development Hanyu Pinyin was designed by a group of mostly Chinese linguists, including Wang Li, Lu Zhiwei, Li Jinxi, Luo Changpei, as well as Zhou Youguang (1906–2017), an economist by trade, as part of a Chinese government project in the 1950s. 
Zhou, often called "the father of pinyin", worked as a banker in New York when he decided to return to China to help rebuild the country after the People's Republic was established. Initially, Mao Zedong considered the development of a new writing system for Chinese that only used the Latin alphabet, but during his first official visit to the Soviet Union in 1949, Joseph Stalin convinced him to maintain the existing system. Zhou became an economics professor in Shanghai, and when the Ministry of Education created the Committee for the Reform of the Chinese Written Language in 1955, Premier Zhou Enlai assigned him the task of developing a new romanization system, despite the fact that he was not a linguist by trade. Hanyu Pinyin incorporated different aspects from existing systems, including Gwoyeu Romatzyh (1928), Latinxua Sin Wenz (1931), and the diacritics from bopomofo (1918). "I'm not the father of pinyin", Zhou said years later; "I'm the son of pinyin. It's [the result of] a long tradition from the later years of the Qing dynasty down to today. But we restudied the problem and revisited it and made it more perfect." An initial draft was authored in January 1956 by Ye Laishi, Lu Zhiwei and Zhou Youguang. A revised Pinyin scheme was proposed by Wang Li, Lu Zhiwei and Li Jinxi, and became the main focus of discussion among the group of Chinese linguists in June 1956, forming the basis of Pinyin standard later after incorporating a wide range of feedback and further revisions. The first edition of Hanyu Pinyin was approved and officially adopted at the Fifth Session of the 1st National People's Congress on 11 February 1958. It was then introduced to primary schools as a way to teach Standard Chinese pronunciation and used to improve the literacy rate among adults. Despite its formal promulgation, pinyin did not become widely used until after the tumult of the Cultural Revolution. In the 1980s, students were trained in pinyin from an early age, learning it in tandem with characters or even before. During the height of the Cold War the use of pinyin system over Wade–Giles and Yale romanizations outside of China was regarded as a political statement or identification with the mainland Chinese government. Beginning in the early 1980s, Western publications addressing mainland China began using the Hanyu Pinyin romanization system instead of earlier romanization systems; this change followed the Joint Communiqué on the Establishment of Diplomatic Relations between the United States and China in 1979. In 2001, the Chinese government issued the National Common Language Law, providing a legal basis for applying pinyin. The current specification of the orthography is GB/T 16159–2012. Syllables Chinese phonology is generally described in terms of sound pairs of two initials () and finals (). This is distinct from the concept of consonant and vowel sounds as basic units in traditional (and most other phonetic systems used to describe the Chinese language). Every syllable in Standard Chinese can be described as a pair of one initial and one final, except for the special syllable er or when a trailing -r is considered part of a syllable (a phenomenon known as erhua). The latter case, though a common practice in some sub-dialects, is rarely used in official publications. Even though most initials contain a consonant, finals are not always simple vowels, especially in compound finals (), i.e. when a "medial" is placed in front of the final. 
For example, the medials and are pronounced with such tight openings at the beginning of a final that some native Chinese speakers (especially when singing) pronounce , officially pronounced , as and , officially pronounced , as or . Often these medials are treated as separate from the finals rather than as part of them; this convention is followed in the chart of finals below. Initials The conventional lexicographical order derived from bopomofo is: In each cell below, the pinyin letters assigned to each initial are accompanied by their phonetic realizations in brackets, notated according to the International Phonetic Alphabet. Finals In each cell below, the first line indicates the International Phonetic Alphabet (IPA) transcription, the second indicates pinyin for a standalone (no-initial) form, and the third indicates pinyin for a combination with an initial. Other than finals modified by an -r, which are omitted, the following is an exhaustive table of all possible finals. The only syllable-final consonants in Standard Chinese are -n, -ng, and -r, the last of which is attached as a grammatical suffix. A Chinese syllable ending with any other consonant either is from a non-Mandarin language (a southern Chinese language such as Cantonese, reflecting final consonants in Old Chinese), or indicates the use of a non-pinyin romanization system, such as one that uses final consonants to indicate tones. Technically, i, u, ü without a following vowel are finals, not medials, and therefore take the tone marks, but they are more concisely displayed as above. In addition, ê () and syllabic nasals m (, ), n (, ), ng (, ) are used as interjections or in neologisms; for example, pinyin defines the names of several pinyin letters using -ê finals. According to the Scheme for the Chinese Phonetic Alphabet, ng can be abbreviated with the shorthand ŋ. However, this shorthand is rarely used due to difficulty of entering it on computers. The sound An umlaut is added to when it occurs after the initials and when necessary in order to represent the sound . This is necessary in order to distinguish the front high rounded vowel in (e.g. ) from the back high rounded vowel in (e.g. ). Tonal markers are placed above the umlaut, as in . However, the ü is not used in the other contexts where it could represent a front high rounded vowel, namely after the letters j, q, x, and y. For example, the sound of the word for is transcribed in pinyin simply as , not as yǘ. This practice is opposed to Wade–Giles, which always uses ü, and Tongyong Pinyin, which always uses yu. Whereas Wade–Giles needs the umlaut to distinguish between chü (pinyin ) and chu (pinyin ), this ambiguity does not arise with pinyin, so the more convenient form ju is used instead of jü. Genuine ambiguities only happen with nu/nü and lu/lü, which are then distinguished by an umlaut. Many fonts or output methods do not support an umlaut for ü or cannot place tone marks on top of ü. Likewise, using ü in input methods is difficult because it is not present as a simple key on many keyboard layouts. For these reasons v is sometimes used instead by convention. For example, it is common for cellphones to use v instead of ü. Additionally, some stores in China use v instead of ü in the transliteration of their names. The drawback is a lack of precomposed characters and limited font support for combining accents on the letter v, (). 
This also presents a problem in transcribing names for use on passports, affecting people with names that contain the sound lǚ or nǚ, particularly people with the surname Lǚ, a fairly common surname. Previously, the practice varied among different passport issuing offices, with some transcribing the sounds as "LV" and "NV" while others used "LU" and "NU". On 10 July 2012, the Ministry of Public Security standardized the practice to use "LYU" and "NYU" in passports. Although nüe written as nue and lüe written as lue are not ambiguous, nue and lue are not correct according to the rules; nüe and lüe should be used instead. However, some Chinese input methods support both nve/lve (typing v for ü) and nue/lue. Tones The pinyin system uses four diacritics to mark the four main tones of Mandarin: ā, á, ǎ, and à. There is no symbol or diacritic for the neutral tone: a. The diacritic is placed over the letter that represents the syllable nucleus, unless that letter is missing. Tone marks are part of the Hanyu Pinyin spelling and do not appear in Chinese characters; they are written on the finals of the syllable. If the tone mark is written over an i, then it replaces the tittle. The first tone (flat or high-level tone) is represented by a macron added to the pinyin vowel: ā ē ê̄ ī ō ū ǖ Ā Ē Ê̄ Ī Ō Ū Ǖ The second tone (rising or high-rising tone) is denoted by an acute accent: á é ế í ó ú ǘ Á É Ế Í Ó Ú Ǘ The third tone (falling-rising or low tone) is marked by a caron: ǎ ě ê̌ ǐ ǒ ǔ ǚ Ǎ Ě Ê̌ Ǐ Ǒ Ǔ Ǚ The fourth tone (falling or high-falling tone) is represented by a grave accent: à è ề ì ò ù ǜ À È Ề Ì Ò Ù Ǜ The fifth tone (neutral tone) is represented by a normal vowel without any accent mark: a e ê i o u ü A E Ê I O U Ü In dictionaries, the neutral tone may be indicated by a dot preceding the syllable. When a neutral tone syllable has an alternative pronunciation in another tone, a combination of tone marks may be used. Numbers Before the advent of computers, many typewriter fonts did not contain vowels with macron or caron diacritics. Tones were thus represented by placing a tone number at the end of individual syllables. Each tone can be denoted with its numeral in the order listed above. The neutral tone can either be denoted with no numeral, with 0, or with 5. Placement and omission Briefly, tone marks should always be placed in the order a, o, e, i, u, ü, with the only exception being iu, where the tone mark is placed on the u instead. Pinyin tone marks appear primarily above the syllable nucleus—e.g. as in the syllable kuai, where k is the initial, u the medial, a the nucleus, and i the coda. There is an exception for syllabic nasals, where the nucleus of the syllable is a consonant: there, the diacritic will be carried by a written dummy vowel. When the nucleus is written e or o, and there is both a medial and a coda, the nucleus may be dropped from writing. In this case, when the coda is a consonant n or ng, the only vowel left is the medial i, u, or ü, and so this takes the diacritic. However, when the coda is a vowel, it is the coda rather than the medial which takes the diacritic in the absence of a written nucleus. This occurs with syllables ending in -ui (contracted from uei) and in -iu (contracted from iou). 
That is, in the absence of a written nucleus the finals have priority for receiving the tone marker, as long as they are vowels; if not, the medial takes the diacritic. An algorithm to find the correct vowel letter (when there is more than one) is as follows: if there is an a or an e, it will take the tone mark; if there is an ou, then the o takes the tone mark; otherwise, the second vowel takes the tone mark. Worded differently: if there is an a, e, or o, it will take the tone mark (in the case of ao, the mark goes on the a); otherwise, the vowels are iu or ui, in which case the second vowel takes the tone mark. Tone sandhi Tone sandhi is not ordinarily reflected in pinyin spelling. Spacing, capitalization, and punctuation Standard Chinese has many polysyllabic words. As in other writing systems using the Latin alphabet, spacing in pinyin is officially based on word boundaries. However, there are often ambiguities in partitioning a word. The Basic Rules of the Chinese Phonetic Alphabet Orthography were put into effect in 1988 by the National Educational and National Language commissions. These rules became a GB recommendation in 1996, and were last updated in 2012. In practice, however, published materials in China now often space pinyin syllable by syllable. According to Victor H. Mair, this practice became widespread after the Script Reform Committee, previously under direct control of the State Council, had its power greatly weakened in 1985 when it was renamed the State Language Commission and placed under the Ministry of Education. Mair claims that proponents of Chinese characters in the educational bureaucracy "became alarmed that word-based pinyin was becoming a de facto alternative to Chinese characters as a script for writing Mandarin and demanded that all pinyin syllables be written separately." Comparison with other orthographies Pinyin superseded older romanization systems such as Wade–Giles and postal romanization, and replaced bopomofo as the method of Chinese phonetic instruction in mainland China. The ISO adopted pinyin as the standard romanization for modern Chinese in 1982 (ISO 7098:1982, superseded by ISO 7098:2015). The United Nations followed suit in 1986. It has also been accepted by the government of Singapore, the United States Library of Congress, the American Library Association, and many other international institutions. Pinyin assigns some Latin letters sound values which are quite different from those of most languages. This has drawn some criticism, as it may lead to confusion when uninformed speakers apply native or English pronunciation assumptions to words. However, this problem is not limited to pinyin, since many languages that use the Latin alphabet natively also assign different values to the same letters. A recent study on Chinese writing and literacy concluded, "By and large, pinyin represents the Chinese sounds better than the Wade–Giles system, and does so with fewer extra marks." As pinyin is a phonetic writing system for modern Standard Chinese, it is not designed to replace characters for writing Literary Chinese, the standard written language prior to the early 1900s. In particular, Chinese characters retain semantic cues that help distinguish differently pronounced words in the ancient classical language that are now homophones in Mandarin. 
Thus, Chinese characters remain indispensable for recording and transmitting the corpus of Chinese writing from the past. Pinyin is not designed to transcribe varieties other than Standard Chinese, which is based on the phonological system of Beijing Mandarin. Other romanization schemes have been devised to transcribe those other Chinese varieties, such as Jyutping for Cantonese and Pe̍h-ōe-jī for Hokkien. Comparison charts Typography and encoding Based on the "Chinese Romanization" section of ISO 7098:2015, pinyin tone marks should use the symbols from Combining Diacritical Marks, as opposed to the Spacing Modifier Letters used in bopomofo. Lowercase letters with tone marks are included in GB 2312 and their uppercase counterparts are included in JIS X 0212; thus Unicode includes all the common accented characters from pinyin. Other punctuation marks and symbols in Chinese use the equivalent symbols in English, as noted in GB 15834. According to GB 16159, all accented letters are required to have both uppercase and lowercase characters as per their normal counterparts. GBK mapped two pinyin characters to Private Use Area code points in Unicode, thus some fonts (e.g. SimSun) that adhere to GBK include both characters in the Private Use Areas, and some input methods (e.g. Sogou Pinyin) also output the Private Use Area code points instead of the original characters. As the superset GB 18030 changed the mappings of these characters, this has caused an issue where input methods and font files use different encoding standards, and thus the input and output of both characters are mixed up. Usage The spelling of Chinese geographical or personal names in pinyin has become the most common way to transcribe them in English. Pinyin has also become the dominant Chinese input method in mainland China, in contrast to Taiwan, where bopomofo is most commonly used. Families outside of Taiwan who speak Mandarin as a mother tongue use pinyin to help children associate characters with spoken words which they already know. Chinese families outside of Taiwan who speak some other language as their mother tongue use the system to teach children Mandarin pronunciation when learning vocabulary in elementary school. Since 1958, pinyin has been actively used in adult education as well, making it easier for formerly illiterate people to continue with self-study after a short period of pinyin literacy instruction. Pinyin has become a tool for many foreigners to learn Mandarin pronunciation, and is used to explain both the grammar and spoken Mandarin, coupled with Chinese characters. Books containing both Chinese characters and pinyin are often used by foreign learners of Chinese. Pinyin's role in teaching pronunciation to foreigners and children is similar in some respects to furigana-based books with hiragana letters written alongside kanji (directly analogous to bopomofo) in Japanese, or fully vocalised texts in Arabic. The tone-marking diacritics are commonly omitted in popular news stories and even in scholarly works, as well as in the traditional Mainland Chinese Braille system, which is similar to pinyin but meant for blind readers. This results in some degree of ambiguity as to which words are being represented. 
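Because the tone-marking diacritics are so often omitted, text is frequently exchanged in the tone-number notation described under Numbers above, and converting it back to the diacritic form is a common small utility. The following is a minimal Python sketch of such a conversion, written for this article rather than taken from any standard tool: it applies the placement rule given above (a or e takes the mark; in ou the o takes it; otherwise the last vowel), accepts the v-for-ü input convention, and uses the combining diacritics mentioned under Typography and encoding, normalizing to precomposed letters where they exist. The handling of syllabic nasals is a simplifying assumption, not part of the official orthography rules.

import unicodedata

# Combining diacritics for tones 1-4 (macron, acute, caron, grave).
TONE_MARKS = {1: "\u0304", 2: "\u0301", 3: "\u030C", 4: "\u0300"}

def number_to_diacritic(syllable: str) -> str:
    # Convert one numbered-pinyin syllable (e.g. "zhong1", "lv4", "ma")
    # to diacritic form, following the placement rule described above.
    syllable = syllable.lower().replace("v", "ü")   # v-for-ü input convention
    tone = 0
    if syllable and syllable[-1].isdigit():
        tone = int(syllable[-1]) % 5                # 0 or 5 means neutral tone
        syllable = syllable[:-1]
    if tone == 0:
        return syllable                             # neutral tone: no mark
    vowels = [i for i, ch in enumerate(syllable) if ch in "aeiouü"]
    if not vowels:
        pos = 0          # syllabic nasal such as m or ng: mark the first letter (simplification)
    elif "a" in syllable:
        pos = syllable.index("a")
    elif "e" in syllable:
        pos = syllable.index("e")
    elif "ou" in syllable:
        pos = syllable.index("o")
    else:
        pos = vowels[-1]                            # e.g. iu -> mark u, ui -> mark i
    marked = syllable[:pos + 1] + TONE_MARKS[tone] + syllable[pos + 1:]
    return unicodedata.normalize("NFC", marked)     # prefer precomposed letters

print(number_to_diacritic("zhong1"), number_to_diacritic("guo2"),
      number_to_diacritic("lv4"), number_to_diacritic("nü3"))
# prints: zhōng guó lǜ nǚ

Under these assumptions, number_to_diacritic("hao3") returns hǎo and number_to_diacritic("shui3") returns shuǐ, matching the worded rule above. 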
Computer input Simple computer systems, able to use only simple character sets for text such as the 7-bit ASCII standard—essentially the 26 Latin letters, 10 digits, and punctuation marks—long provided a convincing argument for using unaccented pinyin instead of diacritical pinyin or Chinese characters. Today, however, most computer systems are able to display characters from Chinese and many other writing systems as well, and have them entered with a Latin keyboard using an input method editor. Alternatively, some touchscreen devices allow users to input characters graphically by writing with a stylus, with concurrent online handwriting recognition. Pinyin with accents can be entered with the use of special keyboard layouts or various other utilities. Sorting techniques Chinese text can be sorted by its pinyin representation, which is often useful for looking up words whose pronunciations are known but whose character forms are not. Chinese characters and words can be sorted for convenient lookup by their pinyin expressions alphabetically, according to the inherited alphabetical order originating with the ancient Phoenicians. Identical syllables are then further sorted by tone number, ascending, with neutral tones placed last. Words of multiple characters can be sorted in two different ways, either per character, as is used in the Xiandai Hanyu Cidian, or by the whole word's string, which is sorted by tone only afterwards. This method is used in the ABC Chinese–English Dictionary. By region Taiwan Between October 2002 and January 2009, Taiwan used Tongyong Pinyin, a domestic modification of Hanyu Pinyin, as its official romanization system. Thereafter, it began to promote the use of Hanyu Pinyin instead. Tongyong Pinyin was designed to romanize varieties spoken on the island in addition to Standard Chinese. The ruling Kuomintang (KMT) party resisted its adoption, preferring the system by then used in mainland China and internationally. Romanization preferences quickly became associated with issues of national identity. Preferences split along party lines: the KMT and its affiliated parties in the Pan-Blue Coalition supported the use of Hanyu Pinyin while the Democratic Progressive Party (DPP) and its allies in the Pan-Green Coalition favored the use of Tongyong Pinyin. Today, many street signs in Taiwan use Tongyong Pinyin or derived romanizations, but some use Hanyu Pinyin–derived romanizations. It is not unusual to see spellings on street signs and buildings derived from the older Wade–Giles, MPS2 and other systems. Attempts to make Hanyu Pinyin standard in Taiwan have had uneven success, with most place and proper names remaining unaffected, including all major cities. Personal names on Taiwanese passports honor the choices of Taiwanese citizens, who can choose Wade–Giles, Hakka, Hoklo, Tongyong, aboriginal, or pinyin. Official use of pinyin is controversial, as when pinyin use for a metro line in 2017 provoked protests, despite government responses that "The romanization used on road signs and at transportation stations is intended for foreigners... Every foreigner learning Mandarin learns Hanyu pinyin, because it is the international standard...The decision has nothing to do with the nation's self-determination or any ideologies, because the key point is to ensure that foreigners can read signs." 
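As an illustration of the sorting techniques described above, the following Python sketch sorts a handful of words by their pinyin readings, alphabetically first and then by tone number with the neutral tone last, comparing syllable by syllable in the per-character style attributed to the Xiandai Hanyu Cidian. The small dictionary of readings is hypothetical sample data supplied by hand for the example; a real application would obtain readings from a pronunciation dictionary.

# Hypothetical sample data: each word is paired with its numbered-pinyin reading.
entries = {
    "西安": ["xi1", "an1"],
    "先": ["xian1"],
    "北京": ["bei3", "jing1"],
    "背景": ["bei4", "jing3"],
    "的": ["de5"],                      # neutral tone
}

def syllable_key(s: str):
    # Alphabetical by letters first, then by tone number; neutral (0/5) sorts last.
    tone = 0
    if s and s[-1].isdigit():
        tone = int(s[-1]) % 5
        s = s[:-1]
    return (s, tone if tone else 6)     # 6 is greater than any real tone

def word_key(syllables):
    # Per-character ordering: compare the words one syllable at a time.
    return [syllable_key(s) for s in syllables]

for word in sorted(entries, key=lambda w: word_key(entries[w])):
    print(word, " ".join(entries[word]))
# Per-syllable comparison puts 西安 (xi1 an1) before 先 (xian1), and tone breaks
# the tie between 北京 (bei3 jing1) and 背景 (bei4 jing3).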
Singapore Singapore implemented Hanyu Pinyin as the official romanization system for Mandarin in the public sector starting in the 1980s, in conjunction with the Speak Mandarin Campaign. Hanyu Pinyin is also used as the romanization system to teach Mandarin Chinese at schools. While adoption has been mostly successful in government communication, placenames, and businesses established in the 1980s and onward, it continues to be unpopular in some areas, most notably for personal names and vocabulary borrowed from other varieties of Chinese already established in the local vernacular. In these situations, romanization continues to be based on the Chinese language variety it originated from, especially the three largest Chinese varieties traditionally spoken in Singapore: Hokkien, Teochew, and Cantonese. Special names In accordance to the Regulation of Phonetic Transcription in Hanyu Pinyin Letters of Place Names in Minority Nationality Languages () promulgated in 1976, place names in non-Han languages like Mongolian, Uyghur, and Tibetan are also officially transcribed using pinyin in a system adopted by the State Administration of Surveying and Mapping and Geographical Names Committee known as SASM/GNC romanization. The pinyin letters (26 Roman letters, plus and ) are used to approximate the non-Han language in question as closely as possible. This results in spellings that are different from both the customary spelling of the place name, and the pinyin spelling of the name in Chinese: See also Combining character Comparison of Chinese transcription systems Cyrillization of Chinese Romanization of Japanese Transcription into Chinese characters Two-cell Chinese Braille Chinese word-segmented writing References Citations Works cited Further reading External links Chinese phonetic alphabet spelling rules for Chinese names—The official standard GB/T 28039–2011 in Chinese. PDF version from the Chinese Ministry of Education HTML version Writing systems introduced in 1958 Chinese words and phrases ISO standards Phonetic alphabets Phonetic guides Ruby characters
23589
https://en.wikipedia.org/wiki/Parable%20of%20the%20Pearl
Parable of the Pearl
The Parable of the Pearl (also called the Pearl of Great Price) is one of the parables of Jesus Christ. It appears in Matthew 13 and illustrates the great value of the Kingdom of Heaven. This is the penultimate parable in Matthew 13, coming just before the Parable of the Dragnet. It immediately follows the Parable of the Hidden Treasure, which has a similar theme. It does not appear in the other synoptic gospels, but a version of this parable does appear in the non-canonical Gospel of Thomas, Saying 76. The parable has been depicted by artists such as Domenico Fetti. The parable is found in Matthew 13:45–46. Interpretation This parable is generally interpreted as illustrating the great value of the Kingdom of Heaven. Theologian E. H. Plumptre, in Anglican bishop Charles Ellicott's Commentary, notes that: "the caprices of luxury in the Roman empire had given a prominence to pearls, as an article of commerce, which they had never had before, and have probably never had since. They, rather than emeralds and sapphires, were the typical instance of all costliest adornments. The story of Cleopatra and the fact that the opening of a new pearl market was one of the alleged motives which led the Emperor Claudius to invade Britain, are indications of the value that was then set on the 'goodly pearls' of the parable." Theologian John Nolland likewise notes that pearls at that time had a greater value than they do today, and the parable thus has a similar theme to its partner, the parable of the hidden treasure. Nolland comments that it shares with that parable the notions of "good fortune and demanding action in attaining the kingdom of heaven", but adds in this case the notion of "diligent seeking". The valuable pearl is the "deal of a lifetime" for the merchant in the story. However, those who do not believe in the kingdom of heaven enough to stake their whole future on it are unworthy of the kingdom. This interpretation of the parable is the inspiration for a number of hymns, including the anonymous Swedish hymn Den Kostliga Pärlan (O That Pearl of Great Price!), which begins: O that Pearl of great price! have you found it? Is the Savior supreme in your love? O consider it well, ere you answer, As you hope for a welcome above. Have you given up all for this Treasure? Have you counted past gains as but loss? Has your trust in yourself and your merits Come to naught before Christ and His cross? A less common interpretation of the parable is that the merchant represents Jesus and the pearl represents the Christian Church, though that reading is problematic, as neither the Christian Church nor Christianity existed until after the death of Jesus, who was himself a Galilean Jew of Nazareth and, according to the Bible, was baptized by John the Baptist. This interpretation would give the parable a similar theme to that of the Parable of the Lost Sheep, the Lost Coin, and the Prodigal Son. Pope Pius XII used the phrase to describe virginity. "Pearl of Great Price" is the title of a selection of Mormon writings, one of the standard works of the Church of Jesus Christ of Latter-day Saints and some other Latter Day Saint denominations. Commentary from the Church Fathers Chrysostom: "The Gospel preaching not only offers manifold gain as a treasure, but is precious as a pearl; wherefore after the parable concerning the treasure, He gives that concerning the pearl. And in preaching, two things are required, namely, to be detached from the business of this life, and to be watchful, which are denoted by this merchantman. 
Truth moreover is one, and not manifold, and for this reason it is one pearl that is said to be found. And as one who is possessed of a pearl, himself indeed knows of his wealth, but is not known to others, ofttimes concealing it in his hand because of its small bulk, so it is in the preaching of the Gospel; they who possess it know that they are rich, the unbelievers, not knowing of this treasure, know not of our wealth. Jerome: "By the goodly pearls may be understood the Law and the Prophets. Hear then Marcion and Manichæus; the good pearls are the Law and the Prophets. One pearl, the most precious of all, is the knowledge of the Saviour and the sacrament of His passion and resurrection, which when the merchantman has found, like Paul the Apostle, he straightway despises all the mysteries of the Law and the Prophets and the old observances in which he had lived blameless, counting them as dung that he may win Christ. (Phil. 3:8.) Not that the finding of a new pearl is the condemnation of the old pearls, but that in comparison of that, all other pearls are worthless." Gregory the Great: "Or by the pearl of price is to be understood the sweetness of the heavenly kingdom, which, he that hath found it, selleth all and buyeth. For he that, as far as is permitted, has had perfect knowledge of the sweetness of the heavenly life, readily leaves all things that he has loved on earth; all that once pleased him among earthly possessions now appears to have lost its beauty, for the splendour of that precious pearl is alone seen in his mind." Augustine: "Or, A man seeking goodly pearls has found one pearl of great price; that is, he who is seeking good men with whom he may live profitably, finds one alone, Christ Jesus, without sin; or, seeking precepts of life, by aid of which he may dwell righteously among men, finds love of his neighbour, in which one rule, the Apostle says, (Rom. 13:9.) are comprehended all things; or, seeking good thoughts, he finds that Word in which all things are contained, In the beginning was the Word. (John 1:1.) which is lustrous with the light of truth, stedfast with the strength of eternity, and throughout like to itself with the beauty of divinity, and when we have penetrated the shell of the flesh, will be confessed as God. But whichever of these three it may be, or if there be anything else that can occur to us, that can be signified under the figure of the one precious pearl, its preciousness is the possession of ourselves, who are not free to possess it unless we despise all things that can be possessed in this world. For having sold our possessions, we receive no other return greater than ourselves, (for while we were involved in such things we were not our own,) that we may again give ourselves for that pearl, not because we are of equal value to that, but because we cannot give anything more." Gospel of Thomas A version of the parable also appears in the Gnostic Gospel of Thomas (Saying 76): This work's version of the parable of the Hidden Treasure appears later (Saying 109), rather than immediately preceding, as in Matthew. However, the mention of a treasure in Saying 76 may reflect a source for the Gospel of Thomas in which the parables were adjacent, so that the original pair of parables has been "broken apart, placed in separate contexts, and expanded in a manner characteristic of folklore." In Gnostic thought the pearl may represent Christ or the true self. 
In the Gnostic Acts of Peter and the Twelve, found with the Gospel of Thomas in the Nag Hammadi library, the travelling pearl merchant Lithargoel is eventually revealed to be Jesus. Depictions There have been several depictions of the New Testament parable in art, including works by Domenico Fetti, John Everett Millais and Jan Luyken. In popular culture In literature The parable is referenced in Nathaniel Hawthorne's novel The Scarlet Letter in Chapter 6: "But she named the infant 'Pearl', as being of great price – purchased with all she had,– her mother's only treasure!" George Herbert's "The Pearl" is a reflection on the parable and the hefty price required of the speaker to follow God. The epigraph cites Matthew 13 directly. Pearl is a late Middle English poem often attributed to the Gawain poet by scholars. The narrator mourns the loss of his daughter, called Pearl. Pearl presents her father with a vision of the New Jerusalem. By the end of the poem, Pearl reveals that she wears the pearl from Christ's parable around her neck and urges her father to keep faith. In 2011, Ann C. Crispin wrote a novel titled Pirates of the Caribbean: The Price of Freedom, which focuses on how Jack Sparrow becomes captain of the Wicked Wench/Black Pearl. Of all the "Bible stuff" Robby Greene told him, the first and only Biblical story Jack liked was the parable of a pearl of great price. Jack eventually realized his ship, the Wicked Wench, was like his pearl of great price, so when Davy Jones raised his beloved ship from the bottom of the sea, now half burned and with her hull and masts all charred, Jack renamed his new pirate ship the Black Pearl. In other media The parable is referenced in Star Trek by Scotty at the end of an episode in the original series entitled "The Empath". See also Five Discourses of Matthew Life of Jesus in the New Testament Ministry of Jesus The Pearl of Great Price - one of the standard works in The Church of Jesus Christ of Latter-day Saints. References External links Pearl, Parable of the Gospel of Matthew Pearls in religion
23590
https://en.wikipedia.org/wiki/Pantheism
Pantheism
Pantheism is the philosophical and religious belief that reality, the universe, and nature are identical to divinity or a supreme entity. The physical universe is thus understood as an immanent deity, still expanding and creating, which has existed since the beginning of time. The term pantheist designates one who holds both that everything constitutes a unity and that this unity is divine, consisting of an all-encompassing, manifested god or goddess. All astronomical objects are thence viewed as parts of a sole deity. The worship of all gods of every religion is another definition, but it is more precisely termed omnism. Pantheist belief does not recognize a distinct personal god, anthropomorphic or otherwise, but instead characterizes a broad range of doctrines differing in forms of relationships between reality and divinity. Pantheistic concepts date back thousands of years, and pantheistic elements have been identified in various religious traditions. The term pantheism was coined by mathematician Joseph Raphson in 1697 and since then, it has been used to describe the beliefs of a variety of people and organizations. Pantheism was popularized in Western culture as a theology and philosophy based on the work of the 17th-century philosopher Baruch Spinoza, in particular, his book Ethics. A pantheistic stance was also taken in the 16th century by philosopher and cosmologist Giordano Bruno. In the East, Advaita Vedanta, a school of Hindu philosophy is thought to be similar to pantheism in Western philosophy. The early Taoism of Laozi and Zhuangzi is also sometimes considered pantheistic, although it could be more similar to panentheism. Cheondoism, which arose in the Joseon Dynasty of Korea, and Won Buddhism are also considered pantheistic. Etymology Pantheism derives from the Greek word πᾶν pan (meaning "all, of everything") and θεός theos (meaning "god, divine"). The first known combination of these roots appears in Latin, in Joseph Raphson's 1697 book De Spatio Reali seu Ente Infinito, where he refers to "pantheismus". It was subsequently translated into English as "pantheism" in 1702. Definitions There are numerous definitions of pantheism, including: a theological and philosophical position which identifies God with the universe, or regards the universe as a manifestation of God; the belief that everything is part of an all-encompassing, immanent God, and that all forms of reality may then be considered either modes of that Being, or identical with it; and a non-religious philosophical position maintaining that the Universe (in the sense of the totality of all existence) and God are identical. History Pre-modern times Early traces of pantheist thought can be found within animistic beliefs and tribal religions throughout the world as an expression of unity with the divine, specifically in beliefs that have no central polytheist or monotheist personas. Hellenistic theology makes early recorded reference to pantheism within the ancient Greek religion of Orphism, where pan (the all) is made cognate with the creator God Phanes (symbolizing the universe), and with Zeus, after the swallowing of Phanes. Pantheistic tendencies existed in a number of Gnostic groups, with pantheistic thought appearing throughout the Middle Ages. These included the beliefs of mystics such as Ortlieb of Strasbourg, David of Dinant, Amalric of Bena, and Eckhart. The Catholic Church has long regarded pantheistic ideas as heresy. Sebastian Franck was considered an early Pantheist. 
Giordano Bruno, an Italian friar who evangelized about a transcendent and infinite God, was burned at the stake in 1600 by the Roman Inquisition. He has since become known as a celebrated pantheist and martyr of science. The Hindu philosophy of Advaita Vedanta is thought to be similar to pantheism. The term Advaita (literally "non-secondness", but usually rendered as "nondualism", and often equated with monism) refers to the idea that Brahman alone is ultimately real, while the transient phenomenal world is an illusory appearance (maya) of Brahman. In this view, jivatman, the experiencing self, is ultimately non-different ("na aparah") from Ātman-Brahman, the highest Self or Reality. The jivatman or individual self is a mere reflection or limitation of singular Ātman in a multitude of apparent individual bodies. Baruch Spinoza In the West, pantheism was formalized as a separate theology and philosophy based on the work of the 17th-century philosopher Baruch Spinoza. Spinoza was a Dutch philosopher of Portuguese descent raised in the Sephardi Jewish community in Amsterdam. He developed highly controversial ideas regarding the authenticity of the Hebrew Bible and the nature of the Divine, and was effectively excluded from Jewish society at age 23, when the local synagogue issued a herem against him. A number of his books were published posthumously, and shortly thereafter included in the Catholic Church's Index of Forbidden Books. In the posthumous Ethics, he opposed René Descartes' famous mind–body dualism, the theory that the body and spirit are separate. Spinoza held the monist view that the two are the same, and monism is a fundamental part of his philosophy. He was described as a "God-intoxicated man," and used the word God to describe the unity of all substance. This view influenced philosophers such as Georg Wilhelm Friedrich Hegel, who said, "You are either a Spinozist or not a philosopher at all." Spinoza earned praise as one of the great rationalists of 17th-century philosophy and one of Western philosophy's most important thinkers. Although the term "pantheism" was not coined until after his death, he is regarded as the most celebrated advocate of the concept. Ethics was the major source from which Western pantheism spread. 18th century The first known use of the term "pantheism" was in Latin ("pantheismus") by the English mathematician Joseph Raphson in his work De Spatio Reali seu Ente Infinito, published in 1697. Raphson begins with a distinction between atheistic "panhylists" (from the Greek roots pan, "all", and hyle, "matter"), who believe everything is matter, and Spinozan "pantheists" who believe in "a certain universal substance, material as well as intelligence, that fashions all things that exist out of its own essence." Raphson thought that the universe was immeasurable in respect to a human's capacity of understanding, and believed that humans would never be able to comprehend it. He referred to the pantheism of the Ancient Egyptians, Persians, Syrians, Assyrians, Greek, Indians, and Jewish Kabbalists, specifically referring to Spinoza. The term was first used in English by a translation of Raphson's work in 1702. It was later used and popularized by Irish writer John Toland in his work of 1705 Socinianism Truly Stated, by a Pantheist. Toland was influenced by both Spinoza and Bruno, and had read Joseph Raphson's De Spatio Reali, referring to it as "the ingenious Mr. Ralphson's (sic) Book of Real Space". 
Like Raphson, he used the terms "pantheist" and "Spinozist" interchangeably. In 1720 he wrote the Pantheisticon: or The Form of Celebrating the Socratic-Society in Latin, envisioning a pantheist society that believed, "All things in the world are one, and one is all in all things ... what is all in all things is God, eternal and immense, neither born nor ever to perish." He clarified his idea of pantheism in a letter to Gottfried Leibniz in 1710 when he referred to "the pantheistic opinion of those who believe in no other eternal being but the universe". In the mid-eighteenth century, the English theologian Daniel Waterland defined pantheism this way: "It supposes God and nature, or God and the whole universe, to be one and the same substance—one universal being; insomuch that men's souls are only modifications of the divine substance." In the early nineteenth century, the German theologian Julius Wegscheider defined pantheism as the belief that God and the world established by God are one and the same. Between 1785–89, a controversy about Spinoza's philosophy arose between the German philosophers Friedrich Heinrich Jacobi (a critic) and Moses Mendelssohn (a defender). Known in German as the Pantheismusstreit (pantheism controversy), it helped spread pantheism to many German thinkers. 19th century Growing influence During the beginning of the 19th century, pantheism was the viewpoint of many leading writers and philosophers, attracting figures such as William Wordsworth and Samuel Coleridge in Britain; Johann Gottlieb Fichte, Schelling and Hegel in Germany; Knut Hamsun in Norway; and Walt Whitman, Ralph Waldo Emerson and Henry David Thoreau in the United States. Seen as a growing threat by the Vatican, in 1864 it was formally condemned by Pope Pius IX in the Syllabus of Errors. A letter written in 1886 by William Herndon, Abraham Lincoln's law partner, was sold at auction for US$30,000 in 2011. In it, Herndon writes of the U.S. President's evolving religious views, which included pantheism. The subject is understandably controversial, but the content of the letter is consistent with Lincoln's fairly lukewarm approach to organized religion. Comparison with non-Christian religions Some 19th-century theologians thought that various pre-Christian religions and philosophies were pantheistic. They thought Pantheism was similar to the ancient Hinduism philosophy of Advaita (non-dualism). 19th-century European theologians also considered Ancient Egyptian religion to contain pantheistic elements and pointed to Egyptian philosophy as a source of Greek Pantheism. The latter included some of the Presocratics, such as Heraclitus and Anaximander. The Stoics were pantheists, beginning with Zeno of Citium and culminating in the emperor-philosopher Marcus Aurelius. During the pre-Christian Roman Empire, Stoicism was one of the three dominant schools of philosophy, along with Epicureanism and Neoplatonism. The early Taoism of Laozi and Zhuangzi is also sometimes considered pantheistic, although it could be more similar to Panentheism. Cheondoism, which arose in the Joseon Dynasty of Korea, and Won Buddhism are also considered pantheistic. The Realist Society of Canada believes that the consciousness of the self-aware universe is reality, which is an alternative view of Pantheism. 20th century In the late 20th century, some declared that pantheism was an underlying theology of Neopaganism, and pantheists began forming organizations devoted specifically to pantheism and treating it as a separate religion. 
21st century

Dorion Sagan, son of scientist and science communicator Carl Sagan, published the 2007 book Dazzle Gradually: Reflections on the Nature of Nature, co-written with his mother Lynn Margulis. In the chapter "Truth of My Father", Sagan writes that his "father believed in the God of Spinoza and Einstein, God not behind nature, but as nature, equivalent to it." In 2009, pantheism was mentioned in a Papal encyclical and in a statement on New Year's Day, 2010, criticizing pantheism for denying the superiority of humans over nature and for seeing the source of man's salvation in nature. In 2015, The Paradise Project, an organization "dedicated to celebrating and spreading awareness about pantheism," commissioned Los Angeles muralist Levi Ponce to paint a 75-foot mural in Venice, California, near the organization's offices. The mural depicts Albert Einstein, Alan Watts, Baruch Spinoza, Terence McKenna, Carl Jung, Carl Sagan, Emily Dickinson, Nikola Tesla, Friedrich Nietzsche, Ralph Waldo Emerson, W.E.B. Du Bois, Henry David Thoreau, Elizabeth Cady Stanton, Rumi, Adi Shankara, and Laozi.

Categorizations

There are multiple varieties of pantheism and various systems of classifying them, relying upon one or more spectra or upon discrete categories.

Degree of determinism

The philosopher Charles Hartshorne used the term Classical Pantheism to describe the deterministic philosophies of Baruch Spinoza, the Stoics, and other like-minded figures. Pantheism (All-is-God) is often associated with monism (All-is-One), and some have suggested that it logically implies determinism (All-is-Now). Albert Einstein explained theological determinism by stating that "the past, present, and future are an 'illusion'." This form of pantheism has been referred to as "extreme monism", in which, in the words of one commentator, "God decides or determines everything, including our supposed decisions." Other examples of determinism-inclined pantheisms include those of Ralph Waldo Emerson and Hegel. However, some have argued against treating every meaning of "unity" as an aspect of pantheism, and there exist versions of pantheism that regard determinism as an inaccurate or incomplete view of nature. Examples include the beliefs of John Scotus Eriugena, Friedrich Wilhelm Joseph Schelling and William James.

Degree of belief

It may also be possible to distinguish two types of pantheism, one being more religious and the other being more philosophical. The Columbia Encyclopedia writes of the distinction: "If the pantheist starts with the belief that the one great reality, eternal and infinite, is God, he sees everything finite and temporal as but some part of God. There is nothing separate or distinct from God, for God is the universe. If, on the other hand, the conception taken as the foundation of the system is that the great inclusive unity is the world itself, or the universe, God is swallowed up in that unity, which may be designated nature."

Form of monism

Philosophers and theologians have often suggested that pantheism implies monism.

Other

In 1896, J. H. Worman, a theologian, identified seven categories of pantheism: Mechanical or materialistic (God the mechanical unity of existence); Ontological (fundamental unity, Spinoza); Dynamic; Psychical (God is the soul of the world); Ethical (God is the universal moral order, Fichte); Logical (Hegel); and Pure (absorption of God into nature, which Worman equates with atheism).
In 1984, Paul D. Feinberg, professor of biblical and systematic theology at Trinity Evangelical Divinity School, also identified seven: Hylozoistic; Immanentistic; Absolutistic monistic; Relativistic monistic; Acosmic; Identity of opposites; and Neoplatonic or emanationistic.

Demographics

Prevalence

According to censuses of 2011, the UK was the country with the most pantheists. As of 2011, about 1,000 Canadians identified their religion as "Pantheist", representing 0.003% of the population. By 2021, the number of Canadian pantheists had risen to 1,855 (0.005%). In Ireland, the number of people identifying as pantheists rose from 202 in 1991 to 1,106 in 2002, 1,691 in 2006, and 1,940 in 2011. In New Zealand, there was exactly one pantheist man in 1901. By 1906, the number of pantheists in New Zealand had septupled to 7 (6 male, 1 female). This number had further risen to 366 by 2006.

Age, ethnicity, and gender

The 2021 Canadian census showed that pantheists were somewhat more likely to be in their 20s and 30s compared to the general population. The age group least likely to be pantheist was those aged under 15, who were about four times less likely to be pantheist than the general population. The 2021 Canadian census also showed that pantheists were less likely to be part of a recognized minority group compared to the general population, with 90.3% of pantheists not being part of any minority group (compared to 73.5% of the general population). The census did not register any pantheists who were Arab, Southeast Asian, West Asian, Korean, or Japanese. In Canada (2011), there was no gender difference with regard to pantheism. However, in Ireland (2011), pantheists were slightly more likely to be female (1,074 pantheists, 0.046% of women) than male (866 pantheists, 0.038% of men). In contrast, Canada (2021) showed pantheists to be slightly more likely to be male, with men representing 51.5% of pantheists.

Related concepts

Nature worship or nature mysticism is often conflated and confused with pantheism. At least one expert, Harold Wood, founder of the Universal Pantheist Society, points out that in pantheist philosophy Spinoza's identification of God with nature is very different from the more recent idea of a self-identifying pantheist with environmental ethical concerns. Such a pantheist's use of the word nature to describe his worldview may be vastly different from the "nature" of the modern sciences. He and other nature mystics who also identify as pantheists use "nature" to refer to the limited natural environment (as opposed to the man-made built environment). This use of "nature" is different from the broader use by Spinoza and other pantheists, who describe natural laws and the overall phenomena of the physical world. Nature mysticism may be compatible with pantheism, but it may also be compatible with theism and other views. Pantheism has also been involved in animal worship, especially in primal religions. Nontheism is an umbrella term which has been used to refer to a variety of religions not fitting traditional theism, and under which pantheism has been included. Panentheism (from Greek πᾶν (pân) "all"; ἐν (en) "in"; and θεός (theós) "God"; "all-in-God") was formally coined in Germany in the 19th century in an attempt to offer a philosophical synthesis between traditional theism and pantheism, stating that God is substantially omnipresent in the physical universe but also exists "apart from" or "beyond" it as its Creator and Sustainer.
Thus panentheism separates itself from pantheism, positing the extra claim that God exists above and beyond the world as we know it. The line between pantheism and panentheism can be blurred depending on varying definitions of God, so there have been disagreements when assigning particular notable figures to pantheism or panentheism. Pandeism is another word derived from pantheism, and is characterized as a combination of reconcilable elements of pantheism and deism. It assumes a Creator-deity that is at some point distinct from the universe and then transforms into it, resulting in a universe similar to the pantheistic one in present essence, but differing in origin. Panpsychism is the philosophical view that consciousness, mind, or soul is a universal feature of all things. Some pantheists also subscribe to the distinct philosophical views hylozoism (or panvitalism), the view that everything is alive, and its close neighbor animism, the view that everything has a soul or spirit. Pantheism in religion Traditional religions Many traditional and folk religions including African traditional religions and Native American religions can be seen as pantheistic, or a mixture of pantheism and other doctrines such as polytheism and animism. According to pantheists, there are elements of pantheism in some forms of Christianity. Ideas resembling pantheism existed in Eastern religions before the 18th century (notably Sikhism, Hinduism, Confucianism, and Taoism). Although there is no evidence that these influenced Spinoza's work, there is such evidence regarding other contemporary philosophers, such as Leibniz, and later Voltaire. In the case of Hinduism, pantheistic views exist alongside panentheistic, polytheistic, monotheistic, and atheistic ones. In the case of Sikhism, stories attributed to Guru Nanak suggest that he believed God was everywhere in the physical world, and the Sikh tradition typically describes God as the preservative force within the physical world, present in all material forms, each created as a manifestation of God. However, Sikhs view God as the transcendent creator, "immanent in the phenomenal reality of the world in the same way in which an artist can be said to be present in his art". This implies a more panentheistic position. Spirituality and new religious movements Pantheism is popular in modern spirituality and new religious movements, such as Neopaganism and Theosophy. Two organizations that specify the word pantheism in their title formed in the last quarter of the 20th century. The Universal Pantheist Society, open to all varieties of pantheists and supportive of environmental causes, was founded in 1975. The World Pantheist Movement is headed by Paul Harrison, an environmentalist, writer and a former vice president of the Universal Pantheist Society, from which he resigned in 1996. The World Pantheist Movement was incorporated in 1999 to focus exclusively on promoting naturalistic pantheism – a strict metaphysical naturalistic version of pantheism, considered by some a form of religious naturalism. It has been described as an example of "dark green religion" with a focus on environmental ethics. See also Animism Biocentrism (ethics) Irreligion List of pantheists Monism Mother nature Panentheism Theopanism, a term that is philosophically distinct but derived from the same root words Worship of heavenly bodies Notes References Sources Further reading Amryc, C. Pantheism: The Light and Hope of Modern Reason, 1898. 
online
Harrison, Paul, Elements of Pantheism, Element Press, 1999. preview
Hunt, John, Pantheism and Christianity, William Isbister Limited, 1884. online
Levine, Michael, Pantheism: A Non-Theistic Concept of Deity, Psychology Press, 1994.
Picton, James Allanson, Pantheism: Its Story and Significance, Archibald Constable & Co., 1905. online
Plumptre, Constance E., General Sketch of the History of Pantheism, Cambridge University Press, 2011 (reprint, originally published 1879). online
Russell, Sharman Apt, Standing in the Light: My Life as a Pantheist, Basic Books, 2008.
Urquhart, W. S., Pantheism and the Value of Life, 1919. online

External links

Bollacher, Martin 2020: pantheism. In: Kirchhoff, T. (ed.): Online Encyclopedia Philosophy of Nature. Universitätsbibliothek Heidelberg
Pantheism entry by Michael Levine (earlier article on pantheism in the Stanford Encyclopedia of Philosophy)
The Pantheist Index, pantheist-index.net
An Introduction to Pantheism (wku.edu)
The Universal Pantheist Society (pantheist.net)
The World Pantheist Movement (pantheism.net)
Pantheism.community by The Paradise Project (pantheism.com)
Pantheism and Judaism (chabad.org)
On Whitehead's process pantheism: Michel Weber, Whitehead's Pancreativism. The Basics. Foreword by Nicholas Rescher, Frankfurt / Paris, Ontos Verlag, 2006.
23591
https://en.wikipedia.org/wiki/Panentheism
Panentheism
Panentheism (; "all in God", from the Greek , and ) is the belief that the divine intersects every part of the universe and also extends beyond space and time. The term was coined by the German philosopher Karl Krause in 1828 (after reviewing Hindu scripture) to distinguish the ideas of Georg Wilhelm Friedrich Hegel (1770–1831) and Friedrich Wilhelm Joseph Schelling (1775–1854) about the relation of God and the universe from the supposed pantheism of Baruch Spinoza. Unlike pantheism, which holds that the divine and the universe are identical, panentheism maintains an ontological distinction between the divine and the non-divine and the significance of both. In panentheism, the universal spirit is present everywhere, which at the same time "transcends" all things created. While pantheism asserts that "all is God", panentheism claims that God is greater than the universe. Some versions of panentheism suggest that the universe is nothing more than the manifestation of God. In addition, some forms indicate that the universe is contained within God, like in the Kabbalah concept of tzimtzum. Much of Hindu thought is highly characterized by panentheism and pantheism. In philosophy Ancient Greek philosophy The religious beliefs of Neoplatonism can be regarded as panentheistic. Plotinus taught that there was an ineffable transcendent God ("the One", to En, τὸ Ἕν) of which subsequent realities were emanations. From "the One" emanates the Divine Mind (Nous, Νοῦς) and the Cosmic Soul (Psyche, Ψυχή). In Neoplatonism the world itself is God (according to Plato's Timaeus 37). This concept of divinity is associated with that of the Logos (Λόγος), which had originated centuries earlier with Heraclitus (c. 535–475 BC). The Logos pervades the cosmos, whereby all thoughts and all things originate, or as Heraclitus said: "He who hears not me but the Logos will say: All is one." Neoplatonists such as Iamblichus attempted to reconcile this perspective by adding another hypostasis above the original monad of force or Dynamis (Δύναμις). This new all-pervasive monad encompassed all creation and its original uncreated emanations. Modern philosophy Baruch Spinoza later claimed that "Whatsoever is, is in God, and without God nothing can be, or be conceived." "Individual things are nothing but modifications of the attributes of God, or modes by which the attributes of God are expressed in a fixed and definite manner." Though Spinoza has been called the "prophet" and "prince" of pantheism, in a letter to Henry Oldenburg Spinoza states that: "as to the view of certain people that I identify god with nature (taken as a kind of mass or corporeal matter), they are quite mistaken". For Spinoza, our universe (cosmos) is a mode under two attributes of Thought and Extension. God has infinitely many other attributes which are not present in our world. According to German philosopher Karl Jaspers, when Spinoza wrote "Deus sive Natura" (God or Nature) Spinoza did not mean to say that God and Nature are interchangeable terms, but rather that God's transcendence was attested by his infinitely many attributes, and that two attributes known by humans, namely Thought and Extension, signified God's immanence. Furthermore, Martial Guéroult suggested the term panentheism, rather than pantheism to describe Spinoza's view of the relation between God and the world. The world is not God, but it is, in a strong sense, "in" God. 
Yet, American philosopher and self-described panentheist Charles Hartshorne referred to Spinoza's philosophy as "classical pantheism" and distinguished Spinoza's philosophy from panentheism. In 1828, the German philosopher Karl Christian Friedrich Krause (1781–1832) seeking to reconcile monotheism and pantheism, coined the term panentheism (from the Ancient Greek expression πᾶν ἐν θεῷ, pān en theṓ, literally "all in god"). This conception of God influenced New England transcendentalists such as Ralph Waldo Emerson. The term was popularized by Charles Hartshorne in his development of process theology and has also been closely identified with the New Thought. The formalization of this term in the West in the 19th century was not new; philosophical treatises had been written on it in the context of Hinduism for millennia. Philosophers who embraced panentheism have included Thomas Hill Green (1839–1882), James Ward (1843–1925), Andrew Seth Pringle-Pattison (1856–1931) and Samuel Alexander (1859–1938). Beginning in the 1940s, Hartshorne examined numerous conceptions of God. He reviewed and discarded pantheism, deism, and pandeism in favor of panentheism, finding that such a "doctrine contains all of deism and pandeism except their arbitrary negations". Hartshorne formulated God as a being who could become "more perfect": He has absolute perfection in categories for which absolute perfection is possible, and relative perfection (i. e., is superior to all others) in categories for which perfection cannot be precisely determined. In religion Buddhism The Reverend Zen Master Soyen Shaku was the first Zen Buddhist Abbot to tour the United States in 1905–6. He wrote a series of essays collected into the book Zen For Americans. In the essay titled "The God Conception of Buddhism" he attempts to explain how a Buddhist looks at the ultimate without an anthropomorphic God figure while still being able to relate to the term God in a Buddhist sense: At the outset, let me state that Buddhism is not atheistic as the term is ordinarily understood. It has certainly a God, the highest reality and truth, through which and in which this universe exists. However, the followers of Buddhism usually avoid the term God, for it savors so much of Christianity, whose spirit is not always exactly in accord with the Buddhist interpretation of religious experience. Again, Buddhism is not pantheistic in the sense that it identifies the universe with God. On the other hand, the Buddhist God is absolute and transcendent; this world, being merely its manifestation, is necessarily fragmental and imperfect. To define more exactly the Buddhist notion of the highest being, it may be convenient to borrow the term very happily coined by a modern German scholar, "panentheism," according to which God is πᾶν καὶ ἕν (all and one) and more than the totality of existence.Zen For Americans by Soyen Shaku, translated by Daisetz Teitaro Suzuki, 1906, pages 25–26. The essay then goes on to explain first utilizing the term "God" for the American audience to get an initial understanding of what he means by "panentheism," and then discusses the terms that Buddhism uses in place of "God" such as Dharmakaya, Buddha or Adi-Buddha, and Tathagata. Pure land Buddhism Christianity Panentheism is also a feature of some Christian philosophical theologies and resonates strongly within the theological tradition of the Eastern Orthodox Church. It also appears in process theology. 
Process theological thinkers are generally regarded in the Christian West as unorthodox. Furthermore, process philosophical thought is widely believed to have paved the way for open theism, a movement that tends to associate itself primarily with the Evangelical branch of Protestantism, but is also generally considered unorthodox by most Evangelicals. Catholic panentheism A number of ordained Catholic mystics (including Richard Rohr, David Steindl-Rast, and Thomas Keating) have suggested that panentheism is the original view of Christianity. They hold that such a view is directly supported by mystical experience and the teachings of Jesus and Saint Paul. Richard Rohr surmises this in his 2019 book, The Universal Christ: Similarly, David Steindl-Rast posits that Christianity's original panentheism is being revealed through contemporary mystical insight: This sentiment is mirrored in Thomas Keating's 1993 article, Clarifications Regarding Centering Prayer: Panentheism in other Christian confessions Panentheistic conceptions of God occur amongst some modern theologians. Process theology and Creation Spirituality, two recent developments in Christian theology, contain panentheistic ideas. Charles Hartshorne (1897–2000), who conjoined process theology with panentheism, maintained a lifelong membership in the Methodist church but was also a Unitarian. In later years he joined the Austin, Texas, Unitarian Universalist congregation and was an active participant in that church. Referring to the ideas such as Thomas Oord's ‘theocosmocentrism’ (2010), the soft panentheism of open theism, Keith Ward's comparative theology and John Polkinghorne's critical realism (2009), Raymond Potgieter observes distinctions such as dipolar and bipolar: The former suggests two poles separated such as God influencing creation and it in turn its creator (Bangert 2006:168), whereas bipolarity completes God’s being implying interdependence between temporal and eternal poles. (Marbaniang 2011:133), in dealing with Whitehead’s approach, does not make this distinction. I use the term bipolar as a generic term to include suggestions of the structural definition of God’s transcendence and immanence; to for instance accommodate a present and future reality into which deity must reasonably fit and function, and yet maintain separation from this world and evil whilst remaining within it. Some argue that panentheism should also include the notion that God has always been related to some world or another, which denies the idea of creation out of nothing (creatio ex nihilo). Nazarene Methodist theologian Thomas Jay Oord (* 1965) advocates panentheism, but he uses the word "theocosmocentrism" to highlight the notion that God and some world or another are the primary conceptual starting blocks for eminently fruitful theology. This form of panentheism helps in overcoming the problem of evil and in proposing that God's love for the world is essential to who God is. The Latter Day Saint movement teaches that the Light of Christ "proceeds from God through Christ and gives life and light to all things". Gnosticism Manichaeists, being of another gnostic sect, preached a very different doctrine in positioning the true Manichaean God against matter as well as other deities, that it described as enmeshed with the world, namely the gods of Jews, Christians and pagans. 
Nevertheless, this dualistic teaching included an elaborate cosmological myth that narrates the defeat of primal man by the powers of darkness that devoured and imprisoned the particles of light. Valentinian Gnosticism taught that matter came about through emanations of the supreme being, even if to some this event is held to be more accidental than intentional. To other gnostics, these emanations were akin to the Sephirot of the Kabbalists and deliberate manifestations of a transcendent God through a complex system of intermediaries. Hinduism The earliest reference to panentheistic thought in Hindu philosophy is in a creation myth contained in the later section of Rig Veda called the Purusha Sukta, which was compiled before 1100 BCE. The Purusha Sukta gives a description of the spiritual unity of the cosmos. It presents the nature of Purusha or the cosmic being as both immanent in the manifested world and yet transcendent to it. From this being the sukta holds, the original creative will proceeds, by which this vast universe is projected in space and time. The most influential and dominant school of Indian philosophy, Advaita Vedanta, rejects theism and dualism by insisting that "Brahman [ultimate reality] is without parts or attributes...one without a second." Since Brahman has no properties, contains no internal diversity and is identical with the whole reality it cannot be understood as an anthropomorphic personal God. The relationship between Brahman and the creation is often thought to be panentheistic. Panentheism is also expressed in the Bhagavad Gita. In verse IX.4, Krishna states: Many schools of Hindu thought espouse monistic theism, which is thought to be similar to a panentheistic viewpoint. Nimbarka's school of differential monism (Dvaitadvaita), Ramanuja's school of qualified monism (Vishistadvaita) and Saiva Siddhanta and Kashmir Shaivism are all considered to be panentheistic. Chaitanya Mahaprabhu's Gaudiya Vaishnavism, which elucidates the doctrine of Achintya Bheda Abheda (inconceivable oneness and difference), is also thought to be panentheistic. In Kashmir Shaivism, all things are believed to be a manifestation of Universal Consciousness (Cit or Brahman). So from the point of view of this school, the phenomenal world (Śakti) is real, and it exists and has its being in Consciousness (Ćit). Thus, Kashmir Shaivism is also propounding of theistic monism or panentheism. Shaktism, or Tantra, is regarded as an Indian prototype of Panentheism. Shakti is considered to be the cosmos itself – she is the embodiment of energy and dynamism, and the motivating force behind all action and existence in the material universe. Shiva is her transcendent masculine aspect, providing the divine ground of all being. "There is no Shiva without Shakti, or Shakti without Shiva. The two ... in themselves are One." Thus, it is She who becomes the time and space, the cosmos, it is She who becomes the five elements, and thus all animate life and inanimate forms. She is the primordial energy that holds all creation and destruction, all cycles of birth and death, all laws of cause and effect within Herself, and yet is greater than the sum total of all these. She is transcendent, but becomes immanent as the cosmos (Mula Prakriti). She, the Primordial Energy, directly becomes Matter. Judaism While mainstream Rabbinic Judaism is classically monotheistic, and follows in the footsteps of Maimonides (c. 1135–1204), the panentheistic conception of God can be found among certain mystical Jewish traditions. 
A leading scholar of Kabbalah, Moshe Idel, ascribes this doctrine to the kabbalistic system of Moses ben Jacob Cordovero (1522–1570) and, in the eighteenth century, to the Baal Shem Tov (c. 1700–1760), founder of the Hasidic movement, as well as his contemporaries, Rabbi Dov Ber, the Maggid of Mezeritch (died 1772), and Menahem Mendel, the Maggid of Bar. This may be said of many, if not most, subsequent Hasidic masters. There is some debate as to whether Isaac Luria (1534–1572) and Lurianic Kabbalah, with its doctrine of tzimtzum, can be regarded as panentheistic. According to Hasidism, the infinite Ein Sof is incorporeal and exists in a state that is both transcendent and immanent. This appears to be the view of the non-Hasidic Rabbi Chaim of Volozhin as well. Hasidic Judaism merges the elite ideal of nullification to a transcendent God, via the intellectual articulation of inner dimensions through Kabbalah, with emphasis on the panentheistic divine immanence in everything. Many scholars would argue that "panentheism" is the best single-word description of the philosophical theology of Baruch Spinoza. It is therefore no surprise that aspects of panentheism are also evident in the theology of Reconstructionist Judaism as presented in the writings of Mordecai Kaplan (1881–1983), who was strongly influenced by Spinoza.

Sikhism

Many newer, contemporary Sikhs have suggested that human souls and the monotheistic God are two different realities (dualism), distinguishing Sikhism from the monistic and various shades of nondualistic philosophies of other Indian religions. However, Sikh scholars have explored nondualist exegesis of Sikh scriptures, such as Bhai Vir Singh. According to Mandair, Vir Singh interprets the Sikh scriptures as teaching nonduality. The renowned Sikh scholar Bhai Mani Singh is quoted as saying that Sikhism has all the essence of Vedanta philosophy. Historically, the Sikh symbol of Ik Oankaar has had a monist meaning, and its reduction to simply meaning "There is but One God" is, on this reading, incorrect. Older exegeses of Sikh scripture, such as the Faridkot Teeka and the Garab Ganjani Teeka, have always described Sikh metaphysics as a non-dual, panentheistic universe. For this reason, Sikh metaphysics has often been compared to non-dual Vedanta metaphysics. The Sikh poet Bhai Nand Lal often used Sufi terms to describe Sikh philosophy, talking about wahdat ul-wujud in his Persian poetry.

Islam

Wahdat ul-wujud (the Unity of All Things) is a concept sometimes described as pantheism or panentheism. It is primarily associated with the Asharite Sufi scholar Ibn Arabi. Some Sufi orders, notably the Bektashis and the Universal Sufi movement, adhere to similar panentheistic beliefs. The same is said of the Nizari Ismaili, who follow panentheism according to Ismaili doctrine.

In Pre-Columbian America

The Mesoamerican empires of the Maya and the Aztecs, as well as the South American Incas (Tahuantinsuyu), have typically been characterized as polytheistic, with strong male and female deities. According to Charles C. Mann's history book 1491: New Revelations of the Americas Before Columbus, only the lower classes of Aztec society were polytheistic. Philosopher James Maffie has argued that Aztec metaphysics was pantheistic rather than panentheistic, since Teotl was considered by Aztec philosophers to be the ultimate all-encompassing yet all-transcending force defined by its inherent duality.
Native American beliefs in North America have been characterized as panentheistic in that there is an emphasis on a single, unified divine spirit that is manifest in each individual entity. (North American Native writers have also translated the word for God as the Great Mystery or as the Sacred Other). This concept is referred to by many as the Great Spirit. Philosopher J. Baird Callicott has described Lakota theology as panentheistic, in that the divine both transcends and is immanent in everything. One exception can be modern Cherokee who are predominantly monotheistic but apparently not panentheistic; yet in older Cherokee traditions many observe both aspects of pantheism and panentheism, and are often not beholden to exclusivity, encompassing other spiritual traditions without contradiction, a common trait among some tribes in the Americas. In the stories of Keetoowah storytellers Sequoyah Guess and Dennis Sixkiller, God is known as ᎤᏁᎳᏅᎯ, commonly pronounced "unehlanv," and visited earth in prehistoric times, but then left earth and her people to rely on themselves. This shows a parallel to Vaishnava cosmology. Konkōkyō Konkokyo is a form of sectarian Japanese Shinto, and a faith within the Shinbutsu-shūgō tradition. Traditional Shintoism holds that an impersonal spirit manifests/penetrates the material world, giving all objects consciousness and spontaneously creating a system of natural mechanisms, forces, and phenomena (Musubi). Konkokyo deviates from traditional Shintoism by holding that this spirit (Comparable to Brahman), has a personal identity and mind. This personal form is non-different from the energy itself, not residing in any particular cosmological location. In Konkokyo, this god is named "Tenchi Kane no Kami-Sama" which can be translated directly as, "Spirit of the gilded/golden heavens and earth". Though practitioners of Konkokyo are small in number (~300,000 globally), the sect has birthed or influenced a multiplicity of Japanese New Religions, such as Oomoto. Many of these faiths carry on the Panentheistic views of Konkokyo See also Achintya Bheda Abheda, concept of qualified non-duality in Gaudiya Vaishnava Hinduism Brahman Christian Universalism Conceptions of God Creation Spirituality Divine simplicity Double-aspect theory Essence–energies distinction German idealism Henosis Kabbalah Neoplatonism Neutral monism Open theism The Over-Soul (1841), essay by Ralph Waldo Emerson Orthodox Christian theology Pantheism Pandeism Parabrahman Paramatman Philosophy of space and time Process theology Subud, spiritual movement founded by Muhammad Subuh Sumohadiwidjojo (1901–1987) Tawhid, concept of indivisible oneness in Islam People associated with panentheism Gregory Palamas (1296–1359), Byzantine Orthodox theologian and hesychast Baruch Spinoza (1632–1677), Dutch philosopher of Sephardi-Portuguese origin Alfred North Whitehead (1861–1947), English mathematician, philosopher, and father of process philosophy Charles Hartshorne (1897–2000), American philosopher and father of process theology Arthur Peacocke (1924–2006), British Anglican theologian and biochemist John B. Cobb (b. 1925), American theologian and philosopher Mordechai Nessyahu (1929–1997), Jewish-Israeli political theorist and philosopher of Cosmotheism Sallie McFague (1933–2019), American feminist theologian, author of Models of God and The Body of God William Luther Pierce (1933–2002), American political activist and self-proclaimed cosmotheist Rosemary Radford Ruether (b. 
1936), American feminist theologian, author of Sexism and God-Talk and Gaia and God Jan Assmann (b. 1938), German Egyptologist, theorist of Cosmotheism Leonardo Boff (b. 1938), Brazilian liberation theologian and philosopher, former Franciscan priest, author of Ecology and Liberation: A New Paradigm Matthew Fox (priest) (b. 1940), American theologian, exponent of Creation Spirituality, expelled from the Dominican Order in 1993 and received into the Episcopal priesthood in 1994, author of Creation Spirituality, The Coming of the Cosmic Christ and A New Reformation: Creation Spirituality and the Transformation of Christianity Marcus Borg (1942–2015), American New Testament scholar and theologian, prominent member of the Jesus Seminar, author of The God We Never Knew Richard Rohr (b. 1943), American Franciscan priest and spiritual writer, author of Everything Belongs and The Universal Christ Carter Heyward (b. 1945), American feminist theologian and Episcopal priest, author of Touching our Strength and Saving Jesus from Those Who Are Right Norman Lowell (b. 1946), Maltese writer and politician, self-proclaimed cosmotheist John Polkinghorne (1930-2021), English theoretical physicist and theologian Michel Weber (b. 1963), Belgian philosopher Thomas Jay Oord (b. 1965), American theologian and philosopher Citations General and cited references Ankur Barua, "God’s Body at Work: Rāmānuja and Panentheism," in: International Journal of Hindu Studies, 14.1 (2010), pp. 1–30. Philip Clayton and Arthur Peacock (eds.), In Whom We Live and Move and Have Our Being; Panentheistic Reflections on God's Presence in a Scientific World, Eerdmans (2004) Bangert, B.C. (2006). Consenting to God and nature: Toward a theocentric, naturalistic, theological ethics, Princeton theological monograph ser. 55, Pickwick Publications, Eugene. Cooper, John W. (2006). Panentheism: The Other God of the Philosophers, Baker Academic Davis, Andrew M. and Philip Clayton (eds.) (2018). How I Found God in Everyone and Everywhere, Monkfish Book Publishing Thomas Jay Oord (2010). The Nature of Love: A Theology . Joseph Bracken, "Panentheism in the context of the theology and science dialogue", in: Open Theology, 1 (2014), 1–11 (online). External links Dr. Jay McDaniel on Panentheism Biblical Panentheism: The “Everywhere-ness” of God—God in all things, by Jon Zuck John Polkinghorne on Panentheism The Bible, Spiritual authority and Inspiration – Lecture by Tom Wright at Spiritual Minded Christian universalism Kabbalah
23592
https://en.wikipedia.org/wiki/Paraphilia
Paraphilia
A paraphilia is an experience of recurring or intense sexual arousal to atypical objects, places, situations, fantasies, behaviors, or individuals. It has also been defined as a sexual interest in anything other than a legally consenting human partner. Paraphilias are contrasted with normophilic ("normal") sexual interests, though the definition of what makes a sexual interest normal or atypical remains controversial. The exact number and taxonomy of paraphilia is under debate; Anil Aggrawal has listed as many as 549 types of paraphilias. Several sub-classifications of paraphilia have been proposed, although some argue that a fully dimensional, spectrum or complaint-oriented approach would better reflect the evident diversity of human sexuality. Although paraphilias were believed in the 20th century to be rare among the general population, recent research has indicated that paraphilic interests are relatively common. Etymology Coinage of the term paraphilia (paraphilie) has been credited to Friedrich Salomon Krauss in 1903 and it was used with some regularity by Wilhelm Stekel in the 1920s. The term comes from the Greek παρά (para), meaning "other" or "outside of", and φιλία (-philia), meaning "loving". The word was popularized by John Money in the 1980s as a non-pejorative designation for unusual sexual interests. It was first included in the DSM in its 1980 edition. Definition To date there is no broad scientific consensus for definitive boundaries between what are considered "unconventional sexual interests", kinks, fetishes, and paraphilias. As such, these terms are often used loosely and interchangeably, especially in common parlance. History of paraphilic terminology Many terms have been used to describe atypical sexual interests, and there remains debate regarding technical accuracy and perceptions of stigma. Money described paraphilia as "a sexuoerotic embellishment of, or alternative to the official, ideological norm." Psychiatrist Glen Gabbard writes that despite efforts by Wilhelm Stekel and John Money, "the term paraphilia remains pejorative in most circumstances." In the late 19th century, psychologists and psychiatrists started to categorize various paraphilias as they wanted a more descriptive system than the legal and religious constructs of sodomy and perversion. Albert Eulenburg (1914) noted a commonality across the paraphilias, using the terminology of his time, "All the forms of sexual perversion...have one thing in common: their roots reach down into the matrix of natural and normal sex life; there they are somehow closely connected with the feelings and expressions of our physiological erotism. They are... hyperbolic intensifications, distortions, monstrous fruits of certain partial and secondary expressions of this erotism which is considered 'normal' or at least within the limits of healthy sex feeling." Before the introduction of the term paraphilia in the DSM-III (1980), the term sexual deviation was used to refer to paraphilias in the first two editions of the manual. 
In 1981, an article published in the American Journal of Psychiatry described paraphilia as "recurrent, intense sexually arousing fantasies, sexual urges, or behaviors generally involving" the following:

Non-human objects
The suffering or humiliation of oneself or one's partner
Prepubescent children
Non-consenting persons

Definition of typical versus atypical interests

Clinical literature contains reports of many paraphilias, only some of which receive their own entries in the diagnostic taxonomies of the American Psychiatric Association or the World Health Organization. There is disagreement regarding which sexual interests should be deemed paraphilic disorders versus normal variants of sexual interest. The DSM-IV-TR also acknowledges that the diagnosis and classification of paraphilias across cultures or religions "is complicated by the fact that what is considered deviant in one cultural setting may be more acceptable in another setting". Some argue that cultural relativism is important to consider when discussing paraphilias, because there is wide variance concerning what is sexually acceptable across cultures. Consensual adult activities and adult entertainment involving sexual roleplay, novel, superficial, or trivial aspects of sexual fetishism, or incorporating the use of sex toys are not necessarily paraphilic.

Criticism of common definitions

There is scientific and political controversy regarding the continued inclusion of sex-related diagnoses such as the paraphilias in the DSM, due to the stigma of being classified as a mental illness. Some groups, seeking greater understanding and acceptance of sexual diversity, have lobbied for changes to the legal and medical status of unusual sexual interests and practices. Charles Allen Moser, a physician and advocate for sexual minorities, has argued that the diagnoses should be eliminated from diagnostic manuals. Ray Blanchard stated that the current definition of paraphilia in the DSM is done by concatenation (i.e., by listing a set of paraphilias) and that defining the term by exclusion (as anything that is not normophilic) is preferable.

Inclusion and subsequent exclusion of homosexuality

Homosexuality, now widely accepted as a variant of human sexuality, was at one time discussed as a sexual deviation. Sigmund Freud and subsequent psychoanalytic thinkers considered homosexuality and paraphilias to result from psychosexual non-normative relations to the Oedipal complex, though not in the antecedent version of the 'Three Essays on Sexual Theory', where paraphilias are considered as stemming from an original polymorphous perversity. As such, the term sexual perversion or the epithet pervert has historically referred to gay men, as well as other non-heterosexuals (people who fall outside the perceived norms of sexual orientation). By the mid-20th century, mental health practitioners began formalizing "deviant sexuality" classifications into categories. Originally coded as 000-x63, homosexuality was at the top of the classification list (Code 302.0) until the American Psychiatric Association removed homosexuality from the DSM in 1973. Martin Kafka writes, "Sexual disorders once considered paraphilias (e.g., homosexuality) are now regarded as variants of normal sexuality."
A 2012 literature study by clinical psychologist James Cantor, when comparing homosexuality with paraphilias, found that both share "the features of onset and course (both homosexuality and paraphilia being life-long), but they appear to differ on sex ratio, fraternal birth order, handedness, IQ and cognitive profile, and neuroanatomy". The research then concluded that the data seemed to suggest paraphilias and homosexuality as two distinct categories, but regarded the conclusion as "quite tentative" given the current limited understanding of paraphilias. Characteristics Paraphilias typically arise in late adolescence or early adulthood. Persons with paraphilias are generally egosyntonic and view their paraphilias as something inherent in their being, though they do recognize that their sexual fantasies lie outside the norm and may attempt to conceal them. Paraphilic interests are rarely exclusive and some people have more than one paraphilia. Some people with paraphilias may seek occupations and avocations that increase their access to objects of their sexual fantasies (e.g. voyeurs working in rental properties to "peep" on others, pedophiles working with Boy Scouts). Research has found that some paraphilias, such as voyeurism and sadomasochism, are associated with more lifetime sexual partners, contradicting theories that paraphilias are associated with courtship disorders and arrested social development. Scientific literature includes some single-case studies of very rare and idiosyncratic paraphilias. These include an adolescent male who had a strong fetishistic interest in the exhaust pipes of cars, a young man with a similar interest in a specific type of car, and a man who had a paraphilic interest in sneezing (both his own and the sneezing of others). Causes and correlations The causes of paraphilias in people are unclear, but some research points to a possible prenatal neurodevelopmental correlation. A 2008 study analyzing the sexual fantasies of 200 heterosexual men by using the Wilson Sex Fantasy Questionnaire exam determined that males with a pronounced degree of fetish interest had a greater number of older brothers, a high 2D:4D digit ratio (which would indicate excessive prenatal estrogen exposure), and an elevated probability of being left-handed, suggesting that disturbed hemispheric brain lateralization may play a role in paraphilic attractions. Behavioral explanations propose that paraphilias are conditioned early in life, during an experience that pairs the paraphilic stimulus with intense sexual arousal. Susan Nolen-Hoeksema suggests that, once established, masturbatory fantasies about the stimulus reinforce and broaden the paraphilic arousal. Prevalence Although paraphilic interests in the general population were believed to be rare, research has shown that fantasies and behaviors related to voyeurism, sadomasochism and couple exhibitionism are not statistically uncommon among adults. In a study conducted in a population of men, 62% of participants reported at least one paraphilic interest. In another sample of college students, voyeurism was reported in 52% of men. The DSM-5 estimates that 2.2% of males and 1.3% of females in Australia engaged in bondage and discipline, sadomasochism, or dominance and submission within the past 12 months. The population prevalence of sexual masochism disorder is unknown. Among women Paraphilias are rarely observed in women. However, there have been some studies on females with paraphilias. 
Men and women differ in the content of their sexual fantasies, with the former reporting greater proportions of fetishism, exhibitionism and sadism, and the latter reporting greater proportions of masochism. Sexual masochism has been found to be the most commonly observed paraphilia in women, with approximately 1 in 20 cases.

In ancient cultures

Paraphilic fantasies and behaviors have been registered in multiple old and ancient sources. Voyeurism, bestiality, exhibitionism and necrophilia have been described in the Bible. Sexual relations with animals have also been depicted in cave paintings. Some ancient sex manuals, such as the Kama Sutra (450), Koka Shastra (1150) and Ananga Ranga (1500), discuss biting, marks left after sex, and love blows. Although evidence suggests that paraphilic behaviors existed prior to the Renaissance, it is difficult to ascertain how common they were and how many people had persistent paraphilic fantasies in ancient times. Bestiality has been depicted multiple times in Greek mythology, though the act itself usually involved a deity in zoomorphic form, such as Zeus seducing Europa, Leda and Persephone while disguised as a bull, a swan and a serpent, respectively. Zeus was also depicted, in the form of an eagle, abducting Ganymede, an act that alludes to both bestiality and pederasty. Some fragments of Hittite law include prohibitions of, and permissions to engage in, specific acts of bestiality. Havelock Ellis pointed to an example of sexual masochism in the fifteenth century. The report, written by Giovanni Pico della Mirandola, described a man who could only be aroused by being beaten with a whip dipped in vinegar. Wilhelm Stekel also noted that Rousseau discussed his own masochism in his Confessions. Other similar instances of persistent paraphilic fantasies were reported between 1516 and 1643 by Coelius Sedulius, Rhodiginus, Brundel and Meibomius.

Diagnostic and Statistical Manual of Mental Disorders (DSM)

DSM-I and DSM-II

In American psychiatry, prior to the publication of the DSM-I, paraphilias were classified as cases of "psychopathic personality with pathologic sexuality". The DSM-I (1952) included sexual deviation as a personality disorder of sociopathic subtype. The only diagnostic guidance was that sexual deviation should have been "reserved for deviant sexuality which [was] not symptomatic of more extensive syndromes, such as schizophrenic or obsessional reactions". The specifics of the disorder were to be provided by the clinician as a "supplementary term" to the sexual deviation diagnosis; there were no restrictions in the DSM-I on what this supplementary term could be. Researcher Anil Aggrawal writes that the now-obsolete DSM-I listed examples of supplementary terms for pathological behavior to include "homosexuality, transvestism, pedophilia, fetishism, and sexual sadism, including rape, sexual assault, mutilation." The DSM-II (1968) continued to use the term sexual deviations, but no longer classified them under personality disorders, placing them instead alongside them in a broad category titled "personality disorders and certain other nonpsychotic mental disorders". The types of sexual deviations listed in the DSM-II were: sexual orientation disturbance (homosexuality), fetishism, pedophilia, transvestitism (sic), exhibitionism, voyeurism, sadism, masochism, and "other sexual deviation".
No definition or examples were provided for "other sexual deviation", but the general category of sexual deviation was meant to describe the sexual preference of individuals that was "directed primarily toward objects other than people of opposite sex, toward sexual acts not usually associated with coitus, or toward coitus performed under bizarre circumstances, as in necrophilia, pedophilia, sexual sadism, and fetishism." Except for the removal of homosexuality from the DSM-III onwards, this definition provided a general standard that has guided specific definitions of paraphilias in subsequent DSM editions, up to DSM-IV-TR. DSM-III through DSM-IV The term paraphilia was introduced in the DSM-III (1980) as a subset of the new category of "psychosexual disorders." The DSM-III-R (1987) renamed the broad category to sexual disorders, renamed atypical paraphilia to paraphilia NOS (not otherwise specified), renamed transvestism as transvestic fetishism, added frotteurism, and moved zoophilia to the NOS category. It also provided seven nonexhaustive examples of NOS paraphilias, which besides zoophilia included exhibitionism, necrophilia, partialism, coprophilia, klismaphilia, and urophilia. The DSM-IV (1994) retained the sexual disorders classification for paraphilias, but added an even broader category, "sexual and gender identity disorders," which includes them. The DSM-IV retained the same types of paraphilias listed in DSM-III-R, including the NOS examples, but introduced some changes to the definitions of some specific types. DSM-IV-TR The DSM-IV-TR describes paraphilias as "recurrent, intense sexually arousing fantasies, sexual urges or behaviors generally involving nonhuman objects, the suffering or humiliation of oneself or one's partner, or children or other nonconsenting persons that occur over a period of six months" (criterion A), which "cause clinically significant distress or impairment in social, occupational, or other important areas of functioning" (criterion B). DSM-IV-TR names eight specific paraphilic disorders (exhibitionism, fetishism, frotteurism, pedophilia, sexual masochism, sexual sadism, voyeurism, and transvestic fetishism, plus a residual category, paraphilia—not otherwise specified). Criterion B differs for exhibitionism, frotteurism, and pedophilia to include acting on these urges, and for sadism, acting on these urges with a nonconsenting person. Sexual arousal in association with objects that were designed for sexual purposes is not diagnosable. Some paraphilias may interfere with the capacity for sexual activity with consenting adult partners. In the current version of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR), a paraphilia is not diagnosable as a psychiatric disorder unless it causes distress to the individual or harm to others. DSM-5 The DSM-5 adds a distinction between paraphilias and "paraphilic disorders", stating that paraphilias do not require or justify psychiatric treatment in themselves, and defining paraphilic disorder as "a paraphilia that is currently causing distress or impairment to the individual or a paraphilia whose satisfaction has entailed personal harm, or risk of harm, to others". The DSM-5 Paraphilias Subworkgroup reached a "consensus that paraphilias are not ipso facto psychiatric disorders", and proposed "that the DSM-V make a distinction between paraphilias and paraphilic disorders. 
One would ascertain a paraphilia (according to the nature of the urges, fantasies, or behaviors) but diagnose a paraphilic disorder (on the basis of distress and impairment). In this conception, having a paraphilia would be a necessary but not a sufficient condition for having a paraphilic disorder." The 'Rationale' page of any paraphilia in the electronic DSM-5 draft continues: "This approach leaves intact the distinction between normative and non-normative sexual behavior, which could be important to researchers, but without automatically labeling non-normative sexual behavior as psychopathological. It also eliminates certain logical absurdities in the DSM-IV-TR. In that version, for example, a man cannot be classified as a transvestite—however much he cross-dresses and however sexually exciting that is to him—unless he is unhappy about this activity or impaired by it. This change in viewpoint would be reflected in the diagnostic criteria sets by the addition of the word 'Disorder' to all the paraphilias. Thus, Sexual Sadism would become Sexual Sadism Disorder; Sexual Masochism would become Sexual Masochism Disorder, and so on." Bioethics professor Alice Dreger interpreted these changes as "a subtle way of saying sexual kinks are basically okay – so okay, the sub-work group doesn't actually bother to define paraphilia. But a paraphilic disorder is defined: that's when an atypical sexual interest causes distress or impairment to the individual or harm to others." Interviewed by Dreger, Ray Blanchard, the Chair of the Paraphilias Sub-Work Group, stated, "We tried to go as far as we could in depathologizing mild and harmless paraphilias, while recognizing that severe paraphilias that distress or impair people or cause them to do harm to others are validly regarded as disorders." Charles Allen Moser stated that this change is not really substantive, as the DSM-IV already acknowledged a difference between paraphilias and non-pathological but unusual sexual interests, a distinction that is virtually identical to what was being proposed for DSM-5, and it is a distinction that, in practice, has often been ignored. Linguist Andrew Clinton Hinderliter argued that "including some sexual interests—but not others—in the DSM creates a fundamental asymmetry and communicates a negative value judgment against the sexual interests included," and leaves the paraphilias in a situation similar to ego-dystonic homosexuality, which was removed from the DSM because it was no longer recognized as a mental disorder. The DSM-5 has specific listings for eight paraphilic disorders. These are voyeuristic disorder, exhibitionistic disorder, frotteuristic disorder, sexual masochism disorder, sexual sadism disorder, pedophilic disorder, fetishistic disorder, and transvestic disorder. Other paraphilic disorders can be diagnosed under the Other Specified Paraphilic Disorder or Unspecified Paraphilic Disorder listings, if accompanied by distress or impairment. International Classification of Diseases ICD-6,  ICD-7,  ICD-8 In the ICD-6 (1948) and ICD-7 (1955), a category of "sexual deviation" was listed with "other Pathological personality disorders". In the ICD-8 (1965), "sexual deviations" were categorized as homosexuality, fetishism, pedophilia, transvestism, exhibitionism, voyeurism, sadism and masochism. ICD-9 In the ICD-9 (1975), the category of sexual deviations and disorders was expanded to include transsexualism, sexual dysfunctions, and psychosexual identity disorders. 
The list contained homosexuality, bestiality, pedophilia, transvestism, exhibitionism, transsexualism, disorders of psychosexual identity, frigidity and impotence, and other sexual deviations and disorders (including fetishism, masochism, and sadism). ICD-10 In the ICD-10 (1990), the category "sexual deviations and disorders" was divided into several subcategories. Paraphilias were placed in the subcategory of "sexual preference disorders". The list included fetishism, fetishistic transvestism, exhibitionism, voyeurism, pedophilia, sadomasochism and other disorders of sexual preference (including frotteurism, necrophilia, and zoophilia). Homosexuality was removed from the list, but ego-dystonic sexual orientation was still considered a deviation, which was placed in the subcategory "psychological and behavioural disorders associated with sexual development and orientation". ICD-11 In the ICD-11 (2022), "paraphilia" has been replaced with "paraphilic disorder". A paraphilia or any other arousal pattern by itself no longer constitutes a disorder. Instead, a diagnosis requires that the criteria for a paraphilia be met together with at least one of the following: 1) marked distress associated with the arousal pattern (but not distress that comes solely from rejection or fear of rejection); 2) the person has acted on the arousal pattern towards unwilling others or others considered unable to give consent; 3) the arousal pattern involves a serious risk of injury or death. The list of the paraphilic disorders includes: Exhibitionistic Disorder, Voyeuristic Disorder, Pedophilic Disorder, Coercive Sexual Sadism Disorder, Frotteuristic Disorder, Other Paraphilic Disorder Involving Non-Consenting Individuals, and Other Paraphilic Disorder Involving Solitary Behaviour or Consenting Individuals. Disorders associated with sexual orientation have now been removed from the ICD entirely. Gender issues have been removed from the mental health category and have been placed under "Conditions related to sexual health". Paraphilic disorders Most clinicians and researchers believe that paraphilic sexual interests cannot be altered, although evidence is needed to support this. Instead, the goal of therapy is normally to reduce the person's discomfort with their paraphilia and limit the risk of any harmful, anti-social, or criminal behavior. Both psychotherapeutic and pharmacological methods are available to these ends. Cognitive behavioral therapy can, at times, help people with extreme paraphilic disorders develop strategies to avoid acting on their interests. Patients are taught to identify and cope with factors that make acting on their interests more likely, such as stress. It is currently the only form of psychotherapy for paraphilic disorders supported by randomized double-blind trials, as opposed to case studies and consensus of expert opinion. Medications Pharmacological treatments can help people control their sexual behaviors, but do not change the content of the paraphilia. They are typically combined with cognitive behavioral therapy for best effect. SSRIs Selective serotonin reuptake inhibitors (SSRIs) have been well received and are considered an important pharmacological treatment of severe paraphilic disorders. They are proposed to work by reducing sexual arousal, compulsivity, and depressive symptoms. They have been used with exhibitionists, non-offending pedophiles, and compulsive masturbators. Antiandrogens Antiandrogens are used in more extreme cases. Similar to physical castration, they work by reducing androgen levels, and have thus been described as chemical castration. 
The antiandrogen cyproterone acetate has been shown to substantially reduce sexual fantasies and offending behaviors. Medroxyprogesterone acetate and gonadotropin-releasing hormone agonists (such as leuprorelin) have also been used to lower sex drive. Due to the side effects, the World Federation of Societies of Biological Psychiatry recommends that hormonal treatments only be used when there is a serious risk of sexual violence, or when other methods have failed. Surgical castration has largely been abandoned because these pharmacological alternatives are similarly effective and less invasive. Legality In the United States, since 1990 a significant number of states have passed sexually violent predator laws. Following a series of landmark cases in the Supreme Court of the United States, persons diagnosed with extreme paraphilic disorders, particularly pedophilia (Kansas v. Hendricks, 1997) and exhibitionism (Kansas v. Crane, 2002), and with a history of anti-social behavior and related criminal history (which includes a determination of at least "some lack-of-control" by the person), can be held indefinitely in civil confinement under various state legislation generically known as sexually violent predator laws and the federal Adam Walsh Act (United States v. Comstock, 2010). See also -phil- (list of philias) Courtship disorder Erotic target location error Human sexuality Kink (sexuality) List of paraphilias Lovemap Object sexuality Perversion Psychosexual development Sex and the law Sexual ethics Sexual fetishism Richard von Krafft-Ebing References Citations General bibliography D. Richard Laws, William T. O'Donohue (ed.), Sexual Deviance: Theory, Assessment, and Treatment, 2nd ed., Guilford Press, 2008, Further reading Kenneth Plummer, Sexual stigma: an interactionist account, Routledge, 1975, Elisabeth Roudinesco, Our Dark Side, a History of Perversion, Polity Press, 2009, David Morgan (psychoanalyst), Married to the Eiffel Tower. Married to the Eiffel Tower, a post on the blog Documentary Heaven. External links DSM-IV and DSM-IV-TR list of paraphilias Proposed diagnostic criteria for sex and gender section of DSM5 Sexology
23593
https://en.wikipedia.org/wiki/Pediatrics
Pediatrics
Pediatrics (American English) also spelled paediatrics or pædiatrics (British English), is the branch of medicine that involves the medical care of infants, children, adolescents, and young adults. In the United Kingdom, pediatrics covers many of their youth until the age of 18. The American Academy of Pediatrics recommends people seek pediatric care through the age of 21, but some pediatric subspecialists continue to care for adults up to 25. Worldwide age limits of pediatrics have been trending upward year after year. A medical doctor who specializes in this area is known as a pediatrician, or paediatrician. The word pediatrics and its cognates mean "healer of children", derived from the two Greek words: (pais "child") and (iatros "doctor, healer"). Pediatricians work in clinics, research centers, universities, general hospitals and children's hospitals, including those who practice pediatric subspecialties (e.g. neonatology requires resources available in a NICU). History The earliest mentions of child-specific medical problems appear in the Hippocratic Corpus, published in the fifth century B.C., and the famous Sacred Disease. These publications discussed topics such as childhood epilepsy and premature births. From the first to fourth centuries A.D., Greek philosophers and physicians Celsus, Soranus of Ephesus, Aretaeus, Galen, and Oribasius, also discussed specific illnesses affecting children in their works, such as rashes, epilepsy, and meningitis. Already Hippocrates, Aristotle, Celsus, Soranus, and Galen understood the differences in growing and maturing organisms that necessitated different treatment: ("In general, boys should not be treated in the same way as men"). Some of the oldest traces of pediatrics can be discovered in Ancient India where children's doctors were called kumara bhrtya. Even though some pediatric works existed during this time, they were scarce and rarely published due to a lack of knowledge in pediatric medicine. Sushruta Samhita, an ayurvedic text composed during the sixth century BCE, contains the text about pediatrics. Another ayurvedic text from this period is Kashyapa Samhita. A second century AD manuscript by the Greek physician and gynecologist Soranus of Ephesus dealt with neonatal pediatrics. Byzantine physicians Oribasius, Aëtius of Amida, Alexander Trallianus, and Paulus Aegineta contributed to the field. The Byzantines also built brephotrophia (crêches). Islamic Golden Age writers served as a bridge for Greco-Roman and Byzantine medicine and added ideas of their own, especially Haly Abbas, Yahya Serapion, Abulcasis, Avicenna, and Averroes. The Persian philosopher and physician al-Razi (865–925), sometimes called the father of pediatrics, published a monograph on pediatrics titled Diseases in Children. Also among the first books about pediatrics was Libellus [Opusculum] de aegritudinibus et remediis infantium 1472 ("Little Book on Children Diseases and Treatment"), by the Italian pediatrician Paolo Bagellardo. In sequence came Bartholomäus Metlinger's Ein Regiment der Jungerkinder 1473, Cornelius Roelans (1450–1525) no title Buchlein, or Latin compendium, 1483, and Heinrich von Louffenburg (1391–1460) Versehung des Leibs written in 1429 (published 1491), together form the Pediatric Incunabula, four great medical treatises on children's physiology and pathology. While more information about childhood diseases became available, there was little evidence that children received the same kind of medical care that adults did. 
It was during the seventeenth and eighteenth centuries that medical experts started offering specialized care for children. The Swedish physician Nils Rosén von Rosenstein (1706–1773) is considered to be the founder of modern pediatrics as a medical specialty, while his work The diseases of children, and their remedies (1764) is considered to be "the first modern textbook on the subject". However, it was not until the nineteenth century that medical professionals acknowledged pediatrics as a separate field of medicine. The first pediatric-specific publications appeared between the 1790s and the 1920s. Etymology The term pediatrics was first introduced in English in 1859 by Abraham Jacobi. In 1860, he became "the first dedicated professor of pediatrics in the world." Jacobi is known as the father of American pediatrics because of his many contributions to the field. He received his medical training in Germany and later practiced in New York City. The first generally accepted pediatric hospital is the Hôpital des Enfants Malades (), which opened in Paris in June 1802 on the site of a previous orphanage. From its beginning, this famous hospital accepted patients up to the age of fifteen years, and it continues to this day as the pediatric division of the Necker-Enfants Malades Hospital, created in 1920 by merging with the nearby Necker Hospital, founded in 1778. In other European countries, the Charité (a hospital founded in 1710) in Berlin established a separate Pediatric Pavilion in 1830, followed by similar institutions at Saint Petersburg in 1834, and at Vienna and Breslau (now Wrocław), both in 1837. In 1852 Britain's first pediatric hospital, the Hospital for Sick Children, Great Ormond Street was founded by Charles West. The first Children's hospital in Scotland opened in 1860 in Edinburgh. In the US, the first similar institutions were the Children's Hospital of Philadelphia, which opened in 1855, and then Boston Children's Hospital (1869). Subspecialties in pediatrics were created at the Harriet Lane Home at Johns Hopkins by Edwards A. Park. Differences between adult and pediatric medicine The body size differences are paralleled by maturation changes. The smaller body of an infant or neonate is substantially different physiologically from that of an adult. Congenital defects, genetic variance, and developmental issues are of greater concern to pediatricians than they often are to adult physicians. A common adage is that children are not simply "little adults". The clinician must take into account the immature physiology of the infant or child when considering symptoms, prescribing medications, and diagnosing illnesses. Pediatric physiology directly impacts the pharmacokinetic properties of drugs that enter the body. The absorption, distribution, metabolism, and elimination of medications differ between developing children and grown adults. Despite completed studies and reviews, continual research is needed to better understand how these factors should affect the decisions of healthcare providers when prescribing and administering medications to the pediatric population. Absorption Many drug absorption differences between pediatric and adult populations revolve around the stomach. Neonates and young infants have increased stomach pH due to decreased acid secretion, thereby creating a more basic environment for drugs that are taken by mouth. Acid is essential to degrading certain oral drugs before systemic absorption. 
Therefore, the absorption of these drugs in children is greater than in adults due to decreased breakdown and increased preservation in a less acidic gastric space. Children also have a prolonged gastric emptying time, which slows the rate of drug absorption. Drug absorption also depends on specific enzymes that come in contact with the oral drug as it travels through the body. The supply of these enzymes increases as children continue to develop their gastrointestinal tract. Pediatric patients have immature drug-metabolizing proteins, which leads to decreased metabolism and increased serum concentrations of specific drugs. However, prodrugs experience the opposite effect because enzymes are necessary for allowing their active form to enter systemic circulation. Distribution Percentage of total body water and extracellular fluid volume both decrease as children grow and develop with time. Pediatric patients thus have a larger volume of distribution than adults, which directly affects the dosing of hydrophilic drugs such as beta-lactam antibiotics like ampicillin. Thus, these drugs are administered at greater weight-based doses or with adjusted dosing intervals in children to account for this key difference in body composition. Infants and neonates also have fewer plasma proteins. Thus, highly protein-bound drugs have fewer opportunities for protein binding, leading to increased distribution. Metabolism Drug metabolism primarily occurs via enzymes in the liver and can vary according to which specific enzymes are affected in a specific stage of development. Phase I and Phase II enzymes have different rates of maturation and development, depending on their specific mechanism of action (i.e. oxidation, hydrolysis, acetylation, methylation, etc.). Enzyme capacity, clearance, and half-life are all factors that contribute to metabolism differences between children and adults. Drug metabolism can even differ within the pediatric population, separating neonates and infants from young children. Elimination Drug elimination is primarily facilitated via the liver and kidneys. In infants and young children, the larger relative size of their kidneys leads to increased renal clearance of medications that are eliminated through urine. In preterm neonates and infants, the kidneys are slower to mature and thus are unable to clear as much drug as fully developed kidneys. This can cause unwanted drug build-up, which is why it is important to consider lower doses and longer dosing intervals for this population. Diseases that negatively affect kidney function can also have the same effect and thus warrant similar considerations. Pediatric autonomy in healthcare A major difference between the practice of pediatric and adult medicine is that children, in most jurisdictions and with certain exceptions, cannot make decisions for themselves. The issues of guardianship, privacy, legal responsibility, and informed consent must always be considered in every pediatric procedure. Pediatricians often have to treat the parents and sometimes the family, rather than just the child. Adolescents are in their own legal class, having rights to their own health care decisions in certain circumstances. The concept of legal consent combined with the non-legal consent (assent) of the child when considering treatment options, especially in the face of conditions with poor prognosis or complicated and painful procedures/surgeries, means the pediatrician must take into account the desires of many people, in addition to those of the patient. 
History of pediatric autonomy The term autonomy is traceable to ethical theory and law, where it states that autonomous individuals can make decisions based on their own logic. Hippocrates was the first to use the term in a medical setting. He created a code of ethics for doctors called the Hippocratic Oath that highlighted the importance of putting patients' interests first, making autonomy for patients a top priority in health care.   In ancient times, society did not view pediatric medicine as essential or scientific. Experts considered professional medicine unsuitable for treating children. Children also had no rights. Fathers regarded their children as property, so their children's health decisions were entrusted to them. As a result, mothers, midwives, "wise women", and general practitioners treated the children instead of doctors. Since mothers could not rely on professional medicine to take care of their children, they developed their own methods, such as using alkaline soda ash to remove the vernix at birth and treating teething pain with opium or wine. The absence of proper pediatric care, rights, and laws in health care to prioritize children's health led to many of their deaths. Ancient Greeks and Romans sometimes even killed healthy female babies and infants with deformities since they had no adequate medical treatment and no laws prohibiting infanticide. In the twentieth century, medical experts began to put more emphasis on children's rights. In 1989, in the United Nations Rights of the Child Convention, medical experts developed the Best Interest Standard of Child to prioritize children's rights and best interests. This event marked the onset of pediatric autonomy. In 1995, the American Academy of Pediatrics (AAP) finally acknowledged the Best Interest Standard of a Child as an ethical principle for pediatric decision-making, and it is still being used today. Parental authority and current medical issues The majority of the time, parents have the authority to decide what happens to their child. Philosopher John Locke argued that it is the responsibility of parents to raise their children and that God gave them this authority. In modern society, Jeffrey Blustein, modern philosopher and author of the book Parents and Children: The Ethics of Family, argues that parental authority is granted because the child requires parents to satisfy their needs. He believes that parental autonomy is more about parents providing good care for their children and treating them with respect than parents having rights. The researcher Kyriakos Martakis, MD, MSc, explains that research shows parental influence negatively affects children's ability to form autonomy. However, involving children in the decision-making process allows children to develop their cognitive skills and create their own opinions and, thus, decisions about their health. Parental authority affects the degree of autonomy the child patient has. As a result, in Argentina, the new National Civil and Commercial Code has enacted various changes to the healthcare system to encourage children and adolescents to develop autonomy. It has become more crucial to let children take accountability for their own health decisions. In most cases, the pediatrician, parent, and child work as a team to make the best possible medical decision. The pediatrician has the right to intervene for the child's welfare and seek advice from an ethics committee. 
However, in recent studies, authors have denied that complete autonomy is present in pediatric healthcare. The same moral standards should apply to children as they do to adults. In support of this idea is the concept of paternalism, which negates autonomy when it is in the patient's interests. This concept aims to keep the child's best interests in mind regarding autonomy. Pediatricians can interact with patients and help them make decisions that will benefit them, thus enhancing their autonomy. However, radical theories that question a child's moral worth continue to be debated today. Authors often question whether the treatment and equality of a child and an adult should be the same. Author Tamar Schapiro notes that children need nurturing and cannot exercise the same level of authority as adults. Hence, continuing the discussion on whether children are capable of making important health decisions until this day. Modern advancements According to the Subcommittee of Clinical Ethics of the Argentinean Pediatric Society (SAP), children can understand moral feelings at all ages and can make reasonable decisions based on those feelings. Therefore, children and teens are deemed capable of making their own health decisions when they reach the age of 13. Recently, studies made on the decision-making of children have challenged that age to be 12. Technology has made several modern advancements that contribute to the future development of child autonomy, for example, unsolicited findings (U.F.s) of pediatric exome sequencing. They are findings based on pediatric exome sequencing that explain in greater detail the intellectual disability of a child and predict to what extent it will affect the child in the future. Genetic and intellectual disorders in children make them incapable of making moral decisions, so people look down upon this kind of testing because the child's future autonomy is at risk. It is still in question whether parents should request these types of testing for their children. Medical experts argue that it could endanger the autonomous rights the child will possess in the future. However, the parents contend that genetic testing would benefit the welfare of their children since it would allow them to make better health care decisions. Exome sequencing for children and the decision to grant parents the right to request them is a medically ethical issue that many still debate today. Education requirements Aspiring medical students will need 4 years of undergraduate courses at a college or university, which will get them a BS, BA or other bachelor's degree. After completing college, future pediatricians will need to attend 4 years of medical school (MD/DO/MBBS) and later do 3 more years of residency training, the first year of which is called "internship." After completing the 3 years of residency, physicians are eligible to become certified in pediatrics by passing a rigorous test that deals with medical conditions related to young children. In high school, future pediatricians are required to take basic science classes such as biology, chemistry, physics, algebra, geometry, and calculus. It is also advisable to learn a foreign language (preferably Spanish in the United States) and be involved in high school organizations and extracurricular activities. 
After high school, college students simply need to fulfill the basic science course requirements that most medical schools recommend and will need to prepare to take the MCAT (Medical College Admission Test) in their junior or early senior year in college. Once attending medical school, student courses will focus on basic medical sciences like human anatomy, physiology, chemistry, etc., for the first three years, the second year of which is when medical students start to get hands-on experience with actual patients. Training of pediatricians The training of pediatricians varies considerably across the world. Depending on jurisdiction and university, a medical degree course may be either undergraduate-entry or graduate-entry. The former commonly takes five or six years and has been usual in the Commonwealth. Entrants to graduate-entry courses (as in the US), usually lasting four or five years, have previously completed a three- or four-year university degree, commonly but by no means always in sciences. Medical graduates hold a degree specific to the country and university in and from which they graduated. This degree qualifies that medical practitioner to become licensed or registered under the laws of that particular country, and sometimes of several countries, subject to requirements for "internship" or "conditional registration". Pediatricians must undertake further training in their chosen field. This may take from four to eleven or more years depending on jurisdiction and the degree of specialization. In the United States, a medical school graduate wishing to specialize in pediatrics must undergo a three-year residency composed of outpatient, inpatient, and critical care rotations. Subspecialties within pediatrics require further training in the form of 3-year fellowships. Subspecialties include critical care, gastroenterology, neurology, infectious disease, hematology/oncology, rheumatology, pulmonology, child abuse, emergency medicine, endocrinology, neonatology, and others. In most jurisdictions, entry-level degrees are common to all branches of the medical profession, but in some jurisdictions, specialization in pediatrics may begin before completion of this degree. In some jurisdictions, pediatric training is begun immediately following the completion of entry-level training. In other jurisdictions, junior medical doctors must undertake generalist (unstreamed) training for a number of years before commencing pediatric (or any other) specialization. Specialist training is often largely under the control of 'pediatric organizations (see below) rather than universities and depends on the jurisdiction. 
Subspecialties Subspecialties of pediatrics include: (not an exhaustive list) Addiction medicine (multidisciplinary) Adolescent medicine Child abuse pediatrics Clinical genetics Clinical informatics Developmental-behavioral pediatrics Headache medicine Hospital medicine Medical toxicology Metabolic medicine Neonatology/Perinatology Pain medicine (multidisciplinary) Palliative care (multidisciplinary) Pediatric allergy and immunology Pediatric cardiology Pediatric cardiac critical care Pediatric critical care Neurocritical care Pediatric cardiac critical care Pediatric emergency medicine Pediatric endocrinology Pediatric gastroenterology Transplant hepatology Pediatric hematology Pediatric infectious disease Pediatric nephrology Pediatric oncology Pediatric neuro-oncology Pediatric pulmonology Primary care Pediatric rheumatology Sleep medicine (multidisciplinary) Social pediatrics Sports medicine Other specialties that care for children (not an exhaustive list) Child neurology Addiction medicine (multidisciplinary) Brain injury medicine Clinical neurophysiology Epilepsy Headache medicine Neurocritical care Neuroimmunology Neuromuscular medicine Pain medicine (multidisciplinary) Palliative care (multidisciplinary) Pediatric neuro-oncology Sleep medicine (multidisciplinary) Child and adolescent psychiatry, subspecialty of psychiatry Neurodevelopmental disabilities Pediatric anesthesiology, subspecialty of anesthesiology Pediatric dentistry, subspecialty of dentistry Pediatric dermatology, subspecialty of dermatology Pediatric gynecology Pediatric neurosurgery, subspecialty of neurosurgery Pediatric ophthalmology, subspecialty of ophthalmology Pediatric orthopedic surgery, subspecialty of orthopedic surgery Pediatric otolaryngology, subspecialty of otolaryngology Pediatric plastic surgery, subspecialty of plastic surgery Pediatric radiology, subspecialty of radiology Pediatric rehabilitation medicine, subspecialty of physical medicine and rehabilitation Pediatric surgery, subspecialty of general surgery Pediatric urology, subspecialty of urology See also American Academy of Pediatrics American Osteopathic Board of Pediatrics Center on Media and Child Health (CMCH) Children's hospital List of pediatric organizations List of pediatrics journals Medical specialty Pediatric Oncall Pain in babies Royal College of Paediatrics and Child Health Pediatric environmental health References Further reading BMC Pediatrics - open access Clinical Pediatrics Developmental Review - partial open access JAMA Pediatrics The Journal of Pediatrics - partial open access External links Pediatrics Directory at Curlie Pediatric Health Directory at OpenMD Childhood
23597
https://en.wikipedia.org/wiki/Physiology
Physiology
Physiology (; ) is the scientific study of functions and mechanisms in a living system. As a subdiscipline of biology, physiology focuses on how organisms, organ systems, individual organs, cells, and biomolecules carry out chemical and physical functions in a living system. According to the classes of organisms, the field can be divided into medical physiology, animal physiology, plant physiology, cell physiology, and comparative physiology. Central to physiological functioning are biophysical and biochemical processes, homeostatic control mechanisms, and communication between cells. Physiological state is the condition of normal function. In contrast, pathological state refers to abnormal conditions, including human diseases. The Nobel Prize in Physiology or Medicine is awarded by the Nobel Assembly at the Karolinska Institute for exceptional scientific achievements in physiology related to the field of medicine. Foundations Because physiology focuses on the functions and mechanisms of living organisms at all levels, from the molecular and cellular level to the level of whole organisms and populations, its foundations span a range of key disciplines: Anatomy is the study of the structure and organization of living organisms, from the microscopic level of cells and tissues to the macroscopic level of organs and systems. Anatomical knowledge is important in physiology because the structure and function of an organism are often dictated by one another. Biochemistry is the study of the chemical processes and substances that occur within living organisms. Knowledge of biochemistry provides the foundation for understanding cellular and molecular processes that are essential to the functioning of organisms. Biophysics is the study of the physical properties of living organisms and their interactions with their environment. It helps to explain how organisms sense and respond to different stimuli, such as light, sound, and temperature, and how they maintain homeostasis, or a stable internal environment. Genetics is the study of heredity and the variation of traits within and between populations. It provides insights into the genetic basis of physiological processes and the ways in which genes interact with the environment to influence an organism's phenotype. Evolutionary biology is the study of the processes that have led to the diversity of life on Earth. It helps to explain the origin and adaptive significance of physiological processes and the ways in which organisms have evolved to cope with their environment. Subdisciplines There are many ways to categorize the subdisciplines of physiology: based on the taxa studied: human physiology, animal physiology, plant physiology, microbial physiology, viral physiology based on the level of organization: cell physiology, molecular physiology, systems physiology, organismal physiology, ecological physiology, integrative physiology based on the process that causes physiological variation: developmental physiology, environmental physiology, evolutionary physiology based on the ultimate goals of the research: applied physiology (e.g., medical physiology), non-applied (e.g., comparative physiology) Subdisciplines by level of organisation Cell physiology Although there are differences between animal, plant, and microbial cells, the basic physiological functions of cells can be divided into the processes of cell division, cell signaling, cell growth, and cell metabolism. 
Subdisciplines by taxa Plant physiology Plant physiology is a subdiscipline of botany concerned with the functioning of plants. Closely related fields include plant morphology, plant ecology, phytochemistry, cell biology, genetics, biophysics, and molecular biology. Fundamental processes of plant physiology include photosynthesis, respiration, plant nutrition, tropisms, nastic movements, photoperiodism, photomorphogenesis, circadian rhythms, seed germination, dormancy, and stomata function and transpiration. Absorption of water by roots, production of food in the leaves, and growth of shoots towards light are examples of plant physiology. Animal physiology Human physiology Human physiology is the study of how the human body's systems and functions work together to maintain a stable internal environment. It includes the study of the nervous, endocrine, cardiovascular, respiratory, digestive, and urinary systems, as well as cellular and exercise physiology. Understanding human physiology is essential for diagnosing and treating health conditions and promoting overall wellbeing. It seeks to understand the mechanisms that work to keep the human body alive and functioning, through scientific enquiry into the nature of mechanical, physical, and biochemical functions of humans, their organs, and the cells of which they are composed. The principal level of focus of physiology is at the level of organs and systems within systems. The endocrine and nervous systems play major roles in the reception and transmission of signals that integrate function in animals. Homeostasis is a major aspect with regard to such interactions within plants as well as animals. The biological basis of the study of physiology, integration refers to the overlap of many functions of the systems of the human body, as well as its accompanied form. It is achieved through communication that occurs in a variety of ways, both electrical and chemical. Changes in physiology can impact the mental functions of individuals. Examples of this would be the effects of certain medications or toxic levels of substances. Change in behavior as a result of these substances is often used to assess the health of individuals. Much of the foundation of knowledge in human physiology was provided by animal experimentation. Due to the frequent connection between form and function, physiology and anatomy are intrinsically linked and are studied in tandem as part of a medical curriculum. Subdisciplines by research objective Comparative physiology Involving evolutionary physiology and environmental physiology, comparative physiology considers the diversity of functional characteristics across organisms. History The classical era The study of human physiology as a medical field originates in classical Greece, at the time of Hippocrates (late 5th century BC). Outside of Western tradition, early forms of physiology or anatomy can be reconstructed as having been present at around the same time in China, India and elsewhere. Hippocrates incorporated the theory of humorism, which consisted of four basic substances: earth, water, air and fire. Each substance is known for having a corresponding humor: black bile, phlegm, blood, and yellow bile, respectively. Hippocrates also noted some emotional connections to the four humors, on which Galen would later expand. The critical thinking of Aristotle and his emphasis on the relationship between structure and function marked the beginning of physiology in Ancient Greece. 
Like Hippocrates, Aristotle took to the humoral theory of disease, which also consisted of four primary qualities in life: hot, cold, wet and dry. Galen (–200 AD) was the first to use experiments to probe the functions of the body. Unlike Hippocrates, Galen argued that humoral imbalances can be located in specific organs, including the entire body. His modification of this theory better equipped doctors to make more precise diagnoses. Galen also played off of Hippocrates' idea that emotions were also tied to the humors, and added the notion of temperaments: sanguine corresponds with blood; phlegmatic is tied to phlegm; yellow bile is connected to choleric; and black bile corresponds with melancholy. Galen also saw the human body consisting of three connected systems: the brain and nerves, which are responsible for thoughts and sensations; the heart and arteries, which give life; and the liver and veins, which can be attributed to nutrition and growth. Galen was also the founder of experimental physiology. And for the next 1,400 years, Galenic physiology was a powerful and influential tool in medicine. Early modern period Jean Fernel (1497–1558), a French physician, introduced the term "physiology". Galen, Ibn al-Nafis, Michael Servetus, Realdo Colombo, Amato Lusitano and William Harvey, are credited as making important discoveries in the circulation of the blood. Santorio Santorio in 1610s was the first to use a device to measure the pulse rate (the pulsilogium), and a thermoscope to measure temperature. In 1791 Luigi Galvani described the role of electricity in nerves of dissected frogs. In 1811, César Julien Jean Legallois studied respiration in animal dissection and lesions and found the center of respiration in the medulla oblongata. In the same year, Charles Bell finished work on what would later become known as the Bell–Magendie law, which compared functional differences between dorsal and ventral roots of the spinal cord. In 1824, François Magendie described the sensory roots and produced the first evidence of the cerebellum's role in equilibration to complete the Bell–Magendie law. In the 1820s, the French physiologist Henri Milne-Edwards introduced the notion of physiological division of labor, which allowed to "compare and study living things as if they were machines created by the industry of man." Inspired in the work of Adam Smith, Milne-Edwards wrote that the "body of all living beings, whether animal or plant, resembles a factory ... where the organs, comparable to workers, work incessantly to produce the phenomena that constitute the life of the individual." In more differentiated organisms, the functional labor could be apportioned between different instruments or systems (called by him as appareils). In 1858, Joseph Lister studied the cause of blood coagulation and inflammation that resulted after previous injuries and surgical wounds. He later discovered and implemented antiseptics in the operating room, and as a result, decreased death rate from surgery by a substantial amount. The Physiological Society was founded in London in 1876 as a dining club. The American Physiological Society (APS) is a nonprofit organization that was founded in 1887. The Society is, "devoted to fostering education, scientific research, and dissemination of information in the physiological sciences." In 1891, Ivan Pavlov performed research on "conditional responses" that involved dogs' saliva production in response to a bell and visual stimuli. 
In the 19th century, physiological knowledge began to accumulate at a rapid rate, in particular with the 1838 appearance of the Cell theory of Matthias Schleiden and Theodor Schwann. It radically stated that organisms are made up of units called cells. Claude Bernard's (1813–1878) further discoveries ultimately led to his concept of milieu interieur (internal environment), which would later be taken up and championed as "homeostasis" by American physiologist Walter B. Cannon in 1929. By homeostasis, Cannon meant "the maintenance of steady states in the body and the physiological processes through which they are regulated." In other words, the body's ability to regulate its internal environment. William Beaumont was the first American to utilize the practical application of physiology. Nineteenth-century physiologists such as Michael Foster, Max Verworn, and Alfred Binet, based on Haeckel's ideas, elaborated what came to be called "general physiology", a unified science of life based on the cell actions, later renamed in the 20th century as cell biology. Late modern period In the 20th century, biologists became interested in how organisms other than human beings function, eventually spawning the fields of comparative physiology and ecophysiology. Major figures in these fields include Knut Schmidt-Nielsen and George Bartholomew. Most recently, evolutionary physiology has become a distinct subdiscipline. In 1920, August Krogh won the Nobel Prize for discovering how, in capillaries, blood flow is regulated. In 1954, Andrew Huxley and Hugh Huxley, alongside their research team, discovered the sliding filaments in skeletal muscle, known today as the sliding filament theory. Recently, there have been intense debates about the vitality of physiology as a discipline (Is it dead or alive?). If physiology is perhaps less visible nowadays than during the golden age of the 19th century, it is in large part because the field has given birth to some of the most active domains of today's biological sciences, such as neuroscience, endocrinology, and immunology. Furthermore, physiology is still often seen as an integrative discipline, which can put together into a coherent framework data coming from various different domains. Notable physiologists Women in physiology Initially, women were largely excluded from official involvement in any physiological society. The American Physiological Society, for example, was founded in 1887 and included only men in its ranks. In 1902, the American Physiological Society elected Ida Hyde as the first female member of the society. Hyde, a representative of the American Association of University Women and a global advocate for gender equality in education, attempted to promote gender equality in every aspect of science and medicine. Soon thereafter, in 1913, J.S. Haldane proposed that women be allowed to formally join The Physiological Society, which had been founded in 1876. On 3 July 1915, six women were officially admitted: Florence Buchanan, Winifred Cullis, Ruth C. Skelton, Sarah C. M. Sowton, Constance Leetham Terry, and Enid M. Tribe. The centenary of the election of women was celebrated in 2015 with the publication of the book "Women Physiologists: Centenary Celebrations And Beyond For The Physiological Society." () Prominent women physiologists include: Bodil Schmidt-Nielsen, the first woman president of the American Physiological Society in 1975. 
Gerty Cori, along with husband Carl Cori, received the Nobel Prize in Physiology or Medicine in 1947 for their discovery of the phosphate-containing form of glucose (glucose 1-phosphate) produced during the breakdown of glycogen, as well as its function within eukaryotic metabolic mechanisms for energy production. Moreover, they discovered the Cori cycle, also known as the lactic acid cycle, which describes how muscle tissue converts glycogen into lactic acid via lactic acid fermentation. Barbara McClintock was awarded the 1983 Nobel Prize in Physiology or Medicine for the discovery of genetic transposition. McClintock remains the only woman to have received an unshared Nobel Prize in Physiology or Medicine. Gertrude Elion, along with George Hitchings and Sir James Black, received the Nobel Prize for Physiology or Medicine in 1988 for their development of drugs employed in the treatment of several major diseases, such as leukemia, some autoimmune disorders, gout, malaria, and viral herpes. Linda B. Buck, along with Richard Axel, received the Nobel Prize in Physiology or Medicine in 2004 for their discovery of odorant receptors and the complex organization of the olfactory system. Françoise Barré-Sinoussi, along with Luc Montagnier, received the Nobel Prize in Physiology or Medicine in 2008 for their work on the identification of the Human Immunodeficiency Virus (HIV), the cause of Acquired Immunodeficiency Syndrome (AIDS). Elizabeth Blackburn, along with Carol W. Greider and Jack W. Szostak, was awarded the 2009 Nobel Prize for Physiology or Medicine for the discovery of the genetic composition and function of telomeres and the enzyme called telomerase. See also Outline of physiology Biochemistry Biophysics Cytoarchitecture Defense physiology Ecophysiology Exercise physiology Fish physiology Insect physiology Human body Molecular biology Metabolome Neurophysiology Pathophysiology Pharmacology Physiome American Physiological Society International Union of Physiological Sciences The Physiological Society Brazilian Society of Physiology References Bibliography Human physiology Widmaier, E.P., Raff, H., Strang, K.T. Vander's Human Physiology. 11th Edition, McGraw-Hill, 2009. Marieb, E.N. Essentials of Human Anatomy and Physiology. 10th Edition, Benjamin Cummings, 2012. Animal physiology Hill, R.W., Wyse, G.A., Anderson, M. Animal Physiology, 3rd ed. Sinauer Associates, Sunderland, 2012. Moyes, C.D., Schulte, P.M. Principles of Animal Physiology, second edition. Pearson/Benjamin Cummings. Boston, MA, 2008. Randall, D., Burggren, W., and French, K. Eckert Animal Physiology: Mechanism and Adaptation, 5th Edition. W.H. Freeman and Company, 2002. Schmidt-Nielsen, K. Animal Physiology: Adaptation and Environment. Cambridge & New York: Cambridge University Press, 1997. Withers, P.C. Comparative animal physiology. Saunders College Publishing, New York, 1992. Plant physiology Larcher, W. Physiological plant ecology (4th ed.). Springer, 2001. Salisbury, F.B., Ross, C.W. Plant physiology. Brooks/Cole Pub Co., 1992. Taiz, L., Zieger, E. Plant Physiology (5th ed.), Sunderland, Massachusetts: Sinauer, 2010. Fungal physiology Griffin, D.H. Fungal Physiology, Second Edition. Wiley-Liss, New York, 1994. Protistan physiology Levandowsky, M. Physiological Adaptations of Protists. In: Cell physiology sourcebook: essentials of membrane biophysics. Amsterdam; Boston: Elsevier/AP, 2012. Levandowsky, M., Hutner, S.H. (eds). Biochemistry and physiology of protozoa. Volumes 1, 2, and 3. Academic Press: New York, NY, 1979; 2nd ed. Laybourn-Parry J. 
A Functional Biology of Free-Living Protozoa. Berkeley, California: University of California Press; 1984. Algal physiology Lobban, C.S., Harrison, P.J. Seaweed ecology and physiology. Cambridge University Press, 1997. Stewart, W. D. P. (ed.). Algal Physiology and Biochemistry. Blackwell Scientific Publications, Oxford, 1974. Bacterial physiology El-Sharoud, W. (ed.). Bacterial Physiology: A Molecular Approach. Springer-Verlag, Berlin-Heidelberg, 2008. Kim, B.H., Gadd, M.G. Bacterial Physiology and Metabolism. Cambridge, 2008. Moat, A.G., Foster, J.W., Spector, M.P. Microbial Physiology, 4th ed. Wiley-Liss, Inc. New York, NY, 2002. External links physiologyINFO.org – public information site sponsored by the American Physiological Society Branches of biology
23601
https://en.wikipedia.org/wiki/Pi
Pi
The number (; spelled out as "pi") is a mathematical constant that is the ratio of a circle's circumference to its diameter, approximately equal to 3.14159. The number appears in many formulae across mathematics and physics. It is an irrational number, meaning that it cannot be expressed exactly as a ratio of two integers, although fractions such as are commonly used to approximate it. Consequently, its decimal representation never ends, nor enters a permanently repeating pattern. It is a transcendental number, meaning that it cannot be a solution of an equation involving only finite sums, products, powers, and integers. The transcendence of implies that it is impossible to solve the ancient challenge of squaring the circle with a compass and straightedge. The decimal digits of appear to be randomly distributed, but no proof of this conjecture has been found. For thousands of years, mathematicians have attempted to extend their understanding of , sometimes by computing its value to a high degree of accuracy. Ancient civilizations, including the Egyptians and Babylonians, required fairly accurate approximations of for practical computations. Around 250BC, the Greek mathematician Archimedes created an algorithm to approximate with arbitrary accuracy. In the 5th century AD, Chinese mathematicians approximated to seven digits, while Indian mathematicians made a five-digit approximation, both using geometrical techniques. The first computational formula for , based on infinite series, was discovered a millennium later. The earliest known use of the Greek letter π to represent the ratio of a circle's circumference to its diameter was by the Welsh mathematician William Jones in 1706. The invention of calculus soon led to the calculation of hundreds of digits of , enough for all practical scientific computations. Nevertheless, in the 20th and 21st centuries, mathematicians and computer scientists have pursued new approaches that, when combined with increasing computational power, extended the decimal representation of to many trillions of digits. These computations are motivated by the development of efficient algorithms to calculate numeric series, as well as the human quest to break records. The extensive computations involved have also been used to test supercomputers as well as stress testing consumer computer hardware. Because its definition relates to the circle, is found in many formulae in trigonometry and geometry, especially those concerning circles, ellipses and spheres. It is also found in formulae from other topics in science, such as cosmology, fractals, thermodynamics, mechanics, and electromagnetism. It also appears in areas having little to do with geometry, such as number theory and statistics, and in modern mathematical analysis can be defined without any reference to geometry. The ubiquity of makes it one of the most widely known mathematical constants inside and outside of science. Several books devoted to have been published, and record-setting calculations of the digits of often result in news headlines. Fundamentals Name The symbol used by mathematicians to represent the ratio of a circle's circumference to its diameter is the lowercase Greek letter , sometimes spelled out as pi. In English, is pronounced as "pie" ( ). In mathematical use, the lowercase letter is distinguished from its capitalized and enlarged counterpart , which denotes a product of a sequence, analogous to how denotes summation. The choice of the symbol is discussed in the section Adoption of the symbol . 
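The lead's point that simple fractions can only approximate pi is easy to see numerically. The following minimal Python sketch compares a few well-known fractions (22/7, 333/106 and 355/113, which are successive convergents of pi's continued fraction) against the double-precision value of pi; the choice of these fractions and the use of the standard math and fractions modules are assumptions of this illustration, not part of the article.

```python
import math
from fractions import Fraction

# Classic rational approximations of pi; 22/7, 333/106 and 355/113 are
# successive convergents of its continued fraction.
for approx in (Fraction(22, 7), Fraction(333, 106), Fraction(355, 113)):
    error = abs(float(approx) - math.pi)
    print(f"{approx} = {float(approx):.7f}, |error| ~ {error:.1e}")
```

Running this shows 22/7 off by about 1.3e-3, 333/106 by about 8.3e-5, and 355/113 by only about 2.7e-7, which is why the last of these was prized by early calculators.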
Definition is commonly defined as the ratio of a circle's circumference to its diameter : The ratio is constant, regardless of the circle's size. For example, if a circle has twice the diameter of another circle, it will also have twice the circumference, preserving the ratio . This definition of implicitly makes use of flat (Euclidean) geometry; although the notion of a circle can be extended to any curve (non-Euclidean) geometry, these new circles will no longer satisfy the formula . Here, the circumference of a circle is the arc length around the perimeter of the circle, a quantity which can be formally defined independently of geometry using limits—a concept in calculus. For example, one may directly compute the arc length of the top half of the unit circle, given in Cartesian coordinates by the equation , as the integral: An integral such as this was proposed as a definition of by Karl Weierstrass, who defined it directly as an integral in 1841. Integration is no longer commonly used in a first analytical definition because, as explains, differential calculus typically precedes integral calculus in the university curriculum, so it is desirable to have a definition of that does not rely on the latter. One such definition, due to Richard Baltzer and popularized by Edmund Landau, is the following: is twice the smallest positive number at which the cosine function equals 0. is also the smallest positive number at which the sine function equals zero, and the difference between consecutive zeroes of the sine function. The cosine and sine can be defined independently of geometry as a power series, or as the solution of a differential equation. In a similar spirit, can be defined using properties of the complex exponential, , of a complex variable . Like the cosine, the complex exponential can be defined in one of several ways. The set of complex numbers at which is equal to one is then an (imaginary) arithmetic progression of the form: and there is a unique positive real number with this property. A variation on the same idea, making use of sophisticated mathematical concepts of topology and algebra, is the following theorem: there is a unique (up to automorphism) continuous isomorphism from the group R/Z of real numbers under addition modulo integers (the circle group), onto the multiplicative group of complex numbers of absolute value one. The number is then defined as half the magnitude of the derivative of this homomorphism. Irrationality and normality is an irrational number, meaning that it cannot be written as the ratio of two integers. Fractions such as and are commonly used to approximate , but no common fraction (ratio of whole numbers) can be its exact value. Because is irrational, it has an infinite number of digits in its decimal representation, and does not settle into an infinitely repeating pattern of digits. There are several proofs that is irrational; they generally require calculus and rely on the reductio ad absurdum technique. The degree to which can be approximated by rational numbers (called the irrationality measure) is not precisely known; estimates have established that the irrationality measure is larger than the measure of or but smaller than the measure of Liouville numbers. The digits of have no apparent pattern and have passed tests for statistical randomness, including tests for normality; a number of infinite length is called normal when all possible sequences of digits (of any given length) appear equally often. 
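To make the idea of a digit-frequency check concrete, here is a minimal Python sketch that tallies how often each digit occurs in the first 50 decimal digits of pi; the hard-coded 50-digit string and the use of collections.Counter are assumptions of this illustration. Fifty digits are far too few to bear on normality, which is a statement about limiting frequencies over infinitely many digits; the serious analyses described next use billions of digits.

```python
from collections import Counter

# First 50 decimal digits of pi (truncated, not rounded), hard-coded
# purely for illustration; real statistical tests use vastly more digits.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

counts = Counter(PI_DIGITS)
for digit in "0123456789":
    print(digit, counts[digit])
```

Even in this tiny sample the counts are visibly uneven, which is expected and says nothing against normality.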
The conjecture that is normal has not been proven or disproven. Since the advent of computers, a large number of digits of have been available on which to perform statistical analysis. Yasumasa Kanada has performed detailed statistical analyses on the decimal digits of , and found them consistent with normality; for example, the frequencies of the ten digits 0 to 9 were subjected to statistical significance tests, and no evidence of a pattern was found. Any random sequence of digits contains arbitrarily long subsequences that appear non-random, by the infinite monkey theorem. Thus, because the sequence of 's digits passes statistical tests for randomness, it contains some sequences of digits that may appear non-random, such as a sequence of six consecutive 9s that begins at the 762nd decimal place of the decimal representation of . This is also called the "Feynman point" in mathematical folklore, after Richard Feynman, although no connection to Feynman is known. Transcendence In addition to being irrational, is also a transcendental number, which means that it is not the solution of any non-constant polynomial equation with rational coefficients, such as . The transcendence of has two important consequences: First, cannot be expressed using any finite combination of rational numbers and square roots or n-th roots (such as or ). Second, since no transcendental number can be constructed with compass and straightedge, it is not possible to "square the circle". In other words, it is impossible to construct, using compass and straightedge alone, a square whose area is exactly equal to the area of a given circle. Squaring a circle was one of the important geometry problems of the classical antiquity. Amateur mathematicians in modern times have sometimes attempted to square the circle and claim success—despite the fact that it is mathematically impossible. Continued fractions As an irrational number, cannot be represented as a common fraction. But every number, including , can be represented by an infinite series of nested fractions, called a continued fraction: Truncating the continued fraction at any point yields a rational approximation for ; the first four of these are , , , and . These numbers are among the best-known and most widely used historical approximations of the constant. Each approximation generated in this way is a best rational approximation; that is, each is closer to than any other fraction with the same or a smaller denominator. Because is transcendental, it is by definition not algebraic and so cannot be a quadratic irrational. Therefore, cannot have a periodic continued fraction. Although the simple continued fraction for (shown above) also does not exhibit any other obvious pattern, several generalized continued fractions do, such as: The middle of these is due to the mid-17th century mathematician William Brouncker, see § Brouncker's formula. Approximate value and digits Some approximations of pi include: Integers: 3 Fractions: Approximate fractions include (in order of increasing accuracy) , , , , , , and . (List is selected terms from and .) Digits: The first 50 decimal digits are (see ) Digits in other number systems The first 48 binary (base 2) digits (called bits) are (see ) The first 38 digits in ternary (base 3) are (see ) The first 20 digits in hexadecimal (base 16) are (see ) The first five sexagesimal (base 60) digits are 3;8,29,44,0,47 (see ) Complex numbers and Euler's identity Any complex number, say , can be expressed using a pair of real numbers. 
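The best rational approximations mentioned above (22/7, 333/106, 355/113, and so on) can be generated mechanically as continued-fraction convergents. The Python sketch below does this with only the standard library; the 50-digit decimal string is supplied here as a stand-in for a high-precision value of pi and is an editorial assumption rather than part of the article.

```python
from fractions import Fraction
from decimal import Decimal

# First 50 decimal digits of pi, treated as an exact rational stand-in.
PI_50 = Fraction(Decimal("3.14159265358979323846264338327950288419716939937510"))

def continued_fraction_convergents(x, n):
    """Yield the first n continued-fraction convergents of the rational x."""
    h_prev, h = 1, int(x)          # numerators
    k_prev, k = 0, 1               # denominators
    yield Fraction(h, k)
    x -= int(x)
    for _ in range(n - 1):
        if x == 0:
            return
        x = 1 / x
        a = int(x)
        x -= a
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
        yield Fraction(h, k)

for approx in continued_fraction_convergents(PI_50, 5):
    print(approx, float(approx))
# 3, 22/7, 333/106, 355/113, 103993/33102
```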
In the polar coordinate system, one number (radius or ) is used to represent 's distance from the origin of the complex plane, and the other (angle or ) the counter-clockwise rotation from the positive real line: where is the imaginary unit satisfying . The frequent appearance of in complex analysis can be related to the behaviour of the exponential function of a complex variable, described by Euler's formula: where the constant is the base of the natural logarithm. This formula establishes a correspondence between imaginary powers of and points on the unit circle centred at the origin of the complex plane. Setting in Euler's formula results in Euler's identity, celebrated in mathematics due to it containing five important mathematical constants: There are different complex numbers satisfying , and these are called the "-th roots of unity" and are given by the formula: History Antiquity The best-known approximations to dating before the Common Era were accurate to two decimal places; this was improved upon in Chinese mathematics in particular by the mid-first millennium, to an accuracy of seven decimal places. After this, no further progress was made until the late medieval period. The earliest written approximations of are found in Babylon and Egypt, both within one percent of the true value. In Babylon, a clay tablet dated 1900–1600 BC has a geometrical statement that, by implication, treats as  = 3.125. In Egypt, the Rhind Papyrus, dated around 1650 BC but copied from a document dated to 1850 BC, has a formula for the area of a circle that treats as . Although some pyramidologists have theorized that the Great Pyramid of Giza was built with proportions related to , this theory is not widely accepted by scholars. In the Shulba Sutras of Indian mathematics, dating to an oral tradition from the first or second millennium BC, approximations are given which have been variously interpreted as approximately 3.08831, 3.08833, 3.004, 3, or 3.125. Polygon approximation era The first recorded algorithm for rigorously calculating the value of was a geometrical approach using polygons, devised around 250 BC by the Greek mathematician Archimedes, implementing the method of exhaustion. This polygonal algorithm dominated for over 1,000 years, and as a result is sometimes referred to as Archimedes's constant. Archimedes computed upper and lower bounds of by drawing a regular hexagon inside and outside a circle, and successively doubling the number of sides until he reached a 96-sided regular polygon. By calculating the perimeters of these polygons, he proved that (that is, ). Archimedes' upper bound of may have led to a widespread popular belief that is equal to . Around 150 AD, Greek-Roman scientist Ptolemy, in his Almagest, gave a value for of 3.1416, which he may have obtained from Archimedes or from Apollonius of Perga. Mathematicians using polygonal algorithms reached 39 digits of in 1630, a record only broken in 1699 when infinite series were used to reach 71 digits. In ancient China, values for included 3.1547 (around 1 AD), (100 AD, approximately 3.1623), and (3rd century, approximately 3.1556). Around 265 AD, the Wei Kingdom mathematician Liu Hui created a polygon-based iterative algorithm and used it with a 3,072-sided polygon to obtain a value of of 3.1416. Liu later invented a faster method of calculating and obtained a value of 3.14 with a 96-sided polygon, by taking advantage of the fact that the differences in area of successive polygons form a geometric series with a factor of 4. 
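The perimeter-doubling at the heart of the polygon methods just described can be stated as a pair of mean recurrences: the circumscribed perimeter is replaced by the harmonic mean of the current pair, and the inscribed perimeter by the geometric mean of the old inscribed value and the new circumscribed one. The Python sketch below is a modern restatement under those assumptions, not Archimedes' own arithmetic, starting from hexagons around a circle of diameter 1.

```python
import math

# Perimeters of the circumscribed (P) and inscribed (p) regular hexagons
# of a circle of diameter 1, whose circumference is exactly pi.
P, p, sides = 2 * math.sqrt(3), 3.0, 6

for _ in range(4):                 # 6 -> 12 -> 24 -> 48 -> 96 sides
    P = 2 * P * p / (P + p)        # circumscribed 2n-gon: harmonic mean
    p = math.sqrt(P * p)           # inscribed 2n-gon: geometric mean
    sides *= 2
    print(f"{sides:3d}-gon:  {p:.5f} < pi < {P:.5f}")
# The 96-gon bounds are consistent with Archimedes' 223/71 < pi < 22/7.
```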
The Chinese mathematician Zu Chongzhi, around 480 AD, calculated that and suggested the approximations and , which he termed the Milü ("close ratio") and Yuelü ("approximate ratio"), respectively, using Liu Hui's algorithm applied to a 12,288-sided polygon. With a correct value for its first seven decimal digits, this value remained the most accurate approximation of available for the next 800 years. The Indian astronomer Aryabhata used a value of 3.1416 in his Āryabhaṭīya (499 AD). Fibonacci computed 3.1418 using a polygonal method, independent of Archimedes. Italian author Dante apparently employed the value . The Persian astronomer Jamshīd al-Kāshī produced nine sexagesimal digits, roughly the equivalent of 16 decimal digits, in 1424, using a polygon with sides, which stood as the world record for about 180 years. French mathematician François Viète in 1579 achieved nine digits with a polygon of sides. Flemish mathematician Adriaan van Roomen arrived at 15 decimal places in 1593. In 1596, Dutch mathematician Ludolph van Ceulen reached 20 digits, a record he later increased to 35 digits (as a result, was called the "Ludolphian number" in Germany until the early 20th century). Dutch scientist Willebrord Snellius reached 34 digits in 1621, and Austrian astronomer Christoph Grienberger arrived at 38 digits in 1630 using 10^40 sides. Christiaan Huygens was able to arrive at 10 decimal places in 1654 using a slightly different method equivalent to Richardson extrapolation.
Infinite series
The calculation of was revolutionized by the development of infinite series techniques in the 16th and 17th centuries. An infinite series is the sum of the terms of an infinite sequence. Infinite series allowed mathematicians to compute with much greater precision than Archimedes and others who used geometrical techniques. Although infinite series were exploited for most notably by European mathematicians such as James Gregory and Gottfried Wilhelm Leibniz, the approach also appeared in the Kerala school sometime in the 14th or 15th century. Around 1500 AD, a written description of an infinite series that could be used to compute was laid out in Sanskrit verse in Tantrasamgraha by Nilakantha Somayaji. The series are presented without proof, but proofs are presented in a later work, Yuktibhāṣā, from around 1530 AD. Several infinite series are described, including series for sine (which Nilakantha attributes to Madhava of Sangamagrama), cosine, and arctangent, which are now sometimes referred to as Madhava series. The series for arctangent is sometimes called Gregory's series or the Gregory–Leibniz series. Madhava used infinite series to estimate to 11 digits around 1400. In 1593, François Viète published what is now known as Viète's formula, an infinite product (rather than an infinite sum, which is more typically used in calculations): In 1655, John Wallis published what is now known as the Wallis product, also an infinite product: In the 1660s, the English scientist Isaac Newton and German mathematician Gottfried Wilhelm Leibniz discovered calculus, which led to the development of many infinite series for approximating . Newton himself used an arcsine series to compute a 15-digit approximation of in 1665 or 1666, writing, "I am ashamed to tell you to how many figures I carried these computations, having no other business at the time."
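Both infinite products just mentioned are easy to evaluate term by term. The Python sketch below prints partial products of each; the cut-off counts are arbitrary and are chosen only to show the very different convergence rates.

```python
import math

def wallis(n_terms):
    """Partial Wallis product: pi/2 = prod_{k>=1} (2k/(2k-1)) * (2k/(2k+1))."""
    prod = 1.0
    for k in range(1, n_terms + 1):
        prod *= (2 * k / (2 * k - 1)) * (2 * k / (2 * k + 1))
    return 2 * prod

def viete(n_terms):
    """Partial Viete product: 2/pi = product of nested-radical factors sqrt(2 + ...)/2."""
    prod, radical = 1.0, 0.0
    for _ in range(n_terms):
        radical = math.sqrt(2 + radical)
        prod *= radical / 2
    return 2 / prod

print(wallis(100_000))   # ~3.14158..., converging slowly (error on the order of 1/n)
print(viete(30))         # 3.14159265358979..., converging very quickly
```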
In 1671, James Gregory, and independently, Leibniz in 1673, discovered the Taylor series expansion for arctangent: This series, sometimes called the Gregory–Leibniz series, equals when evaluated with . But for , it converges impractically slowly (that is, approaches the answer very gradually), taking about ten times as many terms to calculate each additional digit. In 1699, English mathematician Abraham Sharp used the Gregory–Leibniz series for to compute to 71 digits, breaking the previous record of 39 digits, which was set with a polygonal algorithm. In 1706, John Machin used the Gregory–Leibniz series to produce an algorithm that converged much faster: Machin reached 100 digits of with this formula. Other mathematicians created variants, now known as Machin-like formulae, that were used to set several successive records for calculating digits of . Isaac Newton accelerated the convergence of the Gregory–Leibniz series in 1684 (in an unpublished work; others independently discovered the result): Leonhard Euler popularized this series in his 1755 differential calculus textbook, and later used it with Machin-like formulae, including with which he computed 20 digits of in one hour. Machin-like formulae remained the best-known method for calculating well into the age of computers, and were used to set records for 250 years, culminating in a 620-digit approximation in 1946 by Daniel Ferguson – the best approximation achieved without the aid of a calculating device. In 1844, a record was set by Zacharias Dase, who employed a Machin-like formula to calculate 200 decimals of in his head at the behest of German mathematician Carl Friedrich Gauss. In 1853, British mathematician William Shanks calculated to 607 digits, but made a mistake in the 528th digit, rendering all subsequent digits incorrect. Though he calculated an additional 100 digits in 1873, bringing the total up to 707, his previous mistake rendered all the new digits incorrect as well. Rate of convergence Some infinite series for converge faster than others. Given the choice of two infinite series for , mathematicians will generally use the one that converges more rapidly because faster convergence reduces the amount of computation needed to calculate to any given accuracy. A simple infinite series for is the Gregory–Leibniz series: As individual terms of this infinite series are added to the sum, the total gradually gets closer to , and – with a sufficient number of terms – can get as close to as desired. It converges quite slowly, though – after 500,000 terms, it produces only five correct decimal digits of . An infinite series for (published by Nilakantha in the 15th century) that converges more rapidly than the Gregory–Leibniz series is: The following table compares the convergence rates of these two series: After five terms, the sum of the Gregory–Leibniz series is within 0.2 of the correct value of , whereas the sum of Nilakantha's series is within 0.002 of the correct value. Nilakantha's series converges faster and is more useful for computing digits of . Series that converge even faster include Machin's series and Chudnovsky's series, the latter producing 14 correct decimal digits per term. Irrationality and transcendence Not all mathematical advances relating to were aimed at increasing the accuracy of approximations. 
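Returning to the rate-of-convergence comparison above, the Python sketch below evaluates partial sums of the Gregory–Leibniz series and Nilakantha's series side by side, together with the standard Machin identity (pi/4 = 4 arctan(1/5) - arctan(1/239)) with its arctangents expanded as Taylor series. The term counts are arbitrary illustrative choices.

```python
def gregory_leibniz(n):
    """4 * sum_{k=0}^{n-1} (-1)^k / (2k + 1)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n))

def nilakantha(n):
    """3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ..., summed to n terms."""
    s, sign = 3.0, 1
    for k in range(1, n + 1):
        s += sign * 4 / ((2 * k) * (2 * k + 1) * (2 * k + 2))
        sign = -sign
    return s

def machin(n):
    """pi = 16*arctan(1/5) - 4*arctan(1/239), arctan via its Taylor series."""
    def arctan_series(x, terms):
        return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))
    return 16 * arctan_series(1 / 5, n) - 4 * arctan_series(1 / 239, n)

for terms in (5, 50, 500):
    print(terms, gregory_leibniz(terms), nilakantha(terms), machin(terms))
# Gregory-Leibniz gains roughly one digit per tenfold increase in terms,
# Nilakantha does noticeably better, and Machin reaches full double
# precision within about ten terms.
```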
When Euler solved the Basel problem in 1735, finding the exact value of the sum of the reciprocal squares, he established a connection between and the prime numbers that later contributed to the development and study of the Riemann zeta function: Swiss scientist Johann Heinrich Lambert in 1768 proved that is irrational, meaning it is not equal to the quotient of any two integers. Lambert's proof exploited a continued-fraction representation of the tangent function. French mathematician Adrien-Marie Legendre proved in 1794 that 2 is also irrational. In 1882, German mathematician Ferdinand von Lindemann proved that is transcendental, confirming a conjecture made by both Legendre and Euler. Hardy and Wright states that "the proofs were afterwards modified and simplified by Hilbert, Hurwitz, and other writers". Adoption of the symbol In the earliest usages, the Greek letter was used to denote the semiperimeter (semiperipheria in Latin) of a circle and was combined in ratios with (for diameter or semidiameter) or (for radius) to form circle constants. (Before then, mathematicians sometimes used letters such as or instead.) The first recorded use is Oughtred's , to express the ratio of periphery and diameter in the 1647 and later editions of . Barrow likewise used to represent the constant , while Gregory instead used to represent . The earliest known use of the Greek letter alone to represent the ratio of a circle's circumference to its diameter was by Welsh mathematician William Jones in his 1706 work ; or, a New Introduction to the Mathematics. The Greek letter appears on p. 243 in the phrase " Periphery ()", calculated for a circle with radius one. However, Jones writes that his equations for are from the "ready pen of the truly ingenious Mr. John Machin", leading to speculation that Machin may have employed the Greek letter before Jones. Jones' notation was not immediately adopted by other mathematicians, with the fraction notation still being used as late as 1767. Euler started using the single-letter form beginning with his 1727 Essay Explaining the Properties of Air, though he used , the ratio of periphery to radius, in this and some later writing. Euler first used in his 1736 work Mechanica, and continued in his widely read 1748 work (he wrote: "for the sake of brevity we will write this number as ; thus is equal to half the circumference of a circle of radius "). Because Euler corresponded heavily with other mathematicians in Europe, the use of the Greek letter spread rapidly, and the practice was universally adopted thereafter in the Western world, though the definition still varied between and as late as 1761. Modern quest for more digits Computer era and iterative algorithms The development of computers in the mid-20th century again revolutionized the hunt for digits of . Mathematicians John Wrench and Levi Smith reached 1,120 digits in 1949 using a desk calculator. Using an inverse tangent (arctan) infinite series, a team led by George Reitwiesner and John von Neumann that same year achieved 2,037 digits with a calculation that took 70 hours of computer time on the ENIAC computer. The record, always relying on an arctan series, was broken repeatedly (3089 digits in 1955, 7,480 digits in 1957; 10,000 digits in 1958; 100,000 digits in 1961) until 1 million digits were reached in 1973. Two additional developments around 1980 once again accelerated the ability to compute . 
First, the discovery of new iterative algorithms for computing , which were much faster than the infinite series; and second, the invention of fast multiplication algorithms that could multiply large numbers very rapidly. Such algorithms are particularly important in modern computations because most of the computer's time is devoted to multiplication. They include the Karatsuba algorithm, Toom–Cook multiplication, and Fourier transform-based methods. The iterative algorithms were independently published in 1975–1976 by physicist Eugene Salamin and scientist Richard Brent. These avoid reliance on infinite series. An iterative algorithm repeats a specific calculation, each iteration using the outputs from prior steps as its inputs, and produces a result in each step that converges to the desired value. The approach was actually invented over 160 years earlier by Carl Friedrich Gauss, in what is now termed the arithmetic–geometric mean method (AGM method) or Gauss–Legendre algorithm. As modified by Salamin and Brent, it is also referred to as the Brent–Salamin algorithm. The iterative algorithms were widely used after 1980 because they are faster than infinite series algorithms: whereas infinite series typically increase the number of correct digits additively in successive terms, iterative algorithms generally multiply the number of correct digits at each step. For example, the Brent–Salamin algorithm doubles the number of digits in each iteration. In 1984, brothers John and Peter Borwein produced an iterative algorithm that quadruples the number of digits in each step; and in 1987, one that increases the number of digits five times in each step. Iterative methods were used by Japanese mathematician Yasumasa Kanada to set several records for computing between 1995 and 2002. This rapid convergence comes at a price: the iterative algorithms require significantly more memory than infinite series. Motives for computing For most numerical calculations involving , a handful of digits provide sufficient precision. According to Jörg Arndt and Christoph Haenel, thirty-nine digits are sufficient to perform most cosmological calculations, because that is the accuracy necessary to calculate the circumference of the observable universe with a precision of one atom. Accounting for additional digits needed to compensate for computational round-off errors, Arndt concludes that a few hundred digits would suffice for any scientific application. Despite this, people have worked strenuously to compute to thousands and millions of digits. This effort may be partly ascribed to the human compulsion to break records, and such achievements with often make headlines around the world. They also have practical benefits, such as testing supercomputers, testing numerical analysis algorithms (including high-precision multiplication algorithms); and within pure mathematics itself, providing data for evaluating the randomness of the digits of . Rapidly convergent series Modern calculators do not use iterative algorithms exclusively. New infinite series were discovered in the 1980s and 1990s that are as fast as iterative algorithms, yet are simpler and less memory intensive. The fast iterative algorithms were anticipated in 1914, when Indian mathematician Srinivasa Ramanujan published dozens of innovative new formulae for , remarkable for their elegance, mathematical depth and rapid convergence. 
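The Gauss–Legendre (Brent–Salamin) iteration just described is short enough to sketch directly. The version below is a standard statement of the iteration written with Python's decimal module; the 60-digit working precision and the six iterations are arbitrary choices for illustration.

```python
from decimal import Decimal, getcontext

def gauss_legendre_pi(iterations=6, digits=60):
    """Brent-Salamin / Gauss-Legendre iteration: the number of correct digits
    roughly doubles with every pass."""
    getcontext().prec = digits + 10          # a few guard digits
    a, b = Decimal(1), Decimal(1) / Decimal(2).sqrt()
    t, p = Decimal(1) / 4, Decimal(1)
    for _ in range(iterations):
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        a, p = a_next, 2 * p
        print((a + b) ** 2 / (4 * t))        # watch the digits stabilise
    return +((a + b) ** 2 / (4 * t))         # unary plus rounds to the context

pi_60 = gauss_legendre_pi()
```

Each pass roughly doubles the number of correct digits, at the cost of carrying full-precision intermediate values in memory. The Ramanujan-type series taken up next gain their speed differently, adding a fixed number of correct digits (roughly eight per term, or about fourteen for the Chudnovsky variant) with every term.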
One of his formulae, based on modular equations, is This series converges much more rapidly than most arctan series, including Machin's formula. Bill Gosper was the first to use it for advances in the calculation of , setting a record of 17 million digits in 1985. Ramanujan's formulae anticipated the modern algorithms developed by the Borwein brothers (Jonathan and Peter) and the Chudnovsky brothers. The Chudnovsky formula developed in 1987 is It produces about 14 digits of per term and has been used for several record-setting calculations, including the first to surpass 1 billion (109) digits in 1989 by the Chudnovsky brothers, 10 trillion (1013) digits in 2011 by Alexander Yee and Shigeru Kondo, and 100 trillion digits by Emma Haruka Iwao in 2022. For similar formulae, see also the Ramanujan–Sato series. In 2006, mathematician Simon Plouffe used the PSLQ integer relation algorithm to generate several new formulae for , conforming to the following template: where is (Gelfond's constant), is an odd number, and are certain rational numbers that Plouffe computed. Monte Carlo methods Monte Carlo methods, which evaluate the results of multiple random trials, can be used to create approximations of . Buffon's needle is one such technique: If a needle of length is dropped times on a surface on which parallel lines are drawn units apart, and if of those times it comes to rest crossing a line ( > 0), then one may approximate based on the counts: Another Monte Carlo method for computing is to draw a circle inscribed in a square, and randomly place dots in the square. The ratio of dots inside the circle to the total number of dots will approximately equal . Another way to calculate using probability is to start with a random walk, generated by a sequence of (fair) coin tosses: independent random variables such that with equal probabilities. The associated random walk is so that, for each , is drawn from a shifted and scaled binomial distribution. As varies, defines a (discrete) stochastic process. Then can be calculated by This Monte Carlo method is independent of any relation to circles, and is a consequence of the central limit theorem, discussed below. These Monte Carlo methods for approximating are very slow compared to other methods, and do not provide any information on the exact number of digits that are obtained. Thus they are never used to approximate when speed or accuracy is desired. Spigot algorithms Two algorithms were discovered in 1995 that opened up new avenues of research into . They are called spigot algorithms because, like water dripping from a spigot, they produce single digits of that are not reused after they are calculated. This is in contrast to infinite series or iterative algorithms, which retain and use all intermediate digits until the final result is produced. Mathematicians Stan Wagon and Stanley Rabinowitz produced a simple spigot algorithm in 1995. Its speed is comparable to arctan algorithms, but not as fast as iterative algorithms. Another spigot algorithm, the BBP digit extraction algorithm, was discovered in 1995 by Simon Plouffe: This formula, unlike others before it, can produce any individual hexadecimal digit of without calculating all the preceding digits. Individual binary digits may be extracted from individual hexadecimal digits, and octal digits can be extracted from one or two hexadecimal digits. 
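A rough Python sketch of BBP-style hexadecimal digit extraction is shown below. It follows the standard approach of splitting each of the four sums at the target position and using modular exponentiation for the head; because it works in double precision it is only reliable for positions well within the floating-point range, and a production implementation would need more care.

```python
def pi_hex_digits(start, count=6):
    """Hexadecimal digits of pi beginning at position `start` after the point
    (1-indexed), obtained from the BBP formula without computing earlier digits."""
    def frac_sum(j, n):
        # fractional part of sum_k 16^(n-k) / (8k + j)
        s = 0.0
        for k in range(n + 1):
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n + 1
        while True:
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            s += term
            k += 1
        return s % 1.0

    digits = ""
    for pos in range(start - 1, start - 1 + count):
        frac = (4 * frac_sum(1, pos) - 2 * frac_sum(4, pos)
                - frac_sum(5, pos) - frac_sum(6, pos)) % 1.0
        digits += "%X" % int(frac * 16)
    return digits

print(pi_hex_digits(1))   # '243F6A' -- pi = 3.243F6A8885A3... in hexadecimal
```

A full-scale digit-extraction routine of the kind used for the record checks described next works with integer or higher-precision arithmetic rather than plain doubles.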
An important application of digit extraction algorithms is to validate new claims of record computations: After a new record is claimed, the decimal result is converted to hexadecimal, and then a digit extraction algorithm is used to calculate several randomly selected hexadecimal digits near the end; if they match, this provides a measure of confidence that the entire computation is correct. Between 1998 and 2000, the distributed computing project PiHex used Bellard's formula (a modification of the BBP algorithm) to compute the quadrillionth (1015th) bit of , which turned out to be 0. In September 2010, a Yahoo! employee used the company's Hadoop application on one thousand computers over a 23-day period to compute 256 bits of at the two-quadrillionth (2×1015th) bit, which also happens to be zero. In 2022, Plouffe found a base-10 algorithm for calculating digits of . Role and characterizations in mathematics Because is closely related to the circle, it is found in many formulae from the fields of geometry and trigonometry, particularly those concerning circles, spheres, or ellipses. Other branches of science, such as statistics, physics, Fourier analysis, and number theory, also include in some of their important formulae. Geometry and trigonometry appears in formulae for areas and volumes of geometrical shapes based on circles, such as ellipses, spheres, cones, and tori. Below are some of the more common formulae that involve . The circumference of a circle with radius is . The area of a circle with radius is . The area of an ellipse with semi-major axis and semi-minor axis is . The volume of a sphere with radius is . The surface area of a sphere with radius is . Some of the formulae above are special cases of the volume of the n-dimensional ball and the surface area of its boundary, the (n−1)-dimensional sphere, given below. Apart from circles, there are other curves of constant width. By Barbier's theorem, every curve of constant width has perimeter times its width. The Reuleaux triangle (formed by the intersection of three circles with the sides of an equilateral triangle as their radii) has the smallest possible area for its width and the circle the largest. There also exist non-circular smooth and even algebraic curves of constant width. Definite integrals that describe circumference, area, or volume of shapes generated by circles typically have values that involve . For example, an integral that specifies half the area of a circle of radius one is given by: In that integral, the function represents the height over the -axis of a semicircle (the square root is a consequence of the Pythagorean theorem), and the integral computes the area below the semicircle. Units of angle The trigonometric functions rely on angles, and mathematicians generally use radians as units of measurement. plays an important role in angles measured in radians, which are defined so that a complete circle spans an angle of 2 radians. The angle measure of 180° is equal to radians, and . Common trigonometric functions have periods that are multiples of ; for example, sine and cosine have period 2, so for any angle and any integer , Eigenvalues Many of the appearances of in the formulae of mathematics and the sciences have to do with its close relationship with geometry. However, also appears in many natural situations having apparently nothing to do with geometry. In many applications, it plays a distinguished role as an eigenvalue. 
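As a small numerical illustration of this eigenvalue role (anticipating the vibrating-string example developed below), the sketch discretizes the second-derivative operator on [0, 1] with fixed endpoints by finite differences and finds its smallest eigenvalue, whose square root tends to pi as the grid is refined. NumPy is assumed to be available; the grid size is arbitrary.

```python
import numpy as np

def smallest_dirichlet_eigenvalue(n=1000):
    """Smallest eigenvalue of -d^2/dx^2 on [0, 1] with u(0) = u(1) = 0,
    approximated by the standard three-point finite-difference matrix."""
    h = 1.0 / (n + 1)
    main = np.full(n, 2.0) / h**2
    off = np.full(n - 1, -1.0) / h**2
    matrix = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(matrix)[0]       # eigenvalues come back sorted

print(np.sqrt(smallest_dirichlet_eigenvalue()))   # ~3.14159..., approaching pi as n grows
```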
For example, an idealized vibrating string can be modelled as the graph of a function on the unit interval , with fixed ends . The modes of vibration of the string are solutions of the differential equation , or . Thus is an eigenvalue of the second derivative operator , and is constrained by Sturm–Liouville theory to take on only certain specific values. It must be positive, since the operator is negative definite, so it is convenient to write , where is called the wavenumber. Then satisfies the boundary conditions and the differential equation with . The value is, in fact, the least such value of the wavenumber, and is associated with the fundamental mode of vibration of the string. One way to show this is by estimating the energy, which satisfies Wirtinger's inequality: for a function with and , both square integrable, we have: with equality precisely when is a multiple of . Here appears as an optimal constant in Wirtinger's inequality, and it follows that it is the smallest wavenumber, using the variational characterization of the eigenvalue. As a consequence, is the smallest singular value of the derivative operator on the space of functions on vanishing at both endpoints (the Sobolev space ).
Inequalities
The number appears in similar eigenvalue problems in higher-dimensional analysis. As mentioned above, it can be characterized via its role as the best constant in the isoperimetric inequality: the area enclosed by a plane Jordan curve of perimeter satisfies the inequality and equality is clearly achieved for the circle, since in that case and . Ultimately, as a consequence of the isoperimetric inequality, appears in the optimal constant for the critical Sobolev inequality in n dimensions, which thus characterizes the role of in many physical phenomena as well, for example those of classical potential theory. In two dimensions, the critical Sobolev inequality is for f a smooth function with compact support in , is the gradient of f, and and refer respectively to the and -norm. The Sobolev inequality is equivalent to the isoperimetric inequality (in any dimension), with the same best constants. Wirtinger's inequality also generalizes to higher-dimensional Poincaré inequalities that provide best constants for the Dirichlet energy of an n-dimensional membrane. Specifically, is the greatest constant such that for all convex subsets of of diameter 1, and square-integrable functions u on of mean zero. Just as Wirtinger's inequality is the variational form of the Dirichlet eigenvalue problem in one dimension, the Poincaré inequality is the variational form of the Neumann eigenvalue problem, in any dimension.
Fourier transform and Heisenberg uncertainty principle
The constant also appears as a critical spectral parameter in the Fourier transform. This is the integral transform that takes a complex-valued integrable function on the real line to the function defined as: Although there are several different conventions for the Fourier transform and its inverse, any such convention must involve somewhere. The above is the most canonical definition, however, giving the unique unitary operator on that is also an algebra homomorphism of to . The Heisenberg uncertainty principle also contains the number .
The uncertainty principle gives a sharp lower bound on the extent to which it is possible to localize a function both in space and in frequency: with our conventions for the Fourier transform, The physical consequence, about the uncertainty in simultaneous position and momentum observations of a quantum mechanical system, is discussed below. The appearance of in the formulae of Fourier analysis is ultimately a consequence of the Stone–von Neumann theorem, asserting the uniqueness of the Schrödinger representation of the Heisenberg group. Gaussian integrals The fields of probability and statistics frequently use the normal distribution as a simple model for complex phenomena; for example, scientists generally assume that the observational error in most experiments follows a normal distribution. The Gaussian function, which is the probability density function of the normal distribution with mean and standard deviation , naturally contains : The factor of makes the area under the graph of equal to one, as is required for a probability distribution. This follows from a change of variables in the Gaussian integral: which says that the area under the basic bell curve in the figure is equal to the square root of . The central limit theorem explains the central role of normal distributions, and thus of , in probability and statistics. This theorem is ultimately connected with the spectral characterization of as the eigenvalue associated with the Heisenberg uncertainty principle, and the fact that equality holds in the uncertainty principle only for the Gaussian function. Equivalently, is the unique constant making the Gaussian normal distribution equal to its own Fourier transform. Indeed, according to , the "whole business" of establishing the fundamental theorems of Fourier analysis reduces to the Gaussian integral. Topology The constant appears in the Gauss–Bonnet formula which relates the differential geometry of surfaces to their topology. Specifically, if a compact surface has Gauss curvature K, then where is the Euler characteristic, which is an integer. An example is the surface area of a sphere S of curvature 1 (so that its radius of curvature, which coincides with its radius, is also 1.) The Euler characteristic of a sphere can be computed from its homology groups and is found to be equal to two. Thus we have reproducing the formula for the surface area of a sphere of radius 1. The constant appears in many other integral formulae in topology, in particular, those involving characteristic classes via the Chern–Weil homomorphism. Cauchy's integral formula One of the key tools in complex analysis is contour integration of a function over a positively oriented (rectifiable) Jordan curve . A form of Cauchy's integral formula states that if a point is interior to , then Although the curve is not a circle, and hence does not have any obvious connection to the constant , a standard proof of this result uses Morera's theorem, which implies that the integral is invariant under homotopy of the curve, so that it can be deformed to a circle and then integrated explicitly in polar coordinates. More generally, it is true that if a rectifiable closed curve does not contain , then the above integral is times the winding number of the curve. The general form of Cauchy's integral formula establishes the relationship between the values of a complex analytic function on the Jordan curve and the value of at any interior point of : provided is analytic in the region enclosed by and extends continuously to . 
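The independence from the particular curve can be checked numerically. The Python sketch below integrates exp(z)/(z - z0) around a square rather than a circle, using a simple midpoint rule, and divides out 2i*exp(z0); by Cauchy's integral formula the result is pi. The integrand, the contour, the interior point z0 and the step count are all arbitrary illustrative choices.

```python
import cmath

def contour_integral_over_square(f, z0, n=100_000):
    """Midpoint-rule contour integral of f(z)/(z - z0) around the square with
    corners 1+1j, -1+1j, -1-1j, 1-1j, traversed counterclockwise."""
    corners = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
    total = 0 + 0j
    for a, b in zip(corners, corners[1:] + corners[:1]):
        dz = (b - a) / n
        for k in range(n):
            z = a + (k + 0.5) * dz             # midpoint of each small segment
            total += f(z) / (z - z0) * dz
    return total

z0 = 0.3 + 0.2j                                # any point inside the square
integral = contour_integral_over_square(cmath.exp, z0)
print((integral / (2j * cmath.exp(z0))).real)  # ~3.14159265, by Cauchy's formula
```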
Cauchy's integral formula is a special case of the residue theorem, that if is a meromorphic function the region enclosed by and is continuous in a neighbourhood of , then where the sum is of the residues at the poles of . Vector calculus and physics The constant is ubiquitous in vector calculus and potential theory, for example in Coulomb's law, Gauss's law, Maxwell's equations, and even the Einstein field equations. Perhaps the simplest example of this is the two-dimensional Newtonian potential, representing the potential of a point source at the origin, whose associated field has unit outward flux through any smooth and oriented closed surface enclosing the source: The factor of is necessary to ensure that is the fundamental solution of the Poisson equation in : where is the Dirac delta function. In higher dimensions, factors of are present because of a normalization by the n-dimensional volume of the unit n sphere. For example, in three dimensions, the Newtonian potential is: which has the 2-dimensional volume (i.e., the area) of the unit 2-sphere in the denominator. Total curvature The gamma function and Stirling's approximation The factorial function is the product of all of the positive integers through . The gamma function extends the concept of factorial (normally defined only for non-negative integers) to all complex numbers, except the negative real integers, with the identity . When the gamma function is evaluated at half-integers, the result contains . For example, and . The gamma function is defined by its Weierstrass product development: where is the Euler–Mascheroni constant. Evaluated at and squared, the equation reduces to the Wallis product formula. The gamma function is also connected to the Riemann zeta function and identities for the functional determinant, in which the constant plays an important role. The gamma function is used to calculate the volume of the n-dimensional ball of radius r in Euclidean n-dimensional space, and the surface area of its boundary, the (n−1)-dimensional sphere: Further, it follows from the functional equation that The gamma function can be used to create a simple approximation to the factorial function for large : which is known as Stirling's approximation. Equivalently, As a geometrical application of Stirling's approximation, let denote the standard simplex in n-dimensional Euclidean space, and denote the simplex having all of its sides scaled up by a factor of . Then Ehrhart's volume conjecture is that this is the (optimal) upper bound on the volume of a convex body containing only one lattice point. Number theory and Riemann zeta function The Riemann zeta function is used in many areas of mathematics. When evaluated at it can be written as Finding a simple solution for this infinite series was a famous problem in mathematics called the Basel problem. Leonhard Euler solved it in 1735 when he showed it was equal to . Euler's result leads to the number theory result that the probability of two random numbers being relatively prime (that is, having no shared factors) is equal to . This probability is based on the observation that the probability that any number is divisible by a prime is (for example, every 7th integer is divisible by 7.) Hence the probability that two numbers are both divisible by this prime is , and the probability that at least one of them is not is . 
For distinct primes, these divisibility events are mutually independent; so the probability that two numbers are relatively prime is given by a product over all primes: This probability can be used in conjunction with a random number generator to approximate using a Monte Carlo approach. The solution to the Basel problem implies that the geometrically derived quantity is connected in a deep way to the distribution of prime numbers. This is a special case of Weil's conjecture on Tamagawa numbers, which asserts the equality of similar such infinite products of arithmetic quantities, localized at each prime p, and a geometrical quantity: the reciprocal of the volume of a certain locally symmetric space. In the case of the Basel problem, it is the hyperbolic 3-manifold . The zeta function also satisfies Riemann's functional equation, which involves as well as the gamma function: Furthermore, the derivative of the zeta function satisfies A consequence is that can be obtained from the functional determinant of the harmonic oscillator. This functional determinant can be computed via a product expansion, and is equivalent to the Wallis product formula. The calculation can be recast in quantum mechanics, specifically the variational approach to the spectrum of the hydrogen atom. Fourier series The constant also appears naturally in Fourier series of periodic functions. Periodic functions are functions on the group of fractional parts of real numbers. The Fourier decomposition shows that a complex-valued function on can be written as an infinite linear superposition of unitary characters of . That is, continuous group homomorphisms from to the circle group of unit modulus complex numbers. It is a theorem that every character of is one of the complex exponentials . There is a unique character on , up to complex conjugation, that is a group isomorphism. Using the Haar measure on the circle group, the constant is half the magnitude of the Radon–Nikodym derivative of this character. The other characters have derivatives whose magnitudes are positive integral multiples of 2. As a result, the constant is the unique number such that the group T, equipped with its Haar measure, is Pontrjagin dual to the lattice of integral multiples of 2. This is a version of the one-dimensional Poisson summation formula. Modular forms and theta functions The constant is connected in a deep way with the theory of modular forms and theta functions. For example, the Chudnovsky algorithm involves in an essential way the j-invariant of an elliptic curve. Modular forms are holomorphic functions in the upper half plane characterized by their transformation properties under the modular group (or its various subgroups), a lattice in the group . An example is the Jacobi theta function which is a kind of modular form called a Jacobi form. This is sometimes written in terms of the nome . The constant is the unique constant making the Jacobi theta function an automorphic form, which means that it transforms in a specific way. Certain identities hold for all automorphic forms. An example is which implies that transforms as a representation under the discrete Heisenberg group. General modular forms and other theta functions also involve , once again because of the Stone–von Neumann theorem. Cauchy distribution and potential theory The Cauchy distribution is a probability density function. The total probability is equal to one, owing to the integral: The Shannon entropy of the Cauchy distribution is equal to , which also involves . 
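The normalization integral mentioned above can be checked directly. Between its quartiles at -1 and 1 the standard Cauchy density encloses exactly half of the total probability, which amounts to the identity that the integral of 1/(1 + x^2) over [-1, 1] equals pi/2. The Python sketch below evaluates that integral with a midpoint rule; the step count is an arbitrary choice.

```python
def central_cauchy_mass(steps=1_000_000):
    """Midpoint-rule estimate of the integral of 1/(1 + x^2) over [-1, 1],
    whose exact value is pi/2 (half the mass of the standard Cauchy density,
    once the 1/pi normalisation factor is included)."""
    dx = 2.0 / steps
    total = 0.0
    for i in range(steps):
        x = -1.0 + (i + 0.5) * dx
        total += dx / (1.0 + x * x)
    return total

print(2 * central_cauchy_mass())   # ~3.14159265358..., i.e. the integral is pi/2
```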
The Cauchy distribution plays an important role in potential theory because it is the simplest Furstenberg measure, the classical Poisson kernel associated with a Brownian motion in a half-plane. Conjugate harmonic functions and so also the Hilbert transform are associated with the asymptotics of the Poisson kernel. The Hilbert transform H is the integral transform given by the Cauchy principal value of the singular integral The constant is the unique (positive) normalizing factor such that H defines a linear complex structure on the Hilbert space of square-integrable real-valued functions on the real line. The Hilbert transform, like the Fourier transform, can be characterized purely in terms of its transformation properties on the Hilbert space : up to a normalization factor, it is the unique bounded linear operator that commutes with positive dilations and anti-commutes with all reflections of the real line. The constant is the unique normalizing factor that makes this transformation unitary.
In the Mandelbrot set
An occurrence of in the fractal called the Mandelbrot set was discovered by David Boll in 1991. He examined the behaviour of the Mandelbrot set near the "neck" at . When the number of iterations until divergence for the point is multiplied by , the result approaches as approaches zero. The point at the cusp of the large "valley" on the right side of the Mandelbrot set behaves similarly: the number of iterations until divergence multiplied by the square root of tends to .
Projective geometry
Let be the set of all twice differentiable real functions that satisfy the ordinary differential equation . Then is a two-dimensional real vector space, with two parameters corresponding to a pair of initial conditions for the differential equation. For any , let be the evaluation functional, which associates to each the value of the function at the real point . Then, for each t, the kernel of is a one-dimensional linear subspace of . Hence defines a function from the real line to the real projective line. This function is periodic, and the quantity can be characterized as the period of this map. This is notable in that the constant , rather than 2, appears naturally in this context.
Outside mathematics
Describing physical phenomena
Although not a physical constant, appears routinely in equations describing fundamental principles of the universe, often because of 's relationship to the circle and to spherical coordinate systems. A simple formula from the field of classical mechanics gives the approximate period of a simple pendulum of length , swinging with a small amplitude ( is the earth's gravitational acceleration): One of the key formulae of quantum mechanics is Heisenberg's uncertainty principle, which shows that the uncertainty in the measurement of a particle's position (Δ) and momentum (Δ) cannot both be arbitrarily small at the same time (where is the Planck constant): The fact that is approximately equal to 3 plays a role in the relatively long lifetime of orthopositronium. The inverse lifetime to lowest order in the fine-structure constant is where is the mass of the electron.
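The small-amplitude pendulum formula just mentioned, T = 2*pi*sqrt(L/g), can be checked by direct simulation with no pi anywhere in the input. The Python sketch below integrates the pendulum equation with a leapfrog scheme, times a quarter swing, and divides the resulting period by 2*sqrt(L/g); the length, gravity, amplitude and step size are arbitrary illustrative values.

```python
import math

def quarter_period(length=1.0, g=9.81, theta0=1e-3, dt=1e-5):
    """Integrate theta'' = -(g/length) * sin(theta) (leapfrog scheme) starting
    from rest at a tiny amplitude; the first zero crossing is a quarter period."""
    theta, t = theta0, 0.0
    omega = -0.5 * dt * (g / length) * math.sin(theta)   # initial half-kick
    while theta > 0:
        theta_prev, t_prev = theta, t
        theta += dt * omega                              # drift
        t += dt
        omega -= dt * (g / length) * math.sin(theta)     # kick
    # linear interpolation of the zero crossing between the last two steps
    return t_prev + dt * theta_prev / (theta_prev - theta)

L, g = 1.0, 9.81
period = 4 * quarter_period(L, g)
print(period / (2 * math.sqrt(L / g)))   # ~3.141592..., recovering pi from T = 2*pi*sqrt(L/g)
```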
is present in some structural engineering formulae, such as the buckling formula derived by Euler, which gives the maximum axial load that a long, slender column of length , modulus of elasticity , and area moment of inertia can carry without buckling: The field of fluid dynamics contains in Stokes' law, which approximates the frictional force exerted on small, spherical objects of radius , moving with velocity in a fluid with dynamic viscosity : In electromagnetics, the vacuum permeability constant μ0 appears in Maxwell's equations, which describe the properties of electric and magnetic fields and electromagnetic radiation. Before 20 May 2019, it was defined as exactly Memorizing digits Piphilology is the practice of memorizing large numbers of digits of , and world-records are kept by the Guinness World Records. The record for memorizing digits of , certified by Guinness World Records, is 70,000 digits, recited in India by Rajveer Meena in 9 hours and 27 minutes on 21 March 2015. In 2006, Akira Haraguchi, a retired Japanese engineer, claimed to have recited 100,000 decimal places, but the claim was not verified by Guinness World Records. One common technique is to memorize a story or poem in which the word lengths represent the digits of : The first word has three letters, the second word has one, the third has four, the fourth has one, the fifth has five, and so on. Such memorization aids are called mnemonics. An early example of a mnemonic for pi, originally devised by English scientist James Jeans, is "How I want a drink, alcoholic of course, after the heavy lectures involving quantum mechanics." When a poem is used, it is sometimes referred to as a piem. Poems for memorizing have been composed in several languages in addition to English. Record-setting memorizers typically do not rely on poems, but instead use methods such as remembering number patterns and the method of loci. A few authors have used the digits of to establish a new form of constrained writing, where the word lengths are required to represent the digits of . The Cadaeic Cadenza contains the first 3835 digits of in this manner, and the full-length book Not a Wake contains 10,000 words, each representing one digit of . In popular culture Perhaps because of the simplicity of its definition and its ubiquitous presence in formulae, has been represented in popular culture more than other mathematical constructs. In the Palais de la Découverte (a science museum in Paris) there is a circular room known as the pi room. On its wall are inscribed 707 digits of . The digits are large wooden characters attached to the dome-like ceiling. The digits were based on an 1873 calculation by English mathematician William Shanks, which included an error beginning at the 528th digit. The error was detected in 1946 and corrected in 1949. In Carl Sagan's 1985 novel Contact it is suggested that the creator of the universe buried a message deep within the digits of . This part of the story was omitted from the film adaptation of the novel. The digits of have also been incorporated into the lyrics of the song "Pi" from the 2005 album Aerial by Kate Bush. In the 1967 Star Trek episode "Wolf in the Fold", an out-of-control computer is contained by being instructed to "Compute to the last digit the value of ". In the United States, Pi Day falls on 14 March (written 3/14 in the US style), and is popular among students. 
and its digital representation are often used by self-described "math geeks" for inside jokes among mathematically and technologically minded groups. A college cheer variously attributed to the Massachusetts Institute of Technology or the Rensselaer Polytechnic Institute includes "3.14159". Pi Day in 2015 was particularly significant because the date and time 3/14/15 9:26:53 reflected many more digits of pi. In parts of the world where dates are commonly noted in day/month/year format, 22 July represents "Pi Approximation Day", as 22/7 = 3.142857. Some have proposed replacing by , arguing that , as the number of radians in one turn or the ratio of a circle's circumference to its radius, is more natural than and simplifies many formulae. This use of has not made its way into mainstream mathematics, but since 2010 this has led to people celebrating Two Pi Day or Tau Day on June 28. In 1897, an amateur mathematician attempted to persuade the Indiana legislature to pass the Indiana Pi Bill, which described a method to square the circle and contained text that implied various incorrect values for , including 3.2. The bill is notorious as an attempt to establish a value of mathematical constant by legislative fiat. The bill was passed by the Indiana House of Representatives, but rejected by the Senate, and thus it did not become a law. In computer culture In contemporary internet culture, individuals and organizations frequently pay homage to the number . For instance, the computer scientist Donald Knuth let the version numbers of his program TeX approach . The versions are 3, 3.1, 3.14, and so forth. has been added to several programming languages as a predefined constant. See also Approximations of Chronology of computation of List of mathematical constants References Explanatory notes Citations General and cited sources English translation by Catriona and David Lischka. Further reading External links Demonstration by Lambert (1761) of irrationality of , online and analysed BibNum (PDF). Search Engine 2 billion searchable digits of , and approximation von π by lattice points and approximation of π with rectangles and trapezoids (interactive illustrations) Complex analysis Mathematical series Real transcendental numbers
23603
https://en.wikipedia.org/wiki/Postmodernism
Postmodernism
Postmodernism is a term used to refer to a variety of artistic, cultural, and philosophical movements that claim to mark a break with modernism. What they have in common is the conviction that it is no longer possible to rely upon previous ways of representing reality. Still, there is disagreement among experts about its more precise meaning even within narrow contexts. The term began to acquire its current range of meanings in literary criticism and architectural theory during the 1950s–1960s. In opposition to modernism's alleged self-seriousness, postmodernism is characterized by its playful use of irony and pastiche, among other features. Critics claim it supplants moral, political, and aesthetic ideals with mere style and spectacle. In the 1990s, "postmodernism" came to denote a general – and, in general, celebratory – response to cultural pluralism. Proponents align themselves with feminism, multiculturalism, and postcolonialism. Building upon poststructural theory, postmodern thought defined itself by the rejection of any single, foundational historical narrative. This called into question the legitimacy of the Enlightenment account of progress and rationality. Critics allege that its premises lead to a nihilistic form of relativism. In this sense, it has become a term of abuse in popular culture. The problem of definition "Postmodernism" is "a highly contested term", referring to "a particularly unstable concept", that "names many different kinds of cultural objects and phenomena in many different ways". It is "diffuse, fragmentary, [and] multi-dimensional". Critics have described it as "an exasperating term" and claim that its indefinability is "a truism". Put otherwise, postmodernism is "several things at once". It has no single definition, and the term does not name any single unified phenomenon, but rather many diverse phenomena: "postmodernisms rather than one postmodernism". Although postmodernisms are generally united in their effort to transcend the perceived limits of modernism, "modernism" also means different things to different critics in various arts. Further, there are outliers on even this basic stance; for instance, literary critic William Spanos conceives postmodernism, not in period terms, but in terms of a certain kind of literary imagination so that pre-modern texts such as Euripides' Orestes or Cervantes' Don Quixote count as postmodern. Nevertheless, attempting to generalize, scholar Hans Bertens offers the following: If there is a common denominator to all these postmodernisms, it is that of a crisis in representation: a deeply felt loss of faith in our ability to represent the real, in the widest sense. No matter whether they are aesthestic [sic], epistemological, moral, or political in nature, the representations that we used to rely on can no longer be taken for granted. Historical overview The term first appeared in print in 1870, but it only began to enter circulation with its current range of meanings in the 1950s—60s. Early appearances The term "postmodern" was first used in 1870 by the artist John Watkins Chapman, who described "a Postmodern style of painting" as a departure from French Impressionism. Similarly, the first citation given by the Oxford English Dictionary is dated to 1916, describing Gus Mager as "one of the few 'post' modern painters whose style is convincing". Episcopal priest and cultural commentator J. M. 
Thompson, in a 1914 article, uses the term to describe changes in attitudes and beliefs in the critique of religion, writing, "the raison d'être of Post-Modernism is to escape from the double-mindedness of modernism by being thorough in its criticism by extending it to religion as well as theology, to Catholic feeling as well as to Catholic tradition". In 1926, Bernard Iddings Bell, president of St. Stephen's College and also an Episcopal priest, published Postmodernism and Other Essays, which marks the first use of the term to describe an historical period following modernity. The essay criticizes lingering socio-cultural norms, attitudes, and practices of the Enlightenment. It is also critical of a purported cultural shift away from traditional Christian beliefs. The term "postmodernity" was first used in an academic historical context as a general concept for a movement by Arnold J. Toynbee in a 1939 essay, which states that "Our own Post-Modern Age has been inaugurated by the general war of 1914–1918". In 1942, the literary critic and author H. R. Hays describes postmodernism as a new literary form. Also in the arts, the term was first used in 1949 to describe a dissatisfaction with the modernist architectural movement known as the International Style. Although these early uses anticipate some of the concerns of the debate in the second part of the 20th century, there is little direct continuity in the discussion. Just when the new discussion begins, however, is also a matter of dispute. Various authors place its beginnings in the 1950s, 1960s, 1970s, and 1980s. Theoretical development In the mid-1970s, the American sociologist Daniel Bell provided a general account of the postmodern as an effectively nihilistic response to modernism's alleged assault on the Protestant work ethic and its rejection of what he upheld as traditional values. The ideals of modernity, per his diagnosis, were degraded to the level of consumer choice. This research project, however, was not taken up in a significant way by others until the mid-1980s when the work of Jean Baudrillard and Fredrick Jameson, building upon art and literary criticism, reintroduced the term to sociology. Discussion about the postmodern in the second part of the 20th century was most articulate in areas with a large body of critical discourse around the modernist movement. Even here, however, there continued to be disagreement about such basic issues as whether postmodernism is a break with modernism, a renewal and intensification of modernism, or even, both at once, a rejection and a radicalization of its historical predecessor. According to scholar Steven Connor, discussions of the 1970s were dominated by literary criticism, to be supplanted by architectural theory in the 1980s. Some of these conversations made use of French poststructuralist thought, but only after these innovations and critical discourse in the arts did postmodernism emerge as a philosophical term in its own right. In literary and architectural theory According to scholar Ian Buchanan, the Black Mountain poets Charles Olson and Robert Creeley first introduced the term "postmodern" in its current sense during the 1950s. Their stance against modernist poetry – and Olson's Heideggerian orientation – were influential in the identification of postmodernism as a polemical position opposed to the rationalist values championed by the Enlightenment project. 
During the 1960s, this affirmative use gave way to a pejorative use by the New Left, who used it to describe a waning commitment among youth to the political ideals of socialism and communism. The literary critic Irving Howe, for instance, denounced postmodern literature for being content to merely reflect, rather than actively attempt to refashion, what he saw as the "increasingly shapeless" character of contemporary society. In the 1970s, this changed again, largely under the influence of the literary critic Ihab Hassan's large-scale survey of works that he said could no longer be called modern. Taking the Black Mountain poets as an exemplary instance of the new postmodern type, Hassan celebrates its Nietzschean playfulness and cheerfully anarchic spirit, which he sets off against the high seriousness of modernism. (Yet, from another perspective, Friedrich Nietzsche's attack on Western philosophy and Martin Heidegger's critique of metaphysics posed deep theoretical problems, not necessarily a cause for aesthetic celebration. Their further influence on the conversation about postmodernism, however, would be largely mediated by French poststructuralism.) If literature was at the center of the discussion in the 1970s, architecture is at the center in the 1980s. The architectural theorist Charles Jencks, in particular, connects the artistic avant-garde to social change in a way that captures attention outside of academia. Jencks, much influenced by the American architect Robert Venturi, celebrates a plurality of forms and encourages participation and active engagement with the local context of the built environment. He presents this in opposition to the "authoritarian style" of International Modernism. The influence of poststructuralism In the 1970s, postmodern criticism increasingly came to incorporate poststructuralist theory, particularly the deconstructive approach to texts most strongly associated with Jacques Derrida. Derrida attempted to demonstrate that the whole foundationalist approach to language and knowledge was untenable and misguided. He was also critical of what he claimed to expose as artificial binary oppositions (e.g., subject/object, speech/writing) at the heart of Western culture and philosophy. It is during this period that postmodernism comes to be particularly equated with a kind of anti-representational self-reflexivity. In the 1980s, some critics begin to take an interest in the work of Michel Foucault. This introduces a political concern about social power-relations into discussions about postmodernism. Much of Foucault's project is, against the Enlightenment tradition, to expose modern social institutions and forms of knowledge as historically contingent forces of domination. He aims to detotalize or decenter historical narratives to display modern consciousness as it is constituted by specific discourses and institutions that shape individuals into the docile subjects of social systems. This is also the beginning of the affiliation of postmodernism with feminism and multiculturalism. The art critic Craig Owens, in particular, not only made the connection to feminism explicit, but went so far as to claim feminism for postmodernism wholesale, a broad claim resisted by even many sympathetic feminists such as Nancy Fraser and Linda Nicholson. 
In social theory Although postmodern criticism and thought drew on philosophical ideas from early on, "postmodernism" was only introduced to the expressly philosophical lexicon by Jean-François Lyotard in his 1979 The Postmodern Condition: A Report on Knowledge. In this influential work, Lyotard offers the following definition: "Simplifying to the extreme, I define postmodern as incredulity towards metanarratives [such as Enlightenment progress or Marxist revolution]". In a society with no unifying narrative, he argues, we are left with heterogeneous, group-specific narratives (or "language games", as adopted from Ludwig Wittgenstein) with no universal perspective from which to adjudicate among them. According to Lyotard, this introduces a general crisis of legitimacy, a theme he adopts from the philosopher Jürgen Habermas, whose theory of communicative rationality Lyotard rejects. While he was particularly concerned with the way that this insight undermines claims of scientific objectivity, Lyotard's argument undermines the entire principle of transcendent legitimization. Instead, proponents of a language game must make the case for their legitimacy with reference to such considerations as efficiency or practicality. Far from celebrating the apparently relativistic consequences of this argument, however, Lyotard focused much of his subsequent work on how links among games could be established, particularly with respect to ethics and politics. Nevertheless, the appearance of linguistic relativism inspired an extensive rebuttal by the Marxist critic Fredric Jameson. Building upon the theoretical foundations laid out by the Marxist economist Ernst Mandel and observations in the early work of the French sociologist Jean Baudrillard, Jameson develops his own conception of the postmodern as "the cultural logic of late capitalism" in the form of an enormous cultural expansion into an economy of spectacle and style, rather than the production of goods. Baudrillard himself broke with Marxism, but continued to theorize the postmodern as the condition in which the domain of reality has become so heavily mediated by signs as to become inaccessible in itself, leaving us entirely in the domain of the simulacrum, an image that bears no relation to anything outside of itself. Scholars, however, disagree about whether his later works are intended as science fiction or truthful theoretical claims. In the 1990s, postmodernism became increasingly identified with critical and philosophical discourse directly about postmodernity or the postmodern idiom itself. No longer centered on any particular art or even the arts in general, it instead turns to address the broader problems posed to society by a new proliferation of cultures and forms. It is during this period that it also comes to be associated with postcolonialism and identity politics. Around this time, postmodernism also begins to be conceived in popular culture as a general "philosophical disposition" associated with a loose sort of relativism. In this sense, the term also starts to appear as a "casual term of abuse" in non-academic contexts. Others identify it as an aesthetic "lifestyle" of eclecticism and playful self-irony. In various arts Architecture Scholarship regarding postmodernism and architecture is closely linked with the writings of critic-turned-architect Charles Jencks, beginning with lectures in the early 1970s and his essay "The Rise of Post-Modern Architecture" from 1975. 
His magnum opus, however, is the book The Language of Post-Modern Architecture, first published in 1977, which has since run to seven editions. Jencks makes the point that postmodernism (like modernism) varies for each field of art, and that for architecture it is not just a reaction to modernism but what he terms double coding: "Double Coding: the combination of Modern techniques with something else (usually traditional building) in order for architecture to communicate with the public and a concerned minority, usually other architects." In their book Revisiting Postmodernism, Terry Farrell and Adam Furman argue that postmodernism brought a more joyous and sensual experience to the culture, particularly in architecture. For instance, in response to the modernist slogan of Ludwig Mies van der Rohe that "less is more", the postmodernist Robert Venturi rejoined that "less is a bore". Dance The term "postmodern dance" is most strongly associated with the dancers of the Judson Dance Theater located in New York City during the 1960s and 1970s. Arguably its most important principle is taken from the composer John Cage's efforts to break down the distinction between art and life. This was developed in particular by the American dancer and choreographer Merce Cunningham. In the 1980s and 1990s dance began to incorporate other typically postmodern features such as the mixing of genres, challenging high–low cultural distinctions, and incorporating a political dimension. Graphic design Early mention of postmodernism as an element of graphic design appeared in the British magazine Design. A characteristic of postmodern graphic design is that "retro, techno, punk, grunge, beach, parody, and pastiche were all conspicuous trends. Each had its own sites and venues, detractors and advocates." Literature In 1971, the American scholar Ihab Hassan made the term popular in literary studies as a description of the new art emerging in the 1960s. According to scholar David Herwitz, writers such as John Barth and Donald Barthelme (and, later, Thomas Pynchon) responded in various ways to the aesthetic innovations of Finnegans Wake and the late work of Samuel Beckett. Postmodern literature often calls attention to issues regarding its own complicated connection to reality. The French critic Roland Barthes declared the novel to be an exhausted form and explored what it means to continue to write novels under such a condition. In Postmodernist Fiction (1987), Brian McHale details the shift from modernism to postmodernism, arguing that the former is characterized by an epistemological dominant and that postmodern works have developed out of modernism and are primarily concerned with questions of ontology. McHale's "What Was Postmodernism?" (2007) follows Raymond Federman's lead in now using the past tense when discussing postmodernism. Music The composer Jonathan Kramer has written that avant-garde musical compositions (which some would consider modernist rather than postmodernist) "defy more than seduce the listener, and they extend by potentially unsettling means the very idea of what music is." 
In the 1960s, composers such as Terry Riley, Henryk Górecki, Bradley Joseph, John Adams, Steve Reich, Philip Glass, Michael Nyman, and Lou Harrison reacted to the perceived elitism and dissonant sound of atonal academic modernism by producing music with simple textures and relatively consonant harmonies, whilst others, most notably John Cage, challenged the prevailing narratives of beauty and objectivity common to Modernism. Dominic Strinati, an author on postmodernism, has noted that it is also important "to include in this category the so-called 'art rock' musical innovations and mixing of styles associated with groups like Talking Heads, and performers like Laurie Anderson, together with the self-conscious 'reinvention of disco' by the Pet Shop Boys". In the late 20th century, avant-garde academics labelled American singer Madonna as the "personification of the postmodern" because "the postmodern condition is characterized by fragmentation, de-differentiation, pastiche, retrospection and anti-foundationalism", which they argued Madonna embodied. Christian writer Graham Cray also said that "Madonna is perhaps the most visible example of what is called post-modernism", and Martin Amis described her as "perhaps the most postmodern personage on the planet". She was also suggested by literary critic Olivier Sécardin to epitomise postmodernism. In theory In the 1970s, a disparate group of poststructuralists in France developed a critique of modern philosophy with roots discernible in Friedrich Nietzsche, Søren Kierkegaard, and Martin Heidegger. Although few themselves relied upon the term, they became known to many as postmodern theorists. Notable figures include Jacques Derrida, Michel Foucault, Jean-François Lyotard, Jean Baudrillard, and others. By the 1980s, this spread to America in the work of Richard Rorty and others. Poststructuralism Poststructuralists, like structuralists, start from the assumption that people's identities, values, and economic conditions determine each other rather than having intrinsic properties that can be understood in isolation. While structuralism explores how meaning is produced by a set of essential relationships in an overarching quasi-linguistic system, poststructuralism accepts this premise, but rejects the assumption that such systems can ever be fixed or centered. Deconstruction Deconstruction is a practice of philosophy, literary criticism, and textual analysis developed by Jacques Derrida. Derrida's work has been seen as rooted in a statement found in Of Grammatology: "il n'y a pas de hors-texte" ("there is no outside-text"). This statement is part of a critique of "inside" and "outside" metaphors when referring to the text, and is a corollary to the observation that there is no "inside" of a text as well. This attention to a text's unacknowledged reliance on metaphors and figures embedded within its discourse is characteristic of Derrida's approach. Derrida's method sometimes involves demonstrating that a given philosophical discourse depends on binary oppositions or excluding terms that the discourse itself has declared to be irrelevant or inapplicable. Derrida's philosophy inspired a postmodern movement called deconstructivism among architects, characterized by a design that rejects structural "centers" and encourages decentralized play among its elements. Derrida discontinued his involvement with the movement after the publication of his collaborative project with architect Peter Eisenman in Chora L Works: Jacques Derrida and Peter Eisenman. 
The Postmodern Condition Jean-François Lyotard is credited with being the first to use the term "postmodern" in a philosophical context, in his 1979 work The Postmodern Condition: A Report on Knowledge. In it, he follows Wittgenstein's language games model and speech act theory, contrasting two different language games, that of the expert, and that of the philosopher. He talks about the transformation of knowledge into information in the computer age and likens the transmission or reception of coded messages (information) to a position within a language game. Lyotard defined philosophical postmodernism in The Postmodern Condition, writing: "Simplifying to the extreme, I define postmodern as incredulity towards metanarratives...." where what he means by metanarrative (in French, grands récits) is something like a unified, complete, universal, and epistemically certain story about everything that is. Against totalizing metanarratives, Lyotard and other postmodern philosophers argue that truth is always dependent upon historical and social context rather than being absolute and universal—and that truth is always partial and "at issue" rather than being complete and certain. In society Urban planning Modernism sought to design and plan cities that followed the logic of the new model of industrial mass production, reverting to large-scale solutions, aesthetic standardisation, and prefabricated design solutions. Modernism eroded urban living by its failure to recognise differences and its aim towards homogeneous landscapes (Simonsen 1990, 57). Jane Jacobs' 1961 book The Death and Life of Great American Cities was a sustained critique of urban planning as it had developed within modernism and marked a transition from modernity to postmodernity in thinking about urban planning. The transition from modernism to postmodernism is often said to have happened at 3:32 pm on 15 July 1972, when Pruitt–Igoe, a housing development for low-income people in St. Louis designed by architect Minoru Yamasaki, which had been a prize-winning version of Le Corbusier's 'machine for modern living,' was deemed uninhabitable and was torn down. Since then, postmodernism has involved theories that embrace and aim to create diversity. It exalts uncertainty, flexibility and change and rejects utopianism while embracing a utopian way of thinking and acting. Postmodernity of 'resistance' seeks to deconstruct modernism and is a critique of the origins without necessarily returning to them. As a result of postmodernism, planners are much less inclined to lay a firm or steady claim to there being one single 'right way' of engaging in urban planning and are more open to different styles and ideas of 'how to plan'. The postmodern approach to understanding the city was pioneered in the 1980s by what could be called the "Los Angeles School of Urbanism", centered on UCLA's Urban Planning Department, where contemporary Los Angeles was taken to be the postmodern city par excellence, counterposed to what had been the dominant ideas of the Chicago School formed in the 1920s at the University of Chicago, with its framework of urban ecology, its emphasis on functional areas of use within a city, and its concentric-circle model for understanding the sorting of different population groups. 
Edward Soja of the Los Angeles School combined Marxist and postmodern perspectives and focused on the economic and social changes (globalization, specialization, industrialization/deindustrialization, neo-liberalism, mass migration) that led to the creation of large city-regions with their patchwork of population groups and economic uses. Legacy Since the late 1990s, there has been a growing sentiment in popular culture and in academia that postmodernism "has gone out of fashion". Others argue that postmodernism is dead in the context of current cultural production. Post-postmodernism The connection between postmodernism, posthumanism, and cyborgism has led to a challenge to postmodernism, for which the terms Post-postmodernism and postpoststructuralism were first coined in 2003. More recently, metamodernism, post-postmodernism and the "death of postmodernism" have been widely debated: in 2007 Andrew Hoberek noted in his introduction to a special issue of the journal Twentieth-Century Literature titled "After Postmodernism" that "declarations of postmodernism's demise have become a critical commonplace". A small group of critics has put forth a range of theories that aim to describe culture or society in the alleged aftermath of postmodernism, most notably Raoul Eshelman (performatism), Gilles Lipovetsky (hypermodernity), Nicolas Bourriaud (altermodern), and Alan Kirby (digimodernism, formerly called pseudo-modernism). None of these new theories or labels have so far gained very widespread acceptance. Sociocultural anthropologist Nina Müller-Schwarze offers neostructuralism as a possible direction. The exhibition Postmodernism – Style and Subversion 1970–1990 at the Victoria and Albert Museum (London, 24 September 2011 – 15 January 2012) was billed as the first show to document postmodernism as a historical movement. Criticisms Criticisms of postmodernism are intellectually diverse. Since postmodernism criticizes both conservative and modernist values as well as universalist concepts such as objective reality, morality, truth, reason, and social progress, critics of postmodernism often defend such concepts from various angles. Media theorist Dick Hebdige criticized the vagueness of the term, enumerating a long list of otherwise unrelated concepts that people have designated as postmodernism, from "the décor of a room" or "a 'scratch' video", to fear of nuclear armageddon and the "implosion of meaning", and stated that anything that could signify all of those things was "a buzzword". The analytic philosopher Daniel Dennett criticized its impact on the humanities, characterizing it as producing "conversations in which nobody is wrong and nothing can be confirmed, only asserted with whatever style you can muster". Criticism of postmodernist movements in the arts includes objections to the departure from beauty, the reliance on language for the art to have meaning, a lack of coherence or comprehensibility, deviation from clear structure, and the consistent use of dark and negative themes. External links Discourses of Postmodernism. 
Multilingual bibliography by Janusz Przychodzen (PDF file) Modernity, postmodernism and the tradition of dissent, by Lloyd Spencer (1998) Postmodernism and truth by philosopher Daniel Dennett Stanford Encyclopedia of Philosophy's entry on postmodernism
23604
https://en.wikipedia.org/wiki/Photography
Photography
Photography is the art, application, and practice of creating images by recording light, either electronically by means of an image sensor, or chemically by means of a light-sensitive material such as photographic film. It is employed in many fields of science, manufacturing (e.g., photolithography), and business, as well as its more direct uses for art, film and video production, recreational purposes, hobby, and mass communication. A person who makes photographs is called a photographer. Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. With an electronic image sensor, this produces an electrical charge at each pixel, which is electronically processed and stored in a digital image file for subsequent display or processing. The result with photographic emulsion is an invisible latent image, which is later chemically "developed" into a visible image, either negative or positive, depending on the purpose of the photographic material and the method of processing. A negative image on film is traditionally used to photographically create a positive image on a paper base, known as a print, either by using an enlarger or by contact printing. Etymology The word "photography" was created from the Greek roots (), genitive of (), "light" and () "representation by means of lines" or "drawing", together meaning "drawing with light". Several people may have coined the same new term from these roots independently. Hércules Florence, a French painter and inventor living in Campinas, Brazil, used the French form of the word, , in private notes which a Brazilian historian believes were written in 1834. This claim is widely reported but is not yet largely recognized internationally. The first use of the word by Florence became widely known after the research of Boris Kossoy in 1980. The German newspaper of 25 February 1839 contained an article entitled , discussing several priority claims – especially Henry Fox Talbot's – regarding Daguerre's claim of invention. The article is the earliest known occurrence of the word in public print. It was signed "J.M.", believed to have been Berlin astronomer Johann von Maedler. The astronomer John Herschel is also credited with coining the word, independent of Talbot, in 1839. The inventors Nicéphore Niépce, Talbot, and Louis Daguerre seem not to have known or used the word "photography", but referred to their processes as "Heliography" (Niépce), "Photogenic Drawing"/"Talbotype"/"Calotype" (Talbot), and "Daguerreotype" (Daguerre). History Precursor technologies Photography is the result of combining several technical discoveries, relating to seeing an image and capturing the image. The discovery of the camera obscura ("dark chamber" in Latin) that provides an image of a scene dates back to ancient China. Greek mathematicians Aristotle and Euclid independently described a camera obscura in the 5th and 4th centuries BCE. In the 6th century CE, Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments. The Arab physicist Ibn al-Haytham (Alhazen) (965–1040) also invented a camera obscura as well as the first true pinhole camera. The invention of the camera has been traced back to the work of Ibn al-Haytham. 
While the effects of a single light passing through a pinhole had been described earlier, Ibn al-Haytham gave the first correct analysis of the camera obscura, including the first geometrical and quantitative descriptions of the phenomenon, and was the first to use a screen in a dark room so that an image from one side of a hole in the surface could be projected onto a screen on the other side. He also first understood the relationship between the focal point and the pinhole, and performed early experiments with afterimages, laying the foundations for the invention of photography in the 19th century. Leonardo da Vinci mentions natural camerae obscurae that are formed by dark caves on the edge of a sunlit valley. A hole in the cave wall will act as a pinhole camera and project a laterally reversed, upside down image on a piece of paper. Renaissance painters used the camera obscura which, in fact, gives the optical rendering in color that dominates Western Art. It is a box with a small hole in one side, which allows specific light rays to enter, projecting an inverted image onto a viewing screen or paper. The birth of photography was then concerned with inventing means to capture and keep the image produced by the camera obscura. Albertus Magnus (1193–1280) discovered silver nitrate, and Georg Fabricius (1516–1571) discovered silver chloride, and the techniques described in Ibn al-Haytham's Book of Optics are capable of producing primitive photographs using medieval materials. Daniele Barbaro described a diaphragm in 1566. Wilhelm Homberg described how light darkened some chemicals (photochemical effect) in 1694. Around 1717, Johann Heinrich Schulze used a light-sensitive slurry to capture images of cut-out letters on a bottle and on that basis many German sources and some international ones credit Schulze as the inventor of photography. The fiction book Giphantie, published in 1760, by French author Tiphaigne de la Roche, described what can be interpreted as photography. In June 1802, British inventor Thomas Wedgwood made the first known attempt to capture the image in a camera obscura by means of a light-sensitive substance. He used paper or white leather treated with silver nitrate. Although he succeeded in capturing the shadows of objects placed on the surface in direct sunlight, and even made shadow copies of paintings on glass, it was reported in 1802 that "the images formed by means of a camera obscura have been found too faint to produce, in any moderate time, an effect upon the nitrate of silver." The shadow images eventually darkened all over. Invention The first permanent photoetching was an image produced in 1822 by the French inventor Nicéphore Niépce, but it was destroyed in a later attempt to make prints from it. Niépce was successful again in 1825. In 1826 he made the View from the Window at Le Gras, the earliest surviving photograph from nature (i.e., of the image of a real-world scene, as formed in a camera obscura by a lens). Because Niépce's camera photographs required an extremely long exposure (at least eight hours and probably several days), he sought to greatly improve his bitumen process or replace it with one that was more practical. In partnership with Louis Daguerre, he worked out post-exposure processing methods that produced visually superior results and replaced the bitumen with a more light-sensitive resin, but hours of exposure in the camera were still required. With an eye to eventual commercial exploitation, the partners opted for total secrecy. 
Niépce died in 1833 and Daguerre then redirected the experiments toward the light-sensitive silver halides, which Niépce had abandoned many years earlier because of his inability to make the images he captured with them light-fast and permanent. Daguerre's efforts culminated in what would later be named the daguerreotype process. The essential elements—a silver-plated surface sensitized by iodine vapor, developed by mercury vapor, and "fixed" with hot saturated salt water—were in place in 1837. The required exposure time was measured in minutes instead of hours. Daguerre took the earliest confirmed photograph of a person in 1838 while capturing a view of a Paris street: unlike the other pedestrian and horse-drawn traffic on the busy boulevard, which appears deserted, one man having his boots polished stood sufficiently still throughout the several-minutes-long exposure to be visible. The existence of Daguerre's process was publicly announced, without details, on 7 January 1839. The news created an international sensation. France soon agreed to pay Daguerre a pension in exchange for the right to present his invention to the world as the gift of France, which occurred when complete working instructions were unveiled on 19 August 1839. In that same year, American photographer Robert Cornelius is credited with taking the earliest surviving photographic self-portrait. In Brazil, Hercules Florence had apparently started working out a silver-salt-based paper process in 1832, later naming it Photographie. Meanwhile, a British inventor, William Fox Talbot, had succeeded in making crude but reasonably light-fast silver images on paper as early as 1834 but had kept his work secret. After reading about Daguerre's invention in January 1839, Talbot published his hitherto secret method and set about improving on it. At first, like other pre-daguerreotype processes, Talbot's paper-based photography typically required hours-long exposures in the camera, but in 1840 he created the calotype process, which used the chemical development of a latent image to greatly reduce the exposure needed and compete with the daguerreotype. In both its original and calotype forms, Talbot's process, unlike Daguerre's, created a translucent negative which could be used to print multiple positive copies; this is the basis of most modern chemical photography up to the present day, as daguerreotypes could only be replicated by rephotographing them with a camera. Talbot's famous tiny paper negative of the Oriel window in Lacock Abbey, one of a number of camera photographs he made in the summer of 1835, may be the oldest camera negative in existence. In March 1837, Steinheil, along with Franz von Kobell, used silver chloride and a cardboard camera to make pictures in negative of the Frauenkirche and other buildings in Munich, then taking another picture of the negative to get a positive, the actual black and white reproduction of a view on the object. The pictures produced were round with a diameter of 4 cm, the method was later named the "Steinheil method". In France, Hippolyte Bayard invented his own process for producing direct positive paper prints and claimed to have invented photography earlier than Daguerre or Talbot. British chemist John Herschel made many contributions to the new field. He invented the cyanotype process, later familiar as the "blueprint". He was the first to use the terms "photography", "negative" and "positive". 
He had discovered in 1819 that sodium thiosulphate was a solvent of silver halides, and in 1839 he informed Talbot (and, indirectly, Daguerre) that it could be used to "fix" silver-halide-based photographs and make them completely light-fast. He made the first glass negative in late 1839. In the March 1851 issue of The Chemist, Frederick Scott Archer published his wet plate collodion process. It became the most widely used photographic medium until the gelatin dry plate, introduced in the 1870s, eventually replaced it. There are three subsets to the collodion process; the Ambrotype (a positive image on glass), the Ferrotype or Tintype (a positive image on metal) and the glass negative, which was used to make positive prints on albumen or salted paper. Many advances in photographic glass plates and printing were made during the rest of the 19th century. In 1891, Gabriel Lippmann introduced a process for making natural-color photographs based on the optical phenomenon of the interference of light waves. His scientifically elegant and important but ultimately impractical invention earned him the Nobel Prize in Physics in 1908. Glass plates were the medium for most original camera photography from the late 1850s until the general introduction of flexible plastic films during the 1890s. Although the convenience of the film greatly popularized amateur photography, early films were somewhat more expensive and of markedly lower optical quality than their glass plate equivalents, and until the late 1910s they were not available in the large formats preferred by most professional photographers, so the new medium did not immediately or completely replace the old. Because of the superior dimensional stability of glass, the use of plates for some scientific applications, such as astrophotography, continued into the 1990s, and in the niche field of laser holography, it has persisted into the 21st century. Film Hurter and Driffield began pioneering work on the light sensitivity of photographic emulsions in 1876. Their work enabled the first quantitative measure of film speed to be devised. The first flexible photographic roll film was marketed by George Eastman, founder of Kodak in 1885, but this original "film" was actually a coating on a paper base. As part of the processing, the image-bearing layer was stripped from the paper and transferred to a hardened gelatin support. The first transparent plastic roll film followed in 1889. It was made from highly flammable nitrocellulose known as nitrate film. Although cellulose acetate or "safety film" had been introduced by Kodak in 1908, at first it found only a few special applications as an alternative to the hazardous nitrate film, which had the advantages of being considerably tougher, slightly more transparent, and cheaper. The changeover was not completed for X-ray films until 1933, and although safety film was always used for 16 mm and 8 mm home movies, nitrate film remained standard for theatrical 35 mm motion pictures until it was finally discontinued in 1951. Films remained the dominant form of photography until the early 21st century when advances in digital photography drew consumers to digital formats. Although modern photography is dominated by digital users, film continues to be used by enthusiasts and professional photographers. 
The distinctive "look" of film based photographs compared to digital images is likely due to a combination of factors, including (1) differences in spectral and tonal sensitivity (S-shaped density-to-exposure (H&D curve) with film vs. linear response curve for digital CCD sensors), (2) resolution, and (3) continuity of tone. Black-and-white Originally, all photography was monochrome, or black-and-white. Even after color film was readily available, black-and-white photography continued to dominate for decades, due to its lower cost, chemical stability, and its "classic" photographic look. The tones and contrast between light and dark areas define black-and-white photography. Monochromatic pictures are not necessarily composed of pure blacks, whites, and intermediate shades of gray but can involve shades of one particular hue depending on the process. The cyanotype process, for example, produces an image composed of blue tones. The albumen print process, publicly revealed in 1847, produces brownish tones. Many photographers continue to produce some monochrome images, sometimes because of the established archival permanence of well-processed silver-halide-based materials. Some full-color digital images are processed using a variety of techniques to create black-and-white results, and some manufacturers produce digital cameras that exclusively shoot monochrome. Monochrome printing or electronic display can be used to salvage certain photographs taken in color which are unsatisfactory in their original form; sometimes when presented as black-and-white or single-color-toned images they are found to be more effective. Although color photography has long predominated, monochrome images are still produced, mostly for artistic reasons. Almost all digital cameras have an option to shoot in monochrome, and almost all image editing software can combine or selectively discard RGB color channels to produce a monochrome image from one shot in color. Color Color photography was explored beginning in the 1840s. Early experiments in color required extremely long exposures (hours or days for camera images) and could not "fix" the photograph to prevent the color from quickly fading when exposed to white light. The first permanent color photograph was taken in 1861 using the three-color-separation principle first published by Scottish physicist James Clerk Maxwell in 1855. The foundation of virtually all practical color processes, Maxwell's idea was to take three separate black-and-white photographs through red, green and blue filters. This provides the photographer with the three basic channels required to recreate a color image. Transparent prints of the images could be projected through similar color filters and superimposed on the projection screen, an additive method of color reproduction. A color print on paper could be produced by superimposing carbon prints of the three images made in their complementary colors, a subtractive method of color reproduction pioneered by Louis Ducos du Hauron in the late 1860s. Russian photographer Sergei Mikhailovich Prokudin-Gorskii made extensive use of this color separation technique, employing a special camera which successively exposed the three color-filtered images on different parts of an oblong plate. Because his exposures were not simultaneous, unsteady subjects exhibited color "fringes" or, if rapidly moving through the scene, appeared as brightly colored ghosts in the resulting projected or printed images. 
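The additive principle just described can be sketched in a few lines of code. The short Python/NumPy example below is only a minimal illustration under stated assumptions (synthetic random arrays stand in for scanned separations, and the variable names are invented for the sketch): it stacks three black-and-white separations, notionally taken through red, green and blue filters, into the channels of a single color image, the digital counterpart of projecting the three positives through matching filters.

```python
# Minimal sketch of additive three-color synthesis from black-and-white
# separations; synthetic random frames stand in for real scanned plates.
import numpy as np

h, w = 4, 6
rng = np.random.default_rng(42)
# Pretend these are monochrome exposures made through red, green and
# blue filters respectively (8-bit values).
red_sep = rng.integers(0, 256, (h, w), dtype=np.uint8)
green_sep = rng.integers(0, 256, (h, w), dtype=np.uint8)
blue_sep = rng.integers(0, 256, (h, w), dtype=np.uint8)

# Stacking the separations into the R, G and B channels of one array is
# the digital counterpart of projecting the three positives through
# matching filters and superimposing them on a screen.
color = np.dstack([red_sep, green_sep, blue_sep])
assert color.shape == (h, w, 3)

# If the exposures were not simultaneous, misregistration between the
# separations shows up as the colored "fringes" described above.
```
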
Implementation of color photography was hindered by the limited sensitivity of early photographic materials, which were mostly sensitive to blue, only slightly sensitive to green, and virtually insensitive to red. The discovery of dye sensitization by photochemist Hermann Vogel in 1873 suddenly made it possible to add sensitivity to green, yellow and even red. Improved color sensitizers and ongoing improvements in the overall sensitivity of emulsions steadily reduced the once-prohibitive long exposure times required for color, bringing it ever closer to commercial viability. Autochrome, the first commercially successful color process, was introduced by the Lumière brothers in 1907. Autochrome plates incorporated a mosaic color filter layer made of dyed grains of potato starch, which allowed the three color components to be recorded as adjacent microscopic image fragments. After an Autochrome plate was reversal processed to produce a positive transparency, the starch grains served to illuminate each fragment with the correct color and the tiny colored points blended together in the eye, synthesizing the color of the subject by the additive method. Autochrome plates were one of several varieties of additive color screen plates and films marketed between the 1890s and the 1950s. Kodachrome, the first modern "integral tripack" (or "monopack") color film, was introduced by Kodak in 1935. It captured the three color components in a multi-layer emulsion. One layer was sensitized to record the red-dominated part of the spectrum, another layer recorded only the green part and a third recorded only the blue. Without special film processing, the result would simply be three superimposed black-and-white images, but complementary cyan, magenta, and yellow dye images were created in those layers by adding color couplers during a complex processing procedure. Agfa's similarly structured Agfacolor Neu was introduced in 1936. Unlike Kodachrome, the color couplers in Agfacolor Neu were incorporated into the emulsion layers during manufacture, which greatly simplified the processing. Currently available color films still employ a multi-layer emulsion and the same principles, most closely resembling Agfa's product. Instant color film, used in a special camera which yielded a unique finished color print only a minute or two after the exposure, was introduced by Polaroid in 1963. Color photography may form images as positive transparencies, which can be used in a slide projector, or as color negatives intended for use in creating positive color enlargements on specially coated paper. The latter is now the most common form of film (non-digital) color photography owing to the introduction of automated photo printing equipment. After a transition period centered around 1995–2005, color film was relegated to a niche market by inexpensive multi-megapixel digital cameras. Film continues to be the preference of some photographers because of its distinctive "look". Digital In 1981, Sony unveiled the first consumer camera to use a charge-coupled device for imaging, eliminating the need for film: the Sony Mavica. While the Mavica saved images to disk, the images were displayed on television, and the camera was not fully digital. The first digital camera to both record and save images in a digital format was the Fujix DS-1P created by Fujifilm in 1988. In 1991, Kodak unveiled the DCS 100, the first commercially available digital single-lens reflex camera. 
Although its high cost precluded uses other than photojournalism and professional photography, commercial digital photography was born. Digital imaging uses an electronic image sensor to record the image as a set of electronic data rather than as chemical changes on film. An important difference between digital and chemical photography is that chemical photography resists photo manipulation because it involves film and photographic paper, while digital imaging is an easily manipulated medium. This difference allows for a degree of image post-processing that is comparatively difficult in film-based photography and permits different communicative potentials and applications. Digital photography dominates the 21st century. More than 99% of photographs taken around the world are taken with digital cameras, increasingly with smartphones. Techniques A large variety of photographic techniques and media are used in the process of capturing images for photography. These include the camera; dualphotography; full-spectrum, ultraviolet and infrared media; light field photography; and other imaging techniques. Cameras The camera is the image-forming device, and a photographic plate, photographic film or a silicon electronic image sensor is the capture medium. The respective recording medium can be the plate or film itself, or a digital magnetic or electronic memory. Photographers control the camera and lens to "expose" the light recording material to the required amount of light to form a "latent image" (on plate or film) or RAW file (in digital cameras) which, after appropriate processing, is converted to a usable image. Digital cameras use an electronic image sensor based on light-sensitive electronics such as charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) technology. The resulting digital image is stored electronically, but can be reproduced on paper. The camera (or 'camera obscura') is a dark room or chamber from which, as far as possible, all light is excluded except the light that forms the image. It was discovered and used in the 16th century by painters. The subject being photographed, however, must be illuminated. Cameras can range from small to very large, a whole room that is kept dark while the object to be photographed is in another room where it is properly illuminated. This was common for reproduction photography of flat copy when large film negatives were used (see Process camera). As soon as photographic materials became "fast" (sensitive) enough for taking candid or surreptitious pictures, small "detective" cameras were made, some actually disguised as a book or handbag or pocket watch (the Ticka camera) or even worn hidden behind an Ascot necktie with a tie pin that was really the lens. The movie camera is a type of photographic camera that takes a rapid sequence of photographs on a recording medium. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images, each called a "frame". This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the "frame rate" (number of frames per second). While viewing, a person's eyes and brain merge the separate pictures to create the illusion of motion. Stereoscopic Photographs, both monochrome and color, can be captured and displayed through two side-by-side images that emulate human stereoscopic vision. Stereoscopic photography was the first that captured figures in motion. 
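As a minimal illustration of the side-by-side presentation just described, the following Python/NumPy sketch (synthetic arrays stand in for an actual left-eye and right-eye photograph; the names and sizes are assumptions made for the example) places the two views next to each other so the pair can be examined in a stereoscope-style viewer or free-viewed.

```python
# Minimal sketch: compose a left/right photograph pair into a single
# side-by-side frame; synthetic arrays stand in for the two photographs.
import numpy as np

h, w = 4, 6
rng = np.random.default_rng(0)
left_eye = rng.integers(0, 256, (h, w, 3), dtype=np.uint8)
right_eye = rng.integers(0, 256, (h, w, 3), dtype=np.uint8)

# Left view on the left, right view on the right (parallel viewing);
# swapping the two arrays gives the cross-eyed viewing order instead.
side_by_side = np.hstack([left_eye, right_eye])
assert side_by_side.shape == (h, 2 * w, 3)
```
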
Although known colloquially as "3-D" photography, such imaging is more accurately termed stereoscopy. Such cameras have long been realized by using film and, more recently, by digital electronic methods (including cell phone cameras). Dualphotography Dualphotography consists of photographing a scene from both sides of a photographic device at once (e.g. camera for back-to-back dualphotography, or two networked cameras for portal-plane dualphotography). The dualphoto apparatus can be used to simultaneously capture both the subject and the photographer, or both sides of a geographical place at once, thus adding a supplementary narrative layer to that of a single image. Full-spectrum, ultraviolet and infrared Ultraviolet and infrared films have been available for many decades and employed in a variety of photographic avenues since the 1960s. New technological trends in digital photography have opened a new direction in full spectrum photography, where careful filtering choices across the ultraviolet, visible and infrared lead to new artistic visions. Modified digital cameras can detect some ultraviolet, all of the visible and much of the near infrared spectrum, as most digital imaging sensors are sensitive from about 350 nm to 1000 nm. An off-the-shelf digital camera contains an infrared hot mirror filter that blocks most of the infrared and a bit of the ultraviolet that would otherwise be detected by the sensor, narrowing the accepted range from about 400 nm to 700 nm. Replacing a hot mirror or infrared blocking filter with an infrared pass or a wide spectrally transmitting filter allows the camera to detect wider-spectrum light at greater sensitivity. Without the hot-mirror, the red, green and blue (or cyan, yellow and magenta) colored micro-filters placed over the sensor elements pass varying amounts of ultraviolet (through the blue filter's window) and infrared (primarily through the red micro-filters and, to a somewhat lesser extent, through the green and blue micro-filters). Full spectrum photography is used for fine art photography, geology, forensics and law enforcement. Layering Layering is a photographic composition technique that manipulates the foreground, subject or middle-ground, and background layers in a way that they all work together to tell a story through the image. Layers may be incorporated by altering the focal length or by distorting the perspective by positioning the camera in a certain spot. People, movement, light and a variety of objects can be used in layering. Light field Digital methods of image capture and display processing have enabled the new technology of "light field photography" (also known as synthetic aperture photography). This process allows focusing at various depths of field to be selected after the photograph has been captured. As explained by Michael Faraday in 1846, the "light field" is understood as 5-dimensional, with each point in 3-D space having attributes of two more angles that define the direction of each ray passing through that point. These additional vector attributes can be captured optically through the use of microlenses at each pixel point within the 2-dimensional image sensor. Every pixel of the final image is actually a selection from each sub-array located under each microlens, as identified by a post-image capture focus algorithm. Other Besides the camera, other methods of forming images with light are available. For instance, a photocopy or xerography machine forms permanent images but uses the transfer of static electrical charges rather than a photographic medium, hence the term electrophotography. 
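The post-capture refocusing described above under light field photography can be approximated by shifting the sub-aperture views recorded under the microlenses against one another and averaging them, so that detail at the chosen depth aligns while everything else blurs. The sketch below is a simplified Python/NumPy illustration under assumed conditions (a 4-D array layout, a monochrome light field, and integer pixel shifts), not the algorithm of any particular camera.

```python
# Simplified shift-and-add refocusing of a 4-D light field.
# lightfield has shape (U, V, H, W): one H x W monochrome sub-aperture
# view for each microlens/viewpoint index (u, v).
import numpy as np

def refocus(lightfield: np.ndarray, alpha: float) -> np.ndarray:
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the central
            # view, then accumulate; alpha selects the synthetic focal plane.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(lightfield[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# Tiny synthetic light field: 5 x 5 viewpoints of a 64 x 64 scene.
rng = np.random.default_rng(1)
lf = rng.random((5, 5, 64, 64))
refocused_a = refocus(lf, alpha=1.5)   # one synthetic focal depth
refocused_b = refocus(lf, alpha=-1.5)  # another
```
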
Photograms are images produced by the shadows of objects cast on the photographic paper, without the use of a camera. Objects can also be placed directly on the glass of an image scanner to produce digital pictures. Types Amateur Amateur photographers take photos for personal use, as a hobby or out of casual interest, rather than as a business or job. The quality of amateur work can be comparable to that of many professionals. Amateurs can fill a gap in subjects or topics that might not otherwise be photographed if they are not commercially useful or salable. Amateur photography grew during the late 19th century due to the popularization of the hand-held camera. Twenty-first century social media and near-ubiquitous camera phones have made photographic and video recording pervasive in everyday life. In the mid-2010s smartphone cameras added numerous automatic assistance features like color management, autofocus face detection and image stabilization that significantly decreased the skill and effort needed to take high-quality images. Commercial Commercial photography is probably best defined as any photography for which the photographer is paid for images rather than works of art. In this light, money could be paid for the subject of the photograph or the photograph itself. The commercial photographic world could include: Advertising photography: photographs made to illustrate and usually sell a service or product. These images, such as packshots, are generally done with an advertising agency, design firm or with an in-house corporate design team. Architectural photography focuses on capturing photographs of buildings and architectural structures that are aesthetically pleasing and accurate in terms of representations of their subjects. Event photography focuses on photographing guests and occurrences at mostly social events. Fashion and glamour photography usually incorporates models and is a form of advertising photography. Fashion photography, like the work featured in Harper's Bazaar, emphasizes clothes and other products; glamour photography emphasizes the model and body form and is popular in advertising and men's magazines. Models in glamour photography sometimes work nude. 360 product photography displays a series of photos to give the impression of a rotating object. This technique is commonly used by ecommerce websites to help shoppers visualise products. Concert photography focuses on capturing candid images of the artist or band as well as the atmosphere (including the crowd). Many of these photographers work freelance and are contracted through an artist or their management to cover a specific show. Concert photographs are often used to promote the artist or band in addition to the venue. Crime scene photography consists of photographing scenes of crime such as robberies and murders. A black and white camera or an infrared camera may be used to capture specific details. Still life photography usually depicts inanimate subject matter, typically commonplace objects which may be either natural or man-made. Still life is a broader category for food and some natural photography and can be used for advertising purposes. Real estate photography focuses on the production of photographs showcasing a property that is for sale; such photographs require the use of wide-angle lenses and extensive knowledge of high-dynamic-range imaging. Food photography can be used for editorial, packaging or advertising purposes. 
Food photography is similar to still life photography but requires some special skills. Photojournalism can be considered a subset of editorial photography. Photographs made in this context are accepted as a documentation of a news story. Paparazzi is a form of photojournalism in which the photographer captures candid images of athletes, celebrities, politicians, and other prominent people. Portrait and wedding photography: photographs made and sold directly to the end user of the images. Landscape photography typically captures the presence of nature but can also focus on human-made features or disturbances of landscapes. Wildlife photography demonstrates the life of wild animals. Art During the 20th century, both fine art photography and documentary photography became accepted by the English-speaking art world and the gallery system. In the United States, a handful of photographers, including Alfred Stieglitz, Edward Steichen, John Szarkowski, F. Holland Day, and Edward Weston, spent their lives advocating for photography as a fine art. At first, fine art photographers tried to imitate painting styles. This movement is called Pictorialism, often using soft focus for a dreamy, 'romantic' look. In reaction to that, Weston, Ansel Adams, and others formed the Group f/64 to advocate 'straight photography', the photograph as a (sharply focused) thing in itself and not an imitation of something else. The aesthetics of photography is a matter that continues to be discussed regularly, especially in artistic circles. Many artists argued that photography was the mechanical reproduction of an image. If photography is authentically art, then photography in the context of art would need redefinition, such as determining what component of a photograph makes it beautiful to the viewer. The controversy began with the earliest images "written with light"; Nicéphore Niépce, Louis Daguerre, and others among the very earliest photographers were met with acclaim, but some questioned if their work met the definitions and purposes of art. Clive Bell in his classic essay Art states that only "significant form" can distinguish art from what is not art. On 7 February 2007, Sotheby's London sold the 2001 photograph 99 Cent II Diptychon for an unprecedented $3,346,456 to an anonymous bidder, making it the most expensive photograph at the time. Conceptual photography turns a concept or idea into a photograph. Even though the photographs depict real objects, the subject is strictly abstract. In parallel to this development, the then largely separate interface between painting and photography was closed in the second half of the 20th century with the chemigram of Pierre Cordier and the chemogram of Josef H. Neumann. In 1974 the chemograms by Josef H. Neumann concluded the separation of the painterly background and the photographic layer by showing the picture elements in a symbiosis that had never existed before, as an unmistakable unique specimen, in a simultaneous painterly and at the same time real photographic perspective, using lenses, within a photographic layer, united in colors and shapes. This Neumann chemogram from the 1970s thus differs from the earlier cameraless chemigrams of Pierre Cordier and the photograms of Man Ray or László Moholy-Nagy of the previous decades. 
These camera-less works of art had been made almost since the invention of photography by various important artists: Hippolyte Bayard, Thomas Wedgwood, and William Henry Fox Talbot in its early stages, later Man Ray and László Moholy-Nagy in the 1920s, and in the 1930s the painters Edmund Kesting and Christian Schad, who draped objects directly onto appropriately sensitized photo paper and used a light source without a camera. Photojournalism Photojournalism is a particular form of photography (the collecting, editing, and presenting of news material for publication or broadcast) that employs images in order to tell a news story. It is now usually understood to refer only to still images, but in some cases the term also refers to video used in broadcast journalism. Photojournalism is distinguished from other closely related branches of photography (e.g., documentary photography, social documentary photography, street photography or celebrity photography) by complying with a rigid ethical framework which demands that the work be both honest and impartial whilst telling the story in strictly journalistic terms. Photojournalists create pictures that contribute to the news media, and help communities connect with one another. Photojournalists must be well informed and knowledgeable about events happening right outside their door. They deliver news in a creative format that is not only informative, but also entertaining; the field also includes sports photography. Science and forensics The camera has a long and distinguished history as a means of recording scientific phenomena from its first use by Daguerre and Fox Talbot, such as astronomical events (eclipses, for example), small creatures and plants when the camera was attached to the eyepiece of microscopes (in photomicroscopy), and for macro photography of larger specimens. The camera also proved useful in recording crime scenes and the scenes of accidents, such as the Wootton bridge collapse in 1861. The methods used in analysing photographs for use in legal cases are collectively known as forensic photography. Crime scene photos are usually taken from three vantage points: overview, mid-range, and close-up. In 1845 Francis Ronalds, the Honorary Director of the Kew Observatory, invented the first successful camera to make continuous recordings of meteorological and geomagnetic parameters. Different machines produced 12- or 24-hour photographic traces of the minute-by-minute variations of atmospheric pressure, temperature, humidity, atmospheric electricity, and the three components of geomagnetic forces. The cameras were supplied to numerous observatories around the world and some remained in use until well into the 20th century. A little later, Charles Brooke developed similar instruments for the Greenwich Observatory. Science regularly uses image technology that derives from the design of the pinhole camera to avoid distortions that can be caused by lenses. X-ray machines are similar in design to pinhole cameras, with high-grade filters and laser radiation. Photography has become universal in recording events and data in science and engineering, and at crime scenes or accident scenes. The method has been much extended by using other wavelengths, such as infrared photography and ultraviolet photography, as well as spectroscopy. Those methods were first used in the Victorian era and have been improved much further since that time. The first photograph of a single atom was captured in 2012 by physicists at Griffith University, Australia. 
They used an electric field to trap an ion of the element ytterbium. The image was recorded on a CCD, an electronic image sensor that plays the role of photographic film. Wildlife photography Wildlife photography involves capturing images of various forms of wildlife. Unlike other forms of photography such as product or food photography, successful wildlife photography requires a photographer to choose the right place and right time when specific wildlife are present and active. It often requires great patience and considerable skill and command of the right photographic equipment. Social and cultural implications There are many ongoing questions about different aspects of photography. In her On Photography (1977), Susan Sontag dismisses the objectivity of photography. This is a highly debated subject within the photographic community. Sontag argues, "To photograph is to appropriate the thing photographed. It means putting one's self into a certain relation to the world that feels like knowledge, and therefore like power." Photographers decide what to take a photo of, what elements to exclude and what angle to frame the photo, and these factors may reflect a particular socio-historical context. Along these lines, it can be argued that photography is a subjective form of representation. Modern photography has raised a number of concerns about its effect on society. In Alfred Hitchcock's Rear Window (1954), the camera is presented as promoting voyeurism. 'Although the camera is an observation station, the act of photographing is more than passive observing'. The camera doesn't rape or even possess, though it may presume, intrude, trespass, distort, exploit, and, at the farthest reach of metaphor, assassinate – all activities that, unlike the sexual push and shove, can be conducted from a distance, and with some detachment. Digital imaging has raised ethical concerns because of the ease of manipulating digital photographs in post-processing. Many photojournalists have declared they will not crop their pictures or are forbidden from combining elements of multiple photos to make "photomontages", passing them off as "real" photographs. Today's technology has made image editing relatively simple for even the novice photographer. However, recent changes in in-camera processing allow digital fingerprinting of photos to detect tampering for purposes of forensic photography. Photography is one of the new media forms that change perception and the structure of society. Further unease has arisen around cameras in regard to desensitization. Fears have been raised that disturbing or explicit images are widely accessible to children and society at large. In particular, photos of war and pornography have caused a stir. Sontag is concerned that "to photograph is to turn people into objects that can be symbolically possessed". The discussion of desensitization goes hand in hand with debates about censored images. Sontag writes of her concern that the ability to censor pictures means the photographer has the ability to construct reality. One of the practices through which photography constitutes society is tourism. Tourism and photography combine to create a "tourist gaze" in which local inhabitants are positioned and defined by the camera lens. However, it has also been argued that there exists a "reverse gaze" through which indigenous photographees can position the tourist photographer as a shallow consumer of images. Law Photography is both restricted and protected by the law in many jurisdictions. 
Protection of photographs is typically achieved through the granting of copyright or moral rights to the photographer. In the United States, photography is protected as a First Amendment right and anyone is free to photograph anything seen in public spaces as long as it is in plain view. In the UK, a recent law (Counter-Terrorism Act 2008) increases the power of the police to prevent people, even press photographers, from taking pictures in public places. In South Africa, any person may photograph any other person, without their permission, in public spaces and the only specific restriction placed on what may not be photographed by government is related to anything classed as national security. Each country has different laws. See also Outline of photography Science of photography List of photographers List of photography awards List of most expensive photographs List of photographs considered the most important Astrophotography Image editing Imaging Photolab and minilab Visual arts Large format Medium format Microform References Further reading Introduction Barrett, T 2012, Criticizing Photographs: an introduction to understanding images, 5th edn, McGraw-Hill, New York. Bate, D. (2009), Photography: The Key Concepts, Bloomsbury, New York. Berger, J. (Dyer, G. ed.), (2013), Understanding a Photograph, Penguin Classics, London. Bright, S 2011, Art Photography Now, Thames & Hudson, London. Cotton, C. (2015), The Photograph as Contemporary Art, 3rd edn, Thames & Hudson, New York. Heiferman, M. (2013), Photography Changes Everything, Aperture Foundation, US. Shore, S. (2015), The Nature of Photographs, 2nd ed. Phaidon, New York. Wells, L. (2004), Photography. A Critical Introduction [Paperback], 3rd ed. Routledge, London. History A New History of Photography, ed. by Michel Frizot, Köln : Könemann, 1998 Franz-Xaver Schlegel, Das Leben der toten Dinge – Studien zur modernen Sachfotografie in den USA 1914–1935, 2 Bände, Stuttgart/Germany: Art in Life 1999, . Reference works Hans-Michael Koetzle: Das Lexikon der Fotografen: 1900 bis heute, Munich: Knaur 2002, 512 p., John Hannavy (ed.): Encyclopedia of Nineteenth-Century Photography, 1736 p., New York: Routledge 2005 Lynne Warren (Hrsg.): Encyclopedia of Twentieth-Century Photography, 1719 p., New York: Routledge, 2006 The Oxford Companion to the Photograph, ed. by Robin Lenman, Oxford University Press 2005 "The Focal Encyclopedia of Photography", Richard Zakia, Leslie Stroebel, Focal Press 1993, Other books Photography and The Art of Seeing by Freeman Patterson, Key Porter Books 1989, . The Art of Photography: An Approach to Personal Expression by Bruce Barnbaum, Rocky Nook 2010, . Image Clarity: High Resolution Photography by John B. Williams, Focal Press 1990, . External links World History of Photography From The History of Art. Daguerreotype to Digital: A Brief History of the Photographic Process – State Library & Archives of Florida French inventions 19th-century inventions Imaging Audiovisual introductions in 1822
23607
https://en.wikipedia.org/wiki/Pentateuch%20%28disambiguation%29
Pentateuch (disambiguation)
The Pentateuch is the first part of the Bible, consisting of Genesis, Exodus, Leviticus, Numbers, and Deuteronomy. It is also known as the Torah. Pentateuch may also refer to: Ashburnham Pentateuch, late 6th- or early 7th-century Latin illuminated manuscript of the Pentateuch Chumash, printed Torah, as opposed to a Torah scroll Samaritan Pentateuch, a version of the Hebrew Pentateuch, written in the Samaritan alphabet and used by the Samaritans, for whom it is the entire biblical canon Targum Yerushalmi, a western targum (translation) of the Torah (Pentateuch) from the land of Israel (as opposed to the eastern Babylonian Targum Onkelos) See also Torah (disambiguation) Chumash (disambiguation) Tanak (disambiguation) Hexateuch Octateuch
23612
https://en.wikipedia.org/wiki/Postmodern%20philosophy
Postmodern philosophy
Postmodern philosophy is a philosophical movement that arose in the second half of the 20th century as a critical response to assumptions allegedly present in modernist philosophical ideas regarding culture, identity, history, or language that were developed during the 18th-century Age of Enlightenment. Postmodernist thinkers developed concepts like différance, repetition, trace, and hyperreality to subvert "grand narratives", univocity of being, and epistemic certainty. Postmodern philosophy questions the importance of power relationships, personalization, and discourse in the "construction" of truth and world views. Many postmodernists appear to deny that an objective reality exists, and appear to deny that there are objective moral values. Jean-François Lyotard defined philosophical postmodernism in The Postmodern Condition, writing "Simplifying to the extreme, I define postmodern as incredulity towards meta narratives...." where what he means by metanarrative is something like a unified, complete, universal, and epistemically certain story about everything that is. Postmodernists reject metanarratives because they reject the conceptualization of truth that metanarratives presuppose. Postmodernist philosophers in general argue that truth is always contingent on historical and social context rather than being absolute and universal and that truth is always partial and "at issue" rather than being complete and certain. Postmodern philosophy is often particularly skeptical about simple binary oppositions characteristic of structuralism, emphasizing the problem of the philosopher cleanly distinguishing knowledge from ignorance, social progress from reversion, dominance from submission, good from bad, and presence from absence. Subjects On Literature Postmodern philosophy has had strong relations with the substantial literature of critical theory, although some critical theorists such as Jurgen Habermas have opposed postmodern philosophy. On The Enlightenment Many postmodern claims are critical of certain 18th-century Enlightenment values. Some postmodernists tolerate multiple conceptions of morality, even if they disagree with them subjectively. Postmodern writings often focus on deconstructing the role that power and ideology play in shaping discourse and belief. Postmodern philosophy shares ontological similarities with classical skeptical and relativistic belief systems. On Truth and Objectivity The Routledge Encyclopedia of Philosophy states that "The assumption that there is no common denominator in 'nature' or 'truth' ... that guarantees the possibility of neutral or objective thought" is a key assumption of postmodernism. The National Research Council has characterized the belief that "social science research can never generate objective or trustworthy knowledge" as an example of a postmodernist belief. Jean-François Lyotard's seminal 1979 The Postmodern Condition stated that its hypotheses "should not be accorded predictive value in relation to reality, but strategic value in relation to the questions raised". Lyotard's statement in 1984 that "I define postmodern as incredulity toward meta-narratives" extends to incredulity toward science. Jacques Derrida, who is generally identified as a postmodernist, stated that "every referent, all reality has the structure of a differential trace". 
There are strong similarities with post-modernism in the work of Paul Feyerabend; Feyerabend held that modern science is no more justified than witchcraft, and has denounced the "tyranny" of "abstract concepts such as 'truth', 'reality', or 'objectivity', which narrow people's vision and ways of being in the world". Feyerabend also defended astrology, adopted alternative medicine, and sympathized with creationism. Defenders of postmodernism state that many descriptions of postmodernism exaggerate its antipathy to science; for example, Feyerabend denied that he was "anti-science", accepted that some scientific theories are superior to other theories (even if science itself is not superior to other modes of inquiry), and attempted conventional medical treatments during his fight against cancer. Influences Postmodern philosophy was greatly influenced by the writings of Søren Kierkegaard and Friedrich Nietzsche in the 19th century and other early-to-mid 20th-century philosophers, including the phenomenologist Martin Heidegger, the psychoanalyst Jacques Lacan, cultural critic Roland Barthes, theorist Georges Bataille, and the later work of Ludwig Wittgenstein. Postmodern philosophy also drew from the world of the arts and architecture, particularly Marcel Duchamp, John Cage, and artists who practiced collage, as well as the architecture of Las Vegas and the Pompidou Centre. Postmodern Philosophers Michel Foucault Michel Foucault is often cited as an early postmodernist although he personally rejected that label. Following Nietzsche, Foucault argued that knowledge is produced through the operations of power, and changes fundamentally in different historical periods. Jean Baudrillard Baudrillard, known for his simulation theory, argued that the individual's experience and perception of reality derives its basis entirely from media-propagated ideals and images. The real and fantasy become indistinguishable, leading to the emergence of a wide-spread simulation of reality. Jean François Lyotard The writings of Lyotard were largely concerned with the role of narrative in human culture, and particularly how that role has changed as we have left modernity and entered a "postindustrial" or postmodern condition. He argued that modern philosophies legitimized their truth-claims not (as they themselves claimed) on logical or empirical grounds, but rather on the grounds of accepted stories (or "metanarratives") about knowledge and the world—comparing these with Wittgenstein's concept of language-games. He further argued that in our postmodern condition, these metanarratives no longer work to legitimize truth-claims. He suggested that in the wake of the collapse of modern metanarratives, people are developing a new "language-game"—one that does not make claims to absolute truth but rather celebrates a world of ever-changing relationships (among people and between people and the world). Jacques Derrida Derrida, the father of deconstruction, practiced philosophy as a form of textual criticism. He criticized Western philosophy as privileging the concept of presence and logos, as opposed to absence and markings or writings. Richard Rorty In the United States, a well-known pragmatist and self-proclaimed postmodernist was Richard Rorty. 
An analytic philosopher, Rorty believed that combining Willard Van Orman Quine's criticism of the analytic-synthetic distinction with Wilfrid Sellars's critique of the "Myth of the Given" allowed for an abandonment of the view of the thought or language as a mirror of a reality or an external world. Further, drawing upon Donald Davidson's criticism of the dualism between conceptual scheme and empirical content, he challenges the sense of questioning whether our particular concepts are related to the world in an appropriate way, whether we can justify our ways of describing the world as compared with other ways. He argued that truth was not about getting it right or representing reality, but was part of a social practice and language was what served our purposes in a particular time; ancient languages are sometimes untranslatable into modern ones because they possess a different vocabulary and are unuseful today. Donald Davidson is not usually considered a postmodernist, although he and Rorty have both acknowledged that there are few differences between their philosophies. Douglas Kellner Douglas Kellner insists that the "assumptions and procedures of modern theory" must be forgotten. Kellner analyzes the terms of this theory in real-life experiences and examples. Kellner uses science and technology studies as a major part of his analysis; he urges that the theory is incomplete without it. The scale is larger than just postmodernism alone; it must be interpreted through cultural studies where science and technology studies play a large role. The reality of the September 11 attacks on the United States of America is the catalyst for his explanation. In response, Kellner continues to examine the repercussions of understanding the effects of the 11 September attacks. He questions if the attacks are only able to be understood in a limited form of postmodern theory due to the level of irony. The conclusion he depicts is simple: postmodernism, as most use it today, will decide what experiences and signs in one's reality will be one's reality as they know it. Criticism Some criticism responds to postmodernist skepticism towards objective reality and claims that truth and morality are relative, including the argument that this relativism is self-contradictory. In part in reference to postmodernism, conservative English philosopher Roger Scruton wrote, "A writer who says that there are no truths, or that all truth is 'merely relative,' is asking you not to believe him. So don't." In 2014, the philosophers Theodore Schick and Lewis Vaughn wrote: "the statement that 'No unrestricted universal generalizations are true' is itself an unrestricted universal generalization. So if relativism in any of its forms is true, it's false." Some responses to postmodernist relativism argue that, contrary to its proponents' usual intentions, it does not necessarily benefit the political left. For example, the historian Richard J. Evans argued that if relativism rejects truth, it can legitimize far-right pseudohistory such as Holocaust denial. Further lines of criticism are that postmodernist discourse is characterized by obscurantism, that the term itself is vaguely defined, and that postmodernism lacks a clear epistemology. The linguist and philosopher Noam Chomsky accused postmodernist intellectuals of failing to meaningfully answer questions such as "what are the principles of their theories, on what evidence are they based, what do they explain that wasn't already obvious, etc.?" 
The French psychotherapist and philosopher, Félix Guattari, rejected its theoretical assumptions by arguing that the structuralist and postmodernist visions of the world were not flexible enough to seek explanations in psychological, social, and environmental domains at the same time. In an interview with Truls Lie, Jean Baudrillard noted: "[Transmodernism, etc.] are better terms than "postmodernism". It is not about modernity; it is about every system that has developed its mode of expression to the extent that it surpasses itself and its own logic. This is what I am trying to analyze." "There is no longer any ontologically secret substance. I perceive this to be nihilism rather than postmodernism." See also Hyperreality Natural philosophy Ontological pluralism Physical ontology Postmaterialism Postmodern art Postmodernism Postmodernity Notes Further reading Charles Arthur Willard Liberalism and the Problem of Knowledge: A New Rhetoric for Modern Democracy. University of Chicago Press. 1996. John Deely "Quid sit Postmodernismus?," in Roman Ciapalo (ed.) Postmodernism and Christian philosophy, 68–96, Washington, D.C.: Catholic University of America Press. 1997. External links Modern Philosophical Discussions (archived 14 July 2011) Philosophical schools and traditions
23613
https://en.wikipedia.org/wiki/Postmodern%20music
Postmodern music
Postmodern music is music in the art music tradition produced in the postmodern era. It also describes any music that follows aesthetical and philosophical trends of postmodernism. As an aesthetic movement it was formed partly in reaction to modernism but is not primarily defined as oppositional to modernist music. Postmodernists question the tight definitions and categories of academic disciplines, which they regard simply as the remnants of modernity. The postmodernist musical attitude Postmodernism in music is not a distinct musical style, but rather refers to music of the postmodern era. Postmodernist music, on the other hand, shares characteristics with postmodernist art—that is, art that comes after and reacts against modernism (see Modernism in Music). Rebecca Day, Lecturer in Music Analysis, writes "within music criticism, postmodernism is seen to represent a conscious move away from the perceptibly damaging hegemony of binaries such as aestheticism/formalism, subject/object, unity/disunity, part/whole, that were seen to dominate former aesthetic discourse, and that when left unchallenged (as postmodernists claim of modernist discourse) are thought to de-humanise music analysis". Fredric Jameson, a major figure in the thinking on postmodernism and culture, calls postmodernism "the cultural dominant of the logic of late capitalism", meaning that, through globalization, postmodern culture is tied inextricably with capitalism (Mark Fisher, writing 20 years later, goes further, essentially calling it the sole cultural possibility). Drawing from Jameson and other theorists, David Beard and Kenneth Gloag argue that, in music, postmodernism is not just an attitude but also an inevitability in the current cultural climate of fragmentation. As early as 1938, Theodor Adorno had already identified a trend toward the dissolution of "a culturally dominant set of values", citing the commodification of all genres as beginning of the end of genre or value distinctions in music. In some respects, Postmodern music could be categorized as simply the music of the postmodern era, or music that follows aesthetic and philosophical trends of postmodernism, but with Jameson in mind, it is clear these definitions are inadequate. As the name suggests, the postmodernist movement formed partly in reaction to the ideals of modernism, but in fact postmodern music is more to do with functionality and the effect of globalization than it is with a specific reaction, movement, or attitude. In the face of capitalism, Jameson says, "It is safest to grasp the concept of the postmodern as an attempt to think the present historically in an age that has forgotten how to think historically in the first place". Characteristics Jonathan Kramer posits the idea (following Umberto Eco and Jean-François Lyotard) that postmodernism (including musical postmodernism) is less a surface style or historical period (i.e., condition) than an attitude. Kramer enumerates 16 (arguably subjective) "characteristics of postmodern music, by which I mean music that is understood in a postmodern manner, or that calls forth postmodern listening strategies, or that provides postmodern listening experiences, or that exhibits postmodern compositional practices." 
According to Kramer, postmodern music: is not simply a repudiation of modernism or its continuation, but has aspects of both a break and an extension is, on some level and in some way, ironic does not respect boundaries between sonorities and procedures of the past and of the present challenges barriers between 'high' and 'low' styles shows disdain for the often unquestioned value of structural unity questions the mutual exclusivity of elitist and populist values avoids totalizing forms (e.g., does not want entire pieces to be tonal or serial or cast in a prescribed formal mold) considers music not as autonomous but as relevant to cultural, social, and political contexts includes quotations of or references to music of many traditions and cultures considers technology not only as a way to preserve and transmit music but also as deeply implicated in the production and essence of music embraces contradictions distrusts binary oppositions includes fragmentations and discontinuities encompasses pluralism and eclecticism presents multiple meanings and multiple temporalities locates meaning and even structure in listeners, more than in scores, performances, or composers Daniel Albright summarizes the main tendencies of musical postmodernism as: Bricolage Polystylism Randomness Timescale One author has suggested that the emergence of postmodern music in popular music occurred in the late 1960s, influenced in part by psychedelic rock and one or more of the later Beatles albums. Beard and Gloag support this position, citing Jameson's theory that "the radical changes of musical styles and languages throughout the 1960s [are] now seen as a reflection of postmodernism". Others have placed the beginnings of postmodernism in the arts, with particular reference to music, at around 1930. See also List of postmodernist composers 20th-century classical music 21st-century classical music Neoconservative postmodernism References Sources Further reading Berger, Arthur Asa. 2003. The Portable Postmodernist. Walnut Creek: Altamira Press. (cloth); (pbk). Bertens, Hans. 1995. The Idea of the Postmodern: A History. London and New York: Routledge. . Beverley, John. 1989. "The Ideology of Postmodern Music and Left Politics". Critical Quarterly 31, no. 1 (Spring): 40–56. Born, Georgina. 1995. Rationalizing Culture: IRCAM, Boulez, and the Institutionalization of the Musical Avant-Garde. Berkeley, Los Angeles, and London: University of California Press. Burkholder, J. Peter. 1995. All Made of Tunes: Charles Ives and the Uses of Musical Borrowings. New Haven: Yale University Press. Carl, Robert. 1990. "Six Case Studies in New American Music: A Postmodern Portrait Gallery". College Music Symposium 30, no. 1 (Spring): 45–63. Butler, Christopher. 1980. After the Wake: An Essay on the Contemporary Avant-Garde. Oxford: Clarendon Press; New York: Oxford University Press. Connor, Steven. 2001. "The Decomposing Voice of Postmodern Music". New Literary History 32, no. 3: Voice and Human Experience (Summer): 467–483. Danuser, Hermann. 1991. "Postmodernes Musikdenken—Lösung oder Flucht?". In Neue Musik im politischen Wandel: fünf Kongressbeiträge und drei Seminarberichte, edited by Hermann Danuser, 56–66. Mainz & New York: Schott. . Edwards, George. 1991. "Music and Postmodernism". Partisan Review 58, no. 4 (Fall): 693–705. Reprinted in his Collected Essays on Modern and Classical Music, with a foreword by Fred Lerdahl and an afterword by Joseph Dubiel, 49–60. Lanham, Maryland: Scarecrow Press, 2008. . Fox, Christopher. 2004. 
"Tempestuous Times: The Recent Music of Thomas Adès". The Musical Times 145, No. 1888 (Autumn): 41–56. Gagné, Nicole V. 2012. Historical Dictionary of Modern and Contemporary Classical Music. Historical Dictionaries of Literature and the Arts. Lanham, Maryland: Scarecrow Press. . Gloag, Kenneth. 2012. Postmodernism in Music. Cambridge Introductions to Music, Cambridge and New York: Cambridge University Press. . Harrison, Max, Charles Fox, Eric Thacker, and Stuart Nicholson. 1999. The Essential Jazz Records: Vol. 2: Modernism to Postmodernism. London: Mansell Publishing. (cloth); (pbk). Heilbroner, Robert L. 1961. The Future as History. New York: Grove Press. Hiekel, Jörn Peter. 2009. "Die Freiheit zum Staunen: Wirkungen und Weitungen von Lachenmanns Komponieren". Musik-Konzepte, no. 146 (July): 5–25. Hurley, Andrew W. 2009. "Postnationalism, postmodernism and the German discourse(s) of Weltmusik". New Formations, no. 66 (Spring): 100–117. Klemm, Eberhardt. 1987. "Nichts Neues unter der Sonne: Postmoderne". Musik und Gesellschaft 37, no. 8: 400–403. Kutschke, Beate. 2010. "The Celebration of Beethoven's Bicentennial in 1970: The Antiauthoritarian Movement and Its Impact on Radical Avant-garde and Postmodern Music in West Germany". The Musical Quarterly 93, nos. 3–4 (Fall–Winter): 560–615. LeBaron, Ann. 2002. "Reflections of Surrealism in Postmodern Musics". In Postmodern Music/Postmodern Thought, edited by Judy Lochhead and Joseph Aunder, 27–74. New York: Routledge. . Mankowskaya, Nadia. 1993. "L'esthétique musicale et le postmodernisme". New Sound: International Magazine for Music, no. 1:91–100. Morris, Geoffrey. 2009. "The Guitar Works of Aldo Clementi". Contemporary Music Review 28, no. 6 (Aldo Clementi: Mirror of time I): 559–586. O'Reilly, Tim. 1994. "Bad Religion Takes Postmodern Punk Mainstream". The Daily Princetonian 118, no. 4 (3 February): 10. Ofenbauer, Christian. 1995. "Vom Faltenlegen: Versuch einer Lektüre von Pierre Boulez' Notation(s) I(1)". Musik-Konzepte, nos. 89–90:55–75. }} Ortega y Gasset, José. 1932. The Revolt of the Masses. New York & London: W. W. Norton & Company. Online edition Petrusëva, Nadežda Andreevna. 2003. "Новая форма в новейшей музыке" [The Formal Innovations of Postmodern Music]. Muzyka i vremâ: Ežemesâčnyj naučnyj kritiko-publicističeskij žurnal, no. 8: 45–48. Pickstock, Catherine. 2011. "Quasi una sonata: Modernism, Postmodernism, Religion, and Music". In Resonant Witness: Conversations between Music and Theology, edited and introduced by Jeremy S. Begbie and Steven R. Guthrie, with an afterword by John D. Witvliet, 190–211. Calvin Institute of Christian Worship Liturgical Studies. Grand Rapids, Michigan: William B. Eerdmans. . Sanches, Pedro Alexandre. 2000. Tropicalismo: Decadência Bonita do Samba. São Paulo: Boitempo Editorial. Siōpsī, Anastasia. 2010. "On the Various Roles of Tradition in 20th-Century Greek Art Music: The Case Study of Music Written for Ancient Dramas". In Простори модернизма: Опус Љубице Марић у контексту музике њеног времена, edited by Dejan Despić, Melita Milin, Dimitrije Stefanović, and Danica Petrović, 197–214. Naučni skupovi, no. 130; Odelenje za likovne umetnosti i muziku, no. 7. Belgrade: Srpska Akademija Nauka i Umetnosti. . Smart, Barry. 1993. Postmodernity. Key Ideas, series editor Peter Hamilton. London and New York: Routledge. . Taylor, Anthony. 2010. "John Adams' Gnarly Buttons: Context and Analysis: I". The Clarinet 37, no. 2 (March): 72–76. Varga, Bálint András, and Rossana Dalmonte. 1985. 
Luciano Berio: Two Interviews, translated and edited by David Osmond-Smith. London: Boyars. . Wellmer, Albrecht. 1991. The Persistence of Modernity: Essays on Aesthetics, Ethics and Postmodernism, translated by David Midgley. Cambridge [Massachusetts]: MIT Press. . 20th-century classical music Contemporary classical music music
23615
https://en.wikipedia.org/wiki/Protocol
Protocol
Protocol may refer to: Sociology and politics Protocol (politics), a formal agreement between nation states Protocol (diplomacy), the etiquette of diplomacy and affairs of state Etiquette, a code of personal behavior Science and technology Protocol (science), a predefined written procedural method of conducting experiments Medical protocol (disambiguation) Computing Protocol (object-oriented programming), a common means for unrelated objects to communicate with each other (sometimes also called interfaces) Communication protocol, a defined set of rules and regulations that determine how data is transmitted in telecommunications and computer networking Cryptographic protocol, a protocol for encrypting messages Decentralized network protocol, a protocol for operation of an open source peer-to-peer network where no single entity nor colluding group controls a majority of the network nodes Music Protocol (album), by Simon Phillips Protocol (band), a British band "Protocol", a song by Gordon Lightfoot from the album Summertime Dream "Protocol", a song by the Vamps from their 2020 album Cherry Blossom Other uses Protocol (film), a 1984 comedy film Protocol (website), an offshoot of Politico Minutes, also known as protocols, the written record of a meeting Protocol, a news website owned by Capitol News Company See also Proprietary protocol, a communications protocol owned by a single organization or individual Proto (disambiguation) Quantum cryptography protocol, a protocol for encrypting messages The Protocols of the Elders of Zion, a notorious antisemitic hoax that has circulated since the early 20th century
23617
https://en.wikipedia.org/wiki/Pump
Pump
A pump is a device that moves fluids (liquids or gases), or sometimes slurries, by mechanical action, typically converted from electrical energy into hydraulic energy. Mechanical pumps serve in a wide range of applications such as pumping water from wells, aquarium filtering, pond filtering and aeration, in the car industry for water-cooling and fuel injection, in the energy industry for pumping oil and natural gas or for operating cooling towers and other components of heating, ventilation and air conditioning systems. In the medical industry, pumps are used for biochemical processes in developing and manufacturing medicine, and as artificial replacements for body parts, in particular the artificial heart and penile prosthesis. When a pump contains two or more pump mechanisms with fluid being directed to flow through them in series, it is called a multi-stage pump. Terms such as two-stage or double-stage may be used to specifically describe the number of stages. A pump that does not fit this description is simply a single-stage pump in contrast. In biology, many different types of chemical and biomechanical pumps have evolved; biomimicry is sometimes used in developing new types of mechanical pumps. Types Mechanical pumps may be submerged in the fluid they are pumping or be placed external to the fluid. Pumps can be classified by their method of displacement into electromagnetic pumps, positive-displacement pumps, impulse pumps, velocity pumps, gravity pumps, steam pumps and valveless pumps. There are three basic types of pumps: positive-displacement, centrifugal and axial-flow pumps. In centrifugal pumps the direction of flow of the fluid changes by ninety degrees as it flows over an impeller, while in axial flow pumps the direction of flow is unchanged. Electromagnetic pump Positive-displacement pumps A positive-displacement pump makes a fluid move by trapping a fixed amount and forcing (displacing) that trapped volume into the discharge pipe. Some positive-displacement pumps use an expanding cavity on the suction side and a decreasing cavity on the discharge side. Liquid flows into the pump as the cavity on the suction side expands and the liquid flows out of the discharge as the cavity collapses. The volume is constant through each cycle of operation. Positive-displacement pump behavior and safety Positive-displacement pumps, unlike centrifugal, can theoretically produce the same flow at a given rotational speed no matter what the discharge pressure. Thus, positive-displacement pumps are constant flow machines. However, a slight increase in internal leakage as the pressure increases prevents a truly constant flow rate. A positive-displacement pump must not operate against a closed valve on the discharge side of the pump, because it has no shutoff head like centrifugal pumps. A positive-displacement pump operating against a closed discharge valve continues to produce flow and the pressure in the discharge line increases until the line bursts, the pump is severely damaged, or both. A relief or safety valve on the discharge side of the positive-displacement pump is therefore necessary. The relief valve can be internal or external. The pump manufacturer normally has the option to supply internal relief or safety valves. The internal valve is usually used only as a safety precaution. An external relief valve in the discharge line, with a return line back to the suction line or supply tank, provides increased safety. 
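As a rough illustration of the constant-flow-with-slip behaviour described above, the following sketch estimates the delivered flow of a positive-displacement pump as swept volume per revolution times shaft speed, reduced by a volumetric-efficiency factor standing in for internal leakage. The displacement, speed and efficiency figures are illustrative assumptions, not data for any particular pump.

```python
# Minimal sketch: first-order flow estimate for a positive-displacement pump.
# Delivered flow = displacement per revolution * speed * volumetric efficiency,
# where the efficiency term stands in for internal leakage (slip).
# All numbers below are illustrative assumptions.

def pd_pump_flow_lpm(displacement_cm3_per_rev: float, speed_rpm: float,
                     volumetric_efficiency: float) -> float:
    """Return delivered flow in litres per minute."""
    theoretical_lpm = displacement_cm3_per_rev * speed_rpm / 1000.0  # cm3/min -> L/min
    return theoretical_lpm * volumetric_efficiency

if __name__ == "__main__":
    # Hypothetical small gear pump: 12 cm3/rev at 1,450 rpm, assuming 90%
    # volumetric efficiency (slip grows slightly as discharge pressure rises).
    print(f"{pd_pump_flow_lpm(12.0, 1450.0, 0.90):.1f} L/min")  # ~15.7 L/min
```

For a fixed speed, only the slip term changes with discharge pressure, which is why the text above describes positive-displacement pumps as (nearly) constant-flow machines.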
Positive-displacement types A positive-displacement pump can be further classified according to the mechanism used to move the fluid: Rotary-type positive displacement: internal and external gear pump, screw pump, lobe pump, shuttle block, flexible vane and sliding vane, circumferential piston, flexible impeller, helical twisted roots (e.g. the Wendelkolben pump) and liquid-ring pumps Reciprocating-type positive displacement: piston pumps, plunger pumps and diaphragm pumps Linear-type positive displacement: rope pumps and chain pumps Rotary positive-displacement pumps These pumps move fluid using a rotating mechanism that creates a vacuum that captures and draws in the liquid. Advantages: Rotary pumps are very efficient because they can handle highly viscous fluids with higher flow rates as viscosity increases. Drawbacks: The nature of the pump requires very close clearances between the rotating pump and the outer edge, making it rotate at a slow, steady speed. If rotary pumps are operated at high speeds, the fluids cause erosion, which eventually causes enlarged clearances that liquid can pass through, which reduces efficiency. Rotary positive-displacement pumps fall into five main types: Gear pumps – a simple type of rotary pump where the liquid is pushed around a pair of gears. Screw pumps – the shape of the internals of this pump is usually two screws turning against each other to pump the liquid Rotary vane pumps Hollow disc pumps (also known as eccentric disc pumps or hollow rotary disc pumps), similar to scroll compressors, these have an eccentric cylindrical rotor encased in a circular housing. As the rotor orbits, it traps fluid between the rotor and the casing, drawing the fluid through the pump. It is used for highly viscous fluids like petroleum-derived products, and it can also support high pressures of up to 290 psi. Peristaltic pumps have rollers which pinch a section of flexible tubing, forcing the liquid ahead as the rollers advance. Because they are very easy to keep clean, these are popular for dispensing food, medicine, and concrete. Reciprocating positive-displacement pumps Reciprocating pumps move the fluid using one or more oscillating pistons, plungers, or membranes (diaphragms), while valves restrict fluid motion to the desired direction. In order for suction to take place, the pump must first pull the plunger in an outward motion to decrease pressure in the chamber. Once the plunger pushes back, it will increase the chamber pressure and the inward pressure of the plunger will then open the discharge valve and release the fluid into the delivery pipe at constant flow rate and increased pressure. Pumps in this category range from simplex, with one cylinder, to in some cases quad (four) cylinders, or more. Many reciprocating-type pumps are duplex (two) or triplex (three) cylinder. They can be either single-acting with suction during one direction of piston motion and discharge on the other, or double-acting with suction and discharge in both directions. The pumps can be powered manually, by air or steam, or by a belt driven by an engine. This type of pump was used extensively in the 19th century—in the early days of steam propulsion—as boiler feed water pumps. Now reciprocating pumps typically pump highly viscous fluids like concrete and heavy oils, and serve in special applications that demand low flow rates against high resistance. Reciprocating hand pumps were widely used to pump water from wells. 
Common bicycle pumps and foot pumps for inflation use reciprocating action. These positive-displacement pumps have an expanding cavity on the suction side and a decreasing cavity on the discharge side. Liquid flows into the pump as the cavity on the suction side expands and the liquid flows out of the discharge as the cavity collapses. The volume is constant through each cycle of operation, and the pump's volumetric efficiency is maintained through routine maintenance and inspection of its valves. Typical reciprocating pumps are: Plunger pump – a reciprocating plunger pushes the fluid through one or two open valves, closed by suction on the way back. Diaphragm pump – similar to plunger pumps, where the plunger pressurizes hydraulic oil which is used to flex a diaphragm in the pumping cylinder. Diaphragm valves are used to pump hazardous and toxic fluids. Piston pumps – usually simple devices for pumping small amounts of liquid or gel manually. The common hand soap dispenser is such a pump. Radial piston pump – a form of hydraulic pump where pistons extend in a radial direction. Vibratory pump or vibration pump – a particularly low-cost form of plunger pump, popular in low-cost espresso machines. The only moving part is a spring-loaded piston, the armature of a solenoid. Driven by half-wave rectified alternating current, the piston is forced forward while energized, and is retracted by the spring during the other half cycle. Due to their inefficiency, vibratory pumps typically cannot be operated for more than one minute without overheating, so are limited to intermittent duty. Various positive-displacement pumps The positive-displacement principle applies in these pumps: Rotary lobe pump Progressing cavity pump Rotary gear pump Piston pump Diaphragm pump Screw pump Gear pump Hydraulic pump Rotary vane pump Peristaltic pump Rope pump Flexible impeller pump Gear pump This is the simplest form of rotary positive-displacement pump. It consists of two meshed gears that rotate in a closely fitted casing. The tooth spaces trap fluid and force it around the outer periphery. The fluid does not travel back on the meshed part, because the teeth mesh closely in the center. Gear pumps see wide use in car engine oil pumps and in various hydraulic power packs. Screw pump A screw pump is a more complicated type of rotary pump that uses two or three screws with opposing thread — e.g., one screw turns clockwise and the other counterclockwise. The screws are mounted on parallel shafts that have gears that mesh so the shafts turn together and everything stays in place. The screws turn on the shafts and drive fluid through the pump. As with other forms of rotary pumps, the clearance between moving parts and the pump's casing is minimal. Progressing cavity pump Widely used for pumping difficult materials, such as sewage sludge contaminated with large particles, a progressing cavity pump consists of a helical rotor, about ten times as long as its width. This can be visualized as a central core of diameter x with, typically, a curved spiral of thickness half x wound around it, though in reality it is manufactured in a single casting. This shaft fits inside a heavy-duty rubber sleeve, of wall thickness also typically x. As the shaft rotates, the rotor gradually forces fluid up the rubber sleeve. Such pumps can develop very high pressure at low volumes. 
Roots-type pump Named after the Roots brothers who invented it, this lobe pump displaces the fluid trapped between two long helical rotors, each fitted into the other when perpendicular at 90°, rotating inside a triangular-shaped sealing line configuration, both at the point of suction and at the point of discharge. This design produces a continuous flow with equal volume and no vortex. It can work at low pulsation rates, and offers gentle performance that some applications require. Applications include: High capacity industrial air compressors. Roots superchargers on internal combustion engines. A brand of civil defense siren, the Federal Signal Corporation's Thunderbolt. Peristaltic pump A peristaltic pump is a type of positive-displacement pump. It contains fluid within a flexible tube fitted inside a circular pump casing (though linear peristaltic pumps have been made). A number of rollers, shoes, or wipers attached to a rotor compress the flexible tube. As the rotor turns, the part of the tube under compression closes (or occludes), forcing the fluid through the tube. Additionally, when the tube opens to its natural state after the passing of the cam it draws (restitution) fluid into the pump. This process is called peristalsis and is used in many biological systems such as the gastrointestinal tract. Plunger pumps Plunger pumps are reciprocating positive-displacement pumps. These consist of a cylinder with a reciprocating plunger. The suction and discharge valves are mounted in the head of the cylinder. In the suction stroke, the plunger retracts and the suction valves open, causing suction of fluid into the cylinder. In the forward stroke, the plunger pushes the liquid out of the discharge valve. Efficiency and common problems: With only one cylinder in plunger pumps, the fluid flow varies between maximum flow when the plunger moves through the middle positions, and zero flow when the plunger is at the end positions. A lot of energy is wasted when the fluid is accelerated in the piping system. Vibration and water hammer may be a serious problem. In general, the problems are compensated for by using two or more cylinders not working in phase with each other. Centrifugal pumps are also susceptible to water hammer. Surge analysis, a specialized study, helps evaluate this risk in such systems. Triplex-style plunger pump Triplex plunger pumps use three plungers, which reduces the pulsation relative to single reciprocating plunger pumps. Adding a pulsation dampener on the pump outlet can further smooth the pump ripple, that is, the flow ripple seen on the graph of a pump transducer. The dynamic relationship of the high-pressure fluid and plunger generally requires high-quality plunger seals. Plunger pumps with a larger number of plungers have the benefit of increased flow, or smoother flow without a pulsation damper. The increase in moving parts and crankshaft load is one drawback. Car washes often use these triplex-style plunger pumps (perhaps without pulsation dampers). In 1968, William Bruggeman reduced the size of the triplex pump and increased its lifespan so that car washes could use equipment with smaller footprints. Durable high-pressure seals, low-pressure seals and oil seals, hardened crankshafts, hardened connecting rods, thick ceramic plungers and heavier-duty ball and roller bearings improve reliability in triplex pumps. Triplex pumps are now found in a myriad of markets across the world. Triplex pumps with shorter lifetimes are commonplace for the home user. 
A person who uses a home pressure washer for 10 hours a year may be satisfied with a pump that lasts 100 hours between rebuilds. Industrial-grade or continuous duty triplex pumps on the other end of the quality spectrum may run for as much as 2,080 hours a year. The oil and gas drilling industry uses massive semi-trailer-transported triplex pumps called mud pumps to pump drilling mud, which cools the drill bit and carries the cuttings back to the surface. Drillers use triplex or even quintuplex pumps to inject water and solvents deep into shale in the extraction process called fracking. Compressed-air-powered double-diaphragm pump Run on compressed air, these pumps are intrinsically safe by design, although all manufacturers offer ATEX-certified models to comply with industry regulation. These pumps are relatively inexpensive and can perform a wide variety of duties, from pumping water out of bunds to pumping hydrochloric acid from secure storage (dependent on how the pump is manufactured – elastomers / body construction). These double-diaphragm pumps can handle viscous fluids and abrasive materials with a gentle pumping process ideal for transporting shear-sensitive media. Rope pump Devised in China as chain pumps over 1000 years ago, these pumps can be made from very simple materials: A rope, a wheel and a pipe are sufficient to make a simple rope pump. Rope pump efficiency has been studied by grassroots organizations and the techniques for making and running them have been continuously improved. Impulse pump Impulse pumps use pressure created by gas (usually air). In some impulse pumps the gas trapped in the liquid (usually water), is released and accumulated somewhere in the pump, creating a pressure that can push part of the liquid upwards. Conventional impulse pumps include: Hydraulic ram pumps – kinetic energy of a low-head water supply is stored temporarily in an air-bubble hydraulic accumulator, then used to drive water to a higher head. Pulser pumps – run with natural resources, by kinetic energy only. Airlift pumps – run on air inserted into pipe, which pushes the water up when bubbles move upward Instead of a gas accumulation and releasing cycle, the pressure can be created by burning of hydrocarbons. Such combustion driven pumps directly transmit the impulse from a combustion event through the actuation membrane to the pump fluid. In order to allow this direct transmission, the pump needs to be almost entirely made of an elastomer (e.g. silicone rubber). Hence, the combustion causes the membrane to expand and thereby pumps the fluid out of the adjacent pumping chamber. The first combustion-driven soft pump was developed by ETH Zurich. Hydraulic ram pump A hydraulic ram is a water pump powered by hydropower. It takes in water at relatively low pressure and high flow-rate and outputs water at a higher hydraulic-head and lower flow-rate. The device uses the water hammer effect to develop pressure that lifts a portion of the input water that powers the pump to a point higher than where the water started. The hydraulic ram is sometimes used in remote areas, where there is both a source of low-head hydropower, and a need for pumping water to a destination higher in elevation than the source. In this situation, the ram is often useful, since it requires no outside source of power other than the kinetic energy of flowing water. Velocity pumps Rotodynamic pumps (or dynamic pumps) are a type of velocity pump in which kinetic energy is added to the fluid by increasing the flow velocity. 
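The hydraulic ram described above can be summarised by a simple energy balance: the small flow delivered to the high head carries roughly the energy of the larger drive flow falling through the supply head, scaled by the ram's efficiency. A minimal sketch follows; the flow, head and efficiency values are assumptions chosen only for illustration.

```python
# Minimal sketch of the hydraulic ram energy balance:  q * H ≈ efficiency * Q * h
# Q = drive flow falling through supply head h; q = delivered flow raised to head H.
# All figures are illustrative assumptions, not measurements of a real installation.

def ram_delivery_flow_lpm(drive_flow_lpm: float, supply_head_m: float,
                          delivery_head_m: float, efficiency: float) -> float:
    """Approximate flow delivered to the higher head, in litres per minute."""
    return efficiency * drive_flow_lpm * supply_head_m / delivery_head_m

if __name__ == "__main__":
    # Example: 100 L/min of drive water falling 2 m, lifted to 20 m,
    # assuming 60% overall efficiency.
    print(f"{ram_delivery_flow_lpm(100.0, 2.0, 20.0, 0.60):.1f} L/min")  # ~6.0 L/min
```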
This increase in energy is converted to a gain in potential energy (pressure) when the velocity is reduced prior to or as the flow exits the pump into the discharge pipe. This conversion of kinetic energy to pressure is explained by the First law of thermodynamics, or more specifically by Bernoulli's principle. Dynamic pumps can be further subdivided according to the means in which the velocity gain is achieved. These types of pumps have a number of characteristics: Continuous energy Conversion of added energy to increase in kinetic energy (increase in velocity) Conversion of increased velocity (kinetic energy) to an increase in pressure head A practical difference between dynamic and positive-displacement pumps is how they operate under closed valve conditions. Positive-displacement pumps physically displace fluid, so closing a valve downstream of a positive-displacement pump produces a continual pressure build up that can cause mechanical failure of pipeline or pump. Dynamic pumps differ in that they can be safely operated under closed valve conditions (for short periods of time). Radial-flow pump Such a pump is also referred to as a centrifugal pump. The fluid enters along the axis or center, is accelerated by the impeller and exits at right angles to the shaft (radially); an example is the centrifugal fan, which is commonly used to implement a vacuum cleaner. Another type of radial-flow pump is a vortex pump. The liquid in them moves in tangential direction around the working wheel. The conversion from the mechanical energy of motor into the potential energy of flow comes by means of multiple whirls, which are excited by the impeller in the working channel of the pump. Generally, a radial-flow pump operates at higher pressures and lower flow rates than an axial- or a mixed-flow pump. Axial-flow pump These are also referred to as all-fluid pumps. The fluid is pushed outward or inward to move fluid axially. They operate at much lower pressures and higher flow rates than radial-flow (centrifugal) pumps. Axial-flow pumps cannot be run up to speed without special precaution. If at a low flow rate, the total head rise and high torque associated with this pipe would mean that the starting torque would have to become a function of acceleration for the whole mass of liquid in the pipe system. Mixed-flow pumps function as a compromise between radial and axial-flow pumps. The fluid experiences both radial acceleration and lift and exits the impeller somewhere between 0 and 90 degrees from the axial direction. As a consequence mixed-flow pumps operate at higher pressures than axial-flow pumps while delivering higher discharges than radial-flow pumps. The exit angle of the flow dictates the pressure head-discharge characteristic in relation to radial and mixed-flow. Regenerative turbine pump Also known as drag, friction, liquid-ring pump, peripheral, traction, turbulence, or vortex pumps, regenerative turbine pumps are a class of rotodynamic pump that operates at high head pressures, typically . The pump has an impeller with a number of vanes or paddles which spins in a cavity. The suction port and pressure ports are located at the perimeter of the cavity and are isolated by a barrier called a stripper, which allows only the tip channel (fluid between the blades) to recirculate, and forces any fluid in the side channel (fluid in the cavity outside of the blades) through the pressure port. 
In a regenerative turbine pump, as fluid spirals repeatedly from a vane into the side channel and back to the next vane, kinetic energy is imparted to the periphery, thus pressure builds with each spiral, in a manner similar to a regenerative blower. As regenerative turbine pumps cannot become vapor locked, they are commonly applied to volatile, hot, or cryogenic fluid transport. However, as tolerances are typically tight, they are vulnerable to solids or particles causing jamming or rapid wear. Efficiency is typically low, and pressure and power consumption typically decrease with flow. Additionally, pumping direction can be reversed by reversing direction of spin. Side-channel pump A side-channel pump has a suction disk, an impeller, and a discharge disk. Eductor-jet pump This uses a jet, often of steam, to create a low pressure. This low pressure sucks in fluid and propels it into a higher-pressure region. Gravity pumps Gravity pumps include the syphon and Heron's fountain. The hydraulic ram is also sometimes called a gravity pump. In a gravity pump the fluid is lifted by gravitational force. Steam pump Steam pumps have been for a long time mainly of historical interest. They include any type of pump powered by a steam engine and also pistonless pumps such as Thomas Savery's or the Pulsometer steam pump. Recently there has been a resurgence of interest in low-power solar steam pumps for use in smallholder irrigation in developing countries. Previously small steam engines have not been viable because of escalating inefficiencies as vapour engines decrease in size. However the use of modern engineering materials coupled with alternative engine configurations has meant that these types of system are now a cost-effective opportunity. Valveless pumps Valveless pumping assists in fluid transport in various biomedical and engineering systems. In a valveless pumping system, no valves (or physical occlusions) are present to regulate the flow direction. The fluid pumping efficiency of a valveless system, however, is not necessarily lower than that having valves. In fact, many fluid-dynamical systems in nature and engineering more or less rely upon valveless pumping to transport the working fluids therein. For instance, blood circulation in the cardiovascular system is maintained to some extent even when the heart's valves fail. Meanwhile, the embryonic vertebrate heart begins pumping blood long before the development of discernible chambers and valves. Similar to blood circulation in one direction, bird respiratory systems pump air in one direction in rigid lungs, but without any physiological valve. In microfluidics, valveless impedance pumps have been fabricated, and are expected to be particularly suitable for handling sensitive biofluids. Ink jet printers operating on the piezoelectric transducer principle also use valveless pumping. The pump chamber is emptied through the printing jet due to reduced flow impedance in that direction and refilled by capillary action. Pump repairs Examining pump repair records and mean time between failures (MTBF) is of great importance to responsible and conscientious pump users. In view of that fact, the preface to the 2006 Pump User's Handbook alludes to "pump failure" statistics. For the sake of convenience, these failure statistics often are translated into MTBF (in this case, installed life before failure). 
In early 2005, Gordon Buck, John Crane Inc.'s chief engineer for field operations in Baton Rouge, Louisiana, examined the repair records for a number of refinery and chemical plants to obtain meaningful reliability data for centrifugal pumps. A total of 15 operating plants having nearly 15,000 pumps were included in the survey. The smallest of these plants had about 100 pumps; several plants had over 2000. All facilities were located in the United States. In addition, some of the plants were considered "new", others "renewed" and still others "established". Many of these plants—but not all—had an alliance arrangement with John Crane. In some cases, the alliance contract included having a John Crane Inc. technician or engineer on-site to coordinate various aspects of the program. Not all plants are refineries, however, and different results occur elsewhere. In chemical plants, pumps have historically been "throw-away" items as chemical attack limits life. Things have improved in recent years, but the somewhat restricted space available in "old" DIN and ASME-standardized stuffing boxes places limits on the type of seal that fits. Unless the pump user upgrades the seal chamber, the pump only accommodates more compact and simple versions. Without this upgrading, lifetimes in chemical installations are generally around 50 to 60 percent of the refinery values. Unscheduled maintenance is often one of the most significant costs of ownership, and failures of mechanical seals and bearings are among the major causes. Keep in mind the potential value of selecting pumps that cost more initially, but last much longer between repairs. The MTBF of a better pump may be one to four years longer than that of its non-upgraded counterpart. Consider that published average values of avoided pump failures range from US$2,600 to US$12,000. This does not include lost opportunity costs. One pump fire occurs per 1000 failures. Having fewer pump failures means having fewer destructive pump fires. As has been noted, a typical pump failure, based on actual year 2002 reports, costs US$5,000 on average. This includes costs for material, parts, labor and overhead. Extending a pump's MTBF from 12 to 18 months would save US$1,667 per year (at US$5,000 per failure, the expected repair cost falls from US$5,000 per year at a 12-month MTBF to about US$3,333 per year at 18 months) — which might be greater than the cost to upgrade the centrifugal pump's reliability. (Submersible slurry pumps in high demand. Engineeringnews.co.za. Retrieved on 2011-05-25.) Applications Pumps are used throughout society for a variety of purposes. Early applications include the use of the windmill or watermill to pump water. Today, the pump is used for irrigation, water supply, gasoline supply, air conditioning systems, refrigeration (usually called a compressor), chemical movement, sewage movement, flood control, marine services, etc. Because of the wide variety of applications, pumps have a plethora of shapes and sizes: from very large to very small, from handling gas to handling liquid, from high pressure to low pressure, and from high volume to low volume. Priming a pump Typically, a liquid pump cannot simply draw air. The feed line of the pump and the internal body surrounding the pumping mechanism must first be filled with the liquid that requires pumping: An operator must introduce liquid into the system to initiate the pumping. This is called priming the pump. Loss of prime is usually due to ingestion of air into the pump. The clearances and displacement ratios in pumps for liquids, whether thin or more viscous, usually cannot displace air due to its compressibility. 
This is the case with most velocity (rotodynamic) pumps — for example, centrifugal pumps. For such pumps, the position of the pump should always be lower than the suction point, if not the pump should be manually filled with liquid or a secondary pump should be used until all air is removed from the suction line and the pump casing. Positive–displacement pumps, however, tend to have sufficiently tight sealing between the moving parts and the casing or housing of the pump that they can be described as self-priming. Such pumps can also serve as priming pumps, so-called when they are used to fulfill that need for other pumps in lieu of action taken by a human operator. Pumps as public water supplies One sort of pump once common worldwide was a hand-powered water pump, or 'pitcher pump'. It was commonly installed over community water wells in the days before piped water supplies. In parts of the British Isles, it was often called the parish pump. Though such community pumps are no longer common, people still used the expression parish pump to describe a place or forum where matters of local interest are discussed. Because water from pitcher pumps is drawn directly from the soil, it is more prone to contamination. If such water is not filtered and purified, consumption of it might lead to gastrointestinal or other water-borne diseases. A notorious case is the 1854 Broad Street cholera outbreak. At the time it was not known how cholera was transmitted, but physician John Snow suspected contaminated water and had the handle of the public pump he suspected removed; the outbreak then subsided. Modern hand-operated community pumps are considered the most sustainable low-cost option for safe water supply in resource-poor settings, often in rural areas in developing countries. A hand pump opens access to deeper groundwater that is often not polluted and also improves the safety of a well by protecting the water source from contaminated buckets. Pumps such as the Afridev pump are designed to be cheap to build and install, and easy to maintain with simple parts. However, scarcity of spare parts for these type of pumps in some regions of Africa has diminished their utility for these areas. Sealing multiphase pumping applications Multiphase pumping applications, also referred to as tri-phase, have grown due to increased oil drilling activity. In addition, the economics of multiphase production is attractive to upstream operations as it leads to simpler, smaller in-field installations, reduced equipment costs and improved production rates. In essence, the multiphase pump can accommodate all fluid stream properties with one piece of equipment, which has a smaller footprint. Often, two smaller multiphase pumps are installed in series rather than having just one massive pump. Types and features of multiphase pumps Helico-axial (centrifugal) A rotodynamic pump with one single shaft that requires two mechanical seals, this pump uses an open-type axial impeller. It is often called a Poseidon pump, and can be described as a cross between an axial compressor and a centrifugal pump. Twin-screw (positive-displacement) The twin-screw pump is constructed of two inter-meshing screws that move the pumped fluid. Twin screw pumps are often used when pumping conditions contain high gas volume fractions and fluctuating inlet conditions. Four mechanical seals are required to seal the two shafts. 
Progressive cavity (positive-displacement) When the pumping application is not suited to a centrifugal pump, a progressive cavity pump is used instead. Progressive cavity pumps are single-screw types typically used in shallow wells or at the surface. This pump is mainly used on surface applications where the pumped fluid may contain a considerable amount of solids such as sand and dirt. The volumetric efficiency and mechanical efficiency of a progressive cavity pump increases as the viscosity of the liquid does. Electric submersible (centrifugal) These pumps are basically multistage centrifugal pumps and are widely used in oil well applications as a method for artificial lift. These pumps are usually specified when the pumped fluid is mainly liquid. Buffer tank A buffer tank is often installed upstream of the pump suction nozzle in case of a slug flow. The buffer tank breaks the energy of the liquid slug, smooths any fluctuations in the incoming flow and acts as a sand trap. As the name indicates, multiphase pumps and their mechanical seals can encounter a large variation in service conditions such as changing process fluid composition, temperature variations, high and low operating pressures and exposure to abrasive/erosive media. The challenge is selecting the appropriate mechanical seal arrangement and support system to ensure maximized seal life and its overall effectiveness. Specifications Pumps are commonly rated by horsepower, volumetric flow rate, outlet pressure in metres (or feet) of head, and inlet suction in feet (or metres) of head. The head can be simplified as the number of feet or metres the pump can raise or lower a column of water at atmospheric pressure. From an initial design point of view, engineers often use a quantity termed the specific speed to identify the most suitable pump type for a particular combination of flow rate and head. Net Positive Suction Head (NPSH) is crucial for pump performance. It has two key aspects: 1) NPSHr (Required): the head required for the pump to operate without cavitation issues. 2) NPSHa (Available): the actual pressure provided by the system (e.g., from an overhead tank). For optimal pump operation, NPSHa must always exceed NPSHr. This ensures the pump has enough pressure to prevent cavitation, a damaging condition. Pumping power The power imparted into a fluid increases the energy of the fluid per unit volume. Thus the power relationship is between the conversion of the mechanical energy of the pump mechanism and the fluid elements within the pump. In general, this is governed by a series of simultaneous differential equations, known as the Navier–Stokes equations. However, a simpler equation relating only the different energies in the fluid, known as Bernoulli's equation, can be used. Hence the power, P, required by the pump: where Δp is the change in total pressure between the inlet and outlet (in Pa), and Q, the volume flow-rate of the fluid, is given in m3/s. The total pressure may have gravitational, static pressure and kinetic energy components; i.e. energy is distributed between change in the fluid's gravitational potential energy (going up or down hill), change in velocity, or change in static pressure. 
η is the pump efficiency, and may be given by the manufacturer's information, such as in the form of a pump curve, and is typically derived from either fluid dynamics simulation (i.e. solutions to the Navier–Stokes for the particular pump geometry), or by testing. The efficiency of the pump depends upon the pump's configuration and operating conditions (such as rotational speed, fluid density and viscosity etc.) For a typical "pumping" configuration, the work is imparted on the fluid, and is thus positive. For the fluid imparting the work on the pump (i.e. a turbine), the work is negative. Power required to drive the pump is determined by dividing the output power by the pump efficiency. Furthermore, this definition encompasses pumps with no moving parts, such as a siphon. Efficiency Pump efficiency is defined as the ratio of the power imparted on the fluid by the pump in relation to the power supplied to drive the pump. Its value is not fixed for a given pump, efficiency is a function of the discharge and therefore also operating head. For centrifugal pumps, the efficiency tends to increase with flow rate up to a point midway through the operating range (peak efficiency or Best Efficiency Point (BEP) ) and then declines as flow rates rise further. Pump performance data such as this is usually supplied by the manufacturer before pump selection. Pump efficiencies tend to decline over time due to wear (e.g. increasing clearances as impellers reduce in size). When a system includes a centrifugal pump, an important design issue is matching the head loss-flow characteristic with the pump so that it operates at or close to the point of its maximum efficiency. Pump efficiency is an important aspect and pumps should be regularly tested. Thermodynamic pump testing is one method. Minimum flow protection Most large pumps have a minimum flow requirement below which the pump may be damaged by overheating, impeller wear, vibration, seal failure, drive shaft damage or poor performance. A minimum flow protection system ensures that the pump is not operated below the minimum flow rate. The system protects the pump even if it is shut-in or dead-headed, that is, if the discharge line is completely closed. The simplest minimum flow system is a pipe running from the pump discharge line back to the suction line. This line is fitted with an orifice plate sized to allow the pump minimum flow to pass. The arrangement ensures that the minimum flow is maintained, although it is wasteful as it recycles fluid even when the flow through the pump exceeds the minimum flow. A more sophisticated, but more costly, system (see diagram) comprises a flow measuring device (FE) in the pump discharge which provides a signal into a flow controller (FIC) which actuates a flow control valve (FCV) in the recycle line. If the measured flow exceeds the minimum flow then the FCV is closed. If the measured flow falls below the minimum flow the FCV opens to maintain the minimum flowrate. As the fluids are recycled the kinetic energy of the pump increases the temperature of the fluid. For many pumps this added heat energy is dissipated through the pipework. However, for large industrial pumps, such as oil pipeline pumps, a recycle cooler is provided in the recycle line to cool the fluids to the normal suction temperature. Alternatively the recycled fluids may be returned to upstream of the export cooler in an oil refinery, oil terminal, or offshore installation. References Further reading Australian Pump Manufacturers' Association. 
Australian Pump Technical Handbook, 3rd edition. Canberra: Australian Pump Manufacturers' Association, 1987. Hicks, Tyler G. and Theodore W. Edwards. Pump Application Engineering. McGraw-Hill Book Company, 1971. Robbins, L. B. "Homemade Water Pressure Systems". Popular Science, February 1919, pages 83–84. Article about how a homeowner can easily build a pressurized home water system that does not use electricity. Ancient inventions
23618
https://en.wikipedia.org/wiki/Progressive
Progressive
Progressive may refer to: Politics Progressivism, a political philosophy in support of social reform Progressivism in the United States, the political philosophy in the American context Progressivism in South Korea, the political philosophy in the South Korean context Progressive realism, an American foreign policy paradigm focused on producing measurable results in pursuit of widely supported goals Political organizations Congressional Progressive Caucus, members within the Democratic Party in the United States Congress dedicated to the advancement of progressive issues and positions Progressive Alliance (disambiguation) Progressive Conservative (disambiguation) Progressive Party (disambiguation) Progressive Unionist (disambiguation) Other uses in politics Progressive Era, a period of reform in the United States (c. 1890–1930) Progressive tax, a type of tax rate structure Arts, entertainment, and media Music Progressive music, a type of music that expands stylistic boundaries outwards Progressive pop Progressive rock Post-progressive Progressive soul Progressive house Progressive rap Progressive, a 2015 EP by Mrs. Green Apple "Progressive" (song), a 2009 single by Kalafina "Progressive" (Megumi Ogata and Aya Uchida song), the ending theme to the 2014 video game Danganronpa Another Episode: Ultra Despair Girls Progressive, a demo album by the band Haggard Other uses in arts, entertainment, and media Progressive chess, a chess variant Progressive talk radio, a talk radio format devoted to expressing liberal or progressive viewpoints of issues The Progressive, an American left-wing magazine Brands and enterprises Progressive Corporation, a U.S. insurance company Progressive Enterprises, a New Zealand retail cooperative Healthcare Progressive disease Progressive lens, a type of corrective eyeglass lenses Religion Progressive Adventism, a sect of the Seventh-day Adventist Church Progressive Christianity, a movement within contemporary Protestantism Progressive creationism, a form of Old Earth creationism Progressive Islam, a modern liberal interpretation of Islam Progressive Judaism, a major denomination within Judaism Progressive religion, a religious tradition which embraces theological diversity Progressive revelation (Bahá'í), a core teaching of Bahá'í that suggests that religious truth is revealed by God progressively and cyclically over time Progressive revelation (Christianity), the concept that the sections of the Bible written later contain a fuller revelation of God Technology Progressive disclosure, a technique used in human computer interaction Progressive scan, a form of video transmission Progressive shifting, a technique for changing gears in trucks Progressive stamping, a metalworking technique Verb forms Progressive aspect (also called continuous), a verb form that expresses incomplete action Past progressive Perfect progressive aspects, see Uses of English verb forms and English verbs Other uses Progressive education, which emphasizes a hands-on approach to learning Progressive Field (originally Jacobs Field), home of the Cleveland Guardians Progressive function, a function in mathematics Progressive historians, group of 20th century historians of the United States associated with a historiographical tradition that embraced an economic interpretation of American history See also Progress (disambiguation) Progression (disambiguation) Progressivism (disambiguation)
23619
https://en.wikipedia.org/wiki/Pressure
Pressure
Pressure (symbol: p or P) is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure (also spelled gage pressure) is the pressure relative to the ambient pressure. Various units are used to express pressure. Some of these derive from a unit of force divided by a unit of area; the SI unit of pressure, the pascal (Pa), for example, is one newton per square metre (N/m2); similarly, the pound-force per square inch (psi, symbol lbf/in2) is the traditional unit of pressure in the imperial and US customary systems. Pressure may also be expressed in terms of standard atmospheric pressure; the unit atmosphere (atm) is equal to this pressure, and the torr is defined as 1/760 of this. Manometric units such as the centimetre of water, millimetre of mercury, and inch of mercury are used to express pressures in terms of the height of a column of a particular fluid in a manometer. Definition Pressure is the amount of force applied perpendicular to the surface of an object per unit area. The symbol for it is "p" or P. The IUPAC recommendation for pressure is a lower-case p. However, upper-case P is widely used. The usage of P vs p depends upon the field in which one is working, on the nearby presence of other symbols for quantities such as power and momentum, and on writing style. Formula Mathematically: p = F/A, where: p is the pressure, F is the magnitude of the normal force, and A is the area of the surface in contact. Pressure is a scalar quantity. It relates the vector area element (a vector normal to the surface) with the normal force acting on it. The pressure is the scalar proportionality constant that relates the two normal vectors: dF = −p dA. The minus sign comes from the convention that the force is considered towards the surface element, while the normal vector points outward. The equation has meaning in that, for any surface S in contact with the fluid, the total force exerted by the fluid on that surface is the surface integral over S of the right-hand side of the above equation. It is incorrect (although rather usual) to say "the pressure is directed in such or such direction". The pressure, as a scalar, has no direction. The force given by the previous relationship to the quantity has a direction, but the pressure does not. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same. Pressure is distributed to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume. Units The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N/m2, or kg·m−1·s−2). This name for the unit was added in 1971; before that, pressure in SI was expressed in newtons per square metre. Other units of pressure, such as pounds per square inch (lbf/in2) and bar, are also in common use. The CGS unit of pressure is the barye (Ba), equal to 1 dyn·cm−2, or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre ("g/cm2" or "kg/cm2") and the like without properly identifying the force units. But using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as units of force is deprecated in SI. The technical atmosphere (symbol: at) is 1 kgf/cm2 (98.0665 kPa, or 14.223 psi). 
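As a rough illustration of the defining relation and of how the units above relate to one another, the following sketch computes a pressure as force per unit area and converts it to psi and atmospheres; the atmosphere value is the 101,325 Pa quoted in this article, and the psi factor is the standard one.

```python
# A minimal sketch of p = F / A and a few standard unit conversions.
PA_PER_PSI = 6894.757   # one psi is about 6894.757 Pa (standard factor)
PA_PER_ATM = 101_325    # one standard atmosphere in pascals

def pressure_pa(force_newtons: float, area_m2: float) -> float:
    """Pressure as normal force per unit area (Pa = N/m^2)."""
    return force_newtons / area_m2

p = pressure_pa(force_newtons=500.0, area_m2=0.01)  # 500 N spread over 100 cm^2
print(p)               # 50000.0 Pa
print(p / PA_PER_PSI)  # ~7.25 psi
print(p / PA_PER_ATM)  # ~0.49 atm
```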
Pressure is related to energy density and may be expressed in units such as joules per cubic metre (J/m3, which is equal to Pa). Mathematically, 1 J/m3 = 1 N·m/m3 = 1 N/m2 = 1 Pa. Some meteorologists prefer the hectopascal (hPa) for atmospheric air pressure, which is equivalent to the older unit millibar (mbar). Similar pressures are given in kilopascals (kPa) in most other fields, except aviation where the hecto- prefix is commonly used. The inch of mercury is still used in the United States. Oceanographers usually measure underwater pressure in decibars (dbar) because pressure in the ocean increases by approximately one decibar per metre depth. The standard atmosphere (atm) is an established constant. It is approximately equal to typical air pressure at Earth mean sea level and is defined as 101,325 Pa. Because pressure is commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g., centimetres of water, millimetres of mercury or inches of mercury). The most common choices are mercury (Hg) and water; water is nontoxic and readily available, while mercury's high density allows a shorter column (and so a smaller manometer) to be used to measure a given pressure. The pressure exerted by a column of liquid of height h and density ρ is given by the hydrostatic pressure equation p = ρgh, where g is the gravitational acceleration. Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely. When millimetres of mercury (or inches of mercury) are quoted today, these units are not based on a physical column of mercury; rather, they have been given precise definitions that can be expressed in terms of SI units. One millimetre of mercury is approximately equal to one torr. The water-based units still depend on the density of water, a measured, rather than defined, quantity. These manometric units are still encountered in many fields. Blood pressure is measured in millimetres (or centimetres) of mercury in most of the world, and lung pressures in centimetres of water are still common. Underwater divers use the metre sea water (msw or MSW) and foot sea water (fsw or FSW) units of pressure, and these are the units for pressure gauges used to measure pressure exposure in diving chambers and personal decompression computers. A msw is defined as 0.1 bar (= 10,000 Pa) and is not the same as a linear metre of depth. 33.066 fsw = 1 atm, so 1 fsw = 101,325 Pa / 33.066 = 3,064.326 Pa. The pressure conversion from msw to fsw is different from the length conversion: 10 msw = 32.6336 fsw, while 10 m = 32.8083 ft. Gauge pressure is often given in units with "g" appended, e.g. "kPag", "barg" or "psig", and units for measurements of absolute pressure are sometimes given a suffix of "a", to avoid confusion, for example "kPaa", "psia". However, the US National Institute of Standards and Technology recommends that, to avoid confusion, any modifiers be instead applied to the quantity being measured rather than the unit of measure. For example, the gauge qualifier would be attached to the quantity symbol (pg) rather than to the unit (kPag). Differential pressure is expressed in units with "d" appended; this type of measurement is useful when considering sealing performance or whether a valve will open or close. 
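A short sketch of the hydrostatic relation behind these manometric units follows; the densities and the value of g are nominal, and, as noted above, the modern mmHg and msw are fixed by convention rather than by an actual fluid column.

```python
# A minimal sketch of the hydrostatic relation p = rho * g * h used by manometric units.
# Densities and g are nominal values; the legal definitions of mmHg, msw, etc. are conventional.
G = 9.80665            # standard gravity, m/s^2
RHO_WATER = 1000.0     # kg/m^3 (nominal)
RHO_MERCURY = 13595.1  # kg/m^3 (nominal, 0 degrees C)

def column_pressure_pa(density: float, height_m: float, g: float = G) -> float:
    return density * g * height_m

print(column_pressure_pa(RHO_MERCURY, 0.001))  # ~133.3 Pa for 1 mmHg, roughly one torr
print(column_pressure_pa(RHO_WATER, 0.01))     # ~98.1 Pa for 1 cmH2O
print(column_pressure_pa(RHO_WATER, 10.0))     # ~98.1 kPa for 10 m of water, close to but
                                               # not exactly the 1 bar that 10 msw denotes
```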
Presently or formerly popular pressure units include the following: atmosphere (atm) manometric units: centimetre, inch, millimetre (torr) and micrometre (mTorr, micron) of mercury, height of equivalent column of water, including millimetre (mm H2O), centimetre (cm H2O), metre, inch, and foot of water; imperial and customary units: kip, short ton-force, long ton-force, pound-force, ounce-force, and poundal per square inch, short ton-force and long ton-force per square inch, fsw (feet sea water) used in underwater diving, particularly in connection with diving pressure exposure and decompression; non-SI metric units: bar, decibar, millibar, msw (metres sea water), used in underwater diving, particularly in connection with diving pressure exposure and decompression, kilogram-force, or kilopond, per square centimetre (technical atmosphere), gram-force and tonne-force (metric ton-force) per square centimetre, barye (dyne per square centimetre), kilogram-force and tonne-force per square metre, sthene per square metre (pieze). Examples As an example of varying pressures, a finger can be pressed against a wall without making any lasting impression; however, the same finger pushing a thumbtack can easily damage the wall. Although the force applied to the surface is the same, the thumbtack applies more pressure because the point concentrates that force into a smaller area. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. Unlike stress, pressure is defined as a scalar quantity. The negative gradient of pressure is called the force density. Another example is a knife. If the flat edge is used, force is distributed over a larger surface area resulting in less pressure, and it will not cut. Using the sharp edge, which has less surface area, results in greater pressure, and so the knife cuts smoothly. This is one example of a practical application of pressure. For gases, pressure is sometimes measured not as an absolute pressure, but relative to atmospheric pressure; such measurements are called gauge pressure. An example of this is the air pressure in an automobile tire, which might be said to be "220 kPa (32 psi)", but is actually 220 kPa (32 psi) above atmospheric pressure. Since atmospheric pressure at sea level is about 100 kPa (14.7 psi), the absolute pressure in the tire is therefore about 320 kPa (47 psi). In technical work, this is written "a gauge pressure of 220 kPa (32 psi)". Where space is limited, such as on pressure gauges, name plates, graph labels, and table headings, the use of a modifier in parentheses, such as "kPa (gauge)" or "kPa (absolute)", is permitted. In non-SI technical work, a gauge pressure of 32 psi is sometimes written as "32 psig", and an absolute pressure as "32 psia", though the other methods explained above that avoid attaching characters to the unit of pressure are preferred. Gauge pressure is the relevant measure of pressure wherever one is interested in the stress on storage vessels and the plumbing components of fluidics systems. However, whenever equation-of-state properties, such as densities or changes in densities, must be calculated, pressures must be expressed in terms of their absolute values. For instance, if the atmospheric pressure is 100 kPa, a gas (such as helium) at 200 kPa (gauge) (300 kPa [absolute]) is 50% denser than the same gas at 100 kPa (gauge) (200 kPa [absolute]). Focusing on gauge values, one might erroneously conclude the first sample had twice the density of the second one. 
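The gauge/absolute distinction in the example above can be made concrete with a small sketch; it assumes an ambient pressure of 100 kPa and ideal-gas behaviour, so that density scales with absolute pressure at fixed temperature.

```python
# A minimal sketch: gauge-to-absolute conversion and why density comparisons need absolute values.
# Assumes an ambient pressure of 100 kPa and ideal-gas behaviour at a fixed temperature.
ATMOSPHERIC_KPA = 100.0

def absolute_kpa(gauge_kpa: float) -> float:
    return gauge_kpa + ATMOSPHERIC_KPA

# Two samples of the same gas: 200 kPa (gauge) versus 100 kPa (gauge).
p1, p2 = absolute_kpa(200.0), absolute_kpa(100.0)
print(p1 / p2)        # 1.5 -> the first sample is 50% denser (density ~ absolute pressure)
print(200.0 / 100.0)  # 2.0 -> the misleading ratio obtained from gauge readings alone
```

The first ratio is the physically meaningful one; the second is the misleading figure obtained by comparing gauge readings directly.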
Scalar nature In a static gas, the gas as a whole does not appear to move. The individual molecules of the gas, however, are in constant random motion. Because there are an extremely large number of molecules and because the motion of the individual molecules is random in every direction, no motion is detected. When the gas is at least partially confined (that is, not free to expand rapidly), the gas will exhibit a hydrostatic pressure. This confinement can be achieved with either a physical container of some sort, or in a gravitational well such as a planet, otherwise known as atmospheric pressure. In the case of planetary atmospheres, the pressure-gradient force of the gas pushing outwards from higher pressure, lower altitudes to lower pressure, higher altitudes is balanced by the gravitational force, preventing the gas from diffusing into outer space and maintaining hydrostatic equilibrium. In a physical container, the pressure of the gas originates from the molecules colliding with the walls of the container. The walls of the container can be anywhere inside the gas, and the force per unit area (the pressure) is the same. If the "container" is shrunk down to a very small point (becoming less true as the atomic scale is approached), the pressure will still have a single value at that point. Therefore, pressure is a scalar quantity, not a vector quantity. It has magnitude but no direction sense associated with it. Pressure force acts in all directions at a point inside a gas. At the surface of a gas, the pressure force acts perpendicular (at right angle) to the surface. A closely related quantity is the stress tensor σ, which relates the vector force to the vector area via the linear relation . This tensor may be expressed as the sum of the viscous stress tensor minus the hydrostatic pressure. The negative of the stress tensor is sometimes called the pressure tensor, but in the following, the term "pressure" will refer only to the scalar pressure. According to the theory of general relativity, pressure increases the strength of a gravitational field (see stress–energy tensor) and so adds to the mass-energy cause of gravity. This effect is unnoticeable at everyday pressures but is significant in neutron stars, although it has not been experimentally tested. Types Fluid pressure Fluid pressure is most often the compressive stress at some point within a fluid. (The term fluid refers to both liquids and gases – for more information specifically about liquid pressure, see section below.) Fluid pressure occurs in one of two situations: An open condition, called "open channel flow", e.g. the ocean, a swimming pool, or the atmosphere. A closed condition, called "closed conduit", e.g. a water line or gas line. Pressure in open conditions usually can be approximated as the pressure in "static" or non-moving conditions (even in the ocean where there are waves and currents), because the motions create only negligible changes in the pressure. Such conditions conform with principles of fluid statics. The pressure at any given point of a non-moving (static) fluid is called the hydrostatic pressure. Closed bodies of fluid are either "static", when the fluid is not moving, or "dynamic", when the fluid can move as in either a pipe or by compressing an air gap in a closed container. The pressure in closed conditions conforms with the principles of fluid dynamics. The concepts of fluid pressure are predominantly attributed to the discoveries of Blaise Pascal and Daniel Bernoulli. 
Bernoulli's equation can be used in almost any situation to determine the pressure at any point in a fluid. The equation makes some assumptions about the fluid, such as the fluid being ideal and incompressible. An ideal fluid is a fluid in which there is no friction, it is inviscid (zero viscosity). The equation for all points of a system filled with a constant-density fluid is where: p, pressure of the fluid, = ρg, density × acceleration of gravity is the (volume-) specific weight of the fluid, v, velocity of the fluid, g, acceleration of gravity, z, elevation, , pressure head, , velocity head. Applications Hydraulic brakes Artesian well Blood pressure Hydraulic head Plant cell turgidity Pythagorean cup Pressure washing Explosion or deflagration pressures Explosion or deflagration pressures are the result of the ignition of explosive gases, mists, dust/air suspensions, in unconfined and confined spaces. Negative pressures While pressures are, in general, positive, there are several situations in which negative pressures may be encountered: When dealing in relative (gauge) pressures. For instance, an absolute pressure of 80 kPa may be described as a gauge pressure of −21 kPa (i.e., 21 kPa below an atmospheric pressure of 101 kPa). For example, abdominal decompression is an obstetric procedure during which negative gauge pressure is applied intermittently to a pregnant woman's abdomen. Negative absolute pressures are possible. They are effectively tension, and both bulk solids and bulk liquids can be put under negative absolute pressure by pulling on them. Microscopically, the molecules in solids and liquids have attractive interactions that overpower the thermal kinetic energy, so some tension can be sustained. Thermodynamically, however, a bulk material under negative pressure is in a metastable state, and it is especially fragile in the case of liquids where the negative pressure state is similar to superheating and is easily susceptible to cavitation. In certain situations, the cavitation can be avoided and negative pressures sustained indefinitely, for example, liquid mercury has been observed to sustain up to in clean glass containers. Negative liquid pressures are thought to be involved in the ascent of sap in plants taller than 10 m (the atmospheric pressure head of water). The Casimir effect can create a small attractive force due to interactions with vacuum energy; this force is sometimes termed "vacuum pressure" (not to be confused with the negative gauge pressure of a vacuum). For non-isotropic stresses in rigid bodies, depending on how the orientation of a surface is chosen, the same distribution of forces may have a component of positive stress along one surface normal, with a component of negative stress acting along another surface normal. The pressure is then defined as the average of the three principal stresses. The stresses in an electromagnetic field are generally non-isotropic, with the stress normal to one surface element (the normal stress) being negative, and positive for surface elements perpendicular to this. In cosmology, dark energy creates a very small yet cosmically significant amount of negative pressure, which accelerates the expansion of the universe. Stagnation pressure Stagnation pressure is the pressure a fluid exerts when it is forced to stop moving. Consequently, although a fluid moving at higher speed will have a lower static pressure, it may have a higher stagnation pressure when forced to a standstill. 
Static pressure and stagnation pressure are related by p0 = p + ½ρv², where p0 is the stagnation pressure, ρ is the density, v is the flow velocity, and p is the static pressure. The pressure of a moving fluid can be measured using a Pitot tube, or one of its variations such as a Kiel probe or Cobra probe, connected to a manometer. Depending on where the inlet holes are located on the probe, it can measure static pressures or stagnation pressures. Surface pressure and surface tension There is a two-dimensional analog of pressure – the lateral force per unit length applied on a line perpendicular to the force. Surface pressure is denoted by π, defined as the force per unit length, π = F/l, and it shares many similar properties with three-dimensional pressure. Properties of surface chemicals can be investigated by measuring pressure/area isotherms, as the two-dimensional analog of Boyle's law, πA = k, at constant temperature. Surface tension is another example of surface pressure, but with a reversed sign, because "tension" is the opposite to "pressure". Pressure of an ideal gas In an ideal gas, molecules have no volume and do not interact. According to the ideal gas law, pressure varies linearly with temperature and quantity, and inversely with volume: p = nRT/V, where: p is the absolute pressure of the gas, n is the amount of substance, T is the absolute temperature, V is the volume, R is the ideal gas constant. Real gases exhibit a more complex dependence on the variables of state. Vapour pressure Vapour pressure is the pressure of a vapour in thermodynamic equilibrium with its condensed phases in a closed system. All liquids and solids have a tendency to evaporate into a gaseous form, and all gases have a tendency to condense back to their liquid or solid form. The atmospheric pressure boiling point of a liquid (also known as the normal boiling point) is the temperature at which the vapor pressure equals the ambient atmospheric pressure. With any incremental increase in that temperature, the vapor pressure becomes sufficient to overcome atmospheric pressure and cause the liquid to form vapour bubbles inside the bulk of the substance. Bubble formation deeper in the liquid requires a higher pressure, and therefore higher temperature, because the fluid pressure increases above the atmospheric pressure as the depth increases. The vapor pressure that a single component in a mixture contributes to the total pressure in the system is called partial vapor pressure. Liquid pressure When a person swims under the water, water pressure is felt acting on the person's eardrums. The deeper that person swims, the greater the pressure. The pressure felt is due to the weight of the water above the person. As someone swims deeper, there is more water above the person and therefore greater pressure. The pressure a liquid exerts depends on its depth. Liquid pressure also depends on the density of the liquid. If someone were submerged in a liquid more dense than water, the pressure would be correspondingly greater. Thus, we can say that depth, density and liquid pressure are directly proportional. The pressure due to a liquid in liquid columns of constant density or at a depth within a substance is represented by the formula p = ρgh, where: p is liquid pressure, g is gravity at the surface of the overlaying material, ρ is density of the liquid, h is height of the liquid column or depth within the substance. Another way of saying the same formula is the following: The pressure a liquid exerts against the sides and bottom of a container depends on the density and the depth of the liquid. 
If atmospheric pressure is neglected, liquid pressure against the bottom is twice as great at twice the depth; at three times the depth, the liquid pressure is threefold; etc. Or, if the liquid is two or three times as dense, the liquid pressure is correspondingly two or three times as great for any given depth. Liquids are practically incompressible – that is, their volume can hardly be changed by pressure (water volume decreases by only 50 millionths of its original volume for each atmosphere of increase in pressure). Thus, except for small changes produced by temperature, the density of a particular liquid is practically the same at all depths. Atmospheric pressure pressing on the surface of a liquid must be taken into account when trying to discover the total pressure acting on a liquid. The total pressure of a liquid, then, is ρgh plus the pressure of the atmosphere. When this distinction is important, the term total pressure is used. Otherwise, discussions of liquid pressure refer to pressure without regard to the normally ever-present atmospheric pressure. The pressure does not depend on the amount of liquid present. Volume is not the important factor – depth is. The average water pressure acting against a dam depends on the average depth of the water and not on the volume of water held back. For example, a wide but shallow lake exerts only half the average pressure on its dam that a small pond twice as deep does. (The total force applied to the longer dam will be greater, due to the greater total surface area for the pressure to act upon. But for a section of each dam of equal width, the shallower water will apply only one quarter the force that the deeper water does.) A person will feel the same pressure whether their head is dunked a metre beneath the surface of the water in a small pool or to the same depth in the middle of a large lake. If four interconnected vases contain different amounts of water but are all filled to equal depths, then a fish with its head dunked a few centimetres under the surface will be acted on by water pressure that is the same in any of the vases. If the fish swims a few centimetres deeper, the pressure on the fish will increase with depth and be the same no matter which vase the fish is in. If the fish swims to the bottom, the pressure will be greater, but it makes no difference which vase it is in. All vases are filled to equal depths, so the water pressure is the same at the bottom of each vase, regardless of its shape or volume. If water pressure at the bottom of a vase were greater than water pressure at the bottom of a neighboring vase, the greater pressure would force water sideways and then up the narrower vase to a higher level until the pressures at the bottom were equalized. Pressure is depth dependent, not volume dependent, so there is a reason that water seeks its own level. Restating this as an energy equation, the energy per unit volume in an ideal, incompressible liquid is constant throughout its vessel. At the surface, gravitational potential energy is large but liquid pressure energy is low. At the bottom of the vessel, all the gravitational potential energy is converted to pressure energy. The sum of pressure energy and gravitational potential energy per unit volume is constant throughout the volume of the fluid and the two energy components change linearly with the depth. Mathematically, it is described by Bernoulli's equation, where velocity head is zero, so that comparisons per unit volume in the vessel reduce to p + γz = constant. Terms have the same meaning as in section Fluid pressure. 
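The depth dependence described above can be checked numerically; the depths in the following sketch are illustrative, and atmospheric pressure is neglected as in the passage.

```python
# A minimal sketch: hydrostatic pressure depends on depth, not volume, and the force on a
# dam section grows with the square of the depth. Illustrative values for fresh water.
G, RHO = 9.81, 1000.0  # m/s^2, kg/m^3

def pressure_at_depth(h_m: float) -> float:
    return RHO * G * h_m                 # Pa, atmospheric pressure neglected

def force_per_metre_width(depth_m: float) -> float:
    # Average pressure over the wetted face (rho*g*h/2) times its height h,
    # giving rho*g*h^2/2 newtons per metre of dam width.
    return 0.5 * RHO * G * depth_m ** 2

print(pressure_at_depth(3.0), pressure_at_depth(6.0))           # doubling depth doubles pressure
print(force_per_metre_width(3.0) / force_per_metre_width(6.0))  # 0.25 -> one quarter the force
```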
Direction of liquid pressure An experimentally determined fact about liquid pressure is that it is exerted equally in all directions. If someone is submerged in water, no matter which way that person tilts their head, the person will feel the same amount of water pressure on their ears. Because a liquid can flow, this pressure is not only downward. Pressure is seen acting sideways when water spurts sideways from a leak in the side of an upright can. Pressure also acts upward, as demonstrated when someone tries to push a beach ball beneath the surface of the water. The bottom of a boat is pushed upward by water pressure (buoyancy). When a liquid presses against a surface, there is a net force that is perpendicular to the surface. Although pressure does not have a specific direction, force does. A submerged triangular block has water forced against each point from many directions, but components of the force that are not perpendicular to the surface cancel each other out, leaving only a net force perpendicular to the surface. This is why the velocity of a liquid particle changes only in the component normal to the wall when it collides with the container's wall. Likewise, if the collision site is a hole, water spurting from the hole in a bucket initially exits the bucket in a direction at right angles to the surface of the bucket in which the hole is located. Then it curves downward due to gravity. If there are three holes in a bucket (top, bottom, and middle), then the force vectors perpendicular to the inner container surface will increase with increasing depth – that is, a greater pressure at the bottom makes it so that the bottom hole will shoot water out the farthest. The force exerted by a fluid on a smooth surface is always at right angles to the surface. The speed of liquid out of the hole is √(2gh), where h is the depth below the free surface. This is the same speed the water (or anything else) would have if freely falling the same vertical distance h. Kinematic pressure P = p/ρ0 is the kinematic pressure, where p is the pressure and ρ0 the constant mass density. The SI unit of P is m2/s2. Kinematic pressure is used in the same manner as kinematic viscosity in order to compute the Navier–Stokes equation without explicitly showing the density ρ0. Navier–Stokes equation with kinematic quantities See also Notes References External links Introduction to Fluid Statics and Dynamics on Project PHYSNET Pressure being a scalar quantity wikiUnits.org - Convert units of pressure Atmospheric thermodynamics Underwater diving physics Fluid dynamics Fluid mechanics Hydraulics Thermodynamic properties State functions Thermodynamics
23621
https://en.wikipedia.org/wiki/Polygon
Polygon
In geometry, a polygon () is a plane figure made up of line segments connected to form a closed polygonal chain. The segments of a closed polygonal chain are called its edges or sides. The points where two edges meet are the polygon's vertices or corners. An n-gon is a polygon with n sides; for example, a triangle is a 3-gon. A simple polygon is one which does not intersect itself. More precisely, the only allowed intersections among the line segments that make up the polygon are the shared endpoints of consecutive segments in the polygonal chain. A simple polygon is the boundary of a region of the plane that is called a solid polygon. The interior of a solid polygon is its body, also known as a polygonal region or polygonal area. In contexts where one is concerned only with simple and solid polygons, a polygon may refer only to a simple polygon or to a solid polygon. A polygonal chain may cross over itself, creating star polygons and other self-intersecting polygons. Some sources also consider closed polygonal chains in Euclidean space to be a type of polygon (a skew polygon), even when the chain does not lie in a single plane. A polygon is a 2-dimensional example of the more general polytope in any number of dimensions. There are many more generalizations of polygons defined for different purposes. Etymology The word polygon derives from the Greek adjective πολύς (polús) 'much', 'many' and γωνία (gōnía) 'corner' or 'angle'. It has been suggested that γόνυ (gónu) 'knee' may be the origin of gon. Classification Number of sides Polygons are primarily classified by the number of sides. Convexity and intersection Polygons may be characterized by their convexity or type of non-convexity: Convex: any line drawn through the polygon (and not tangent to an edge or corner) meets its boundary exactly twice. As a consequence, all its interior angles are less than 180°. Equivalently, any line segment with endpoints on the boundary passes through only interior points between its endpoints. This condition is true for polygons in any geometry, not just Euclidean. Non-convex: a line may be found which meets its boundary more than twice. Equivalently, there exists a line segment between two boundary points that passes outside the polygon. Simple: the boundary of the polygon does not cross itself. All convex polygons are simple. Concave: Non-convex and simple. There is at least one interior angle greater than 180°. Star-shaped: the whole interior is visible from at least one point, without crossing any edge. The polygon must be simple, and may be convex or concave. All convex polygons are star-shaped. Self-intersecting: the boundary of the polygon crosses itself. The term complex is sometimes used in contrast to simple, but this usage risks confusion with the idea of a complex polygon as one which exists in the complex Hilbert plane consisting of two complex dimensions. Star polygon: a polygon which self-intersects in a regular way. A polygon cannot be both a star and star-shaped. Equality and symmetry Equiangular: all corner angles are equal. Equilateral: all edges are of the same length. Regular: both equilateral and equiangular. Cyclic: all corners lie on a single circle, called the circumcircle. Tangential: all sides are tangent to an inscribed circle. Isogonal or vertex-transitive: all corners lie within the same symmetry orbit. The polygon is also cyclic and equiangular. Isotoxal or edge-transitive: all sides lie within the same symmetry orbit. The polygon is also equilateral and tangential. 
The property of regularity may be defined in other ways: a polygon is regular if and only if it is both isogonal and isotoxal, or equivalently it is both cyclic and equilateral. A non-convex regular polygon is called a regular star polygon. Miscellaneous Rectilinear: the polygon's sides meet at right angles, i.e. all its interior angles are 90 or 270 degrees. Monotone with respect to a given line L: every line orthogonal to L intersects the polygon not more than twice. Properties and formulas Euclidean geometry is assumed throughout. Angles Any polygon has as many corners as it has sides. Each corner has several angles. The two most important ones are: Interior angle – The sum of the interior angles of a simple n-gon is (n − 2)π radians or (n − 2) × 180 degrees. This is because any simple n-gon (having n sides) can be considered to be made up of (n − 2) triangles, each of which has an angle sum of π radians or 180 degrees. The measure of any interior angle of a convex regular n-gon is (n − 2)π/n radians or 180(n − 2)/n degrees. The interior angles of regular star polygons were first studied by Poinsot, in the same paper in which he describes the four regular star polyhedra: for a regular {p/q}-gon (a p-gon with central density q), each interior angle is π(p − 2q)/p radians or 180(p − 2q)/p degrees. Exterior angle – The exterior angle is the supplementary angle to the interior angle. Tracing around a convex n-gon, the angle "turned" at a corner is the exterior or external angle. Tracing all the way around the polygon makes one full turn, so the sum of the exterior angles must be 360°. This argument can be generalized to concave simple polygons, if external angles that turn in the opposite direction are subtracted from the total turned. Tracing around an n-gon in general, the sum of the exterior angles (the total amount one rotates at the vertices) can be any integer multiple d of 360°, e.g. 720° for a pentagram and 0° for an angular "eight" or antiparallelogram, where d is the density or turning number of the polygon. Area In this section, the vertices of the polygon under consideration are taken to be (x0, y0), (x1, y1), ..., (xn−1, yn−1) in order. For convenience in some formulas, the notation (xn, yn) = (x0, y0) will also be used. Simple polygons If the polygon is non-self-intersecting (that is, simple), the signed area is A = ½ Σ (xi yi+1 − xi+1 yi), with the sum taken over i = 0, ..., n − 1; an equivalent formula uses determinants whose entries can be expressed through the squared distances between pairs of vertices. The signed area depends on the ordering of the vertices and on the orientation of the plane. Commonly, the positive orientation is defined by the (counterclockwise) rotation that maps the positive x-axis to the positive y-axis. If the vertices are ordered counterclockwise (that is, according to positive orientation), the signed area is positive; otherwise, it is negative. In either case, the area formula is correct in absolute value. This is commonly called the shoelace formula or surveyor's formula. The area A of a simple polygon can also be computed if the lengths of the sides, a1, a2, ..., an and the exterior angles, θ1, θ2, ..., θn are known, from a formula described by Lopshits in 1963. If the polygon can be drawn on an equally spaced grid such that all its vertices are grid points, Pick's theorem gives a simple formula for the polygon's area based on the numbers of interior and boundary grid points: the former number plus one-half the latter number, minus 1. In every polygon with perimeter p and area A, the isoperimetric inequality p² > 4πA holds. For any two simple polygons of equal area, the Bolyai–Gerwien theorem asserts that the first can be cut into polygonal pieces which can be reassembled to form the second polygon. 
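A minimal implementation of the shoelace (surveyor's) formula is sketched below; it assumes the vertices of a simple polygon are supplied in order, as in this section.

```python
# A minimal sketch of the shoelace (surveyor's) formula for the signed area of a
# simple polygon whose vertices are listed in order.
def signed_area(vertices):
    """Positive for counterclockwise vertex order, negative for clockwise."""
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]   # wrap around to close the polygon
        total += x0 * y1 - x1 * y0
    return total / 2.0

square = [(0, 0), (4, 0), (4, 4), (0, 4)]      # counterclockwise 4x4 square
print(signed_area(square))                      # 16.0
print(signed_area(list(reversed(square))))      # -16.0 (clockwise ordering)
```

Reversing the vertex order flips the sign of the result, matching the orientation convention described above.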
The lengths of the sides of a polygon do not in general determine its area. However, if the polygon is simple and cyclic then the sides do determine the area. Of all n-gons with given side lengths, the one with the largest area is cyclic. Of all n-gons with a given perimeter, the one with the largest area is regular (and therefore cyclic). Regular polygons Many specialized formulas apply to the areas of regular polygons. The area of a regular polygon is given in terms of the radius r of its inscribed circle and its perimeter p by A = rp/2. This radius is also termed its apothem and is often represented as a. The area of a regular n-gon in terms of the radius R of its circumscribed circle can be expressed trigonometrically as A = ½nR² sin(2π/n). The area of a regular n-gon inscribed in a unit-radius circle, with side s and interior angle, can also be expressed trigonometrically. Self-intersecting The area of a self-intersecting polygon can be defined in two different ways, giving different answers: Using the formulas for simple polygons, we allow that particular regions within the polygon may have their area multiplied by a factor which we call the density of the region. For example, the central convex pentagon in the center of a pentagram has density 2. The two triangular regions of a cross-quadrilateral (like a figure 8) have opposite-signed densities, and adding their areas together can give a total area of zero for the whole figure. Considering the enclosed regions as point sets, we can find the area of the enclosed point set. This corresponds to the area of the plane covered by the polygon or to the area of one or more simple polygons having the same outline as the self-intersecting one. In the case of the cross-quadrilateral, it is treated as two simple triangles. Centroid Using the same convention for vertex coordinates as in the previous section, the coordinates of the centroid of a solid simple polygon are Cx = (1/(6A)) Σ (xi + xi+1)(xi yi+1 − xi+1 yi) and Cy = (1/(6A)) Σ (yi + yi+1)(xi yi+1 − xi+1 yi), with the sums taken over i = 0, ..., n − 1. In these formulas, the signed value of area must be used. For triangles (n = 3), the centroids of the vertices and of the solid shape are the same, but, in general, this is not true for n > 3. The centroid of the vertex set of a polygon with n vertices has the coordinates cx = (1/n) Σ xi, cy = (1/n) Σ yi. Generalizations The idea of a polygon has been generalized in various ways. Some of the more important include: A spherical polygon is a circuit of arcs of great circles (sides) and vertices on the surface of a sphere. It allows the digon, a polygon having only two sides and two corners, which is impossible in a flat plane. Spherical polygons play an important role in cartography (map making) and in Wythoff's construction of the uniform polyhedra. A skew polygon does not lie in a flat plane, but zigzags in three (or more) dimensions. The Petrie polygons of the regular polytopes are well known examples. An apeirogon is an infinite sequence of sides and angles, which is not closed but has no ends because it extends indefinitely in both directions. A skew apeirogon is an infinite sequence of sides and angles that do not lie in a flat plane. A polygon with holes is an area-connected or multiply-connected planar polygon with one external boundary and one or more interior boundaries (holes). A complex polygon is a configuration analogous to an ordinary polygon, which exists in the complex plane of two real and two imaginary dimensions. An abstract polygon is an algebraic partially ordered set representing the various elements (sides, vertices, etc.) and their connectivity. 
A real geometric polygon is said to be a realization of the associated abstract polygon. Depending on the mapping, all the generalizations described here can be realized. A polyhedron is a three-dimensional solid bounded by flat polygonal faces, analogous to a polygon in two dimensions. The corresponding shapes in four or higher dimensions are called polytopes. (In other conventions, the words polyhedron and polytope are used in any dimension, with the distinction between the two that a polytope is necessarily bounded.) Naming The word polygon comes from Late Latin polygōnum (a noun), from Greek πολύγωνον (polygōnon/polugōnon), noun use of neuter of πολύγωνος (polygōnos/polugōnos, the masculine adjective), meaning "many-angled". Individual polygons are named (and sometimes classified) according to the number of sides, combining a Greek-derived numerical prefix with the suffix -gon, e.g. pentagon, dodecagon. The triangle, quadrilateral and nonagon are exceptions. Beyond decagons (10-sided) and dodecagons (12-sided), mathematicians generally use numerical notation, for example 17-gon and 257-gon. Exceptions exist for side counts that are easily expressed in verbal form (e.g. 20 and 30), or are used by non-mathematicians. Some special polygons also have their own names; for example the regular star pentagon is also known as the pentagram. To construct the name of a polygon with more than 20 and fewer than 100 edges, combine the prefixes as follows. The "kai" term applies to 13-gons and higher and was used by Kepler, and advocated by John H. Conway for clarity of concatenated prefix numbers in the naming of quasiregular polyhedra, though not all sources use it. History Polygons have been known since ancient times. The regular polygons were known to the ancient Greeks, with the pentagram, a non-convex regular polygon (star polygon), appearing as early as the 7th century B.C. on a krater by Aristonothos, found at Caere and now in the Capitoline Museum. The first known systematic study of non-convex polygons in general was made by Thomas Bradwardine in the 14th century. In 1952, Geoffrey Colin Shephard generalized the idea of polygons to the complex plane, where each real dimension is accompanied by an imaginary one, to create complex polygons. In nature Polygons appear in rock formations, most commonly as the flat facets of crystals, where the angles between the sides depend on the type of mineral from which the crystal is made. Regular hexagons can occur when the cooling of lava forms areas of tightly packed columns of basalt, which may be seen at the Giant's Causeway in Northern Ireland, or at the Devil's Postpile in California. In biology, the surface of the wax honeycomb made by bees is an array of hexagons, and the sides and base of each cell are also polygons. Computer graphics In computer graphics, a polygon is a primitive used in modelling and rendering. They are defined in a database, containing arrays of vertices (the coordinates of the geometrical vertices, as well as other attributes of the polygon, such as color, shading and texture), connectivity information, and materials. Any surface is modelled as a tessellation called polygon mesh. If a square mesh has n + 1 points (vertices) per side, there are n squared squares in the mesh, or 2n squared triangles since there are two triangles in a square. There are (n + 1) squared / (2n squared) vertices per triangle. Where n is large, this approaches one half. Or, each vertex inside the square mesh connects four edges (lines). 
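The mesh bookkeeping above can be verified with a short sketch; here n is the number of squares along one side, so the mesh has n + 1 vertices per side.

```python
# A minimal sketch of the square-mesh counts: (n + 1)^2 vertices, n^2 squares,
# 2*n^2 triangles, and a vertex-to-triangle ratio that tends to one half.
def mesh_counts(n: int):
    vertices = (n + 1) ** 2
    squares = n ** 2
    triangles = 2 * squares          # two triangles per square
    return vertices, squares, triangles

for n in (2, 10, 1000):
    v, s, t = mesh_counts(n)
    print(n, v, s, t, round(v / t, 4))   # the last column approaches 0.5 as n grows
```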
The imaging system calls up the structure of polygons needed for the scene to be created from the database. This is transferred to active memory and finally, to the display system (screen, TV monitors etc.) so that the scene can be viewed. During this process, the imaging system renders polygons in correct perspective ready for transmission of the processed data to the display system. Although polygons are two-dimensional, through the system computer they are placed in a visual scene in the correct three-dimensional orientation. In computer graphics and computational geometry, it is often necessary to determine whether a given point lies inside a simple polygon given by a sequence of line segments. This is called the point in polygon test. See also Boolean operations on polygons Complete graph Constructible polygon Cyclic polygon Geometric shape Golygon List of polygons Polyform Polygon soup Polygon triangulation Precision polygon Spirolateral Synthetic geometry Tiling Tiling puzzle References Bibliography Coxeter, H.S.M.; Regular Polytopes, Methuen and Co., 1948 (3rd Edition, Dover, 1973). Cromwell, P.; Polyhedra, CUP hbk (1997), pbk. (1999). Grünbaum, B.; Are your polyhedra the same as my polyhedra? Discrete and comput. geom: the Goodman-Pollack festschrift, ed. Aronov et al. Springer (2003) pp. 461–488. (pdf) Notes External links What Are Polyhedra?, with Greek Numerical Prefixes Polygons, types of polygons, and polygon properties, with interactive animation How to draw monochrome orthogonal polygons on screens, by Herbert Glarner comp.graphics.algorithms Frequently Asked Questions, solutions to mathematical problems computing 2D and 3D polygons Comparison of the different algorithms for Polygon Boolean operations, compares capabilities, speed and numerical robustness Interior angle sum of polygons: a general formula, Provides an interactive Java investigation that extends the interior angle sum formula for simple closed polygons to include crossed (complex) polygons Euclidean plane geometry
23622
https://en.wikipedia.org/wiki/Player%20character
Player character
A player character (also known as a playable character or PC) is a fictional character in a video game or tabletop role-playing game whose actions are controlled by a player rather than the rules of the game. The characters that are not controlled by a player are called non-player characters (NPCs). The actions of non-player characters are typically handled by the game itself in video games, or according to rules followed by a gamemaster refereeing tabletop role-playing games. The player character functions as a fictional, alternate body for the player controlling the character. Video games typically have one player character for each person playing the game. Some games, such as multiplayer online battle arena, hero shooter, and fighting games, offer a group of player characters for the player to choose from, allowing the player to control one of them at a time. Where more than one player character is available, the characters may have distinctive abilities and differing styles of play. Overview Avatars A player character may sometimes be based on a real person, especially in sports games that use the names and likenesses of real athletes. Historical figures and leaders may sometimes appear as characters too, particularly in strategy or empire building games such as in Sid Meier's Civilization series. Such a player character is more properly an avatar as the player character's name and image typically have little bearing on the game itself. Avatars are also commonly seen in casino game simulations. Blank characters In many video games, and especially first-person shooters, the player character is a "blank slate" without any notable characteristics or even backstory. Pac-Man, Crono from Chrono Trigger, Link from The Legend of Zelda, Chell from Portal, and Claude from Grand Theft Auto III are examples of such characters. These characters are generally silent protagonists. Some games will go even further, never showing or naming the player character at all. This is somewhat common in first-person videogames, such as in Myst, but is more often done in strategy video games such as Dune 2000, Emperor: Battle for Dune, and Command & Conquer series. In such games, the only real indication that the player has a character (instead of an omnipresent status), is from the cutscenes during which the character is being given a mission briefing or debriefing; the player is usually addressed as "general", "commander", or another military rank. In gaming culture, such a character was called Ageless, Faceless, Gender-Neutral, Culturally Ambiguous Adventure Person, abbreviated as AFGNCAAP; a term that originated in Zork: Grand Inquisitor where it is used satirically to refer to the player. Character action games Character action games (also called character-driven games, character games or just action games) are a broad category of action games, referring to a variety of games that are driven by the physical actions of player characters. The term dates back to the golden age of arcade video games in the early 1980s, when the terms "action games" and "character games" began being used to distinguish a new emerging genre of character-driven action games from the space shoot 'em ups that had previously dominated the arcades in the late 1970s. Classic examples of character action games from that period include maze games like Pac-Man, platformers like Donkey Kong, and Frogger. 
Side-scrolling character action games (also called "side-scrolling action games" or "side-scrollers") are a broad category of character action games that were popular from the mid-1980s to the 1990s, which involve player characters defeating large groups of weaker enemies along a side-scrolling playfield. Examples include beat 'em ups like Kung-Fu Master and Double Dragon, ninja action games like The Legend of Kage and Shinobi, scrolling platformers like Super Mario Bros. and Sonic the Hedgehog, and run and gun shooters like Rolling Thunder and Gunstar Heroes. "Character action games" is also a term used for 3D hack and slash games modelled after Devil May Cry, which represent an evolution of arcade character action games. Other examples of this sub-genre include Ninja Gaiden, God of War, and Bayonetta. Fighting games Fighting games typically have a larger number of player characters to choose from, with some basic moves available to all or most characters and some unique moves only available to one or a few characters. Having many distinctive characters to play as and against, all possessing different moves and abilities, is necessary to create greater gameplay variety in such games. Hero shooters Similarly to MOBAs, hero shooters emphasize pre-designed "hero" characters with distinctive abilities and weapons that are not available to the other characters. Hero shooters strongly encourage teamwork between players on a team, guiding players to select effective combinations of hero characters and coordinate the use of hero abilities during a match. Multiplayer online battle arena Multiplayer online battle arena games offer a large group of viable player characters for the player to choose from, each of which has distinctive abilities, strengths, and weaknesses that make for a different style of play. Characters can learn new abilities or augment existing ones over the course of a match by collecting experience points. Choosing a character who complements the player's teammates and counters their opponents adds a layer of strategy before the match itself begins. Playable characters blend a variety of fantasy tropes, featuring numerous references to popular culture and mythology. Role-playing games In both tabletop role-playing games such as Dungeons & Dragons and role-playing video games such as Final Fantasy, a player typically creates or takes on the identity of a character that may have nothing in common with the player. The character is often of a certain (usually fictional) race and class (such as zombie, berserker, rifleman, elf, or cleric), each with strengths and weaknesses. The attributes of the characters (such as magic and fighting ability) are given as numerical values which can be increased as the gamer progresses and gains rank and experience points through accomplishing goals or fighting enemies. Sports games In many sports games, player characters are often modelled after real-life athletes, as opposed to fictional characters. This is particularly the case for sports simulation games, whereas many arcade-style sports games often have fictional characters instead. Secret characters A secret or unlockable character is a playable character in a video game available only after either completing the game or meeting another requirement. In some video games, characters that are not secret but appear only as non-player characters like bosses or enemies become playable characters after completing certain requirements, or sometimes by cheating.
See also Alternate character Avatar (computing) Non-player character References MUD terminology Role-playing game terminology Video game terminology
23623
https://en.wikipedia.org/wiki/Parish
Parish
A parish is a territorial entity in many Christian denominations, constituting a division within a diocese. A parish is under the pastoral care and clerical jurisdiction of a priest, often termed a parish priest, who might be assisted by one or more curates, and who operates from a parish church. Historically, a parish often covered the same geographical area as a manor. Its association with the parish church remains paramount. By extension the term parish refers not only to the territorial entity but to the people of its community or congregation as well as to church property within it. In England this church property was technically in ownership of the parish priest ex officio, vested in him on his institution to that parish. Etymology and use First attested in English in the late 13th century, the word parish comes from the Old French paroisse, in turn from the Latin paroecia, the Romanisation of the Greek παροικία (paroikía), "sojourning in a foreign land", itself from πάροικος (paroikos), "dwelling beside, stranger, sojourner", which is a compound of παρά (pará), "beside, by, near" and οἶκος (oîkos), "house". As an ancient concept, the term "parish" occurs in the long-established Christian denominations: Catholic, Anglican Communion, the Eastern Orthodox Church, and Lutheran churches, and in some Methodist, Congregationalist and Presbyterian administrations. The eighth Archbishop of Canterbury Theodore of Tarsus (c. 602–690) appended the parish structure to the Anglo-Saxon township unit, where it existed, and where minsters catered to the surrounding district. Territorial structure Broadly speaking, the parish is the standard unit in episcopal polity of church administration, although parts of a parish may be subdivided as a chapelry, with a chapel of ease or filial church serving as the local place of worship where the main parish church is difficult to access. In the wider picture of ecclesiastical polity, a parish comprises a division of a diocese or see. Parishes within a diocese may be grouped into a deanery or vicariate forane (or simply vicariate), overseen by a dean or vicar forane, or in some cases by an archpriest. Some churches of the Anglican Communion have deaneries as units of an archdeaconry. Outstations An outstation is a newly created congregation, a term usually used where the church is evangelical or a mission, particularly in African countries, but also historically in Australia. They exist mostly within Catholic and Anglican parishes. The Anglican Diocese of Cameroon describes its outstations as the result of outreach work "initiated, sponsored and supervised by the mother parishes". Once there is a big enough group of worshippers in the same place, the outstation is named by the bishop of the diocese. They are run by "catechists/evangelists" or lay readers, and supervised by the mother parish or archdeaconry. Outstations are not self-supporting, and in poor areas often consist of a very simple structure. The parish priest visits as often as possible. If and when the community has grown enough, the outstation may become a parish and have a parish priest assigned to it. Catholic Church In the Catholic Church, each parish normally has its own parish priest (in some countries called pastor or provost), who has responsibility and canonical authority over the parish. What in most English-speaking countries is termed the "parish priest" is referred to as the "pastor" in the United States, where the term "parish priest" is used of any priest assigned to a parish even in a subordinate capacity.
These are called "assistant priests", "parochial vicars", "curates", or, in the United States, "associate pastors" and "assistant pastors". Each diocese (administrative region) is divided into parishes, each with their own central church called the parish church, where religious services take place. Some larger parishes or parishes that have been combined under one parish priest may have two or more such churches, or the parish may be responsible for chapels (or chapels of ease) located at some distance from the mother church for the convenience of distant parishioners. In addition to a parish church, each parish may maintain auxiliary organizations and their facilities such as a rectory, parish hall, parochial school, or convent, frequently located on the same campus or adjacent to the church. Normally, a parish comprises all Catholics living within its geographically defined area, but non-territorial parishes can also be established within a defined area on a personal basis for Catholics belonging to a particular rite, language, nationality, or community. An example is that of personal parishes established in accordance with the 7 July 2007 motu proprio Summorum Pontificum for those attached to the pre-Vatican II liturgy. Church of England The Church of England's geographical structure uses the local parish church as its basic unit. The parish system survived the Reformation with the Anglican Church's secession from Rome remaining largely untouched; thus, it shares its roots with the Catholic Church's system described below. Parishes may extend into different counties or hundreds and historically many parishes comprised extra outlying portions in addition to its principal district, usually being described as 'detached' and intermixed with the lands of other parishes. Church of England parishes nowadays all lie within one of 42 dioceses divided between the provinces of Canterbury, 30 and York, 12. Each parish normally has its own parish priest (either a vicar or rector, owing to the vagaries of the feudal tithe system: rectories usually having had greater income) and perhaps supported by one or more curates or deacons - although as a result of ecclesiastical pluralism some parish priests might have held more than one parish living, placing a curate in charge of those where they do not reside. Now, however, it is common for a number of neighbouring parishes to be placed under one benefice in the charge of a priest who conducts services by rotation, with additional services being provided by lay readers or other non-ordained members of the church community. A chapelry was a subdivision of an ecclesiastical parish in England, and parts of Lowland Scotland up to the mid 19th century. It had a similar status to a township but was so named as it had a chapel which acted as a subsidiary place of worship to the main parish church. In England civil parishes and their governing parish councils evolved in the 19th century as ecclesiastical parishes began to be relieved of what became considered to be civic responsibilities. Thus their boundaries began to diverge. The word "parish" acquired a secular usage. Since 1895, a parish council elected by public vote or a (civil) parish meeting administers a civil parish and is formally recognised as the level of local government below a district council. 
The traditional structure of the Church of England with the parish as the basic unit has been exported to other countries and churches throughout the Anglican Communion and Commonwealth but does not necessarily continue to be administered in the same way. Church of Scotland The parish is also the basic level of church administration in the Church of Scotland. Spiritual oversight of each parish church in Scotland is the responsibility of the congregation's Kirk Session. Patronage was regulated in 1711 (Patronage Act) and abolished in 1874, with the result that ministers must be elected by members of the congregation. Many parish churches in Scotland today are "linked" with neighbouring parish churches served by a single minister. Since the abolition of parishes as a unit of civil government in Scotland in 1929, Scottish parishes have purely ecclesiastical significance and the boundaries may be adjusted by the local Presbytery. Church in Wales The Church in Wales was disestablished in 1920 and is made up of six dioceses. It retained the parish system, and parishes were also civil administration areas until communities were established in 1974, though the two did not necessarily share the same boundaries. The reduction in the numbers of worshippers, and the increasing costs of maintaining often ancient buildings, led over time to parish reorganisation, parish groupings and Rectorial Benefices (merged parishes led by a Rector). In 2010, the Church in Wales engaged the Rt Rev Richard Harries (Lord Harries of Pentregarth), a former Church of England Bishop of Oxford; Prof Charles Handy; and Prof Patricia Peattie, to carry out a review into the organisation of the Church and make recommendations as to its future shape. The group published its report ("Church in Wales Review") in July 2012 and proposed that parishes should be reorganised into larger Ministry Areas (Ardaloedd Gweinidogaeth). It stated that "The parish system... is no longer sustainable" and suggested that the Ministry Areas should each have a leadership team containing lay people as well as clergy, following the principles of 'collaborative ministry'. Over the next decade, the six dioceses all implemented the report, with the final Ministry Areas being instituted in 2022. In the Diocese of St Asaph (Llanelwy), they are known as Mission Areas (Ardaloedd Cenhadaeth). Methodist Church In the United Methodist Church congregations are called parishes, though they are more often simply called congregations and have no geographic boundaries. A prominent example of this usage comes in The Book of Discipline of The United Methodist Church, in which the committee of every local congregation that handles staff support is referred to as the committee on Pastor-Parish Relations. This committee gives recommendations to the bishop on behalf of the parish/congregation since it is the United Methodist Bishop of the episcopal area who appoints a pastor to each congregation. The same is true in the African Methodist Episcopal Church and the Christian Methodist Episcopal Church. In New Zealand, a local grouping of Methodist churches that share one or more ministers (which in the United Kingdom would be called a circuit) is referred to as a parish. See also Parish church Parish pump Parish registers: Birth certificate, Marriage certificate, Death certificate Collegiate church Priory church Cathedral Parochial school References Citations Sources Sidney Webb, Beatrice Potter. English Local Government from the Revolution to the Municipal Corporations.
London: Longmans, Green and Co., 1906 James Barry Bird. The laws respecting parish matters: containing the several offices and duties of churchwardens, overseers of the poor, constables, watchmen, and other parish officers : the laws concerning rates and assessments, settlements and removals of the poor, and of the poor in general. Publisher W. Clarke, 1799 Further reading Hart, A. Tindal (1959) The Country Priest in English History. London: Phoenix House --do.-- (1958) The Country Clergy in Elizabethan & Stuart Times, 1558-1660. London: Phoenix House --do.-- (1955) The Eighteenth Century Country Parson, circa 1689 to 1830, Shrewsbury: Wilding & Son --do.-- & Carpenter, E. F. (1954) The Nineteenth Century Country Parson; circa 1832-1900. Shrewsbury: Wilding & Son External links Crockford's Clerical Directory In praise of ... civil parishes Editorial in The Guardian, 2011-05-16. Christian terminology Anglican organizations Parishes
23624
https://en.wikipedia.org/wiki/Procopius
Procopius
Procopius of Caesarea (Greek Prokópios ho Kaisareús; c. 500–565) was a prominent late antique Greek scholar and historian from Caesarea Maritima. Accompanying the Roman general Belisarius in Emperor Justinian's wars, Procopius became the principal Roman historian of the 6th century, writing the History of the Wars, the Buildings, and the Secret History. Early life Apart from his own writings, the main source for Procopius's life is an entry in the Suda, a Byzantine Greek encyclopaedia written sometime after 975 which discusses his early life. He was a native of Caesarea in the province of Palaestina Prima. He would have received a conventional upper class education in the Greek classics and rhetoric, perhaps at the famous school at Gaza. He may have attended law school, possibly at Berytus (present-day Beirut) or Constantinople (now Istanbul), and became a lawyer (rhetor). He evidently knew Latin, as was natural for a man with legal training. Career In 527, the first year of the reign of the emperor Justinian I, he became the legal adviser for Belisarius, a general whom Justinian made his chief military commander in a great attempt to restore control over the lost western provinces of the empire. Procopius was with Belisarius on the eastern front until the latter was defeated at the Battle of Callinicum in 531 and recalled to Constantinople. Procopius witnessed the Nika riots of January 532, which Belisarius and his fellow general Mundus repressed with a massacre in the Hippodrome there. In 533, he accompanied Belisarius on his victorious expedition against the Vandal kingdom in North Africa, took part in the capture of Carthage, and remained in Africa with Belisarius's successor Solomon the Eunuch when Belisarius returned east to the capital. Procopius recorded a few of the extreme weather events of 535–536, although these were presented as a backdrop to Byzantine military activities, such as a mutiny in and around Carthage. He rejoined Belisarius for his campaign against the Ostrogothic kingdom in Italy and experienced the Gothic siege of Rome that lasted a year and nine days, ending in mid-March 538. He witnessed Belisarius's entry into the Gothic capital, Ravenna, in 540. Both the Wars and the Secret History suggest that his relationship with Belisarius cooled thereafter. When Belisarius was sent back to Italy in 544 to cope with a renewal of the war with the Goths, now led by the able king Totila, Procopius appears to have no longer been on Belisarius's staff. As magister militum, Belisarius was an "illustrious man" (vir illustris; Greek illoústrios); being his legal adviser, Procopius must therefore have had at least the rank of a "visible man" (vir spectabilis). He thus belonged to the mid-ranking group of the senatorial order. However, the Suda, which is usually well-informed in such matters, also describes Procopius himself as one of the illustres. Should this information be correct, Procopius would have had a seat in Constantinople's senate, which was restricted to the illustres under Justinian. He also wrote that under Justinian's reign in 560, a major Christian church dedicated to the Virgin Mary was built on the site of the Temple Mount. Death It is not certain when Procopius died. Many historians, including Howard-Johnson, Cameron, and Geoffrey Greatrex, date his death to 554, but there was an urban prefect of Constantinople who was called Procopius in 562. In that year, Belisarius was implicated in a conspiracy and was brought before this urban prefect.
In fact, some scholars have argued that Procopius died at least a few years after 565, as he unequivocally states in the beginning of his Secret History that he planned to publish it after the death of Justinian, for fear he would be tortured and killed by the emperor (or even by general Belisarius) if the emperor (or the general) learned what Procopius had written in this later history: scathing criticism of the emperor, of his wife, of Belisarius, and of the general's wife, Antonina, calling the imperial couple "demons in human form" and the general and his wife incompetent and treacherous. However, most scholars believe that the Secret History was written in 550 and remained unpublished during Procopius' lifetime. Writings The writings of Procopius are the primary source of information for the rule of the emperor Justinian I. Procopius was the author of a history in eight books on the wars prosecuted by Justinian, a panegyric on the emperor's public works projects throughout the empire, and a book known as the Secret History that claims to report the scandals that Procopius could not include in his officially sanctioned history for fear of angering the emperor, his wife, Belisarius, and the general's wife. Consequently, publication was delayed until all of them were dead, to avoid retaliation. History of the Wars Procopius's Wars or History of the Wars (Hypèr tōn Polémon Lógoi, "Words on the Wars", also rendered "On the Wars") is his most important work, although less well known than the Secret History. The first seven books seem to have been largely completed by 545 and may have been published as a set. They were, however, updated to mid-century before publication, with the latest mentioned event occurring in early 551. The eighth and final book brought the history to 553. The first two books, often known as The Persian War, deal with the conflict between the Romans and Sassanid Persia in Mesopotamia, Syria, Armenia, Lazica, and Iberia (present-day Georgia). It details the campaigns of the Sassanid shah Kavadh I, the 532 'Nika' revolt, the war by Kavadh's successor Khosrau I in 540, his destruction of Antioch and deportation of its inhabitants to Mesopotamia, and the great plague that devastated the empire from 542. The Persian War also covers the early career of Procopius's patron Belisarius in some detail. The Wars' next two books, known as The Vandal War or Vandalic War, cover Belisarius's successful campaign against the Vandal kingdom that had occupied Rome's provinces in northwest Africa for the last century. The final four books, known as The Gothic War, cover the Italian campaigns by Belisarius and others against the Ostrogoths. Procopius includes accounts of the 1st and 2nd sieges of Naples and the 1st, 2nd, and 3rd sieges of Rome. He also includes an account of the rise of the Franks (see Arborychoi). The last book describes the eunuch Narses's successful conclusion of the Italian campaign and includes some coverage of campaigns along the empire's eastern borders as well. The Wars proved influential on later Byzantine historiography. In the 570s Agathias wrote Histories, a continuation of Procopius's work in a similar style. Secret History Procopius's now famous Anecdota, also known as Secret History (Apókryphe Historía), was discovered centuries later at the Vatican Library in Rome and published in Lyon by Niccolò Alamanni in 1623.
Its existence was already known from the Suda, which referred to it as Procopius's "unpublished works" containing "comedy" and "invective" of Justinian, Theodora, Belisarius and Antonina. The Secret History covers roughly the same years as the first seven books of The History of the Wars and appears to have been written after they were published. Current consensus generally dates it to 550, or less commonly 558. In the eyes of many scholars, the Secret History reveals an author who had become deeply disillusioned with Emperor Justinian, his wife Theodora, the general Belisarius, and his wife Antonina. The work claims to expose the secret springs of their public actions, as well as the private lives of the emperor and his entourage. Justinian is portrayed as cruel, venal, prodigal, and incompetent. In one passage, it is even claimed that he was possessed by demonic spirits or was himself a demon. Similarly, the Theodora of the Secret History is a garish portrait of vulgarity and insatiable lust juxtaposed with cold-blooded self-interest, shrewishness, and envious and fearful mean-spiritedness. Among the more titillating (and dubious) revelations in the Secret History is Procopius's account of Theodora's thespian accomplishments. Furthermore, the Secret History portrays Belisarius as a weak man completely emasculated by his wife, Antonina, who is portrayed in very similar terms to Theodora. They are both said to be former actresses and close friends. Procopius claimed Antonina worked as an agent for Theodora against Belisarius, and had an ongoing affair with Belisarius' godson, Theodosius. On the other hand, it has been argued that Procopius prepared the Secret History as an exaggerated document out of fear that a conspiracy might overthrow Justinian's regime, which, as he was a kind of court historian, might be reckoned to include him. The unpublished manuscript would then have been a kind of insurance, which could be offered to the new ruler as a way to avoid execution or exile after the coup. If this hypothesis were correct, the Secret History would not be proof that Procopius hated Justinian or Theodora. The Buildings The Buildings (De aedificiis, "On Buildings") is a panegyric on Justinian's public works projects throughout the empire. The first book may date to before the collapse of the first dome of Hagia Sophia in 557, but some scholars think that it is possible that the work postdates the building of the bridge over the Sangarius in the late 550s. Historians consider Buildings to be an incomplete work due to evidence of the surviving version being a draft with two possible redactions. Buildings was likely written at Justinian's behest, and it is doubtful that the sentiments it expresses are sincere. It tells us nothing further about Belisarius, and it takes a sharply different attitude towards Justinian. He is presented as an idealised Christian emperor who built churches for the glory of God and defenses for the safety of his subjects. He is depicted showing particular concern for the water supply, building new aqueducts and restoring those that had fallen into disuse. Theodora, who was dead when this panegyric was written, is mentioned only briefly, but Procopius's praise of her beauty is fulsome. Due to the panegyrical nature of Procopius's Buildings, historians have discovered several discrepancies between claims made by Procopius and accounts in other primary sources.
A prime example is Procopius's starting the reign of Justinian in 518, which was actually the start of the reign of his uncle and predecessor, Justin I. By treating the uncle's reign as part of his nephew's, Procopius was able to credit Justinian with buildings erected or begun under Justin's administration. Such works include renovation of the walls of Edessa after its 525 flood and consecration of several churches in the region. Similarly, Procopius falsely credits Justinian for the extensive refortification of the cities of Tomis and Histria in Scythia Minor. This had actually been carried out under Anastasius I, who reigned before Justin. Style Procopius belongs to the school of late antique historians who continued the traditions of the Second Sophistic. They wrote in Attic Greek. Their models were Herodotus, Polybius and in particular Thucydides. Their subject matter was secular history. They avoided vocabulary unknown to Attic Greek and inserted an explanation when they had to use contemporary words. Thus Procopius includes glosses of monks ("the most temperate of Christians") and churches (as equivalent to a "temple" or "shrine"), since monasticism was unknown to the ancient Athenians and their ekklesía had been a popular assembly. The secular historians eschewed the history of the Christian church. Ecclesiastical history was left to a separate genre after Eusebius. However, Cameron has argued convincingly that Procopius's works reflect the tensions between the classical and Christian models of history in 6th-century Constantinople. This is supported by Whitby's analysis of Procopius's depiction of the capital and its cathedral in comparison to contemporary pagan panegyrics. Procopius can be seen as depicting Justinian as essentially God's vicegerent, making the case for the Buildings being a primarily religious panegyric. Procopius indicates that he planned to write an ecclesiastical history himself and, if he had, he would probably have followed the rules of that genre. As far as is known, however, such an ecclesiastical history was never written. Some historians have criticized Procopius's description of some barbarians; for example, he dehumanized the unfamiliar Moors as "not even properly human". This was, however, in line with Byzantine ethnographic practice in late antiquity. Legacy A number of historical novels based on Procopius's works (along with other sources) have been written. Count Belisarius was written by poet and novelist Robert Graves in 1938. Procopius himself appears as a minor character in Felix Dahn's A Struggle for Rome and in L. Sprague de Camp's alternate history novel Lest Darkness Fall. The novel's main character, archaeologist Martin Padway, derives most of his knowledge of historical events from the Secret History. The narrator in Herman Melville's novel Moby-Dick cites Procopius's description of a captured sea monster as evidence of the narrative's feasibility. List of selected works Seven volumes, Greek text and English translation. English translation of the Anecdota. See also Jordanes Gregory of Tours Notes References This article is based on an earlier version by James Allan Evans, originally posted at Nupedia. Further reading Adshead, Katherine: Procopius' Poliorcetica: continuities and discontinuities, in: G. Clarke et al. (eds.): Reading the past in late antiquity, Australian National UP, Rushcutters Bay 1990, pp. 93–119 Alonso-Núñez, J. M.: Jordanes and Procopius on Northern Europe, in: Nottingham Medieval Studies 31 (1987), 1–16.
Amitay, Ory: Procopius of Caesarea and the Girgashite Diaspora, in: Journal for the Study of the Pseudepigrapha 20 (2011), 257–276. Anagnostakis, Ilias: Procopius's dream before the campaign against Libya: a reading of Wars 3.12.1-5, in: C. Angelidi and G. Calofonos (eds.), Dreaming in Byzantium and Beyond, Farnham: Ashgate Publishing 2014, 79–94. Bachrach, Bernard S.: Procopius, Agathias and the Frankish Military, in: Speculum 45 (1970), 435–441. Bachrach, Bernard S.: Procopius and the chronology of Clovis's reign, in: Viator 1 (1970), 21–32. Baldwin, Barry: An Aphorism in Procopius, in: Rheinisches Museum für Philologie 125 (1982), 309–311. Baldwin, Barry: Sexual Rhetoric in Procopius, in: Mnemosyne 40 (1987), pp. 150–152 Belke, Klaus: Prokops De aedificiis, Buch V, zu Kleinasien, in: Antiquité Tardive 8 (2000), 115–125. Börm, Henning: Prokop und die Perser. Stuttgart: Franz Steiner Verlag, 2007. (Review in English by G. Greatrex and Review in English by A. Kaldellis) Börm, Henning: Procopius of Caesarea, in Encyclopaedia Iranica Online, New York 2013. Börm, Henning: Procopius, his predecessors, and the genesis of the Anecdota: Antimonarchic discourse in late antique historiography, in: H. Börm (ed.): Antimonarchic discourse in Antiquity. Stuttgart: Franz Steiner Verlag 2015, 305–346. Braund, David: Procopius on the Economy of Lazica, in: The Classical Quarterly 41 (1991), 221–225. Brodka, Dariusz: Die Geschichtsphilosophie in der spätantiken Historiographie. Studien zu Prokopios von Kaisareia, Agathias von Myrina und Theophylaktos Simokattes. Frankfurt am Main: Peter Lang, 2004. Brodka, Dariusz: Prokop von Caesarea. Hildesheim: Olms 2022. Burn, A. R.: Procopius and the island of ghosts, in: English Historical Review 70 (1955), 258–261. Cameron, Averil: Procopius and the Sixth Century. Berkeley: University of California Press, 1985. Cameron, Averil: The scepticism of Procopius, in: Historia 15 (1966), 466–482. Colvin, Ian: Reporting Battles and Understanding Campaigns in Procopius and Agathias: Classicising Historians' Use of Archived Documents as Sources, in: A. Sarantis (ed.): War and warfare in late antiquity. Current perspectives, Leiden: Brill 2013, 571–598. Cresci, Lia Raffaella: Procopio al confine tra due tradizioni storiografiche, in: Rivista di Filologia e di Istruzione Classica 129 (2001), 61–77. Cristini, Marco: Il seguito ostrogoto di Amalafrida: confutazione di Procopio, Bellum Vandalicum 1.8.12, in: Klio 99 (2017), 278–289. Cristini, Marco: Totila and the Lucanian Peasants: Procop. Goth. 3.22.20, in: Greek, Roman and Byzantine Studies 61 (2021), 73–84. Croke, Brian and James Crow: Procopius and Dara, in: The Journal of Roman Studies 73 (1983), 143–159. Downey, Glanville: The Composition of Procopius, De Aedificiis, in: Transactions and Proceedings of the American Philological Association 78 (1947), 171–183. Evans, James A. S.: Justinian and the Historian Procopius, in: Greece & Rome 17 (1970), 218–223. Evans, James A. S.: Procopius. New York: Twayne Publishers, 1972. Gordon, C. D.: Procopius and Justinian's Financial Policies, in: Phoenix 13 (1959), 23–30. Greatrex, Geoffrey: Procopius and the Persian Wars, D.Phil. thesis, Oxford, 1994. Greatrex, Geoffrey: The dates of Procopius' works, in: BMGS 18 (1994), 101–114. Greatrex, Geoffrey: The Composition of Procopius' Persian Wars and John the Cappadocian, in: Prudentia 27 (1995), 1–13. Greatrex, Geoffrey: Rome and Persia at War, 502–532. London: Francis Cairns, 1998. 
Greatrex, Geoffrey: Recent work on Procopius and the composition of Wars VIII, in: BMGS 27 (2003), 45–67. Greatrex, Geoffrey: Perceptions of Procopius in Recent Scholarship, in: Histos 8 (2014), 76–121 and 121a–e (addenda). Greatrex, Geoffrey: Procopius of Caesarea: The Persian Wars. A Historical Commentary. Cambridge, Cambridge University Press, 2022. Howard-Johnson, James: The Education and Expertise of Procopius, in: Antiquité Tardive 10 (2002), 19–30 Kaçar, Turhan: "Procopius in Turkey", Histos Supplement 9 (2019) 19.1–8. Kaegi, Walter: Procopius the military historian, in: Byzantinische Forschungen. 15, 1990, , 53–85 (online (PDF; 989 KB)). Kaldellis, Anthony: Classicism, Barbarism, and Warfare: Prokopios and the Conservative Reaction to Later Roman Military Policy, American Journal of Ancient History, n.s. 3-4 (2004-2005 [2007]), 189–218. Kaldellis, Anthony: Identifying Dissident Circles in Sixth-Century Byzantium: The Friendship of Prokopios and Ioannes Lydos, Florilegium, Vol. 21 (2004), 1–17. Kaldellis, Anthony: Procopius of Caesarea: Tyranny, History and Philosophy at the End of Antiquity. Philadelphia: University of Pennsylvania Press, 2004. Kaldellis, Anthony: Prokopios’ Persian War: A Thematic and Literary Analysis, in: R. Macrides, ed., History as Literature in Byzantium, Aldershot: Ashgate, 2010, 253–273. Kaldellis, Anthony: Prokopios’ Vandal War: Thematic Trajectories and Hidden Transcripts, in: S. T. Stevens & J. Conant, eds., North Africa under Byzantium and Early Islam, Washington, D.C: Dumbarton Oaks, 2016, 13–21. Kaldellis, Anthony: The Date and Structure of Prokopios’ Secret History and his Projected Work on Church History, in: Greek, Roman, and Byzantine Studies, Vol. 49 (2009), 585–616. Kovács, Tamás: "Procopius's Sibyl - the fall of Vitigis and the Ostrogoths", Graeco-Latina Brunensia 24.2 (2019), 113–124. Kruse, Marion: The Speech of the Armenians in Procopius: Justinian's Foreign Policy and the Transition between Books 1 and 2 of the Wars, in: The Classical Quarterly 63 (2013), 866–881. Lillington-Martin, Christopher, 2007–2017: 2007, "Archaeological and Ancient Literary Evidence for a Battle near Dara Gap, Turkey, AD 530: Topography, Texts and Trenches" in BAR –S1717, 2007 The Late Roman Army in the Near East from Diocletian to the Arab Conquest Proceedings of a colloquium held at Potenza, Acerenza and Matera, Italy edited by Ariel S. Lewin and Pietrina Pellegrini, pp. 299–311; 2009, "Procopius, Belisarius and the Goths" in Journal of the Oxford University History Society,(2009) Odd Alliances edited by Heather Ellis and Graciela Iglesias Rogers. , pages 1– 17, https://sites.google.com/site/jouhsinfo/issue7specialissueforinternetexplorer ; 2011, "Secret Histories", http://classicsconfidential.co.uk/2011/11/19/secret-histories/; 2012, "Hard and Soft Power on the Eastern Frontier: a Roman Fortlet between Dara and Nisibis, Mesopotamia, Turkey: Prokopios’ Mindouos?" in The Byzantinist, edited by Douglas Whalin, Issue 2 (2012), pp. 4–5, http://oxfordbyzantinesociety.files.wordpress.com/2012/06/obsnews2012final.pdf; 2013, Procopius on the struggle for Dara and Rome, in A. Sarantis, N. Christie (eds.): War and Warfare in Late Antiquity: Current Perspectives (Late Antique Archaeology 8.1–8.2 2010–11), Leiden: Brill 2013, pp. 599–630, ; 2013 “La defensa de Roma por Belisario” in: Justiniano I el Grande (Desperta Ferro) edited by Alberto Pérez Rubio, no. 
18 (July 2013), pages 40–45, ISSN 2171-9276; 2017, Procopius of Caesarea: Literary and Historical Interpretations (editor), Routledge (July 2017), www.routledge.com/9781472466044; 2017, "Introduction" and chapter 10, “Procopius, πάρεδρος / quaestor, Codex Justinianus, I.27 and Belisarius’ strategy in the Mediterranean” in Procopius of Caesarea: Literary and Historical Interpretations above. Maas, Michael Robert: Strabo and Procopius: Classical Geography for a Christian Empire, in H. Amirav et al. (eds.): From Rome to Constantinople. Studies in Honour of Averil Cameron, Leuven: Peeters, 2007, 67–84. Martindale, John: The Prosopography of the Later Roman Empire III, Cambridge 1992, 1060–1066. Max, Gerald E., "Procopius' Portrait of the (Western Roman) Emperor Majorian: History and Historiography," Byzantinische Zeitschrift, Sonderdruck Aus Band 74/1981, pp. 1-6. Meier, Mischa: Prokop, Agathias, die Pest und das ′Ende′ der antiken Historiographie, in Historische Zeitschrift 278 (2004), 281–310. Meier, Mischa and Federico Montinaro (eds.): A Companion to Procopius of Caesarea. Brill, Leiden 2022, ISBN 978-3-89781-215-4. Pazdernik, Charles F.: Xenophon’s Hellenica in Procopius’ Wars: Pharnabazus and Belisarius, in: Greek, Roman and Byzantine Studies 46 (2006) 175–206. Rance, Philip: Narses and the Battle of Taginae (552 AD): Procopius and Sixth-Century Warfare, in: Historia. Zeitschrift für alte Geschichte 30.4 (2005) 424–472. Rubin, Berthold: Prokopios, in Realencyclopädie der Classischen Altertumswissenschaft 23/1 (1957), 273–599. Earlier published (with index) as Prokopios von Kaisareia, Stuttgart: Druckenmüller, 1954. Stewart, Michael, Contests of Andreia in Procopius’ Gothic Wars, Παρεκβολαι 4 (2014), pp. 21–54. Stewart, Michael, The Andreios Eunuch-Commander Narses: Sign of a Decoupling of martial Virtues and Hegemonic Masculinity in the early Byzantine Empire?, Cerae 2 (2015), pp. 1–25. Stewart, Michael, Masculinity, Identity, and Power Politics in the Age of Justinian: A Study of Procopius, Amsterdam: Amsterdam University Press, 2020:https://www.aup.nl/en/book/9789462988231/masculinity-identity-and-power-politics-in-the-age-of-justinian Treadgold, Warren: The Early Byzantine Historians, Basingstoke: Macmillan 2007, 176–226. The Secret History of Art by Noah Charney on the Vatican Library and Procopius. An article by art historian Noah Charney about the Vatican Library and its famous manuscript, Historia Arcana by Procopius. Whately, Conor, Battles and Generals: Combat, Culture, and Didacticism in Procopius' Wars. Leiden, 2016. Whitby, L. M. "Procopius and the Development of Roman Defences in Upper Mesopotamia", in P. Freeman and D. Kennedy (ed.), The Defence of the Roman and Byzantine East, Oxford, 1986, 717–35. External links Texts of Procopius Complete Works, Greek text (Migne Patrologia Graeca) with analytical indexes The Secret History, English translation (Atwater, 1927) at the Internet Medieval Sourcebook The Secret History, English translation (Dewing, 1935) at LacusCurtius The Buildings, English translation (Dewing, 1935) at LacusCurtius The Buildings, Book IV Greek text with commentaries, index nominum, etc. at Sorin Olteanu's LTDM Project H. B. Dewing's Loeb edition of the works of Procopius: vols. I–VI at the Internet Archive (History of the Wars, Secret History) Palestine Pilgrims' Text Society (1888): Of the buildings of Justinian by Procopius, (ca 560 A.D) Complete Works 1, Greek ed. by K. W. Dindorf, Latin trans. 
by Claude Maltret in Corpus Scriptorum Historiae Byzantinae Pars II Vol. 1, 1833. (Persian Wars I–II, Vandal Wars I–II) Complete Works 2, Greek ed. by K. W. Dindorf, Latin trans. by Claude Maltret in Corpus Scriptorum Historiae Byzantinae Pars II Vol. 2, 1833. (Gothic Wars I–IV) Complete Works 3, Greek ed. by K. W. Dindorf, Latin trans. by Claude Maltret in Corpus Scriptorum Historiae Byzantinae Pars II Vol. 3, 1838. (Secret History, Buildings of Justinian) Secondary material 500 births 565 deaths 6th-century Byzantine historians Historians of Justinian I Secret histories De bello Gothico Vandalic War People from Caesarea Maritima People of the Roman–Sasanian Wars
23626
https://en.wikipedia.org/wiki/Property
Property
Property is a system of rights that gives people legal control of valuable things, and also refers to the valuable things themselves. Depending on the nature of the property, an owner of property may have the right to consume, alter, share, redefine, rent, mortgage, pawn, sell, exchange, transfer, give away, or destroy it, or to exclude others from doing these things, as well as to perhaps abandon it; whereas, regardless of the nature of the property, the owner has the right to use it properly under the granted property rights. In economics and political economy, there are three broad forms of property: private property, public property, and collective property (also called cooperative property). Property that jointly belongs to more than one party may be possessed or controlled thereby in very similar or very distinct ways, whether simply or complexly, whether equally or unequally. However, there is an expectation that each party's will (or rather discretion) with regard to the property be clearly defined and unconditional, to distinguish ownership and easement from rent. The parties might expect their wills to be unanimous; alternatively, any one of them, when no opportunity for or possibility of a dispute with any other exists, may expect his, her, its or their own will to be sufficient and absolute. The first Restatement defines property as anything, tangible or intangible, whereby a legal relationship between persons and the State enforces a possessory interest or legal title in that thing. This mediating relationship between individual, property, and State is called a property regime. In sociology and anthropology, property is often defined as a relationship between two or more individuals and an object, in which at least one of these individuals holds a bundle of rights over the object. The distinction between "collective property" and "private property" is regarded as a confusion, since different individuals often hold differing rights over a single object. Types of property include real property (the combination of land and any improvements to or on the ground), personal property (physical possessions belonging to a person), private property (property owned by legal persons, business entities or individual natural persons), public property (State-owned or publicly owned and available possessions) and intellectual property (exclusive rights over artistic creations, inventions, etc.). However, the last is not always as widely recognized or enforced. An article of property may have physical and incorporeal parts. A title, or a right of ownership, establishes the relation between the property and other persons, assuring the owner the right to dispose of the property as the owner sees fit. The unqualified term "property" is often used to refer specifically to real property. Overview Property is often defined by the code of the local sovereignty and protected wholly or, more usually, partially by such entity, with the owner being responsible for any remainder of protection. The standards of proof concerning ownership are also addressed by the code of the local sovereignty, and such entity plays a role accordingly, typically a somewhat managerial one. Some philosophers assert that property rights arise from social convention, while others find justifications for them in morality or in natural law.
Various scholarly disciplines (such as law, economics, anthropology or sociology) may treat the concept more systematically, but definitions vary, most particularly when involving contracts. Positive law defines such rights, and the judiciary can adjudicate and enforce property rights. According to Adam Smith (1723-1790), the expectation of profit from "improving one's stock of capital" rests on private-property rights. Capitalism has as a central assumption that property rights encourage their holders to develop the property, generate wealth, and efficiently allocate resources based on the operation of markets. From this has evolved the modern conception of property as a right enforced by positive law, in the expectation that this will produce more wealth and better standards of living. However, Smith also expressed a very critical view of the effects of property laws on inequality. In his 1881 text "The Common Law", Oliver Wendell Holmes describes property as having two fundamental aspects. The first, possession, can be defined as control over a resource based on the practical inability to contradict the ends of the possessor. The second, title, is the expectation that others will recognize rights to control resources, even when not in possession. He elaborates on the differences between these two concepts and proposes a history of how they came to be attached to persons, as opposed to families or entities such as the church. Classical liberalism subscribes to the labor theory of property. Its proponents hold that individuals each own their own life; it follows that one must acknowledge the products of that life and that those products can be traded in free exchange with others. "Every man has a property in his person. This nobody has a right to, but himself." (John Locke, "Second Treatise on Civil Government", 1689) "The reason why men enter into society is the preservation of their property." (John Locke, "Second Treatise on Civil Government", 1689) "Life, liberty, and property do not exist because men have made laws. On the contrary, it was the fact that life, liberty, and property existed beforehand that caused men to make laws in the first place." (Frédéric Bastiat, The Law, 1850) Conservatism subscribes to the concept that freedom and property are closely linked, building on traditions of thought according to which property guarantees or causes freedom. The more widespread the possession of private property, conservatism propounds, the more stable and productive a state or nation is. Conservatives maintain that the economic leveling of property, especially of the forced kind, is not economic progress. "Separate property from private possession and Leviathan becomes master of all... Upon the foundation of private property, great civilizations are built. The conservative acknowledges that the possession of property fixes certain duties upon the possessor; he accepts those moral and legal obligations cheerfully." (Russell Kirk, The Politics of Prudence, 1993) Socialism's fundamental principles center on a critique of this concept, stating (among other things) that the cost of defending property exceeds the returns from private property ownership and that, even when property rights encourage their holders to develop their property or generate wealth, they do so only for their benefit, which may not coincide with advantage to other people or society at large. Libertarian socialism generally accepts property rights, but with a short abandonment period.
In other words, a person must make (more-or-less) continuous use of the item or else lose ownership rights. This is usually referred to as "possession property" or "usufruct." Thus, in this usufruct system, absentee ownership is illegitimate, and workers own the machines or other equipment they work with. Communism argues that only common ownership of the means of production will assure the minimization of unequal or unjust outcomes and the maximization of benefits, and that therefore humans should abolish private ownership of capital (as opposed to personal property). Both communism and some forms of socialism have also upheld the notion that private ownership of capital is inherently illegitimate. This argument centers on the idea that private ownership of capital always benefits one class over another, giving rise to domination through this privately owned capital. Communists do not oppose personal property that is "hard-won, self-acquired, self-earned" (as "The Communist Manifesto" puts it) by members of the proletariat. Both socialism and communism distinguish carefully between private ownership of capital (land, factories, resources, etc.) and personal property (homes, material objects, and so forth). Types of property Most legal systems distinguish between different types of property, especially between land (immovable property, estate in land, real estate, real property) and all other forms of property—goods and chattels, movable property or personal property, including the value of legal tender if not the legal tender itself, as the manufacturer rather than the possessor might be the owner. They often distinguish tangible and intangible property. One categorization scheme specifies three species of property: land, improvements (immovable man-made things), and personal property (movable man-made things). In common law, real property (immovable property) is the combination of interests in land and improvements thereto, and personal property is interest in movable property. Real property rights are rights relating to the land. These rights include ownership and usage. Owners can grant rights to persons and entities in the form of leases, licenses, and easements. Throughout the last centuries of the second millennium, with the development of more complex theories of property, the concept of personal property became divided into tangible property (such as cars and clothing) and intangible property (such as financial assets and related rights, including stocks and bonds; intellectual property, including patents, copyrights and trademarks; digital files; communication channels; and certain forms of identifier, including Internet domain names, some forms of network address, some forms of handle and again trademarks). Treatment of intangible property is such that an article of property is, by law or otherwise by traditional conceptualization, subject to expiration even when inheritable, which is a key distinction from tangible property. Upon expiration, the property, if of the intellectual category, becomes a part of public domain, to be used by but not owned by anybody, and possibly used by more than one party simultaneously due to the inapplicability of scarcity to intellectual property. By contrast, things such as communications channels and pairs of electromagnetic spectrum bands and signal transmission power can only be used by a single party at a time, or by a single party in a divisible context, if owned or used.
Thus far, or at least usually, those are not considered property, or at least not private property, even though the party bearing the right of exclusive use may transfer that right to another. In many societies the human body is considered property of some kind or other. The question of the ownership and rights to one's body arises in general in the discussion of human rights, including the specific issues of slavery, conscription, rights of children under the age of majority, marriage, abortion, prostitution, drugs, euthanasia and organ donation. Issues in property theory Principle The two major justifications given for original property, or the homestead principle, are effort and scarcity. John Locke emphasized effort, "mixing your labor" with an object, or clearing and cultivating virgin land. Benjamin Tucker preferred to look at the telos of property, i.e., what is the purpose of property? His answer: to solve the scarcity problem. Only when items are relatively scarce with respect to people's desires do they become property. For example, hunter-gatherers did not consider land to be property, since there was no shortage of land. Agrarian societies later made arable land property, as it was scarce. For something to be economically scarce, it must necessarily have the "exclusivity property"—that use by one person excludes others from using it. These two justifications lead to different conclusions on what can be property. Intellectual property—incorporeal things like ideas, plans, orderings and arrangements (musical compositions, novels, computer programs)—is generally considered valid property to those who support an effort justification, but invalid to those who support a scarcity justification, since such things do not have the exclusivity property (however, those who support a scarcity justification may still support other "intellectual property" laws such as copyright, as long as these are a subject of contract instead of government arbitration). Thus even ardent propertarians may disagree about IP. By either standard, one's body is one's property. From some anarchist points of view, the validity of property depends on whether the "property right" requires enforcement by the State. Different forms of "property" require different amounts of enforcement: intellectual property requires a great deal of state intervention to enforce, ownership of distant physical property requires quite a lot, ownership of carried objects requires very little. In contrast, ownership of one's own body requires absolutely no state intervention. So some anarchists don't believe in property at all. Many things have existed that did not have an owner, sometimes called the commons. The term "commons," however, is also often used to mean something entirely different: "general collective ownership"—i.e. common ownership. Also, the same term is sometimes used by statists to mean government-owned property that the general public is allowed to access (public property). Law in all societies has tended to reduce the number of things not having clear owners. Supporters of property rights argue that this enables better protection of scarce resources due to the tragedy of the commons. At the same time, critics say that it leads to the 'exploitation' of those resources for personal gain and that it hinders taking advantage of potential network effects.
These arguments have differing validity for different types of "property"—things that are not scarce are, for instance, not subject to the tragedy of the commons. Some apparent critics of property advocate general collective ownership rather than ownerlessness. Things that do not have owners include: ideas (except for intellectual property), seawater (which is, however, protected by anti-pollution laws), parts of the seafloor (see the United Nations Convention on the Law of the Sea for restrictions), gases in Earth's atmosphere, animals in the wild (although in most nations, animals are tied to the land; in the United States and Canada, wildlife is generally defined in statute as property of the state, a public ownership referred to as the North American Model of Wildlife Conservation and based on the Public Trust Doctrine), celestial bodies and outer space, and land in Antarctica. The nature of children under the age of majority is another contested issue here. In ancient societies, children were generally considered the property of their parents. However, children in most modern communities theoretically own their bodies but are not regarded as competent to exercise their rights. Their parents or guardians are given most of the fundamental rights of control over them. Questions regarding the nature of ownership of the body also come up in the issues of abortion, drugs, and euthanasia. In many ancient legal systems (e.g., early Roman law), religious sites (e.g. temples) were considered property of the god or gods they were devoted to. However, religious pluralism makes it more convenient to have sacred sites owned by the spiritual body that runs them. Intellectual property and air (airspace, no-fly zones, pollution laws, which can include tradable emissions rights) can be property in some senses of the word. Ownership of land can be held separately from the ownership of rights over that land, including sporting rights, mineral rights, development rights, air rights, and such other rights as may be worth segregating from simple land ownership.
Ownership
Ownership laws may vary widely among countries depending on the nature of the property of interest (e.g., firearms, real property, personal property, animals). Persons can own property directly. In most societies, legal entities such as corporations, trusts and nations (or governments) also own property. In many countries, women have limited access to property owing to restrictive inheritance and family laws, under which only men have actual or formal rights to own property. In the Inca Empire, the dead emperors, considered gods, still controlled property after death.
Government interference
In 17th-century England, the legal directive that nobody may enter a home (which in the 17th century would typically have been male-owned) unless by the owner's invitation or consent was established as common law in Sir Edward Coke's "Institutes of the Lawes of England": "For a man's house is his castle, et domus sua cuique est tutissimum refugium [and each man's home is his safest refuge]." It is the origin of the famous dictum, "an Englishman's home is his castle". The ruling enshrined into law what several English writers had espoused in the 16th century. Unlike the rest of Europe, the British had a proclivity towards owning their own homes. British Prime Minister William Pitt, 1st Earl of Chatham, defined the meaning of castle in 1763: "The poorest man may in his cottage bid defiance to all the forces of the crown.
It may be frail – its roof may shake – the wind may blow through it – the storm may enter – the rain may enter – but the King of England cannot enter." That principle was carried to the United States. Under U.S. law, the principal limitations on whether and the extent to which the State may interfere with property rights are set by the Constitution. The Takings Clause requires that the government (whether State or federal—for the 14th Amendment's due process clause imposes the 5th Amendment's takings clause on state governments) may take private property only for a public purpose, after exercising due process of law, and upon making "just compensation." If an interest is not deemed a "property" right, or the conduct is merely an intentional tort, these limitations do not apply, and the doctrine of sovereign immunity precludes relief. Moreover, unless the interference renders the property almost completely valueless, it will not be deemed a taking but merely a regulation of use. On the other hand, some governmental regulations of property use have been deemed so severe that they have been considered "regulatory takings." Moreover, conduct sometimes deemed only a nuisance or another tort has been held to be a taking of property where the conduct was sufficiently persistent and severe.
Theories
There exist many theories of property. One is the relatively rare first possession theory of property, under which ownership of something is seen as justified simply because someone seized it before anyone else did. Perhaps one of the most popular is the natural rights definition of property rights as advanced by John Locke. Locke advanced the theory that God granted dominion over nature to man through Adam in the book of Genesis. Therefore, he theorized that when one mixes one's labor with nature, one gains a relationship with that part of nature with which the labor is mixed, subject to the limitation that there should be "enough, and as good, left in common for others" (see the Lockean proviso). In his encyclical letter Rerum novarum (1891), Pope Leo XIII wrote, "It is surely undeniable that, when a man engages in remunerative labor, the impelling reason and motive of his work is to obtain property, and after that to hold it as his very own." Anthropology studies the diverse systems of ownership, rights of use and transfer, and possession under the term "theories of property". As mentioned, Western legal theory is based on the owner of property being a legal person. However, not all property systems are founded on this basis. In every culture studied, ownership and possession are the subject of custom and regulation, and of "law" where that term can meaningfully be applied. Many tribal cultures balance individual rights with the laws of collective groups: tribes, families, associations, and nations. For example, the 1839 Cherokee Constitution frames the issue in its own terms. Communal property systems describe ownership as belonging to the entire social and political unit. Common ownership in a hypothetical communist society is distinguished from primitive forms of common property that have existed throughout history, such as Communalism and primitive communism, in that communist common ownership is the outcome of social and technological developments leading to the elimination of material scarcity in society. Corporate systems describe ownership as being attached to an identifiable group with an identifiable responsible individual.
Roman property law was based on such a corporate system. In a well-known paper that contributed to the creation of the field of law and economics in the late 1960s, the American scholar Harold Demsetz described how the concept of property rights makes social interactions easier. Different societies may have different theories of property for differing types of ownership. For example, Pauline Peters argued that property systems are not isolable from the social fabric, and that notions of property may not be stated as such but instead may be framed in negative terms: for example, the taboo system among Polynesian peoples.
Property in philosophy
In medieval and Renaissance Europe, the term "property" essentially referred to land. After much rethinking, land has come to be regarded as only a special case of the property genus. This rethinking was inspired by at least three broad features of early modern Europe: the surge of commerce, the breakdown of efforts to prohibit interest (then called "usury"), and the development of centralized national monarchies.
Ancient philosophy
Urukagina, the king of the Sumerian city-state Lagash, established the first laws that forbade compelling the sale of property. The Bible, in Leviticus 19:11 and 19:13, states that the Israelites are not to steal. Aristotle, in Politics, advocates "private property." He argues that self-interest leads to neglect of the commons: "[T]hat which is common to the greatest number has the least care bestowed upon it. Everyone thinks chiefly of his own, hardly at all of the common interest, and only when he is himself concerned as an individual." In addition, he says that when property is common, there are natural problems that arise due to differences in labor: "If they do not share equally enjoyments and toils, those who labor much and get little will necessarily complain of those who labor little and receive or consume much. But indeed, there is always a difficulty in men living together and having all human relations in common, but especially in their having common property." (Politics, 1261b34) Cicero held that there is no private property under natural law but only under human law. Seneca viewed property as only becoming necessary when men become avaricious. St. Ambrose later adopted this view, and St. Augustine even derided heretics for complaining that the Emperor could not confiscate property they had labored for.
Medieval philosophy
Thomas Aquinas (13th century)
The canon law Decretum Gratiani maintained that mere human law creates property, repeating the phrases used by St. Augustine. St. Thomas Aquinas agreed with regard to the private consumption of property but modified patristic theory in finding that the private possession of property is necessary. Thomas Aquinas concludes that, given certain detailed provisions: it is natural for man to possess external things; it is lawful for a man to possess a thing as his own; the essence of theft consists in taking another's thing secretly; theft and robbery are sins of different species, and robbery is a more grievous sin than theft; theft is a sin, and indeed a mortal sin; it is, however, lawful to steal through stress of need, for "in cases of need, all things are common property."
Modern philosophy
Thomas Hobbes (17th century)
The principal writings of Thomas Hobbes appeared between 1640 and 1651—during and immediately following the war between forces loyal to King Charles I and those loyal to Parliament.
In his own words, Hobbes' reflection began with the idea of "giving to every man his own," a phrase he drew from the writings of Cicero. But he wondered: how can anybody call anything his own?
James Harrington (17th century)
A contemporary of Hobbes, James Harrington, reacted to the same tumult differently: he considered property natural but not inevitable. The author of "Oceana," he may have been the first political theorist to postulate that political power is a consequence, not the cause, of the distribution of property. He said that the worst possible situation is one in which the commoners have half a nation's property, with the crown and nobility holding the other half—a circumstance fraught with instability and violence. A much better situation (a stable republic), he suggested, would exist once the commoners owned most of a nation's property. In later years, the ranks of Harrington's admirers included American revolutionary and founder John Adams.
Robert Filmer (17th century)
Another member of the Hobbes/Harrington generation, Sir Robert Filmer, reached conclusions much like Hobbes', but through Biblical exegesis. Filmer said that the institution of kingship is analogous to that of fatherhood, that subjects are still children, whether obedient or unruly, and that property rights are akin to the household goods that a father may dole out among his children—his to take back and dispose of according to his pleasure.
John Locke (17th century)
In the following generation, John Locke sought to answer Filmer, creating a rationale for a balanced constitution in which the monarch had a part to play, but not an overwhelming part. Since Filmer's views essentially require that the Stuart family be uniquely descended from the patriarchs of the Bible (a difficult view to uphold even in the late 17th century), Locke attacked Filmer's views in his First Treatise on Government, freeing him to set out his own views in the Second Treatise on Civil Government. Therein, Locke imagined a pre-social world, the unhappy residents of which are willing to create a social contract because otherwise "the enjoyment of the property he has in this state is very unsafe, very insecure," and thus the "great and chief end, therefore, of men's uniting into commonwealths, and putting themselves under government, is the preservation of their property." They would, he allowed, create a monarchy, but its task would be to execute the will of an elected legislature. "To this end" (to achieve the previously specified goal), he wrote, "it is that men give up all their natural power to the society they enter into, and the community put the Legislative power into such hands as they think fit, with this trust, that they shall be governed by declared laws, or else their peace, quiet, and property will still be at the same uncertainty as it was in the state of nature." Even when it keeps to proper legislative form, Locke held, there are limits to what a government established by such a contract might rightly do.
"It cannot be supposed that [the hypothetical contractors] they should intend, had they a power so to do, to give anyone or more an absolute arbitrary power over their persons and estates, and put a force into the magistrate's hand to execute his unlimited will arbitrarily upon them; this were to put themselves into a worse condition than the State of nature, wherein they had a liberty to defend their right against the injuries of others, and were upon equal terms of force to maintain it, whether invaded by a single man or many in combination. Whereas by supposing they have given themselves up to the absolute arbitrary power and will of a legislator, they have disarmed themselves, and armed him to make a prey of them when he pleases..." Both "persons" and "estates" are to be protected from the arbitrary power of any magistrate, including legislative power and will. In Lockean terms, depredations against an estate are just as plausible a justification for resistance and revolution as are those against persons. In neither case are subjects required to allow themselves to become prey. To explain the ownership of property, Locke advanced a labor theory of property.
David Hume (18th century)
In contrast to the figures discussed in this section thus far, David Hume lived a relatively quiet life in a society that had settled down to a relatively stable social and political structure. He lived the life of a solitary writer until 1763 when, at 52 years of age, he went off to Paris to work at the British embassy. In contrast to what one might expect from his polemical works on religion and his empiricism-driven skeptical epistemology, Hume's views on law and property were quite conservative. He did not believe in hypothetical contracts or the love of humanity in general, and sought to ground politics upon actual human beings as one knows them. "In general," he wrote, "it may be affirmed that there is no such passion in the human mind, as the love of mankind, merely as such, independent of personal qualities, or services, or of relation to ourselves." Existing customs should not lightly be disregarded, because they have come to be what they are due to human nature. With this endorsement of custom comes an endorsement of existing governments, because he conceived of the two as complementary: "A regard for liberty, though a laudable passion, ought commonly to be subordinate to a reverence for established government." Therefore, Hume's view was that there are property rights because of, and to the extent that, the existing law, supported by social customs, secures them. He offered some practical home-spun advice on the general subject, though, as when he referred to avarice as "the spur of industry," and expressed concern about excessive levels of taxation, which "destroy industry, by engendering despair."
Adam Smith
"The property which every man has in his own labour, as it is the original foundation of all other property, so it is the most sacred and inviolable. The inheritance of a poor man lies in the strength and dexterity of his hands, and to hinder him from employing this strength and dexterity in what manner he thinks proper without injury to his neighbor, is a plain violation of this most sacred property. It is a manifest encroachment upon the just liberty both of the workman and of those who might be disposed to employ him. As it hinders the one from working at what he thinks proper, so it hinders the others from employing whom they think proper.
To judge whether he is fit to be employed may surely be trusted to the discretion of the employers whose interest it so much concerns. The affected anxiety of the law-giver lest they should employ an improper person is as impertinent as it is oppressive." — (Source: Adam Smith, The Wealth of Nations, 1776, Book I, Chapter X, Part II.) By the mid 19th century, the industrial revolution had transformed England and the United States and had begun in France. As a result, the conventional conception of what constitutes property expanded beyond land to encompass scarce goods. In France, the revolution of the 1790s had led to large-scale confiscation of land formerly owned by the church and king. The restoration of the monarchy led to claims by those dispossessed to have their former lands returned. Karl Marx Section VIII, "Primitive Accumulation" of Capital involves a critique of Liberal Theories of property rights. Marx notes that under Feudal Law, peasants were legally entitled to their land as the aristocracy was to its manors. Marx cites several historical events in which large numbers of the peasantry were removed from their lands, then seized by the nobility. This seized land was then used for commercial ventures (sheep herding). Marx sees this "Primitive Accumulation" as integral to the creation of English Capitalism. This event created a sizeable un-landed class that had to work for wages to survive. Marx asserts that liberal theories of property are "idyllic" fairy tales that hide a violent historical process. Charles Comte: legitimate origin of property Charles Comte, in "Traité de la propriété" (1834), attempted to justify the legitimacy of private property in response to the Bourbon Restoration. According to David Hart, Comte had three main points: "firstly, that interference by the state over the centuries in property ownership has had dire consequences for justice as well as for economic productivity; secondly, that property is legitimate when it emerges in such a way as not to harm anyone; and thirdly, that historically some, but by no means all, property which has evolved has done so legitimately, with the implication that the present distribution of property is a complex mixture of legitimately and illegitimately held titles." Comte, as Proudhon later did, rejected Roman legal tradition with its toleration of slavery. Instead, he posited a communal "national" property consisting of non-scarce goods, such as land in ancient hunter-gatherer societies. Since agriculture was so much more efficient than hunting and gathering, private property appropriated by someone for farming left remaining hunter-gatherers with more land per person and hence did not harm them. Thus this type of land appropriation did not violate the Lockean proviso – there was "still enough, and as good left." Later theorists would use Comte's analysis in response to the socialist critique of property. Pierre-Joseph Proudhon: property is theft In his 1840 treatise What is Property?, Pierre Proudhon answers with "Property is theft!". In natural resources, he sees two types of property, de jure property (legal title) and de facto property (physical possession), and argues that the former is illegitimate. Proudhon's conclusion is that "property, to be just and possible, must necessarily have equality for its condition." His analysis of the product of labor upon natural resources as property (usufruct) is more nuanced. 
He asserts that land itself cannot be property, yet it should be held by individual possessors as stewards of humanity, with the product of labor being the producer's property. Proudhon reasoned that any wealth gained without labor was stolen from those who labored to create that wealth. Even a voluntary contract to surrender the product of work to an employer was theft, according to Proudhon, since the controller of natural resources had no moral right to charge others for the use of that which he did not labor to create and therefore did not own. Proudhon's theory of property greatly influenced the budding socialist movement, inspiring anarchist theorists such as Mikhail Bakunin, who modified Proudhon's ideas, as well as antagonizing theorists like Karl Marx.
Frédéric Bastiat: property is value
Frédéric Bastiat's main treatise on property can be found in chapter 8 of his book "Economic Harmonies" (1850). In a radical departure from traditional property theory, he defines property not as a physical object, but rather as a relationship between people concerning a thing. Thus, saying one owns a glass of water is merely verbal shorthand for "I may justly gift or trade this water to another person." In essence, what one owns is not the object but the object's value. By "value," Bastiat means "market value"; he emphasizes that this is quite different from utility: "In our relations with one another, we are not owners of the utility of things, but their value, and value is the appraisal made of reciprocal services." Bastiat theorized that, as a result of technological progress and the division of labor, the stock of communal wealth increases over time, and that the hours of work an unskilled laborer expends to buy, for example, 100 liters of wheat decrease over time, thus amounting to "gratis" satisfaction. Thus, private property continually destroys itself, becoming transformed into communal wealth. The increasing proportion of communal wealth to private property results in a tendency toward equality of humanity. "Since the human race began in greatest poverty, that is, when there were the most obstacles to overcome, all that has been achieved from one era to the next is due to the spirit of property." This transformation of private property into the communal domain, Bastiat points out, does not imply that personal property will ever totally disappear; on the contrary, man, as he progresses, continually invents new and more sophisticated needs and desires.
Andrew J. Galambos: a precise definition of property
Andrew J. Galambos (1924–1997) was an astrophysicist and philosopher who devised a social structure that sought to maximize human peace and freedom. Galambos' concept of property was essential to his philosophy. He defined property as a man's life and all non-procreative derivatives of his life. (Because the English language is deficient in omitting the feminine from "man" when referring to humankind, it is implicit and obligatory that the feminine is included in the term "man.") Galambos taught that property is essential to a non-coercive social structure. He defined freedom as follows: "Freedom is the societal condition that exists when every individual has full (100%) control over his property." Galambos defines property as having the following elements: primordial property, which is an individual's life; primary property, which includes ideas, thoughts, and actions; and secondary property, which includes all tangible and intangible possessions that are derivatives of the individual's primary property.
Property includes all non-procreative derivatives of an individual's life; this means that children are not the property of their parents. Galambos repeatedly emphasized that actual government exists to protect property and that the State attacks property. For example, the State requires payment for its services in the form of taxes whether or not people desire such services. Since an individual's money is his property, the confiscation of money in the form of taxes is an attack on property. Military conscription is likewise an attack on a person's primordial property.
Contemporary views
Contemporary political thinkers who believe that natural persons enjoy rights to own property and enter into contracts espouse two views about John Locke. On the one hand, some admire Locke, such as William H. Hutt (1956), who praised Locke for laying down the "quintessence of individualism." On the other hand, those such as Richard Pipes regard Locke's arguments as weak and think that undue reliance thereon has weakened the cause of individualism in recent times. Pipes has written that Locke's work "marked a regression because it rested on the concept of Natural Law" rather than upon Harrington's sociological framework. Hernando de Soto has argued that an essential characteristic of the capitalist market economy is the functioning state protection of property rights in a formal property system that records ownership and transactions. These property rights and the whole legal system of property make possible:
Greater independence for individuals from local community arrangements to protect their assets
Clear, provable, and protectable ownership
The standardization and integration of property rules and property information in a country as a whole
Increased trust arising from a greater certainty of punishment for cheating in economic transactions
More formal and complex written statements of ownership that permit the more straightforward assumption of shared risk and ownership in companies, and insurance against the risk
Greater availability of loans for new projects, since more things can serve as collateral for the loans
Easier access to, and more reliable information regarding, such things as credit history and the worth of assets
Increased fungibility, standardization, and transferability of statements documenting the ownership of property, which paves the way for structures such as national markets for companies and the easy transportation of property through complex networks of individuals and other entities
Greater protection of biodiversity, due to the minimizing of shifting agriculture practices
According to de Soto, all of the above enhance economic growth. Academics have criticized the capitalist frame through which property is viewed, pointing out that commodifying property or land by assigning it a monetary value takes away from traditional cultural heritage, particularly that of first nations inhabitants. These academics argue that the personal nature of property and its link to identity are irreconcilable with the model of wealth creation to which contemporary Western society subscribes.
See also Allemansrätten Anarchism Binary economics Buying agent Capitalism Communism Homestead principle Immovable property Inclusive Democracy International Property Rights Index Labor theory of property Land (economics) Libertarianism Lien Off plan Ownership society Patrimony Personal property Propertarian Property is theft Property law Property rights (economics) Socialism Sovereignty Taxation as theft Interpersonal relationship Public liability Property-giving (legal) Charity Essenes Gift Kibbutz Monasticism Tithe, Zakat (modern sense) Property-taking (legal) Adverse possession Confiscation Eminent domain Fine Jizya Nationalization Regulatory fees and costs Search and seizure Tariff Tax Turf and twig (historical) Tithe, Zakat (historical sense) RS 2477 Property-taking (illegal) Theft References Bibliography Bastiat, Frédéric, 1850. Economic Harmonies. W. Hayden Boyers. Bastiat, Frédéric, 1850. "The Law", tr. Dean Russell. Bethell, Tom, 1998. "The Noblest Triumph: Property and Prosperity through the Ages." New York: St. Martin's Press. Blackstone, William, 1765–69. "Commentaries on the Laws of England", 4 vols. Oxford Univ. Press. Especially Books the Second and Third. De Soto, Hernando, 1989. "The Other Path". Harper & Row. De Soto, Hernando, and Francis Cheneval, 2006. Realizing Property Rights. Ruffer & Rub. Ellickson, Robert, 1993. " ", Yale Law Journal 102: 1315–1400. Mckay, John P., 2004, "A History of World Societies". Boston: Houghton Mifflin Company Palda, Filip (2011) "Pareto's Republic and the New Science of Peace" 2011 chapters online. Published by Cooper-Wolfling. Pipes, Richard, 1999. "Property and Freedom". New York: Knopf Doubleday. External links Concepts of Property, Hugh Breakey, Internet Encyclopedia of Philosophy "Right to Private Property", Tibor Machan, Internet Encyclopedia of Philosophy "Property and Ownership" Jeremy Waldron, The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.). Economic anthropology Social inequality Environmental social science concepts Concepts in political philosophy
23627
https://en.wikipedia.org/wiki/Police
Police
The police are a constituted body of persons empowered by a state with the aim of enforcing the law and protecting the public order, as well as the public itself. This commonly includes ensuring the safety, health, and possessions of citizens, and preventing crime and civil disorder. Their lawful powers encompass arrest and the use of force legitimized by the state via the monopoly on violence. The term is most commonly associated with the police forces of a sovereign state that are authorized to exercise the police power of that state within a defined legal or territorial area of responsibility. Police forces are often defined as being separate from the military and other organizations involved in the defense of the state against foreign aggressors; however, gendarmeries are military units charged with civil policing. Police forces are usually public sector services, funded through taxes. Law enforcement is only part of policing activity. Policing has included an array of activities in different situations, but the predominant ones are concerned with the preservation of order. In some societies, in the late 18th and early 19th centuries, these developed within the context of maintaining the class system and the protection of private property. Police forces have become ubiquitous and a necessity in complex modern societies. However, their role can sometimes be controversial, as they may be involved to varying degrees in corruption, brutality, and the enforcement of authoritarian rule. A police force may also be referred to as a police department, police service, constabulary, gendarmerie, crime prevention, protective services, law enforcement agency, civil guard, or civic guard. Members may be referred to as police officers, troopers, sheriffs, constables, rangers, peace officers or civic/civil guards. Ireland differs from other English-speaking countries by using the Irish language terms Garda (singular) and Gardaí (plural) for both the national police force and its members. The word police is the most universal, and similar terms can be seen in many non-English-speaking countries. Numerous slang terms exist for the police. Many slang terms for police officers are decades or centuries old, with lost etymologies. One of the oldest, cop, has largely lost its slang connotations and become a common colloquial term used both by the public and by police officers to refer to their profession.
Etymology
First attested in English in the early 15th century, originally in a range of senses encompassing '(public) policy; state; public order', the word police comes from Middle French police ('public order, administration, government'), in turn from Latin politia, which is the romanization of the Ancient Greek politeia 'citizenship, administration, civil polity'. This is derived from polis 'city'.
History
Ancient China
Law enforcement in ancient China was carried out by "prefects" for thousands of years, since it developed in both the Chu and Jin kingdoms of the Spring and Autumn period. In Jin, dozens of prefects were spread across the state, each having limited authority and a limited period of employment. They were appointed by local magistrates, who reported to higher authorities such as governors, who in turn were appointed by the emperor, and they oversaw the civil administration of their "prefecture", or jurisdiction. Under each prefect were "subprefects" who helped collectively with law enforcement in the area. Some prefects were responsible for handling investigations, much like modern police detectives. Prefects could also be women.
Local citizens could report minor judicial offenses against them such as robberies at a local prefectural office. The concept of the "prefecture system" spread to other cultures such as Korea and Japan. Babylonia In Babylonia, law enforcement tasks were initially entrusted to individuals with military backgrounds or imperial magnates during the Old Babylonian period, but eventually, law enforcement was delegated to officers known as , who were present in both cities and rural settlements. A was responsible for investigating petty crimes and carrying out arrests. Egypt In ancient Egypt evidence of law enforcement exists as far back as the Old Kingdom period. There are records of an office known as "Judge Commandant of the Police" dating to the fourth dynasty. During the fifth dynasty at the end of the Old Kingdom period, warriors armed with wooden sticks were tasked with guarding public places such as markets, temples, and parks, and apprehending criminals. They are known to have made use of trained monkeys, baboons, and dogs in guard duties and catching criminals. After the Old Kingdom collapsed, ushering in the First Intermediate Period, it is thought that the same model applied. During this period, Bedouins were hired to guard the borders and protect trade caravans. During the Middle Kingdom period, a professional police force was created with a specific focus on enforcing the law, as opposed to the previous informal arrangement of using warriors as police. The police force was further reformed during the New Kingdom period. Police officers served as interrogators, prosecutors, and court bailiffs, and were responsible for administering punishments handed down by judges. In addition, there were special units of police officers trained as priests who were responsible for guarding temples and tombs and preventing inappropriate behavior at festivals or improper observation of religious rites during services. Other police units were tasked with guarding caravans, guarding border crossings, protecting royal necropolises, guarding slaves at work or during transport, patrolling the Nile River, and guarding administrative buildings. By the Eighteenth Dynasty of the New Kingdom period, an elite desert-ranger police force called the Medjay was used to protect valuable areas, especially areas of pharaonic interest like capital cities, royal cemeteries, and the borders of Egypt. Though they are best known for their protection of the royal palaces and tombs in Thebes and the surrounding areas, the Medjay were used throughout Upper and Lower Egypt. Each regional unit had its own captain. The police forces of ancient Egypt did not guard rural communities, which often took care of their own judicial problems by appealing to village elders, but many of them had a constable to enforce state laws. Greece In ancient Greece, publicly owned slaves were used by magistrates as police. In Athens, the Scythian Archers (the 'rod-bearers'), a group of about 300 Scythian slaves, was used to guard public meetings to keep order and for crowd control, and also assisted with dealing with criminals, handling prisoners, and making arrests. Other duties associated with modern policing, such as investigating crimes, were left to the citizens themselves. Athenian police forces were supervised by the Areopagus. In Sparta, the Ephors were in charge of maintaining public order as judges, and they used Sparta's Hippeis, a 300-member Royal guard of honor, as their enforcers. 
There were separate authorities supervising women, children, and agricultural issues. Sparta also had a secret police force called the crypteia to watch the large population of helots, or slaves.
Rome
In the Roman Empire, the army played a major role in providing security. Roman soldiers detached from their legions and posted among civilians carried out law enforcement tasks. The Praetorian Guard, an elite army unit which was primarily an Imperial bodyguard and intelligence-gathering unit, could also act as a riot police force if required. Local watchmen were hired by cities to provide some extra security. Lictors, civil servants whose primary duty was to act as bodyguards to magistrates who held imperium, could carry out arrests and inflict punishments at their magistrate's command. Magistrates such as the tresviri capitales investigated crimes. There was no concept of public prosecution, so victims of crime or their families had to organize and manage the prosecution themselves. Under the reign of Augustus, when the capital had grown to almost one million inhabitants, 14 wards were created; the wards were protected by seven squads of 1,000 men called vigiles, who acted as night watchmen and firemen. In addition to firefighting, their duties included apprehending petty criminals, capturing runaway slaves, guarding the baths at night, and stopping disturbances of the peace. As well as the city of Rome, vigiles were also stationed in the harbor cities of Ostia and Portus. Augustus also formed the Urban Cohorts to deal with gangs and civil disturbances in the city of Rome, and as a counterbalance to the Praetorian Guard's enormous power in the city. They were led by the urban prefect. Urban Cohort units were later formed in Roman Carthage and Lugdunum.
India
Law enforcement systems existed in the various kingdoms and empires of ancient India. The Apastamba Dharmasutra prescribes that kings should appoint officers and subordinates in the towns and villages to protect their subjects from crime. Various inscriptions and literature from ancient India suggest that a variety of roles existed for law enforcement officials, such as those of a constable, thief catcher, watchman, and detective. In ancient India up to medieval and early modern times, kotwals were in charge of local law enforcement.
Achaemenid (First Persian) Empire
The Achaemenid Empire had well-organized police forces. A police force existed in every place of importance. In the cities, each ward was under the command of a Superintendent of Police. Police officers also acted as prosecutors and carried out punishments imposed by the courts. They were required to know the court procedure for prosecuting cases and advancing accusations.
Israel
In ancient Israel and Judah, officials with the responsibility of making declarations to the people, guarding the king's person, supervising public works, and executing the orders of the courts existed in the urban areas. They are repeatedly mentioned in the Hebrew Bible, and this system lasted into the period of Roman rule. The first-century Jewish historian Josephus related that every judge had two such officers under his command. Levites were preferred for this role. Cities and towns also had night watchmen. Besides officers of the town, there were officers for every tribe. The temple in Jerusalem had special temple police to guard it. The Talmud mentions various local police officials in the Jewish communities of the Land of Israel and Babylon who supervised economic activity.
Their Greek-sounding titles suggest that the roles were introduced under Hellenic influence. Most of these officials received their authority from local courts and their salaries were drawn from the town treasury. The Talmud also mentions city watchmen and mounted and armed watchmen in the suburbs. Africa In many regions of pre-colonial Africa, particularly West and Central Africa, guild-like secret societies emerged as law enforcement. In the absence of a court system or written legal code, they carried out police-like activities, employing varying degrees of coercion to enforce conformity and deter antisocial behavior. In ancient Ethiopia, armed retainers of the nobility enforced law in the countryside according to the will of their leaders. The Songhai Empire had officials known as assara-munidios, or "enforcers", acting as police. The Americas Pre-Columbian civilizations in the Americas also had organized law enforcement. The city-states of the Maya civilization had constables known as . In the Aztec Empire, judges had officers serving under them who were empowered to perform arrests, even of dignitaries. In the Inca Empire, officials called enforced the law among the households they were assigned to oversee, with inspectors known as () also stationed throughout the provinces to keep order. Post-classical In medieval Spain, , or 'holy brotherhoods', peacekeeping associations of armed individuals, were a characteristic of municipal life, especially in Castile. As medieval Spanish kings often could not offer adequate protection, protective municipal leagues began to emerge in the twelfth century against banditry and other rural criminals, and against the lawless nobility or to support one or another claimant to a crown. These organizations were intended to be temporary, but became a long-standing fixture of Spain. The first recorded case of the formation of an occurred when the towns and the peasantry of the north united to police the pilgrim road to Santiago de Compostela in Galicia, and protect the pilgrims against robber knights. Throughout the Middle Ages such alliances were frequently formed by combinations of towns to protect the roads connecting them, and were occasionally extended to political purposes. Among the most powerful was the league of North Castilian and Basque ports, the Hermandad de las marismas: Toledo, Talavera, and Villarreal. As one of their first acts after end of the War of the Castilian Succession in 1479, Ferdinand II of Aragon and Isabella I of Castile established the centrally-organized and efficient Holy Brotherhood as a national police force. They adapted an existing brotherhood to the purpose of a general police acting under officials appointed by themselves, and endowed with great powers of summary jurisdiction even in capital cases. The original brotherhoods continued to serve as modest local police-units until their final suppression in 1835. The Vehmic courts of Germany provided some policing in the absence of strong state institutions. Such courts had a chairman who presided over a session and lay judges who passed judgement and carried out law enforcement tasks. Among the responsibilities that lay judges had were giving formal warnings to known troublemakers, issuing warrants, and carrying out executions. In the medieval Islamic Caliphates, police were known as . Bodies termed existed perhaps as early as the Caliphate of Uthman. The Shurta is known to have existed in the Abbasid and Umayyad Caliphates. 
Their primary roles were to act as police and internal security forces but they could also be used for other duties such as customs and tax enforcement, rubbish collection, and acting as bodyguards for governors. From the 10th century, the importance of the Shurta declined as the army assumed internal security tasks while cities became more autonomous and handled their own policing needs locally, such as by hiring watchmen. In addition, officials called were responsible for supervising bazaars and economic activity in general in the medieval Islamic world. In France during the Middle Ages, there were two Great Officers of the Crown of France with police responsibilities: The Marshal of France and the Grand Constable of France. The military policing responsibilities of the Marshal of France were delegated to the Marshal's provost, whose force was known as the Marshalcy because its authority ultimately derived from the Marshal. The marshalcy dates back to the Hundred Years' War, and some historians trace it back to the early 12th century. Another organisation, the Constabulary (), was under the command of the Constable of France. The constabulary was regularised as a military body in 1337. Under Francis I (reigned 1515–1547), the was merged with the constabulary. The resulting force was also known as the , or, formally, the Constabulary and Marshalcy of France. In late medieval Italian cities, police forces were known as berovierri. Individually, their members were known as birri. Subordinate to the city's podestà, the berovierri were responsible for guarding the cities and their suburbs, patrolling, and the pursuit and arrest of criminals. They were typically hired on short-term contracts, usually six months. Detailed records from medieval Bologna show that birri had a chain of command, with constables and sergeants managing lower-ranking birri, that they wore uniforms, that they were housed together with other employees of the podestà together with a number of servants including cooks and stable-keepers, that their parentage and places of origin were meticulously recorded, and that most were not native to Bologna, with many coming from outside Italy. The English system of maintaining public order since the Norman conquest was a private system of tithings known as the mutual pledge system. This system was introduced under Alfred the Great. Communities were divided into groups of ten families called tithings, each of which was overseen by a chief tithingman. Every household head was responsible for the good behavior of his own family and the good behavior of other members of his tithing. Every male aged 12 and over was required to participate in a tithing. Members of tithings were responsible for raising "hue and cry" upon witnessing or learning of a crime, and the men of his tithing were responsible for capturing the criminal. The person the tithing captured would then be brought before the chief tithingman, who would determine guilt or innocence and punishment. All members of the criminal's tithing would be responsible for paying the fine. A group of ten tithings was known as a "hundred" and every hundred was overseen by an official known as a reeve. Hundreds ensured that if a criminal escaped to a neighboring village, he could be captured and returned to his village. If a criminal was not apprehended, then the entire hundred could be fined. 
The hundreds were governed by administrative divisions known as shires, the rough equivalent of a modern county, which were overseen by an official known as a shire-reeve, from which the term sheriff evolved. The shire-reeve had the power of , meaning he could gather the men of his shire to pursue a criminal. Following the Norman conquest of England in 1066, the tithing system was tightened with the frankpledge system. By the end of the 13th century, the office of constable developed. Constables had the same responsibilities as chief tithingmen and additionally as royal officers. The constable was elected by his parish every year. Eventually, constables became the first 'police' official to be tax-supported. In urban areas, watchmen were tasked with keeping order and enforcing nighttime curfew. Watchmen guarded the town gates at night, patrolled the streets, arrested those on the streets at night without good reason, and also acted as firefighters. Eventually the office of justice of the peace was established, with a justice of the peace overseeing constables. There was also a system of investigative "juries". The Assize of Arms of 1252, which required the appointment of constables to summon men to arms, quell breaches of the peace, and to deliver offenders to the sheriff or reeve, is cited as one of the earliest antecedents of the English police. The Statute of Winchester of 1285 is also cited as the primary legislation regulating the policing of the country between the Norman Conquest and the Metropolitan Police Act 1829. From about 1500, private watchmen were funded by private individuals and organisations to carry out police functions. They were later nicknamed 'Charlies', probably after the reigning monarch King Charles II. Thief-takers were also rewarded for catching thieves and returning the stolen property. They were private individuals usually hired by crime victims. The earliest English use of the word police seems to have been the term Polles mentioned in the book The Second Part of the Institutes of the Lawes of England published in 1642. Early modern The first example of a statutory police force in the world was probably the High Constables of Edinburgh, formed in 1611 to police the streets of Edinburgh, then part of the Kingdom of Scotland. The constables, of whom half were merchants and half were craftsmen, were charged with enforcing 16 regulations relating to curfews, weapons, and theft. At that time, maintenance of public order in Scotland was mainly done by clan chiefs and feudal lords. The first centrally organised and uniformed police force was created by the government of King Louis XIV in 1667 to police the city of Paris, then the largest city in Europe. The royal edict, registered by the of Paris on March 15, 1667, created the office of ("lieutenant general of police"), who was to be the head of the new Paris police force, and defined the task of the police as "ensuring the peace and quiet of the public and of private individuals, purging the city of what may cause disturbances, procuring abundance, and having each and everyone live according to their station and their duties". This office was first held by Gabriel Nicolas de la Reynie, who had 44 ('police commissioners') under his authority. In 1709, these commissioners were assisted by ('police inspectors'). The city of Paris was divided into 16 districts policed by the , each assigned to a particular district and assisted by a growing bureaucracy. 
The scheme of the Paris police force was extended to the rest of France by a royal edict of October 1699, resulting in the creation of lieutenants general of police in all large French cities and towns. After the French Revolution, Napoléon I reorganized the police in Paris and other cities with more than 5,000 inhabitants on February 17, 1800, as the Prefecture of Police. On March 12, 1829, a government decree created the first uniformed police in France, known as ('city sergeants'), which the Paris Prefecture of Police's website claims were the first uniformed policemen in the world. In feudal Japan, samurai warriors were charged with enforcing the law among commoners. Some Samurai acted as magistrates called , who acted as judges, prosecutors, and as chief of police. Beneath them were other Samurai serving as , or assistant magistrates, who conducted criminal investigations, and beneath them were Samurai serving as , who were responsible for patrolling the streets, keeping the peace, and making arrests when necessary. The were responsible for managing the . and were typically drawn from low-ranking samurai families. Assisting the were the , non-Samurai who went on patrol with them and provided assistance, the , non-Samurai from the lowest outcast class, often former criminals, who worked for them as informers and spies, and or , chōnin, often former criminals, who were hired by local residents and merchants to work as police assistants in a particular neighborhood. This system typically did not apply to the Samurai themselves. Samurai clans were expected to resolve disputes among each other through negotiation, or when that failed through duels. Only rarely did Samurai bring their disputes to a magistrate or answer to police. In Joseon-era Korea, the Podocheong emerged as a police force with the power to arrest and punish criminals. Established in 1469 as a temporary organization, its role solidified into a permanent one. In Sweden, local governments were responsible for law and order by way of a royal decree issued by Magnus III in the 13th century. The cities financed and organized groups of watchmen who patrolled the streets. In the late 1500s in Stockholm, patrol duties were in large part taken over by a special corps of salaried city guards. The city guard was organized, uniformed and armed like a military unit and was responsible for interventions against various crimes and the arrest of suspected criminals. These guards were assisted by the military, fire patrolmen, and a civilian unit that did not wear a uniform, but instead wore a small badge around the neck. The civilian unit monitored compliance with city ordinances relating to e.g. sanitation issues, traffic and taxes. In rural areas, the King's bailiffs were responsible for law and order until the establishment of counties in the 1630s. Up to the early 18th century, the level of state involvement in law enforcement in Britain was low. Although some law enforcement officials existed in the form of constables and watchmen, there was no organized police force. A professional police force like the one already present in France would have been ill-suited to Britain, which saw examples such as the French one as a threat to the people's liberty and balanced constitution in favor of an arbitrary and tyrannical government. Law enforcement was mostly up to the private citizens, who had the right and duty to prosecute crimes in which they were involved or in which they were not. At the cry of 'murder!' or 'stop thief!' 
everyone was entitled and obliged to join the pursuit. Once the criminal had been apprehended, the parish constables and night watchmen, who were the only public figures provided by the state and who were typically part-time and local, would make the arrest. As a result, the state set a reward to encourage citizens to arrest and prosecute offenders. The first of such rewards was established in 1692 of the amount of £40 for the conviction of a highwayman and in the following years it was extended to burglars, coiners and other forms of offense. The reward was to be increased in 1720 when, after the end of the War of the Spanish Succession and the consequent rise of criminal offenses, the government offered £100 for the conviction of a highwayman. Although the offer of such a reward was conceived as an incentive for the victims of an offense to proceed to the prosecution and to bring criminals to justice, the efforts of the government also increased the number of private thief-takers. Thief-takers became infamously known not so much for what they were supposed to do, catching real criminals and prosecuting them, as for "setting themselves up as intermediaries between victims and their attackers, extracting payments for the return of stolen goods and using the threat of prosecution to keep offenders in thrall". Some of them, such as Jonathan Wild, became infamous at the time for staging robberies in order to receive the reward. In 1737, George II began paying some London and Middlesex watchmen with tax monies, beginning the shift to government control. In 1749, Judge Henry Fielding began organizing a force of quasi-professional constables known as the Bow Street Runners. The Bow Street Runners are considered to have been Britain's first dedicated police force. They represented a formalization and regularization of existing policing methods, similar to the unofficial 'thief-takers'. What made them different was their formal attachment to the Bow Street magistrates' office, and payment by the magistrate with funds from the central government. They worked out of Fielding's office and court at No. 4 Bow Street, and did not patrol but served writs and arrested offenders on the authority of the magistrates, travelling nationwide to apprehend criminals. Fielding wanted to regulate and legalize law enforcement activities due to the high rate of corruption and mistaken or malicious arrests seen with the system that depended mainly on private citizens and state rewards for law enforcement. Henry Fielding's work was carried on by his brother, Justice John Fielding, who succeeded him as magistrate in the Bow Street office. Under John Fielding, the institution of the Bow Street Runners gained more and more recognition from the government, although the force was only funded intermittently in the years that followed. In 1763, the Bow Street Horse Patrol was established to combat highway robbery, funded by a government grant. The Bow Street Runners served as the guiding principle for the way that policing developed over the next 80 years. Bow Street was a manifestation of the move towards increasing professionalisation and state control of street life, beginning in London. The Macdaniel affair, a 1754 British political scandal in which a group of thief-takers was found to be falsely prosecuting innocent men in order to collect reward money from bounties, added further impetus for a publicly salaried police force that did not depend on rewards. 
Nonetheless, in 1828, there were privately financed police units in no fewer than 45 parishes within a 10-mile radius of London. The word police was borrowed from French into the English language in the 18th century, but for a long time it applied only to French and continental European police forces. The word, and the concept of police itself, were "disliked as a symbol of foreign oppression". Before the 19th century, the first uses of the word police recorded in government documents in the United Kingdom were the appointment of Commissioners of Police for Scotland in 1714 and the creation of the Marine Police in 1798. Modern Scotland and Ireland Following early police forces established in 1779 and 1788 in Glasgow, Scotland, the Glasgow authorities successfully petitioned the government to pass the Glasgow Police Act establishing the City of Glasgow Police in 1800. Other Scottish towns soon followed suit and set up their own police forces through acts of parliament. In Ireland, the Irish Constabulary Act of 1822 marked the beginning of the Royal Irish Constabulary. The Act established a force in each barony with chief constables and inspectors general under the control of the civil administration at Dublin Castle. By 1841 this force numbered over 8,600 men. London In 1797, Patrick Colquhoun was able to persuade the West Indies merchants who operated at the Pool of London on the River Thames to establish a police force at the docks to prevent rampant theft that was causing annual estimated losses of £500,000 worth of cargo in imports alone. The idea of a police, as it then existed in France, was considered a potentially undesirable foreign import. In building the case for the police in the face of England's firm anti-police sentiment, Colquhoun framed the political rationale on economic indicators to show that a police dedicated to crime prevention was "perfectly congenial to the principle of the British constitution". Moreover, he went so far as to praise the French system, which had reached "the greatest degree of perfection" in his estimation. With an initial investment of £4,200, the new force, the Marine Police, began with about 50 men charged with policing 33,000 workers in the river trades, of whom Colquhoun claimed 11,000 were known criminals and "on the game". The force was part funded by the London Society of West India Planters and Merchants. The force was a success after its first year, and his men had "established their worth by saving £122,000 worth of cargo and by the rescuing of several lives". Word of this success spread quickly, and the government passed the Depredations on the Thames Act 1800 on 28 July 1800, establishing a fully funded police force, the Thames River Police, together with new laws including police powers; it is now the oldest police force in the world. Colquhoun published a book on the experiment, The Commerce and Policing of the River Thames. It found receptive audiences far outside London, and inspired similar forces in other cities, notably New York City, Dublin, and Sydney. Colquhoun's utilitarian approach to the problem – using a cost-benefit argument to obtain support from businesses standing to benefit – allowed him to achieve what Henry and John Fielding failed to achieve for their Bow Street detectives. Unlike the stipendiary system at Bow Street, the river police were full-time, salaried officers prohibited from taking private fees. 
His other contribution was the concept of preventive policing; his police were to act as a highly visible deterrent to crime by their permanent presence on the Thames. Metropolitan London was fast reaching a size unprecedented in world history, due to the onset of the Industrial Revolution. It became clear that the locally maintained system of volunteer constables and "watchmen" was ineffective, both in detecting and preventing crime. A parliamentary committee was appointed to investigate the system of policing in London. When Sir Robert Peel was appointed Home Secretary in 1822, he established a second and more effective committee and acted upon its findings. Royal assent to the Metropolitan Police Act 1829 was given and the Metropolitan Police Service was established on September 29, 1829, in London. Peel, widely regarded as the father of modern policing, was heavily influenced by the social and legal philosophy of Jeremy Bentham, who called for a strong and centralised, but politically neutral, police force for the maintenance of social order, for the protection of people from crime and to act as a visible deterrent to urban crime and disorder. Peel decided to standardise the police force as an official paid profession, to organise it in a civilian fashion, and to make it answerable to the public. Due to public fears concerning the deployment of the military in domestic matters, Peel organised the force along civilian lines, rather than paramilitary. To appear neutral, the uniform was deliberately manufactured in blue rather than red, which was then a military colour, and officers were armed only with a wooden truncheon and a rattle to signal the need for assistance. Along with this, police ranks did not include military titles, with the exception of Sergeant. To distance the new police force from the initial public view of it as a new tool of government repression, Peel publicised the so-called Peelian principles, which set down basic guidelines for ethical policing: Whether the police are effective is measured not by the number of arrests but by the deterrence of crime. Above all else, an effective authority figure knows trust and accountability are paramount. Hence, Peel's most often quoted principle that "The police are the public and the public are the police." The Metropolitan Police Act 1829 created a modern police force by limiting the purview of the force and its powers and envisioning it as merely an organ of the judicial system. Their job was apolitical: to maintain the peace and apprehend criminals for the courts to process according to the law. This was very different from the "continental model" of the police force that had been developed in France, where the police force worked within the parameters of the absolutist state as an extension of the authority of the monarch and functioned as part of the governing state. In 1863, the Metropolitan Police were issued with the distinctive custodian helmet, and in 1884 they switched to the use of whistles that could be heard from much further away. The Metropolitan Police became a model for the police forces in many countries, including the United States and most of the British Empire. Bobbies can still be found in many parts of the Commonwealth of Nations. Australia In Australia, organized law enforcement emerged soon after British colonization began in 1788. The first law enforcement organizations were the Night Watch and Row Boat Guard, which were formed in 1789 to police Sydney. 
Their ranks were drawn from well-behaved convicts deported to Australia. The Night Watch was replaced by the Sydney Foot Police in 1790. In New South Wales, rural law enforcement officials were appointed by local justices of the peace during the early to mid-19th century and were referred to as "bench police" or "benchers". A mounted police force was formed in 1825. The first police force having centralised command as well as jurisdiction over an entire colony was the South Australia Police, formed in 1838 under Henry Inman. However, whilst the New South Wales Police Force was established in 1862, it was made up of a large number of policing and military units operating within the then Colony of New South Wales and traces its links back to the Royal Marines. The passing of the Police Regulation Act of 1862 tightly regulated and centralised all of the police forces operating throughout the Colony of New South Wales. Each Australian state and territory maintains its own police force, while the Australian Federal Police enforces laws at the federal level. The New South Wales Police Force remains the largest police force in Australia in terms of personnel and physical resources. It is also the only police force that requires its recruits to undertake university studies at the recruit level and to pay for that education themselves. Brazil In 1566, the first police investigator of Rio de Janeiro was recruited. By the 17th century, most captaincies already had local units with law enforcement functions. On July 9, 1775, a Cavalry Regiment was created in the state of Minas Gerais for maintaining law and order. In 1808, the Portuguese royal family relocated to Brazil because of the French invasion of Portugal. King João VI established the Intendência Geral de Polícia ('General Police Intendancy') for investigations. He also created a Royal Police Guard for Rio de Janeiro in 1809. In 1831, after independence, each province started organizing its local "military police" with order-maintenance tasks. The Federal Railroad Police was created in 1852, the Federal Highway Police in 1928, and the Federal Police in 1967. Canada During the early days of English and French colonization, municipalities hired watchmen and constables to provide security. Established in 1729, the Royal Newfoundland Constabulary (RNC) was the first policing service founded in Canada. Modern policing services were established in the Canadas during the 1830s, modelled after the London Metropolitan Police and adopting the ideas of the Peelian principles. The Toronto Police Service was established in 1834 as the first municipal police service in Canada. Prior to that, local able-bodied male citizens had been required to report for night watch duty as special constables for a fixed number of nights a year on penalty of a fine or imprisonment in a system known as "watch and ward." The Quebec City Police Service was established in 1840. A national police service, the Dominion Police, was founded in 1868. Initially the Dominion Police provided security for parliament, but its responsibilities quickly grew. In 1870, Rupert's Land and the North-Western Territory were incorporated into the country. In an effort to police its newly acquired territory, the Canadian government established the North-West Mounted Police in 1873 (renamed Royal North-West Mounted Police in 1904). In 1920, the Dominion Police and the Royal Northwest Mounted Police were amalgamated into the Royal Canadian Mounted Police (RCMP). 
The RCMP provides federal law enforcement, as well as law enforcement in eight provinces and all three territories. The provinces of Ontario and Quebec maintain their own provincial police forces, the Ontario Provincial Police (OPP) and the Sûreté du Québec (SQ). Policing in Newfoundland and Labrador is provided by the RCMP and the RNC. The aforementioned services also provide municipal policing, although larger Canadian municipalities may establish their own police service. Lebanon In Lebanon, the current police force was established in 1861, with the creation of the Gendarmerie. India Under the Mughal Empire, provincial governors called subahdars (or nazims), as well as officials known as faujdars and thanadars, were tasked with keeping law and order. Kotwals were responsible for public order in urban areas. In addition, officials called amils, whose primary duties were tax collection, occasionally dealt with rebels. The system evolved under growing British influence that eventually culminated in the establishment of the British Raj. In 1770, the offices of faujdar and amil were abolished. They were brought back in 1774 by Warren Hastings, the first Governor of the Presidency of Fort William (Bengal). In 1791, the first permanent police force was established by Charles Cornwallis, the Commander-in-Chief of British India and Governor of the Presidency of Fort William. A single police force was established after the formation of the British Raj with the Government of India Act 1858. A uniform police bureaucracy was formed under the Police Act 1861, which established the Superior Police Services. This later evolved into the Indian Imperial Police, which kept order until the Partition of India and independence in 1947. In 1948, the Indian Imperial Police was replaced by the Indian Police Service. In modern India, the police are under the control of the respective states and union territories and are organized under the State Police Services (SPS). The candidates selected for the SPS are usually posted as Deputy Superintendent of Police or Assistant Commissioner of Police once their probationary period ends. After a prescribed period of satisfactory service in the SPS, officers are nominated to the Indian Police Service. The service color is usually dark blue and red, while the uniform color is khaki. United States In Colonial America, the county sheriff was the most important law enforcement official. For instance, the New York Sheriff's Office was founded in 1626, and the Albany County Sheriff's Department in the 1660s. The county sheriff, who was an elected official, was responsible for enforcing laws, collecting taxes, supervising elections, and handling the legal business of the county government. Sheriffs would investigate crimes and make arrests after citizens filed complaints or provided information about a crime but did not carry out patrols or otherwise take preventive action. Villages and cities typically hired constables and marshals, who were empowered to make arrests and serve warrants. Many municipalities also formed a night watch, a group of citizen volunteers who would patrol the streets at night looking for crime and fires. Typically, constables and marshals were the main law enforcement officials available during the day, while the night watch would serve during the night. Eventually, municipalities formed day watch groups. Rioting was handled by local militias. 
In the 1700s, the Province of Carolina (later North- and South Carolina) established slave patrols in order to prevent slave rebellions and enslaved people from escaping. By 1785 the Charleston Guard and Watch had "a distinct chain of command, uniforms, sole responsibility for policing, salary, authorized use of force, and a focus on preventing crime." In 1789 the United States Marshals Service was established, followed by other federal services such as the U.S. Parks Police (1791) and U.S. Mint Police (1792). In 1751, moves towards a municipal police service in Philadelphia were made when the city's night watchmen and constables began receiving wages and a Board of Wardens was created to oversee the night watch. Municipal police services were created in Richmond, Virginia in 1807, Boston in 1838, and New York City in 1845. The United States Secret Service was founded in 1865 and was for some time the main investigative body for the federal government. Modern policing, influenced by the British model established in 1829 on the Peelian principles, began emerging in the United States in the mid-19th century, replacing previous law enforcement systems based primarily on night watch organizations. Cities began establishing organized, publicly funded, full-time professional police services. In Boston, a day police consisting of six officers under the command of the city marshal was established in 1838 to supplement the city's night watch. This paved the way for the establishment of the Boston Police Department in 1854. In New York City, law enforcement up to the 1840s was handled by a night watch as well as 100 city marshals, 51 municipal police officers, and 31 constables. In 1845, the New York City Police Department was established. In Philadelphia, the first police officers to patrol the city in daytime were employed in 1833 as a supplement to the night watch system, leading to the establishment of the Philadelphia Police Department in 1854. In the American Old West, law enforcement was carried out by local sheriffs, rangers, constables, and federal marshals. There were also town marshals responsible for serving civil and criminal warrants, maintaining the jails, and carrying out arrests for petty crime. In addition to federal, state, and local forces, some special districts have been formed to provide extra police protection in designated areas. These districts may be known as neighborhood improvement districts, crime prevention districts, or security districts. In 2022, San Francisco supervisors approved a policy allowing municipal police (San Francisco Police Department) to use robots for various law enforcement and emergency operations, permitting their employment as a deadly force option in cases where the "risk of life to members of the public or officers is imminent and outweighs any other force option available to SFPD." This policy has been criticized by groups such as the Electronic Frontier Foundation and the ACLU, who have argued that "killer robots will not make San Francisco better" and "police might even bring armed robots to a protest." Development of theory Michel Foucault wrote that the contemporary concept of police as a paid and funded functionary of the state was developed by German and French legal scholars and practitioners in public administration and statistics in the 17th and early 18th centuries, most notably with Nicolas Delamare's Traité de la Police ("Treatise on the Police"), first published in 1705. 
The German Polizeiwissenschaft (science of police) was first theorized by Philipp von Hörnigk, a 17th-century Austrian political economist and civil servant, and much more famously by Johann Heinrich Gottlob Justi, who produced an important theoretical work on the formulation of police within the framework of cameral science. Foucault cites Magdalene Humpert, author of Bibliographie der Kameralwissenschaften (1937), who notes that a substantial bibliography of over 4,000 works on the practice of Polizeiwissenschaft was produced. However, this may be a mistranslation of Foucault's own work, since Humpert's actual source states that over 14,000 items were produced, dating from 1520 to 1850. As conceptualized by the Polizeiwissenschaft, according to Foucault, the police had an administrative, economic and social duty ("procuring abundance"). It was in charge of demographic concerns and needed to be incorporated within the western political philosophy system of raison d'état, thereby giving the superficial appearance of empowering the population (while unwittingly supervising it), which, according to mercantilist theory, was to be the main strength of the state. Thus, its functions extended well beyond simple law enforcement activities and included public health concerns, urban planning (which was important because of the miasma theory of disease; thus, cemeteries were moved out of town, etc.), and the surveillance of prices. The concept of preventive policing, or policing to deter crime from taking place, gained influence in the late 18th century. Police Magistrate John Fielding, head of the Bow Street Runners, argued that "...it is much better to prevent even one man from being a rogue than apprehending and bringing forty to justice." The utilitarian philosopher Jeremy Bentham promoted the views of the Italian Marquis Cesare Beccaria and disseminated a translated version of his "Essay on Crimes and Punishments". Bentham espoused the guiding principle of "the greatest good for the greatest number": It is better to prevent crimes than to punish them. This is the chief aim of every good system of legislation, which is the art of leading men to the greatest possible happiness or to the least possible misery, according to calculation of all the goods and evils of life. Patrick Colquhoun's influential work, A Treatise on the Police of the Metropolis (1797), was heavily influenced by Benthamite thought. Colquhoun's Thames River Police was founded on these principles, and in contrast to the Bow Street Runners, acted as a deterrent by their continual presence on the riverfront, in addition to being able to intervene if they spotted a crime in progress. Edwin Chadwick's 1829 article, "Preventive police" in the London Review, argued that prevention ought to be the primary concern of a police body, which was not the case in practice. The reason, argued Chadwick, was that "A preventive police would act more immediately by placing difficulties in obtaining the objects of temptation." In contrast to a deterrent of punishment, a preventive police force would deter criminality by making crime cost-ineffective – "crime doesn't pay". In the second draft of his 1829 Police Act, the "object" of the new Metropolitan Police was changed by Robert Peel to the "principal object," which was the "prevention of crime." 
Later historians would attribute the perception of England's "appearance of orderliness and love of public order" to the preventive principle entrenched in Peel's police system. The development of modern police forces around the world was contemporaneous with the formation of the state, later defined by sociologist Max Weber as achieving a "monopoly on the legitimate use of physical force", which is primarily exercised by the police and the military. Marxist theory situates the development of the modern state as part of the rise of capitalism, in which the police are one component of the bourgeoisie's repressive apparatus for subjugating the working class. By contrast, the Peelian principles argue that "the power of the police ... is dependent on public approval of their existence, actions and behavior", a philosophy known as policing by consent. Personnel and organization Police forces include both preventive (uniformed) police and detectives. Terminology varies from country to country. Police functions include protecting life and property, enforcing criminal law, conducting criminal investigations, regulating traffic, crowd control, public safety duties, civil defense, emergency management, searching for missing persons, dealing with lost property, and other duties concerned with public order. Regardless of size, police forces are generally organized as a hierarchy with multiple ranks. The exact structures and the names of rank vary considerably by country. Uniformed The police who wear uniforms make up the majority of a police service's personnel. Their main duty is to respond to calls for service. When not responding to these calls, they do work aimed at preventing crime, such as patrols. The uniformed police are known by varying names such as preventive police, the uniform branch/division, administrative police, order police, the patrol bureau/division, or patrol. In Australia and the United Kingdom, patrol personnel are also known as "general duties" officers. Atypically, Brazil's preventive police are known as Military Police. As the name implies, uniformed police wear uniforms. They perform functions that require an immediate recognition of an officer's legal authority and a potential need for force. Most commonly this means intervening to stop a crime in progress and securing the scene of a crime that has already happened. Besides dealing with crime, these officers may also manage and monitor traffic, carry out community policing duties, maintain order at public events or carry out searches for missing people (in 2012, the latter accounted for 14% of police time in the United Kingdom). As most of these duties must be available as a 24/7 service, uniformed police are required to do shift work. Detectives Police detectives are responsible for investigations and detective work. Detectives may be called Investigations Police, Judiciary/Judicial Police, or Criminal Police. In the United Kingdom, they are often referred to by the name of their department, the Criminal Investigation Department. Detectives typically make up roughly 15–25% of a police service's personnel. Detectives, in contrast to uniformed police, typically wear business-styled attire in bureaucratic and investigative functions, where a uniformed presence would be either distracting or intimidating, but a need to establish police authority still exists. "Plainclothes" officers dress in attire consistent with that worn by the general public for purposes of blending in. 
In some cases, police are assigned to work "undercover", where they conceal their police identity to investigate crimes, such as organized crime or narcotics crime, that are unsolvable by other means. In some cases, this type of policing shares aspects with espionage. The relationship between detective and uniformed branches varies by country. In the United States, there is high variation within the country itself. Many American police departments require detectives to spend some time on temporary assignments in the patrol division. The argument is that rotating officers helps the detectives to better understand the uniformed officers' work, to promote cross-training in a wider variety of skills, and prevent "cliques" that can contribute to corruption or other unethical behavior. Conversely, some countries regard detective work as being an entirely separate profession, with detectives working in separate agencies and recruited without having to serve in uniform. A common compromise in English-speaking countries is that most detectives are recruited from the uniformed branch, but once qualified they tend to spend the rest of their careers in the detective branch. Another point of variation is whether detectives have extra status. In some forces, such as the New York Police Department and Philadelphia Police Department, a regular detective holds a higher rank than a regular police officer. In others, such as British police and Canadian police, a regular detective has equal status with regular uniformed officers. Officers still have to take exams to move to the detective branch, but the move is regarded as being a specialization, rather than a promotion. Volunteers and auxiliary Police services often include part-time or volunteer officers, some of whom have other jobs outside policing. These may be paid positions or entirely volunteer. These are known by a variety of names, such as reserves, auxiliary police or special constables. Other volunteer organizations work with the police and perform some of their duties. Groups in the U.S. including the Retired and Senior Volunteer Program, Community Emergency Response Team, and the Boy Scouts Police Explorers provide training, traffic and crowd control, disaster response, and other policing duties. In the U.S., the Volunteers in Police Service program assists over 200,000 volunteers in almost 2,000 programs. Volunteers may also work on the support staff. Examples of these schemes are Volunteers in Police Service in the US, Police Support Volunteers in the UK and Volunteers in Policing in New South Wales. Specialized Specialized preventive and detective groups, or Specialist Investigation Departments, exist within many law enforcement organizations either for dealing with particular types of crime, such as traffic law enforcement, K9/use of police dogs, crash investigation, homicide, or fraud; or for situations requiring specialized skills, such as underwater search, aviation, explosive disposal ("bomb squad"), and computer crime. Most larger jurisdictions employ police tactical units, specially selected and trained paramilitary units with specialized equipment, weapons, and training, for the purposes of dealing with particularly violent situations beyond the capability of a patrol officer response, including standoffs, counterterrorism, and rescue operations. 
In counterinsurgency-type campaigns, select and specially trained units of police armed and equipped as light infantry have been designated as police field forces, which perform paramilitary-type patrols and ambushes in highly dangerous areas whilst retaining their police powers. Because their situational mandate typically focuses on removing innocent bystanders from dangerous people and dangerous situations, not violent resolution, they are often equipped with non-lethal tactical tools like chemical agents, stun grenades, and rubber bullets. The Specialist Firearms Command (MO19) of the Metropolitan Police in London is a group of armed police used in dangerous situations including hostage taking, armed robbery/assault and terrorism. Administrative duties Police may have administrative duties that are not directly related to enforcing the law, such as issuing firearms licenses. The extent to which police have these functions varies among countries, with police in France, Germany, and other continental European countries handling such tasks to a greater extent than their British counterparts. Military Military police may refer to: a section of the military solely responsible for policing the armed forces, referred to as provosts (e.g., the United States Air Force Security Forces); a section of the military responsible for policing both the armed forces and the civilian population (e.g., most gendarmeries, such as the French Gendarmerie, the Italian Carabinieri, the Spanish Guardia Civil, and the Portuguese National Republican Guard); a section of the military solely responsible for policing the civilian population (e.g., the Romanian Gendarmerie); the civilian preventive police of a Brazilian state (e.g., the Policia Militar); or a special military law enforcement service (e.g., the Russian Military Police). Religious Some jurisdictions with religious laws may have dedicated religious police to enforce said laws. These religious police forces, which may operate either as a unit of a wider police force or as an independent agency, may only have jurisdiction over members of said religion, or they may have the ability to enforce religious customs nationwide regardless of individual religious beliefs. Religious police may enforce social norms, gender roles, dress codes, and dietary laws per religious doctrine and laws, and may also prohibit practices that run contrary to said doctrine, such as atheism, proselytism, homosexuality, socialization between different genders, business operations during religious periods or events such as salah or the Sabbath, or the sale and possession of "offending material" ranging from pornography to foreign media. Forms of religious law enforcement were relatively common in historical religious civilizations, but eventually declined in favor of religious tolerance and pluralism. One of the most common forms of religious police in the modern world is Islamic religious police, which enforce the application of Sharia (Islamic religious law). As of 2018, there are eight Islamic countries that maintain Islamic religious police: Afghanistan, Iran, Iraq, Mauritania, Pakistan, Saudi Arabia, Sudan, and Yemen. Some forms of religious police may not enforce religious law, but rather suppress religion or religious extremism. This is often done for ideological reasons; for example, communist states such as China and Vietnam have historically suppressed and tightly controlled religions such as Christianity. 
Secret Secret police organizations are typically used to suppress dissidents for engaging in non-politically correct communications and activities, which are deemed counter-productive to what the state and related establishment promote. Secret police interventions to stop such activities are often illegal, and are designed to debilitate, in various ways, the people targeted in order to limit or stop outright their ability to act in a non-politically correct manner. The methods employed may involve spying, various acts of deception, intimidation, framing, false imprisonment, false incarceration under mental health legislation, and physical violence. Countries widely reported to use secret police organizations include China (The Ministry of State Security) and North Korea (The Ministry of State Security). By country Police forces are usually organized and funded by some level of government. The level of government responsible for policing varies from place to place, and may be at the national, regional or local level. Some countries have police forces that serve the same territory, with their jurisdiction depending on the type of crime or other circumstances. Other countries, such as Austria, Chile, Israel, New Zealand, the Philippines, South Africa and Sweden, have a single national police force. In some places with multiple national police forces, one common arrangement is to have a civilian police force and a paramilitary gendarmerie, such as the Police Nationale and National Gendarmerie in France. The French policing system spread to other countries through the Napoleonic Wars and the French colonial empire. Another example is the Policía Nacional and Guardia Civil in Spain. In both France and Spain, the civilian force polices urban areas and the paramilitary force polices rural areas. Italy has a similar arrangement with the Polizia di Stato and Carabinieri, though their jurisdictions overlap more. Some countries have separate agencies for uniformed police and detectives, such as the Military Police and Civil Police in Brazil and the Carabineros and Investigations Police in Chile. Other countries have sub-national police forces, but for the most part their jurisdictions do not overlap. In many countries, especially federations, there may be two or more tiers of police force, each serving different levels of government and enforcing different subsets of the law. In Australia and Germany, the majority of policing is carried out by state (i.e. provincial) police forces, which are supplemented by a federal police force. Though not a federation, the United Kingdom has a similar arrangement, where policing is primarily the responsibility of a regional police force and specialist units exist at the national level. In Canada, the Royal Canadian Mounted Police (RCMP) are the federal police, while municipalities can decide whether to run a local police service or to contract local policing duties to a larger one. Most urban areas have a local police service, while most rural areas contract it to the RCMP, or to the provincial police in Ontario and Quebec. The United States has a highly decentralized and fragmented system of law enforcement, with over 17,000 state and local law enforcement agencies. These agencies include local police, county law enforcement (often in the form of a sheriff's office, or county police), state police and federal law enforcement agencies. Federal agencies, such as the FBI, only have jurisdiction over federal crimes or those that involve more than one state. 
Other federal agencies have jurisdiction over a specific type of crime. Examples include the Federal Protective Service, which patrols and protects government buildings; the Postal Inspection Service, which protects United States Postal Service facilities, vehicles and items; the Park Police, which protects national parks; and the Amtrak Police, which patrols Amtrak stations and trains. There are also some government agencies and uniformed services that perform police functions in addition to other duties, such as the Coast Guard. International Most countries are members of the International Criminal Police Organization (Interpol), established to detect and fight transnational crime and provide for international co-operation and co-ordination of other police activities, such as notifying relatives of the death of foreign nationals. Interpol does not conduct investigations or arrests by itself, but only serves as a central point for information on crime, suspects and criminals. Political crimes are excluded from its competencies. The terms international policing, transnational policing, and/or global policing began to be used from the early 1990s onwards to describe forms of policing that transcended the boundaries of the sovereign nation-state. These terms refer in variable ways to practices and forms for policing that, in some sense, transcend national borders. This includes a variety of practices, but international police cooperation, criminal intelligence exchange between police agencies working in different nation-states, and police development-aid to weak, failed or failing states are the three types that have received the most scholarly attention. Historical studies reveal that policing agents have undertaken a variety of cross-border police missions for many years. For example, in the 19th century a number of European policing agencies undertook cross-border surveillance because of concerns about anarchist agitators and other political radicals. A notable example of this was the occasional surveillance by Prussian police of Karl Marx during the years he remained resident in London. Cross-border co-operation between public police agencies in the control of political radicalism and ordinary crime was primarily initiated in Europe, which eventually led to the establishment of Interpol before World War II. There are also many interesting examples of cross-border policing under private auspices and by municipal police forces that date back to the 19th century. It has been established that modern policing has transgressed national boundaries from time to time almost from its inception. It is also generally agreed that in the post–Cold War era this type of practice became more significant and frequent. Few empirical works on the practices of inter/transnational information and intelligence sharing have been undertaken. A notable exception is James Sheptycki's study of police cooperation in the English Channel region, which provides a systematic content analysis of information exchange files and a description of how these transnational information and intelligence exchanges are transformed into police casework. The study showed that transnational police information sharing was routinized in the cross-Channel region from 1968 on the basis of agreements directly between the police agencies and without any formal agreement between the countries concerned. 
By 1992, with the signing of the Schengen Treaty, which formalized aspects of police information exchange across the territory of the European Union, there were worries that much, if not all, of this intelligence sharing was opaque, raising questions about the efficacy of the accountability mechanisms governing police information sharing in Europe. Studies of this kind outside of Europe are even rarer, so it is difficult to make generalizations, but one small-scale study that compared transnational police information and intelligence sharing practices at specific cross-border locations in North America and Europe confirmed that the low visibility of police information and intelligence sharing was a common feature. Intelligence-led policing is now common practice in most advanced countries and it is likely that police intelligence sharing and information exchange has a common morphology around the world. James Sheptycki has analyzed the effects of the new information technologies on the organization of policing-intelligence and suggests that a number of "organizational pathologies" have arisen that make the functioning of security-intelligence processes in transnational policing deeply problematic. He argues that transnational police information circuits help to "compose the panic scenes of the security-control society". The paradoxical effect is that, the harder policing agencies work to produce security, the greater are feelings of insecurity. Police development-aid to weak, failed or failing states is another form of transnational policing that has garnered attention. This form of transnational policing plays an increasingly important role in United Nations peacekeeping and this looks set to grow in the years ahead, especially as the international community seeks to develop the rule of law and reform security institutions in states recovering from conflict. With transnational police development-aid the imbalances of power between donors and recipients are stark and there are questions about the applicability and transportability of policing models between jurisdictions. One topic concerns making transnational policing institutions democratically accountable. According to the Global Accountability Report for 2007, Interpol had the lowest scores in its category (IGOs), coming in tenth with a score of 22% on overall accountability capabilities. Overseas policing A police force may establish its presence in a foreign country with or without the permission of the host state. In the case of China and the ruling Communist Party, this has involved setting up unofficial police service stations around the world, and using coercive means to influence the behaviour of members of the Chinese diaspora and especially those who hold Chinese citizenship. Political dissidents have been harassed and intimidated in a form of transnational repression and convinced to return to China. Many of these actions were illegal in the states where they occurred. Such police stations have been established in dozens of countries around the world, with some, such as the UK and the US, forcing them to close. Equipment Weapons In many jurisdictions, police officers carry firearms, primarily handguns, in the normal course of their duties. In the United Kingdom (except Northern Ireland), Iceland, Ireland, Norway, New Zealand, and Malta, with the exception of specialist units, officers do not carry firearms as a matter of course. 
Norwegian police carry firearms in their vehicles, but not on their duty belts, and must obtain authorization before the weapons can be removed from the vehicle. Police often have specialized units for handling armed offenders or dangerous situations where combat is likely, such as police tactical units or authorised firearms officers. In some jurisdictions, depending on the circumstances, police can call on the military for assistance, as military aid to the civil power is an aspect of many armed forces. Perhaps the most high-profile example of this was in 1980, when the British Army's Special Air Service was deployed to resolve the Iranian Embassy siege on behalf of the Metropolitan Police. Police can also be armed with "non-lethal" (more accurately known as "less than lethal" or "less-lethal" given that they can still be deadly) weaponry, particularly for riot control, or to inflict pain on a resistant suspect to force them to surrender without lethally wounding them. Non-lethal weapons include batons, tear gas, riot control agents, rubber bullets, riot shields, water cannons, and electroshock weapons. Police officers typically carry handcuffs to restrain suspects. The use of firearms or deadly force is typically a last resort, to be used only when necessary to save their own lives or the lives of others, though some jurisdictions (such as Brazil) allow its use against fleeing felons and escaped convicts. Police officers in the United States are generally allowed to use deadly force if they believe their life is in danger, a policy that has been criticized for being vague. South African police have a "shoot-to-kill" policy, which allows officers to use deadly force against any person who poses a significant threat to them. As the country has one of the highest rates of violent crime, President Jacob Zuma stated that South Africa needs to handle crime differently from other countries. Communications Modern police forces make extensive use of two-way radio communications equipment, carried both on the person and installed in vehicles, to coordinate their work, share information, and get help quickly. Vehicle-installed mobile data terminals enhance police communications, making it easier to dispatch calls, complete criminal background checks on persons of interest in a matter of seconds, and update officers' daily activity logs and other required reports in real time. Other common pieces of police equipment include flashlights, whistles, police notebooks, and "ticket books" or citations. Some police departments have developed advanced computerized data display and communication systems to bring real-time data to officers, one example being the NYPD's Domain Awareness System. Vehicles Police vehicles are used for detaining, patrolling, and transporting over wide areas that an officer could not effectively cover otherwise. The average police car used for standard patrol is a four-door sedan, SUV, or CUV, often modified by the manufacturer or police force's fleet services to provide better performance. Pickup trucks, off-road vehicles, and vans are often used in utility roles, though in some jurisdictions or situations (such as those where dirt roads are common, off-roading is required, or the nature of the officer's assignment necessitates it), they may be used as standard patrol cars. 
Sports cars are generally not used by police due to cost and maintenance issues; those that are used are typically assigned only to traffic enforcement or community policing, and are rarely, if ever, assigned to standard patrol or authorized to respond to dangerous calls (such as armed calls or pursuits) where the likelihood of the vehicle being damaged or destroyed is high. Police vehicles are usually marked with appropriate symbols and equipped with sirens and flashing emergency lights to make others aware of police presence or response; in most jurisdictions, police vehicles with their sirens and emergency lights on have right of way in traffic, and in some jurisdictions emergency lights may be kept on while patrolling to ensure visibility. Unmarked or undercover police vehicles are used primarily for traffic enforcement or apprehending criminals without alerting them to their presence. The use of unmarked police vehicles for traffic enforcement is controversial, with the state of New York banning this practice in 1996 on the grounds that it endangered motorists who might be pulled over by police impersonators. Motorcycles, historically a mainstay of police fleets, are commonly used, particularly in locations that a car may not be able to reach, to control potential public order situations involving gatherings of motorcyclists, and often in police escorts, where motorcycle officers can quickly clear a path for escorted vehicles. Bicycle patrols are used in some areas, often downtown areas or parks, because they allow for wider and faster area coverage than officers on foot. Bicycles are also commonly used by riot police to create makeshift barricades against protesters. Police aviation consists of helicopters and fixed-wing aircraft, while police watercraft tend to consist of RHIBs, motorboats, and patrol boats. SWAT vehicles are used by police tactical units, and often consist of four-wheeled armored personnel carriers used to transport tactical teams while providing armored cover, equipment storage space, or makeshift battering ram capabilities; these vehicles are typically unarmed, do not patrol, and are used only for transport. Mobile command posts may also be used by some police forces to establish identifiable command centers at the scene of major situations. Police cars may contain issued long guns, ammunition for issued weapons, less-lethal weaponry, riot control equipment, traffic cones, road flares, physical barricades or barricade tape, fire extinguishers, first aid kits, or defibrillators. Strategies The advent of the police car, two-way radio, and telephone in the early 20th century transformed policing into a reactive strategy focused on responding to calls for service away from officers' beats. With this transformation, police command and control became more centralized. In the United States, August Vollmer introduced other reforms, including education requirements for police officers. O.W. Wilson, a student of Vollmer, helped reduce corruption and introduce professionalism in Wichita, Kansas, and later in the Chicago Police Department. Strategies employed by O.W. Wilson included rotating officers from community to community to reduce their vulnerability to corruption, establishing a non-partisan police board to help govern the police force, instituting a strict merit system for promotions within the department, and mounting an aggressive recruiting drive with higher police salaries to attract professionally qualified officers. 
During the professionalism era of policing, law enforcement agencies concentrated on dealing with felonies and other serious crime and conducting visible car patrols in between, rather than a broader focus on crime prevention. The Kansas City Preventive Patrol study in the early 1970s showed flaws in using visible car patrols for crime prevention. It found that aimless car patrols did little to deter crime and often went unnoticed by the public. Patrol officers in cars had insufficient contact and interaction with the community, leading to a social rift between the two. In the 1980s and 1990s, many law enforcement agencies began to adopt community policing strategies, and others adopted problem-oriented policing. Broken windows policing was another, related approach, introduced in the 1980s by James Q. Wilson and George L. Kelling, who suggested that police should pay greater attention to minor "quality of life" offenses and disorderly conduct. The concept behind this method is simple: broken windows, graffiti, and other physical destruction or degradation of property create an environment in which crime and disorder are more likely. The presence of broken windows and graffiti sends a message that authorities do not care and are not trying to correct problems in these areas. Therefore, correcting these small problems prevents more serious criminal activity. The theory was popularised in the early 1990s by police chief William J. Bratton and New York City Mayor Rudy Giuliani. It was emulated in the 2010s in Kazakhstan through zero-tolerance policing, yet it failed to produce meaningful results there because citizens distrusted the police while state leaders preferred police loyalty over good police behavior. Building upon these earlier models, intelligence-led policing has also become an important strategy. Intelligence-led policing and problem-oriented policing are complementary strategies, both of which involve systematic use of information. Although it still lacks a universally accepted definition, the crux of intelligence-led policing is an emphasis on the collection and analysis of information to guide police operations, rather than the reverse. A related development is evidence-based policing. In a similar vein to evidence-based policy, evidence-based policing is the use of controlled experiments to find which methods of policing are more effective. Leading advocates of evidence-based policing include the criminologist Lawrence W. Sherman and philanthropist Jerry Lee. Findings from controlled experiments include the Minneapolis Domestic Violence Experiment, evidence that patrols deter crime if they are concentrated in crime hotspots, and evidence that restricting police powers to shoot suspects does not cause an increase in crime or violence against police officers. Use of experiments to assess the usefulness of strategies has been endorsed by many police services and institutions, including the U.S. Police Foundation and the UK College of Policing. Power restrictions In many nations, criminal procedure law has been developed to regulate officers' discretion, so that they do not arbitrarily or unjustly exercise their powers of arrest, search and seizure, and use of force. In the United States, Miranda v. Arizona led to the widespread use of Miranda warnings or constitutional warnings. In Miranda, the court created safeguards against self-incriminating statements made after an arrest. 
The court held that "The prosecution may not use statements, whether exculpatory or inculpatory, stemming from questioning initiated by law enforcement officers after a person has been taken into custody or otherwise deprived of his freedom of action in any significant way, unless it demonstrates the use of procedural safeguards effective to secure the Fifth Amendment's privilege against self-incrimination" Police in the United States are also prohibited from holding criminal suspects for more than a reasonable amount of time (usually 24–48 hours) before arraignment, using torture, abuse or physical threats to extract confessions, using excessive force to effect an arrest, and searching suspects' bodies or their homes without a warrant obtained upon a showing of probable cause. The four exceptions to the constitutional requirement of a search warrant are: Consent Search incident to arrest Motor vehicle searches Exigent circumstances In Terry v. Ohio (1968) the court divided seizure into two parts, the investigatory stop and arrest. The court further held that during an investigatory stop a police officer's search " [is] confined to what [is] minimally necessary to determine whether [a suspect] is armed, and the intrusion, which [is] made for the sole purpose of protecting himself and others nearby, [is] confined to ascertaining the presence of weapons" (U.S. Supreme Court). Before Terry, every police encounter constituted an arrest, giving the police officer the full range of search authority. Search authority during a Terry stop (investigatory stop) is limited to weapons only. Using deception for confessions is permitted, but not coercion. There are exceptions or exigent circumstances such as an articulated need to disarm a suspect or searching a suspect who has already been arrested (Search Incident to an Arrest). The Posse Comitatus Act severely restricts the use of the military for police activity, giving added importance to police SWAT units. British police officers are governed by similar rules, such as those introduced to England and Wales under the Police and Criminal Evidence Act 1984 (PACE), but generally have greater powers. They may, for example, legally search any suspect who has been arrested, or their vehicles, home or business premises, without a warrant, and may seize anything they find in a search as evidence. All police officers in the United Kingdom, whatever their actual rank, are 'constables' in terms of their legal position. This means that a newly appointed constable has the same arrest powers as a Chief Constable or Commissioner. However, certain higher ranks have additional powers to authorize certain aspects of police operations, such as a power to authorize a search of a suspect's house (section 18 PACE in England and Wales) by an officer of the rank of Inspector, or the power to authorize a suspect's detention beyond 24 hours by a Superintendent. Conduct, accountability and public confidence Police services commonly include units for investigating crimes committed by the police themselves. These units are typically called internal affairs or inspectorate-general units. In some countries separate organizations outside the police exist for such purposes, such as the British Independent Office for Police Conduct. However, due to American laws around qualified immunity, it has become increasingly difficult to investigate and charge police misconduct and crimes. 
Likewise, some state and local jurisdictions, for example, Springfield, Illinois have similar outside review organizations. The Police Service of Northern Ireland is investigated by the Police Ombudsman for Northern Ireland, an external agency set up as a result of the Patten report into policing the province. In the Republic of Ireland the Garda Síochána is investigated by the Garda Síochána Ombudsman Commission, an independent commission that replaced the Garda Complaints Board in May 2007. The Special Investigations Unit of Ontario, Canada, is one of only a few civilian agencies around the world responsible for investigating circumstances involving police and others that have resulted in a death, serious injury, or allegations of sexual assault. The agency has made allegations of insufficient cooperation from various police services hindering their investigations. In Hong Kong, any allegations of corruption within the police are investigated by the Independent Commission Against Corruption and the Independent Police Complaints Council, two agencies which are independent of the police force. In the United States, body cameras are often worn by police officers to record their interactions with the public and each other, providing audiovisual recorded evidence for review in the event an officer or agency's actions are investigated. Use of force Police forces also find themselves under criticism for their use of force, particularly deadly force. Specifically, tension increases when a police officer of one ethnic group harms or kills a suspect of another one. In the United States, such events occasionally spark protests and accusations of racism against police and allegations that police departments practice racial profiling. Similar incidents have also happened in other countries. In the United States since the 1960s, concern over such issues has increasingly weighed upon law enforcement agencies, courts and legislatures at every level of government. Incidents such as the 1965 Watts riots, the videotaped 1991 beating by LAPD officers of Rodney King, and the riot following their acquittal have been suggested by some people to be evidence that U.S. police are dangerously lacking in appropriate controls. The fact that this trend has occurred contemporaneously with the rise of the civil rights movement, the "War on Drugs", and a precipitous rise in violent crime from the 1960s to the 1990s has made questions surrounding the role, administration and scope of police authority increasingly complicated. Police departments and the local governments that oversee them in some jurisdictions have attempted to mitigate some of these issues through community outreach programs and community policing to make the police more accessible to the concerns of local communities, by working to increase hiring diversity, by updating training of police in their responsibilities to the community and under the law, and by increased oversight within the department or by civilian commissions. In cases in which such measures have been lacking or absent, civil lawsuits have been brought by the United States Department of Justice against local law enforcement agencies, authorized under the 1994 Violent Crime Control and Law Enforcement Act. This has compelled local departments to make organizational changes, enter into consent decree settlements to adopt such measures, and submit to oversight by the Justice Department. 
In May 2020, a global movement to increase scrutiny of police violence grew in popularity, starting in Minneapolis, Minnesota with the murder of George Floyd. Calls for defunding of the police and full abolition of the police gained larger support in the United States as more criticized systemic racism in policing. Critics also argue that sometimes this abuse of force or power can extend to police officer civilian life as well. For example, critics note that women in around 40% of police officer families have experienced domestic violence and that police officers are convicted of misdemeanors and felonies at a rate of more than six times higher than concealed carry weapon permit holders. Protection of individuals The Supreme Court of the United States has consistently ruled that law enforcement officers in the U.S. have no duty to protect any individual, only to enforce the law in general. This is despite the motto of many police departments in the U.S. being a variation of "protect and serve"; regardless, many departments generally expect their officers to protect individuals. The first case to make such a ruling was South v. State of Maryland in 1855, and the most recent was Town of Castle Rock v. Gonzales in 2005. In contrast, the police are entitled to protect private rights in some jurisdictions. To ensure that the police would not interfere in the regular competencies of the courts of law, some police acts require that the police may only interfere in such cases where protection from courts cannot be obtained in time, and where, without interference of the police, the realization of the private right would be impeded. This would, for example, allow police to establish a restaurant guest's identity and forward it to the innkeeper in a case where the guest cannot pay the bill at nighttime because his wallet had just been stolen from the restaurant table. In addition, there are federal law enforcement agencies in the United States whose mission includes providing protection for executives such as the president and accompanying family members, visiting foreign dignitaries, and other high-ranking individuals. Such agencies include the U.S. Secret Service and the U.S. Park Police. See also Chief of police Criminal citation Criminal justice Fraternal Order of Police Highway patrol Law enforcement agency Militsiya Officer Down Memorial Page Police academy Police car Police certificate Police foundation Police science Police state Police training officer Private police Public administration Public security Riot police State police Vigilante Women in law enforcement Lists List of basic law enforcement topics List of countries by size of police forces List of law enforcement agencies List of protective service agencies Police rank References Further reading Mitrani, Samuel (2014). The Rise of the Chicago Police Department: Class and Conflict, 1850–1894. University of Illinois Press, 272 pages. Interview with Sam Mitrani: "The Function of Police in Modern Society: Peace or Control?" (January 2015), The Real News External links United Nations Police Division Crime prevention Law enforcement Legal professions National security Public safety Security Surveillance
23628
https://en.wikipedia.org/wiki/PDP-10
PDP-10
Digital Equipment Corporation (DEC)'s PDP-10, later marketed as the DECsystem-10, is a mainframe computer family manufactured beginning in 1966 and discontinued in 1983. 1970s models and beyond were marketed under the DECsystem-10 name, especially as the TOPS-10 operating system became widely used. The PDP-10's architecture is almost identical to that of DEC's earlier PDP-6, sharing the same 36-bit word length and slightly extending the instruction set. The main difference was a greatly improved hardware implementation. Some aspects of the instruction set are unusual, most notably the byte instructions, which operate on bit fields of any size from 1 to 36 bits inclusive, according to the general definition of a byte as a contiguous sequence of a fixed number of bits. The PDP-10 was found in many university computing facilities and research labs during the 1970s, the most notable being Harvard University's Aiken Computation Laboratory, MIT's AI Lab and Project MAC, Stanford's SAIL, Computer Center Corporation (CCC), ETH (ZIR), and Carnegie Mellon University. Its main operating systems, TOPS-10 and TENEX, were used to build out the early ARPANET. For these reasons, the PDP-10 looms large in early hacker folklore. Projects to extend the PDP-10 line were eclipsed by the success of the unrelated VAX superminicomputer, and the cancellation of the PDP-10 line was announced in 1983. According to reports, DEC sold "about 1500 DECsystem-10s by the end of 1980." Models and technical evolution The original PDP-10 processor is the KA10, introduced in 1968. It uses discrete transistors packaged in DEC's Flip-Chip technology, with backplanes wire wrapped via a semi-automated manufacturing process. Its cycle time is 1 μs and its add time 2.1 μs. In 1973, the KA10 was replaced by the KI10, which uses transistor–transistor logic (TTL) SSI. This was joined in 1975 by the higher-performance KL10 (later faster variants), which is built from emitter-coupled logic (ECL), microprogrammed, and has cache memory. The KL10's performance was about 1 megaflops using 36-bit floating point numbers on matrix row reduction. It was slightly faster than the newer VAX-11/750, although more limited in memory. A smaller, less expensive model, the KS10, was introduced in 1978, using TTL and Am2901 bit-slice components and including the PDP-11 Unibus to connect peripherals. The KS10 was marketed as the DECSYSTEM-2020, part of the DECSYSTEM-20 range; it was DEC's entry in the distributed processing arena, and it was introduced as "the world's lowest cost mainframe computer system." KA10 The KA10 has a maximum main memory capacity (both virtual and physical) of 256 kilowords (equivalent to 1152 kilobytes); the minimum main memory required is 16 kilowords. As supplied by DEC, it did not include paging hardware; memory management consists of two sets of protection and relocation registers, called base and bounds registers. This allows each half of a user's address space to be limited to a set section of main memory, designated by the base physical address and size. This allows the model of separate read-only shareable code segment (normally the high segment) and read-write data/stack segment (normally the low segment) used by TOPS-10 and later adopted by Unix. Some KA10 machines, first at MIT, and later at Bolt, Beranek and Newman (BBN), were modified to add virtual memory and support for demand paging, and more physical memory. The KA10 weighs about . 
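The two-set protection and relocation scheme described above can be illustrated with a minimal Python sketch; the class and helper names here are purely illustrative (not DEC terminology), and details such as trapping to the monitor and the exact register formats are omitted:

```python
class Segment:
    """One protection/relocation pair: a physical base address and a size (bound)."""
    def __init__(self, base, size):
        self.base = base
        self.size = size

def ka10_translate(addr, low_seg, high_seg):
    """Map an 18-bit user address onto physical memory using two base/bounds
    pairs, one per half of the 256-kiloword user address space.  On real
    hardware an out-of-bounds reference traps to the monitor; here it simply
    raises an exception."""
    half = 0o400000                       # 2**17, start of the high segment
    seg, offset = (low_seg, addr) if addr < half else (high_seg, addr - half)
    if offset >= seg.size:
        raise MemoryError("address outside segment bound")
    return seg.base + offset
```

Under this model, a shared read-only high segment simply means that every process is given the same high-segment base and bound.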
The 10/50 was the top-of-the-line uniprocessor KA machine at the time when the PA1050 software package was introduced. Two other KA10 models were the uniprocessor 10/40 and the dual-processor 10/55. KI10 The KI10 introduced support for paged memory management, and also support for a larger physical address space of 4 megawords. KI10 models include 1060, 1070 and 1077, the latter incorporating two CPUs. KL10 The original KL10 PDP-10 (also marketed as DECsystem-10) models (1080, 1088, etc.) use the original PDP-10 memory bus, with external memory modules. A module in this context was a cabinet, roughly 30 x 75 x 30 in (W x H x D), with a capacity of 32 to 256 kilowords of magnetic-core memory. The processors used in the DECSYSTEM-20 (2040, 2050, 2060, 2065), commonly but incorrectly called "KL20", use internal memory, mounted in the same cabinet as the CPU. The 10xx models also have different packaging; they come in the original tall PDP-10 cabinets, rather than the short ones used later on for the DECSYSTEM-20. The differences between the 10xx and 20xx models were primarily which operating system they ran, either TOPS-10 or TOPS-20. Apart from that, differences are more cosmetic than real; some 10xx systems have "20-style" internal memory and I/O, and some 20xx systems have "10-style" external memory and an I/O bus. In particular, all ARPAnet TOPS-20 systems had an I/O bus because the AN20 IMP interface was an I/O bus device. Both could run either TOPS-10 or TOPS-20 microcode and thus the corresponding operating system. Model B The later Model B version of the 2060 processors removes the 256 kiloword limit on the virtual address space by supporting up to 32 "sections" of up to 256 kilowords each, along with substantial changes to the instruction set. The two versions are effectively different CPUs. The first operating system that takes advantage of the Model B's capabilities is TOPS-20 release 3, and user-mode extended addressing is offered in TOPS-20 release 4. TOPS-20 versions after release 4.1 only run on a Model B. TOPS-10 versions 7.02 and 7.03 also use extended addressing when run on a 1090 (or 1091) Model B processor running TOPS-20 microcode. MCA25 The final upgrade to the KL10 was the MCA25 upgrade of a 2060 to a 2065 (or a 1091 to a 1095), which gave some performance increases for programs that run in multiple sections. Massbus The I/O architecture of the 20xx series KL machines is based on a DEC bus design called the Massbus. While many attributed the success of the PDP-11 to DEC's decision to make the PDP-11 Unibus an open architecture, DEC reverted to its prior philosophy with the KL, making the Massbus both unique and proprietary. Consequently, there were no aftermarket peripheral manufacturers who made devices for the Massbus, and DEC chose to price its own Massbus devices, notably the RP06 disk drive, at a substantial premium above comparable IBM-compatible devices. CompuServe, for one, designed its own alternative disk controller that could operate on the Massbus but connect to IBM-style 3330 disk subsystems. Front-end processors The KL-class machines have a PDP-11/40 front-end processor for system start-up and monitoring. The PDP-11 is booted from a dual-ported RP06 disk drive (or alternatively from an 8" floppy disk drive or DECtape), and then commands can be given to the PDP-11 to start the main processor, which is typically booted from the same RP06 disk drive as the PDP-11. The PDP-11 performs watchdog functions once the main processor is running. 
Communication with IBM mainframes, including Remote Job Entry (RJE), was accomplished via a DN61 or DN64 front-end processor, using a PDP-11/40 or PDP-11/34a. KS10 The KS10 is a lower-cost PDP-10 built using AMD 2901 bit-slice chips, with an Intel 8080A microprocessor as a control processor. The KS10 design was crippled to be a Model A, even though most of the data paths needed to support the Model B architecture are present. This was no doubt intended to segment the market, but it greatly shortened the KS10's product life. The KS system uses a similar boot procedure to the KL10. The 8080 control processor loads the microcode from an RM03, RM80, or RP06 disk or from magnetic tape and then starts the main processor. The 8080 switches modes after the operating system boots and controls the console and remote diagnostic serial ports. Magnetic tape drives The TM10 Magnetic Tape Control subsystem supported the following tape drives: TU20 Magnetic Tape Transport – 45 ips (inches/second) TU30 Magnetic Tape Transport – 75 ips (inches/second) TU45 Magnetic Tape Transport – 75 ips (inches/second) A mix of up to eight of these could be supported, using seven-track or nine-track devices. The TU20 and TU30 each came in A (9-track) and B (7-track) versions, and all of the aforementioned tape drives could read and write 200 BPI, 556 BPI and 800 BPI IBM-compatible tapes. The TM10 Magtape controller was available in two submodels: the TM10A did cycle-stealing to and from PDP-10 memory using the KA10 Arithmetic Processor, while the TM10B accessed PDP-10 memory through a DF10 Data Channel, without "cycle stealing" from the KA10 Arithmetic Processor. Instruction set architecture From the first PDP-6s to the KL10 and KS10, the user-mode instruction set architecture is largely the same. This section covers that architecture. The only major change to the architecture is the addition of multi-section extended addressing in the KL10; extended addressing, which changes the process of generating the effective address of an instruction, is briefly discussed at the end. Generally, the system has 36-bit words and instructions, and 18-bit addresses. Registers There are 16 general-purpose, 36-bit registers. The right half of these registers (other than register 0) may be used for indexing. A few instructions operate on pairs of registers. The "PC Word" register is split in half; the right 18 bits contain the program counter and the left 13 bits contain the processor status flags, with five zeros between the two sections. The condition register bits, which record the results of arithmetic operations (e.g. overflow), can be accessed by only a few instructions. In the original KA10 systems, these registers are simply the first 16 words of main memory. The "fast registers" hardware option implements them as registers in the CPU, still addressable as the first 16 words of memory. Some software takes advantage of this by using the registers as an instruction cache, loading code into the registers and then jumping to the appropriate address; this is used, for example, in Maclisp to implement one version of the garbage collector. Later models all have registers in the CPU. Supervisor mode There are two operational modes, supervisor and user mode. Besides the difference in memory referencing described above, supervisor-mode programs can execute input/output operations. 
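A minimal Python sketch of the word and register formats described above, using ordinary integers as stand-ins for 36-bit machine words; the helper names are illustrative rather than DEC mnemonics:

```python
MASK18 = 0o777777     # 18-bit halfword mask
MASK13 = 0o17777      # 13-bit flag-field mask

def left_half(word):
    """Bits 0-17 (the left halfword) of a 36-bit word."""
    return (word >> 18) & MASK18

def right_half(word):
    """Bits 18-35 (the right halfword) of a 36-bit word."""
    return word & MASK18

def pack_pc_word(flags, pc):
    """Assemble a PC word as described above: 13 processor-status flag bits
    on the left, five zero bits, then the 18-bit program counter."""
    return ((flags & MASK13) << 23) | (pc & MASK18)
```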
Communication from user-mode to supervisor-mode is done through Unimplemented User Operations (UUOs): instructions which are not defined by the hardware, and are trapped by the supervisor. This mechanism is also used to emulate operations which may not have hardware implementations in cheaper models. Data types The major datatypes which are directly supported by the architecture are two's complement 36-bit integer arithmetic (including bitwise operations), 36-bit floating-point, and halfwords. Extended, 72-bit, floating point is supported through special instructions designed to be used in multi-instruction sequences. Byte pointers are supported by special instructions. A word structured as a "count" half and a "pointer" half facilitates the use of bounded regions of memory, notably stacks. Instructions Instructions are stored in 36-bit words. There are two formats, general instructions and input/output instructions. In general instructions, the leftmost 9 bits, 0 to 8, contain an instruction opcode. Many of the possible 512 codes are not defined in the base model machines and are reserved for expansion like the addition of a hardware floating point unit. Following the opcode in bits 9 to 12 is the number of a register which will be used for the instruction. The input/output instructions all start with bits 0 through 2 being set to 1 (decimal value 7), bits 3 through 9 containing a device number, and 10 through 12 the instruction opcode. In both formats, bits 13 through 35 are used to form the "effective address", E. Bits 18 through 35 contain a numerical constant address, Y. This address may be modified by adding the 18-bit value in a register, X, the register number indicated in bits 14 to 17. If these are set to zero, no indexing is used, meaning register 0 cannot be used for indexing. Bit 13, I, indicates indirection, meaning the ultimate effective address used by the instruction is not E, but the address stored in memory location E. When using indirection, the data in word E is interpreted in the same way as the layout of the instruction; bits 0 to 12 are ignored, and 13 through 35 form I, X and Y as above. Instruction execution begins by calculating E. It adds the contents of the given register X (if not 0) to the offset Y; then, if the indirect bit is 1, the value at E is fetched and the effective address calculation is repeated. If I is 1 in the stored value at E in memory, the system will then indirect through that address as well, possibly following many such steps. This process continues until an indirect word with a zero indirect bit is reached. Indirection of this sort was a common feature of processor designs of this era. In supervisor mode, addresses correspond directly to physical memory. In user mode, addresses are translated to physical memory. Earlier models give a user process a "high" and a "low" memory: addresses with a 0 top bit use one base register and those with a 1 use another. Each segment is contiguous. Later architectures have paged memory access, allowing non-contiguous address spaces. The CPU's general-purpose registers can also be addressed as memory locations 0–15. General instructions There are three main classes of general instructions: arithmetic, logical, and move; conditional jump; conditional skip (which may have side effects). There are also several smaller classes. 
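The effective-address calculation described above can be expressed as a short Python sketch. It assumes a simplified single-section machine with 18-bit addresses, ignores faults and timing, and uses `memory` (a mapping from addresses to 36-bit words) and `regs` (the 16-entry register file) as stand-ins:

```python
MASK18 = 0o777777     # 18-bit address mask

def effective_address(instr_word, memory, regs):
    """Compute the effective address E of an instruction word.

    Bit 13 is the indirect flag I, bits 14-17 select the index register X,
    and bits 18-35 hold the address constant Y (bit 0 is the leftmost bit).
    Indirection repeats the same decoding on each fetched word until a word
    with I = 0 is reached."""
    word = instr_word
    while True:
        i = (word >> 22) & 1          # bit 13
        x = (word >> 18) & 0o17       # bits 14-17
        y = word & MASK18             # bits 18-35
        e = y
        if x != 0:                    # register 0 cannot be used for indexing
            e = (e + (regs[x] & MASK18)) & MASK18
        if i == 0:
            return e
        word = memory[e]              # indirect: reinterpret bits 13-35 of this word
```

For example, a word with I = 0, X = 3 and Y = 0o1000 yields E equal to 0o1000 plus the right half of register 3, modulo 2^18.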
The arithmetic, logical, and move operations include variants which operate immediate-to-register, memory-to-register, register-to-memory, register-and-memory-to-both or memory-to-memory. Since registers may be addressed as part of memory, register-to-register operations are also defined. (Not all variants are useful, though they are well-defined.) For example, the ADD operation has as variants ADDI (add an 18-bit Immediate constant to a register), ADDM (add register contents to a Memory location), ADDB (add to Both, that is, add register contents to memory and also put the result in the register). A more elaborate example is HLROM (Half Left to Right, Ones to Memory), which takes the Left half of the register contents, places them in the Right half of the memory location, and replaces the left half of the memory location with Ones. Halfword instructions are also used for linked lists: HLRZ is the Lisp CAR operator; HRRZ is CDR. The conditional jump operations examine register contents and jump to a given location depending on the result of the comparison. The mnemonics for these instructions all start with JUMP, JUMPA meaning "jump always" and JUMP meaning "jump never" – as a consequence of the symmetric design of the instruction set, it contains several no-ops such as JUMP. For example, JUMPN A,LOC jumps to the address LOC if the contents of register A is non-zero. There are also conditional jumps based on the processor's condition register using the JRST instruction. On the KA10 and KI10, JRST is faster than JUMPA, so the standard unconditional jump is JRST. The conditional skip operations compare register and memory contents and skip the next instruction (which is often an unconditional jump) depending on the result of the comparison. A simple example is CAMN A,LOC which compares the contents of register A with the contents of location LOC and skips the next instruction if they are not equal. A more elaborate example is TLCE A,LOC (read "Test Left Complement, skip if Equal"), which using the contents of LOC as a mask, selects the corresponding bits in the left half of register A. If all those bits are Equal to zero, skip the next instruction; and in any case, replace those bits by their Boolean complement. Some smaller instruction classes include the shift/rotate instructions and the procedure call instructions. Particularly notable are the stack instructions PUSH and POP, and the corresponding stack call instructions PUSHJ and POPJ. The byte instructions use a special format of indirect word to extract and store arbitrary-sized bit fields, possibly advancing a pointer to the next unit. Input/output instructions The PDP-10 does not use memory-mapped devices, in contrast to the PDP-11 and later DEC machines. A separate set of instructions is used to move data to and from devices defined by a device number in the instruction. Bits 3 to 9 contain the device number, with the 7 bits allowing a total of 128 devices. Instructions allow for the movement of data to and from devices in word-at-a-time (DATAO and DATAI) or block-at-a-time (BLKO, BLKI). In block mode, the value pointed to by E is a word in memory that is split in two, the right 18 bits indicate a starting address in memory where the data is located (or written into) and the left 18 bits are a counter. The block instructions increment both values every time they are called, thereby increasing the counter as well as moving to the next location in memory. It then performs a DATAO or DATAI. 
Finally, it checks the counter side of the value at E, if it is non-zero, it skips the next instruction. If it is zero, it performs the next instruction, normally a JUMP back to the top of the loop. The BLK instructions are effectively small programs that loop over a DATA and increment instructions, but by having this implemented in the processor itself, it avoids the need to repeatedly read the series of instructions from main memory and thus performs the loop much more rapidly. The final set of I/O instructions are used to write and read condition codes on the device, CONO and CONI. Additionally, CONSZ will perform a CONI, bitmask the retrieved data against the value in E, and then skip the next instruction if it is zero, used in a fashion similar to the BLK commands. Only the right 18 bits are tested in CONSZ. Interrupt handling A second use of the CONO instruction is to set the device's priority level for interrupt handling. There are three bits in the CONO instruction, 33 through 35, allowing the device to be set to level 0 through 7. Level 1 is the highest, meaning that if two devices raise an interrupt at the same time, the lowest-numbered device will begin processing. Level 0 means "no interrupts", so a device set to level 0 will not stop the processor even if it does raise an interrupt. Each device channel has two memory locations associated with it, one at 40+2N and the other at 41+2N, where N is the channel number. Thus, channel 1 uses locations 42 and 43. When the interrupt is received and accepted, meaning no higher-priority interrupt is already running, the system stops at the next memory read part of the instruction cycle and instead begins processing at the address stored in the first of those two locations. It is up to the interrupt handler to turn off the interrupt level when it is complete, which it can do by running a CONO, DATA or BLK instruction. Two of the device numbers are set aside for special purposes. Device 0 is the computer's front-panel console; reading that device retrieves the settings of the panel switches while writing lights up the status lamps. Device 4 is the "priority interrupt", which can be read using CONI to gain additional information about an interrupt that has occurred. Extended addressing In processors supporting extended addressing, the address space is divided into "sections". An 18-bit address is a "local address", containing an offset within a section, and a "global address" is 30 bits, divided into a 12-bit section number at the bottom of the left 18 bits and an 18-bit offset within that section in the right 18 bits. A register can contain either a "local index", with an 18-bit unsigned displacement or local address in the right 18 bits, or a "global index", with a 30-bit unsigned displacement or global address in the right 30 bits. An indirect word can either be a "local indirect word", with its uppermost bit set, the next 12 bits reserved, and the remaining bits being an indirect bit, a 4-bit register code, and an 18-bit displacement, or a "global indirect word", with its uppermost bit clear, the next bit being an indirect bit, the next 4 bits being a register code, and the remaining 30 bits being a displacement. The process of calculating the effective address generates a 12-bit section number and an 18-bit offset within that segment. Software Operating systems The original PDP-10 operating system was simply called "Monitor", but was later renamed TOPS-10. Eventually the PDP-10 system itself was renamed the DECsystem-10. 
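Referring back to the extended-addressing formats described above, the following Python sketch shows how a 30-bit global address and a "global indirect word" decompose into section, index, and displacement fields. It illustrates only the bit layout, not a complete extended effective-address calculation, and the function names are illustrative:

```python
MASK18 = 0o777777          # 18-bit in-section offset mask
MASK30 = (1 << 30) - 1     # 30-bit global address/displacement mask

def make_global_address(section, offset):
    """Pack a 12-bit section number and an 18-bit offset into a 30-bit global address."""
    return ((section & 0o7777) << 18) | (offset & MASK18)

def split_global_address(addr):
    """Split a 30-bit global address into (section, offset)."""
    return (addr >> 18) & 0o7777, addr & MASK18

def decode_global_indirect(word):
    """Decode a global indirect word (36 bits, bit 0 clear): bit 1 is the
    indirect flag, bits 2-5 the index register, bits 6-35 a 30-bit displacement."""
    if (word >> 35) & 1:
        raise ValueError("bit 0 set: this is a local indirect word")
    indirect = (word >> 34) & 1
    index_reg = (word >> 30) & 0o17
    displacement = word & MASK30
    return indirect, index_reg, displacement
```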
Early versions of Monitor and TOPS-10 formed the basis of Stanford's WAITS operating system and the CompuServe time-sharing system. Over time, some PDP-10 operators began running operating systems assembled from major components developed outside DEC. For example, the main Scheduler might come from one university, the Disk Service from another, and so on. The commercial timesharing services such as CompuServe, On-Line Systems, Inc. (OLS), and Rapidata maintained sophisticated inhouse systems programming groups so that they could modify the operating system as needed for their own businesses without being dependent on DEC or others. There are also strong user communities such as DECUS through which users can share software that they have developed. BBN developed their own alternative operating system, TENEX, which fairly quickly became popular in the research community. DEC later ported TENEX to the KL10, enhanced it considerably, and named it TOPS-20, forming the DECSYSTEM-20 line. MIT, which had developed CTSS, Compatible Time-Sharing System to run on their IBM 709 (and later a modified IBM 7094 system), also developed ITS, Incompatible Timesharing System to run on their PDP-6 (and later a modified PDP-10); Tymshare developed TYMCOM-X, derived from TOPS-10 but using a page-based file system like TOPS-20. Programming languages DEC maintained DECsystem-10 FORTRAN IV (F40) for the PDP-10 from 1967 to 1975 MACRO-10 (assembly language macro compiler), COBOL, BASIC and AID were supported under the multi processing and swapping monitors. In practice a number of other programming environments were available including LISP and SNOBOL at the Hatfield Polytechnic site around 1970. Clones In 1971 to 1972, researchers at Xerox PARC were frustrated by top company management's refusal to let them buy a PDP-10. Xerox had just bought Scientific Data Systems (SDS) in 1969, and wanted PARC to use an SDS machine. Instead, a group led by Charles P. Thacker designed and constructed two PDP-10 clone systems named MAXC (pronounced as Max, in honour of Max Palevsky, who had sold SDS to Xerox) for their own use. MAXC was also a backronym for Multiple Access Xerox Computer. MAXC ran a modified version of TENEX. Third-party attempts to sell PDP-10 clones were relatively unsuccessful; see Foonly, Systems Concepts, and XKL. Use by CompuServe One of the largest collections of DECsystem-10 architecture systems ever assembled was at CompuServe, which, at its peak, operated over 200 loosely coupled systems in three data centers in Columbus, Ohio. CompuServe used these systems as 'hosts', providing access to commercial applications, and the CompuServe Information Service. While the first such systems were bought from DEC, when DEC abandoned the PDP-10 architecture in favor of the VAX, CompuServe and other PDP-10 customers began buying plug compatible computers from Systems Concepts. , CompuServe was operating a small number of PDP-10 architecture machines to perform some billing and routing functions. The main power supplies used in the KL-series machines were so inefficient that CompuServe engineers designed a replacement supply that used about half the energy. CompuServe offered to license the design for its KL supply to DEC for free if DEC would promise that any new KL bought by CompuServe would have the more efficient supply installed. DEC declined the offer. 
Another modification made to the PDP-10 by CompuServe engineers was replacing the hundreds of incandescent indicator lamps on the KI10 processor cabinet with LED lamp modules. The cost of conversion was easily offset by cost savings in electricity use, reduced heat, and labor needed to replace burned-out lamps. Digital followed this step all over the world. The picture on the right hand side shows the light panel of the MF10 memory which is contemporary with the KI10 CPU. This item is part of a computer museum, and was populated with LEDs in 2008 for demonstration purposes only. There were no similar banks of indicator lamps on KL and KS processors themselves - only on legacy memory and peripheral devices. Cancellation and influence The PDP-10 was eventually eclipsed by the VAX superminicomputer machines (descendants of the PDP-11) when DEC recognized that the PDP-10 and VAX product lines were competing with each other and decided to concentrate its software development effort on the more profitable VAX. The PDP-10 product line cancellation was announced in 1983, including cancelling the ongoing Jupiter project to produce a new high-end PDP-10 processor (despite that project being in good shape at the time of the cancellation) and the Minnow project to produce a desktop PDP-10, which may then have been at the prototyping stage. This event spelled the doom of ITS and the technical cultures that had spawned the original jargon file, but by the 1990s it had become something of a badge of honor among old-time hackers to have cut one's teeth on a PDP-10. The PDP-10 assembly language instructions LDB and DPB (load/deposit byte) live on as functions in the programming language Common Lisp. See the "References" section on the LISP article. The 36-bit word size of the PDP-6 and PDP-10 was influenced by the programming convenience of having 2 LISP pointers, each 18 bits, in one word. Will Crowther created Adventure, the prototypical computer adventure game, for a PDP-10. Don Daglow created the first computer baseball game (1971) and Dungeon (1975), the first role-playing video game on a PDP-10. Walter Bright originally created Empire for the PDP-10. Roy Trubshaw and Richard Bartle created the first MUD on a PDP-10. Zork was written on the PDP-10. Infocom used PDP-10s for game development and testing. Bill Gates and Paul Allen originally wrote Altair BASIC using an Intel 8080 simulator running on a PDP-10 at Harvard University. Allen repurposed the PDP-10 assembler as a cross assembler for the 8080 chip. They founded Microsoft shortly after. Emulation or simulation The software for simulation of historical computers, SIMH, contains modules to emulate all the PDP-10 CPU models on a Windows or Unix-based machine. Copies of DEC's original distribution tapes are available as downloads from the Internet so that a running TOPS-10 or TOPS-20 system may be established. ITS and WAITS are also available for SIMH. Ken Harrenstien's KLH10 software for Unix-like systems emulates a KL10B processor with extended addressing and 4 MW of memory or a KS10 processor with 512 KW of memory. The KL10 emulation supports v.442 of the KL10 microcode, which enables it to run the final versions of both TOPS-10 and TOPS-20. The KS10 emulation supports both ITS v.262 microcode for the final version of KS10 ITS and DEC v.130 microcode for the final versions of KS TOPS-10 and TOPS-20. See also ITS TENEX (operating system) TOPS-10 TOPS-20 WAITS Notes References Sources Further reading C. Gordon Bell, Alan Kotok, Thomas N. 
Hastings, Richard Hill, "The Evolution of the DECsystem 10", Communications of the ACM 21:1:44 (January 1978) , reprint in C. Gordon Bell, J. Craig Mudge, John E. McNamara, Computer Engineering: A DEC View of Hardware Systems Design (Digital Press, 1978, ) External links 36 Bits Forever! PDP-10 Models — Shows CPUs and models PDP-10 stuff PDP10 Miscellany Page Life in the Fast AC's Columbia University DEC PDP-10 page Panda Programming TOPS-20 page Online PDP-10 and related systems at SDF's Interim Computer Museum (includes some systems that were originally part of the Paul Allen collection at Living Computers: Museum + Labs). Empire for the PDP-10 (zip file of FORTRAN-10 source code download) from Classic Empire PDP-10 software archive at Trailing Edge The Personal Mainframe ad Computer World ad for Personal Mainframe PDP-10 documentation at Bitsavers Newsgroups alt.sys.pdp10 DEC mainframe computers 36-bit computers Computer-related introductions in 1966 Computers using bit-slice designs
23629
https://en.wikipedia.org/wiki/DECSYSTEM-20
DECSYSTEM-20
The DECSYSTEM-20 was a family of 36-bit Digital Equipment Corporation PDP-10 mainframe computers running the TOPS-20 operating system and was introduced in 1977. PDP-10 computers running the TOPS-10 operating system were labeled DECsystem-10 as a way of differentiating them from the PDP-11. Later on, those systems running TOPS-20 (on the KL10 PDP-10 processors) were labeled DECSYSTEM-20 (the block capitals being the result of a lawsuit brought against DEC by Singer, which once made a computer called "The System Ten"). The DECSYSTEM-20 was sometimes called PDP-20, although this designation was never used by DEC. Models The following models were produced: DECSYSTEM-2020: KS10 bit-slice processor with up to 512 kilowords of solid-state RAM (the ADP OnSite version of the DECSYSTEM-2020 supported 1 MW of RAM) DECSYSTEM-2040: KL10 ECL processor with up to 1024 kilowords of magnetic-core RAM DECSYSTEM-2050: KL10 ECL processor with 2k words of cache and up to 1024 kilowords of RAM DECSYSTEM-2060: KL10 ECL processor with 2k words of cache and up to 4096 kilowords of solid-state memory DECSYSTEM-2065: DECSYSTEM-2060 with MCA25 pager (double-sized (1024-entry) two-way associative hardware page table) The only significant difference the user could see between a DECsystem-10 and a DECSYSTEM-20 was the operating system and the color of the paint. Most (but not all) machines sold to run TOPS-10 were painted "Blasi Blue", whereas most TOPS-20 machines were painted "Terracotta" (often mistakenly called "Chinese Red" or orange; the actual name of the color on the paint cans was Terra Cotta). There were some significant internal differences between the earlier KL10 Model A processors, used in the earlier KL10-based DECsystem-10s, and the later KL10 Model Bs, used for the DECSYSTEM-20s. Model As used the original PDP-10 memory bus, with external memory modules. The later Model B processors used in the DECSYSTEM-20 used internal memory, mounted in the same cabinet as the CPU. The Model As also had different packaging; they came in the original tall PDP-10 cabinets, rather than the short ones used later on for the DECSYSTEM-20. The last released implementation of DEC's 36-bit architecture was the single-cabinet DECSYSTEM-2020, using a KS10 processor. The DECSYSTEM-20 was primarily designed and used as a small mainframe for timesharing. That is, multiple users would concurrently log on to individual user accounts and share use of the main processor to compile and run applications. Separate disk allocations were maintained for all users by the operating system, and various levels of protection could be maintained for System, Owner, Group, and World users. A model 2060, for example, could typically host 40 to 60 simultaneous users before exhibiting noticeably delayed response time. Remaining machines The Living Computer Museum of Seattle, Washington, maintained a 2065 running TOPS-10, which was available to interested parties via SSH upon registration (at no cost) at their website. References C. Gordon Bell, Alan Kotok, Thomas N. Hastings, Richard Hill, "The Evolution of the DECsystem-10", in C. Gordon Bell, J. Craig Mudge, John E. McNamara, Computer Engineering: A DEC View of Hardware Systems Design (Digital Equipment, Bedford, 1979) Frank da Cruz, Christine Gianone, The DECSYSTEM-20 at Columbia University 1977–1988 Further reading Storage Organization and Management in TENEX. Daniel L. Murphy. AFIPS Proceedings, 1972 FJCC. "DECsystem-10/DECSYSTEM-20 Processor Reference Manual". 1982. 
"Manuals for DEC 36-bit computers". "Introduction to DECSYSTEM-20 Assembly Language Programming" (Ralph E. Gorin, 1981, ) External links PDP-10 Models—Explains all the various KL-10 models in detail Columbia University DECSYSTEM-20 Login into the Living Computer Museum, a portal into the Paul Allen collection of timesharing and interactive computers, including an operational DECSYSTEM-20 KL-10 2065 36-bit computers DEC mainframe computers Computer-related introductions in 1977
23630
https://en.wikipedia.org/wiki/Programmed%20Data%20Processor
Programmed Data Processor
Programmed Data Processor (PDP), referred to by some customers, media and authors as "Programmable Data Processor," is a term used by the Digital Equipment Corporation from 1957 to 1990 for several lines of minicomputers. The name 'PDP' intentionally avoids the use of the term 'computer'. At the time of the first PDPs, computers had a reputation of being large, complicated, and expensive machines. The venture capitalists behind Digital (especially Georges Doriot) would not support Digital's attempting to build a 'computer' and the term 'minicomputer' had not yet been coined. So instead, Digital used their existing line of logic modules to build a Programmed Data Processor and aimed it at a market that could not afford the larger computers. The various PDP machines can generally be grouped into families based on word length. Series Members of the PDP series include: PDP-1 The original PDP, an 18-bit 4-rack machine used in early time-sharing operating system work, and prominent in MIT's early hacker culture, which led to the (Massachusetts) Route 128 hardware startup belt (DEC's second home, Prime Computer, etc.). What is believed to be the first video game, Spacewar!, was developed for this machine, along with the first known word processing program for a general-purpose computer, "Expensive Typewriter". It was based to some extent on the TX-0 which Ben Gurley had also contributed to. His engineering requirement was to build it from inventory (DEC's existing product, System Modules). The last of DEC's 53 PDP-1 computers was built in 1969, a decade after the first, and nearly all of them were still in use as of 1975. "An average configuration cost $120,000" at a time "when most computer systems sold for a million dollars or more." Its architectural successors as 18-bit machines were the PDP-4, PDP-7, PDP-9, and the PDP-15. PDP-2 A number reserved for an unbuilt, undesigned 24-bit design. PDP-3 First DEC-designed (for US "black budget" outfits) 36-bit machine, though DEC did not offer it as a product. The only PDP-3 was built from DEC modules by the CIA's Scientific Engineering Institute (SEI) in Waltham, Massachusetts to process radar cross section data for the Lockheed A-12 reconnaissance aircraft in 1960. Architecturally it was essentially a PDP-1 controlling a PDP-1 stretched to 36-bit word width. PDP-4 This 18-bit machine, first shipped in 1962 of which "approximately 54 were sold" was a compromise: "with slower memory and different packaging" than the PDP-1, but priced at $65,000 - considerably less than its predecessor (about half the price). All later 18-bit PDP machines (7, 9 and 15) are based on a similar, but enlarged instruction set, more powerful, but based on the same concepts as the 12-bit PDP-5/PDP-8 series. One customer of these early PDP machines was Atomic Energy of Canada. The installation at Chalk River, Ontario included an early PDP-4 with a display system and a new PDP-5 as interface to the research reactor instrumentation and control. PDP-5 It was the world's first commercially produced minicomputer and DEC's first 12-bit machine (1963). The instruction set was later expanded in the PDP-8 to handle more bit rotations and to increase the maximum memory size from 4K words to 32K words. It was one of the first computer series with more than 1,000 built. PDP-6 This 36-bit machine, DEC's first large PDP computer, came in 1964 with the first DEC-supported timesharing system. 23 were installed. 
Although the PDP-6 was "disappointing to management," it introduced the instruction set and was the prototype for the far more successful PDP-10 and DEC System-20, of which hundreds were sold. PDP-7 Replacement for the PDP-4; DEC's first wire-wrapped machine using the associated Flip-Chip module form-factor. It was introduced in 1964, and a second version, the 7A, was subsequently added. A total of 120 7 & 7A systems were sold. The first version of Unix, and the first version of B, a predecessor of C, were written for the PDP-7 at Bell Labs, as was the first version (by DEC) of MUMPS. PDP-8 12-bit machine (1965) with a tiny instruction set; DEC's first major commercial success and the start of the minicomputer revolution. Many were purchased (at discount prices, a DEC tradition, which also included free manuals for anyone who asked during the Ken Olsen years) by schools, university departments, and research laboratories. Over 50,000 units among various models of the family (A, E, F, I, S, L, M) were sold. Later models are also used in the DECmate word processor and the VT-78 workstation. LINC-8 The system contained both a PDP-8 CPU and a LINC CPU; two instruction sets; 1966. Progenitor of the PDP-12. PDP-9 Successor to the PDP-7; DEC's first micro-programmed machine (1966). It features a speed increase of approximately twice that of the PDP-7. The PDP-9 is also one of the first small or medium scale computers to have a keyboard monitor system based on DIGITAL's own small magnetic tape units (DECtape). The PDP-9 established minicomputers as the leading edge of the computer industry. PDP-10 Also marketed as the DECsystem-10, this 36-bit timesharing machine (1966) was quite successful over several different implementations (KA, KI, KL, KS) and models. The instruction set is a slightly elaborated form of that of the PDP-6. The KL was also used for the DECSYSTEM-20. The KS was used for the 2020, DEC's entry in the distributed processing market, introduced as "the world's lowest cost mainframe computer system." PDP-11 The archetypal minicomputer (1970); a 16-bit machine and another commercial success for DEC. The LSI-11 is a four-chip PDP-11 used primarily for embedded systems. The 32-bit VAX series is descended from the PDP-11, and early VAX models have a PDP-11 compatibility mode. The 16-bit PDP-11 instruction set has been very influential, with processors ranging from the Motorola 68000 to the Renesas H8 and Texas Instruments MSP430, inspired by its highly orthogonal, general-register oriented instruction set and rich addressing modes. The PDP-11 family was extremely long-lived, spanning 20 years and many different implementations and technologies. PDP-12 12-bit machine (1969), descendant of the LINC-8 and thus of the PDP-8. It had one CPU that could change modes and execute the instruction set of either system. See LINC and PDP-12 User Manual. With slight redesign, and different livery, officially followed by, and marketed as, the "Lab-8". PDP-13 Designation was not used. PDP-14 A machine with 12-bit instructions, intended as an industrial controller (PLC; 1969). It has no data memory or data registers; instructions can test Boolean input signals, set or clear Boolean output signals, jump conditional or unconditionally, or call a subroutine. Later versions (for example, the PDP-14/30) are based on PDP-8 physical packaging technology. I/O is line voltage. PDP-15 DEC's final 18-bit machine (1970). 
It is the only 18-bit machine constructed from TTL integrated circuits rather than discrete transistors, and, like every DEC 18-bit system (except mandatory on the PDP-1, absent on the PDP-4) has an optional integrated vector graphics terminal, DEC's first improvement on its early-designed 34n where n equalled the PDP's number. Later versions of the PDP-15 run a real-time multi-user OS called "XVM". The final model, the PDP-15/76 uses a small PDP-11 to allow Unichannel peripherals to be used. PDP-16 A "roll-your-own" digital system using Register Transfer Modules, mainly intended for industrial control systems with more capability than the PDP-14. They could be used to design a custom controller consisting of a control structure and associated data storage and manipulation modules, or to design a small computer which could then be programmed. The PDP-16 modules were based on the RTMs designed by Gordon Bell during his time at CMU. The PDP-16/M was introduced in 1972 as a pre-assembled set of the PDP-16 modules that could be programmed and was nicknamed a "Subminicomputer". Related computers TX-0 designed by MIT's Lincoln Laboratory, important as influence for DEC products including Ben Gurley's design for the PDP-1. When the memory was replaced with a smaller one, the instruction set was expanded, and it was moved to the MIT campus. When a PDP-1 arrived on campus, it was placed in the next room. Software such as an assembler was ported from the TX-0 to the PDP-1 and the machines were connected for communications between them. LINC (Laboratory Instrument Computer), originally designed by MIT's Lincoln Laboratory, some built by DEC. Not in the PDP family, but important as progenitor of the PDP-12. The LINC and the PDP-8 can be considered the first minicomputers, and perhaps the first personal computers as well. The PDP-8 and PDP-11 are the most popular of the PDP series of machines. Digital never made a PDP-20, although the term was sometimes used for a PDP-10 running TOPS-20 (officially known as a DECSYSTEM-20). Several unlicensed clones of the PDP-11. TOAD-1 and TOAD-2, Foonly, and Systems Concepts PDP-10/DECSYSTEM-20-compatible machines. Notes References C. Gordon Bell, J. Craig Mudge, John E. McNamara, Computer Engineering: A DEC View of Hardware Systems Design (Digital, 1978) Bell, C.G., Grason, J., and Newell, A., Designing Computers and Digital Systems. Digital Press, Maynard, Mass., 1972. Conversations with David M. Razler ([email protected]), owner/restorer of PDP-7s,8s,9s and 15s until the cost of hauling around 2 tons of DEC gear led him to sell off or give away everything he owned. External links Mark Crispin's 1986 list of PDP's Several PDP and LAB's, still runnable in a German computer museum DEC's PDP-6 was the world's first commercial time-sharing system Gordon Bell interview at the Smithsonian DEC PRODUCT TIMELINE Description and Use of Register Transfer Modules on Gordon Bell's site at Microsoft. pdp12.lofty.com shows a recently restored PDP-12 http://www.soemtron.org/pdp7.html information about the PDP-7 and PDP7A including some manuals and a customer list covering 99 of the 120 systems shipped. Various sites list documents by Charles Lasner, the creator of the alt.sys.pdp8 discussion group, and related documents by various members of the alt.sys.pdp8 readership with even more authoritative information about the various models, especially detailed focus upon the various members of the PDP-8 "family" of computers both made and not made by DEC. Minicomputers DEC hardware
23631
https://en.wikipedia.org/wiki/Primary%20mirror
Primary mirror
A primary mirror (or primary) is the principal light-gathering surface (the objective) of a reflecting telescope. Description The primary mirror of a reflecting telescope is a spherical or parabolically shaped disk of polished reflective metal (speculum metal up to the mid-19th century), or, in later telescopes, glass or another material coated with a reflective layer. One of the first known reflecting telescopes, Newton's reflector of 1668, used a 3.3 cm polished metal primary mirror. The next major change, in the 19th century, was to use silver on glass rather than metal, as in the Crossley reflector. This was in turn superseded by vacuum-deposited aluminum on glass, used on the 200-inch Hale telescope. Solid primary mirrors have to sustain their own weight without deforming under gravity, which limits the maximum size of a single-piece primary mirror. Segmented mirror configurations are used to get around this size limitation. For example, the Giant Magellan Telescope will have seven 8.4 meter primary mirrors, with resolving power equivalent to that of a single, much larger optical aperture. Superlative primary mirrors The largest optical telescope in the world as of 2009 to use a non-segmented single mirror as its primary mirror is the Subaru telescope of the National Astronomical Observatory of Japan, located at Mauna Kea Observatory in Hawaii since 1997; however, this is not the largest-diameter single mirror in a telescope: the U.S./German/Italian Large Binocular Telescope has two 8.4 m mirrors (which can be used together in interferometric mode). Both of these are smaller than the 10 m segmented primary mirrors on the dual Keck telescopes. The Hubble Space Telescope has a 2.4 m primary mirror. Radio and submillimeter telescopes use much larger dishes or antennae, which do not have to be made as precisely as the mirrors used in optical telescopes. The Arecibo Telescope used a 305 m dish, which was the world's largest single-dish radio telescope fixed to the ground. The Green Bank Telescope has the world's largest steerable single radio dish, at 100 m in diameter. There are larger radio arrays, composed of multiple dishes, which have better image resolution but less sensitivity. See also Active optics Honeycomb mirror Liquid-mirror telescope List of largest optical reflecting telescopes List of telescope parts and construction Mirror mount Mirror support cell Secondary mirror Silvering References Optical telescope components Mirrors
23633
https://en.wikipedia.org/wiki/List%20of%20physicists
List of physicists
Following is a list of physicists who are notable for their achievements. A Aryabhatta – India (476–550 CE) Jules Aarons – United States (1921–2016) Ernst Karl Abbe – Germany (1840–1905) Derek Abbott – Australia (born 1960) Hasan Abdullayev – Azerbaijan Democratic Republic, Soviet Union, Azerbaijan (1918–1993) Alexei Alexeyevich Abrikosov – Soviet Union, Russia (1928–2017) Nobel laureate Robert Adler – United States (1913–2007) Stephen L. Adler – United States (born 1939) Franz Aepinus – Rostock (1724–1802) Mina Aganagic – Albania, United States David Z Albert – United States (born 1954) Felicie Albert – France, United States Miguel Alcubierre – Mexico (born 1964) Zhores Ivanovich Alferov – Russia (1930–2019) Nobel laureate Hannes Olof Gösta Alfvén – Sweden (1908–1995) Nobel laureate Alhazen – Basra, Iraq (965–1040) Artem Alikhanian – Armenia (1908–1978) Abram Alikhanov – Russia (1904–1970) John E. Allen – United Kingdom (born 1928) William Allis – United States (1901–1999) Samuel King Allison – United States (1900–1965) Yakov Lvovich Alpert – Russia, United States (1911–2010) Ralph Asher Alpher – United States (1921–2007) Semen Altshuler – Vitebsk (1911–1983) Luis Walter Alvarez – United States (1911–1988) Nobel laureate Viktor Ambartsumian – Soviet Union, Armenia (1908–1996) André-Marie Ampère – France (1775–1836) Anja Cetti Andersen – Denmark (born 1965) Hans Henrik Andersen – Denmark (1937–2012) Philip Warren Anderson – United States (1923–2020) Nobel laureate Carl David Anderson – United States (1905–1991) Nobel laureate Herbert L. Anderson – United States (1914–1988) Elephter Andronikashvili – Georgia (1910–1989) Anders Jonas Ångström – Sweden (1814–1874) Alexander Animalu, Nigeria (born 1938) Edward Victor Appleton – United Kingdom (1892–1965) Nobel laureate François Arago – France (1786–1853) Archimedes – Syracuse, Greece (ca. 287–212 BC) Manfred von Ardenne – Germany (1907–1997) Aristarchus of Samos – Samos, Greece (310–ca. 230 BC) Aristotle – Athens, Greece (384–322 BC) Nima Arkani-Hamed – United States (born 1972) Lev Artsimovich – Moscow (1909–1973) Aryabhata – Pataliputra, India (476–550) Neil Ashby – United States (born 1934) Maha Ashour-Abdalla – Egypt, United States (1943–2016) Gurgen Askaryan – Soviet Union (1928–1997) Alain Aspect – France (born 1947) Marcel Audiffren – France Avicenna – Persia (980–1037) Amedeo Avogadro – Italy (1776–1856) David Awschalom – United States (born 1956) APJ Abdul Kalam – India B Abu sahl Al-Quhi – İran (born 940) Xiaoyi Bao – Canada Mani Lal Bhaumik – United States (born 1931) Tom Baehr-Jones – United States (born 1980) Gilbert Ronald Bainbridge – U.K. (1925–2003) Cornelis Bakker – Netherlands (1904–1960) Aiyalam Parameswaran Balachandran – India (born 1938) V Balakrishnan – India (born 1943) Milla Baldo-Ceolin – Italy (1924–2011) Johann Jakob Balmer – Switzerland (1825–1898) Tom Banks – United States (born 1949) Riccardo Barbieri – Italy (born 1944) Marcia Barbosa – Brazil (born 1960) John Bardeen – United States (1908–1991) double Nobel laureate William A. Bardeen – United States (born 1941) Ronald Hugh Barker – Ireland (1915–2015) Charles Glover Barkla – U.K. 
(1877–1944) Nobel laureate Amanda Barnard – Australia (born 1971) Boyd Bartlett – United States (1897–1965) Asım Orhan Barut – Malatya, Turkey (1926–1994) Heinz Barwich – Germany (1911–1966) Nikolay Basov – Russia (1922–2001) Nobel laureate Laura Maria Caterina Bassi – Italy (1711–1778) Zoltán Lajos Bay – Hungary (1900–1992) Karl Bechert – Germany (1901–1981) Henri Becquerel – France (1852–1908) Nobel laureate Johannes Georg Bednorz – Germany (born 1950) Nobel laureate Isaac Beeckman – Netherlands (1588–1637) Alexander Graham Bell – Scotland, Canada, U.S.A. (1847–1922) John Stewart Bell – U.K. (1928–1990) Jocelyn Bell Burnell – Northern Ireland, U.K. (born 1943) Carl M. Bender – United States (born 1943) Abraham Bennet – England (1749–1799) Daniel Bernoulli – Switzerland (1700–1782) Hans Bethe – Germany, United States (1906–2005) Nobel laureate Homi J. Bhabha – India (1909–1966) Lars Bildsten – United States (1964) James Binney – England (born 1950) Gerd Binnig – Germany (born 1947) Nobel laureate Jean-Baptiste Biot – France (1774–1862) Raymond T. Birge – United States (1887–1980) Abū Rayhān al-Bīrūnī – Persia (973–1048) Vilhelm Bjerknes – Norway (1862–1951) James Bjorken – United States (1934–2024) Patrick Blackett – U.K. (1897–1974) Nobel laureate Felix Bloch – Switzerland (1905–1983) Nobel laureate Nicolaas Bloembergen – Netherlands, United States (1920–2017) Nobel laureate Walter Boas – Germany, Australia (1904–1982) Céline Bœhm – France (born 1974) Nikolay Bogolyubov – Soviet Union, Russia (1909–1992) David Bohm – United States (1917–1992) Aage Bohr – Denmark (1922–2009) Nobel laureate Niels Bohr – Denmark (1885–1962) Nobel laureate Martin Bojowald – Germany (born 1973) Ludwig Boltzmann – Austria (1844–1906) Eugene T. Booth – United States (1912–2004) Max Born – Germany, U.K. (1882–1970) Nobel laureate Rudjer Josip Boscovich – Croatia (1711–1787) Jagadish Chandra Bose – India (1858–1937) Margrete Heiberg Bose – Denmark (1866–1952) Satyendra Nath Bose – India (1894–1974) Johannes Bosscha – Netherlands (1831–1911) Walther Bothe – Germany (1891–1957) Nobel laureate Edward Bouchet – United States (1852–1918) Mustapha Ishak Boushaki – Algeria (1967–) Mark Bowick – United States (born 1957) Robert Boyle – Ireland, England (1627–1691) Willard S. Boyle – Canada, United States (1924–2011) Nobel laureate William Henry Bragg – U.K. (1862–1942) Nobel laureate William Lawrence Bragg – U.K., Australia (1890–1971) Nobel laureate Tycho Brahe – Denmark (1546–1601) Howard Brandt – United States (1939–2014) Walter Houser Brattain – United States (1902–1987) Nobel laureate Karl Ferdinand Braun – Germany (1850–1918) Nobel laureate David Brewster – U.K. (1781–1868) Percy Williams Bridgman – United States (1882–1961) Nobel laureate Léon Nicolas Brillouin – France (1889–1969) Marcel Brillouin – France (1854–1948) Bertram Brockhouse – Canada (1918–2003) Nobel laureate Louis-Victor de Broglie – France (1892–1987) Nobel laureate William Fuller Brown, Jr. 
– United States (1904–1983) Ernst Brüche – Germany (1900–1985) Hermann Brück – Germany (1905–2000) Ari Brynjolfsson – Iceland (1927–2013) Hans Buchdahl – Germany, Australia (1918–2010) Gersh Budker – Soviet Union (1918–1977) Silke Bühler-Paschen – Austria (born 1967) Johannes Martinus Burgers – Netherlands (1895–1981) Friedrich Burmeister – Germany (1890–1969) Bimla Buti – India (born 1933) Christophorus Buys Ballot – Netherlands (1817–1890) C Nicola Cabibbo – Italy (1935–2010) Nicolás Cabrera – Spain (1913–1989) Orion Ciftja – United States Curtis Callan – United States (born 1942) Annie Jump Cannon – United States (1863–1941) Fritjof Capra – Austria, United States (born 1939) Marcela Carena – Argentina (born 1962) Ricardo Carezani – Argentina, United States (1921–2016) Nicolas Léonard Sadi Carnot – France (1796–1832) David Carroll – United States (born 1963) Brandon Carter – Australia (born 1942) Hendrik Casimir – Netherlands (1909–2000) Henry Cavendish – U.K. (1731–1810) James Chadwick – U.K. (1891–1974) Nobel laureate Owen Chamberlain – United States (1920–2006) Nobel laureate Moses H. W. Chan – Hong Kong (born 1946) Subrahmanyan Chandrasekhar – India, United States (1910–1995) Nobel laureate Tsao Chang - Chinese (born 1942) Georges Charpak – France (1924–2010) Nobel laureate Émilie du Châtelet – France (1706–1749) Swapan Chattopadhyay – India (born 1951) Pavel Alekseyevich Cherenkov – Imperial Russia, Soviet Union (1904–1990) Nobel laureate Maxim Chernodub – Russia, France (born 1973) Geoffrey Chew – United States (1924–2019) Boris Chirikov – Soviet Union, Russia (1928–2008) Juansher Chkareuli – Georgia (born 1940) Ernst Chladni – Germany (1756–1827) Nicholas Christofilos – Greece (1916–1972) Steven Chu – United States (born 1948) Nobel laureate Giovanni Ciccotti – Italy (born 1943) Benoît Clapeyron – France (1799–1864) George W. Clark – United States John Clauser – United States (born 1942) Nobel laureate Rudolf Clausius – Germany (1822–1888) Richard Clegg – U.K. Gari Clifford – British-American physicist, biomedical engineer, academic, researcher John Cockcroft – U.K. (1897–1967) Nobel laureate Claude Cohen-Tannoudji – France (born 1933) Nobel laureate Arthur Compton – United States (1892–1962) Nobel laureate Karl Compton – United States (1887–1954) Edward Condon – United States (1902–1974) Leon Cooper – United States (born 1930) Nobel laureate Alejandro Corichi – Mexico (born 1967) Gaspard-Gustave Coriolis – France (1792–1843) Allan McLeod Cormack – South Africa, United States (1924–1998) Eric Allin Cornell – United States (born 1961) Nobel laureate Marie Alfred Cornu – France (1841–1902) Charles-Augustin de Coulomb – France (1736–1806) Ernest Courant – United States (1920–2020) Brian Cox – U.K. (born 1968) Charles Critchfield – United States (1910–1994) James Cronin – United States (1931–2016) Nobel laureate Sir William Crookes – U.K. (1832–1919) Paul Crowell – United States Marie Curie – Poland, France (1867–1934) twice Nobel laureate Pierre Curie – France (1859–1906) Nobel laureate Predrag Cvitanović – Croatia (born 1946) D Jean le Rond d'Alembert – France (1717–1783) Gustaf Dalén – Sweden (1869–1937) Nobel laureate Jean Dalibard – France (born 1958) Richard Dalitz – U.K., United States (1925–2006) John Dalton – U.K. (1766–1844) Sanja Damjanović – Montenegro (born 1972) Ranjan Roy Daniel – India (1923–2005) Charles Galton Darwin – U.K. (1887–1962) Ashok Das – India, United States (born 1953) James C. 
Davenport – United States (born 1938) Paul Davies – Australia (born 1946) Raymond Davis, Jr. – United States (1914–2006) Nobel laureate Clinton Davisson – United States (1881–1958) Nobel laureate Peter Debye – Netherlands (1884–1966) Hans Georg Dehmelt – Germany, United States (1922–2017) Nobel laureate Max Delbrück – Germany, United States (1906–1981) Democritus – Abdera (ca. 460–370 BC) David M. Dennison – United States (1900–1976) Beryl May Dent – U.K. (1900–1977) David Deutsch – Israel, U.K. (born 1953) René Descartes – France (1596–1650) James Dewar – U.K. (1842–1923) Scott Diddams – United States Ulrike Diebold – Austria (born 1961) Robbert Dijkgraaf – Netherlands (born 1960) Viktor Dilman – Russia (born 1926) Savas Dimopoulos – United States (born 1952) Paul Dirac – Switzerland, U.K. (1902–1984) Nobel laureate Revaz Dogonadze – Soviet Union, Georgia (1931–1985) Louise Dolan – United States (born 1950) Amos Dolbear – United States (1837–1910) Robert Döpel – Germany (1895–1982) Christian Doppler – Austria (1803–1853) Henk Dorgelo – Netherlands (1894–1961) Friedrich Ernst Dorn – Germany (1848–1916) Geneva Smith Douglas – United States (1932–1993) Michael R. Douglas – United States (born 1961) Jonathan Dowling – United States (1955–2020) Claudia Draxl – Germany (born 1959) Sidney Drell – United States (1926–2016) Mildred Dresselhaus – United States (1930–2017) Paul Drude – Germany (1863–1906) F. J. Duarte – United States (born 1954) Émilie du Châtelet – France (1706–1749) Pierre Louis Dulong – France (1785–1838) Janette Dunlop – Scotland (1891–1971) Samuel T. Durrance – United States (born 1943) Freeman Dyson – U.K., United States (1923–2020) Wolf laureate Arthur Jeffrey Dempster – Canada (1886–1950) E Joseph H. Eberly – United States (born 1935) William Eccles – U.K. (1875–1966) Carl Eckart – United States (1902–1973) Arthur Stanley Eddington – U.K. (1882–1944) Thomas Edison – United States (1847–1931) Paul Ehrenfest – Austria-Hungary, Netherlands (1880–1933) Felix Ehrenhaft – Austria-Hungary, United States (1879–1952) Manfred Eigen – Germany (1927–2019) Albert Einstein – Germany, Italy, Switzerland, United States (1879–1955) Nobel laureate Laura Eisenstein – (1942–1985) professor of physics at University of Illinois Terence James Elkins – Australia, United States (born 1936) John Ellis – U.K. (born 1946) Paul John Ellis – U.K., United States (1941–2005) Richard Keith Ellis – U.K., United States (born 1949) Arpad Elo – Hungary (1903–1992) François Englert – Belgium (born 1932) Nobel laureate David Enskog – Sweden (1884–1947) Loránd Eötvös – Austria-Hungary (1848–1919) Frederick J. Ernst – United States (born 1933) Leo Esaki – Japan (born 1925) Nobel laureate Ernest Esclangon – France (1876–1954) Louis Essen – U.K. (1908–1997) Leonhard Euler – Switzerland (1707–1783) Denis Evans – Australia (born 1951) Paul Peter Ewald – Germany, United States (1888–1985) James Alfred Ewing – U.K. (1855–1935) Franz S. Exner – Austria (1849–1926) F Ludvig Faddeev – Russia (1934–2017) Daniel Gabriel Fahrenheit – Prussia (1686–1736) Kazimierz Fajans – Poland, United States (1887–1975) James E. Faller – United States Michael Faraday – U.K. 
(1791–1867) Eugene Feenberg – United States (1906–1977) Mitchell Feigenbaum – United States (1944–2019) Gerald Feinberg – United States (1933–1992) Enrico Fermi – Italy (1901–1954) Nobel laureate Albert Fert – France (born 1938) Nobel laureate Herman Feshbach – United States (1917–2000) Richard Feynman – United States (1918–1988) Nobel laureate Wolfgang Finkelnburg – Germany (1905–1967) David Finkelstein – United States (1929–2016) Johannes Fischer – Germany (born 1887) Willy Fischler – Belgium (born 1949) Val Logsdon Fitch – United States (1923–2015) Nobel laureate George Francis FitzGerald – Ireland (1851–1901) Hippolyte Fizeau – France (1819–1896) Georgy Flyorov – Rostov-on-Don (1913–1990) Vladimir Fock – Imperial Russia, Soviet Union (1898–1974) Adriaan Fokker – Netherlands (1887–1972) Arthur Foley – America (1867–1945) James David Forbes – U.K. (1809–1868) Jeff Forshaw – U.K. (born 1968) Léon Foucault – France (1819–1868) Joseph Fourier – France (1768–1830) Ralph H. Fowler – U.K. (1889–1944) William Alfred Fowler – United States (1911–1995) Nobel laureate James Franck – Germany, United States (1882–1964) Nobel laureate Ilya Frank – Soviet Union (1908–1990) Nobel laureate Benjamin Franklin – British America, United States (1706–1790) Rosalind Franklin – U.K. (1920–1958) Walter Franz – Germany (1911–1992) Joseph von Fraunhofer – Germany (1787–1826) Steven Frautschi – United States (born 1933) Joan Maie Freeman – Australia (1918–1998) Phyllis S. Freier – United States (1921–1992)) Yakov Frenkel – Imperial Russia, Soviet Union (1894–1952) Augustin-Jean Fresnel – France (1788–1827) Peter Freund – United States (1936–2018) Daniel Friedan – United States (born 1948) B. Roy Frieden – United States (born 1936) Alexander Friedman – Imperial Russia, Soviet Union (1888–1925) Jerome Isaac Friedman – United States (born 1930) Nobel laureate Otto Frisch – Austria, U.K. (1904–1979) Erwin Fues – Germany (1893–1970) Harald Fuchs – Germany (born 1951) G Dennis Gabor – Hungary (1900–1979) Nobel laureate Mary K. Gaillard – France, United States (born 1939) Galileo Galilei – Italy (1564–1642) Luigi Galvani – Italy (1737–1798) George Gamow – Russia, United States (1904–1968) Sylvester James Gates – United States (born 1950) Carl Friedrich Gauss – Germany (1777–1855) Pamela L. Gay – United States (born 1973) Joseph Louis Gay-Lussac – France (1778–1850) Hans Geiger – Germany (1882–1945) Andre Geim – Russian/British (born 1958) Nobel laureate Murray Gell-Mann – United States (1929–2019) Nobel laureate Pierre-Gilles de Gennes – France (1932–2007) Nobel laureate Howard Georgi – United States (born 1947) Walter Gerlach – Germany (1889–1979) Christian Gerthsen – Denmark, Germany (1894–1956) Ezra Getzler – Australia (born 1962) Andrea M. Ghez – United States (born 1955) Nobel laureate Riccardo Giacconi – Italy, United States (1931–2018) Nobel laureate Ivar Giaever – Norway, United States (born 1929) Nobel laureate Josiah Willard Gibbs – United States (1839–1903) Valerie Gibson – U.K. (born 19??) William Gilbert – England (1544–1603) Piara Singh Gill – India (1911–2002) Naomi Ginsberg – United States (born 1979) Vitaly Lazarevich Ginzburg – Soviet Union, Russia (1916–2009) Nobel laureate Marvin D. Girardeau – United States (1930–2015) Marissa Giustina – United States (born 19??) Donald Arthur Glaser – United States (1926–2013) Nobel laureate Sheldon Glashow – United States (born 1932) Nobel laureate G. N. 
Glasoe – United States (1902–1987) Roy Jay Glauber – United States (1925–2018) Nobel laureate James Glimm – United States (born 1934) Karl Glitscher – Germany (1886–1945) Peter Goddard – U.K. (born 1945) Maria Goeppert-Mayer – Germany, United States (1906–1972) Nobel laureate Gerald Goertzel – United States (1920–2002) Marvin Leonard Goldberger – United States (1922–2014) Maurice Goldhaber – Austria, United States (1911–2011) Jeffrey Goldstone – U.K., United States (born 1933) Sixto González – Puerto Rico, United States (born 1965) Ravi Gomatam – India (born 1950) Lev Gor'kov – United States (1929–2016) Samuel Goudsmit – Netherlands, United States (1902–1978) Leo Graetz – Germany (1856–1941) Willem 's Gravesande – Netherlands (1688–1742) Michael Green (physicist) – Britain (born 1946) Daniel Greenberger – United States (born 1932) Brian Greene – United States (born 1963) John Gribbin – U.K. (born 1946) Vladimir Gribov – Russia (1930–1997) David J. Griffiths – United States (born 1942) David Gross – United States (born 1941) Nobel laureate Frederick Grover – United States (1876–1973) Peter Grünberg – Germany (1939–2018) Nobel laureate Charles Édouard Guillaume – Switzerland (1861–1931) Nobel laureate Ayyub Guliyev – Azerbaijan (born 1954) Feza Gürsey – Turkey (1921–1992) Alan Guth – United States (born 1947) Martin Gutzwiller – Switzerland (1925–2014) H Rudolf Haag – Germany (1922–2016) Wander Johannes de Haas – Netherlands (1878–1960) Alain Haché – Canada (born 1970) Carl Richard Hagen – United States (born 1937) Otto Hahn – Germany (1879–1968) Edwin Hall – United States (1855–1938) John Lewis Hall – United States (born 1934) Nobel laureate Alexander Hamilton – U.K., Australia (born 1967) William Rowan Hamilton – Ireland (1805–1865) Theodor Wolfgang Hänsch – Germany (born 1941) Nobel laureate Peter Andreas Hansen – Denmark (1795–1874) W.W. Hansen – United States (1909–1949) Serge Haroche – France (born 1944) Nobel laureate Paul Harteck – Germany (1902–1985) John G. Hartnett – Australia (born 1952) Douglas Hartree – U.K. (1897–1958) Friedrich Hasenöhrl – Austria, Hungary (1874–1915) Lene Vestergaard Hau – Vejle, Denmark (born 1959) Stephen Hawking – U.K. (1942–2018) Wolf laureate Ibn al-Haytham – Iraq (965–1039) Evans Hayward – United States (1922–2020) Oliver Heaviside – U.K. (1850–1925) Werner Heisenberg – Germany (1901–1976) Nobel laureate Walter Heitler – Germany, Ireland (1904–1981) Hermann von Helmholtz – Germany (1821–1894) Charles H. Henry – United States (1937–2016) Joseph Henry – United States (1797–1878) John Herapath – U.K. (1790–1868) Carl Hermann – Germany (1898–1961) Gustav Ludwig Hertz – Germany (1887–1975) Nobel laureate Heinrich Rudolf Hertz – Germany (1857–1894) Karl Herzfeld – Austria, United States (1892–1978) Victor Francis Hess – Austria, United States (1883–1964) Nobel laureate Mahmoud Hessaby – Iran (1903–1992) Antony Hewish – U.K. (1924–2021) Nobel laureate Paul G. Hewitt – United States (born 1931) Peter Higgs – U.K. 
(1929–2024) Nobel laureate George William Hill – United States (1838–1914) Gustave-Adolphe Hirn – France (1815–1890) Carol Hirschmugl – United States, professor of physics, laboratory director Dorothy Crowfoot Hodgkin – England (1910–1994) Robert Hofstadter – United States (1915–1990) Nobel laureate Helmut Hönl – Germany (1903–1981) Pervez Hoodbhoy – Pakistan (born 1950) Gerardus 't Hooft – Netherlands (born 1946) Nobel laureate Robert Hooke – England (1635–1703) John Hopkinson – United Kingdom (1849–1898) Johann Baptiste Horvath – Slovakia (1732–1799) William V. Houston – United States (1900–1968) Charlotte (née Riefenstahl) Houtermans – Germany (1899–1993) Fritz Houtermans – Netherlands, Germany, Austria (1903–1966) Archibald Howie – U.K. (born 1934) Fred Hoyle – U.K. (1915–2001) Veronika Hubeny – United States John Hubbard – U.K. (1931–1980) John H. Hubbell – United States (1925–2007) Edwin Powell Hubble – United States (1889–1953) Russell Alan Hulse – United States (born 1950) Nobel laureate Friedrich Hund – Germany (1896–1997) Tahir Hussain – Pakistan (1923–2010) Andrew D. Huxley – U.K. (born 1966) Christiaan Huygens – Netherlands (1629–1695) I Arthur Iberall – United States (1918–2002) Sumio Iijima – Japan (born 1939) John Iliopoulos – Greece (born 1940) Ataç İmamoğlu – Turkey, United States (born 1962) Elmer Imes – United States (1883–1941) Abram Ioffe – Russia (1880–1960) Nathan Isgur – United States, Canada (1947–2001) Ernst Ising – Germany (1900–1998) Jamal Nazrul Islam – Bangladesh (1939–2013) Werner Israel – Canada (born 1931) J Roman Jackiw – Poland, United States (1939–2023) Shirley Ann Jackson – United States (born 1946) Boris Jacobi – Germany, Russia (1801–1874) Gregory Jaczko – United States (born 1970) Chennupati Jagadish – India, Australia (born 1957) Jainendra Jain – India (born 1960) Ratko Janev – North Macedonia (1939–2019) Andreas Jaszlinszky – Hungary (1715–1783) Ali Javan – Iran (1928–2016) Edwin Jaynes – United States (1922–1998) Antal István Jákli – Hungary (born 1958) Sir James Jeans – U.K. (1877–1946) Johannes Hans Daniel Jensen – Germany (1907–1973) Nobel laureate Deborah S. Jin – United States (born 1968) Anthony M. Johnson – United States (born 1954) Irène Joliot-Curie – France (1897–1956) Lorella Jones – United States (1943–1995) Pascual Jordan – Germany (1902–1980) Vania Jordanova – United States, physicist, space weather and geomagnetic storms Brian David Josephson – U.K. (born 1940) Nobel laureate James Prescott Joule – U.K. (1818–1889) Adolfas Jucys – Lithuania (1904–1974) Chang Kee Jung – South Korea, United States K Menas Kafatos – Greece, United States (born 1945) Takaaki Kajita – Japan (born 1959) Nobel laureate Michio Kaku – United States (born 1947) Theodor Kaluza – Germany (1885–1954) Heike Kamerlingh Onnes – Netherlands (1853–1926) Nobel laureate William R. Kanne – United States Charles K. Kao – China, Hong Kong, U.K., United States (1933–2018) Nobel laureate Pyotr Kapitsa – Russian Empire, Soviet Union (1894–1984) Nobel laureate Theodore von Kármán – Hungary, United States (1881–1963) aeronautical engineer Alfred Kastler – France (1902–1984) Nobel laureate Amrom Harry Katz – United States (1915–1997) Moshe Kaveh – Israel (born 1943) President of Bar-Ilan University Predhiman Krishan Kaw – India (1948–2017) Heinrich Kayser – Germany (1853–1940) Willem Hendrik Keesom – Netherlands (1876–1956) Edwin C. 
Kemble – United States (1889–1984) Henry Way Kendall – United States (1926–1999) Nobel laureate Johannes Kepler – Germany (1571–1630) John Kerr – Scotland (1824–1907) Wolfgang Ketterle – Germany (born 1957) Nobel laureate Isaak Markovich Khalatnikov – Soviet Union (1919–2021) Jim Al-Khalili – U.K. (born 1962) Abdul Qadeer Khan – Pakistan (1936–2021) Yulii Borisovich Khariton – Soviet Union, Russia (1904–1996) Erhard Kietz – Germany, United States (1909–1982) Jack Kilby – United States (1923–2005) electronics engineer, Nobel laureate Toichiro Kinoshita – Japan, United States (1925–2023) Gustav Kirchhoff – Germany (1824–1887) Oskar Klein – Sweden (1894–1977) Hagen Kleinert – Germany (born 1941) Klaus von Klitzing – Germany (born 1943) Nobel laureate Jens Martin Knudsen – Denmark (1930–2005) Martin Knudsen – Denmark (1871–1949) Makoto Kobayashi – Japan (born 1944) Nobel laureate Arthur Korn – Germany (1870–1945) Masatoshi Koshiba – Japan (1926–2020) Nobel laureate Matthew Koss – United States (born 1961) Walther Kossel – Germany (1888–1956) Ashutosh Kotwal – United States (born 1965) Lew Kowarski – France (1907–1979) Hendrik Kramers – Netherlands (1894–1952) Serguei Krasnikov – Russia (born 1961) Adolf Kratzer – Germany (1893–1983) Lawrence M. Krauss – United States (born 1954) Herbert Kroemer – Germany (1928–2024) Nobel laureate August Krönig – Germany (1822–1879) Ralph Kronig – Germany, United States (1904–1995) Nikolay Sergeevich Krylov – Soviet Union (1917–1947) Ryogo Kubo – Japan (1920–1995) Daya Shankar Kulshreshtha – India (born 1951) Igor Vasilyevich Kurchatov – Soviet Union (1903–1960) Behram Kursunoglu – Turkey (1922–2003) Polykarp Kusch – Germany (1911–1993) Nobel laureate L Anne L'Huillier – France, Sweden (born 1958) Nobel laureate James W. LaBelle – United States Joseph-Louis Lagrange – France (1736–1813) Willis Lamb – United States (1913–2008) Nobel laureate Lev Davidovich Landau – Imperial Russia, Soviet Union (1908–1968) Nobel laureate Rolf Landauer – United States (1927–1999) Grigory Landsberg – Vologda (1890–1957) Kenneth Lane – United States Paul Langevin – France (1872–1946) Irving Langmuir – United States (1881–1957) Pierre-Simon Laplace – France (1749–1827) Joseph Larmor – U.K. (1857–1942) John Latham - U.K. (1937–2021) Cesar Lattes – Brazil (1924–2005) Max von Laue – Germany (1879–1960) Nobel laureate Robert Betts Laughlin – United States (born 1950) Nobel laureate Mikhail Lavrentyev – Kazan (1900–1980) Melvin Lax – United States (1922–2002) Ernest Lawrence – United States (1901–1958) Nobel laureate TH Laby – Australia (1880–1946) Pyotr Nikolaevich Lebedev – Imperial Russia (1866–1912) Leon Max Lederman – United States (1922–2018) Nobel laureate Benjamin Lee – Korea, United States (1935–1977) David Lee – United States (born 1931) Nobel laureate Tsung-Dao Lee – China, United States (1926–2024) Nobel laureate Anthony James Leggett – U.K., United States (born 1938) Nobel laureate Gottfried Wilhelm Leibniz – Germany (1646–1716) Robert B. Leighton – United States (1919–1997) Georges Lemaître – Belgium (1894–1966) Philipp Lenard – Hungary, Germany (1862–1947) Nobel laureate John Lennard-Jones – U.K. (1894–1954) John Leslie – U.K. (1766–1832) Walter Lewin – Netherlands, United States (born 1936) Martin Lewis Perl – United States (1927–2014) Robert von Lieben – Austria-Hungary (1878–1913) Alfred-Marie Liénard – France (1869–1958) Evgeny Lifshitz – Soviet Union (1915–1985) David Lindley – United States (born 1956) John Linsley – United States (1925–2002) Chris Lintott – U.K. 
(born 1980) Gabriel Jonas Lippmann – France, Luxemburg (1845–1921) Nobel laureate Antony Garrett Lisi – United States (born 1968) Karl L. Littrow – Austria (1811–1877) Seth Lloyd – United States (born 1960) Oliver Lodge – U.K. (1851–1940) Maurice Loewy – Austria, France (1833–1907) Robert K. Logan – United States (born 1939) Mikhail Lomonosov – Denisovka (1711–1765) Alfred Lee Loomis – United States (1887–1975) Ramón E. López – United States (born 1959) Hendrik Lorentz – Netherlands (1853–1928) Nobel laureate Ludvig Lorenz – Denmark (1829–1891) Johann Josef Loschmidt – Austria (1821–1895) Oleg Losev – Tver (1903–1942) Archibald Low – U.K. (1888–1956) Per-Olov Löwdin – Sweden (1916–2000) Lucretius – Rome (98?–55 BC) Aleksandr Mikhailovich Lyapunov – Imperial Russia (1857–1918) Joseph Lykken – United States (born 1957) M Arthur B. McDonald – Canada (born 1943) Nobel laureate Carolina Henriette MacGillavry – Netherlands (1904–1993) Ernst Mach – Austria-Hungary (1838–1916) Katie Mack (astrophysicist) – United States (born 1981) Gladys Mackenzie – Scotland (1903–1972) Ray Mackintosh – U.K. Luciano Maiani – Italy, San Marino (born 1941) Theodore Maiman – United States (1927–2007) Arthur Maitland – U.K. (1925–1994) Ettore Majorana – Italy (1906–1938 presumed dead) Sudhansu Datta Majumdar – India (1915–1997) Richard Makinson – Australia (1913–1979) Juan Martín Maldacena – Argentina (born 1968) Étienne-Louis Malus – France (1775–1812) Leonid Isaakovich Mandelshtam – Imperial Russia, Soviet Union (1879–1944) Franz Mandl – U.K. (1923–2009) Charles Lambert Manneback – Belgium (1894–1975) Peter Mansfield – U.K. (1933–2017) Carlo Marangoni – Italy (1840–1925) M. Cristina Marchetti – Italy, United States (born 1955) Guglielmo Marconi – Italy (1874–1937) Nobel laureate Henry Margenau – Germany, United States (1901–1977) Nina Marković – Croatia, United States William Markowitz – United States (1907–1998) Laurence D. Marks – United States (born 1954) Robert Marshak – United States (1916–1992) Walter Marshall – U.K. (1932–1996) Toshihide Maskawa – Japan (1940–2021) Nobel laureate Harrie Massey – Australia (1908–1983) John Cromwell Mather – United States (born 1946) Nobel laureate James Clerk Maxwell – U.K. (1831–1879) Brian May – U.K. (born 1947) Maria Goeppert Mayer – Germany, United States (1906–1972) Ronald E. McNair – United States (1950–1986) Anna McPherson – Canadian (1901–1979) Simon van der Meer – Netherlands (1925–2011) Nobel laureate Lise Meitner – Austria (1878–1968) Fulvio Melia – United States (born 1956) Macedonio Melloni – Italy (1798–1854) Adrian Melott – United States (born 1947) Thomas Corwin Mendenhall – United States (1841–1924) M. G. K. Menon – India (1928–2016) David Merritt – United States Albert Abraham Michelson – United States (1852–1931) Nobel laureate Arthur Alan Middleton – United States Stanislav Mikheyev – Russia (1940–2011) Robert Andrews Millikan – United States (1868–1953) Nobel laureate Robert Mills – United States (1927–1999) Arthur Milne – U.K. (1896–1950) Shiraz Minwalla – India (born 1972) Bedangadas Mohanty – India (born 1973) Rabindra Nath Mohapatra – India, United States (born 1944) Kathryn Moler – United States Merritt Moore – United States (born 1988) Tanya Monro – Australia (born 1973) John J. Montgomery – United States (1858–1911) Jagadeesh Moodera – India, United States (born 1950) Henry Moseley – U.K. (1887–1915) Rudolf Mössbauer – Germany (1929–2011) Nobel laureate Nevill Mott – U.K. 
(1905–1996) Nobel laureate Ben Roy Mottelson – Denmark, United States (1926–2022) Nobel laureate Amédée Mouchez – Spain, France (1821–1892) Ali Moustafa – Egypt (1898–1950) José Enrique Moyal – Palestine, France, U.K., United States, Australia (1910–1998) Christine Muschik – Germany Karl Alexander Müller – Switzerland (1927–2023) Nobel laureate Richard A. Muller – United States (born 1944) Robert S. Mulliken – United States (1896–1986) Pieter van Musschenbroek – Netherlands (1692–1762) N Yoichiro Nambu – Japan, United States (1921–2015) Nobel laureate Meenakshi Narain – United States (1964–2022) Jayant Narlikar – India (born 1938) Quirino Navarro – Filipino nuclear physicist and chemist (1936–2002) Seth Neddermeyer – United States (1907–1988) Louis Néel – France (1904–2000) Nobel laureate Yuval Ne'eman – Israel (1925–2006) Ann Nelson – United States (1958–2019) John von Neumann – Austria-Hungary, United States (1903–1957) Simon Newcomb – United States (1835–1909) Sir Isaac Newton – England (1642–1727) Edward P. Ney – United States (1920–1996) Kendal Nezan – France, Kurdistan (born 1949) Holger Bech Nielsen – Denmark (born 1941) Leopoldo Nobili – Italy (1784–1835) Emmy Noether – Germany (1882–1935) Lothar Nordheim – Germany (1899–1985) Gunnar Nordström – Finland (1881–1923) Johann Gottlieb Nörremberg – Germany (1787–1862) Konstantin Novoselov – Soviet Union, U.K. (born 1974) Nobel laureate H. Pierre Noyes – United States (1923–2016) John Nye – U.K. (1923–2019) O Yuri Oganessian – Russia (born 1933) Georg Ohm – Germany (1789–1854) Hideo Ohno – Japan (born 1954) Susumu Okubo – Japan, United States (1930–2015) Sir Mark Oliphant – Australia (1901–2000) David Olive – U.K. (1937–2012) Zaira Ollano – Italy (1904–1997) Gerard K. O'Neill – United States (1927–1992) Lars Onsager – Norway (1903–1976) Robert Oppenheimer – United States (1904–1967) Nicole Oresme – France (1325–1382) Yuri Orlov – Soviet Union, United States (1924–2020) Leonard Salomon Ornstein – Netherlands (1880–1941) Egon Orowan – Austria-Hungary, United States (1901–1989) Hans Christian Ørsted – Denmark (1777–1851) Douglas Dean Osheroff – United States (born 1945) Nobel laureate Silke Ospelkaus – Germany Mikhail Vasilievich Ostrogradsky – Russia (1801–1862) P Thanu Padmanabhan – India (1957–2021) Heinz Pagels – United States (1939–1988) Abraham Pais – Netherlands, United States (1918–2000) Wolfgang K. H. Panofsky – Germany, United States (1919–2007) Blaise Pascal – France (1623–1662) John Pasta – United States (1918–1984) Jogesh Pati – United States (born 1937) Petr Paucek – United States Stephen Paul – United States (1953–2012) Wolfgang Paul – Germany (1913–1993) Nobel laureate Wolfgang Pauli – Austria-Hungary (1900–1958) Nobel laureate Cecilia Payne-Gaposchkin – United States (1900–1979) astronomer and astrophysicist Ruby Payne-Scott – Australia (1912–1981) George B. Pegram – United States (1876–1958) Rudolf Peierls – Germany, U.K. (1907–1995) Jean Peltier – France (1785–1845) Roger Penrose – U.K. (born 1931) Wolf laureate, mathematician Arno Allan Penzias – United States (1933–2024) Nobel laureate, electrical engineer Martin Lewis Perl – United States (1927–2014) Nobel laureate Saul Perlmutter – United States (born 1959) Nobel laureate Jean Baptiste Perrin – France (1870–1942) Nobel laureate Mario Petrucci - U.K. 
(born 1958) Konstantin Petrzhak – Soviet Union, Russia (1907–1998) Bernhard Philberth – Germany (1927–2010) William Daniel Phillips – United States (born 1948) Nobel laureate Max Planck – Germany (1858–1947) Nobel laureate Joseph Plateau – Belgium (1801–1883) Milton S. Plesset – United States (1908–1991) Ward Plummer – United States (1940–2020) Boris Podolsky – Taganrog (1896–1966) Henri Poincaré – France (1854–1912) mathematician Eric Poisson – Canada (born 1965) Siméon Denis Poisson – France (1781–1840) mathematician Balthasar van der Pol – Netherlands (1889–1959) electrical engineer Joseph Polchinski – United States (1954–2018) Hugh David Politzer – United States (born 1949) Nobel laureate John Polkinghorne – U.K. (1930–2021) Julianne Pollard-Larkin – United States Alexander M. Polyakov – Russia, United States (born 1945) Bruno Pontecorvo – Italy, Soviet Union (1913–1993) Heraclides Ponticus – Greece (387–312 BC) Heinz Pose – Germany (1905–1975) Cecil Frank Powell – U.K. (1903–1969) Nobel laureate John Henry Poynting – U.K. (1852–1914) Ludwig Prandtl – Germany (1875–1953) Willibald Peter Prasthofer – Austria (1917–1993) Ilya Prigogine – Belgium (1917–2003) Alexander Prokhorov – Soviet, Russian (1916–2002) Nobel laureate William Prout – U.K. (1785–1850) Luigi Puccianti – Italy (1875–1952) Ivan Pulyuy – Ukraine (1845–1918) Mihajlo Idvorski Pupin – Serbia, United States (1858–1935) Edward Mills Purcell – United States (1912–1997) Nobel laureate Q Xuesen Qian – China (1911–2009) Helen Quinn – Australia, United States (born 1943) R Raúl Rabadán – United States Gabriele Rabel – Austria, United Kingdom (1880–1963) Isidor Isaac Rabi – Austria, United States (1898–1988) Nobel laureate Giulio Racah – Italian-Israeli (1909–1965) James Rainwater – United States (1917–1986) Nobel laureate Mark G. Raizen – New York City United States (born 1955) Alladi Ramakrishnan – India (1923–2008) Chandrasekhara Venkata Raman – India (1888–1970) Nobel laureate Edward Ramberg – United States (1907–1995) Carl Ramsauer – Germany (1879–1955) Norman Foster Ramsey, Jr. – United States (1915–2011) Nobel laureate Lisa Randall – United States (born 1962) Riccardo Rattazzi – Italy (born 1964) Lord Rayleigh – U.K. (1842–1919) Nobel laureate René Antoine Ferchault de Réaumur – France (1683–1757) Sidney Redner – Canada, United States (born 1951) Martin John Rees – U.K. (born 1942) Hubert Reeves – Canada (born 1932) Tullio Regge – Italy (1931–2014) Frederick Reines – United States (1918–1998) Nobel laureate Louis Rendu – France (1789–1859) Osborne Reynolds – U.K. (1842–1912) Owen Willans Richardson – U.K. (1879–1959) Nobel laureate Robert Coleman Richardson – United States (1937–2013) Nobel laureate Burton Richter – United States (1931–2018) Nobel laureate Floyd K. Richtmyer – United States (1881–1939) Robert D. Richtmyer – (1910–2003) Charlotte Riefenstahl – Germany (1899–1993) Nikolaus Riehl – Germany (1901–1990) Adam Riess – United States (born 1969) Nobel laureate Karl-Heinrich Riewe – Germany Walther Ritz – Switzerland (1878–1909) Étienne-Gaspard Robert – Belgium (1763–1837) Heinrich Rohrer – Switzerland (1933–2013) Nobel laureate Joseph Romm – United States (born 1960) Wilhelm Conrad Röntgen – Germany (1845–1923) Nobel laureate Clemens C. J. 
Roothaan – Netherlands (1918–2019) Nathan Rosen – United States, Israel (1909–1995) Marshall Rosenbluth – United States (1927–2003) Yasha Rosenfeld – Israel (1948–2002) Carl-Gustav Arvid Rossby – Sweden, United States (1898–1957) Bruno Rossi – Italy, United States (1905–1993) Joseph Rotblat – Poland, U.K. (1908–2005) Carlo Rovelli – Italy (born 1956) Mary Laura Chalk Rowles – United States (1904–1996) Subrata Roy (scientist) – India, United States Carlo Rubbia – Italy (born 1934) Nobel laureate Vera Rubin – United States (1928–2016) Serge Rudaz – Canada, United States (born 1954) David Ruelle – Belgium, France (born 1935) Ernst August Friedrich Ruska – Germany (1906–1988) Nobel laureate Ernest Rutherford – New Zealand, U.K. (1871–1937) Janne Rydberg – Sweden (1854–1919) Martin Ryle – U.K. (1918–1984) Nobel laureate S Subir Sachdev – United States (born 1961) Mendel Sachs – United States (1927–2012) Rainer K. Sachs – Germany and United States (born 1932) Robert G. Sachs – United States (1916–1999) Carl Sagan – United States (1934–1996) Georges-Louis le Sage – Switzerland (1724–1803) Georges Sagnac – France (1869–1926) Megh Nad Saha – Bengali India (1893–1956) Shoichi Sakata – Japan (1911–1970) Andrei Dmitrievich Sakharov – Soviet Union (1929–1989) Oscar Sala – Brazil (1922–2010) Abdus Salam – Pakistan (1926–1996) Nobel laureate Edwin Ernest Salpeter – Austria, Australia, United States (1924–2008) Anthony Ichiro Sanda – Japan, United States (born 1944) Antonella De Santo – Italy, U.K. Vikram Sarabhai – India (1919–1971) Isidor Sauers – Austria (born 1948) Félix Savart – France (1791–1841) Brendan Scaife – Ireland (born 1928) Martin Schadt – Switzerland (born 1938) Arthur Leonard Schawlow – United States (1921–1999) Nobel laureate Craige Schensted – United States Joël Scherk – France (1946–1979) Otto Scherzer – Germany (1909–1982) Brian Schmidt – Australia, United States (born 1967) Nobel laureate Alan Schoen – United States (1924–2023) Walter H. Schottky – Germany (1886–1976) Kees A. Schouhamer Immink – Netherlands (born 1946) John Robert Schrieffer – United States (1931–2019) Nobel laureate Erwin Schrödinger – Austria-Hungary (1887–1961) Nobel laureate John Henry Schwarz – United States (born 1941) Melvin Schwartz – United States (1932–2006) Nobel laureate Karl Schwarzschild – German Empire (1876–1916) Julian Schwinger – United States (1918–1994) Nobel laureate Marlan Scully – United States (born 1939) Dennis William Sciama – U.K. (1926–1999) Bice Sechi-Zorn – Italy, United States (1928–1984) Thomas Johann Seebeck – Estonia (1770–1831) Raymond Seeger – United States (1906–1992) Emilio G. Segre – Italy, United States (1905–1989) Nobel laureate Nathan Seiberg – United States (born 1956) Frederick Seitz – United States (1911–2008) Nikolay Semyonov – Russia (1896–1986) Ashoke Sen – India (born 1956) Hiranmay Sen Gupta – Bangladesh (1934–2022) Robert Serber – United States (1909–1997) Roman U. 
Sexl – Austria (1939–1986) Shen Kuo – China (1031–1095) Mikhail Shifman – Russia, United States (born 1949) Dmitry Shirkov – Russia (1928–2016) William Shockley – United States (1910–1989) Nobel laureate Boris Shraiman – United States (1956) Lev Shubnikov – Russia, Netherlands, Ukraine (1901–1937) Clifford Shull – United States (1915–2001) Nobel laureate Kai Siegbahn – Sweden (1918–2007) Nobel laureate Manne Siegbahn – Sweden (1886–1978) Nobel laureate Ludwik Silberstein – Poland, Germany, Italy, United States, Canada (1872–1948) Eva Silverstein – United States (born 1970) John Alexander Simpson – United States (1916–2000) Willem de Sitter – Netherlands (1872–1934) Uri Sivan – Israel (born 1955) Tamitha Skov – United States space weather physicist, researcher and public speaker G. V. Skrotskii – Russia (1915–1992) Francis G. Slack – United States (1897–1985) John C. Slater – United States (1900–1976) Louis Slotin – United States (1910–1946) Alexei Yuryevich Smirnov – Russia, Italy (born 1951) George E. Smith – United States (born 1930) Nobel laureate Lee Smolin – United States (born 1955) Marian Smoluchowski – Poland (1872–1917) George Smoot – United States (born 1945) Nobel laureate Willebrord Snell – Netherlands (1580–1626) Arsenij Sokolov – Russia (1910–1986) Arnold Sommerfeld – Germany (1868–1951) Bent Sørensen – Denmark (born 1941) Rafael Sorkin – United States (born 1945) Zeinabou Mindaoudou Souley (1964–) Nuclear physicist from Niger Nicola Spaldin – United Kingdom (born 1969) Maria Spiropulu – Greece (born 1970) Henry Stapp – United States (born 1928) Johannes Stark – Germany (1874–1957) Nobel laureate Max Steenbeck – (1901–1981) Joseph Stefan – Austria-Hungary, Slovenia (1835–1893) Jack Steinberger – Germany, United States (1921–2020) Nobel laureate Paul J. Steinhardt – United States (born 1952) Carl August Steinheil – Germany (1801–1870) George Sterman – United States (born 1946) Otto Stern – Germany (1888–1969) Nobel laureate Simon Stevin – Belgium, Netherlands (1548–1620) Thomas H. Stix – United States (1924–2001) George Gabriel Stokes – Ireland, U.K. (1819–1903) Aleksandr Stoletov – Russia (1839–1896) Donna Strickland – Canada (born 1959) Nobel laureate Horst Ludwig Störmer – Germany (born 1949) Nobel laureate Leonard Strachan – United States, astrophysicist Julius Adams Stratton – United States Andrew Strominger – United States (born 1955) Audrey Stuckes – U.K. (1923–2006) Ernst Stueckelberg – Switzerland (1905–1984) George Sudarshan – India, United States (1931–2018) Rashid Sunyaev – USSR (born 1943) Oleg Sushkov – USSR, Australia (born 1950) Leonard Susskind – United States (born 1940) Joseph Swan – U.K. (1828–1914) Jean Henri van Swinden – Netherlands (1746–1823) Bertha Swirles – U.K. (1903–1999) Leo Szilard – Austria-Hungary, United States (1898–1964) T Igor Yevgenyevich Tamm – Imperial Russia, Soviet Union (1895–1971) Nobel laureate Rachel (Raya) Takserman-Krozer – Ukraine (1921–1987) Abraham H. Taub – United States (1911–1999) Martin Tajmar – Austria (born 1974) Geoffrey Ingram Taylor – U.K. (1886–1975) Joseph Hooton Taylor, Jr. – United States (born 1941) Nobel laureate Richard Edward Taylor – United States (1929–2018) Nobel laureate Max Tegmark – Sweden, United States (born 1967) Valentine Telegdi – Hungary, United States (1922–2006) Wolf laureate Edward Teller – Austria-Hungary, United States (1908–2003) Igor Ternov – Russia (1921–1996) George Paget Thomson – U.K. (1892–1975) Nobel laureate J. J. Thomson – U.K. 
(1856–1940) Nobel laureate William Thomson (Lord Kelvin) – Ireland, U.K. (1824–1907) Charles Thorn – United States (born 1946) Kip Stephen Thorne – United States (born 1940) Peter Adolf Thiessen – Germany (1899–1990) Samuel Chao Chung Ting – United States (born 1936) Nobel laureate Frank J. Tipler – United States (born 1947) Ernest William Titterton – U.K., Australia (1916–1990) Yoshinori Tokura – Japan (born 1954) Samuel Tolansky – U.K. (1907–1973) Sin-Itiro Tomonaga – Japan (1906–1979) Nobel laureate Lewi Tonks – United States (1897–1971) Akira Tonomura – Japan (1942–2012) Evangelista Torricelli – Italy (1608–1647) Yoji Totsuka – Japan (1942–2008) Bruno Touschek – Italy (1921–1978) Charles Townes – United States (1915–2015) Nobel laureate John Townsend – U.K. (1868–1957) Johann Georg Tralles – Germany (1763–1822) Sam Treiman – United States (1925–1999) Daniel Chee Tsui – China, United States (born 1939) Nobel laureate Vipin Kumar Tripathi – India (born 1948) John J. Turin – United States (1913–1973) Neil Turok – South Africa (born 1958) Victor Twersky – United States (1923–1998) Sergei Tyablikov – Russia (1921–1968) John Tyndall – U.K. (1820–1893) Neil deGrasse Tyson – United States (born 1958) U George Eugene Uhlenbeck – Netherlands, United States (1900–1988) Stanislaw Ulam – Poland, United States (1909–1984) Nikolay Umov – Russia (1846–1915) Juris Upatnieks – Latvia, United States (born 1936) V Cumrun Vafa – Iran, United States (born 1960) Oriol Valls – (born 1947 in Barcelona, Spain) university physics professor Léon Van Hove – Belgium (1924–1990) Sergei Vavilov – Soviet Union (1891–1951) Vlatko Vedral – U.K., Serbia (born 1971) Evgeny Velikhov – Russia (born 1935) Martinus J. G. Veltman – Netherlands, United States (1931–2021) Nobel laureate Gabriele Veneziano – Italy (born 1942) Giovanni Battista Venturi – Italy (1746–1822) Émile Verdet – France (1824–1866) Erik Verlinde – Netherlands (1962) Herman Verlinde – Netherlands (1962) Leonardo da Vinci – Italy (1452–1519) Jean-Pierre Vigier – France (1920–2004) Gaetano Vignola – Italy Anatoly Vlasov – Russia (1908–1975) John Hasbrouck van Vleck – United States (1899–1980) Nobel laureate Woldemar Voigt – Germany (1850–1919) Burchard de Volder – Netherlands (1643–1709) Max Volmer – Germany (1885–1965) Alessandro Volta – Italy (1745–1827) Wernher Von Braun – Germany (1912–1977) aerospace engineer W Johannes Diderik van der Waals – Netherlands (1837–1923) Nobel laureate James Wait – Canada (1924–1998) Ludwig Waldmann – Germany (1913–1980) Alan Walsh – U.K., Australia (1916–1988) Ernest Walton – Ireland (1903–1995) Nobel laureate Dezhao Wang – China (1905–1998) Enge Wang – China (born 1957) Huanyu Wang – China (1954—2018) Kan-Chang Wang – China (1907–1998) Pu (Paul) Wang – China (1902–1969) Zhuxi Wang – China (1911–1983) Aaldert Wapstra – Netherlands (1923–2006) John Clive Ward – England, Australia (1924–2000) Gleb Wataghin – Ukraine, Italy, Brazil (1896–1986) John James Waterston – U.K. (1811–1883) Alan Andrew Watson – U.K. (born 1938) James Watt – U.K. (1736–1819) Denis Weaire – Ireland (born 1942) Colin Webb – U.K. 
(born 1937) Wilhelm Weber – Germany (1804–1891) Katherine Weimer – United States (1919–2000) Alvin Weinberg – United States (1915–2006) Steven Weinberg – United States (1933–2021) Nobel laureate Rainer Weiss – United States (born 1932) Nobel laureate Victor Frederick Weisskopf – Austria, United States (1908–2002) Carl Friedrich von Weizsäcker – Germany (1912–2007) Heinrich Welker – Germany (1912–1981) Gregor Wentzel – Germany (1898–1978) Paul Werbos – United States (born 1947) Siebren van der Werf – Netherlands (born 1942) Peter Westervelt – United States (1919–2015) Hermann Weyl – Germany (1885–1955) Christof Wetterich – Germany (born 1952) John Archibald Wheeler – United States (1911–2008) Gian-Carlo Wick – Italy (1909–1992) Emil Wiechert – Prussia (1861–1928) Carl Wieman – United States (born 1951) Nobel laureate Wilhelm Wien – Germany (1864–1928) Nobel laureate Arthur Wightman – United States (1922–2013) Eugene Wigner – Austria-Hungary, United States (1902–1993) Nobel laureate Frank Wilczek – United States (born 1951) Nobel laureate Charles Thomson Rees Wilson – U.K. (1869–1959) Nobel laureate Christine Wilson (scientist) – Canadian-American physicist and astronomer Kenneth Geddes Wilson – United States (1936–2013) Nobel laureate Robert R. Wilson – United States (1914–2000) Nobel laureate Robert Woodrow Wilson – United States (born 1936) John R. Winckler – United States (1918–2001) David J. Wineland – United States (born 1944) Nobel laureate Karl Wirtz – Germany (1910–1994) Mark B. Wise – Canada, United States (born 1953) Edward Witten – United States (born 1951) Emil Wolf – Czechoslovakia, United States (1922–2018) Fred Alan Wolf – United States (born 1934) Lincoln Wolfenstein – United States (1923–2015) Stephen Wolfram – U.K. (born 1959) Ewald Wollny – Germany (1846–1901) Michael Woolfson – U.K. (1927–2019) Chien-Shiung Wu – United States (1912–1997) Sau Lan Wu – United States (born early 1940s) Tai Tsun Wu – United States (1933–2024) Y Rosalyn Yalow – United States (1921–2011) Chen Ning Yang – China (born 1922) Nobel laureate Félix Ynduráin – Spain (born 1946) Francisco José Ynduráin – Spain (1940–2008) Kenneth Young – United States, China (born 1947) Thomas Young – U.K. (1773–1829) Hideki Yukawa – Japan (1907–1981) Nobel laureate Z Jan Zaanen – Netherlands (born 1957) Daniel Zajfman – Israel (born 1959) Anthony Zee – United States (born 1945) Pieter Zeeman – Netherlands (1865–1943) Nobel laureate Ludwig Zehnder – Switzerland (1854–1949) Anton Zeilinger – Austria (born 1945) Yakov Borisovich Zel'dovich – Russia (1914–1987) John Zeleny – United States (1872–1951) Frits Zernike – Netherlands (1888–1966) Nobel laureate Antonino Zichichi – Italy (born 1929) Hans Ziegler – Switzerland, United States (1910–1985) Karl Zimmer – Germany (1911–1988) Georges Zissis – Greece (born 1964) Peter Zoller – Austria (born 1952) Dmitry Zubarev – Russia (1917–1992) Bruno Zumino – Italy (1923–2014) Wojciech H. Zurek – Poland, United States (born 1951) Robert Zwanzig – United States (1928–2014) George Zweig – United States (born 1937) Barton Zwiebach – United States (born 1954) External links Pictures of some physicists (mostly 20th-century American) are collected in the Emilio Segrè Visual Archives and A Picture Gallery of Famous Physicists 20th-century women in physics in the Contributions of 20th Century Women to Physics archive
23634
https://en.wikipedia.org/wiki/Protein
Protein
Proteins are large biomolecules and macromolecules that comprise one or more long chains of amino acid residues. Proteins perform a vast array of functions within organisms, including catalysing metabolic reactions, DNA replication, responding to stimuli, providing structure to cells and organisms, and transporting molecules from one location to another. Proteins differ from one another primarily in their sequence of amino acids, which is dictated by the nucleotide sequence of their genes, and which usually results in protein folding into a specific 3D structure that determines its activity. A linear chain of amino acid residues is called a polypeptide. A protein contains at least one long polypeptide. Short polypeptides, containing fewer than 20–30 residues, are rarely considered to be proteins and are commonly called peptides. The individual amino acid residues are bonded together by peptide bonds between adjacent amino acid residues. The sequence of amino acid residues in a protein is defined by the sequence of a gene, which is encoded in the genetic code. In general, the genetic code specifies 20 standard amino acids; but in certain organisms the genetic code can include selenocysteine and—in certain archaea—pyrrolysine. Shortly after or even during synthesis, the residues in a protein are often chemically modified by post-translational modification, which alters the physical and chemical properties, folding, stability, activity, and ultimately, the function of the proteins. Some proteins have non-peptide groups attached, which can be called prosthetic groups or cofactors. Proteins can also work together to achieve a particular function, and they often associate to form stable protein complexes. Once formed, proteins only exist for a certain period and are then degraded and recycled by the cell's machinery through the process of protein turnover. A protein's lifespan is measured in terms of its half-life and covers a wide range. They can exist for minutes or years with an average lifespan of 1–2 days in mammalian cells. Abnormal or misfolded proteins are degraded more rapidly either due to being targeted for destruction or due to being unstable. Like other biological macromolecules such as polysaccharides and nucleic acids, proteins are essential parts of organisms and participate in virtually every process within cells. Many proteins are enzymes that catalyse biochemical reactions and are vital to metabolism. Proteins also have structural or mechanical functions, such as actin and myosin in muscle and the proteins in the cytoskeleton, which form a system of scaffolding that maintains cell shape. Other proteins are important in cell signaling, immune responses, cell adhesion, and the cell cycle. In animals, proteins are needed in the diet to provide the essential amino acids that cannot be synthesized. Digestion breaks the proteins down for metabolic use. History and etymology Discovery and early studies Proteins have been studied and recognized since the 1700s by Antoine Fourcroy and others, who often collectively called them "albumins", or "albuminous materials" (Eiweisskörper, in German). Gluten, for example, was first separated from wheat in published research around 1747, and later determined to exist in many plants. In 1789, Antoine Fourcroy recognized three distinct varieties of animal proteins: albumin, fibrin, and gelatin. Vegetable (plant) proteins studied in the late 1700s and early 1800s included gluten, plant albumin, gliadin, and legumin.
Proteins were first described by the Dutch chemist Gerardus Johannes Mulder and named by the Swedish chemist Jöns Jacob Berzelius in 1838. Mulder carried out elemental analysis of common proteins and found that nearly all proteins had the same empirical formula, C400H620N100O120P1S1. He came to the erroneous conclusion that they might be composed of a single type of (very large) molecule. The term "protein" to describe these molecules was proposed by Mulder's associate Berzelius; protein is derived from the Greek word prōteios, meaning "primary", "in the lead", or "standing in front", + -in. Mulder went on to identify the products of protein degradation such as the amino acid leucine for which he found a (nearly correct) molecular weight of 131 Da. Early nutritional scientists such as the German Carl von Voit believed that protein was the most important nutrient for maintaining the structure of the body, because it was generally believed that "flesh makes flesh." Around 1862, Karl Heinrich Ritthausen isolated the amino acid glutamic acid. Thomas Burr Osborne compiled a detailed review of the vegetable proteins at the Connecticut Agricultural Experiment Station. Then, working with Lafayette Mendel and applying Liebig's law of the minimum, which states that growth is limited by the scarcest resource, to the feeding of laboratory rats, the nutritionally essential amino acids were established. The work was continued and communicated by William Cumming Rose. The difficulty of purifying proteins in large quantities made them very hard for early protein biochemists to study. Hence, early studies focused on proteins that could be purified in large quantities, including those of blood, egg whites, and various toxins, as well as digestive and metabolic enzymes obtained from slaughterhouses. In the 1950s, the Armour Hot Dog Company purified 1 kg of pure bovine pancreatic ribonuclease A and made it freely available to scientists; this gesture helped ribonuclease A become a major target for biochemical study for the following decades. Polypeptides The understanding of proteins as polypeptides, or chains of amino acids, came through the work of Franz Hofmeister and Hermann Emil Fischer in 1902. The central role of proteins as enzymes in living organisms that catalyzed reactions was not fully appreciated until 1926, when James B. Sumner showed that the enzyme urease was in fact a protein. Linus Pauling is credited with the successful prediction of regular protein secondary structures based on hydrogen bonding, an idea first put forth by William Astbury in 1933. Later work by Walter Kauzmann on denaturation, based partly on previous studies by Kaj Linderstrøm-Lang, contributed an understanding of protein folding and structure mediated by hydrophobic interactions. The first protein to have its amino acid chain sequenced was insulin, by Frederick Sanger, in 1949. Sanger correctly determined the amino acid sequence of insulin, thus conclusively demonstrating that proteins consisted of linear polymers of amino acids rather than branched chains, colloids, or cyclols. He won the Nobel Prize for this achievement in 1958. Christian Anfinsen's studies of the oxidative folding process of ribonuclease A, for which he won the Nobel Prize in 1972, solidified the thermodynamic hypothesis of protein folding, according to which the folded form of a protein represents its free energy minimum.
Structure With the development of X-ray crystallography, it became possible to determine protein structures as well as their sequences. The first protein structures to be solved were hemoglobin by Max Perutz and myoglobin by John Kendrew, in 1958. The use of computers and increasing computing power also supported the sequencing of complex proteins. In 1999, Roger Kornberg succeeded in determining the highly complex structure of RNA polymerase using high intensity X-rays from synchrotrons. Since then, cryo-electron microscopy (cryo-EM) of large macromolecular assemblies has been developed. Cryo-EM uses protein samples that are frozen rather than crystals, and beams of electrons rather than X-rays. It causes less damage to the sample, allowing scientists to obtain more information and analyze larger structures. Computational protein structure prediction of small protein structural domains has also helped researchers to approach atomic-level resolution of protein structures. At the time of writing, the Protein Data Bank contains 181,018 X-ray, 19,809 EM and 12,697 NMR protein structures. Classification Proteins are primarily classified by sequence and structure, although other classifications are commonly used. Especially for enzymes the EC number system provides a functional classification scheme. Similarly, the Gene Ontology classifies both genes and proteins by their biological and biochemical function, but also by their intracellular location. Sequence similarity is used to classify proteins both in terms of evolutionary and functional similarity. This may use either whole proteins or protein domains, especially in multi-domain proteins. Protein domains allow protein classification by a combination of sequence, structure and function, and they can be combined in many different ways. In an early study of 170,000 proteins, about two-thirds were assigned at least one domain, with larger proteins containing more domains (e.g. proteins larger than 600 amino acids having an average of more than 5 domains). Biochemistry Most proteins consist of linear polymers built from series of up to 20 different L-α-amino acids. All proteinogenic amino acids possess common structural features, including an α-carbon to which an amino group, a carboxyl group, and a variable side chain are bonded. Only proline differs from this basic structure as it contains an unusual ring bonded to the N-end amine group, which forces the CO–NH amide moiety into a fixed conformation. The side chains of the standard amino acids, detailed in the list of standard amino acids, have a great variety of chemical structures and properties; it is the combined effect of all of the amino acid side chains in a protein that ultimately determines its three-dimensional structure and its chemical reactivity. The amino acids in a polypeptide chain are linked by peptide bonds. Once linked in the protein chain, an individual amino acid is called a residue, and the linked series of carbon, nitrogen, and oxygen atoms are known as the main chain or protein backbone. The peptide bond has two resonance forms that contribute some double-bond character and inhibit rotation around its axis, so that the alpha carbons are roughly coplanar. The other two dihedral angles in the peptide bond determine the local shape assumed by the protein backbone.
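As an illustration of the backbone geometry just described, the short Python sketch below computes a torsion (dihedral) angle from the coordinates of four consecutive backbone atoms, which is how the φ and ψ backbone angles of a structure are obtained in practice. The coordinates in the example are made-up values chosen only for demonstration, and the sign convention of the result can differ between software packages.

    import numpy as np

    def dihedral(p0, p1, p2, p3):
        # Torsion angle (in degrees) defined by four points, e.g. the backbone
        # atoms C(i-1), N(i), CA(i), C(i) for the phi angle of residue i.
        b0, b1, b2 = p1 - p0, p2 - p1, p3 - p2
        n1 = np.cross(b0, b1)                       # normal of the first plane
        n2 = np.cross(b1, b2)                       # normal of the second plane
        m1 = np.cross(n1, b1 / np.linalg.norm(b1))
        return np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

    # Made-up coordinates (in ångströms) for four consecutive backbone atoms.
    atoms = [np.array(v, dtype=float) for v in
             [(1.0, 0.0, 0.0), (2.4, 0.3, 0.1), (3.1, 1.6, 0.4), (4.5, 1.8, 1.2)]]
    print(round(dihedral(*atoms), 1))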
The end with a free amino group is known as the N-terminus or amino terminus, whereas the end of the protein with a free carboxyl group is known as the C-terminus or carboxy terminus (the sequence of the protein is written from N-terminus to C-terminus, from left to right). The words protein, polypeptide, and peptide are a little ambiguous and can overlap in meaning. Protein is generally used to refer to the complete biological molecule in a stable conformation, whereas peptide is generally reserved for short amino acid oligomers often lacking a stable 3D structure. But the boundary between the two is not well defined and usually lies near 20–30 residues. Polypeptide can refer to any single linear chain of amino acids, usually regardless of length, but often implies an absence of a defined conformation. Interactions Proteins can interact with many types of molecules, including with other proteins, with lipids, with carbohydrates, and with DNA. Abundance in cells It has been estimated that average-sized bacteria contain about 2 million proteins per cell (e.g. E. coli and Staphylococcus aureus). Smaller bacteria, such as Mycoplasma or spirochetes contain fewer molecules, on the order of 50,000 to 1 million. By contrast, eukaryotic cells are larger and thus contain much more protein. For instance, yeast cells have been estimated to contain about 50 million proteins and human cells on the order of 1 to 3 billion. The concentration of individual protein copies ranges from a few molecules per cell up to 20 million. Not all genes coding for proteins are expressed in most cells and their number depends on, for example, cell type and external stimuli. For instance, of the 20,000 or so proteins encoded by the human genome, only 6,000 are detected in lymphoblastoid cells. Synthesis Biosynthesis Proteins are assembled from amino acids using information encoded in genes. Each protein has its own unique amino acid sequence that is specified by the nucleotide sequence of the gene encoding this protein. The genetic code is a set of three-nucleotide units called codons and each three-nucleotide combination designates an amino acid, for example AUG (adenine–uracil–guanine) is the code for methionine. Because DNA contains four nucleotides, the total number of possible codons is 64; hence, there is some redundancy in the genetic code, with some amino acids specified by more than one codon. Genes encoded in DNA are first transcribed into pre-messenger RNA (mRNA) by proteins such as RNA polymerase. Most organisms then process the pre-mRNA (also known as a primary transcript) using various forms of post-transcriptional modification to form the mature mRNA, which is then used as a template for protein synthesis by the ribosome. In prokaryotes the mRNA may either be used as soon as it is produced, or be bound by a ribosome after having moved away from the nucleoid. In contrast, eukaryotes make mRNA in the cell nucleus and then translocate it across the nuclear membrane into the cytoplasm, where protein synthesis then takes place. The rate of protein synthesis is higher in prokaryotes than eukaryotes and can reach up to 20 amino acids per second. The process of synthesizing a protein from an mRNA template is known as translation. The mRNA is loaded onto the ribosome and is read three nucleotides at a time by matching each codon to its base pairing anticodon located on a transfer RNA molecule, which carries the amino acid corresponding to the codon it recognizes.
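To make the codon-by-codon reading concrete, the Python sketch below translates a short, made-up mRNA fragment using a tiny excerpt of the standard genetic code; a complete table has all 64 codons, and the few shown here were chosen only to illustrate the three-nucleotides-per-residue logic and the redundancy mentioned above.

    # Tiny, incomplete excerpt of the standard genetic code (RNA codons).
    # A complete table has 4**3 = 64 entries; several codons can map to
    # the same amino acid, which is the redundancy of the code.
    CODON_TABLE = {
        "AUG": "Met",                     # methionine, also the usual start codon
        "UUU": "Phe", "UUC": "Phe",       # two codons for phenylalanine
        "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
        "UAA": None, "UAG": None, "UGA": None,   # stop codons
    }

    def translate(mrna):
        # Read the mRNA three nucleotides at a time until a stop codon is met.
        peptide = []
        for i in range(0, len(mrna) - 2, 3):
            residue = CODON_TABLE[mrna[i:i + 3]]
            if residue is None:
                break
            peptide.append(residue)
        return peptide

    # Made-up fragment: start codon, two further codons, then a stop codon.
    print(translate("AUGUUUGGAUAA"))      # ['Met', 'Phe', 'Gly']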
The enzyme aminoacyl tRNA synthetase "charges" the tRNA molecules with the correct amino acids. The growing polypeptide is often termed the nascent chain. Proteins are always biosynthesized from N-terminus to C-terminus. The size of a synthesized protein can be measured by the number of amino acids it contains and by its total molecular mass, which is normally reported in units of daltons (synonymous with atomic mass units), or the derivative unit kilodalton (kDa). The average size of a protein increases from Archaea to Bacteria to Eukaryote (283, 311, 438 residues and 31, 34, 49 kDa respectively) due to a bigger number of protein domains constituting proteins in higher organisms. For instance, yeast proteins are on average 466 amino acids long and 53 kDa in mass. The largest known proteins are the titins, a component of the muscle sarcomere, with a molecular mass of almost 3,000 kDa and a total length of almost 27,000 amino acids. Chemical synthesis Short proteins can also be synthesized chemically by a family of methods known as peptide synthesis, which rely on organic synthesis techniques such as chemical ligation to produce peptides in high yield. Chemical synthesis allows for the introduction of non-natural amino acids into polypeptide chains, such as attachment of fluorescent probes to amino acid side chains. These methods are useful in laboratory biochemistry and cell biology, though generally not for commercial applications. Chemical synthesis is inefficient for polypeptides longer than about 300 amino acids, and the synthesized proteins may not readily assume their native tertiary structure. Most chemical synthesis methods proceed from C-terminus to N-terminus, opposite the biological reaction. Structure Most proteins fold into unique 3D structures. The shape into which a protein naturally folds is known as its native conformation. Although many proteins can fold unassisted, simply through the chemical properties of their amino acids, others require the aid of molecular chaperones to fold into their native states. Biochemists often refer to four distinct aspects of a protein's structure: Primary structure: the amino acid sequence. A protein is a polyamide. Secondary structure: regularly repeating local structures stabilized by hydrogen bonds. The most common examples are the α-helix, β-sheet and turns. Because secondary structures are local, many regions of different secondary structure can be present in the same protein molecule. Tertiary structure: the overall shape of a single protein molecule; the spatial relationship of the secondary structures to one another. Tertiary structure is generally stabilized by nonlocal interactions, most commonly the formation of a hydrophobic core, but also through salt bridges, hydrogen bonds, disulfide bonds, and even post-translational modifications. The term "tertiary structure" is often used as synonymous with the term fold. The tertiary structure is what controls the basic function of the protein. Quaternary structure: the structure formed by several protein molecules (polypeptide chains), usually called protein subunits in this context, which function as a single protein complex. Quinary structure: the signatures of protein surface that organize the crowded cellular interior. Quinary structure is dependent on transient, yet essential, macromolecular interactions that occur inside living cells. Proteins are not entirely rigid molecules. 
In addition to these levels of structure, proteins may shift between several related structures while they perform their functions. In the context of these functional rearrangements, these tertiary or quaternary structures are usually referred to as "conformations", and transitions between them are called conformational changes. Such changes are often induced by the binding of a substrate molecule to an enzyme's active site, the physical region of the protein that participates in chemical catalysis. In solution, proteins also undergo variation in structure through thermal vibration and the collision with other molecules. Proteins can be informally divided into three main classes, which correlate with typical tertiary structures: globular proteins, fibrous proteins, and membrane proteins. Almost all globular proteins are soluble and many are enzymes. Fibrous proteins are often structural, such as collagen, the major component of connective tissue, or keratin, the protein component of hair and nails. Membrane proteins often serve as receptors or provide channels for polar or charged molecules to pass through the cell membrane. A special case of intramolecular hydrogen bonds within proteins, those poorly shielded from water attack and hence promoting their own dehydration, is known as a dehydron. Protein domains Many proteins are composed of several protein domains, i.e. segments of a protein that fold into distinct structural units. Domains usually also have specific functions, such as enzymatic activities (e.g. kinase) or they serve as binding modules (e.g. the SH3 domain binds to proline-rich sequences in other proteins). Sequence motif Short amino acid sequences within proteins often act as recognition sites for other proteins. For instance, SH3 domains typically bind to short PxxP motifs (i.e. two prolines [P] separated by two unspecified amino acids [x]; the surrounding amino acids may determine the exact binding specificity). Many such motifs have been collected in the Eukaryotic Linear Motif (ELM) database. Protein topology The topology of a protein describes the entanglement of the backbone and the arrangement of contacts within the folded chain. Two theoretical frameworks, knot theory and circuit topology, have been applied to characterise protein topology. Being able to describe protein topology opens up new pathways for protein engineering and pharmaceutical development, and adds to our understanding of protein misfolding diseases such as neuromuscular disorders and cancer. Cellular functions Proteins are the chief actors within the cell, said to be carrying out the duties specified by the information encoded in genes. With the exception of certain types of RNA, most other biological molecules are relatively inert elements upon which proteins act. Proteins make up half the dry weight of an Escherichia coli cell, whereas other macromolecules such as DNA and RNA make up only 3% and 20%, respectively. The set of proteins expressed in a particular cell or cell type is known as its proteome. The chief characteristic of proteins that also allows their diverse set of functions is their ability to bind other molecules specifically and tightly. The region of the protein responsible for binding another molecule is known as the binding site and is often a depression or "pocket" on the molecular surface. This binding ability is mediated by the tertiary structure of the protein, which defines the binding site pocket, and by the chemical properties of the surrounding amino acids' side chains.
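The short linear motifs mentioned above lend themselves to simple pattern searches. The sketch below scans a protein sequence for candidate SH3-binding PxxP stretches with a regular expression; the sequence is invented, and the pattern is a deliberately crude simplification of the curated definitions in resources such as ELM.

```python
import re

# PxxP: a proline, two arbitrary residues, another proline.
# Real SH3 ligands have additional context requirements; this is only a toy scan.
PXXP = re.compile(r"P..P")

def find_pxxp(sequence: str):
    """Return (start, matched_text) pairs for candidate PxxP motifs, allowing overlaps."""
    hits = []
    for i in range(len(sequence) - 3):
        window = sequence[i:i + 4]
        if PXXP.fullmatch(window):
            hits.append((i, window))
    return hits

# A made-up sequence containing one PxxP-like stretch.
print(find_pxxp("MKTAYPAQPGGL"))  # -> [(5, 'PAQP')]
```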
Protein binding can be extraordinarily tight and specific; for example, the ribonuclease inhibitor protein binds to human angiogenin with a sub-femtomolar dissociation constant (<10⁻¹⁵ M) but does not bind at all to its amphibian homolog onconase (>1 M). Extremely minor chemical changes such as the addition of a single methyl group to a binding partner can sometimes suffice to nearly eliminate binding; for example, the aminoacyl tRNA synthetase specific to the amino acid valine discriminates against the very similar side chain of the amino acid isoleucine. Proteins can bind to other proteins as well as to small-molecule substrates. When proteins bind specifically to other copies of the same molecule, they can oligomerize to form fibrils; this process occurs often in structural proteins that consist of globular monomers that self-associate to form rigid fibers. Protein–protein interactions also regulate enzymatic activity, control progression through the cell cycle, and allow the assembly of large protein complexes that carry out many closely related reactions with a common biological function. Proteins can also bind to, or even be integrated into, cell membranes. The ability of binding partners to induce conformational changes in proteins allows the construction of enormously complex signaling networks. As interactions between proteins are reversible, and depend heavily on the availability of different groups of partner proteins to form aggregates that are capable of carrying out discrete sets of functions, the study of the interactions between specific proteins is key to understanding important aspects of cellular function, and ultimately the properties that distinguish particular cell types. Enzymes The best-known role of proteins in the cell is as enzymes, which catalyse chemical reactions. Enzymes are usually highly specific and accelerate only one or a few chemical reactions. Enzymes carry out most of the reactions involved in metabolism, as well as manipulating DNA in processes such as DNA replication, DNA repair, and transcription. Some enzymes act on other proteins to add or remove chemical groups in a process known as posttranslational modification. About 4,000 reactions are known to be catalysed by enzymes. The rate acceleration conferred by enzymatic catalysis is often enormous: as much as a 10¹⁷-fold increase in rate over the uncatalysed reaction in the case of orotate decarboxylase (78 million years without the enzyme, 18 milliseconds with the enzyme). The molecules bound and acted upon by enzymes are called substrates. Although enzymes can consist of hundreds of amino acids, it is usually only a small fraction of the residues that come in contact with the substrate, and an even smaller fraction (three to four residues on average) that are directly involved in catalysis. The region of the enzyme that binds the substrate and contains the catalytic residues is known as the active site. Dirigent proteins are members of a class of proteins that dictate the stereochemistry of a compound synthesized by other enzymes. Cell signaling and ligand binding Many proteins are involved in the process of cell signaling and signal transduction. Some proteins, such as insulin, are extracellular proteins that transmit a signal from the cell in which they were synthesized to other cells in distant tissues. Others are membrane proteins that act as receptors whose main function is to bind a signaling molecule and induce a biochemical response in the cell.
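Binding strength of the kind discussed above is commonly summarized by a dissociation constant Kd; for simple one-site binding at equilibrium, the fraction of protein with ligand bound is [L] / ([L] + Kd). The sketch below evaluates this relation for a hypothetical nanomolar binder; the numbers are illustrative and are not taken from the examples in the text.

```python
def fractional_occupancy(ligand_conc: float, kd: float) -> float:
    """Fraction of binding sites occupied for simple 1:1 binding at equilibrium."""
    return ligand_conc / (ligand_conc + kd)

kd = 1e-9  # a hypothetical nanomolar dissociation constant
for conc in (1e-11, 1e-9, 1e-7):
    theta = fractional_occupancy(conc, kd)
    print(f"[L] = {conc:.0e} M -> occupancy = {theta:.3f}")
# When [L] equals Kd, half the sites are occupied; tighter binders (smaller Kd)
# reach high occupancy at much lower ligand concentrations.
```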
Many receptors have a binding site exposed on the cell surface and an effector domain within the cell, which may have enzymatic activity or may undergo a conformational change detected by other proteins within the cell. Antibodies are protein components of an adaptive immune system whose main function is to bind antigens, or foreign substances in the body, and target them for destruction. Antibodies can be secreted into the extracellular environment or anchored in the membranes of specialized B cells known as plasma cells. Whereas enzymes are limited in their binding affinity for their substrates by the necessity of conducting their reaction, antibodies have no such constraints. An antibody's binding affinity to its target is extraordinarily high. Many ligand transport proteins bind particular small biomolecules and transport them to other locations in the body of a multicellular organism. These proteins must have a high binding affinity when their ligand is present in high concentrations, but must also release the ligand when it is present at low concentrations in the target tissues. The canonical example of a ligand-binding protein is haemoglobin, which transports oxygen from the lungs to other organs and tissues in all vertebrates and has close homologs in every biological kingdom. Lectins are sugar-binding proteins which are highly specific for their sugar moieties. Lectins typically play a role in biological recognition phenomena involving cells and proteins. Receptors and hormones are highly specific binding proteins. Transmembrane proteins can also serve as ligand transport proteins that alter the permeability of the cell membrane to small molecules and ions. The membrane alone has a hydrophobic core through which polar or charged molecules cannot diffuse. Membrane proteins contain internal channels that allow such molecules to enter and exit the cell. Many ion channel proteins are specialized to select for only a particular ion; for example, potassium and sodium channels often discriminate for only one of the two ions. Structural proteins Structural proteins confer stiffness and rigidity to otherwise-fluid biological components. Most structural proteins are fibrous proteins; for example, collagen and elastin are critical components of connective tissue such as cartilage, and keratin is found in hard or filamentous structures such as hair, nails, feathers, hooves, and some animal shells. Some globular proteins can also play structural functions, for example, actin and tubulin are globular and soluble as monomers, but polymerize to form long, stiff fibers that make up the cytoskeleton, which allows the cell to maintain its shape and size. Other proteins that serve structural functions are motor proteins such as myosin, kinesin, and dynein, which are capable of generating mechanical forces. These proteins are crucial for cellular motility of single celled organisms and the sperm of many multicellular organisms which reproduce sexually. They also generate the forces exerted by contracting muscles and play essential roles in intracellular transport. Protein evolution A key question in molecular biology is how proteins evolve, i.e. how can mutations (or rather changes in amino acid sequence) lead to new structures and functions? Most amino acids in a protein can be changed without disrupting activity or function, as can be seen from numerous homologous proteins across species (as collected in specialized databases for protein families, e.g. PFAM). 
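A very crude way to see how freely many positions can vary between homologous proteins is simply to count matching residues in aligned sequences. The sketch below computes percent identity for two invented, pre-aligned sequences of equal length; real comparisons rely on alignment tools and curated families such as those in Pfam.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent of positions with identical residues in two pre-aligned sequences."""
    if len(seq_a) != len(seq_b) or not seq_a:
        raise ValueError("sequences must be non-empty and equal in length")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b and a != "-")
    return 100.0 * matches / len(seq_a)

# Two invented homologs differing at a few positions (and one gap).
print(percent_identity("MKVLITGASSGIGK",
                       "MKVLVTGAS-GIGR"))  # roughly 78.6% identical
```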
To avoid the dramatic consequences of mutations, a gene may first be duplicated, leaving one copy free to mutate while the other retains its original function. However, this can also lead to complete loss of gene function and thus to pseudogenes. More commonly, single amino acid changes have limited consequences, although some can change protein function substantially, especially in enzymes. For instance, many enzymes can change their substrate specificity by one or a few mutations. Changes in substrate specificity are facilitated by substrate promiscuity, i.e. the ability of many enzymes to bind and process multiple substrates. When mutations occur, the specificity of an enzyme can increase (or decrease), and with it its enzymatic activity. Thus, bacteria (or other organisms) can adapt to different food sources, including unnatural substrates such as plastic. Methods of study Methods commonly used to study protein structure and function include immunohistochemistry, site-directed mutagenesis, X-ray crystallography, nuclear magnetic resonance and mass spectrometry. The activities and structures of proteins may be examined in vitro, in vivo, and in silico. In vitro studies of purified proteins in controlled environments are useful for learning how a protein carries out its function: for example, enzyme kinetics studies explore the chemical mechanism of an enzyme's catalytic activity and its relative affinity for various possible substrate molecules. By contrast, in vivo experiments can provide information about the physiological role of a protein in the context of a cell or even a whole organism. In silico studies use computational methods to study proteins. Protein purification Proteins may be purified from other cellular components using a variety of techniques such as ultracentrifugation, precipitation, electrophoresis, and chromatography; the advent of genetic engineering has made possible a number of methods to facilitate purification. To perform in vitro analysis, a protein must be purified away from other cellular components. This process usually begins with cell lysis, in which a cell's membrane is disrupted and its internal contents released into a solution known as a crude lysate. The resulting mixture can be purified using ultracentrifugation, which fractionates the various cellular components into fractions containing soluble proteins; membrane lipids and proteins; cellular organelles; and nucleic acids. Precipitation by a method known as salting out can concentrate the proteins from this lysate. Various types of chromatography are then used to isolate the protein or proteins of interest based on properties such as molecular weight, net charge and binding affinity. The level of purification can be monitored using various types of gel electrophoresis if the desired protein's molecular weight and isoelectric point are known, by spectroscopy if the protein has distinguishable spectroscopic features, or by enzyme assays if the protein has enzymatic activity. Additionally, proteins can be isolated according to their charge using electrofocusing. For natural proteins, a series of purification steps may be necessary to obtain protein sufficiently pure for laboratory applications. To simplify this process, genetic engineering is often used to add chemical features to proteins that make them easier to purify without affecting their structure or activity. Here, a "tag" consisting of a specific amino acid sequence, often a series of histidine residues (a "His-tag"), is attached to one terminus of the protein.
As a result, when the lysate is passed over a chromatography column containing nickel, the histidine residues ligate the nickel and attach to the column while the untagged components of the lysate pass unimpeded. A number of different tags have been developed to help researchers purify specific proteins from complex mixtures. Cellular localization The study of proteins in vivo is often concerned with the synthesis and localization of the protein within the cell. Although many intracellular proteins are synthesized in the cytoplasm and membrane-bound or secreted proteins in the endoplasmic reticulum, the specifics of how proteins are targeted to specific organelles or cellular structures are often unclear. A useful technique for assessing cellular localization uses genetic engineering to express in a cell a fusion protein or chimera consisting of the natural protein of interest linked to a "reporter" such as green fluorescent protein (GFP). The fused protein's position within the cell can then be cleanly and efficiently visualized using microscopy, as shown in the figure opposite. Other methods for elucidating the cellular location of proteins require the use of known compartmental markers for regions such as the ER, the Golgi, lysosomes or vacuoles, mitochondria, chloroplasts, plasma membrane, etc. With the use of fluorescently tagged versions of these markers or of antibodies to known markers, it becomes much simpler to identify the localization of a protein of interest. For example, indirect immunofluorescence will allow for fluorescence colocalization and demonstration of location. Fluorescent dyes are used to label cellular compartments for a similar purpose. Other possibilities exist, as well. For example, immunohistochemistry usually uses an antibody to one or more proteins of interest that are conjugated to enzymes yielding either luminescent or chromogenic signals that can be compared between samples, allowing for localization information. Another applicable technique is cofractionation in sucrose (or other material) gradients using isopycnic centrifugation. While this technique does not prove colocalization of a compartment of known density and the protein of interest, it does increase the likelihood, and is more amenable to large-scale studies. Finally, the gold-standard method of cellular localization is immunoelectron microscopy. This technique also uses an antibody to the protein of interest, along with classical electron microscopy techniques. The sample is prepared for normal electron microscopic examination, and then treated with an antibody to the protein of interest that is conjugated to an extremely electron-dense material, usually gold. This allows for the localization of both ultrastructural details and the protein of interest. Through another genetic engineering application known as site-directed mutagenesis, researchers can alter the protein sequence and hence its structure, cellular localization, and susceptibility to regulation. This technique even allows the incorporation of unnatural amino acids into proteins, using modified tRNAs, and may allow the rational design of new proteins with novel properties. Proteomics The total complement of proteins present at a time in a cell or cell type is known as its proteome, and the study of such large-scale data sets defines the field of proteomics, named by analogy to the related field of genomics.
Key experimental techniques in proteomics include 2D electrophoresis, which allows the separation of many proteins, mass spectrometry, which allows rapid high-throughput identification of proteins and sequencing of peptides (most often after in-gel digestion), protein microarrays, which allow the detection of the relative levels of the various proteins present in a cell, and two-hybrid screening, which allows the systematic exploration of protein–protein interactions. The total complement of biologically possible such interactions is known as the interactome. A systematic attempt to determine the structures of proteins representing every possible fold is known as structural genomics. Structure determination Discovering the tertiary structure of a protein, or the quaternary structure of its complexes, can provide important clues about how the protein performs its function and how it can be affected, i.e. in drug design. As proteins are too small to be seen under a light microscope, other methods have to be employed to determine their structure. Common experimental methods include X-ray crystallography and NMR spectroscopy, both of which can produce structural information at atomic resolution. However, NMR experiments are able to provide information from which a subset of distances between pairs of atoms can be estimated, and the final possible conformations for a protein are determined by solving a distance geometry problem. Dual polarisation interferometry is a quantitative analytical method for measuring the overall protein conformation and conformational changes due to interactions or other stimulus. Circular dichroism is another laboratory technique for determining internal β-sheet / α-helical composition of proteins. Cryoelectron microscopy is used to produce lower-resolution structural information about very large protein complexes, including assembled viruses; a variant known as electron crystallography can also produce high-resolution information in some cases, especially for two-dimensional crystals of membrane proteins. Solved structures are usually deposited in the Protein Data Bank (PDB), a freely available resource from which structural data about thousands of proteins can be obtained in the form of Cartesian coordinates for each atom in the protein. Many more gene sequences are known than protein structures. Further, the set of solved structures is biased toward proteins that can be easily subjected to the conditions required in X-ray crystallography, one of the major structure determination methods. In particular, globular proteins are comparatively easy to crystallize in preparation for X-ray crystallography. Membrane proteins and large protein complexes, by contrast, are difficult to crystallize and are underrepresented in the PDB. Structural genomics initiatives have attempted to remedy these deficiencies by systematically solving representative structures of major fold classes. Protein structure prediction methods attempt to provide a means of generating a plausible structure for proteins whose structures have not been experimentally determined. Structure prediction Complementary to the field of structural genomics, protein structure prediction develops efficient mathematical models of proteins to computationally predict the molecular formations in theory, instead of detecting structures with laboratory observation. 
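However a structure is obtained, it is ultimately represented as a list of atomic coordinates of the kind deposited in the PDB. As a rough sketch of what such data look like, the snippet below parses a couple of invented ATOM records written in the PDB's fixed-column text format; only a few fields are extracted, and the coordinates are not taken from any real PDB entry.

```python
def parse_atom_records(pdb_text: str):
    """Extract (atom name, residue, chain, x, y, z) from ATOM/HETATM records.

    Relies on the fixed column layout of PDB-format files; most fields are ignored.
    """
    atoms = []
    for line in pdb_text.splitlines():
        if line.startswith("ATOM") or line.startswith("HETATM"):
            atoms.append({
                "name": line[12:16].strip(),
                "residue": line[17:20].strip(),
                "chain": line[21].strip(),
                "x": float(line[30:38]),
                "y": float(line[38:46]),
                "z": float(line[46:54]),
            })
    return atoms

# Two made-up ATOM records in PDB-like fixed-column format.
example = (
    "ATOM      1  N   ALA A   1      11.104   6.134  -6.504  1.00 10.00           N\n"
    "ATOM      2  CA  ALA A   1      11.639   6.071  -5.147  1.00 10.00           C\n"
)
for atom in parse_atom_records(example):
    print(atom["name"], atom["residue"], atom["chain"], atom["x"], atom["y"], atom["z"])
```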
The most successful type of structure prediction, known as homology modeling, relies on the existence of a "template" structure with sequence similarity to the protein being modeled; structural genomics' goal is to provide sufficient representation in solved structures to model most of those that remain. Although producing accurate models remains a challenge when only distantly related template structures are available, it has been suggested that sequence alignment is the bottleneck in this process, as quite accurate models can be produced if a "perfect" sequence alignment is known. Many structure prediction methods have served to inform the emerging field of protein engineering, in which novel protein folds have already been designed. Many proteins (in eukaryotes, roughly 33%) contain large unstructured but biologically functional segments and can be classified as intrinsically disordered proteins. Predicting and analysing protein disorder is, therefore, an important part of protein structure characterisation. Bioinformatics A vast array of computational methods have been developed to analyze the structure, function and evolution of proteins. The development of such tools has been driven by the large amount of genomic and proteomic data available for a variety of organisms, including the human genome. It is simply impossible to study all proteins experimentally, hence only a few are subjected to laboratory experiments while computational tools are used to extrapolate to similar proteins. Such homologous proteins can be efficiently identified in distantly related organisms by sequence alignment. Genome and gene sequences can be searched by a variety of tools for certain properties. Sequence profiling tools can find restriction enzyme sites, open reading frames in nucleotide sequences, and predict secondary structures. Phylogenetic trees can be constructed, and evolutionary hypotheses about the ancestry of modern organisms and the genes they express can be developed, using special software such as ClustalW. The field of bioinformatics is now indispensable for the analysis of genes and proteins. In silico simulation of dynamical processes A more complex computational problem is the prediction of intermolecular interactions, such as in molecular docking, protein folding, protein–protein interaction and chemical reactivity. Mathematical models to simulate these dynamical processes involve molecular mechanics, in particular, molecular dynamics. In this regard, in silico simulations discovered the folding of small α-helical protein domains such as the villin headpiece and the HIV accessory protein, and hybrid methods combining standard molecular dynamics with quantum mechanical calculations have explored the electronic states of rhodopsins. Beyond classical molecular dynamics, quantum dynamics methods allow the simulation of proteins in atomistic detail with an accurate description of quantum mechanical effects. Examples include the multi-layer multi-configuration time-dependent Hartree (MCTDH) method and the hierarchical equations of motion (HEOM) approach, which have been applied to plant cryptochromes and bacterial light-harvesting complexes, respectively. Both quantum and classical mechanical simulations of biological-scale systems are extremely computationally demanding, so distributed computing initiatives (for example, the Folding@home project) facilitate the molecular modeling by exploiting advances in GPU parallel processing and Monte Carlo techniques.
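As a heavily simplified sketch of what a molecular dynamics step involves, the code below integrates a single particle in a one-dimensional harmonic well with the velocity Verlet scheme, the same basic update that protein simulations apply to every atom using far richer force fields. All numerical values are arbitrary illustration values.

```python
# Velocity Verlet integration of one particle in a 1D harmonic well U(x) = 0.5*k*x^2.
# Real MD codes perform the same kind of update for every atom, with forces from a
# molecular force field (bonds, angles, electrostatics, van der Waals, ...).

def force(x: float, k: float = 1.0) -> float:
    return -k * x  # F = -dU/dx for the harmonic potential

def velocity_verlet(x: float, v: float, dt: float, steps: int, m: float = 1.0):
    f = force(x)
    trajectory = [(0.0, x, v)]
    for step in range(1, steps + 1):
        x = x + v * dt + 0.5 * (f / m) * dt * dt   # position update
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / m * dt          # velocity update with averaged force
        f = f_new
        trajectory.append((step * dt, x, v))
    return trajectory

# Start displaced from equilibrium and watch it oscillate.
for t, x, v in velocity_verlet(x=1.0, v=0.0, dt=0.1, steps=5):
    print(f"t={t:.1f}  x={x:+.4f}  v={v:+.4f}")
```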
Chemical analysis The total nitrogen content of organic matter is mainly formed by the amino groups in proteins. The Total Kjeldahl Nitrogen (TKN) is a measure of nitrogen widely used in the analysis of (waste) water, soil, food, feed and organic matter in general. As the name suggests, the Kjeldahl method is applied. More sensitive methods are available. Nutrition Most microorganisms and plants can biosynthesize all 20 standard amino acids, while animals (including humans) must obtain some of the amino acids from the diet. The amino acids that an organism cannot synthesize on its own are referred to as essential amino acids. Key enzymes that synthesize certain amino acids are not present in animals—such as aspartokinase, which catalyses the first step in the synthesis of lysine, methionine, and threonine from aspartate. If amino acids are present in the environment, microorganisms can conserve energy by taking up the amino acids from their surroundings and downregulating their biosynthetic pathways. In animals, amino acids are obtained through the consumption of foods containing protein. Ingested proteins are then broken down into amino acids through digestion, which typically involves denaturation of the protein through exposure to acid and hydrolysis by enzymes called proteases. Some ingested amino acids are used for protein biosynthesis, while others are converted to glucose through gluconeogenesis, or fed into the citric acid cycle. This use of protein as a fuel is particularly important under starvation conditions as it allows the body's own proteins to be used to support life, particularly those found in muscle. In animals such as dogs and cats, protein maintains the health and quality of the skin by promoting hair follicle growth and keratinization, and thus reducing the likelihood of skin problems producing malodours. Poor-quality proteins also have a role regarding gastrointestinal health, increasing the potential for flatulence and odorous compounds in dogs because when proteins reach the colon in an undigested state, they are fermented producing hydrogen sulfide gas, indole, and skatole. Dogs and cats digest animal proteins better than those from plants, but products of low-quality animal origin are poorly digested, including skin, feathers, and connective tissue. Mechanical properties The mechanical properties of proteins are highly diverse and are often central to their biological function, as in the case of proteins like keratin and collagen. For instance, the ability of muscle tissue to continually expand and contract is directly tied to the elastic properties of their underlying protein makeup. Beyond fibrous proteins, the conformational dynamics of enzymes and the structure of biological membranes, among other biological functions, are governed by the mechanical properties of the proteins. Outside of their biological context, the unique mechanical properties of many proteins, along with their relative sustainability when compared to synthetic polymers, have made them desirable targets for next-generation materials design. Young's modulus Young's modulus, E, is calculated as the axial stress σ over the resulting strain ε. It is a measure of the relative stiffness of a material. In the context of proteins, this stiffness often directly correlates to biological function. 
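Because E is simply the ratio of axial stress to the resulting strain in the linear regime, a short calculation illustrates the definition; the force, cross-sectional area, and extension below are invented for the example rather than measured for any particular protein.

```python
def youngs_modulus(force_N: float, area_m2: float, delta_L: float, L0: float) -> float:
    """E = (F/A) / (dL/L0): axial stress divided by the resulting strain."""
    stress = force_N / area_m2   # Pa
    strain = delta_L / L0        # dimensionless
    return stress / strain       # Pa

# Hypothetical numbers for a protein fiber pulled along its axis.
E = youngs_modulus(force_N=1e-10,   # 100 pN, a typical single-molecule pulling force
                   area_m2=1e-17,   # roughly a few-nanometre-square cross-section
                   delta_L=1e-9,    # 1 nm extension
                   L0=5e-8)         # 50 nm initial length
print(f"E = {E/1e6:.0f} MPa")       # stress 10 MPa / strain 0.02 -> 500 MPa
```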
For example, collagen, found in connective tissue, bones, and cartilage, and keratin, found in nails, claws, and hair, have observed stiffnesses that are several orders of magnitude higher than that of elastin, which is thought to give elasticity to structures such as blood vessels, pulmonary tissue, and bladder tissue, among others. In comparison to this, globular proteins, such as Bovine Serum Albumin, which float relatively freely in the cytosol and often function as enzymes (and thus undergo frequent conformational changes), have comparably much lower Young's moduli. The Young's modulus of a single protein can be found through molecular dynamics simulation. Using either atomistic force-fields, such as CHARMM or GROMOS, or coarse-grained forcefields like Martini, a single protein molecule can be stretched by a uniaxial force while the resulting extension is recorded in order to calculate the strain. Experimentally, methods such as atomic force microscopy can be used to obtain similar data. At the macroscopic level, the Young's modulus of cross-linked protein networks can be obtained through more traditional mechanical testing. Experimentally observed values for a few proteins can be seen below. Viscosity In addition to serving as enzymes within the cell, globular proteins often act as key transport molecules. For instance, Serum Albumins, a key component of blood, are necessary for the transport of a multitude of small molecules throughout the body. Because of this, the concentration-dependent behavior of these proteins in solution is directly tied to the function of the circulatory system. One way of quantifying this behavior is through the viscosity of the solution. Viscosity, η, is a measure of a fluid's resistance to deformation. It can be calculated as the ratio between the applied stress and the rate of change of the resulting shear strain, that is, the rate of deformation. Viscosity of complex liquid mixtures, such as blood, often depends strongly on temperature and solute concentration. For serum albumin, specifically bovine serum albumin, an empirical relation between viscosity, temperature, and concentration can be used, where c is the concentration, T is the temperature, R is the gas constant, and α, β, B, D, and ΔE are all material-based property constants. This relation has the form of an Arrhenius equation, assigning viscosity an exponential dependence on temperature and concentration. See also References Further reading Textbooks External links Databases and projects NCBI Entrez Protein database NCBI Protein Structure database Human Protein Reference Database Human Proteinpedia Folding@Home (Stanford University) Protein Databank in Europe (see also PDBeQuips, short articles and tutorials on interesting PDB structures) Research Collaboratory for Structural Bioinformatics (see also Molecule of the Month, presenting short accounts on selected proteins from the PDB) Proteopedia – Life in 3D: rotatable, zoomable 3D model with wiki annotations for every known protein molecular structure. UniProt the Universal Protein Resource Tutorials and educational websites "An Introduction to Proteins" from HOPES (Huntington's Disease Outreach Project for Education at Stanford) Proteins: Biogenesis to Degradation – The Virtual Library of Biochemistry and Cell Biology Molecular biology Proteomics
23635
https://en.wikipedia.org/wiki/Physical%20chemistry
Physical chemistry
Physical chemistry is the study of macroscopic and microscopic phenomena in chemical systems in terms of the principles, practices, and concepts of physics such as motion, energy, force, time, thermodynamics, quantum chemistry, statistical mechanics, analytical dynamics and chemical equilibria. Physical chemistry, in contrast to chemical physics, is predominantly (but not always) a supra-molecular science, as the majority of the principles on which it was founded relate to the bulk rather than the molecular or atomic structure alone (for example, chemical equilibrium and colloids). Some of the relationships that physical chemistry strives to understand include the effects of: Intermolecular forces that act upon the physical properties of materials (plasticity, tensile strength, surface tension in liquids). Reaction kinetics on the rate of a reaction. The identity of ions and the electrical conductivity of materials. Surface science and electrochemistry of cell membranes. Interaction of one body with another in terms of quantities of heat and work, a field called thermodynamics. Transfer of heat between a chemical system and its surroundings during a change of phase or a chemical reaction, a field called thermochemistry. Study of colligative properties, which depend on the number of species present in solution. The number of phases, the number of components, and the degrees of freedom (or variance), which can be correlated with one another with the help of the phase rule. Reactions of electrochemical cells. Behaviour of microscopic systems using quantum mechanics and macroscopic systems using statistical thermodynamics. Calculation of the energy of electron movement in molecules and metal complexes. Key concepts The key concepts of physical chemistry are the ways in which pure physics is applied to chemical problems. One of the key concepts in classical chemistry is that all chemical compounds can be described as groups of atoms bonded together and chemical reactions can be described as the making and breaking of those bonds. Predicting the properties of chemical compounds from a description of atoms and how they bond is one of the major goals of physical chemistry. To describe the atoms and bonds precisely, it is necessary to know both where the nuclei of the atoms are, and how electrons are distributed around them. Disciplines Quantum chemistry, a subfield of physical chemistry especially concerned with the application of quantum mechanics to chemical problems, provides tools to determine how strong and what shape bonds are, how nuclei move, and how light can be absorbed or emitted by a chemical compound. Spectroscopy is the related sub-discipline of physical chemistry which is specifically concerned with the interaction of electromagnetic radiation with matter. Another set of important questions in chemistry concerns what kind of reactions can happen spontaneously and which properties are possible for a given chemical mixture. This is studied in chemical thermodynamics, which sets limits on quantities like how far a reaction can proceed, or how much energy can be converted into work in an internal combustion engine, and which provides links between properties like the thermal expansion coefficient and rate of change of entropy with pressure for a gas or a liquid. It can frequently be used to assess whether a reactor or engine design is feasible, or to check the validity of experimental data. To a limited extent, quasi-equilibrium and non-equilibrium thermodynamics can describe irreversible changes.
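As a small illustration of the kind of feasibility check mentioned above, the Gibbs energy change ΔG = ΔH − TΔS indicates whether a process can proceed spontaneously at constant temperature and pressure (it can when ΔG < 0). The enthalpy and entropy values in the sketch below are placeholders rather than data for any particular reaction.

```python
def gibbs_energy_change(delta_H: float, delta_S: float, T: float) -> float:
    """Gibbs energy change: delta_H - T * delta_S (J/mol, J/(mol*K), K)."""
    return delta_H - T * delta_S

# Hypothetical reaction: exothermic but with an entropy decrease.
dH = -50_000.0   # J/mol
dS = -120.0      # J/(mol*K)
for T in (298.0, 500.0):
    dG = gibbs_energy_change(dH, dS, T)
    verdict = "spontaneous" if dG < 0 else "not spontaneous"
    print(f"T = {T:.0f} K: dG = {dG/1000:+.1f} kJ/mol ({verdict})")
# At low T the enthalpy term dominates (dG < 0); at high T the -T*dS term
# makes dG positive, so the same reaction is no longer favourable.
```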
However, classical thermodynamics is mostly concerned with systems in equilibrium and reversible changes and not what actually does happen, or how fast, away from equilibrium. Which reactions do occur and how fast is the subject of chemical kinetics, another branch of physical chemistry. A key idea in chemical kinetics is that for reactants to react and form products, most chemical species must go through transition states which are higher in energy than either the reactants or the products and serve as a barrier to reaction. In general, the higher the barrier, the slower the reaction. A second is that most chemical reactions occur as a sequence of elementary reactions, each with its own transition state. Key questions in kinetics include how the rate of reaction depends on temperature and on the concentrations of reactants and catalysts in the reaction mixture, as well as how catalysts and reaction conditions can be engineered to optimize the reaction rate. The fact that how fast reactions occur can often be specified with just a few concentrations and a temperature, instead of needing to know all the positions and speeds of every molecule in a mixture, is a special case of another key concept in physical chemistry, which is that to the extent an engineer needs to know, everything going on in a mixture of very large numbers (perhaps of the order of the Avogadro constant, 6 x 1023) of particles can often be described by just a few variables like pressure, temperature, and concentration. The precise reasons for this are described in statistical mechanics, a specialty within physical chemistry which is also shared with physics. Statistical mechanics also provides ways to predict the properties we see in everyday life from molecular properties without relying on empirical correlations based on chemical similarities. History The term "physical chemistry" was coined by Mikhail Lomonosov in 1752, when he presented a lecture course entitled "A Course in True Physical Chemistry" () before the students of Petersburg University. In the preamble to these lectures he gives the definition: "Physical chemistry is the science that must explain under provisions of physical experiments the reason for what is happening in complex bodies through chemical operations". Modern physical chemistry originated in the 1860s to 1880s with work on chemical thermodynamics, electrolytes in solutions, chemical kinetics and other subjects. One milestone was the publication in 1876 by Josiah Willard Gibbs of his paper, On the Equilibrium of Heterogeneous Substances. This paper introduced several of the cornerstones of physical chemistry, such as Gibbs energy, chemical potentials, and Gibbs' phase rule. The first scientific journal specifically in the field of physical chemistry was the German journal, Zeitschrift für Physikalische Chemie, founded in 1887 by Wilhelm Ostwald and Jacobus Henricus van 't Hoff. Together with Svante August Arrhenius, these were the leading figures in physical chemistry in the late 19th century and early 20th century. All three were awarded the Nobel Prize in Chemistry between 1901 and 1909. Developments in the following decades include the application of statistical mechanics to chemical systems and work on colloids and surface chemistry, where Irving Langmuir made many contributions. Another important step was the development of quantum mechanics into quantum chemistry from the 1930s, where Linus Pauling was one of the leading names. 
Theoretical developments have gone hand in hand with developments in experimental methods, where the use of different forms of spectroscopy, such as infrared spectroscopy, microwave spectroscopy, electron paramagnetic resonance and nuclear magnetic resonance spectroscopy, is probably the most important 20th century development. Further development in physical chemistry may be attributed to discoveries in nuclear chemistry, especially in isotope separation (before and during World War II), more recent discoveries in astrochemistry, as well as the development of calculation algorithms in the field of "additive physicochemical properties" (practically all physicochemical properties, such as boiling point, critical point, surface tension, vapor pressure, etc.—more than 20 in all—can be precisely calculated from chemical structure alone, even if the chemical molecule remains unsynthesized), and herein lies the practical importance of contemporary physical chemistry. See Group contribution method, Lydersen method, Joback method, Benson group increment theory, quantitative structure–activity relationship Journals Some journals that deal with physical chemistry include Zeitschrift für Physikalische Chemie (1887) Journal of Physical Chemistry A (from 1896 as Journal of Physical Chemistry, renamed in 1997) Physical Chemistry Chemical Physics (from 1999, formerly Faraday Transactions with a history dating back to 1905) Macromolecular Chemistry and Physics (1947) Annual Review of Physical Chemistry (1950) Molecular Physics (1957) Journal of Physical Organic Chemistry (1988) Journal of Physical Chemistry B (1997) ChemPhysChem (2000) Journal of Physical Chemistry C (2007) Journal of Physical Chemistry Letters (from 2010, combined letters previously published in the separate journals) Historical journals that covered both chemistry and physics include Annales de chimie et de physique (started in 1789, published under the name given here from 1815 to 1914). Branches and related topics Chemical thermodynamics Chemical kinetics Statistical mechanics Quantum chemistry Electrochemistry Photochemistry Surface chemistry Solid-state chemistry Spectroscopy Biophysical chemistry Materials science Physical organic chemistry Micromeritics See also List of important publications in chemistry#Physical chemistry List of unsolved problems in chemistry#Physical chemistry problems Physical biochemistry :Category:Physical chemists References External links The World of Physical Chemistry (Keith J. Laidler, 1993) Physical Chemistry from Ostwald to Pauling (John W. Servos, 1996) Physical Chemistry: neither Fish nor Fowl? (Joachim Schummer, The Autonomy of Chemistry, Würzburg, Königshausen & Neumann, 1998, pp. 135–148) The Cambridge History of Science: The modern physical and mathematical sciences (Mary Jo Nye, 2003)
23636
https://en.wikipedia.org/wiki/Perimeter
Perimeter
A perimeter is a closed path that encompasses, surrounds, or outlines either a two-dimensional shape or a one-dimensional length. The perimeter of a circle or an ellipse is called its circumference. Calculating the perimeter has several practical applications. A calculated perimeter is the length of fence required to surround a yard or garden. The perimeter of a wheel/circle (its circumference) describes how far it will roll in one revolution. Similarly, the amount of string wound around a spool is related to the spool's perimeter; if the length of the string were exact, it would equal the perimeter. Formulas The perimeter is the distance around a shape. Perimeters for more general shapes can be calculated, as for any path, with L = ∫ ds, where L is the length of the path and ds is an infinitesimal line element. Both of these must be replaced by algebraic forms in order to be practically calculated. If the perimeter is given as a closed piecewise smooth plane curve γ(t) = (x(t), y(t)) for t in [a, b], then its length L can be computed as follows: L = ∫ from a to b of √(x′(t)² + y′(t)²) dt. A generalized notion of perimeter, which includes hypersurfaces bounding volumes in n-dimensional Euclidean spaces, is described by the theory of Caccioppoli sets. Polygons Polygons are fundamental to determining perimeters, not only because they are the simplest shapes but also because the perimeters of many shapes are calculated by approximating them with sequences of polygons tending to these shapes. The first mathematician known to have used this kind of reasoning is Archimedes, who approximated the perimeter of a circle by surrounding it with regular polygons. The perimeter of a polygon equals the sum of the lengths of its sides (edges). In particular, the perimeter of a rectangle of width w and length ℓ equals 2(w + ℓ). An equilateral polygon is a polygon which has all sides of the same length (for example, a rhombus is a 4-sided equilateral polygon). To calculate the perimeter of an equilateral polygon, one must multiply the common length of the sides by the number of sides. A regular polygon may be characterized by the number of its sides and by its circumradius, that is to say, the constant distance between its centre and each of its vertices. The length of its sides can be calculated using trigonometry. If R is a regular polygon's circumradius and n is the number of its sides, then its perimeter is 2nR sin(π/n). A splitter of a triangle is a cevian (a segment from a vertex to the opposite side) that divides the perimeter into two equal lengths, this common length being called the semiperimeter of the triangle. The three splitters of a triangle all intersect each other at the Nagel point of the triangle. A cleaver of a triangle is a segment from the midpoint of a side of a triangle to the opposite side such that the perimeter is divided into two equal lengths. The three cleavers of a triangle all intersect each other at the triangle's Spieker center. Circumference of a circle The perimeter of a circle, often called the circumference, is proportional to its diameter and its radius. That is to say, there exists a constant number π (pi, the Greek p for perimeter), such that if P is the circle's perimeter and D its diameter then P = πD. In terms of the radius r of the circle, this formula becomes P = 2πr. To calculate a circle's perimeter, knowledge of its radius or diameter and the number π suffices. The problem is that π is not rational (it cannot be expressed as the quotient of two integers), nor is it algebraic (it is not a root of a polynomial equation with rational coefficients).
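As a concrete illustration of the formulas above, the sketch below computes the perimeter of a rectangle, of a regular polygon from its circumradius, and of a circle, and then follows Archimedes' idea by showing how the perimeters of inscribed regular polygons approach the circumference of a unit circle as the number of sides grows.

```python
import math

def rectangle_perimeter(width: float, length: float) -> float:
    return 2 * (width + length)

def regular_polygon_perimeter(n_sides: int, circumradius: float) -> float:
    # Each side of a regular n-gon inscribed in a circle of radius R has
    # length 2*R*sin(pi/n), so the perimeter is n times that.
    return 2 * n_sides * circumradius * math.sin(math.pi / n_sides)

def circle_circumference(radius: float) -> float:
    return 2 * math.pi * radius

print(rectangle_perimeter(3, 5))      # 16
print(circle_circumference(1.0))      # 6.2831...

# Archimedes-style approximation: inscribed regular polygons in a unit circle.
for n in (6, 24, 96, 1000):
    print(n, regular_polygon_perimeter(n, 1.0))   # approaches 2*pi, about 6.28319
```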
So, obtaining an accurate approximation of π is important in the calculation. The computation of the digits of π is relevant to many fields, such as mathematical analysis, algorithmics and computer science. Perception of perimeter The perimeter and the area are two main measures of geometric figures. Confusing them is a common error, as well as believing that the greater one of them is, the greater the other must be. Indeed, a commonplace observation is that an enlargement (or a reduction) of a shape makes its area grow (or decrease) as well as its perimeter. For example, if a field is drawn on a 1/k scale map, the actual field perimeter can be calculated by multiplying the drawing perimeter by k. The real area is k² times the area of the shape on the map. Nevertheless, there is no relation between the area and the perimeter of an ordinary shape. For example, the perimeter of a rectangle of width 0.001 and length 1000 is slightly above 2000, while the perimeter of a rectangle of width 0.5 and length 2 is 5. Both areas are equal to 1. Proclus (5th century) reported that Greek peasants "fairly" parted fields relying on their perimeters. However, a field's production is proportional to its area, not to its perimeter, so many naive peasants may have gotten fields with long perimeters but small areas (thus, few crops). If one removes a piece from a figure, its area decreases but its perimeter may not. The convex hull of a figure may be visualized as the shape formed by a rubber band stretched around it. In the animated picture on the left, all the figures have the same convex hull: the big, first hexagon. Isoperimetry The isoperimetric problem is to determine a figure with the largest area, amongst those having a given perimeter. The solution is intuitive; it is the circle. In particular, this can be used to explain why drops of fat on a broth surface are circular. This problem may seem simple, but its mathematical proof requires some sophisticated theorems. The isoperimetric problem is sometimes simplified by restricting the type of figures to be used. In particular, one may seek the quadrilateral, the triangle, or another particular figure with the largest area amongst those of the same kind having a given perimeter. The solution to the quadrilateral isoperimetric problem is the square, and the solution to the triangle problem is the equilateral triangle. In general, the polygon with n sides having the largest area and a given perimeter is the regular polygon, which is closer to being a circle than is any irregular polygon with the same number of sides. Etymology The word comes from the Greek περίμετρος perimetros, from περί peri "around" and μέτρον metron "measure". See also Arclength Area Coastline paradox Girth (geometry) Pythagorean theorem Surface area Volume Wetted perimeter References External links Elementary geometry Length
23637
https://en.wikipedia.org/wiki/Phase%20%28matter%29
Phase (matter)
In the physical sciences, a phase is a region of material that is chemically uniform, physically distinct, and (often) mechanically separable. In a system consisting of ice and water in a glass jar, the ice cubes are one phase, the water is a second phase, and the humid air is a third phase over the ice and water. The glass of the jar is a different material, in its own separate phase. (See .) More precisely, a phase is a region of space (a thermodynamic system), throughout which all physical properties of a material are essentially uniform. Examples of physical properties include density, index of refraction, magnetization and chemical composition. The term phase is sometimes used as a synonym for state of matter, but there can be several immiscible phases of the same state of matter (as where oil and water separate into distinct phases, both in the liquid state). It is also sometimes used to refer to the equilibrium states shown on a phase diagram, described in terms of state variables such as pressure and temperature and demarcated by phase boundaries. (Phase boundaries relate to changes in the organization of matter, including for example a subtle change within the solid state from one crystal structure to another, as well as state-changes such as between solid and liquid.) These two usages are not commensurate with the formal definition given above and the intended meaning must be determined in part from the context in which the term is used. Types of phases Distinct phases may be described as different states of matter such as gas, liquid, solid, plasma or Bose–Einstein condensate. Useful mesophases between solid and liquid form other states of matter. Distinct phases may also exist within a given state of matter. As shown in the diagram for iron alloys, several phases exist for both the solid and liquid states. Phases may also be differentiated based on solubility as in polar (hydrophilic) or non-polar (hydrophobic). A mixture of water (a polar liquid) and oil (a non-polar liquid) will spontaneously separate into two phases. Water has a very low solubility (is insoluble) in oil, and oil has a low solubility in water. Solubility is the maximum amount of a solute that can dissolve in a solvent before the solute ceases to dissolve and remains in a separate phase. A mixture can separate into more than two liquid phases and the concept of phase separation extends to solids, i.e., solids can form solid solutions or crystallize into distinct crystal phases. Metal pairs that are mutually soluble can form alloys, whereas metal pairs that are mutually insoluble cannot. As many as eight immiscible liquid phases have been observed. Mutually immiscible liquid phases are formed from water (aqueous phase), hydrophobic organic solvents, perfluorocarbons (fluorous phase), silicones, several different metals, and also from molten phosphorus. Not all organic solvents are completely miscible, e.g. a mixture of ethylene glycol and toluene may separate into two distinct organic phases. Phases do not need to macroscopically separate spontaneously. Emulsions and colloids are examples of immiscible phase pair combinations that do not physically separate. Phase equilibrium Left to equilibration, many compositions will form a uniform single phase, but depending on the temperature and pressure even a single substance may separate into two or more distinct phases. Within each phase, the properties are uniform but between the two phases properties differ. 
Water in a closed jar with an air space over it forms a two-phase system. Most of the water is in the liquid phase, where it is held by the mutual attraction of water molecules. Even at equilibrium molecules are constantly in motion and, once in a while, a molecule in the liquid phase gains enough kinetic energy to break away from the liquid phase and enter the gas phase. Likewise, every once in a while a vapor molecule collides with the liquid surface and condenses into the liquid. At equilibrium, evaporation and condensation processes exactly balance and there is no net change in the volume of either phase. At room temperature and pressure, the water jar reaches equilibrium when the air over the water has a humidity of about 3%. This percentage increases as the temperature goes up. At 100 °C and atmospheric pressure, equilibrium is not reached until the air is 100% water. If the liquid is heated a little over 100 °C, the transition from liquid to gas will occur not only at the surface but throughout the liquid volume: the water boils. Number of phases For a given composition, only certain phases are possible at a given temperature and pressure. The number and type of phases that will form is hard to predict and is usually determined by experiment. The results of such experiments can be plotted in phase diagrams. The phase diagram shown here is for a single component system. In this simple system, phases that are possible, depend only on pressure and temperature. The markings show points where two or more phases can co-exist in equilibrium. At temperatures and pressures away from the markings, there will be only one phase at equilibrium. In the diagram, the blue line marking the boundary between liquid and gas does not continue indefinitely, but terminates at a point called the critical point. As the temperature and pressure approach the critical point, the properties of the liquid and gas become progressively more similar. At the critical point, the liquid and gas become indistinguishable. Above the critical point, there are no longer separate liquid and gas phases: there is only a generic fluid phase referred to as a supercritical fluid. In water, the critical point occurs at around 647 K (374 °C or 705 °F) and 22.064 MPa. An unusual feature of the water phase diagram is that the solid–liquid phase line (illustrated by the dotted green line) has a negative slope. For most substances, the slope is positive as exemplified by the dark green line. This unusual feature of water is related to ice having a lower density than liquid water. Increasing the pressure drives the water into the higher density phase, which causes melting. Another interesting though not unusual feature of the phase diagram is the point where the solid–liquid phase line meets the liquid–gas phase line. The intersection is referred to as the triple point. At the triple point, all three phases can coexist. Experimentally, phase lines are relatively easy to map due to the interdependence of temperature and pressure that develops when multiple phases form. Gibbs' phase rule suggests that different phases are completely determined by these variables. Consider a test apparatus consisting of a closed and well-insulated cylinder equipped with a piston. By controlling the temperature and the pressure, the system can be brought to any point on the phase diagram. 
From a point in the solid stability region (left side of the diagram), increasing the temperature of the system would bring it into the region where a liquid or a gas is the equilibrium phase (depending on the pressure). If the piston is slowly lowered, the system will trace a curve of increasing temperature and pressure within the gas region of the phase diagram. At the point where gas begins to condense to liquid, the direction of the temperature and pressure curve will abruptly change to trace along the phase line until all of the water has condensed. Interfacial phenomena Between two phases in equilibrium there is a narrow region where the properties are not that of either phase. Although this region may be very thin, it can have significant and easily observable effects, such as causing a liquid to exhibit surface tension. In mixtures, some components may preferentially move toward the interface. In terms of modeling, describing, or understanding the behavior of a particular system, it may be efficacious to treat the interfacial region as a separate phase. Crystal phases A single material may have several distinct solid states capable of forming separate phases. Water is a well-known example of such a material. For example, water ice is ordinarily found in the hexagonal form ice Ih, but can also exist as the cubic ice Ic, the rhombohedral ice II, and many other forms. Polymorphism is the ability of a solid to exist in more than one crystal form. For pure chemical elements, polymorphism is known as allotropy. For example, diamond, graphite, and fullerenes are different allotropes of carbon. Phase transitions When a substance undergoes a phase transition (changes from one state of matter to another) it usually either takes up or releases energy. For example, when water evaporates, the increase in kinetic energy as the evaporating molecules escape the attractive forces of the liquid is reflected in a decrease in temperature. The energy required to induce the phase transition is taken from the internal thermal energy of the water, which cools the liquid to a lower temperature; hence evaporation is useful for cooling. See Enthalpy of vaporization. The reverse process, condensation, releases heat. The heat energy, or enthalpy, associated with a solid to liquid transition is the enthalpy of fusion and that associated with a solid to gas transition is the enthalpy of sublimation. Phases out of equilibrium While phases of matter are traditionally defined for systems in thermal equilibrium, work on quantum many-body localized (MBL) systems has provided a framework for defining phases out of equilibrium. MBL phases never reach thermal equilibrium, and can allow for new forms of order disallowed in equilibrium via a phenomenon known as localization protected quantum order. The transitions between different MBL phases and between MBL and thermalizing phases are novel dynamical phase transitions whose properties are active areas of research. Notes References External links French physicists find a solution that reversibly solidifies with a rise in temperature – α-cyclodextrin, water, and 4-methylpyridine Engineering thermodynamics Condensed matter physics Concepts in physics
23638
https://en.wikipedia.org/wiki/Outline%20of%20physical%20science
Outline of physical science
Physical science is a branch of natural science that studies non-living systems, in contrast to life science. It in turn has many branches, each referred to as a "physical science", together called the "physical sciences". Definition Physical science can be described as all of the following: A branch of science (a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe). A branch of natural science – natural science is a major branch of science that tries to explain and predict nature's phenomena, based on empirical evidence. In natural science, hypotheses must be verified scientifically to be regarded as scientific theory. Validity, accuracy, and social mechanisms ensuring quality control, such as peer review and repeatability of findings, are amongst the criteria and methods used for this purpose. Natural science can be broken into two main branches: life science (for example, biology) and physical science. Each of these branches, and all of their sub-branches, are referred to as natural sciences. Branches of physical science Physics – natural and physical science that involves the study of matter and its motion through space and time, along with related concepts such as energy and force. More broadly, it is the general analysis of nature, conducted in order to understand how the universe behaves. Branches of physics Astronomy – study of celestial objects (such as stars, galaxies, planets, moons, asteroids, comets and nebulae), the physics, chemistry, and evolution of such objects, and phenomena that originate outside the atmosphere of Earth, including supernova explosions, gamma-ray bursts, and cosmic microwave background radiation. Branches of astronomy Chemistry – studies the composition, structure, properties and change of matter. In this realm, chemistry deals with such topics as the properties of individual atoms, the manner in which atoms form chemical bonds in the formation of compounds, the interactions of substances through intermolecular forces to give matter its general properties, and the interactions between substances through chemical reactions to form different substances. Branches of chemistry Earth science – all-embracing term referring to the fields of science dealing with planet Earth. Earth science is the study of how the natural environment (ecosphere or Earth system) works and how it evolved to its current state. It includes the study of the atmosphere, hydrosphere, lithosphere, and biosphere. History of physical science History of physical science – history of the branch of natural science that studies non-living systems, in contrast to the life sciences. It in turn has many branches, each referred to as a "physical science", together called the "physical sciences". However, the term "physical" creates an unintended, somewhat arbitrary distinction, since many branches of physical science also study biological phenomena (organic chemistry, for example). The four main branches of physical science are astronomy, physics, chemistry, and the Earth sciences, which include meteorology and geology.
History of physics – history of the physical science that studies matter and its motion through space-time, and related concepts such as energy and force History of acoustics – history of the study of mechanical waves in solids, liquids, and gases (such as vibration and sound) History of agrophysics – history of the study of physics applied to agroecosystems History of soil physics – history of the study of soil physical properties and processes. History of astrophysics – history of the study of the physical aspects of celestial objects History of astronomy – history of the study of the universe beyond Earth, including its formation and development, and the evolution, physics, chemistry, meteorology, and motion of celestial objects (such as galaxies, planets, etc.) and phenomena that originate outside the atmosphere of Earth (such as the cosmic background radiation). History of astrodynamics – history of the application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets and other spacecraft. History of astrometry – history of the branch of astronomy that involves precise measurements of the positions and movements of stars and other celestial bodies. History of cosmology – history of the discipline that deals with the nature of the Universe as a whole. History of extragalactic astronomy – history of the branch of astronomy concerned with objects outside our own Milky Way Galaxy History of galactic astronomy – history of the study of our own Milky Way galaxy and all its contents. History of physical cosmology – history of the study of the largest-scale structures and dynamics of the universe, concerned with fundamental questions about its formation and evolution. History of planetary science – history of the scientific study of planets (including Earth), moons, and planetary systems, in particular those of the Solar System and the processes that form them. History of stellar astronomy – history of the natural science that deals with the study of celestial objects (such as stars, planets, comets, nebulae, star clusters, and galaxies) and phenomena that originate outside the atmosphere of Earth (such as cosmic background radiation) History of atmospheric physics – history of the study of the application of physics to the atmosphere History of atomic, molecular, and optical physics – history of the study of how matter and light interact History of biophysics – history of the study of physical processes relating to biology History of medical physics – history of the application of physics concepts, theories and methods to medicine. History of neurophysics – history of the branch of biophysics dealing with the nervous system. History of chemical physics – history of the branch of physics that studies chemical processes from the point of view of physics. History of computational physics – history of the study and implementation of numerical algorithms to solve problems in physics for which a quantitative theory already exists. History of condensed matter physics – history of the study of the physical properties of condensed phases of matter. History of cryogenics – history of the study of the production of very low temperatures (below −150 °C, −238 °F, or 123 K) and the behavior of materials at those temperatures.
History of dynamics – history of the study of the causes of motion and changes in motion History of econophysics – history of the interdisciplinary research field applying theories and methods originally developed by physicists in order to solve problems in economics History of electromagnetism – history of the branch of science concerned with the forces that occur between electrically charged particles. History of geophysics – history of the physics of the Earth and its environment in space; also the study of the Earth using quantitative physical methods History of materials physics – history of the use of physics to describe materials in many different ways such as force, heat, light and mechanics. History of mathematical physics – history of the application of mathematics to problems in physics and the development of mathematical methods for such applications and for the formulation of physical theories. History of mechanics – history of the branch of physics concerned with the behavior of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment. History of biomechanics – history of the study of the structure and function of biological systems such as humans, animals, plants, organs, and cells by means of the methods of mechanics. History of classical mechanics – history of one of the two major sub-fields of mechanics, which is concerned with the set of physical laws describing the motion of bodies under the action of a system of forces. History of continuum mechanics – history of the branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. History of fluid mechanics – history of the study of fluids and the forces on them. History of quantum mechanics – history of the branch of physics dealing with physical phenomena where the action is on the order of the Planck constant. History of thermodynamics – history of the branch of physical science concerned with heat and its relation to other forms of energy and work. History of nuclear physics – history of the field of physics that studies the building blocks and interactions of atomic nuclei. History of optics – history of the branch of physics which involves the behavior and properties of light, including its interactions with matter and the construction of instruments that use or detect it. History of particle physics – history of the branch of physics that studies the existence and interactions of particles that are the constituents of what is usually referred to as matter or radiation. History of psychophysics – history of the field that quantitatively investigates the relationship between physical stimuli and the sensations and perceptions they produce. History of plasma physics – history of the study of plasma, a state of matter similar to gas in which a certain portion of the particles are ionized. History of polymer physics – history of the field of physics that studies polymers, their fluctuations, mechanical properties, as well as the kinetics of reactions involving degradation and polymerization of polymers and monomers respectively. History of quantum physics – history of the branch of physics dealing with physical phenomena where the action is on the order of the Planck constant.
History of theory of relativity – history of the theories of special and general relativity developed by Albert Einstein History of statics – history of the branch of mechanics concerned with the analysis of loads (force, torque/moment) on physical systems in static equilibrium, that is, in a state where the relative positions of subsystems do not vary over time, or where components and structures are at a constant velocity. History of solid state physics – history of the study of rigid matter, or solids, through methods such as quantum mechanics, crystallography, electromagnetism, and metallurgy. History of vehicle dynamics – history of the dynamics of vehicles, here assumed to be ground vehicles. History of chemistry – history of the physical science of atomic matter (matter that is composed of chemical elements), especially its chemical reactions, but also including its properties, structure, composition, behavior, and changes as they relate to chemical reactions History of analytical chemistry – history of the study of the separation, identification, and quantification of the chemical components of natural and artificial materials. History of astrochemistry – history of the study of the abundance and reactions of chemical elements and molecules in the universe, and their interaction with radiation. History of cosmochemistry – history of the study of the chemical composition of matter in the universe and the processes that led to those compositions History of atmospheric chemistry – history of the branch of atmospheric science in which the chemistry of the Earth's atmosphere and that of other planets is studied. It is a multidisciplinary field of research and draws on environmental chemistry, physics, meteorology, computer modeling, oceanography, geology and volcanology, and other disciplines History of biochemistry – history of the study of chemical processes in living organisms, including, but not limited to, living matter. Biochemistry governs all living organisms and living processes. History of agrochemistry – history of the study of both chemistry and biochemistry which are important in agricultural production, the processing of raw products into foods and beverages, and in environmental monitoring and remediation. History of bioinorganic chemistry – history of the field that examines the role of metals in biology. History of bioorganic chemistry – history of the rapidly growing scientific discipline that combines organic chemistry and biochemistry. History of biophysical chemistry – history of the new branch of chemistry that covers a broad spectrum of research activities involving biological systems. History of environmental chemistry – history of the scientific study of the chemical and biochemical phenomena that occur in natural places. History of immunochemistry – history of the branch of chemistry that involves the study of the reactions and components of the immune system. History of medicinal chemistry – history of the discipline at the intersection of chemistry, especially synthetic organic chemistry, and pharmacology and various other biological specialties, where they are involved with design, chemical synthesis, and development for market of pharmaceutical agents (drugs). History of pharmacology – history of the branch of medicine and biology concerned with the study of drug action. History of natural product chemistry – history of the study of chemical compounds or substances produced by living organisms and found in nature, usually having a pharmacological or biological activity of use in pharmaceutical drug discovery and drug design.
History of neurochemistry – history of the specific study of neurochemicals, which include neurotransmitters and other molecules such as neuro-active drugs that influence neuron function. History of computational chemistry – history of the branch of chemistry that uses principles of computer science to assist in solving chemical problems. History of chemo-informatics – history of the use of computer and informational techniques, applied to a range of problems in the field of chemistry. History of molecular mechanics – history of the approach that uses Newtonian mechanics to model molecular systems. History of flavor chemistry – history of the use of chemistry to engineer artificial and natural flavors. History of flow chemistry – history of the technique in which a chemical reaction is run in a continuously flowing stream rather than in batch production. History of geochemistry – history of the study of the mechanisms behind major geological systems using chemistry History of aqueous geochemistry – history of the study of the role of various elements in watersheds, including copper, sulfur, and mercury, and how elemental fluxes are exchanged through atmospheric-terrestrial-aquatic interactions History of isotope geochemistry – history of the study of the relative and absolute concentrations of the elements and their isotopes using chemistry and geology History of ocean chemistry – history of the study of the chemistry of marine environments including the influences of different variables. History of organic geochemistry – history of the study of the impacts and processes that organisms have had on Earth History of regional, environmental and exploration geochemistry – history of the study of the spatial variation in the chemical composition of materials at the surface of the Earth History of inorganic chemistry – history of the branch of chemistry concerned with the properties and behavior of inorganic compounds. History of nuclear chemistry – history of the subfield of chemistry dealing with radioactivity, nuclear processes, and nuclear properties. History of radiochemistry – history of the chemistry of radioactive materials, where radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often within radiochemistry the absence of radioactivity leads to a substance being described as being inactive as the isotopes are stable). History of organic chemistry – history of the study of the structure, properties, composition, reactions, and preparation (by synthesis or by other means) of carbon-based compounds, hydrocarbons, and their derivatives. History of petrochemistry – history of the branch of chemistry that studies the transformation of crude oil (petroleum) and natural gas into useful products or raw materials. History of organometallic chemistry – history of the study of chemical compounds containing bonds between carbon and a metal. History of photochemistry – history of the study of chemical reactions that proceed with the absorption of light by atoms or molecules. History of physical chemistry – history of the study of macroscopic, atomic, subatomic, and particulate phenomena in chemical systems in terms of physical laws and concepts. History of chemical kinetics – history of the study of rates of chemical processes. History of chemical thermodynamics – history of the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics.
History of electrochemistry – history of the branch of chemistry that studies chemical reactions which take place in a solution at the interface of an electron conductor (a metal or a semiconductor) and an ionic conductor (the electrolyte), and which involve electron transfer between the electrode and the electrolyte or species in solution. History of femtochemistry – history of the science that studies chemical reactions on extremely short timescales, approximately 10⁻¹⁵ seconds (one femtosecond, hence the name). History of mathematical chemistry – history of the area of research engaged in novel applications of mathematics to chemistry; it concerns itself principally with the mathematical modeling of chemical phenomena. History of mechanochemistry – history of the coupling of mechanical and chemical phenomena on a molecular scale, including mechanical breakage, chemical behavior of mechanically stressed solids (e.g., stress-corrosion cracking), tribology, polymer degradation under shear, cavitation-related phenomena (e.g., sonochemistry and sonoluminescence), shock wave chemistry and physics, and even the burgeoning field of molecular machines. History of physical organic chemistry – history of the study of the interrelationships between structure and reactivity in organic molecules. History of quantum chemistry – history of the branch of chemistry whose primary focus is the application of quantum mechanics in physical models and experiments of chemical systems. History of sonochemistry – history of the study of the effect of sonic waves and wave properties on chemical systems. History of stereochemistry – history of the study of the relative spatial arrangement of atoms within molecules. History of supramolecular chemistry – history of the area of chemistry that goes beyond individual molecules and focuses on chemical systems made up of a discrete number of assembled molecular subunits or components. History of thermochemistry – history of the study of the energy and heat associated with chemical reactions and/or physical transformations. History of phytochemistry – history of the study of phytochemicals, in the strict sense of the word. History of polymer chemistry – history of the multidisciplinary science that deals with the chemical synthesis and chemical properties of polymers or macromolecules. History of solid-state chemistry – history of the study of the synthesis, structure, and properties of solid phase materials, particularly, but not necessarily exclusively of, non-molecular solids Multidisciplinary fields involving chemistry History of chemical biology – history of the scientific discipline spanning the fields of chemistry and biology that involves the application of chemical techniques and tools, often compounds produced through synthetic chemistry, to the study and manipulation of biological systems. History of chemical engineering – history of the branch of engineering that applies physical science (e.g., chemistry and physics), life sciences (e.g., biology, microbiology and biochemistry), mathematics, and economics to the process of converting raw materials or chemicals into more useful or valuable forms. History of chemical oceanography – history of the study of the behavior of the chemical elements within the Earth's oceans. History of chemical physics – history of the branch of physics that studies chemical processes from the point of view of physics.
History of materials science – history of the interdisciplinary field applying the properties of matter to various areas of science and engineering. History of nanotechnology – history of the study of manipulating matter on an atomic and molecular scale History of oenology – history of the science and study of all aspects of wine and winemaking except vine-growing and grape-harvesting, which is a subfield called viticulture. History of spectroscopy – history of the study of the interaction between matter and radiated energy History of surface science – history of the study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces, and liquid–gas interfaces. History of Earth science – history of the all-embracing term for the sciences related to the planet Earth. Earth science, and all of its branches, are branches of physical science. History of atmospheric sciences – history of the umbrella term for the study of the atmosphere, its processes, the effects other systems have on the atmosphere, and the effects of the atmosphere on these other systems. History of climatology History of meteorology History of atmospheric chemistry History of biogeography – history of the study of the distribution of species (biology), organisms, and ecosystems in geographic space and through geological time. History of cartography – history of the study and practice of making maps or globes. History of climatology – history of the study of climate, scientifically defined as weather conditions averaged over a period of time History of coastal geography – history of the study of the dynamic interface between the ocean and the land, incorporating both the physical geography (i.e. coastal geomorphology, geology and oceanography) and the human geography (sociology and history) of the coast. History of environmental science – history of an integrated, quantitative, and interdisciplinary approach to the study of environmental systems. History of ecology – history of the scientific study of the distribution and abundance of living organisms and how the distribution and abundance are affected by interactions between the organisms and their environment. History of freshwater biology – history of the scientific biological study of freshwater ecosystems, a branch of limnology History of marine biology – history of the scientific study of organisms in the ocean or other marine or brackish bodies of water History of parasitology – history of the study of parasites, their hosts, and the relationship between them. History of population dynamics – history of the branch of life sciences that studies short-term and long-term changes in the size and age composition of populations, and the biological and environmental processes influencing those changes. History of environmental chemistry – history of the scientific study of the chemical and biochemical phenomena that occur in natural places. History of environmental soil science – history of the study of the interaction of humans with the pedosphere as well as critical aspects of the biosphere, the lithosphere, the hydrosphere, and the atmosphere.
History of environmental geology – history of the applied science that, like hydrogeology, is concerned with the practical application of the principles of geology in the solving of environmental problems. History of toxicology – history of the branch of biology, chemistry, and medicine concerned with the study of the adverse effects of chemicals on living organisms. History of geodesy – history of the scientific discipline that deals with the measurement and representation of the Earth, including its gravitational field, in a three-dimensional time-varying space History of geography – history of the science that studies the lands, features, inhabitants, and phenomena of Earth History of geoinformatics – history of the science and the technology which develops and uses information science infrastructure to address the problems of geography, geosciences and related branches of engineering. History of geology – history of the study of the Earth, with the general exclusion of present-day life, flow within the ocean, and the atmosphere. History of planetary geology – history of the planetary science discipline concerned with the geology of the celestial bodies such as the planets and their moons, asteroids, comets, and meteorites. History of geomorphology – history of the scientific study of landforms and the processes that shape them History of geostatistics – history of the branch of statistics focusing on spatial or spatiotemporal datasets History of geophysics – history of the physics of the Earth and its environment in space; also the study of the Earth using quantitative physical methods. History of glaciology – history of the study of glaciers, or more generally ice and natural phenomena that involve ice. History of hydrology – history of the study of the movement, distribution, and quality of water on Earth and other planets, including the hydrologic cycle, water resources and environmental watershed sustainability. History of hydrogeology – history of the area of geology that deals with the distribution and movement of groundwater in the soil and rocks of the Earth's crust (commonly in aquifers). History of mineralogy – history of the study of chemistry, crystal structure, and physical (including optical) properties of minerals. History of meteorology – history of the interdisciplinary scientific study of the atmosphere which explains and forecasts weather events. History of oceanography – history of the branch of Earth science that studies the ocean History of paleoclimatology – history of the study of changes in climate taken on the scale of the entire history of Earth History of paleontology – history of the study of prehistoric life History of petrology – history of the branch of geology that studies the origin, composition, distribution and structure of rocks. History of limnology – history of the study of inland waters History of seismology – history of the scientific study of earthquakes and the propagation of elastic waves through the Earth or through other planet-like bodies History of soil science – history of the study of soil as a natural resource on the surface of the earth including soil formation, classification and mapping; physical, chemical, biological, and fertility properties of soils; and these properties in relation to the use and management of soils. History of topography – history of the study of surface shape and features of the Earth and other observable astronomical objects including planets, moons, and asteroids.
History of volcanology – history of the study of volcanoes, lava, magma, and related geological, geophysical and geochemical phenomena. General principles of the physical sciences Principle – law or rule that has to be, or usually is to be followed, or can be desirably followed, or is an inevitable consequence of something, such as the laws observed in nature or the way that a system is constructed. The principles of such a system are understood by its users as the essential characteristics of the system, or reflecting the system's designed purpose, and the effective operation or use of which would be impossible if any one of the principles were to be ignored. Basic principles of physics Physics – branch of science that studies matter and its motion through space and time, along with related concepts such as energy and force. Physics is one of the "fundamental sciences" because the other natural sciences (like biology, geology etc.) deal with systems that seem to obey the laws of physics. According to physics, the physical laws of matter, energy and the fundamental forces of nature govern the interactions between particles and physical entities (such as planets, molecules, atoms or the subatomic particles). Some of the basic pursuits of physics, which include some of the most prominent developments in modern science in the last millennium, are the following: Describing the nature, measuring and quantifying of bodies and their motion, dynamics, etc. Newton's laws of motion Mass, force and weight Momentum and conservation of energy Gravity, theories of gravity Energy, work, and their relationship Motion, position, and energy Different forms of Energy, their interconversion and the inevitable loss of energy in the form of heat (Thermodynamics) Energy conservation, conversion, and transfer. Energy sources and the transfer of energy from one source to work in another. Kinetic molecular theory Phases of matter and phase transitions Temperature and thermometers Energy and heat Heat flow: conduction, convection, and radiation The four laws of thermodynamics The principles of waves and sound The principles of electricity, magnetism, and electromagnetism The principles, sources, and properties of light Basic principles of astronomy Astronomy – science of celestial bodies and their interactions in space. Its studies include the following: The life and characteristics of stars and galaxies Origins of the universe. Physical science uses the Big Bang theory as the commonly accepted scientific theory of the origin of the universe. A heliocentric Solar System. Ancient cultures saw the Earth as the centre of the Solar System or universe (geocentrism). In the 16th century, Nicolaus Copernicus advanced the ideas of heliocentrism, recognizing the Sun as the centre of the Solar System. The structure of solar systems, planets, comets, asteroids, and meteors The shape and structure of Earth (roughly spherical, see also Spherical Earth) Earth in the Solar System Time measurement The composition and features of the Moon Interactions of the Earth and Moon (Note: Astronomy should not be confused with astrology, which assumes that people's destiny and human affairs in general correlate to the apparent positions of astronomical objects in the sky – although the two fields share a common origin, they are quite different; astronomers embrace the scientific method, while astrologers do not.) Basic principles of chemistry Chemistry – branch of science that studies the composition, structure, properties and change of matter.
Chemistry is chiefly concerned with atoms and molecules and their interactions and transformations, for example, the properties of the chemical bonds formed between atoms to create chemical compounds. As such, chemistry studies the involvement of electrons and various forms of energy in photochemical reactions, oxidation-reduction reactions, changes in phases of matter, and separation of mixtures. Preparation and properties of complex substances, such as alloys, polymers, biological molecules, and pharmaceutical agents, are considered in specialized fields of chemistry. Physical chemistry Chemical thermodynamics Reaction kinetics Molecular structure Quantum chemistry Spectroscopy Theoretical chemistry Electron configuration Molecular modelling Molecular dynamics Statistical mechanics Computational chemistry Mathematical chemistry Cheminformatics Nuclear chemistry The nature of the atomic nucleus Characterization of radioactive decay Nuclear reactions Organic chemistry Organic compounds Organic reaction Functional groups Organic synthesis Inorganic chemistry Inorganic compounds Crystal structure Coordination chemistry Solid-state chemistry Biochemistry Analytical chemistry Instrumental analysis Electroanalytical method Wet chemistry Electrochemistry Redox reaction Materials chemistry Basic principles of Earth science Earth science – the science of the planet Earth, the only identified life-bearing planet. Its studies include the following: The water cycle and the process of transpiration Freshwater Oceanography Weathering and erosion Rocks Agrophysics Soil science Pedogenesis Soil fertility Earth's tectonic structure Geomorphology and geophysics Physical geography Seismology: stress, strain, and earthquakes Characteristics of mountains and volcanoes Characteristics and formation of fossils Atmospheric sciences – the branches of science that study the atmosphere, its processes, the effects other systems have on the atmosphere, and the effects of the atmosphere on these other systems. Atmosphere of Earth Atmospheric pressure and winds Evaporation, condensation, and humidity Fog and clouds Meteorology, weather, climatology, and climate Hydrology, clouds and precipitation Air masses and weather fronts Major storms: thunderstorms, tornadoes, and hurricanes Major climate groups Speleology Cave Notable physical scientists List of physicists List of astronomers List of chemists Earth scientists List of Russian Earth scientists See also Outline of science Outline of natural science Outline of physical science Outline of earth science Outline of formal science Outline of social science Outline of applied science
23639
https://en.wikipedia.org/wiki/Gasoline
Gasoline
Gasoline or petrol is a petrochemical product characterized as a transparent, yellowish, and flammable liquid normally used as a fuel for spark-ignited internal combustion engines. When formulated as a fuel for engines, gasoline is chemically composed of organic compounds derived from the fractional distillation of petroleum and later chemically enhanced with gasoline additives. It is a high-volume profitable product produced in crude oil refineries. The characteristics of a particular gasoline blend that enable it to resist igniting too early are measured by the octane rating of the fuel blend. Gasoline blends with stable octane ratings are produced in several fuel grades for different types of motors. A fuel with a low octane rating may cause engine knocking and reduced efficiency in reciprocating engines. Tetraethyl lead and other lead compounds were once widely used as additives to increase the octane rating, but are not used in modern automotive gasoline due to the extreme health hazard, except in aviation, off-road motor vehicles, and racing car motors. The additive continued to be used in low-income countries for decades after others had phased it out, leading the UN Environment Programme (UNEP) to launch a campaign to eliminate its use. This campaign finally led to Algeria being the last country to stop its use in 2021. Gasoline can be released into the Earth's environment as an uncombusted, flammable liquid or as a vapor by way of leakages occurring during its production, handling, transport and delivery. Gasoline contains known carcinogens. Gasoline is often used as a recreational inhalant and can be harmful or fatal when used in such a manner. When burned, gasoline emits carbon dioxide (CO2), a greenhouse gas, contributing to human-caused climate change. Oil products, including gasoline, were responsible for about 32% of CO2 emissions worldwide in 2021. On average, U.S. petroleum refineries produce about 19 to 20 gallons of gasoline, 11 to 13 gallons of distillate fuel (mostly diesel fuel), and 3 to 4 gallons of jet fuel from each 42-gallon (about 159 liters) barrel of crude oil. The product ratio depends upon the processing in an oil refinery and the crude oil assay. Etymology The American English word gasoline denotes fuel for automobiles, which common usage shortened to the terms gas, or rarely motor gas and mogas, thus differentiating it from avgas (aviation gasoline), which is fuel for airplanes. English dictionaries, including the Oxford English Dictionary, show that the term gasoline originates from gas plus the chemical suffixes -ole and -ine. However, a blog post at the defunct website Oxford Dictionaries alternatively proposes that the word may have originated from the surname of British businessman John Cassell, who supposedly first marketed the substance. In place of the word gasoline, most Commonwealth countries (except Canada) use the term "petrol", and North Americans more often use "gas" in common parlance, hence the prevalence of the usage gas station in the United States. Coined from Medieval Latin, the word petroleum (L. petra, rock + oleum, oil) initially denoted types of mineral oil derived from rocks and stones. In Britain, Petrol was a refined mineral oil product marketed as a solvent from the 1870s by the British wholesaler Carless Refining and Marketing Ltd. When Petrol found a later use as a motor fuel, Frederick Simms, an associate of Gottlieb Daimler, suggested to John Leonard, owner of Carless, that they trademark the word and uppercase spelling Petrol.
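As a rough cross-check on the combustion figures quoted at the start of this article, the carbon dioxide released per liter of gasoline can be estimated by approximating gasoline as pure octane (C8H18); the density value and the single-compound simplification are assumptions made only for this illustrative sketch:

# Complete combustion of octane: 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O,
# so each kilogram of fuel yields 8 * (44.0 / 114.2) kilograms of CO2.
MOLAR_MASS_OCTANE = 8 * 12.011 + 18 * 1.008   # about 114.2 g/mol
MOLAR_MASS_CO2 = 12.011 + 2 * 15.999          # about 44.0 g/mol
GASOLINE_DENSITY = 0.74                       # kg per liter, assumed typical value

co2_per_kg_fuel = 8 * MOLAR_MASS_CO2 / MOLAR_MASS_OCTANE  # ~3.1 kg CO2 per kg of fuel
co2_per_liter = co2_per_kg_fuel * GASOLINE_DENSITY        # ~2.3 kg CO2 per liter
co2_per_us_gallon = co2_per_liter * 3.785                 # ~8.6 kg CO2 per US gallon
print(round(co2_per_liter, 2), round(co2_per_us_gallon, 1))

These numbers are broadly consistent with commonly cited estimates of roughly 9 kg of CO2 per US gallon of gasoline burned.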
The trademark application was refused because petrol had already become an established general term for motor fuel. Due to the firm's age, Carless retained the legal rights to the term and to the uppercase spelling of "Petrol" as the name of a petrochemical product. British refiners originally used "motor spirit" as a generic name for the automotive fuel and "aviation spirit" for aviation gasoline. When Carless was denied a trademark on "petrol" in the 1930s, its competitors switched to the more popular name "petrol". However, "motor spirit" had already made its way into laws and regulations, so the term remains in use as a formal name for petrol. The term is used most widely in Nigeria, where the largest petroleum companies call their product "premium motor spirit". Although "petrol" has made inroads into Nigerian English, "premium motor spirit" remains the formal name that is used in scientific publications, government reports, and newspapers. Some other languages use variants of gasoline: gasolina is used in Spanish and Portuguese, and gasorin (ガソリン) is used in Japanese. In other languages, the name of the product is derived from the hydrocarbon compound benzene, or more precisely from the class of products called petroleum benzine, such as Benzin in German or benzina in Italian; but in Argentina, Uruguay, and Paraguay, the colloquial name nafta is derived from that of the chemical naphtha. Some languages, like French and Italian, use their cognates of gasoline to instead indicate diesel fuel. History The first internal combustion engines suitable for use in transportation applications, so-called Otto engines, were developed in Germany during the last quarter of the 19th century. The fuel for these early engines was a relatively volatile hydrocarbon obtained from coal gas. Being considerably more volatile than n-octane (which boils at about 126 °C), it was well-suited for early carburetors (evaporators). The development of a "spray nozzle" carburetor enabled the use of less volatile fuels. Further improvements in engine efficiency were attempted at higher compression ratios, but early attempts were blocked by the premature explosion of fuel, known as knocking. In 1891, the Shukhov cracking process became the world's first commercial method to break down heavier hydrocarbons in crude oil to increase the percentage of lighter products compared to simple distillation. 1903 to 1914 The evolution of gasoline followed the evolution of oil as the dominant source of energy in the industrializing world. Before World War I, Britain was the world's greatest industrial power and depended on its navy to protect the shipping of raw materials from its colonies. Germany was also industrializing and, like Britain, lacked many natural resources which had to be shipped to the home country. By the 1890s, Germany began to pursue a policy of global prominence and started building a navy to compete with Britain's. Coal was the fuel that powered their navies. Though both Britain and Germany had natural coal reserves, new developments in oil as a fuel for ships changed the situation. Coal-powered ships were a tactical weakness because the process of loading coal was extremely slow and dirty and left the ship completely vulnerable to attack, and unreliable supplies of coal at international ports made long-distance voyages impractical. The advantages of petroleum oil soon led the navies of the world to convert to oil, but Britain and Germany had very few domestic oil reserves.
Britain eventually solved its naval oil dependence by securing oil from Royal Dutch Shell and the Anglo-Persian Oil Company, and this determined where its gasoline would come from and of what quality it would be. During the early period of gasoline engine development, aircraft were forced to use motor vehicle gasoline since aviation gasoline did not yet exist. These early fuels were termed "straight-run" gasolines and were byproducts from the distillation of a single crude oil to produce kerosene, which was the principal product sought for burning in kerosene lamps. Gasoline production would not surpass kerosene production until 1916. The earliest straight-run gasolines were the result of distilling eastern crude oils and there was no mixing of distillates from different crudes. The composition of these early fuels was unknown and the quality varied greatly, as crude oils from different oil fields yielded different mixtures of hydrocarbons in different ratios. The engine effects produced by abnormal combustion (engine knocking and pre-ignition) due to inferior fuels had not yet been identified, and as a result, there was no rating of gasoline in terms of its resistance to abnormal combustion. The general specifications by which early gasolines were measured were specific gravity via the Baumé scale and, later, volatility (tendency to vaporize) specified in terms of boiling points, which became the primary focus for gasoline producers. These early eastern crude oil gasolines had relatively high Baumé test results (65 to 80 degrees Baumé) and were called "Pennsylvania high-test" or simply "high-test" gasolines. These were often used in aircraft engines. By 1910, increased automobile production and the resulting rise in automobile use produced a greater demand for gasoline. Also, the growing electrification of lighting produced a drop in kerosene demand, creating a supply problem. It appeared that the burgeoning oil industry would be trapped into over-producing kerosene and under-producing gasoline since simple distillation could not alter the ratio of the two products from any given crude. The solution appeared in 1911 when the development of the Burton process allowed thermal cracking of crude oils, which increased the percent yield of gasoline from the heavier hydrocarbons. This was combined with the expansion of foreign markets for the export of surplus kerosene, which domestic markets no longer needed. These new thermally "cracked" gasolines were believed to have no harmful effects and would be added to straight-run gasolines. There was also the practice of mixing heavy and light distillates to achieve the desired Baumé reading; collectively these were called "blended" gasolines. Gradually, volatility gained favor over the Baumé test, though both continued to be used in combination to specify a gasoline. As late as June 1917, Standard Oil (the largest refiner of crude oil in the United States at the time) stated that the most important property of a gasoline was its volatility. It is estimated that the rating equivalent of these straight-run gasolines varied from 40 to 60 octane and that the "high-test", sometimes referred to as "fighting grade", probably averaged 50 to 65 octane. World War I Prior to the United States' entry into World War I, the European Allies used fuels derived from crude oils from Borneo, Java, and Sumatra, which gave satisfactory performance in their military aircraft. When the U.S. entered the war in April 1917, the U.S.
became the principal supplier of aviation gasoline to the Allies, and a decrease in engine performance was noted. Soon it was realized that motor vehicle fuels were unsatisfactory for aviation, and after the loss of several combat aircraft, attention turned to the quality of the gasolines being used. Later flight tests conducted in 1937 showed that an octane reduction of 13 points (from 100 down to 87 octane) decreased engine performance by 20 percent and increased take-off distance by 45 percent. If abnormal combustion were to occur, the engine could lose enough power to make getting airborne impossible, and a take-off roll became a threat to the pilot and aircraft. On 2 August 1917, the U.S. Bureau of Mines arranged to study fuels for aircraft in cooperation with the Aviation Section of the U.S. Army Signal Corps, and a general survey concluded that no reliable data existed for the proper fuels for aircraft. As a result, flight tests began at Langley, McCook and Wright fields to determine how different gasolines performed under different conditions. These tests showed that in certain aircraft, motor vehicle gasolines performed as well as "high-test" but in other types resulted in hot-running engines. It was also found that gasolines from aromatic and naphthenic base crude oils from California, South Texas, and Venezuela resulted in smooth-running engines. These tests resulted in the first government specifications for motor gasolines (aviation gasolines used the same specifications as motor gasolines) in late 1917. U.S., 1918–1929 Engine designers knew that, according to the Otto cycle, power and efficiency increased with compression ratio, but experience with early gasolines during World War I showed that higher compression ratios increased the risk of abnormal combustion, producing lower power, lower efficiency, hot-running engines, and potentially severe engine damage. To compensate for these poor fuels, early engines used low compression ratios, which required relatively large, heavy engines with limited power and efficiency. The Wright brothers' first gasoline engine used a compression ratio as low as 4.7-to-1 and produced relatively little power for its considerable weight. This was a major concern for aircraft designers, and the needs of the aviation industry provoked the search for fuels that could be used in higher-compression engines. Between 1917 and 1919, the amount of thermally cracked gasoline utilized almost doubled. Also, the use of natural gasoline increased greatly. During this period, many U.S. states established specifications for motor gasoline, but none of these agreed and they were unsatisfactory from one standpoint or another. Larger oil refiners began to specify unsaturated material percentage (thermally cracked products caused gumming in both use and storage, while unsaturated hydrocarbons are more reactive and tend to combine with impurities, leading to gumming). In 1922, the U.S. government published the first specifications for aviation gasolines (two grades were designated as "fighting" and "domestic" and were governed by boiling points, color, sulfur content, and a gum formation test) along with one "motor" grade for automobiles. The gum test essentially eliminated thermally cracked gasoline from aviation usage, and thus aviation gasolines reverted to fractionating straight-run naphthas or blending straight-run and highly treated thermally cracked naphthas. This situation persisted until 1929. The automobile industry reacted to the increase in thermally cracked gasoline with alarm.
Thermal cracking produced large amounts of both mono- and diolefins (unsaturated hydrocarbons), which increased the risk of gumming. Also, the volatility was decreasing to the point that fuel did not vaporize; it stuck to spark plugs and fouled them, created hard starting and rough running in winter, and stuck to cylinder walls, bypassing the pistons and rings and ending up in the crankcase oil. One journal stated, "on a multi-cylinder engine in a high-priced car we are diluting the oil in the crankcase as much as 40 percent in a run, as the analysis of the oil in the oil-pan shows". Being very unhappy with the consequent reduction in overall gasoline quality, automobile manufacturers suggested imposing a quality standard on the oil suppliers. The oil industry in turn accused the automakers of not doing enough to improve vehicle economy, and the dispute became known within the two industries as "the fuel problem". Animosity grew between the industries, each accusing the other of not doing anything to resolve matters, and their relationship deteriorated. The situation was only resolved when the American Petroleum Institute (API) initiated a conference to address the fuel problem and a cooperative fuel research (CFR) committee was established in 1920 to oversee joint investigative programs and solutions. Apart from representatives of the two industries, the Society of Automotive Engineers (SAE) also played an instrumental role, with the U.S. Bureau of Standards being chosen as an impartial research organization to carry out many of the studies. Initially, all the programs were related to volatility and fuel consumption, ease of starting, crankcase oil dilution, and acceleration. Leaded gasoline controversy, 1924–1925 With the increased use of thermally cracked gasolines came an increased concern regarding its effects on abnormal combustion, and this led to research for antiknock additives. In the late 1910s, researchers such as A.H. Gibson, Harry Ricardo, Thomas Midgley Jr., and Thomas Boyd began to investigate abnormal combustion. Beginning in 1916, Charles F. Kettering of General Motors investigated additives along two paths, the "high percentage" solution (where large quantities of ethanol were added) and the "low percentage" solution (where only 0.53-1.1 g/L, or 0.071-0.147 oz per U.S. gallon, were needed). The "low percentage" solution ultimately led to the discovery of tetraethyllead (TEL) in December 1921, a product of the research of Midgley and Boyd and the defining component of leaded gasoline. This innovation started a cycle of improvements in fuel efficiency that coincided with the large-scale development of oil refining to provide more products in the boiling range of gasoline. Ethanol could not be patented but TEL could, so Kettering secured a patent for TEL and began promoting it instead of other options. The dangers of compounds containing lead were well-established by then, and Kettering was directly warned by Robert Wilson of MIT, Reid Hunt of Harvard, Yandell Henderson of Yale, and Erik Krause of the University of Potsdam in Germany about its use. Krause had worked on tetraethyllead for many years and called it "a creeping and malicious poison" that had killed a member of his dissertation committee. On 27 October 1924, newspaper articles around the nation told of the workers at the Standard Oil refinery near Elizabeth, New Jersey, who were producing TEL and were suffering from lead poisoning. By 30 October, the death toll had reached five.
In November, the New Jersey Labor Commission closed the Bayway refinery, and a grand jury investigation was started; it had resulted in no charges by February 1925. Leaded gasoline sales were banned in New York City, Philadelphia, and New Jersey. General Motors, DuPont, and Standard Oil, who were partners in Ethyl Corporation, the company created to produce TEL, began to argue that there were no alternatives to leaded gasoline that would maintain fuel efficiency and still prevent engine knocking. After several flawed industry-funded studies reported that TEL-treated gasoline was not a public health issue, the controversy subsided. U.S., 1930–1941 In the five years prior to 1929, a great amount of experimentation was conducted on different testing methods for determining fuel resistance to abnormal combustion. It appeared engine knocking was dependent on a wide variety of parameters including compression, ignition timing, cylinder temperature, air-cooled or water-cooled engines, chamber shapes, intake temperatures, lean or rich mixtures, and others. This led to a confusing variety of test engines that gave conflicting results, and no standard rating scale existed. By 1929, it was recognized by most aviation gasoline manufacturers and users that some kind of antiknock rating must be included in government specifications. In 1929, the octane rating scale was adopted, and in 1930, the first octane specification for aviation fuels was established. In the same year, the U.S. Army Air Corps specified fuels rated at 87 octane for its aircraft as a result of studies it had conducted. During this period, research showed that hydrocarbon structure was extremely important to the antiknocking properties of fuel. Straight-chain paraffins in the boiling range of gasoline had low antiknock qualities, while ring-shaped molecules such as aromatic hydrocarbons (for example, benzene) had higher resistance to knocking. This development led to the search for processes that would produce more of these compounds from crude oils than achieved under straight distillation or thermal cracking. Research by the major refiners led to the development of processes involving isomerization of cheap and abundant butane to isobutane, and alkylation to join isobutane and butylenes to form isomers of octane such as "isooctane", which became an important component in aviation fuel blending. To further complicate the situation, as engine performance increased, the altitude that aircraft could reach also increased, which resulted in concerns about the fuel freezing. Atmospheric temperature falls by roughly 2 °C for every 1,000 feet of additional altitude, and at the altitudes these aircraft operated it can approach −60 °C (−76 °F). Additives like benzene, with a freezing point of 5.5 °C (42 °F), would freeze in the gasoline and plug fuel lines. Substituted aromatics such as toluene, xylene, and cumene, combined with limited benzene, solved the problem. By 1935, there were seven different aviation grades based on octane rating, two Army grades, four Navy grades, and three commercial grades, including the introduction of 100-octane aviation gasoline. By 1937, the Army established 100-octane as the standard fuel for combat aircraft, and to add to the confusion, the government now recognized 14 different grades, in addition to 11 others in foreign countries. With some companies required to stock 14 grades of aviation fuel, none of which could be interchanged, the effect on the refiners was negative.
The refining industry could not concentrate on large-capacity conversion processes for so many different grades, and a solution had to be found. By 1941, principally through the efforts of the Cooperative Fuel Research Committee, the number of grades for aviation fuels was reduced to three: 73, 91, and 100 octane. The development of 100-octane aviation gasoline on an economic scale was due in part to Jimmy Doolittle, who had become Aviation Manager of Shell Oil Company. He convinced Shell to invest in refining capacity to produce 100-octane gasoline on a scale that nobody yet needed, since no aircraft existed that required a fuel that nobody yet made. Some fellow employees would call his effort "Doolittle's million-dollar blunder", but time would prove Doolittle correct. Before this, the Army had considered 100-octane tests using pure octane, but the price of pure octane prevented this from happening. In 1929, Stanavo Specification Board Inc. was organized by the Standard Oil companies of California, Indiana, and New Jersey to improve aviation fuels and oils, and by 1935 it had placed its first 100-octane fuel on the market, Stanavo Ethyl Gasoline 100. It was used by the Army, engine manufacturers and airlines for testing and for air racing and record flights. By 1936, tests at Wright Field using the new, cheaper alternatives to pure octane proved the value of 100-octane fuel, and both Shell and Standard Oil would win the contract to supply test quantities for the Army. By 1938, the price had come down to only slightly more than that of 87-octane fuel, and by the end of WWII it would fall further still. In 1937, Eugene Houdry developed the Houdry process of catalytic cracking, which produced a high-octane base stock of gasoline that was superior to the thermally cracked product since it did not contain the high concentration of olefins. In 1940, there were only 14 Houdry units in operation in the U.S.; by 1943, this had increased to 77, either of the Houdry process or of the Thermofor Catalytic or Fluid Catalyst type. The search for fuels with octane ratings above 100 led to the extension of the scale by comparing power output. A fuel designated grade 130 would produce 130 percent as much power in an engine as it would running on pure iso-octane. During WWII, fuels above 100-octane were given two ratings, one for a rich and one for a lean mixture, and these would be called 'performance numbers' (PN). 100-octane aviation gasoline would be referred to as 130/100 grade. World War II Germany Oil and its byproducts, especially high-octane aviation gasoline, would prove to be a driving concern for how Germany conducted the war. As a result of the lessons of World War I, Germany had stockpiled oil and gasoline for its blitzkrieg offensive and had annexed Austria, adding its oil production, but this was not sufficient to sustain the planned conquest of Europe. Because captured supplies and oil fields would be necessary to fuel the campaign, the German high command created a special squad of oilfield experts drawn from the ranks of domestic oil industries. They were sent in to put out oilfield fires and get production going again as soon as possible. But capturing oilfields remained an obstacle throughout the war. During the Invasion of Poland, German estimates of gasoline consumption turned out to be vastly too low. Heinz Guderian and his Panzer divisions consumed a considerable quantity of gasoline on the drive to Vienna. When they were engaged in combat across open country, gasoline consumption almost doubled.
On the second day of battle, a unit of the XIX Corps was forced to halt when it ran out of gasoline. One of the major objectives of the invasion of Poland was its oil fields, but the Soviets invaded and captured 70 percent of Polish production before the Germans could reach it. Through the German–Soviet Commercial Agreement (1940), Stalin agreed in vague terms to supply Germany with additional oil equal to that produced by the now Soviet-occupied Polish oilfields at Drohobych and Boryslav in exchange for hard coal and steel tubing. Even after the Nazis conquered the vast territories of Europe, this did not help the gasoline shortage. This area had never been self-sufficient in oil before the war. In 1938, the area that would become Nazi-occupied produced per day. In 1940, total production under German control amounted to only . By early 1941, with German gasoline reserves depleted, Adolf Hitler saw the invasion of Russia, seizing the Polish oil fields and the Russian oil in the Caucasus, as the solution to the German gasoline shortage. As early as July 1941, following the 22 June start of Operation Barbarossa, certain Luftwaffe squadrons were forced to curtail ground support missions due to shortages of aviation gasoline. On 9 October, the German quartermaster general estimated that army vehicles were short of gasoline requirements. Virtually all of Germany's aviation gasoline came from synthetic oil plants that hydrogenated coals and coal tars. These processes had been developed during the 1930s as an effort to achieve fuel independence. There were two grades of aviation gasoline produced in volume in Germany, the B-4 or blue grade and the C-3 or green grade, which accounted for about two-thirds of all production. B-4 was equivalent to 89-octane, and the C-3 was roughly equal to the U.S. 100-octane, though its lean-mixture rating was around 95-octane and it was poorer than the U.S. version. Maximum output achieved in 1943 reached a day before the Allies decided to target the synthetic fuel plants. Through captured enemy aircraft and analysis of the gasoline found in them, both the Allies and the Axis powers were aware of the quality of the aviation gasoline being produced, and this prompted an octane race to achieve the advantage in aircraft performance. Later in the war, the C-3 grade was improved to the point where it was equivalent to the U.S. 150 grade (rich mixture rating). Japan Japan, like Germany, had almost no domestic oil supply and by the late 1930s produced only seven percent of its own oil while importing the rest, 80 percent of it from the U.S. As Japanese aggression grew in China (USS Panay incident) and news reached the American public of Japanese bombing of civilian centers, especially the bombing of Chungking, public opinion began to support a U.S. embargo. A Gallup poll in June 1939 found that 72 percent of the American public supported an embargo on war materials to Japan. This increased tensions between the U.S. and Japan, and it led to the U.S. placing restrictions on exports. In July 1940, the U.S. issued a proclamation that banned the export of 87 octane or higher aviation gasoline to Japan. This ban did not hinder the Japanese, as their aircraft could operate with fuels below 87 octane and, if needed, they could add TEL to increase the octane. As it turned out, Japan bought 550 percent more sub-87 octane aviation gasoline in the five months after the July 1940 ban on higher octane sales. 
The possibility of a complete ban of gasoline from America created friction in the Japanese government as to what action to take to secure more supplies from the Dutch East Indies, and led Japan to demand greater oil exports from the exiled Dutch government after the Battle of the Netherlands. This action prompted the U.S. to move its Pacific fleet from Southern California to Pearl Harbor to help stiffen British resolve to stay in Indochina. With the Japanese invasion of French Indochina in September 1940 came great concerns about a possible Japanese invasion of the Dutch Indies to secure their oil. After the U.S. banned all exports of steel and iron scrap, the next day Japan signed the Tripartite Pact, and this led Washington to fear that a complete U.S. oil embargo would prompt the Japanese to invade the Dutch East Indies. On 16 June 1941 Harold Ickes, who was appointed Petroleum Coordinator for National Defense, stopped a shipment of oil from Philadelphia to Japan in light of the oil shortage on the East coast due to increased exports to the Allies. He also telegrammed all oil suppliers on the East coast not to ship any oil to Japan without his permission. President Roosevelt countermanded Ickes's orders, telling Ickes that "I simply have not got enough Navy to go around and every little episode in the Pacific means fewer ships in the Atlantic". On 25 July 1941, the U.S. froze all Japanese financial assets, and licenses would be required for each use of the frozen funds, including oil purchases that could produce aviation gasoline. On 28 July 1941, Japan invaded southern Indochina. The debate inside the Japanese government as to its oil and gasoline situation was leading toward invasion of the Dutch East Indies, but this would mean war with the U.S., whose Pacific fleet was a threat to their flank. This situation led to the decision to attack the U.S. fleet at Pearl Harbor before proceeding with the Dutch East Indies invasion. On 7 December 1941, Japan attacked Pearl Harbor, and the next day the Netherlands declared war on Japan, which initiated the Dutch East Indies campaign. But the Japanese missed a golden opportunity at Pearl Harbor. "All of the oil for the fleet was in surface tanks at the time of Pearl Harbor", Admiral Chester Nimitz, who became Commander in Chief of the Pacific Fleet, was later to say. "We had about of oil out there and all of it was vulnerable to .50 caliber bullets. Had the Japanese destroyed the oil," he added, "it would have prolonged the war another two years." U.S. Early in 1944, William Boyd, president of the American Petroleum Institute and chairman of the Petroleum Industry War Council, said: "The Allies may have floated to victory on a wave of oil in World War I, but in this infinitely greater World War II, we are flying to victory on the wings of petroleum". In December 1941 the U.S. had 385,000 oil wells producing barrels of oil a year, and 100-octane aviation gasoline capacity was at a day. By 1944, the U.S. was producing over a year (67 percent of world production), and the petroleum industry had built 122 new plants for the production of 100-octane aviation gasoline, so that capacity was over a day, an increase of more than ten-fold. It was estimated that the U.S. was producing enough 100-octane aviation gasoline to permit the dropping of () of bombs on the enemy every day of the year. 
The record of gasoline consumption by the Army prior to June 1943 was uncoordinated as each supply service of the Army purchased its own petroleum products and no centralized system of control nor records existed. On 1 June 1943, the Army created the Fuels and Lubricants Division of the Quartermaster Corps, and, from their records, they tabulated that the Army (excluding fuels and lubricants for aircraft) purchased over of gasoline for delivery to overseas theaters between 1 June 1943 through August 1945. That figure does not include gasoline used by the Army inside the U.S. Motor fuel production had declined from in 1941 down to in 1943. World War II marked the first time in U.S. history that gasoline was rationed and the government imposed price controls to prevent inflation. Gasoline consumption per automobile declined from per year in 1941 down to in 1943, with the goal of preserving rubber for tires since the Japanese had cut the U.S. off from over 90 percent of its rubber supply which had come from the Dutch East Indies and the U.S. synthetic rubber industry was in its infancy. Average gasoline prices went from a record low of ( with taxes) in 1940 to ( with taxes) in 1945. Even with the world's largest aviation gasoline production, the U.S. military still found that more was needed. Throughout the duration of the war, aviation gasoline supply was always behind requirements and this impacted training and operations. The reason for this shortage developed before the war even began. The free market did not support the expense of producing 100-octane aviation fuel in large volume, especially during the Great Depression. Iso-octane in the early development stage cost , and, even by 1934, it was still compared to for motor gasoline when the Army decided to experiment with 100-octane for its combat aircraft. Though only three percent of U.S. combat aircraft in 1935 could take full advantage of the higher octane due to low compression ratios, the Army saw that the need for increasing performance warranted the expense and purchased 100,000 gallons. By 1937, the Army established 100-octane as the standard fuel for combat aircraft and by 1939 production was only a day. In effect, the U.S. military was the only market for 100-octane aviation gasoline and as war broke out in Europe this created a supply problem that persisted throughout the duration. With the war in Europe a reality in 1939, all predictions of 100-octane consumption were outrunning all possible production. Neither the Army nor the Navy could contract more than six months in advance for fuel and they could not supply the funds for plant expansion. Without a long-term guaranteed market, the petroleum industry would not risk its capital to expand production for a product that only the government would buy. The solution to the expansion of storage, transportation, finances, and production was the creation of the Defense Supplies Corporation on 19 September 1940. The Defense Supplies Corporation would buy, transport and store all aviation gasoline for the Army and Navy at cost plus a carrying fee. When the Allied breakout after D-Day found their armies stretching their supply lines to a dangerous point, the makeshift solution was the Red Ball Express. But even this soon was inadequate. The trucks in the convoys had to drive longer distances as the armies advanced and they were consuming a greater percentage of the same gasoline they were trying to deliver. 
In 1944, General George Patton's Third Army finally stalled just short of the German border after running out of gasoline. The general was so upset at the arrival of a truckload of rations instead of gasoline that he was reported to have shouted: "Hell, they send us food, when they know we can fight without food but not without oil." The solution had to wait for the repair of the railroad lines and bridges so that the more efficient trains could replace the gasoline-consuming truck convoys. U.S., 1946–present The development during WWII of jet engines burning kerosene-based fuels gave aircraft a propulsion system with performance superior to anything internal combustion engines could offer, and the U.S. military forces gradually replaced their piston combat aircraft with jet-powered planes. This development essentially removed the military need for ever-increasing octane fuels and eliminated government support for the refining industry to pursue the research and production of such exotic and expensive fuels. Commercial aviation was slower to adapt to jet propulsion, and until 1958, when the Boeing 707 first entered commercial service, piston-powered airliners still relied on aviation gasoline. But commercial aviation had greater economic concerns than the maximum performance that the military could afford. As octane numbers increased, so did the cost of gasoline, but the incremental gain in efficiency becomes smaller as the compression ratio goes up. This reality set a practical limit to how far compression ratios could increase relative to how expensive the gasoline would become. Last produced in 1955, the Pratt & Whitney R-4360 Wasp Major was using 115/145 aviation gasoline and producing at a 6.7 compression ratio (turbo-supercharging would increase this) and of engine weight to produce . This compares to the Wright Brothers engine needing almost of engine weight to produce . The U.S. automobile industry after WWII could not take advantage of the high-octane fuels then available. Automobile compression ratios increased from an average of 5.3-to-1 in 1931 to just 6.7-to-1 in 1946. The average octane number of regular-grade motor gasoline increased from 58 to 70 during the same time. Military aircraft were using expensive turbo-supercharged engines that cost at least 10 times as much per horsepower as automobile engines and had to be overhauled every 700 to 1,000 hours. The automobile market could not support such expensive engines. It would not be until 1957 that the first U.S. automobile manufacturer could mass-produce an engine that would produce one horsepower per cubic inch, the Chevrolet 283 hp/283 cubic inch V-8 engine option in the Corvette. At $485, this was an expensive option that few consumers could afford and would only appeal to the performance-oriented consumer market willing to pay for the premium fuel required. This engine had an advertised compression ratio of 10.5-to-1, and the 1958 AMA Specifications stated that the octane requirement was 96–100 RON. At (1959 with aluminum intake), it took of engine weight to make . In the 1950s, oil refineries started to focus on high-octane fuels, and then detergents were added to gasoline to clean the jets in carburetors. The 1970s witnessed greater attention to the environmental consequences of burning gasoline. These considerations led to the phasing out of TEL and its replacement by other antiknock compounds. Subsequently, low-sulfur gasoline was introduced, in part to preserve the catalysts in modern exhaust systems. 
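The diminishing efficiency return from raising the compression ratio, noted above, can be illustrated with the ideal air-standard Otto-cycle relation, a textbook idealization rather than data for any of the engines discussed here. The short Python sketch below assumes a heat-capacity ratio of about 1.4 for air and uses the compression ratios that appear in this section (5.3, 6.7, and 10.5).

```python
# Ideal air-standard Otto-cycle thermal efficiency: eta = 1 - 1/r**(gamma - 1).
# A simplified textbook model; real engine efficiencies are considerably lower.

GAMMA = 1.4  # heat-capacity ratio assumed for air

def otto_efficiency(compression_ratio: float, gamma: float = GAMMA) -> float:
    return 1.0 - compression_ratio ** (1.0 - gamma)

if __name__ == "__main__":
    # Compression ratios mentioned in this section: the 1931 average,
    # the 1946 average, and the 1957 Chevrolet 283 option.
    for r in (5.3, 6.7, 10.5):
        print(f"r = {r:>4}: ideal efficiency ~ {otto_efficiency(r):.1%}")
    # Each further unit of compression ratio yields a smaller efficiency gain.
```

Real engines fall well short of these ideal figures, but the flattening trend is the point: each further increase in compression ratio, and hence in required octane, buys less additional efficiency.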
Chemical analysis and production Commercial gasoline is a mixture of a large number of different hydrocarbons. Gasoline is produced to meet a number of engine performance specifications, and many different compositions are possible. Hence, the exact chemical composition of gasoline is undefined. The performance specification also varies with season, requiring less volatile blends during summer in order to minimize evaporative losses. At the refinery, the composition varies according to the crude oils from which it is produced, the type of processing units present at the refinery, how those units are operated, and which hydrocarbon streams (blendstocks) the refinery opts to use when blending the final product. Gasoline is produced in oil refineries. Roughly of gasoline is derived from a barrel of crude oil. Material separated from crude oil via distillation, called virgin or straight-run gasoline, does not meet specifications for modern engines (particularly the octane rating; see below), but can be blended into the gasoline pool. The bulk of a typical gasoline consists of a homogeneous mixture of small, relatively lightweight hydrocarbons with between 4 and 12 carbon atoms per molecule (commonly referred to as C4–C12). It is a mixture of paraffins (alkanes), olefins (alkenes), and naphthenes (cycloalkanes). The use of the term paraffin in place of the standard chemical nomenclature alkane is particular to the oil industry. The actual ratio of molecules in any gasoline depends upon: the oil refinery that makes the gasoline, as not all refineries have the same set of processing units; the crude oil feed used by the refinery; the grade of gasoline (in particular, the octane rating). The various refinery streams blended to make gasoline have different characteristics. Some important streams include the following: Straight-run gasoline, sometimes referred to as naphtha, is distilled directly from crude oil. Once the leading source of fuel, its low octane rating required lead additives. It is low in aromatics (depending on the grade of the crude oil stream) and contains some cycloalkanes (naphthenes) and no olefins (alkenes). Between 0 and 20 percent of this stream is pooled into the finished gasoline, because the quantity of this fraction in the crude is less than fuel demand and the fraction's Research Octane Number (RON) is too low. The chemical properties (namely RON and Reid vapor pressure (RVP)) of the straight-run gasoline can be improved through reforming and isomerization. However, before feeding those units, the naphtha needs to be split into light and heavy naphtha. Straight-run gasoline can also be used as a feedstock for steam-crackers to produce olefins. Reformate, produced in a catalytic reformer, has a high octane rating with high aromatic content and relatively low olefin content. Most of the benzene, toluene, and xylene (the so-called BTX hydrocarbons) are more valuable as chemical feedstocks and are thus removed to some extent. Catalytic cracked gasoline, or catalytic cracked naphtha, produced with a catalytic cracker, has a moderate octane rating, high olefin content, and moderate aromatic content. Hydrocrackate (heavy, mid, and light), produced with a hydrocracker, has a medium to low octane rating and moderate aromatic levels. Alkylate is produced in an alkylation unit, using isobutane and olefins as feedstocks. Finished alkylate contains no aromatics or olefins and has a high MON (Motor Octane Number). 
Isomerate is obtained by isomerizing low-octane straight-run gasoline into iso-paraffins (branched-chain alkanes, such as isooctane). Isomerate has a medium RON and MON, but no aromatics or olefins. Butane is usually blended in the gasoline pool, although the quantity of this stream is limited by the RVP specification. The terms above are the jargon used in the oil industry, and the terminology varies. Currently, many countries set limits on gasoline aromatics in general, benzene in particular, and olefin (alkene) content. Such regulations have led to an increasing preference for alkane isomers, such as isomerate or alkylate, as their octane rating is higher than that of n-alkanes. In the European Union, the benzene limit is set at one percent by volume for all grades of automotive gasoline. This is usually achieved by avoiding feeding C6, in particular cyclohexane, to the reformer unit, where it would be converted to benzene. Therefore, only (desulfurized) heavy virgin naphtha (HVN) is fed to the reformer unit. Gasoline can also contain other organic compounds, such as organic ethers (deliberately added), plus small levels of contaminants, in particular organosulfur compounds (which are usually removed at the refinery). Physical properties Density The specific gravity of gasoline ranges from 0.71 to 0.77, with higher densities having a greater volume fraction of aromatics. Finished marketable gasoline is traded (in Europe) with a standard reference density of (7.5668 lb/imp gal); its price is escalated or de-escalated according to its actual density. Because of its low density, gasoline floats on water, and therefore water cannot generally be used to extinguish a gasoline fire unless applied in a fine mist. Stability Quality gasoline should be stable for six months if stored properly, but can degrade over time. Gasoline stored for a year can most likely still be burned in an internal combustion engine without too much trouble. However, the effects of long-term storage will become more noticeable with each passing month until a time comes when the gasoline should be diluted with ever-increasing amounts of freshly made fuel so that the older gasoline may be used up. If left undiluted, improper operation may occur, including engine damage from misfiring or from the lack of proper action of the fuel within a fuel injection system and from an onboard computer attempting to compensate (if applicable to the vehicle). Gasoline should ideally be stored in an airtight container (to prevent oxidation or water vapor mixing in with the gas) that can withstand the vapor pressure of the gasoline without venting (to prevent the loss of the more volatile fractions) at a stable cool temperature (to reduce the excess pressure from liquid expansion and to reduce the rate of any decomposition reactions). When gasoline is not stored correctly, gums and solids may result, which can corrode system components and accumulate on wet surfaces, resulting in a condition called "stale fuel". Gasoline containing ethanol is especially subject to absorbing atmospheric moisture, then forming gums, solids, or two phases (a hydrocarbon phase floating on top of a water-alcohol phase). The presence of these degradation products in the fuel tank or fuel lines, plus a carburetor or fuel injection components, makes it harder to start the engine or causes reduced engine performance. On resumption of regular engine use, the buildup may or may not be eventually cleaned out by the flow of fresh gasoline. 
The addition of a fuel stabilizer to gasoline can extend the life of fuel that is not or cannot be stored properly, though removal of all fuel from a fuel system is the only real solution to the problem of long-term storage of an engine or a machine or vehicle. Typical fuel stabilizers are proprietary mixtures containing mineral spirits, isopropyl alcohol, 1,2,4-trimethylbenzene or other additives. Fuel stabilizers are commonly used for small engines, such as lawnmower and tractor engines, especially when their use is sporadic or seasonal (little to no use for one or more seasons of the year). Users have been advised to keep gasoline containers more than half full and properly capped to reduce air exposure, to avoid storage at high temperatures, to run an engine for ten minutes to circulate the stabilizer through all components prior to storage, and to run the engine at intervals to purge stale fuel from the carburetor. Gasoline stability requirements are set by the standard ASTM D4814. This standard describes the various characteristics and requirements of automotive fuels for use over a wide range of operating conditions in ground vehicles equipped with spark-ignition engines. Combustion energy content A gasoline-fueled internal combustion engine obtains energy from the combustion of gasoline's various hydrocarbons with oxygen from the ambient air, yielding carbon dioxide and water as exhaust. The combustion of octane, a representative species, performs the chemical reaction 2 C8H18 + 25 O2 → 16 CO2 + 18 H2O. By weight, combustion of gasoline releases about or by volume , quoting the lower heating value. Gasoline blends differ, and therefore actual energy content varies according to the season and producer by up to 1.75 percent more or less than the average. On average, about of gasoline are available from a barrel of crude oil (about 46 percent by volume), varying with the quality of the crude and the grade of the gasoline. The remainder is products ranging from tar to naphtha. A high-octane-rated fuel, such as liquefied petroleum gas (LPG), has an overall lower power output at the typical 10:1 compression ratio of an engine design optimized for gasoline fuel. An engine tuned for LPG fuel via higher compression ratios (typically 12:1) improves the power output. This is because higher-octane fuels allow for a higher compression ratio without knocking, resulting in a higher cylinder temperature, which improves efficiency. Also, increased mechanical efficiency is created by a higher compression ratio through the concomitant higher expansion ratio on the power stroke, which is by far the greater effect. The higher expansion ratio extracts more work from the high-pressure gas created by the combustion process. An Atkinson cycle engine uses the timing of the valve events to produce the benefits of a high expansion ratio without the disadvantages, chiefly detonation, of a high compression ratio. A high expansion ratio is also one of the two key reasons for the efficiency of diesel engines, along with the elimination of pumping losses due to throttling of the intake airflow. The lower energy content of LPG by liquid volume in comparison to gasoline is due mainly to its lower density. This lower density is a property of the lower molecular weight of propane (LPG's chief component) compared to gasoline's blend of various hydrocarbon compounds with heavier molecular weights than propane. Conversely, LPG's energy content by weight is higher than gasoline's due to a higher hydrogen-to-carbon ratio. 
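The mass bookkeeping behind the octane reaction above, and behind the per-mass figures given in the next sentence, can be sketched in a few lines. Only standard atomic weights are used; the 23.2 percent oxygen mass fraction of air is a common engineering approximation and is not a figure from this article.

```python
# Mass balance for 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O, per kilogram of octane.
# Standard atomic weights; air taken as ~23.2 % oxygen by mass (approximation).

M = {"C": 12.011, "H": 1.008, "O": 15.999}
m_octane = 8 * M["C"] + 18 * M["H"]            # ~114 g/mol
m_o2 = 2 * M["O"]
m_co2 = M["C"] + 2 * M["O"]
m_h2o = 2 * M["H"] + M["O"]

per_kg_fuel = {
    "O2 consumed": 12.5 * m_o2 / m_octane,     # 25/2 mol O2 per mol octane
    "CO2 produced": 8 * m_co2 / m_octane,
    "H2O produced": 9 * m_h2o / m_octane,
}

if __name__ == "__main__":
    for name, kg in per_kg_fuel.items():
        print(f"{name}: {kg:.2f} kg per kg of octane")
    print(f"Stoichiometric air-fuel ratio: {per_kg_fuel['O2 consumed'] / 0.232:.1f} : 1")
```

The roughly 15:1 stoichiometric air-fuel ratio that falls out of this calculation is close to the familiar value around which spark-ignition gasoline engines are calibrated.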
Molecular weights of the species in the representative octane combustion are 114, 32, 44, and 18 for C8H18, O2, CO2, and H2O, respectively; therefore of fuel reacts with of oxygen to produce of carbon dioxide and of water. Octane rating Spark-ignition engines are designed to burn gasoline in a controlled process called deflagration. However, the unburned mixture may autoignite by pressure and heat alone, rather than igniting from the spark plug at exactly the right time, causing a rapid pressure rise that can damage the engine. This is often referred to as engine knocking or end-gas knock. Knocking can be reduced by increasing the gasoline's resistance to autoignition, which is expressed by its octane rating. Octane rating is measured relative to a mixture of 2,2,4-trimethylpentane (an isomer of octane) and n-heptane. There are different conventions for expressing octane ratings, so the same physical fuel may have several different octane ratings based on the measure used. One of the best known is the research octane number (RON). The octane rating of typical commercially available gasoline varies by country. In Finland, Sweden, and Norway, 95 RON is the standard for regular unleaded gasoline and 98 RON is also available as a more expensive option. In the United Kingdom, over 95 percent of gasoline sold has 95 RON and is marketed as Unleaded or Premium Unleaded. Super Unleaded, with 97/98 RON and branded high-performance fuels (e.g., Shell V-Power, BP Ultimate) with 99 RON make up the balance. Gasoline with 102 RON may rarely be available for racing purposes. In the U.S., octane ratings in unleaded fuels vary between 85 and 87 AKI (91–92 RON) for regular, 89–90 AKI (94–95 RON) for mid-grade (equivalent to European regular), up to 90–94 AKI (95–99 RON) for premium (European premium). As South Africa's largest city, Johannesburg, is located on the Highveld at above sea level, the Automobile Association of South Africa recommends 95-octane gasoline at low altitude and 93-octane for use in Johannesburg because "The higher the altitude the lower the air pressure, and the lower the need for a high octane fuel as there is no real performance gain". Octane rating became important as the military sought higher output for aircraft engines in the late 1920s and the 1940s. A higher octane rating allows a higher compression ratio or supercharger boost, and thus higher temperatures and pressures, which translate to higher power output. Some scientists even predicted that a nation with a good supply of high-octane gasoline would have the advantage in air power. In 1943, the Rolls-Royce Merlin aero engine produced using 100 RON fuel from a modest displacement. By the time of Operation Overlord, both the RAF and USAAF were conducting some operations in Europe using 150 RON fuel (100/150 avgas), obtained by adding 2.5 percent aniline to 100-octane avgas. By this time, the Rolls-Royce Merlin 66 was developing using this fuel. Additives Antiknock additives Tetraethyl lead Gasoline, when used in high-compression internal combustion engines, tends to auto-ignite or "detonate" causing damaging engine knocking (also called "pinging" or "pinking"). To address this problem, tetraethyl lead (TEL) was widely adopted as an additive for gasoline in the 1920s. With a growing awareness of the seriousness of the extent of environmental and health damage caused by lead compounds, however, and the incompatibility of lead with catalytic converters, governments began to mandate reductions in gasoline lead. 
In the U.S., the Environmental Protection Agency issued regulations to reduce the lead content of leaded gasoline over a series of annual phases, scheduled to begin in 1973 but delayed by court appeals until 1976. By 1995, leaded fuel accounted for only 0.6 percent of total gasoline sales and under () of lead per year. From 1 January 1996, the U.S. Clean Air Act banned the sale of leaded fuel for use in on-road vehicles in the U.S. The use of TEL also necessitated other additives, such as dibromoethane. European countries began replacing lead-containing additives by the end of the 1980s, and by the end of the 1990s, leaded gasoline was banned within the entire European Union with an exception for Avgas 100LL for general aviation. The UAE started to switch to unleaded in the early 2000s. Reduction in the average lead content of human blood may be a major cause for falling violent crime rates around the world including South Africa. A study found a correlation between leaded gasoline usage and violent crime (see Lead–crime hypothesis). Other studies found no correlation. In August 2021, the UN Environment Programme announced that leaded petrol had been eradicated worldwide, with Algeria being the last country to deplete its reserves. UN Secretary-General António Guterres called the eradication of leaded petrol an "international success story". He also added: "Ending the use of leaded petrol will prevent more than one million premature deaths each year from heart disease, strokes and cancer, and it will protect children whose IQs are damaged by exposure to lead". Greenpeace called the announcement "the end of one toxic era". However, leaded gasoline continues to be used in aeronautic, auto racing, and off-road applications. The use of leaded additives is still permitted worldwide for the formulation of some grades of aviation gasoline such as 100LL, because the required octane rating is difficult to reach without the use of leaded additives. Different additives have replaced lead compounds. The most popular additives include aromatic hydrocarbons, ethers (MTBE and ETBE), and alcohols, most commonly ethanol. Lead Replacement Petrol Lead replacement petrol (LRP) was developed for vehicles designed to run on leaded fuels and incompatible with unleaded fuels. Rather than tetraethyllead, it contains other metals such as potassium compounds or methylcyclopentadienyl manganese tricarbonyl (MMT); these are purported to buffer soft exhaust valves and seats so that they do not suffer recession due to the use of unleaded fuel. LRP was marketed during and after the phaseout of leaded motor fuels in the United Kingdom, Australia, South Africa, and some other countries. Consumer confusion led to a widespread mistaken preference for LRP rather than unleaded, and LRP was phased out 8 to 10 years after the introduction of unleaded. Leaded gasoline was withdrawn from sale in Britain after 31 December 1999, seven years after EEC regulations signaled the end of production for cars using leaded gasoline in member states. At this stage, a large percentage of cars from the 1980s and early 1990s which ran on leaded gasoline were still in use, along with cars that could run on unleaded fuel. However, the declining number of such cars on British roads saw many gasoline stations withdrawing LRP from sale by 2003. MMT Methylcyclopentadienyl manganese tricarbonyl (MMT) is used in Canada and the U.S. to boost octane rating. Its use in the U.S. has been restricted by regulations, although it is currently allowed. 
Its use in the European Union is restricted by Article 8a of the Fuel Quality Directive following its testing under the Protocol for the evaluation of effects of metallic fuel-additives on the emissions performance of vehicles. Fuel stabilizers (antioxidants and metal deactivators) Gummy, sticky resin deposits result from oxidative degradation of gasoline during long-term storage. These harmful deposits arise from the oxidation of alkenes and other minor components in gasoline (see drying oils). Improvements in refinery techniques have generally reduced the susceptibility of gasolines to these problems. Previously, catalytically or thermally cracked gasolines were most susceptible to oxidation. The formation of gums is accelerated by copper salts, which can be neutralized by additives called metal deactivators. This degradation can be prevented through the addition of 5–100 ppm of antioxidants, such as phenylenediamines and other amines. Hydrocarbons with a bromine number of 10 or above can be protected with the combination of unhindered or partially hindered phenols and oil-soluble strong amine bases, such as hindered phenols. "Stale" gasoline can be detected by a colorimetric enzymatic test for organic peroxides produced by oxidation of the gasoline. Gasolines are also treated with metal deactivators, which are compounds that sequester (deactivate) metal salts that otherwise accelerate the formation of gummy residues. The metal impurities might arise from the engine itself or as contaminants in the fuel. Detergents Gasoline, as delivered at the pump, also contains additives to reduce internal engine carbon buildups, improve combustion and allow easier starting in cold climates. High levels of detergent can be found in Top Tier Detergent Gasolines. The specification for Top Tier Detergent Gasolines was developed by four automakers: GM, Honda, Toyota, and BMW. According to the bulletin, the minimal U.S. EPA requirement is not sufficient to keep engines clean. Typical detergents include alkylamines and alkyl phosphates at a level of 50–100 ppm. Ethanol European Union In the EU, 5 percent ethanol can be added within the common gasoline spec (EN 228). Discussions are ongoing to allow 10 percent blending of ethanol (available in Finnish, French and German gasoline stations). In Finland, most gasoline stations sell 95E10, which is 10 percent ethanol, and 98E5, which is 5 percent ethanol. Most gasoline sold in Sweden has 5–15 percent ethanol added. Three different ethanol blends are sold in the Netherlands—E5, E10 and hE15. The last of these differs from standard ethanol–gasoline blends in that it consists of 15 percent hydrous ethanol (i.e., the ethanol–water azeotrope) instead of the anhydrous ethanol traditionally used for blending with gasoline. Brazil The Brazilian National Agency of Petroleum, Natural Gas and Biofuels (ANP) requires gasoline for automobile use to have 27.5 percent of ethanol added to its composition. Pure hydrated ethanol is also available as a fuel. Australia Legislation requires retailers to label fuels containing ethanol on the dispenser, and limits ethanol use to 10 percent of gasoline in Australia. Such gasoline is commonly called E10 by major brands, and it is cheaper than regular unleaded gasoline. U.S. The federal Renewable Fuel Standard (RFS) effectively requires refiners and blenders to blend renewable biofuels (mostly ethanol) with gasoline, sufficient to meet a growing annual target of total gallons blended. 
Although the mandate does not require a specific percentage of ethanol, annual increases in the target combined with declining gasoline consumption have caused the typical ethanol content in gasoline to approach 10 percent. Most fuel pumps display a sticker that states that the fuel may contain up to 10 percent ethanol, an intentional disparity that reflects the varying actual percentage. Until late 2010, fuel retailers were only authorized to sell fuel containing up to 10 percent ethanol (E10), and most vehicle warranties (except for flexible fuel vehicles) authorize fuels that contain no more than 10 percent ethanol. In parts of the U.S., ethanol is sometimes added to gasoline without an indication that it is a component. India In October 2007, the Government of India decided to make five percent ethanol blending (with gasoline) mandatory. Currently, 10 percent ethanol blended product (E10) is being sold in various parts of the country. Ethanol has been found in at least one study to damage catalytic converters. Dyes Though gasoline is a naturally colorless liquid, many gasolines are dyed in various colors to indicate their composition and acceptable uses. In Australia, the lowest grade of gasoline (RON 91) was dyed a light shade of red/orange, but is now the same color as the medium grade (RON 95) and high octane (RON 98), which are dyed yellow. In the U.S., aviation gasoline (avgas) is dyed to identify its octane rating and to distinguish it from kerosene-based jet fuel, which is left colorless. In Canada, the gasoline for marine and farm use is dyed red and is not subject to fuel excise tax in most provinces. Oxygenate blending Oxygenate blending adds oxygen-bearing compounds such as MTBE, ETBE, TAME, TAEE, ethanol, and biobutanol. The presence of these oxygenates reduces the amount of carbon monoxide and unburned fuel in the exhaust. In many areas throughout the U.S., oxygenate blending is mandated by EPA regulations to reduce smog and other airborne pollutants. For example, in Southern California fuel must contain two percent oxygen by weight, resulting in a mixture of 5.6 percent ethanol in gasoline. The resulting fuel is often known as reformulated gasoline (RFG) or oxygenated gasoline, or, in the case of California, California reformulated gasoline (CARBOB). The federal requirement that RFG contain oxygen was dropped on 6 May 2006 because the industry had developed VOC-controlled RFG that did not need additional oxygen. MTBE was phased out in the U.S. due to groundwater contamination and the resulting regulations and lawsuits. Ethanol and, to a lesser extent, ethanol-derived ETBE are common substitutes. A common ethanol-gasoline mix of 10 percent ethanol mixed with gasoline is called gasohol or E10, and an ethanol-gasoline mix of 85 percent ethanol mixed with gasoline is called E85. The most extensive use of ethanol takes place in Brazil, where the ethanol is derived from sugarcane. In 2004, over of ethanol was produced in the U.S. for fuel use, mostly from corn and sold as E10. E85 is slowly becoming available in much of the U.S., though many of the relatively few stations vending E85 are not open to the general public. The use of bioethanol and bio-methanol, either directly or indirectly by conversion of ethanol to bio-ETBE, or methanol to bio-MTBE is encouraged by the European Union Directive on the Promotion of the use of biofuels and other renewable fuels for transport. 
Since producing bioethanol from fermented sugars and starches involves distillation, though, ordinary people in much of Europe cannot legally ferment and distill their own bioethanol at present (unlike in the U.S., where getting a BATF distillation permit has been easy since the 1973 oil crisis). Safety Toxicity The safety data sheet for a 2003 Texan unleaded gasoline shows at least 15 hazardous chemicals occurring in various amounts, including benzene (up to five percent by volume), toluene (up to 35 percent by volume), naphthalene (up to one percent by volume), trimethylbenzene (up to seven percent by volume), methyl tert-butyl ether (MTBE) (up to 18 percent by volume, in some states), and about 10 others. Hydrocarbons in gasoline generally exhibit low acute toxicities, with LD50 of 700–2700 mg/kg for simple aromatic compounds. Benzene and many antiknocking additives are carcinogenic. People can be exposed to gasoline in the workplace by swallowing it, breathing in vapors, skin contact, and eye contact. Gasoline is toxic. The National Institute for Occupational Safety and Health (NIOSH) has also designated gasoline as a carcinogen. Physical contact, ingestion, or inhalation can cause health problems. Since ingesting large amounts of gasoline can cause permanent damage to major organs, a call to a local poison control center or emergency room visit is indicated. Contrary to common misconception, swallowing gasoline does not generally require special emergency treatment, and inducing vomiting does not help, and can make it worse. According to poison specialist Brad Dahl, "even two mouthfuls wouldn't be that dangerous as long as it goes down to your stomach and stays there or keeps going". The U.S. CDC's Agency for Toxic Substances and Disease Registry says not to induce vomiting, lavage, or administer activated charcoal. Inhalation for intoxication Inhaled (huffed) gasoline vapor is a common intoxicant. Users concentrate and inhale gasoline vapor in a manner not intended by the manufacturer to produce euphoria and intoxication. Gasoline inhalation has become epidemic in some poorer communities and indigenous groups in Australia, Canada, New Zealand, and some Pacific Islands. The practice is thought to cause severe organ damage, along with other effects such as intellectual disability and various cancers. In Canada, Native children in the isolated Northern Labrador community of Davis Inlet were the focus of national concern in 1993, when many were found to be sniffing gasoline. The Canadian and provincial Newfoundland and Labrador governments intervened on several occasions, sending many children away for treatment. Despite being moved to the new community of Natuashish in 2002, serious inhalant abuse problems have continued. Similar problems were reported in Sheshatshiu in 2000 and also in Pikangikum First Nation. In 2012, the issue once again made the news media in Canada. Australia has long faced a petrol (gasoline) sniffing problem in isolated and impoverished aboriginal communities. Although some sources argue that sniffing was introduced by U.S. servicemen stationed in the nation's Top End during World War II or through experimentation by 1940s-era Cobourg Peninsula sawmill workers, other sources claim that inhalant abuse (such as glue inhalation) emerged in Australia in the late 1960s. Chronic, heavy petrol sniffing appears to occur among remote, impoverished indigenous communities, where the ready accessibility of petrol has helped to make it a common substance for abuse. 
In Australia, petrol sniffing now occurs widely throughout remote Aboriginal communities in the Northern Territory, Western Australia, northern parts of South Australia, and Queensland. The number of people sniffing petrol goes up and down over time as young people experiment or sniff occasionally. "Boss", or chronic, sniffers may move in and out of communities; they are often responsible for encouraging young people to take it up. In 2005, the Government of Australia and BP Australia began the usage of Opal fuel in remote areas prone to petrol sniffing. Opal is a non-sniffable fuel (which is much less likely to cause a high) and has made a difference in some indigenous communities. Flammability Gasoline is extremely flammable due to its low flash point of . Like other hydrocarbons, gasoline burns in a limited range of its vapor phase, and, coupled with its volatility, this makes leaks highly dangerous when sources of ignition are present. Gasoline has a lower explosive limit of 1.4 percent by volume and an upper explosive limit of 7.6 percent. If the concentration is below 1.4 percent, the air-gasoline mixture is too lean and does not ignite. If the concentration is above 7.6 percent, the mixture is too rich and also does not ignite. However, gasoline vapor rapidly mixes and spreads with air, making unconstrained gasoline quickly flammable. Gasoline exhaust The exhaust gas generated by burning gasoline is harmful to both the environment and to human health. After CO is inhaled into the human body, it readily combines with hemoglobin in the blood, and its affinity for hemoglobin is about 300 times that of oxygen. The hemoglobin in the lungs therefore binds CO instead of oxygen, leaving the body hypoxic and causing headaches, dizziness, vomiting, and other poisoning symptoms. In severe cases, it may lead to death. Hydrocarbons only affect the human body when their concentration is quite high, and their toxicity level depends on the chemical composition. The hydrocarbons produced by incomplete combustion include alkanes, aromatics, and aldehydes. Among them, a concentration of methane and ethane over will cause loss of consciousness or suffocation, a concentration of pentane and hexane over will have an anesthetic effect, and aromatic hydrocarbons have more serious effects on health, including blood toxicity, neurotoxicity, and cancer. If the concentration of benzene exceeds 40 ppm, it can cause leukemia, and xylene can cause headache, dizziness, nausea, and vomiting. Human exposure to large amounts of aldehydes can cause eye irritation, nausea, and dizziness. In addition to carcinogenic effects, long-term exposure can cause damage to the skin, liver, and kidneys, as well as cataracts. After NOx enters the alveoli, it severely irritates lung tissue. It can irritate the conjunctiva of the eyes, causing tearing and pink eye, and it also irritates the nose, pharynx, throat, and other organs. Poisoning can cause acute wheezing, breathing difficulties, red eyes, sore throat, and dizziness. Environmental impact In recent years, with the rapid development of the motor vehicle economy, the production and use of motor vehicles have increased dramatically, and pollution of the environment by motor vehicle exhaust has become increasingly serious. The air pollution in many large cities has shifted from coal-burning pollution to "motor vehicle pollution". 
In the U.S., transportation is the largest source of carbon emissions, accounting for 30 percent of the total carbon footprint of the U.S. Combustion of gasoline produces of carbon dioxide, a greenhouse gas. Unburnt gasoline and evaporation from the tank, when in the atmosphere, react in sunlight to produce photochemical smog. Vapor pressure initially rises as ethanol is added to gasoline, with the increase greatest at about 10 percent by volume; at ethanol concentrations above 10 percent, the vapor pressure of the blend starts to decrease. At 10 percent ethanol by volume, the rise in vapor pressure may potentially increase the problem of photochemical smog. This rise in vapor pressure could be mitigated by increasing or decreasing the percentage of ethanol in the gasoline mixture. The chief risks of such leaks come not from vehicles, but from gasoline delivery truck accidents and leaks from storage tanks. Because of this risk, most (underground) storage tanks now have extensive measures in place to detect and prevent any such leaks, such as monitoring systems (Veeder-Root, Franklin Fueling). Production of gasoline consumes of water per distance driven. Gasoline use causes a variety of deleterious effects to the human population and to the climate generally. The harms imposed include a higher rate of premature death and ailments, such as asthma, caused by air pollution, higher healthcare costs for the public generally, decreased crop yields, missed work and school days due to illness, increased flooding and other extreme weather events linked to global climate change, and other social costs. The costs imposed on society and the planet are estimated to be $3.80 per gallon of gasoline, in addition to the price paid at the pump by the user. The damage to health and climate caused by a gasoline-powered vehicle greatly exceeds that caused by electric vehicles. Carbon dioxide About of carbon dioxide (CO2) are produced from burning gasoline that does not contain ethanol. Most of the retail gasoline now sold in the U.S. contains about 10 percent fuel ethanol (or E10) by volume. Burning E10 produces about of CO2 that is emitted from the fossil fuel content. If the CO2 emissions from ethanol combustion are considered, then about of CO2 are produced when E10 is combusted. Worldwide, 7 liters of gasoline are burnt for every 100 km driven by cars and vans. In 2021, the International Energy Agency stated, "To ensure fuel economy and CO2 emissions standards are effective, governments must continue regulatory efforts to monitor and reduce the gap between real-world fuel economy and rated performance." Contamination of soil and water Gasoline enters the environment through the soil, groundwater, surface water, and air. Humans may therefore be exposed to gasoline by breathing, eating, and skin contact: for example, by using gasoline-filled equipment such as lawnmowers, drinking gasoline-contaminated water close to gasoline spills or leaks into the soil, working at a gasoline station, or inhaling gasoline vapor while refueling at a gasoline station. Use and pricing The International Energy Agency said in 2021 that "road fuels should be taxed at a rate that reflects their impact on people's health and the climate". Europe Countries in Europe impose substantially higher taxes on fuels such as gasoline when compared to the U.S. The price of gasoline in Europe is typically higher than that in the U.S. due to this difference. U.S. 
From 1998 to 2004, the price of gasoline fluctuated between . After 2004, the price increased until the average gasoline price reached a high of in mid-2008 but receded to approximately by September 2009. The U.S. experienced an upswing in gasoline prices through 2011, and, by 1 March 2012, the national average was . California prices are higher because the California government mandates unique California gasoline formulas and taxes. In the U.S., most consumer goods bear pre-tax prices, but gasoline prices are posted with taxes included. Taxes are added by federal, state, and local governments. , the federal tax was for gasoline and for diesel (excluding red diesel). About nine percent of all gasoline sold in the U.S. in May 2009 was premium grade, according to the Energy Information Administration. Consumer Reports magazine says, "If [your owner's manual] says to use regular fuel, do so—there's no advantage to a higher grade." The Associated Press said premium gas—which has a higher octane rating and costs more per gallon than regular unleaded—should be used only if the manufacturer says it is "required". Cars with turbocharged engines and high compression ratios often specify premium gasoline because higher octane fuels reduce the incidence of "knock", or fuel pre-detonation. The price of gasoline varies considerably between the summer and winter months. There is a considerable difference between summer and winter gasoline blends in vapor pressure (Reid vapor pressure, RVP), which is a measure of how easily the fuel evaporates at a given temperature. The higher the gasoline volatility (the higher the RVP), the more easily it evaporates. The conversion between the two fuels occurs twice a year, once in autumn (to the winter mix) and once in spring (to the summer mix). The winter-blended fuel has a higher RVP because the fuel must be able to evaporate at a low temperature for the engine to run normally. If the RVP is too low on a cold day, the vehicle will be difficult to start. The summer-blended gasoline, by contrast, has a lower RVP, which prevents excessive evaporation when the outdoor temperature rises, reduces ozone emissions, and reduces smog levels. At the same time, vapor lock is less likely to occur in hot weather. Gasoline production by country Comparison with other fuels The energy density (per volume) and specific energy (per mass) of various transportation fuels can be compared with those of gasoline; gross and net figures are from the Oak Ridge National Laboratory's Transportation Energy Data Book. 
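The comparison described above rests on a simple relationship: specific energy (per mass) equals volumetric energy density divided by density. The Python sketch below illustrates it with rough, commonly cited lower-heating-value figures that are assumptions for illustration only, not values taken from the Oak Ridge table.

```python
# Specific energy (MJ/kg) = energy density (MJ/L) / density (kg/L).
# The figures below are rough, typical lower-heating values, for illustration only.

FUELS = {
    # name: (energy density MJ/L, density kg/L) -- approximate values
    "gasoline": (32.0, 0.745),
    "diesel": (36.0, 0.85),
    "ethanol": (21.2, 0.789),
    "LPG (propane)": (23.0, 0.50),
}

if __name__ == "__main__":
    gas_mj_per_l = FUELS["gasoline"][0]
    for name, (mj_per_l, rho) in FUELS.items():
        mj_per_kg = mj_per_l / rho
        litres_per_gas_litre = gas_mj_per_l / mj_per_l
        print(f"{name:14s} ~{mj_per_kg:5.1f} MJ/kg; "
              f"{litres_per_gas_litre:.2f} L matches 1 L of gasoline")
```

Even with such rough inputs, the sketch reproduces the qualitative picture discussed earlier in the article: fuels like ethanol and LPG carry less energy per litre than gasoline, while LPG carries more energy per kilogram.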
23640
https://en.wikipedia.org/wiki/Pentose
Pentose
In chemistry, a pentose is a monosaccharide (simple sugar) with five carbon atoms. The chemical formula of many pentoses is C5H10O5, and their molecular weight is 150.13 g/mol. Pentoses are very important in biochemistry. Ribose is a constituent of RNA, and the related molecule, deoxyribose, is a constituent of DNA. Phosphorylated pentoses are important products of the pentose phosphate pathway, most importantly ribose 5-phosphate (R5P), which is used in the synthesis of nucleotides and nucleic acids, and erythrose 4-phosphate (E4P), which is used in the synthesis of aromatic amino acids. Like some other monosaccharides, pentoses exist in two forms, open-chain (linear) or closed-chain (cyclic), that easily convert into each other in water solutions. The linear form of a pentose, which usually exists only in solutions, has an open-chain backbone of five carbons. Four of these carbons have one hydroxyl functional group (–OH) each, connected by a single bond, and one has an oxygen atom connected by a double bond (=O), forming a carbonyl group (C=O). The remaining bonds of the carbon atoms are satisfied by six hydrogen atoms. Thus the structure of the linear form is H–(CHOH)x–C(=O)–(CHOH)4−x–H, where x is 0, 1, or 2. The term "pentose" sometimes is assumed to include deoxypentoses, such as deoxyribose: compounds with general formula that can be described as derived from pentoses by replacement of one or more hydroxyl groups with hydrogen atoms. Classification The aldopentoses are a subclass of the pentoses which, in the linear form, have the carbonyl at carbon 1, forming an aldehyde derivative with structure H–C(=O)–(CHOH)4–H. The most important example is ribose. The ketopentoses instead have the carbonyl at positions 2 or 3, forming a ketone derivative with structure H–CHOH–C(=O)–(CHOH)3–H (2-ketopentose) or H–(CHOH)2–C(=O)–(CHOH)2–H (3-ketopentose). The latter are not known to occur in nature and are difficult to synthesize. In the open form, there are eight aldopentoses and four 2-ketopentoses, stereoisomers that differ in the spatial position of the hydroxyl groups. These forms occur in pairs of optical isomers, generally labelled "D" or "L" by conventional rules (independently of their optical activity). Aldopentoses The aldopentoses have three chiral centers; therefore, eight (2³) different stereoisomers are possible. Ketopentoses The 2-ketopentoses have two chiral centers; therefore, four (2²) different stereoisomers are possible. The 3-ketopentoses are rare. Cyclic form The closed or cyclic form of a pentose is created when the carbonyl group interacts with a hydroxyl in another carbon, turning the carbonyl into a hydroxyl and creating an ether bridge –O– between the two carbons. This intramolecular reaction yields a cyclic molecule, with a ring consisting of one oxygen atom and usually four carbon atoms; the cyclic compounds are then called furanoses, for having the same rings as the cyclic ether tetrahydrofuran. The closure turns the carbonyl carbon into a chiral center, which may have either of two configurations, depending on the position of the new hydroxyl. 
Therefore, each linear form can produce two distinct closed forms, identified by prefixes "α" and "β". Deoxypentoses The one deoxypentose has two total stereoisomers. Properties In the cell, pentoses have a higher metabolic stability than hexoses. A polymer composed of pentose sugars is called a pentosan. Tests for pentoses The most important tests for pentoses rely on converting the pentose to furfural, which then reacts with a chromophore. In Tollens’ test for pentoses (not to be confused with Tollens' silver-mirror test for reducing sugars), the furfural ring reacts with phloroglucinol to produce a colored compound; in the aniline acetate test with aniline acetate; and in Bial's test, with orcinol. In each of these tests, pentoses react much more strongly and quickly than hexoses. 
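The stereoisomer counts quoted in the Classification section follow directly from the number of chiral centres: with n centres there are 2ⁿ possible configurations. The short sketch below illustrates that counting only; the R/S labels are generic placeholders and the snippet is not tied to any real carbohydrate-nomenclature library.

```python
# Each chiral centre can take one of two configurations, so a sugar with n
# chiral centres has 2**n stereoisomers, grouped into 2**(n-1) pairs of
# enantiomers (conventionally labelled D and L).
from itertools import product

def stereoisomers(n_chiral_centres: int) -> list[str]:
    """Enumerate all configuration strings for n chiral centres."""
    return [''.join(cfg) for cfg in product('RS', repeat=n_chiral_centres)]

print(len(stereoisomers(3)))  # 8 aldopentoses (three chiral centres)
print(len(stereoisomers(2)))  # 4 2-ketopentoses (two chiral centres)
```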
23643
https://en.wikipedia.org/wiki/Propane
Propane
Propane is a three-carbon alkane with the molecular formula C3H8. It is a gas at standard temperature and pressure, but compressible to a transportable liquid. A by-product of natural gas processing and petroleum refining, it is often a constituent of liquefied petroleum gas (LPG), which is commonly used as a fuel in domestic and industrial applications and in low-emissions public transportation; other constituents of LPG may include propylene, butane, butylene, butadiene, and isobutylene. Discovered in 1857 by the French chemist Marcellin Berthelot, it became commercially available in the US by 1911. Propane has lower volumetric energy density than gasoline or coal, but has higher gravimetric energy density than either and burns more cleanly. Propane gas has become a popular choice for barbecues and portable stoves because its low boiling point of −42 °C means that, inside a pressurized container, it exists in two phases (liquid with vapor above) and vaporizes as soon as it is released. It retains its ability to vaporize even in cold weather, making it better suited for outdoor use in cold climates than alternatives with higher boiling points such as butane. LPG powers buses, forklifts, automobiles, outboard boat motors, and ice resurfacing machines, and is used for heat and cooking in recreational vehicles and campers. Propane is also becoming popular as a replacement refrigerant (R-290) for heat pumps, as it offers greater efficiency than current refrigerants such as R-410A and R-32, higher-temperature heat output, and less damage to the atmosphere from escaped gas, at the expense of high flammability. History Propane was first synthesized by the French chemist Marcellin Berthelot in 1857 during his researches on hydrogenation. Berthelot made propane by heating propylene dibromide (C3H6Br2) with potassium iodide and water. Propane was found dissolved in Pennsylvanian light crude oil by Edmund Ronalds in 1864. Walter O. Snelling of the U.S. Bureau of Mines highlighted it as a volatile component in gasoline in 1910, which marked the "birth of the propane industry" in the United States. The volatility of these lighter hydrocarbons caused them to be known as "wild" because of the high vapor pressures of unrefined gasoline. On March 31, 1912, The New York Times reported on Snelling's work with liquefied gas, saying "a steel bottle will carry enough gas to light an ordinary home for three weeks". It was during this time that Snelling—in cooperation with Frank P. Peterson, Chester Kerr, and Arthur Kerr—developed ways to liquefy the LP gases during the refining of gasoline. Together, they established American Gasol Co., the first commercial marketer of propane. Snelling had produced relatively pure propane by 1911, and on March 25, 1913, his method of processing and producing LP gases was issued patent #1,056,845. A separate method of producing LP gas through compression was developed by Frank Peterson and its patent was granted on July 2, 1912. The 1920s saw increased production of LP gases, with the first year of recorded production being 1922. Annual marketed production and sales of LP gas continued to grow through the late 1920s and into the 1930s. Major industry developments in the 1930s included the introduction of railroad tank car transport, gas odorization, and the construction of local bottle-filling plants. The year 1945 marked the first year that annual LP gas sales reached a billion gallons. By 1947, 62% of all U.S. homes had been equipped with either natural gas or propane for cooking. 
In 1950, 1,000 propane-fueled buses were ordered by the Chicago Transit Authority, and U.S. sales continued to grow through 1958. In 2004, propane was reported to be a growing $8 billion to $10 billion industry in the U.S., with billions of gallons used annually. During the COVID-19 pandemic, propane shortages were reported in the United States due to increased demand. Etymology The "prop-" root found in "propane" and names of other compounds with three-carbon chains was derived from "propionic acid", which in turn was named after the Greek words protos (meaning first) and pion (fat), as it was the "first" member of the series of fatty acids. Properties and reactions Propane is a colorless, odorless gas. Ethyl mercaptan is added as an odorant as a safety precaution; its smell is commonly described as that of rotten eggs. At normal pressure it liquefies below its boiling point at −42 °C and solidifies below its melting point at −187.7 °C. Propane crystallizes in the space group P21/n. The low space-filling of 58.5% (at 90 K), due to the poor stacking properties of the molecule, is the reason for the particularly low melting point. Propane undergoes combustion reactions in a similar fashion to other alkanes. In the presence of excess oxygen, propane burns to form water and carbon dioxide. C3H8 + 5 O2 -> 3 CO2 + 4 H2O + heat When insufficient oxygen is present for complete combustion, carbon monoxide, soot (carbon), or both, are formed as well: C3H8 + 9/2 O2 -> 2 CO2 + CO + 4 H2O + heat C3H8 + 2 O2 -> 3 C + 4 H2O + heat The complete combustion of propane produces about 50 MJ/kg of heat. Propane combustion is much cleaner than that of coal or unleaded gasoline. Propane's per-BTU production of CO2 is almost as low as that of natural gas. Propane burns hotter than home heating oil or diesel fuel because of the very high hydrogen content. The presence of C–C bonds, plus the multiple bonds of propylene and butylene, produces organic exhausts besides carbon dioxide and water vapor during typical combustion. These bonds also cause propane to burn with a visible flame. Energy content The enthalpy of combustion of propane gas where all products return to standard state, for example where water returns to its liquid state at standard temperature (known as the higher heating value), is 2,219.2 ± 0.5 kJ/mol, or 50.33 ± 0.01 MJ/kg, of heat released. The enthalpy of combustion of propane gas where products do not return to standard state, for example where the hot gases including water vapor exit a chimney (known as the lower heating value), is 2,043.455 kJ/mol of heat released. The lower heat value is the amount of heat available from burning the substance where the combustion products are vented to the atmosphere; for example, the heat from a fireplace when the flue is open. Density The density of propane gas at 25 °C (77 °F) is 1.808 kg/m3, about 1.5× the density of air at the same temperature. The density of liquid propane at 25 °C (77 °F) is 0.493 g/cm3, which is equivalent to 4.11 pounds per U.S. liquid gallon or 493 g/L. Propane expands at 1.5% per 10 °F. Thus, liquid propane has a density of approximately 4.2 pounds per gallon (504 g/L) at 60 °F (15.6 °C). Because the density of propane changes with temperature, this must be taken into account whenever the application involves safety or custody transfer operations. Uses Portable stoves Propane is a popular choice for barbecues and portable stoves because its low boiling point of −42 °C makes it vaporize as soon as it is released from its pressurized container. 
Therefore, no carburetor or other vaporizing device is required; a simple metering nozzle suffices. Refrigerant Blends of pure, dry propane (R-290) and isobutane (R-600a), sometimes marketed as "isopropane", can be used as the circulating refrigerant in suitably constructed compressor-based refrigeration. Compared to fluorocarbons, propane has a negligible ozone depletion potential and very low global warming potential (having a GWP value of 0.072, 13.9 times lower than the GWP of carbon dioxide) and can serve as a functional replacement for R-12, R-22, R-134a, and other chlorofluorocarbon or hydrofluorocarbon refrigerants in conventional stationary refrigeration and air conditioning systems. Because its global warming effect is far less than that of current refrigerants, propane was chosen as one of five replacement refrigerants approved by the EPA in 2015, for use in systems specially designed to handle its flammability. Such substitution is widely prohibited or discouraged in motor vehicle air conditioning systems, on the grounds that using flammable hydrocarbons in systems originally designed to carry non-flammable refrigerant presents a significant risk of fire or explosion. Vendors and advocates of hydrocarbon refrigerants argue against such bans on the grounds that there have been very few such incidents relative to the number of vehicle air conditioning systems filled with hydrocarbons. Propane is also instrumental in providing off-the-grid refrigeration, as the energy source for a gas absorption refrigerator, and is commonly used for camping and recreational vehicles. It has also been proposed to use propane as a refrigerant in heat pumps. Domestic and industrial fuel Since it can be transported easily, it is a popular fuel for home heat and backup electrical generation in sparsely populated areas that do not have natural gas pipelines. In June 2023, Stanford researchers found propane combustion emitted detectable and repeatable levels of benzene that in some homes raised indoor benzene concentrations above well-established health benchmarks. The research also shows that gas and propane fuels appear to be the dominant source of benzene produced by cooking. In rural areas of North America, as well as northern Australia, propane is used to heat livestock facilities, in grain dryers, and in other heat-producing appliances. When used for heating or grain drying it is usually stored in a large, permanently placed cylinder which is refilled by a propane-delivery truck. Some 6.2 million American households use propane as their primary heating fuel. In North America, local delivery trucks fill up large cylinders that are permanently installed on the property, or other service trucks exchange empty cylinders of propane for filled cylinders. Large tractor-trailer trucks transport propane from the pipeline or refinery to the local bulk plant. The bobtail tank truck is not unique to the North American market, though the practice is not as common elsewhere, and the vehicles are generally called tankers. In many countries, propane is delivered to end-users via small or medium-sized individual cylinders, while empty cylinders are removed for refilling at a central location. There are also community propane systems, with a central cylinder feeding individual homes. Motor fuel In the U.S., over 190,000 on-road vehicles use propane, and over 450,000 forklifts use it for power. 
It is the third most popular vehicle fuel in the world, behind gasoline and diesel fuel. In other parts of the world, propane used in vehicles is known as autogas. In 2007, approximately 13 million vehicles worldwide used autogas. The advantage of propane in cars is its liquid state at a moderate pressure. This allows fast refill times, affordable fuel cylinder construction, and prices typically just over half that of gasoline. Meanwhile, it is noticeably cleaner (both in handling and in combustion), results in less engine wear from carbon deposits without diluting engine oil (often extending oil-change intervals), and until recently was relatively low-cost in North America. The octane rating of propane is relatively high at 110. In the United States the propane fueling infrastructure is the most developed of all alternative vehicle fuels. Many converted vehicles have provisions for topping off from "barbecue bottles". Purpose-built vehicles are often in commercially owned fleets, and have private fueling facilities. A further saving for propane fuel vehicle operators, especially in fleets, is that theft is much more difficult than with gasoline or diesel fuels. Propane is also used as fuel for small engines, especially those used indoors or in areas with insufficient fresh air and ventilation to carry away the more toxic exhaust of an engine running on gasoline or diesel fuel. More recently, there have been lawn-care products like string trimmers, lawn mowers and leaf blowers intended for outdoor use, but fueled by propane in order to reduce air pollution. Many heavy-duty highway trucks use propane as a boost, where it is added through the turbocharger, to mix with diesel fuel droplets. Propane's very high hydrogen content helps the diesel fuel to burn hotter and therefore more completely. This provides more torque, more horsepower, and a cleaner exhaust for the trucks. It is normal for a 7-liter medium-duty diesel truck engine to increase fuel economy by 20 to 33 percent when a propane boost system is used. It also reduces fuel costs, because propane is much cheaper than diesel fuel. The longer distance a trucker can travel on a full load of combined diesel and propane fuel means they can comply with federal hours-of-work rules while making two fewer fuel stops on a cross-country trip. Truckers, tractor pulling competitions, and farmers have been using a propane boost system for over forty years in North America. Other uses Propane is the primary flammable gas in blowtorches for soldering. Propane is used in oxy-fuel welding and cutting. Propane does not burn as hot as acetylene in its inner cone, and so it is rarely used for welding. Propane, however, has a very high number of BTUs per cubic foot in its outer cone, and so with the right torch (injector style) it can make a faster and cleaner cut than acetylene, and is much more useful for heating and bending than acetylene. Propane is used as a feedstock for the production of base petrochemicals in steam cracking. Propane is the primary fuel for hot-air balloons. It is used in semiconductor manufacture to deposit silicon carbide. Propane is commonly used in theme parks and in movie production as an inexpensive, high-energy fuel for explosions and other special effects. Propane is used as a propellant, relying on the expansion of the gas, rather than its ignition, to fire the projectile. The use of a liquefied gas gives more shots per cylinder, compared to a compressed gas. Propane is also used as a cooking fuel. 
Propane is used as a propellant for many household aerosol sprays, including shaving creams and air fresheners. Propane is a promising feedstock for the production of propylene. Liquefied propane is used in the extraction of animal fats and vegetable oils. Purity The North American standard grade of automotive-use propane is rated HD-5 (Heavy Duty 5%). HD-5 grade has a maximum of 5 percent butane, but propane sold in Europe has a maximum allowable amount of butane of 30 percent, meaning it is not the same fuel as HD-5. The LPG used as auto fuel and cooking gas in Asia and Australia also has very high butane content. Propylene (also called propene) can be a contaminant of commercial propane. Propane containing too much propene is not suited for most vehicle fuels. HD-5 is a specification that establishes a maximum concentration of 5% propene in propane. Propane and other LP gas specifications are established in ASTM D-1835. All propane fuels include an odorant, almost always ethanethiol, so that the gas can be smelled easily in case of a leak. Propane as HD-5 was originally intended for use as vehicle fuel. HD-5 is currently being used in all propane applications. Typically in the United States and Canada, LPG is primarily propane (at least 90%), while the rest is mostly ethane, propylene, butane, and odorants including ethyl mercaptan. This is the HD-5 standard (a maximum allowable propylene content of 5%, and no more than 5% butanes and ethane), defined by the American Society for Testing and Materials in its Standard 1835 for internal combustion engines. Not all products labeled "LPG" conform to this standard, however. In Mexico, for example, gas labeled "LPG" may consist of 60% propane and 40% butane. "The exact proportion of this combination varies by country, depending on international prices, on the availability of components and, especially, on the climatic conditions that favor LPG with higher butane content in warmer regions and propane in cold areas". Comparison with natural gas Propane is bought and stored in a liquid form, LPG. It can easily be stored in a relatively small space. By comparison, compressed natural gas (CNG) cannot be liquefied by compression at normal temperatures, as these are well above its critical temperature. As a gas, very high pressure is required to store useful quantities. This poses the hazard that, in an accident, just as with any compressed gas cylinder (such as a CO2 cylinder used for a soda concession), a CNG cylinder may burst with great force, or leak rapidly enough to become a self-propelled missile. Therefore, CNG is much less efficient to store than propane, due to the large cylinder volume required. An alternative means of storing natural gas is as a cryogenic liquid in an insulated container as liquefied natural gas (LNG). This form of storage is at low pressure and is around 3.5 times as efficient as storing it as CNG. Unlike propane, CNG will evaporate and dissipate if a spill occurs, because it is lighter than air. Propane is much more commonly used to fuel vehicles than is natural gas, because the equipment costs less. Propane requires only moderate pressure to remain liquid at room temperature. Hazards Propane is a simple asphyxiant. Unlike natural gas, it is denser than air. It may accumulate in low spaces and near the floor. When abused as an inhalant, it may cause hypoxia (lack of oxygen), pneumonia, cardiac failure or cardiac arrest. Propane has low toxicity since it is not readily absorbed and is not biologically active. 
Commonly stored under pressure at room temperature, propane and its mixtures will flash evaporate at atmospheric pressure and cool well below the freezing point of water. The cold gas, which appears white due to moisture condensing from the air, may cause frostbite. Propane is denser than air. If a leak in a propane fuel system occurs, the vaporized gas will have a tendency to sink into any enclosed area and thus poses a risk of explosion and fire. The typical scenario is a leaking cylinder stored in a basement; the propane leak drifts across the floor to the pilot light on the furnace or water heater, and results in an explosion or fire. This property makes propane generally unsuitable as a fuel for boats. In 2007, a heavily investigated vapor-related explosion occurred in Ghent, West Virginia, U.S., killing four people, injuring several others, and completely destroying the Little General convenience store on Flat Top Road. Another hazard associated with propane storage and transport is known as a BLEVE or boiling liquid expanding vapor explosion. The Kingman Explosion involved a railroad tank car in Kingman, Arizona, U.S., in 1973 during a propane transfer. The fire and subsequent explosions resulted in twelve fatalities and numerous injuries. Production Propane is produced as a by-product of two other processes, natural gas processing and petroleum refining. The processing of natural gas involves removal of butane, propane, and large amounts of ethane from the raw gas, to prevent condensation of these volatiles in natural gas pipelines. Additionally, oil refineries produce some propane as a by-product of cracking petroleum into gasoline or heating oil. The supply of propane cannot easily be adjusted to meet increased demand, because of the by-product nature of propane production. About 90% of U.S. propane is domestically produced. The United States imports about 10% of the propane consumed each year, with about 70% of that coming from Canada via pipeline and rail. The remaining 30% of imported propane comes to the United States from other sources via ocean transport. After it is separated from the crude oil, North American propane is stored in huge salt caverns. Examples of these are Fort Saskatchewan, Alberta; Mont Belvieu, Texas; and Conway, Kansas. These salt caverns can store vast quantities of propane. Retail cost United States As of November 2013, the retail cost of propane was approximately $2.37 per gallon, or roughly $25.95 per 1 million BTUs. This means that filling a 500-gallon propane tank, which is what households that use propane as their main source of energy usually require, cost $948 (tanks are filled to 80% of capacity, or 400 gallons), a 7.5% increase on the 2012–2013 winter season average US price. However, propane costs per gallon change significantly from one state to another: the Energy Information Administration (EIA) quotes a $2.995 per gallon average on the East Coast for October 2013, while the figure for the Midwest was $1.860 for the same period. By December 2015, the propane retail cost was approximately $1.97 per gallon. This means that filling a 500-gallon propane tank to 80% capacity cost $788, a 16.9% decrease or $160 less from the November 2013 quote in this section. Similar regional differences in prices are present with the December 2015 EIA figure for the East Coast at $2.67 per gallon and the Midwest at $1.43 per gallon. More recently, the average US propane retail cost was approximately $2.48 per gallon. The wholesale price of propane in the U.S. always drops in the summer as most homes do not require it for home heating. 
The wholesale price of propane in the summer of 2018 was between 86 and 96 cents per U.S. gallon, based on a truckload or railway car load. The price for home heating was exactly double that price: at 95 cents per gallon wholesale, a home-delivered price was $1.90 per gallon if 500 gallons were ordered at a time. Prices in the Midwest are always cheaper than in California. Prices for home delivery always go up near the end of August or the first few days of September, when people start ordering their home tanks to be filled.
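The energy and price figures quoted in the Energy content and Retail cost sections can be cross-checked with a short back-of-the-envelope calculation. The sketch below uses only numbers taken from this article (the 50.33 MJ/kg higher heating value, a liquid density of roughly 504 g/L, and the $2.37 and $1.97 per gallon retail prices); the unit-conversion factors are standard, and the results are approximate rather than official figures.

```python
# Rough cross-check of the article's propane energy and cost figures.
MJ_PER_KG = 50.33            # higher heating value quoted above
KG_PER_LITRE = 0.504         # liquid density at 60 °F quoted above
LITRES_PER_US_GALLON = 3.785
BTU_PER_MJ = 947.8

mj_per_gallon = MJ_PER_KG * KG_PER_LITRE * LITRES_PER_US_GALLON
btu_per_gallon = mj_per_gallon * BTU_PER_MJ
print(round(btu_per_gallon))                     # ~91,000 BTU per gallon

# Cost per million BTU at the November 2013 retail price of $2.37/gallon:
print(round(2.37 / (btu_per_gallon / 1e6), 2))   # ~$26, close to the $25.95 quoted

# Filling a 500-gallon tank to the customary 80% at $1.97/gallon:
print(0.8 * 500 * 1.97)                          # 788.0, matching the text
```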
23645
https://en.wikipedia.org/wiki/Precambrian
Precambrian
The Precambrian (or Pre-Cambrian, sometimes abbreviated pC, or Cryptozoic) is the earliest part of Earth's history, set before the current Phanerozoic Eon. The Precambrian is so named because it preceded the Cambrian, the first period of the Phanerozoic Eon, which is named after Cambria, the Latinized name for Wales, where rocks from this age were first studied. The Precambrian accounts for 88% of the Earth's geologic time. The Precambrian is an informal unit of geologic time, subdivided into three eons (Hadean, Archean, Proterozoic) of the geologic time scale. It spans from the formation of Earth about 4.6 billion years ago (Ga) to the beginning of the Cambrian Period, about 541 million years ago (Ma), when hard-shelled creatures first appeared in abundance. Overview Relatively little is known about the Precambrian, despite it making up roughly seven-eighths of the Earth's history, and what is known has largely been discovered from the 1960s onwards. The Precambrian fossil record is poorer than that of the succeeding Phanerozoic, and fossils from the Precambrian (e.g. stromatolites) are of limited biostratigraphic use. This is because many Precambrian rocks have been heavily metamorphosed, obscuring their origins, while others have been destroyed by erosion, or remain deeply buried beneath Phanerozoic strata. It is thought that the Earth coalesced from material in orbit around the Sun at roughly 4,543 Ma, and may have been struck by another planet called Theia shortly after it formed, splitting off material that formed the Moon (see Giant-impact hypothesis). A stable crust was apparently in place by 4,433 Ma, since zircon crystals from Western Australia have been dated at 4,404 ± 8 Ma. The term "Precambrian" is used by geologists and paleontologists for general discussions not requiring a more specific eon name. However, both the United States Geological Survey and the International Commission on Stratigraphy regard the term as informal. Because the span of time falling under the Precambrian consists of three eons (the Hadean, the Archean, and the Proterozoic), it is sometimes described as a supereon, but this is also an informal term, not defined by the ICS in its chronostratigraphic guide. An older term meaning "earliest" was formerly used as a synonym for pre-Cambrian, or more specifically Archean. Life forms A specific date for the origin of life has not been determined. Carbon found in 3.8 billion-year-old rocks (Archean Eon) from islands off western Greenland may be of organic origin. Well-preserved microscopic fossils of bacteria older than 3.46 billion years have been found in Western Australia. Probable fossils 100 million years older have been found in the same area. However, there is evidence that life could have evolved over 4.280 billion years ago. There is a fairly solid record of bacterial life throughout the remainder (Proterozoic Eon) of the Precambrian. Complex multicellular organisms may have appeared as early as 2100 Ma. However, the interpretation of ancient fossils is problematic, and "... some definitions of multicellularity encompass everything from simple bacterial colonies to badgers." Other possible early complex multicellular organisms include a possible 2450 Ma red alga from the Kola Peninsula, 1650 Ma carbonaceous biosignatures in north China, the 1600 Ma Rafatazmia, and a possible 1047 Ma Bangiomorpha red alga from the Canadian Arctic. The earliest fossils widely accepted as complex multicellular organisms date from the Ediacaran Period. 
A very diverse collection of soft-bodied forms is found in a variety of locations worldwide, dating to between 635 and 542 Ma. These are referred to as Ediacaran or Vendian biota. Hard-shelled creatures appeared toward the end of that time span, marking the beginning of the Phanerozoic Eon. By the middle of the following Cambrian Period, a very diverse fauna is recorded in the Burgess Shale, including some which may represent stem groups of modern taxa. The increase in diversity of lifeforms during the early Cambrian is called the Cambrian explosion of life. While land seems to have been devoid of plants and animals, cyanobacteria and other microbes formed prokaryotic mats that covered terrestrial areas. Tracks from an animal with leg-like appendages have been found in what was mud 551 million years ago. Emergence of life The RNA world hypothesis asserts that RNA evolved before coded proteins and DNA genomes. During the Hadean Eon (4,567–4,031 Ma), abundant geothermal microenvironments were present that may have had the potential to support the synthesis and replication of RNA and thus possibly the evolution of a primitive life form. It was shown that porous rock systems comprising heated air-water interfaces could allow ribozyme-catalyzed RNA replication of sense and antisense strands that could be followed by strand dissociation, thus enabling combined synthesis, release and folding of active ribozymes. This primitive RNA replicative system also may have been able to undergo template strand switching during replication (genetic recombination), as is known to occur during the RNA replication of extant coronaviruses. Planetary environment and the oxygen catastrophe Evidence of the details of plate motions and other tectonic activity in the Precambrian is difficult to interpret. It is generally believed that small proto-continents existed before 4280 Ma, and that most of the Earth's landmasses collected into a single supercontinent around 1130 Ma. The supercontinent, known as Rodinia, broke up around 750 Ma. A number of glacial periods have been identified going as far back as the Huronian epoch, roughly 2400–2100 Ma. One of the best studied is the Sturtian-Varangian glaciation, around 850–635 Ma, which may have brought glacial conditions all the way to the equator, resulting in a "Snowball Earth". The atmosphere of the early Earth is not well understood. Most geologists believe it was composed primarily of nitrogen, carbon dioxide, and other relatively inert gases, and was lacking in free oxygen. There is, however, evidence that an oxygen-rich atmosphere has existed since the early Archean. At present, it is still believed that molecular oxygen was not a significant fraction of Earth's atmosphere until after photosynthetic life forms evolved and began to produce it in large quantities as a byproduct of their metabolism. This radical shift from a chemically inert to an oxidizing atmosphere caused an ecological crisis, sometimes called the oxygen catastrophe. At first, oxygen would have quickly combined with other elements in Earth's crust, primarily iron, removing it from the atmosphere. After the supply of oxidizable surfaces ran out, oxygen would have begun to accumulate in the atmosphere, and the modern high-oxygen atmosphere would have developed. Evidence for this lies in older rocks that contain massive banded iron formations that were laid down as iron oxides. 
Subdivisions A terminology has evolved covering the early years of the Earth's existence, as radiometric dating has allowed absolute dates to be assigned to specific formations and features. The Precambrian is divided into three eons: the Hadean (c. 4,600–4,000 Ma), Archean (4,000–2,500 Ma) and Proterozoic (2,500–541 Ma). See Timetable of the Precambrian. Proterozoic: this eon refers to the time from the lower Cambrian boundary, 541 Ma, back through 2,500 Ma. As originally used, it was a synonym for "Precambrian" and hence included everything prior to the Cambrian boundary. The Proterozoic Eon is divided into three eras: the Neoproterozoic, Mesoproterozoic and Paleoproterozoic. Neoproterozoic: The youngest geologic era of the Proterozoic Eon, from the Cambrian Period lower boundary (541 Ma) back to 1,000 Ma. The Neoproterozoic corresponds to Precambrian Z rocks of older North American stratigraphy. Ediacaran: The youngest geologic period within the Neoproterozoic Era. The "2012 Geologic Time Scale" dates it from 635 to 541 Ma. In this period the Ediacaran biota appeared. Cryogenian: The middle period in the Neoproterozoic Era: 850–635 Ma. Tonian: the earliest period of the Neoproterozoic Era: 1,000–850 Ma. Mesoproterozoic: the middle era of the Proterozoic Eon, 1,600–1,000 Ma. Corresponds to "Precambrian Y" rocks of older North American stratigraphy. Paleoproterozoic: oldest era of the Proterozoic Eon, 2,500–1,600 Ma. Corresponds to "Precambrian X" rocks of older North American stratigraphy. Archean Eon: 4,000–2,500 Ma. Hadean Eon: c. 4,600–4,000 Ma. This term was intended originally to cover the time before any preserved rocks were deposited, although some zircon crystals from about 4400 Ma demonstrate the existence of crust in the Hadean Eon. Other records from Hadean time come from the Moon and meteorites. It has been proposed that the Precambrian should be divided into eons and eras that reflect stages of planetary evolution, rather than the current scheme based upon numerical ages. Such a system could rely on events in the stratigraphic record and be demarcated by GSSPs. The Precambrian could be divided into five "natural" eons, characterized as follows: Accretion and differentiation: a period of planetary formation until the giant Moon-forming impact event. Hadean: dominated by heavy bombardment from about 4.51 Ga (possibly including a cool early Earth period) to the end of the Late Heavy Bombardment period. Archean: a period defined by the first crustal formations (the Isua greenstone belt) until the deposition of banded iron formations due to increasing atmospheric oxygen content. Transition: a period of continued banded iron formation until the first continental red beds. Proterozoic: a period of modern plate tectonics until the first animals. Precambrian supercontinents The movement of Earth's plates has caused the formation and break-up of continents over time, including occasional formation of a supercontinent containing most or all of the landmass. The earliest known supercontinent was Vaalbara. It formed from proto-continents and was a supercontinent 3.636 billion years ago. Vaalbara broke up c. 2.845–2.803 Ga ago. The supercontinent Kenorland was formed c. 2.72 Ga ago and then broke up sometime after 2.45–2.1 Ga into the proto-continental cratons called Laurentia, Baltica, Yilgarn and Kalahari. The supercontinent Columbia, or Nuna, formed 2.1–1.8 billion years ago and broke up about 1.3–1.2 billion years ago. 
The supercontinent Rodinia is thought to have formed about 1300–900 Ma, to have included most or all of Earth's continents, and to have broken up into eight continents around 750–600 million years ago.
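The "88%" and "roughly seven-eighths" figures quoted in the overview follow from simple arithmetic on the dates used in this article. The tiny check below assumes round values of 4,600 Ma for the age of the Earth and about 540 Ma for the start of the Cambrian.

```python
# Fraction of geologic time that is Precambrian, using round figures.
age_of_earth_ma = 4600
cambrian_start_ma = 540
fraction = (age_of_earth_ma - cambrian_start_ma) / age_of_earth_ma
print(f"{fraction:.0%}")  # 88%, i.e. roughly seven-eighths of Earth's history
```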
23647
https://en.wikipedia.org/wiki/Polymerase%20chain%20reaction
Polymerase chain reaction
The polymerase chain reaction (PCR) is a method widely used to make millions to billions of copies of a specific DNA sample rapidly, allowing scientists to amplify a very small sample of DNA (or a part of it) sufficiently to enable detailed study. PCR was invented in 1983 by American biochemist Kary Mullis at Cetus Corporation. Mullis and biochemist Michael Smith, who had developed other essential ways of manipulating DNA, were jointly awarded the Nobel Prize in Chemistry in 1993. PCR is fundamental to many of the procedures used in genetic testing and research, including analysis of ancient samples of DNA and identification of infectious agents. Using PCR, copies of very small amounts of DNA sequences are exponentially amplified in a series of cycles of temperature changes. PCR is now a common and often indispensable technique used in medical laboratory research for a broad variety of applications including biomedical research and forensic science. The majority of PCR methods rely on thermal cycling. Thermal cycling exposes reagents to repeated cycles of heating and cooling to permit different temperature-dependent reactions—specifically, DNA melting and enzyme-driven DNA replication. PCR employs two main reagents—primers (short single-stranded DNA fragments known as oligonucleotides, with sequences complementary to the target DNA region) and a thermostable DNA polymerase. In the first step of PCR, the two strands of the DNA double helix are physically separated at a high temperature in a process called nucleic acid denaturation. In the second step, the temperature is lowered and the primers bind to the complementary sequences of DNA. The two DNA strands then become templates for DNA polymerase to enzymatically assemble a new DNA strand from free nucleotides, the building blocks of DNA. As PCR progresses, the DNA generated is itself used as a template for replication, setting in motion a chain reaction in which the original DNA template is exponentially amplified. Almost all PCR applications employ a heat-stable DNA polymerase, such as Taq polymerase, an enzyme originally isolated from the thermophilic bacterium Thermus aquaticus. If the polymerase used were heat-susceptible, it would denature under the high temperatures of the denaturation step. Before the use of Taq polymerase, DNA polymerase had to be manually added every cycle, which was a tedious and costly process. Applications of the technique include DNA cloning for sequencing, gene cloning and manipulation, gene mutagenesis; construction of DNA-based phylogenies, or functional analysis of genes; diagnosis and monitoring of genetic disorders; amplification of ancient DNA; analysis of genetic fingerprints for DNA profiling (for example, in forensic science and parentage testing); and detection of pathogens in nucleic acid tests for the diagnosis of infectious diseases. Principles PCR amplifies a specific region of a DNA strand (the DNA target). Most PCR methods amplify DNA fragments of between 0.1 and 10 kilo base pairs (kbp) in length, although some techniques allow for amplification of fragments up to 40 kbp. The amount of amplified product is determined by the available substrates in the reaction, which become limiting as the reaction progresses. 
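As a concrete illustration of the primer idea described above, the short sketch below builds a hypothetical forward and reverse primer for a made-up target sequence: the forward primer matches the start of the sense strand, while the reverse primer is the reverse complement of the sense strand's 3' end, so the two primers bracket the region to be amplified. The 20-nucleotide primer length and the sequence itself are arbitrary assumptions for illustration, not a primer-design tool.

```python
# Minimal sketch: two primers flanking a target region on double-stranded DNA.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

# Hypothetical target region (sense strand, 5' -> 3'), for illustration only.
target = ("ATGGCGTACGTTAGCCTAGCTAGGCTAACGTTAGCATCGGATCCTAGGCTA"
          "GGATCGTACGATCGTTAGCCTAGGCATCGATCGGATCCTAGCATGCATGCA")

forward_primer = target[:20]                       # anneals to the antisense strand
reverse_primer = reverse_complement(target[-20:])  # anneals to the sense strand

print(forward_primer)
print(reverse_primer)
```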
A basic PCR set-up requires several components and reagents, including: a DNA template that contains the DNA target region to amplify; a DNA polymerase, an enzyme that polymerizes new DNA strands (heat-resistant Taq polymerase is especially common, as it is more likely to remain intact during the high-temperature DNA denaturation process); two DNA primers that are complementary to the 3' (three prime) ends of each of the sense and anti-sense strands of the DNA target (DNA polymerase can only bind to and elongate from a double-stranded region of DNA; without primers, there is no double-stranded initiation site at which the polymerase can bind); specific primers that are complementary to the DNA target region are selected beforehand, and are often custom-made in a laboratory or purchased from commercial biochemical suppliers; deoxynucleoside triphosphates, or dNTPs (sometimes called "deoxynucleotide triphosphates"; nucleotides containing triphosphate groups), the building blocks from which the DNA polymerase synthesizes a new DNA strand; a buffer solution providing a suitable chemical environment for optimum activity and stability of the DNA polymerase; bivalent cations, typically magnesium (Mg) or manganese (Mn) ions (Mg2+ is the most common, but Mn2+ can be used for PCR-mediated DNA mutagenesis, as a higher Mn2+ concentration increases the error rate during DNA synthesis); and monovalent cations, typically potassium (K) ions. The reaction is commonly carried out in a volume of 10–200 μL in small reaction tubes (0.2–0.5 mL volumes) in a thermal cycler. The thermal cycler heats and cools the reaction tubes to achieve the temperatures required at each step of the reaction (see below). Many modern thermal cyclers make use of a Peltier device, which permits both heating and cooling of the block holding the PCR tubes simply by reversing the device's electric current. Thin-walled reaction tubes permit favorable thermal conductivity to allow for rapid thermal equilibrium. Most thermal cyclers have heated lids to prevent condensation at the top of the reaction tube. Older thermal cyclers lacking a heated lid require a layer of oil on top of the reaction mixture or a ball of wax inside the tube. Procedure Typically, PCR consists of a series of 20–40 repeated temperature changes, called thermal cycles, with each cycle commonly consisting of two or three discrete temperature steps. The cycling is often preceded by a single temperature step at a very high temperature (>90 °C), and followed by one hold at the end for final product extension or brief storage. The temperatures used and the length of time they are applied in each cycle depend on a variety of parameters, including the enzyme used for DNA synthesis, the concentration of bivalent ions and dNTPs in the reaction, and the melting temperature (Tm) of the primers. The individual steps common to most PCR methods are as follows: Initialization: This step is only required for DNA polymerases that require heat activation by hot-start PCR. It consists of heating the reaction chamber to a temperature of 94–96 °C, or 98 °C if extremely thermostable polymerases are used, which is then held for 1–10 minutes. Denaturation: This step is the first regular cycling event and consists of heating the reaction chamber to 94–98 °C for 20–30 seconds. This causes DNA melting, or denaturation, of the double-stranded DNA template by breaking the hydrogen bonds between complementary bases, yielding two single-stranded DNA molecules. 
Annealing: In the next step, the reaction temperature is lowered to 50–65 °C for 20–40 seconds, allowing annealing of the primers to each of the single-stranded DNA templates. Two different primers are typically included in the reaction mixture: one for each of the two single-stranded complements containing the target region. The primers are single-stranded sequences themselves, but are much shorter than the length of the target region, complementing only very short sequences at the 3' end of each strand. It is critical to determine a proper temperature for the annealing step because efficiency and specificity are strongly affected by the annealing temperature. This temperature must be low enough to allow for hybridization of the primer to the strand, but high enough for the hybridization to be specific, i.e., the primer should bind only to a perfectly complementary part of the strand, and nowhere else. If the temperature is too low, the primer may bind imperfectly. If it is too high, the primer may not bind at all. A typical annealing temperature is about 3–5 °C below the Tm of the primers used. Stable hydrogen bonds between complementary bases are formed only when the primer sequence very closely matches the template sequence. During this step, the polymerase binds to the primer-template hybrid and begins DNA formation. Extension/elongation: The temperature at this step depends on the DNA polymerase used; the optimum activity temperature for Taq polymerase is approximately 75–80 °C, though a temperature of 72 °C is commonly used with this enzyme. In this step, the DNA polymerase synthesizes a new DNA strand complementary to the DNA template strand by adding free dNTPs from the reaction mixture that are complementary to the template in the 5'-to-3' direction, condensing the 5'-phosphate group of the dNTPs with the 3'-hydroxy group at the end of the nascent (elongating) DNA strand. The precise time required for elongation depends both on the DNA polymerase used and on the length of the DNA target region to amplify. As a rule of thumb, at their optimal temperature, most DNA polymerases polymerize a thousand bases per minute. Under optimal conditions (i.e., if there are no limitations due to limiting substrates or reagents), at each extension/elongation step, the number of DNA target sequences is doubled. With each successive cycle, the original template strands plus all newly generated strands become template strands for the next round of elongation, leading to exponential (geometric) amplification of the specific DNA target region. The processes of denaturation, annealing and elongation constitute a single cycle. Multiple cycles are required to amplify the DNA target to millions of copies. The formula used to calculate the number of DNA copies formed after a given number of cycles is 2ⁿ, where n is the number of cycles. Thus, a reaction set for 30 cycles results in 2³⁰, or 1,073,741,824 (about one billion), copies of the original double-stranded DNA target region. Final elongation: This single step is optional, but is performed at a temperature of 70–74 °C (the temperature range required for optimal activity of most polymerases used in PCR) for 5–15 minutes after the last PCR cycle to ensure that any remaining single-stranded DNA is fully elongated. Final hold: The final step cools the reaction chamber to 4–15 °C for an indefinite time, and may be employed for short-term storage of the PCR products. 
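The cycling scheme and the 2ⁿ copy count described above can be summarized in a short sketch. The temperatures and times below are typical textbook values assumed for illustration (a Taq-style enzyme, primers annealing a few degrees below their Tm, and roughly one minute of extension per thousand bases); real protocols are tuned to the specific enzyme, primers and target.

```python
# Illustrative three-step PCR protocol (temperature in °C, hold time in seconds).
protocol = {
    "initial_denaturation": (95, 120),
    "cycles": 30,
    "denaturation": (95, 30),
    "annealing": (58, 30),      # typically ~3–5 °C below the primer Tm
    "extension": (72, 60),      # ~1 kb target at roughly 1 kb per minute
    "final_extension": (72, 300),
    "hold": (4, None),          # indefinite hold for short-term storage
}

def ideal_copies(n_cycles: int, starting_copies: int = 1) -> int:
    """Copy number after n cycles at 100% efficiency: starting * 2**n."""
    return starting_copies * 2 ** n_cycles

print(ideal_copies(protocol["cycles"]))  # 1073741824, i.e. about one billion
```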
To check whether the PCR successfully generated the anticipated DNA target region (also sometimes referred to as the amplimer or amplicon), agarose gel electrophoresis may be employed for size separation of the PCR products. The size of the PCR products is determined by comparison with a DNA ladder, a molecular weight marker which contains DNA fragments of known sizes, which runs on the gel alongside the PCR products. Stages As with other chemical reactions, the reaction rate and efficiency of PCR are affected by limiting factors. Thus, the entire PCR process can further be divided into three stages based on reaction progress: Exponential amplification: At every cycle, the amount of product is doubled (assuming 100% reaction efficiency). After 30 cycles, a single copy of DNA can be increased up to 1,000,000,000 (one billion) copies. In a sense, then, the replication of a discrete strand of DNA is being manipulated in a tube under controlled conditions. The reaction is very sensitive: only minute quantities of DNA need be present. Leveling off stage: The reaction slows as the DNA polymerase loses activity and as consumption of reagents, such as dNTPs and primers, causes them to become limiting. Plateau: No more product accumulates due to exhaustion of reagents and enzyme. Optimization In practice, PCR can fail for various reasons, such as sensitivity or contamination. Contamination with extraneous DNA can lead to spurious products and is addressed with lab protocols and procedures that separate pre-PCR mixtures from potential DNA contaminants. For instance, if DNA from a crime scene is analyzed, a single DNA molecule from lab personnel could be amplified and misguide the investigation. Hence the PCR setup area is separated from areas used for the analysis or purification of other PCR products, disposable plasticware is used, and the work surface between reaction setups needs to be thoroughly cleaned. Specificity can be adjusted by experimental conditions so that no spurious products are generated. Primer-design techniques are important in improving PCR product yield and in avoiding the formation of unspecific products. The usage of alternate buffer components or polymerase enzymes can help with amplification of long or otherwise problematic regions of DNA. For instance, Q5 polymerase is said to be ~280 times less error-prone than Taq polymerase. Adjusting the running parameters (e.g. temperature and duration of cycles) or adding reagents, such as formamide, may increase the specificity and yield of PCR. Computer simulations of theoretical PCR results (Electronic PCR) may be performed to assist in primer design. Applications Selective DNA isolation PCR allows isolation of DNA fragments from genomic DNA by selective amplification of a specific region of DNA. This use of PCR augments many methods, such as generating hybridization probes for Southern or northern hybridization and DNA cloning, which require larger amounts of DNA representing a specific DNA region. PCR supplies these techniques with high amounts of pure DNA, enabling analysis of DNA samples even from very small amounts of starting material. Other applications of PCR include DNA sequencing to determine unknown PCR-amplified sequences, in which one of the amplification primers may be used in Sanger sequencing, and isolation of a DNA sequence to expedite recombinant DNA technologies involving the insertion of a DNA sequence into a plasmid, phage, or cosmid (depending on size) or the genetic material of another organism. 
Bacterial colonies (such as E. coli) can be rapidly screened by PCR for correct DNA vector constructs. PCR may also be used for genetic fingerprinting, a forensic technique used to identify a person or organism by comparing experimental DNAs through different PCR-based methods. Some PCR fingerprint methods have high discriminative power and can be used to identify genetic relationships between individuals, such as parent–child or sibling relationships, and are used in paternity testing. This technique may also be used to determine evolutionary relationships among organisms when certain molecular clocks are used (e.g. the 16S rRNA and recA genes of microorganisms). Amplification and quantification of DNA Because PCR amplifies the regions of DNA that it targets, PCR can be used to analyze extremely small amounts of sample. This is often critical for forensic analysis, when only a trace amount of DNA is available as evidence. PCR may also be used in the analysis of ancient DNA that is tens of thousands of years old. These PCR-based techniques have been successfully used on animals, such as a forty-thousand-year-old mammoth, and also on human DNA, in applications ranging from the analysis of Egyptian mummies to the identification of a Russian tsar and the body of English king Richard III. Quantitative PCR or Real Time PCR (qPCR, not to be confused with RT-PCR) methods allow the estimation of the amount of a given sequence present in a sample—a technique often applied to quantitatively determine levels of gene expression. Quantitative PCR is an established tool for DNA quantification that measures the accumulation of DNA product after each round of PCR amplification. qPCR allows the quantification and detection of a specific DNA sequence in real time since it measures concentration while the synthesis process is taking place. There are two methods for simultaneous detection and quantification. The first method consists of using fluorescent dyes that are retained nonspecifically in between the double strands. The second method involves fluorescently labeled probes that are specific for particular sequences. Detection of DNA using these methods occurs only after hybridization of the probes with their complementary DNA takes place. An interesting technique combination is real-time PCR and reverse transcription. This sophisticated technique, called RT-qPCR, allows for the quantification of a small quantity of RNA. Through this combined technique, mRNA is converted to cDNA, which is further quantified using qPCR. This technique lowers the possibility of error at the end point of PCR, increasing chances for detection of genes associated with genetic diseases such as cancer. Laboratories use RT-qPCR for the purpose of sensitively measuring gene regulation. The mathematical foundations for the reliable quantification of the PCR and RT-qPCR facilitate the implementation of accurate fitting procedures of experimental data in research, medical, diagnostic and infectious disease applications. Medical and diagnostic applications Prospective parents can be tested for being genetic carriers, or their children might be tested for actually being affected by a disease. DNA samples for prenatal testing can be obtained by amniocentesis, chorionic villus sampling, or even by the analysis of rare fetal cells circulating in the mother's bloodstream. PCR analysis is also essential to preimplantation genetic diagnosis, where individual cells of a developing embryo are tested for mutations. 
PCR can also be used as part of a sensitive test for tissue typing, vital to organ transplantation. There is even a proposal to replace the traditional antibody-based tests for blood type with PCR-based tests. Many forms of cancer involve alterations to oncogenes. By using PCR-based tests to study these mutations, therapy regimens can sometimes be individually customized to a patient. PCR permits early diagnosis of malignant diseases such as leukemia and lymphomas, which is currently the most developed area of PCR use in cancer research and is already being used routinely. PCR assays can be performed directly on genomic DNA samples to detect translocation-specific malignant cells at a sensitivity that is at least 10,000-fold higher than that of other methods. PCR is very useful in the medical field since it allows for the isolation and amplification of tumor suppressors. Quantitative PCR, for example, can be used to quantify and analyze single cells, as well as recognize DNA, mRNA and protein confirmations and combinations. Infectious disease applications PCR allows for rapid and highly specific diagnosis of infectious diseases, including those caused by bacteria or viruses. PCR also permits identification of non-cultivatable or slow-growing microorganisms such as mycobacteria, anaerobic bacteria, or viruses from tissue culture assays and animal models. The basis for PCR diagnostic applications in microbiology is the detection of infectious agents and the discrimination of non-pathogenic from pathogenic strains by virtue of specific genes. Characterization and detection of infectious disease organisms have been revolutionized by PCR in the following ways: The human immunodeficiency virus (or HIV) is a difficult target to find and eradicate. The earliest tests for infection relied on the presence of antibodies to the virus circulating in the bloodstream. However, antibodies don't appear until many weeks after infection, maternal antibodies mask the infection of a newborn, and therapeutic agents to fight the infection don't affect the antibodies. PCR tests have been developed that can detect as little as one viral genome among the DNA of over 50,000 host cells. Infections can be detected earlier, donated blood can be screened directly for the virus, newborns can be immediately tested for infection, and the effects of antiviral treatments can be quantified. Some disease organisms, such as that which causes tuberculosis, are difficult to sample from patients and slow to grow in the laboratory. PCR-based tests have allowed detection of small numbers of disease organisms (both live and dead) in convenient samples. Detailed genetic analysis can also be used to detect antibiotic resistance, allowing immediate and effective therapy. The effects of therapy can also be immediately evaluated. The spread of a disease organism through populations of domestic or wild animals can be monitored by PCR testing. In many cases, the appearance of new virulent sub-types can be detected and monitored. The sub-types of an organism that were responsible for earlier epidemics can also be determined by PCR analysis. Viral DNA can be detected by PCR. The primers used must be specific to the targeted sequences in the DNA of a virus, and PCR can be used for diagnostic analyses or DNA sequencing of the viral genome. The high sensitivity of PCR permits virus detection soon after infection and even before the onset of disease. Such early detection may give physicians a significant lead time in treatment. 
The amount of virus ("viral load") in a patient can also be quantified by PCR-based DNA quantitation techniques (see below). A variant of PCR (RT-PCR) is used for detecting viral RNA rather than DNA: in this test the enzyme reverse transcriptase is used to generate a DNA sequence which matches the viral RNA; this DNA is then amplified as per the usual PCR method. RT-PCR is widely used to detect the SARS-CoV-2 viral genome. Diseases such as pertussis (or whooping cough) are caused by the bacterium Bordetella pertussis. Infection with this bacterium is marked by a serious acute respiratory illness that affects various animals and humans and has led to the deaths of many young children. The pertussis toxin is a protein exotoxin that binds to cell receptors by two dimers and reacts with different cell types such as T lymphocytes, which play a role in cellular immunity. PCR is an important testing tool that can detect sequences within the gene for the pertussis toxin. Because PCR has a high sensitivity for the toxin and a rapid turnaround time, it is very efficient for diagnosing pertussis when compared to culture. Forensic applications The development of PCR-based genetic (or DNA) fingerprinting protocols has seen widespread application in forensics: In its most discriminating form, genetic fingerprinting can uniquely discriminate any one person from the entire population of the world. Minute samples of DNA can be isolated from a crime scene, and compared to that from suspects, or from a DNA database of earlier evidence or convicts. Simpler versions of these tests are often used to rapidly rule out suspects during a criminal investigation. Evidence from decades-old crimes can be tested, confirming or exonerating the people originally convicted. Forensic DNA typing has been an effective way of identifying or exonerating criminal suspects due to analysis of evidence discovered at a crime scene. The human genome has many repetitive regions that can be found within gene sequences or in non-coding regions of the genome. Specifically, up to 40% of human DNA is repetitive. There are two distinct categories for these repetitive, non-coding regions in the genome. The first category is called variable number tandem repeats (VNTR), which are 10–100 base pairs long, and the second category is called short tandem repeats (STR); these consist of repeated 2–10 base pair sections. PCR is used to amplify several well-known VNTRs and STRs using primers that flank each of the repetitive regions. The sizes of the fragments obtained from any individual for each of the STRs will indicate which alleles are present. By analyzing several STRs for an individual, a set of alleles for each person will be found that statistically is likely to be unique. Researchers have identified the complete sequence of the human genome. This sequence can be easily accessed through the NCBI website and is used in many real-life applications. For example, the FBI has compiled a set of DNA marker sites used for identification, and these are called the Combined DNA Index System (CODIS) DNA database. Using this database enables statistical analysis to be used to determine the probability that a DNA sample will match. PCR is a very powerful and significant analytical tool to use for forensic DNA typing because researchers only need a very small amount of the target DNA to be used for analysis. For example, a single human hair with attached hair follicle has enough DNA to conduct the analysis. 
Similarly, a few sperm, skin samples from under the fingernails, or a small amount of blood can provide enough DNA for conclusive analysis. Less discriminating forms of DNA fingerprinting can help in DNA paternity testing, where an individual is matched with their close relatives. DNA from unidentified human remains can be tested and compared with that from possible parents, siblings, or children. Similar testing can be used to confirm the biological parents of an adopted (or kidnapped) child. The actual biological father of a newborn can also be confirmed (or ruled out). The PCR AMGX/AMGY design facilitates the amplification of DNA sequences from a minuscule amount of genomic material and can also be used for real-time sex determination from forensic bone samples. This provides a powerful and effective way to determine sex in forensic cases and ancient specimens. Research applications PCR has been applied to many areas of research in molecular genetics: PCR allows rapid production of short pieces of DNA, even when not more than the sequence of the two primers is known. This ability of PCR augments many methods, such as generating hybridization probes for Southern or northern blot hybridization. PCR supplies these techniques with large amounts of pure DNA, sometimes as a single strand, enabling analysis even from very small amounts of starting material. The task of DNA sequencing can also be assisted by PCR. Known segments of DNA can easily be produced from a patient with a genetic disease mutation. Modifications to the amplification technique can extract segments from a completely unknown genome, or can generate just a single strand of an area of interest. PCR has numerous applications to the more traditional process of DNA cloning. It can extract segments for insertion into a vector from a larger genome, which may be available only in small quantities. Using a single set of 'vector primers', it can also analyze or extract fragments that have already been inserted into vectors. Some alterations to the PCR protocol can generate mutations (general or site-directed) of an inserted fragment. The use of sequence-tagged sites is a process in which PCR serves as an indicator that a particular segment of a genome is present in a particular clone. The Human Genome Project found this application vital to mapping the cosmid clones they were sequencing, and to coordinating the results from different laboratories. An application of PCR is the phylogenetic analysis of DNA from ancient sources, such as that found in the recovered bones of Neanderthals, from frozen tissues of mammoths, or from the brain tissue of Egyptian mummies. In some cases the highly degraded DNA from these sources might be reassembled during the early stages of amplification. A common application of PCR is the study of patterns of gene expression. Tissues (or even individual cells) can be analyzed at different stages to see which genes have become active, or which have been switched off. This application can also use quantitative PCR to quantitate the actual levels of expression. The ability of PCR to simultaneously amplify several loci from individual sperm has greatly enhanced the more traditional task of genetic mapping by studying chromosomal crossovers after meiosis. Rare crossover events between very close loci have been directly observed by analyzing thousands of individual sperm. 
Similarly, unusual deletions, insertions, translocations, or inversions can be analyzed, all without having to wait (or pay) for the long and laborious processes of fertilization, embryogenesis, etc. Site-directed mutagenesis: PCR can be used to create mutant genes with mutations chosen by scientists at will. These mutations can be chosen in order to understand how proteins accomplish their functions, and to change or improve protein function. Advantages PCR has a number of advantages. It is fairly simple to understand and to use, and produces results rapidly. The technique is highly sensitive, with the potential to produce millions to billions of copies of a specific product for sequencing, cloning, and analysis. qRT-PCR shares the same advantages as PCR, with the added advantage of quantification of the synthesized product. It is therefore useful for analyzing alterations in gene expression levels in tumors, microbes, or other disease states. PCR is a very powerful and practical research tool. PCR is being used to determine the sequences of the previously unknown agents responsible for many diseases. The technique can help identify the sequence of previously unknown viruses related to those already known and thus give a better understanding of the disease itself. If the procedure can be further simplified and sensitive non-radiometric detection systems can be developed, PCR will assume a prominent place in the clinical laboratory for years to come. Limitations One major limitation of PCR is that prior information about the target sequence is necessary in order to generate the primers that will allow its selective amplification. This means that, typically, PCR users must know the precise sequence(s) upstream of the target region on each of the two single-stranded templates in order to ensure that the DNA polymerase properly binds to the primer-template hybrids and subsequently generates the entire target region during DNA synthesis. Like all enzymes, DNA polymerases are also prone to error, which in turn causes mutations in the PCR fragments that are generated. Another limitation of PCR is that even the smallest amount of contaminating DNA can be amplified, resulting in misleading or ambiguous results. To minimize the chance of contamination, investigators should reserve separate rooms for reagent preparation, the PCR, and analysis of product. Reagents should be dispensed into single-use aliquots. Pipettors with disposable plungers and extra-long pipette tips should be routinely used. It is also recommended that the laboratory set-up follow a unidirectional workflow. No materials or reagents used in the PCR and analysis rooms should ever be taken into the PCR preparation room without thorough decontamination. Environmental samples that contain humic acids may inhibit PCR amplification and lead to inaccurate results. Variations Allele-specific PCR or the amplification refractory mutation system (ARMS): a diagnostic or cloning technique based on single-nucleotide variations (SNVs, not to be confused with SNPs), i.e. single-base differences in a patient. Any mutation involving a single base change can be detected by this system. It requires prior knowledge of a DNA sequence, including differences between alleles, and uses primers whose 3' ends encompass the SNV (a base pair buffer around the SNV is usually incorporated). 
PCR amplification under stringent conditions is much less efficient in the presence of a mismatch between template and primer, so successful amplification with an SNP-specific primer signals presence of the specific SNP or small deletions in a sequence. See SNP genotyping for more information. Assembly PCR or Polymerase Cycling Assembly (PCA): artificial synthesis of long DNA sequences by performing PCR on a pool of long oligonucleotides with short overlapping segments. The oligonucleotides alternate between sense and antisense directions, and the overlapping segments determine the order of the PCR fragments, thereby selectively producing the final long DNA product. Asymmetric PCR: preferentially amplifies one DNA strand in a double-stranded DNA template. It is used in sequencing and hybridization probing where amplification of only one of the two complementary strands is required. PCR is carried out as usual, but with a great excess of the primer for the strand targeted for amplification. Because of the slow (arithmetic) amplification later in the reaction after the limiting primer has been used up, extra cycles of PCR are required. A recent modification on this process, known as Linear-After-The-Exponential-PCR (LATE-PCR), uses a limiting primer with a higher melting temperature (Tm) than the excess primer to maintain reaction efficiency as the limiting primer concentration decreases mid-reaction. Convective PCR: a pseudo-isothermal way of performing PCR. Instead of repeatedly heating and cooling the PCR mixture, the solution is subjected to a thermal gradient. The resulting thermal instability driven convective flow automatically shuffles the PCR reagents from the hot and cold regions repeatedly enabling PCR. Parameters such as thermal boundary conditions and geometry of the PCR enclosure can be optimized to yield robust and rapid PCR by harnessing the emergence of chaotic flow fields. Such convective flow PCR setup significantly reduces device power requirement and operation time. Dial-out PCR: a highly parallel method for retrieving accurate DNA molecules for gene synthesis. A complex library of DNA molecules is modified with unique flanking tags before massively parallel sequencing. Tag-directed primers then enable the retrieval of molecules with desired sequences by PCR. Digital PCR (dPCR): used to measure the quantity of a target DNA sequence in a DNA sample. The DNA sample is highly diluted so that after running many PCRs in parallel, some of them do not receive a single molecule of the target DNA. The target DNA concentration is calculated using the proportion of negative outcomes. Hence the name 'digital PCR'. Helicase-dependent amplification: similar to traditional PCR, but uses a constant temperature rather than cycling through denaturation and annealing/extension cycles. DNA helicase, an enzyme that unwinds DNA, is used in place of thermal denaturation. Hot start PCR: a technique that reduces non-specific amplification during the initial set up stages of the PCR. It may be performed manually by heating the reaction components to the denaturation temperature (e.g., 95 °C) before adding the polymerase. Specialized enzyme systems have been developed that inhibit the polymerase's activity at ambient temperature, either by the binding of an antibody or by the presence of covalently bound inhibitors that dissociate only after a high-temperature activation step. 
Hot-start/cold-finish PCR is achieved with new hybrid polymerases that are inactive at ambient temperature and are instantly activated at elongation temperature. In silico PCR (digital PCR, virtual PCR, electronic PCR, e-PCR) refers to computational tools used to calculate theoretical polymerase chain reaction results using a given set of primers (probes) to amplify DNA sequences from a sequenced genome or transcriptome. In silico PCR was proposed as an educational tool for molecular biology. Intersequence-specific PCR (ISSR): a PCR method for DNA fingerprinting that amplifies regions between simple sequence repeats to produce a unique fingerprint of amplified fragment lengths. Inverse PCR: commonly used to identify the flanking sequences around genomic inserts. It involves a series of DNA digestions and self-ligation, resulting in known sequences at either end of the unknown sequence. Ligation-mediated PCR: uses small DNA linkers ligated to the DNA of interest and multiple primers annealing to the DNA linkers; it has been used for DNA sequencing, genome walking, and DNA footprinting. Methylation-specific PCR (MSP): developed by Stephen Baylin and James G. Herman at the Johns Hopkins School of Medicine; it is used to detect methylation of CpG islands in genomic DNA. DNA is first treated with sodium bisulfite, which converts unmethylated cytosine bases to uracil, which is recognized by PCR primers as thymine. Two PCRs are then carried out on the modified DNA, using primer sets identical except at any CpG islands within the primer sequences. At these points, one primer set recognizes DNA with cytosines to amplify methylated DNA, and one set recognizes DNA with uracil or thymine to amplify unmethylated DNA. MSP using qPCR can also be performed to obtain quantitative rather than qualitative information about methylation. Miniprimer PCR: uses a thermostable polymerase (S-Tbr) that can extend from short primers ("smalligos") as short as 9 or 10 nucleotides. This method permits PCR targeting to smaller primer binding regions, and is used to amplify conserved DNA sequences, such as the 16S (or eukaryotic 18S) rRNA gene. Multiplex ligation-dependent probe amplification (MLPA): permits amplifying multiple targets with a single primer pair, thus avoiding the resolution limitations of multiplex PCR (see below). Multiplex-PCR: consists of multiple primer sets within a single PCR mixture to produce amplicons of varying sizes that are specific to different DNA sequences. By targeting multiple genes at once, additional information may be gained from a single test run that would otherwise require several times the reagents and more time to perform. Annealing temperatures for each of the primer sets must be optimized to work correctly within a single reaction, and amplicon sizes must differ enough in base-pair length to form distinct bands when visualized by gel electrophoresis. Nanoparticle-assisted PCR (nanoPCR): some nanoparticles (NPs) can enhance the efficiency of PCR (hence the name nanoPCR), and some can even outperform the original PCR enhancers. It was reported that quantum dots (QDs) can improve PCR specificity and efficiency. Single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs) are efficient in enhancing the amplification of long PCR. Carbon nanopowder (CNP) can improve the efficiency of repeated PCR and long PCR, while zinc oxide, titanium dioxide and Ag NPs were found to increase the PCR yield. 
Previous data indicated that non-metallic NPs retained acceptable amplification fidelity. Given that many NPs are capable of enhancing PCR efficiency, there is likely to be great potential for nanoPCR technology improvements and product development. Nested PCR: increases the specificity of DNA amplification by reducing background due to non-specific amplification of DNA. Two sets of primers are used in two successive PCRs. In the first reaction, one pair of primers is used to generate DNA products, which, besides the intended target, may still consist of non-specifically amplified DNA fragments. The product(s) are then used in a second PCR with a set of primers whose binding sites are completely or partially different from, and located 3' of, each of the primers used in the first reaction. Nested PCR is often more successful in specifically amplifying long DNA fragments than conventional PCR, but it requires more detailed knowledge of the target sequences. Overlap-extension PCR or splicing by overlap extension (SOEing): a genetic engineering technique that is used to splice together two or more DNA fragments that contain complementary sequences. It is used to join DNA pieces containing genes, regulatory sequences, or mutations; the technique enables creation of specific and long DNA constructs. It can also introduce deletions, insertions or point mutations into a DNA sequence. PAN-AC: uses isothermal conditions for amplification, and may be used in living cells. PAN-PCR: a computational method for designing bacterium typing assays based on whole genome sequence data. Quantitative PCR (qPCR): used to measure the quantity of a target sequence (commonly in real time). It quantitatively measures starting amounts of DNA, cDNA, or RNA. Quantitative PCR is commonly used to determine whether a DNA sequence is present in a sample and the number of its copies in the sample. Quantitative PCR has a very high degree of precision. Quantitative PCR methods use fluorescent dyes, such as SYBR Green or EvaGreen, or fluorophore-containing DNA probes, such as TaqMan, to measure the amount of amplified product in real time. It is also sometimes abbreviated to RT-PCR (real-time PCR), but this abbreviation should be used only for reverse transcription PCR; qPCR is the appropriate abbreviation for quantitative (real-time) PCR. Reverse Complement PCR (RC-PCR): allows functional domains or sequences of choice to be appended independently to either end of the generated amplicon in a single closed-tube reaction. This method generates target-specific primers within the reaction by the interaction of universal primers (which contain the desired sequences or domains to be appended) and RC probes. Reverse Transcription PCR (RT-PCR): for amplifying DNA from RNA. Reverse transcriptase transcribes RNA into cDNA, which is then amplified by PCR. RT-PCR is widely used in expression profiling, to determine the expression of a gene or to identify the sequence of an RNA transcript, including transcription start and termination sites. If the genomic DNA sequence of a gene is known, RT-PCR can be used to map the location of exons and introns in the gene. The 5' end of a gene (corresponding to the transcription start site) is typically identified by RACE-PCR (Rapid Amplification of cDNA Ends). RNase H-dependent PCR (rhPCR): a modification of PCR that utilizes primers with a 3' extension block that can be removed by a thermostable RNase HII enzyme. 
This system reduces primer-dimers and allows for multiplexed reactions to be performed with higher numbers of primers. Single specific primer-PCR (SSP-PCR): allows the amplification of double-stranded DNA even when the sequence information is available at one end only. This method permits amplification of genes for which only a partial sequence information is available, and allows unidirectional genome walking from known into unknown regions of the chromosome. Solid Phase PCR: encompasses multiple meanings, including Polony Amplification (where PCR colonies are derived in a gel matrix, for example), Bridge PCR (primers are covalently linked to a solid-support surface), conventional Solid Phase PCR (where Asymmetric PCR is applied in the presence of solid support bearing primer with sequence matching one of the aqueous primers) and Enhanced Solid Phase PCR (where conventional Solid Phase PCR can be improved by employing high Tm and nested solid support primer with optional application of a thermal 'step' to favour solid support priming). Suicide PCR: typically used in paleogenetics or other studies where avoiding false positives and ensuring the specificity of the amplified fragment is the highest priority. It was originally described in a study to verify the presence of the microbe Yersinia pestis in dental samples obtained from 14th Century graves of people supposedly killed by the plague during the medieval Black Death epidemic. The method prescribes the use of any primer combination only once in a PCR (hence the term "suicide"), which should never have been used in any positive control PCR reaction, and the primers should always target a genomic region never amplified before in the lab using this or any other set of primers. This ensures that no contaminating DNA from previous PCR reactions is present in the lab, which could otherwise generate false positives. Thermal asymmetric interlaced PCR (TAIL-PCR): for isolation of an unknown sequence flanking a known sequence. Within the known sequence, TAIL-PCR uses a nested pair of primers with differing annealing temperatures; a degenerate primer is used to amplify in the other direction from the unknown sequence. Touchdown PCR (Step-down PCR): a variant of PCR that aims to reduce nonspecific background by gradually lowering the annealing temperature as PCR cycling progresses. The annealing temperature at the initial cycles is usually a few degrees (3–5 °C) above the Tm of the primers used, while at the later cycles, it is a few degrees (3–5 °C) below the primer Tm. The higher temperatures give greater specificity for primer binding, and the lower temperatures permit more efficient amplification from the specific products formed during the initial cycles. Universal Fast Walking: for genome walking and genetic fingerprinting using a more specific 'two-sided' PCR than conventional 'one-sided' approaches (using only one gene-specific primer and one general primer—which can lead to artefactual 'noise') by virtue of a mechanism involving lariat structure formation. Streamlined derivatives of UFW are LaNe RAGE (lariat-dependent nested PCR for rapid amplification of genomic DNA ends), 5'RACE LaNe and 3'RACE LaNe. History The heat-resistant enzymes that are a key component in polymerase chain reaction were discovered in the 1960s as a product of a microbial life form that lived in the superheated waters of Yellowstone's Mushroom Spring. A 1971 paper in the Journal of Molecular Biology by Kjell Kleppe and co-workers in the laboratory of H. 
Gobind Khorana first described a method of using an enzymatic assay to replicate a short DNA template with primers in vitro. However, this early manifestation of the basic PCR principle did not receive much attention at the time, and the invention of the polymerase chain reaction in 1983 is generally credited to Kary Mullis. When Mullis developed the PCR in 1983, he was working in Emeryville, California, for Cetus Corporation, one of the first biotechnology companies, where he was responsible for synthesizing short chains of DNA. Mullis has written that he conceived the idea for PCR while cruising along the Pacific Coast Highway one night in his car. He was playing in his mind with a new way of analyzing changes (mutations) in DNA when he realized that he had instead invented a method of amplifying any DNA region through repeated cycles of duplication driven by DNA polymerase. In Scientific American, Mullis summarized the procedure: "Beginning with a single molecule of the genetic material DNA, the PCR can generate 100 billion similar molecules in an afternoon. The reaction is easy to execute. It requires no more than a test tube, a few simple reagents, and a source of heat." DNA fingerprinting was first used for paternity testing in 1988. Mullis has credited his use of LSD as integral to his development of PCR: "Would I have invented PCR if I hadn't taken LSD? I seriously doubt it. I could sit on a DNA molecule and watch the polymers go by. I learnt that partly on psychedelic drugs." Mullis and biochemist Michael Smith, who had developed other essential ways of manipulating DNA, were jointly awarded the Nobel Prize in Chemistry in 1993, seven years after Mullis and his colleagues at Cetus first put his proposal into practice. Mullis's 1985 paper with R. K. Saiki and H. A. Erlich, "Enzymatic Amplification of β-globin Genomic Sequences and Restriction Site Analysis for Diagnosis of Sickle Cell Anemia"—the polymerase chain reaction invention (PCR)—was honored by a Citation for Chemical Breakthrough Award from the Division of History of Chemistry of the American Chemical Society in 2017. At the core of the PCR method is the use of a suitable DNA polymerase able to withstand the high temperatures (above about 90 °C) required for separation of the two DNA strands in the DNA double helix after each replication cycle. The DNA polymerases initially employed for in vitro experiments presaging PCR were unable to withstand these high temperatures, so the early procedures for DNA replication were very inefficient and time-consuming, and required large amounts of DNA polymerase and continuous handling throughout the process. The discovery in 1976 of Taq polymerase—a DNA polymerase purified from the thermophilic bacterium Thermus aquaticus, which naturally lives in hot environments such as hot springs—paved the way for dramatic improvements of the PCR method. The DNA polymerase isolated from T. aquaticus is stable at high temperatures, remaining active even after DNA denaturation, thus obviating the need to add new DNA polymerase after each cycle. This allowed an automated thermocycler-based process for DNA amplification. Patent disputes The PCR technique was patented by Kary Mullis and assigned to Cetus Corporation, where Mullis worked when he invented the technique in 1983. The Taq polymerase enzyme was also covered by patents. There have been several high-profile lawsuits related to the technique, including an unsuccessful lawsuit brought by DuPont. 
The Swiss pharmaceutical company Hoffmann-La Roche purchased the rights to the patents in 1992. The last of the commercial PCR patents expired in 2017. A related patent battle over the Taq polymerase enzyme is still ongoing in several jurisdictions around the world between Roche and Promega. The legal arguments have extended beyond the lives of the original PCR and Taq polymerase patents, which expired on 28 March 2005. See also COVID-19 testing DNA spiking Loop-mediated isothermal amplification Selector technique Thermus thermophilus Pfu DNA polymerase References External links US Patent for PCR What is PCR plateau effect? YouTube tutorial video History of the Polymerase Chain Reaction from the Smithsonian Institution Archives Molecular biology Laboratory techniques DNA profiling techniques Amplifiers Roche Biotechnology Molecular biology techniques American inventions
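The digital PCR entry in the variations list above states that the target concentration is calculated from the proportion of partitions with no amplification. A minimal sketch of that calculation is given below, assuming the standard Poisson model for how template molecules distribute across partitions; the partition counts and the 0.85 nL partition volume are illustrative values rather than parameters of any particular instrument.

```python
import math

def dpcr_concentration(n_total, n_negative, partition_volume_nl):
    """Estimate target concentration (copies/µL) from a digital PCR run.

    Assumes target molecules distribute across partitions following a
    Poisson distribution, so the mean copies per partition is
    lambda = -ln(fraction of negative partitions).
    """
    fraction_negative = n_negative / n_total
    if fraction_negative <= 0 or fraction_negative > 1:
        raise ValueError("need at least one negative partition and no more negatives than partitions")
    lam = -math.log(fraction_negative)          # mean copies per partition
    copies_per_nl = lam / partition_volume_nl   # concentration in copies per nanolitre
    return copies_per_nl * 1000.0               # convert to copies per microlitre

# Illustrative run: 20,000 partitions of 0.85 nL each, 14,000 of them negative
print(round(dpcr_concentration(20000, 14000, 0.85), 1))  # ≈ 419.6 copies/µL
```

In this example the mean occupancy is -ln(0.7) ≈ 0.357 copies per partition, which at 0.85 nL per partition corresponds to roughly 420 copies per microlitre of the diluted sample.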
23648
https://en.wikipedia.org/wiki/Polymerase
Polymerase
In biochemistry, a polymerase is an enzyme (EC 2.7.7.6/7/19/48/49) that synthesizes long chains of polymers or nucleic acids. DNA polymerase and RNA polymerase are used to assemble DNA and RNA molecules, respectively, by copying a DNA template strand using base-pairing interactions, or by copying RNA by half-ladder replication. A DNA polymerase from the thermophilic bacterium Thermus aquaticus (Taq) (PDB 1BGX, EC 2.7.7.7) is used in the polymerase chain reaction, an important technique of molecular biology. A polymerase may be template-dependent or template-independent. Poly(A) polymerase is an example of a template-independent polymerase. Terminal deoxynucleotidyl transferase is known to have both template-independent and template-dependent activities. By function DNA polymerase (DNA-directed DNA polymerase, DdDP) Family A: DNA polymerase I; Pol γ, θ, ν Family B: DNA polymerase II; Pol α, δ, ε, ζ Family C: DNA polymerase III holoenzyme Family X: Pol β, λ, μ Terminal deoxynucleotidyl transferase (TDT), which lends diversity to antibody heavy chains. Family Y: DNA polymerase IV (DinB) and DNA polymerase V (UmuD'2C) - SOS repair polymerases; Pol η, ι, κ Reverse transcriptase (RT; RNA-directed DNA polymerase; RdDP) Telomerase DNA-directed RNA polymerase (DdRP, RNAP) Multi-subunit (msDdRP): RNA polymerase I, RNA polymerase II, RNA polymerase III Single-subunit (ssDdRP): T7 RNA polymerase, POLRMT Primase, PrimPol RNA replicase (RNA-directed RNA polymerase, RdRP) Viral (single-subunit) Eukaryotic cellular (cRdRP; dual-subunit) Template-less RNA elongation Polyadenylation: PAP, PNPase By structure Polymerases are generally split into two superfamilies, the "right hand" fold and the "double psi beta barrel" (often simply "double-barrel") fold. The former is seen in almost all DNA polymerases and almost all viral single-subunit polymerases; they are marked by a conserved "palm" domain. The latter is seen in all multi-subunit RNA polymerases, in cRdRP, and in "family D" DNA polymerases found in archaea. The "X" family, represented by DNA polymerase beta, has only a vague "palm" shape and is sometimes considered a different superfamily. Primases generally do not fall into either category. Bacterial primases usually have the Toprim domain, and are related to topoisomerases and the mitochondrial helicase Twinkle. Archaeal and eukaryotic primases form an unrelated AEP family, possibly related to the polymerase palm. Both families nevertheless associate with the same set of helicases. See also Central dogma of molecular biology Exonuclease Ligase Nuclease PCR PARP Reverse transcription polymerase chain reaction RNA ligase (ATP) References External links EC 2.7.7 Enzymes
23649
https://en.wikipedia.org/wiki/Pacific%20Scandal
Pacific Scandal
The Pacific Scandal was a political scandal in Canada involving large sums of money paid by private interests to the Conservative party to cover election expenses in the 1872 Canadian federal election, in order to influence the bidding for a national rail contract. As part of British Columbia's 1871 agreement to join the Canadian Confederation, the federal government had agreed to build a transcontinental railway linking the seaboard of British Columbia to the eastern provinces. The scandal led to the resignation of Canada's first prime minister, John A. Macdonald, and a transfer of power from his Conservative government to a Liberal government, led by Alexander Mackenzie. One of the new government's first measures was to introduce secret ballots in an effort to improve the integrity of future elections. After the scandal broke, the railway plan collapsed, and the proposed line was not built. An entirely different operation later built the Canadian Pacific Railway to the Pacific. Background For a young and loosely defined nation, the building of a national railway was an active attempt at state-making, as well as an aggressive capitalist venture. Canada, a nascent country with a population of 3.5 million in 1871, lacked the means to exercise meaningful de facto control within the de jure political boundaries of the recently acquired Rupert's Land, and building a transcontinental railway was a national policy of high order to change that situation. Moreover, after the American Civil War the American frontier rapidly expanded west with land-hungry settlers, intensifying talk of annexation. Indeed, sentiments of Manifest Destiny were widespread at the time: in 1867, the year of Confederation, US Secretary of State William H. Seward surmised that the whole North American continent "shall be, sooner or later, within the magic circle of the American Union." Preventing American investment in the project was therefore considered to be in Canada's national interest. Thus the federal government favoured an "all Canadian route" through the rugged Canadian Shield of northern Ontario and refused to consider a less costly route passing south through Wisconsin and Minnesota. However, a route across the Canadian Shield was highly unpopular with potential investors not only in the United States but also in Canada and especially in Great Britain, the only other viable sources of financing. For would-be investors, the objections were not primarily based on politics or nationalism but economics. At the time, national governments lacked the finances needed to undertake such large projects. For the first transcontinental railroad, the United States government had made extensive grants of public land to the railway's builders, inducing private financiers to fund the railway on the understanding that they would acquire rich farmland along the route, which could then be sold for a large profit. However, the eastern terminus of the proposed Canadian Pacific route, unlike that of the first transcontinental, was not in rich Nebraskan farmland, but deep within the Canadian Shield. Copying the American financing model whilst insisting on an all-Canadian route would require the railway's backers to build hundreds of miles of track across rugged Shield terrain of little economic value, at considerable expense, before they could expect to access lucrative farmland in Manitoba and the newly created Northwest Territories, which at that time included Alberta and Saskatchewan. 
Many financiers, who had expected to make a relatively quick profit, were not willing to make that sort of long-term commitment. Nevertheless, the Montreal capitalist Hugh Allan, with his syndicate Canada Pacific Railway Company, sought the potentially lucrative charter for the project. The problem was that Allan and Macdonald were secretly in league with American financiers such as George W. McMullen and Jay Cooke, men who were deeply interested in the rival American undertaking, the Northern Pacific Railroad. Scandal Two groups competed for the contract to build the railway: Hugh Allan's Canada Pacific Railway Company and David Lewis Macpherson's Inter-Oceanic Railway Company. On April 2, 1873, Lucius Seth Huntington, a Liberal Member of Parliament, created an uproar in the House of Commons. He announced he had uncovered evidence that Allan and his associates had been granted the Canadian Pacific Railway contract in return for political donations of $360,000. In 1873, it became known that Allan had contributed a large sum of money to the Conservative government's re-election campaign of 1872; some sources quote a sum over $360,000. Allan had promised to keep American capital out of the railway deal but had lied to Macdonald over this vital point, as Macdonald later discovered. The Liberal Party, the opposition party in Parliament, accused the Conservatives of having made a tacit agreement to give the contract to Hugh Allan in exchange for money. In making such allegations, the Liberals and their allies in the press (particularly George Brown's newspaper The Globe) presumed that most of the money had been used to bribe voters in the 1872 election. The secret ballot, which was then considered a novelty, had not yet been introduced in Canada. Although it was illegal to offer, solicit or accept bribes in exchange for votes, effective enforcement of the prohibition proved impossible. Despite Macdonald's claims that he was innocent, evidence came to light showing receipts of money from Allan to Macdonald and some of his political colleagues. Perhaps even more damaging to Macdonald was the Liberals' discovery, through a former employee of Allan, of a telegram thought to have been stolen from the safe of Allan's lawyer, John Abbott. The scandal proved fatal to Macdonald's government. Macdonald's control of Parliament had already been tenuous since the 1872 election. Since party discipline was not as strong as it is today, once Macdonald's culpability in the scandal became known, he could no longer expect to retain the confidence of the House of Commons. Macdonald resigned as prime minister on 5 November 1873. He also offered his resignation as the head of the Conservative Party, but it was not accepted, and he was convinced to stay. Perhaps as a direct result of this scandal, the Conservatives fell in the eyes of the public and were relegated to the role of Official Opposition in the federal election of 1874, in which secret ballots were used for the first time. The election gave Alexander Mackenzie a firm mandate to succeed Macdonald as the new prime minister of Canada. Despite the short-term defeat, the scandal was not a mortal wound to Macdonald, the Conservative Party, or the construction of the Canadian Pacific Railway. The Long Depression gripped Canada shortly after Macdonald left office, and although the causes of the depression were largely external to Canada, many Canadians blamed Mackenzie for the ensuing hard times. 
Macdonald returned as prime minister in the 1878 election thanks to his National Policy. He held the office of prime minister until his death in 1891, and the Canadian Pacific Railway was completed in 1885, while Macdonald was still in office, although by a completely different corporation. References Bibliography Further reading Primary sources External links Canada's first political scandal, CBC Video Sauvé, Todd D. Manifest Destiny and Western Canada: Book One: Sitting Bull, the Little Bighorn and the North-West Mounted Police Revisited (an alternative view of the Pacific Scandal and the overall binational political context at the time) Chapter I – A Tale of Two Countries Chapter II – A Tale of Two Railroads Political scandals in Canada Canadian Pacific Railway History of transport in Canada Political funding Political history of Canada
23650
https://en.wikipedia.org/wiki/Primer%20%28molecular%20biology%29
Primer (molecular biology)
A primer is a short, single-stranded nucleic acid used by all living organisms in the initiation of DNA synthesis. A synthetic primer may also be referred to as an oligo, short for oligonucleotide. DNA polymerase enzymes (responsible for DNA replication) are only capable of adding nucleotides to the 3'-end of an existing nucleic acid, requiring that a primer be bound to the template before DNA polymerase can begin a complementary strand. DNA polymerase adds nucleotides after binding to the RNA primer and synthesizes the whole strand. Later, the RNA primers must be accurately removed and replaced with DNA nucleotides; the small break left between fragments, known as a nick, is then sealed by an enzyme called ligase. The removal process of the RNA primer requires several enzymes, such as Fen1, Lig1, and others that work in coordination with DNA polymerase, to ensure the removal of the RNA nucleotides and the addition of DNA nucleotides. Living organisms use solely RNA primers, while laboratory techniques in biochemistry and molecular biology that require in vitro DNA synthesis (such as DNA sequencing and polymerase chain reaction) usually use DNA primers, since they are more temperature stable. Primers can be designed in the laboratory for specific reactions such as the polymerase chain reaction (PCR). When designing PCR primers, there are specific measures that must be taken into consideration, such as the melting temperature of the primers and the annealing temperature of the reaction itself. Moreover, the DNA binding sequence of the primer in vitro has to be specifically chosen, which is done using a method called the basic local alignment search tool (BLAST), which scans the DNA and finds specific and unique regions for the primer to bind. RNA primers in vivo RNA primers are used by living organisms in the initiation of synthesizing a strand of DNA. A class of enzymes called primases adds a complementary RNA primer to the reading template de novo on both the leading and lagging strands. Starting from the free 3'-OH of the primer, known as the primer terminus, a DNA polymerase can extend a newly synthesized strand. The leading strand in DNA replication is synthesized in one continuous piece moving with the replication fork, requiring only an initial RNA primer to begin synthesis. In the lagging strand, the template DNA runs in the 5′→3′ direction. Since DNA polymerase cannot add bases in the 3′→5′ direction complementary to the template strand, DNA is synthesized 'backward' in short fragments moving away from the replication fork, known as Okazaki fragments. Unlike in the leading strand, this method results in the repeated starting and stopping of DNA synthesis, requiring multiple RNA primers. Along the DNA template, primase intersperses RNA primers from which DNA polymerase synthesizes DNA in the 5′→3′ direction. Another example of primers being used to enable DNA synthesis is reverse transcription. Reverse transcriptase is an enzyme that uses a template strand of RNA to synthesize a complementary strand of DNA. The DNA polymerase component of reverse transcriptase requires an existing 3' end to begin synthesis. Primer removal After the insertion of Okazaki fragments, the RNA primers are removed (the mechanism of removal differs between prokaryotes and eukaryotes) and replaced with new deoxyribonucleotides that fill the gaps where the RNA primer was present. DNA ligase then joins the fragmented strands together, completing the synthesis of the lagging strand. 
In prokaryotes, DNA polymerase I synthesizes the Okazaki fragment until it reaches the previous RNA primer. Then the enzyme simultaneously acts as a 5′→3′ exonuclease, removing primer ribonucleotides in front and adding deoxyribonucleotides behind. Both the polymerization and the excision of the RNA primer occur in the 5′→3′ direction, and polymerase I can carry out these activities simultaneously; this is known as "nick translation". Nick translation refers to the synchronized activity of polymerase I in removing the RNA primer and adding deoxyribonucleotides. The remaining break between the strands, called a nick, is then sealed by a DNA ligase. In eukaryotes the removal of RNA primers in the lagging strand is essential for the completion of replication. Thus, as the lagging strand is synthesized by DNA polymerase δ in the 5′→3′ direction, Okazaki fragments, which are discontinuous stretches of DNA, are formed. Then, when the DNA polymerase reaches the 5′ end of the RNA primer from the previous Okazaki fragment, it displaces the 5′ end of the primer into a single-stranded RNA flap, which is removed by nuclease cleavage. Cleavage of the RNA flaps involves three methods of primer removal. The first possibility is the creation of a short flap that is directly removed by flap structure-specific endonuclease 1 (FEN1), which cleaves the 5′ overhanging flap. This method is known as the short flap pathway of RNA primer removal. The second way to cleave an RNA primer is by degrading the RNA strand using an RNase; in eukaryotes this is RNase H2. This enzyme degrades most of the annealed RNA primer, except the nucleotides close to the 5′ end of the primer. The remaining nucleotides are thus displaced into a flap that is cleaved off by FEN1. The last possible method of removing the RNA primer is known as the long flap pathway. In this pathway several enzymes are recruited to lengthen the flap and then cleave it off. The flaps are lengthened by a 5′ to 3′ helicase known as Pif1. Once lengthened by Pif1, the long flap is stabilized by replication protein A (RPA). The RPA-bound DNA inhibits the activity or recruitment of FEN1; as a result, another nuclease must be recruited to cleave the flap. This second nuclease is the DNA2 nuclease, which has helicase-nuclease activity and cleaves the long flap of the RNA primer, leaving behind a few nucleotides that are then cleaved by FEN1. Finally, when all the RNA primers have been removed and the gaps filled in with deoxyribonucleotides, the nicks remaining between the Okazaki fragments are sealed by an enzyme known as DNA ligase 1, through a process called ligation. Uses of synthetic primers Synthetic primers, sometimes known as oligos, are chemically synthesized oligonucleotides, usually of DNA, which can be customized to anneal to a specific site on the template DNA. In solution, the primer spontaneously hybridizes with the template through Watson-Crick base pairing before being extended by DNA polymerase. The ability to create and customize synthetic primers has proven an invaluable tool necessary to a variety of molecular biological approaches involving the analysis of DNA. Both the Sanger chain termination method and the "Next-Gen" method of DNA sequencing require primers to initiate the reaction. PCR primer design The polymerase chain reaction (PCR) uses a pair of custom primers to direct DNA elongation toward each other at opposite ends of the sequence being amplified. 
These primers are typically between 18 and 24 bases in length and must match only the specific upstream and downstream sites of the sequence being amplified. A primer that can bind to multiple regions along the DNA will amplify them all, defeating the purpose of PCR. A few criteria must be brought into consideration when designing a pair of PCR primers. Pairs of primers should have similar melting temperatures, since annealing during PCR occurs for both strands simultaneously, and this shared melting temperature must be neither much higher nor much lower than the reaction's annealing temperature. A primer with a Tm (melting temperature) too much higher than the reaction's annealing temperature may mishybridize and extend at an incorrect location along the DNA sequence. A Tm significantly lower than the annealing temperature may fail to anneal and extend at all. Additionally, primer sequences need to be chosen to uniquely select for a region of DNA, avoiding the possibility of hybridization to a similar sequence nearby. A commonly used method for selecting a primer site is BLAST search, whereby all the possible regions to which a primer may bind can be seen. Both the nucleotide sequence as well as the primer itself can be BLAST searched. The free NCBI tool Primer-BLAST integrates primer design and BLAST search into one application, as do commercial software products such as ePrime and Beacon Designer. Computer simulations of theoretical PCR results (Electronic PCR) may be performed to assist in primer design by giving melting and annealing temperatures, etc. As of 2014, many online tools are freely available for primer design, some of which focus on specific applications of PCR. Primers with high specificity for a subset of DNA templates in the presence of many similar variants can be designed using some software (e.g., DECIPHER) or be developed independently for a specific group of animals. Selecting a specific region of DNA for primer binding requires some additional considerations. Regions high in mononucleotide and dinucleotide repeats should be avoided, as loop formation can occur and contribute to mishybridization. Primers should not easily anneal with other primers in the mixture; this phenomenon can lead to the production of 'primer dimer' products contaminating the end solution. Primers should also not anneal strongly to themselves, as internal hairpins and loops could hinder the annealing with the template DNA. When designing primers, additional nucleotide bases can be added to the back ends of each primer, resulting in a customized cap sequence on each end of the amplified region. One application of this practice is in TA cloning, a special subcloning technique similar to PCR, where efficiency can be increased by adding AG tails to the 5′ and the 3′ ends. Degenerate primers Some situations may call for the use of degenerate primers. These are mixtures of primers that are similar, but not identical. These may be convenient when amplifying the same gene from different organisms, as the sequences are probably similar but not identical. This technique is useful because the genetic code itself is degenerate, meaning several different codons can code for the same amino acid. This allows different organisms to have significantly different genetic sequences that code for a highly similar protein. For this reason, degenerate primers are also used when primer design is based on protein sequence, as the specific sequence of codons is not known. 
Therefore, a primer sequence corresponding to the amino acid isoleucine might be "ATH", where A stands for adenine, T for thymine, and H for adenine, thymine, or cytosine, according to the genetic code for each codon and using the IUPAC symbols for degenerate bases. Degenerate primers may not perfectly hybridize with a target sequence, which can greatly reduce the specificity of the PCR amplification. Degenerate primers are widely used and extremely useful in the field of microbial ecology. They allow for the amplification of genes from thus far uncultivated microorganisms or allow the recovery of genes from organisms where genomic information is not available. Usually, degenerate primers are designed by aligning gene sequences found in GenBank. Differences among sequences are accounted for by using IUPAC degeneracies for individual bases. PCR primers are then synthesized as a mixture of primers corresponding to all permutations of the codon sequence (a minimal illustrative script appears at the end of this entry). See also Oligonucleotide synthesis – the methods by which primers are manufactured References External links Primer3 Primer-BLAST DNA replication Molecular biology Polymerase chain reaction
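A minimal script illustrating two of the calculations described in this entry: a rule-of-thumb melting-temperature estimate for a primer of typical PCR length, and the expansion of a degenerate primer such as "ATH" into its explicit sequences. The Tm formula used here (64.9 + 41 × (GC − 16.4) / N) is only one widely quoted rough approximation, not a substitute for nearest-neighbour methods, and the example primer sequence is arbitrary.

```python
from itertools import product

IUPAC = {  # standard IUPAC single-letter codes for degenerate bases
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "GC", "W": "AT",
    "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
    "H": "ACT", "V": "ACG", "N": "ACGT",
}

def melting_temperature(primer):
    """Rough Tm estimate (in °C) for an 18-24 nt primer: 64.9 + 41*(GC - 16.4)/N."""
    primer = primer.upper()
    gc = primer.count("G") + primer.count("C")
    return 64.9 + 41.0 * (gc - 16.4) / len(primer)

def expand_degenerate(primer):
    """List every explicit sequence encoded by a degenerate primer."""
    choices = [IUPAC[base] for base in primer.upper()]
    return ["".join(combo) for combo in product(*choices)]

print(round(melting_temperature("AGCGGATAACAATTTCACACAGGA"), 1))  # ≈ 54.0 for this arbitrary 24-mer
print(expand_degenerate("ATH"))  # ['ATA', 'ATC', 'ATT'], the three isoleucine codons
```

Expanding a degenerate primer this way lists exactly which primer molecules are present in the synthesized mixture, which is useful when estimating how much the degeneracy dilutes each individual sequence.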
23652
https://en.wikipedia.org/wiki/Purine
Purine
Purine is a heterocyclic aromatic organic compound that consists of two rings (pyrimidine and imidazole) fused together. It is water-soluble. Purine also gives its name to the wider class of molecules, purines, which include substituted purines and their tautomers. They are the most widely occurring nitrogen-containing heterocycles in nature. Dietary sources Purines are found in high concentration in meat and meat products, especially internal organs such as liver and kidney. In general, plant-based diets are low in purines. High-purine plants and algae include some legumes (lentils, soybeans, and black-eyed peas) and spirulina. Examples of high-purine sources include: sweetbreads, anchovies, sardines, liver, beef kidneys, brains, meat extracts (e.g., Oxo, Bovril), herring, mackerel, scallops, game meats, yeast (beer, yeast extract, nutritional yeast) and gravy. A moderate amount of purine is also contained in red meat, beef, pork, poultry, fish and seafood, asparagus, cauliflower, spinach, mushrooms, green peas, lentils, dried peas, beans, oatmeal, wheat bran, wheat germ, and haws. Biochemistry Purines and pyrimidines make up the two groups of nitrogenous bases, including the two groups of nucleotide bases. The purine bases are guanine (G) and adenine (A), which form the corresponding deoxyribonucleosides (deoxyguanosine and deoxyadenosine) with a deoxyribose moiety and ribonucleosides (guanosine and adenosine) with a ribose moiety. These nucleosides, together with phosphoric acid, form the corresponding nucleotides (deoxyguanylate and deoxyadenylate; guanylate and adenylate), which are the building blocks of DNA and RNA, respectively. Purine bases also play an essential role in many metabolic and signalling processes within the compounds guanosine monophosphate (GMP) and adenosine monophosphate (AMP). In order to perform these essential cellular processes, both purines and pyrimidines are needed by the cell, and in similar quantities. Both purine and pyrimidine are self-inhibiting and activating. When purines are formed, they inhibit the enzymes required for more purine formation. This self-inhibition occurs as they also activate the enzymes needed for pyrimidine formation. Pyrimidine simultaneously self-inhibits and activates purine in a similar manner. Because of this, there is nearly an equal amount of both substances in the cell at all times. Properties Purine is both a very weak acid (pKa 8.93) and an even weaker base (pKa 2.39). If dissolved in pure water, the pH is halfway between these two pKa values (a worked value is given below). Purine is aromatic, having four tautomers, each with a hydrogen bonded to a different one of the four nitrogen atoms. These are identified as 1-H, 3-H, 7-H, and 9-H, according to the numbering of the ring nitrogens. The common crystalline form favours the 7-H tautomer, while in polar solvents both the 9-H and 7-H tautomers predominate. Substituents to the rings and interactions with other molecules can shift the equilibrium of these tautomers. Notable purines There are many naturally occurring purines. They include the nucleotide bases adenine and guanine. In DNA, these bases form hydrogen bonds with their complementary pyrimidines, thymine and cytosine, respectively. This is called complementary base pairing. In RNA, the complement of adenine is uracil instead of thymine. Other notable purines are hypoxanthine, xanthine, theophylline, theobromine, caffeine, uric acid and isoguanine. 
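A worked value for the statement in the Properties section above, using the common approximation that the pH of a solution of an amphoteric compound lies midway between the pKa of its conjugate acid and its own pKa:

\[ \mathrm{pH} \approx \tfrac{1}{2}\left(\mathrm{p}K_\mathrm{a}(\mathrm{purineH^{+}}) + \mathrm{p}K_\mathrm{a}(\mathrm{purine})\right) = \tfrac{1}{2}(2.39 + 8.93) \approx 5.66 \]

This is only the idealized midpoint implied by the text; the pH of a real solution also depends on concentration and dissolved carbon dioxide.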
Functions Aside from the crucial roles of purines (adenine and guanine) in DNA and RNA, purines are also significant components in a number of other important biomolecules, such as ATP, GTP, cyclic AMP, NADH, and coenzyme A. Purine (1) itself has not been found in nature, but it can be produced by organic synthesis. Purines may also function directly as neurotransmitters, acting upon purinergic receptors. Adenosine activates adenosine receptors. History The word purine (pure urine) was coined by the German chemist Emil Fischer in 1884. He synthesized it for the first time in 1898. The starting material for the reaction sequence was uric acid (8), which had been isolated from kidney stones by Carl Wilhelm Scheele in 1776. Uric acid was reacted with PCl5 to give 2,6,8-trichloropurine, which was converted with HI and PH4I to give 2,6-diiodopurine. The product was reduced to purine using zinc dust. Metabolism Many organisms have metabolic pathways to synthesize and break down purines. Purines are biologically synthesized as nucleosides (bases attached to ribose). Accumulation of modified purine nucleotides is detrimental to various cellular processes, especially those involving DNA and RNA. To be viable, organisms possess a number of deoxypurine phosphohydrolases, which hydrolyze these purine derivatives, removing them from the active NTP and dNTP pools. Deamination of purine bases can result in accumulation of such nucleotides as ITP, dITP, XTP and dXTP. Defects in enzymes that control purine production and breakdown can severely alter a cell's DNA sequences, which may explain why people who carry certain genetic variants of purine metabolic enzymes have a higher risk for some types of cancer. Purine biosynthesis in the three domains of life Organisms in all three domains of life, eukaryotes, bacteria and archaea, are able to carry out de novo biosynthesis of purines. This ability reflects the essentiality of purines for life. The biochemical pathway of synthesis is very similar in eukaryotes and bacterial species, but is more variable among archaeal species. A nearly complete, or complete, set of genes required for purine biosynthesis was determined to be present in 58 of the 65 archaeal species studied. However, seven archaeal species were identified in which purine-encoding genes are entirely, or nearly entirely, absent. Apparently the archaeal species unable to synthesize purines are able to acquire exogenous purines for growth, and are thus analogous to purine mutants of eukaryotes, e.g. purine mutants of the ascomycete fungus Neurospora crassa, which also require exogenous purines for growth. Relationship with gout Higher levels of meat and seafood consumption are associated with an increased risk of gout, whereas a higher level of consumption of dairy products is associated with a decreased risk. Moderate intake of purine-rich vegetables or protein is not associated with an increased risk of gout. Similar results have been found with the risk of hyperuricemia. Laboratory synthesis In addition to in vivo synthesis of purines in purine metabolism, purine can also be synthesized artificially. Purine is obtained in good yield when formamide is heated in an open vessel at 170 °C for 28 hours. This remarkable reaction and others like it have been discussed in the context of the origin of life. Patented on August 20, 1968, the currently recognized method of industrial-scale production of adenine is a modified form of the formamide method. 
This method heats formamide at 120 °C in a sealed flask for 5 hours to form adenine. The yield of the reaction is greatly increased by using phosphorus oxychloride (phosphoryl chloride) or phosphorus pentachloride as an acid catalyst and by sunlight or ultraviolet conditions. After the 5 hours have passed and the formamide-phosphorus oxychloride-adenine solution cools down, water is put into the flask containing the formamide and now-formed adenine. The water-formamide-adenine solution is then poured through a filtering column of activated charcoal. The water and formamide molecules, being small molecules, will pass through the charcoal and into the waste flask; the large adenine molecules, however, will attach or "adsorb" to the charcoal due to the van der Waals forces between the adenine and the carbon in the charcoal. Because charcoal has a large surface area, it is able to capture the majority of molecules above a certain size (larger than water and formamide) as they pass through it. To extract the adenine from the charcoal-adsorbed adenine, ammonia gas dissolved in water (aqua ammonia) is poured onto the activated charcoal-adenine structure to liberate the adenine into the ammonia-water solution. The solution containing water, ammonia, and adenine is then left to air dry, with the adenine losing solubility due to the loss of the ammonia gas that previously made the solution basic and capable of dissolving adenine, thus causing it to crystallize into a pure white powder that can be stored. Oro and Kamat (1961) and Orgel and co-workers (1966, 1967) have shown that four molecules of HCN tetramerize to form diaminomaleodinitrile (12), which can be converted into almost all naturally occurring purines. For example, five molecules of HCN condense in an exothermic reaction to make adenine, especially in the presence of ammonia (the balanced equation is given at the end of this entry). The Traube purine synthesis (1900) is a classic reaction (named after Wilhelm Traube) between an amine-substituted pyrimidine and formic acid. Prebiotic synthesis of purine ribonucleosides In order to understand how life arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of life under plausible prebiotic conditions. Nam et al. (2018) demonstrated the direct condensation of purine and pyrimidine nucleobases with ribose to give ribonucleosides in aqueous microdroplets, a key step leading to RNA formation. Also, a plausible prebiotic process for synthesizing purine ribonucleosides was presented by Becker et al. in 2016. See also Purinones Pyrimidine Simple aromatic rings Transition Transversion Gout, a disorder of purine metabolism Adenine Guanine References External links Purine Content in Food Simple aromatic rings 1898 in science Emil Fischer Substances discovered in the 19th century
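As a check on the statement above that five molecules of HCN condense to give adenine, the overall stoichiometry balances with no other atoms required, since adenine has the molecular formula C5H5N5:

\[ 5\,\mathrm{HCN} \longrightarrow \mathrm{C_5H_5N_5}\ (\text{adenine}) \]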
23653
https://en.wikipedia.org/wiki/Pyrimidine
Pyrimidine
Pyrimidine is an aromatic, heterocyclic, organic compound similar to pyridine. One of the three diazines (six-membered heterocyclics with two nitrogen atoms in the ring), it has nitrogen atoms at positions 1 and 3 in the ring. The other diazines are pyrazine (nitrogen atoms at the 1 and 4 positions) and pyridazine (nitrogen atoms at the 1 and 2 positions). In nucleic acids, three types of nucleobases are pyrimidine derivatives: cytosine (C), thymine (T), and uracil (U). Occurrence and history The pyrimidine ring system has wide occurrence in nature as substituted and ring-fused compounds and derivatives, including the nucleobases cytosine, thymine and uracil, thiamine (vitamin B1) and alloxan. It is also found in many synthetic compounds such as barbiturates and the HIV drug zidovudine. Although pyrimidine derivatives such as alloxan were known in the early 19th century, a laboratory synthesis of a pyrimidine was not carried out until 1879, when Grimaux reported the preparation of barbituric acid from urea and malonic acid in the presence of phosphorus oxychloride. The systematic study of pyrimidines began in 1884 with Pinner, who synthesized derivatives by condensing ethyl acetoacetate with amidines. Pinner first proposed the name “pyrimidin” in 1885. The parent compound was first prepared by Gabriel and Colman in 1900, by conversion of barbituric acid to 2,4,6-trichloropyrimidine followed by reduction using zinc dust in hot water. Nomenclature The nomenclature of pyrimidines is straightforward. However, as with other heterocyclics, tautomeric hydroxyl groups cause complications, since such compounds exist primarily in the cyclic amide form. For example, 2-hydroxypyrimidine is more properly named 2-pyrimidone. A partial list of trivial names of various pyrimidines exists. Physical properties Physical properties are shown in the data box. A more extensive discussion, including spectra, can be found in Brown et al. Chemical properties Per the classification by Albert, six-membered heterocycles can be described as π-deficient. Substitution by electronegative groups or additional nitrogen atoms in the ring significantly increases the π-deficiency. These effects also decrease the basicity. As in pyridine, the π-electron density in pyrimidine is decreased, and to an even greater extent. Therefore, electrophilic aromatic substitution is more difficult, while nucleophilic aromatic substitution is facilitated. An example of the latter reaction type is the displacement of the amino group in 2-aminopyrimidine by chlorine, and its reverse. Electron lone pair availability (basicity) is decreased compared to pyridine. Compared to pyridine, N-alkylation and N-oxidation are more difficult. The pKa value for protonated pyrimidine is 1.23, compared to 5.30 for pyridine. Protonation and other electrophilic additions will occur at only one nitrogen due to further deactivation by the second nitrogen. The 2-, 4-, and 6-positions on the pyrimidine ring are electron-deficient, analogous to those in pyridine and in nitro- and dinitrobenzene. The 5-position is less electron-deficient, and substituents there are quite stable. However, electrophilic substitution is relatively facile at the 5-position, including nitration and halogenation. Reduction in resonance stabilization of pyrimidines may lead to addition and ring-cleavage reactions rather than substitutions. One such manifestation is observed in the Dimroth rearrangement. Pyrimidine is also found in meteorites, but scientists still do not know its origin. 
Pyrimidine also photolytically decomposes into uracil under ultraviolet light. Synthesis Pyrimidine biosynthesis creates derivatives, such as orotate, thymine, cytosine, and uracil, de novo from carbamoyl phosphate and aspartate. As is often the case with parent heterocyclic ring systems, the synthesis of pyrimidine itself is not that common and is usually performed by removing functional groups from derivatives. Primary syntheses in quantity involving formamide have been reported. As a class, pyrimidines are typically synthesized by the principal synthesis involving cyclization of β-dicarbonyl compounds with N–C–N compounds. Typical examples are the reaction of the former with amidines to give 2-substituted pyrimidines, with urea to give 2-pyrimidinones, and with guanidines to give 2-aminopyrimidines. Pyrimidines can be prepared via the Biginelli reaction and other multicomponent reactions. Many other methods rely on condensation of carbonyls with diamines, for instance the synthesis of 2-thio-6-methyluracil from thiourea and ethyl acetoacetate, or the synthesis of 4-methylpyrimidine from 4,4-dimethoxy-2-butanone and formamide. A novel method is the reaction of N-vinyl and N-aryl amides with carbonitriles under electrophilic activation of the amide with 2-chloropyridine and trifluoromethanesulfonic anhydride. Reactions Because of the decreased basicity compared to pyridine, electrophilic substitution of pyrimidine is less facile. Protonation or alkylation typically takes place at only one of the ring nitrogen atoms. Mono-N-oxidation occurs by reaction with peracids. Electrophilic C-substitution of pyrimidine occurs at the 5-position, the least electron-deficient. Nitration, nitrosation, azo coupling, halogenation, sulfonation, formylation, hydroxymethylation, and aminomethylation have been observed with substituted pyrimidines. Nucleophilic C-substitution should be facilitated at the 2-, 4-, and 6-positions, but there are only a few examples. Amination and hydroxylation have been observed for substituted pyrimidines. Reactions with Grignard or alkyllithium reagents yield 4-alkyl- or 4-arylpyrimidines after aromatization. Free-radical attack has been observed for pyrimidine, and photochemical reactions have been observed for substituted pyrimidines. Pyrimidine can be hydrogenated to give tetrahydropyrimidine. Derivatives Nucleotides Three nucleobases found in nucleic acids, cytosine (C), thymine (T), and uracil (U), are pyrimidine derivatives. In DNA and RNA, these bases form hydrogen bonds with their complementary purines. Thus, in DNA, the purines adenine (A) and guanine (G) pair up with the pyrimidines thymine (T) and cytosine (C), respectively. In RNA, the complement of adenine (A) is uracil (U) instead of thymine (T), so the pairs that form are adenine:uracil and guanine:cytosine. Very rarely, thymine can appear in RNA, or uracil in DNA. In addition to the three major pyrimidine bases, some minor pyrimidine bases can also occur in nucleic acids. These minor pyrimidines are usually methylated versions of major ones and are postulated to have regulatory functions. These hydrogen bonding modes are for classical Watson–Crick base pairing. Other hydrogen bonding modes ("wobble pairings") are available in both DNA and RNA, although the additional 2′-hydroxyl group of RNA expands the configurations through which RNA can form hydrogen bonds. 
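The complementary pairing described above amounts to a simple lookup between bases. The following minimal Python sketch is an illustration only, not drawn from any cited source; the dictionary and function names are hypothetical. It builds the base-by-base complementary strand of a DNA or RNA sequence under classical Watson–Crick pairing.

# Watson-Crick complements: each purine pairs with a pyrimidine and vice versa.
DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def complement(sequence: str, rna: bool = False) -> str:
    """Return the base-by-base complementary strand (not reversed)."""
    table = RNA_COMPLEMENT if rna else DNA_COMPLEMENT
    return "".join(table[base] for base in sequence.upper())

# DNA: the purines A and G pair with the pyrimidines T and C.
assert complement("GATTACA") == "CTAATGT"
# RNA: adenine pairs with uracil instead of thymine.
assert complement("GAUUACA", rna=True) == "CUAAUGU"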
Theoretical aspects In March 2015, NASA Ames scientists reported that, for the first time, complex DNA and RNA organic compounds of life, including uracil, cytosine and thymine, have been formed in the laboratory under outer space conditions, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), the most carbon-rich chemical found in the universe, may have been formed in red giants or in interstellar dust and gas clouds. Prebiotic synthesis of pyrimidine nucleotides In order to understand how life arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of life under plausible prebiotic conditions. The RNA world hypothesis holds that in the primordial soup there existed free-floating ribonucleotides, the fundamental molecules that combine in series to form RNA. Complex molecules such as RNA must have emerged from relatively small molecules whose reactivity was governed by physico-chemical processes. RNA is composed of pyrimidine and purine nucleotides, both of which are necessary for reliable information transfer, and thus natural selection and Darwinian evolution. Becker et al. showed how pyrimidine nucleosides can be synthesized from small molecules and ribose, driven solely by wet-dry cycles. Purine nucleosides can be synthesized by a similar pathway. 5’-mono-and diphosphates also form selectively from phosphate-containing minerals, allowing concurrent formation of polyribonucleotides with both the pyrimidine and purine bases. Thus a reaction network towards the pyrimidine and purine RNA building blocks can be established starting from simple atmospheric or volcanic molecules. See also ANRORC mechanism Purine Pyrimidine metabolism Simple aromatic rings Transition Transversion References Biomolecules Aromatic bases Simple aromatic rings Substances discovered in the 19th century
23654
https://en.wikipedia.org/wiki/Play-by-mail%20game
Play-by-mail game
A play-by-mail game (also known as a PBM game, PBEM game, turn-based game, turn based distance game, or an interactive strategy game.) is a game played through postal mail, email, or other digital media. Correspondence chess and Go were among the first PBM games. Diplomacy has been played by mail since 1963, introducing a multi-player aspect to PBM games. Flying Buffalo Inc. pioneered the first commercially available PBM game in 1970. A small number of PBM companies followed in the 1970s, with an explosion of hundreds of startup PBM companies in the 1980s at the peak of PBM gaming popularity, many of them small hobby companies—more than 90 percent of which eventually folded. A number of independent PBM magazines also started in the 1980s, including The Nuts & Bolts of PBM, Gaming Universal, Paper Mayhem and Flagship. These magazines eventually went out of print, replaced in the 21st century by the online PBM journal Suspense and Decision. Play-by-mail games (which became known as "turn-based games" in the digital age) have a number of advantages and disadvantages compared to other kinds of gaming. PBM games have wide ranges for turn lengths. Some games allow turnaround times of a day or less—even hourly. Other games structure multiple days or weeks for players to consider moves or turns and players never run out of opponents to face. If desired, some PBM games can be played for years. Additionally, the complexity of PBM games can be far beyond that allowed by a board game in an afternoon, and pit players against live opponents in these conditions—a challenge some players enjoy. PBM games allow the number of opponents or teams in the dozens—with some previous examples over a thousand players. PBM games also allow gamers to interact with others globally. Games with low turn costs compare well with expensive board or video games. Drawbacks include the price for some PBM games with high setup and/or turn costs, and the lack of the ability for face-to-face roleplaying. Additionally, for some players, certain games can be overly complex, and delays in turn processing can be a negative. Play-by-mail games are multifaceted. In their earliest form they involved two players mailing each other directly by postal mail, such as in correspondence chess. Multi-player games, such as Diplomacy or more complex games available today, involve a game master who receives and processes orders and adjudicates turn results for players. These games also introduced the element of diplomacy in which participants can discuss gameplay with each other, strategize, and form alliances. In the 1970s and 1980s, some games involved turn results adjudicated completely by humans. Over time, partial or complete turn adjudication by computer became the norm. Games also involve open- and closed-end variants. Open-ended games do not normally end and players can develop their positions to the fullest extent possible; in closed-end games, players pursue victory conditions until a game conclusion. PBM games enable players to explore a diverse array of roles, such as characters in fantasy or medieval settings, space opera, inner city gangs, or more unusual ones such as assuming the role of a microorganism or a monster. History The earliest play-by-mail games developed as a way for geographically separated gamers to compete with each other using postal mail. Chess and Go are among the oldest examples of this. In these two-player games, players sent moves directly to each other. 
Multi-player games emerged later: Diplomacy is an early example of this type, emerging in 1963, in which a central game master manages the game, receiving moves and publishing adjudications. According to Shannon Appelcline, there was some PBM play in the 1960s, but not much. For example, some wargamers began playing Stalingrad by mail in this period. In the early 1970s, in the United States, Rick Loomis, of Flying Buffalo Inc., began a number of multi-player play-by-mail games; these included games such as Nuclear Destruction, which launched in 1970. This began the professional PBM industry in the United States. Professional game moderation started in 1971 at Flying Buffalo which added games such as Battleplan, Heroic Fantasy, Starweb, and others, which by the late 1980s were all computer moderated. For approximately five years, Flying Buffalo was the single dominant company in the US PBM industry until Schubel & Son entered the field in roughly 1976 with the human-moderated Tribes of Crane. Schubel & Son introduced fee structure innovations which allowed players to pay for additional options or special actions outside of the rules. For players with larger bankrolls, this provided advantages and the ability to game the system. The next big entrant was Superior Simulations with its game Empyrean Challenge in 1978. Reviewer Jim Townsend asserted that it was "the most complex game system on Earth" with some large position turn results 1,000 pages in length. Chris Harvey started the commercial PBM industry in the United Kingdom with a company called ICBM. After Harvey played Flying Buffalo's Nuclear Destruction game in the United States in approximately 1971, Rick Loomis suggested that he run the game in the UK with Flying Buffalo providing the computer moderation. ICBM Games led the industry in the UK as a result of this proxy method of publishing Flying Buffalo's PBM games, along with KJC games and Mitregames. In the early 1980s, the field of PBM players was growing. Individual PBM game moderators were plentiful in 1980. However, the PBM industry in 1980 was still nascent: there were still only two sizable commercial PBM companies, and only a few small ones. The most popular PBM games of 1980 were Starweb and Tribes of Crane. Some players, unhappy with their experiences with Schubel & Son and Superior Simulations, launched their own company—Adventures by Mail—with the game, Beyond the Stellar Empire, which became "immensely popular". In the same way, many people launched PBM companies, trying their hand at finding the right mix of action and strategy for the gaming audience of the period. According to Jim Townsend: In the late 70's and all of the 80's, many small PBM firms have opened their doors and better than 90% of them have failed. Although PBM is an easy industry to get into, staying in business is another thing entirely. Literally hundreds of PBM companies have come and gone, most of them taking the money of would-be-customers with them. Townsend emphasized the risks for the PBM industry in that "The new PBM company has such a small chance of surviving that no insurance company would write a policy to cover them. Skydivers are a better risk." W.G. Armintrout wrote a 1982 article in The Space Gamer magazine warning those thinking of entering the professional PBM field of the importance of playtesting games to mitigate the risk of failure. By the late 1980s, of the more than one hundred play-by-mail companies operating, the majority were hobbies, not run as businesses to make money. 
Townsend estimated that, in 1988, there were about a dozen profitable PBM companies in the United States—with an additional few in the United Kingdom and the same in Australia. Sam Roads of Harlequin Games similarly assessed the state of the PBM industry in its early days while also noting the existence of few non-English companies. By the 1980s, interest in PBM gaming in Europe increased. The first UK PBM convention was in 1986. In 1993, the founder of Flagship magazine, Nick Palmer, stated that "recently there has been a rapid diffusion throughout continental Europe where now there are now thousands of players". In 1992, Jon Tindall stated that the number of Australian players was growing, but limited by a relatively small market base. In a 2002 listing of 182 primarily European PBM game publishers and Zines, Flagship listed ten non-UK entries, to include one each from Austria and France, six from Germany, one from Greece, and one from the Netherlands. PBM games up to the 1980s came from multiple sources: some were adapted from existing games and others were designed solely for postal play. In 1985, Pete Tamlyn stated that most popular games had already been attempted in postal play, noting that none had succeeded as well as Diplomacy. Tamlyn added that there was significant experimentation in adapting games to postal play at the time and that most games could be played by mail. These adapted games were typically run by a gamemaster using a fanzine to publish turn results. The 1980s were also noteworthy in that PBM games designed and published in this decade were written specifically for the genre versus adapted from other existing games. Thus they tended to be more complicated and gravitated toward requiring computer assistance. The proliferation of PBM companies in the 1980s supported the publication of a number of newsletters from individual play-by-mail companies as well as independent publications which focused solely on the play-by-mail gaming industry. As of 1983, The Nuts & Bolts of PBM was the primary magazine in this market. In July 1983, the first issue of Paper Mayhem was published. The first issue was a newsletter with a print run of 100. Flagship began publication in the United Kingdom in October 1983, the month before Gaming Universal's first issue was published in the United States. In the mid-1980s, general gaming magazines also began carrying articles on PBM and ran PBM advertisements. PBM games were featured in magazines like Games and Analog in 1984. In the early 1990s, Martin Popp also began publishing a quarterly PBM magazine in Sulzberg, Germany called Postspielbote. The PBM genre's two preeminent magazines of the period were Flagship and Paper Mayhem. In 1984, the PBM industry created a Play-by-Mail Association. This organization had multiple charter members by early 1985 and was holding elections for key positions. One of its proposed functions was to reimburse players who lost money after a PBM business failed. Paul Brown, the president of Reality Simulations, Inc., estimated in 1988 that there were about 20,000 steady play-by-mail gamers, with potentially another 10–20,000 who tried PBM gaming but did not stay. Flying Buffalo Inc. conducted a survey of 167 of its players in 1984. It indicated that 96% of its players were male with most in their 20s and 30s. Nearly half were white collar workers, 28% were students, and the remainder engineers and military. The 1990s brought changes to the PBM world. In the early 1990s, trending PBM games increased in complexity. 
In this period, email also became an option to transmit turn orders and results. These are called play-by-email (PBEM) games. Flagship reported in 1992 that they knew of 40 PBM gamemasters on Compuserve. One publisher in 2002 called PBM games "Interactive Strategy Games". Turn around time ranges for modern PBM games are wide enough that PBM magazine editors now use the term "turn-based games". Flagship stated in 2005 that "play-by-mail games are often called turn-based games now that most of them are played via the internet". In the 2023 issues of Suspense & Decision, the publisher used the term "Turn Based Distance Gaming". In the early 1990s, the PBM industry still maintained some of the player momentum from the 1980s. For example, in 1993, Flagship listed 185 active play-by-mail games. Patrick M. Rodgers also stated in Shadis magazine that the United States had over 300 PBM games. And in 1993, the Journal of the PBM Gamer stated that "For the past several years, PBM gaming has increased in popularity." That year, there were a few hundred PBM games available for play globally. However, in 1994, David Webber, Paper Mayhem's editor in chief expressed concern about disappointing growth in the PBM community and a reduction in play by established gamers. At the same time, he noted that his analysis indicated that more PBM gamers were playing less, giving the example of an average drop from 5–6 games per player to 2–3 games, suggesting it could be due to financial reasons. In early 1997, David Webber stated that multiple PBM game moderators had noted a drop in players over the previous year. By the end of the 1990s, the number of PBM publications had also declined. Gaming Universal's final publication run ended in 1988. Paper Mayhem ceased publication unexpectedly in 1998 after Webber's death. Flagship also later ceased publication. The Internet affected the PBM world in various ways. Rick Loomis stated in 1999 that, "With the growth of the Internet, [PBM] seems to have shrunk and a lot of companies dropped out of the business in the last 4 or 5 years." Shannon Appelcline agreed, noting in 2014 that, "The advent of the Internet knocked most PBM publishers out of business." The Internet also enabled PBM to globalize between the 1990s and 2000s. Early PBM professional gaming typically occurred within single countries. In the 1990s, the largest PBM games were licensed globally, with "each country having its own licensee". By the 2000s, a few major PBM firms began operating globally, bringing about "The Globalisation of PBM" according to Sam Roads of Harlequin Games. By 2014 the PBM community had shrunk compared to previous decades. A single PBM magazine exists—Suspense and Decision—which began publication in November 2013. The PBM genre has also morphed from its original postal mail format with the onset of the digital age. In 2010, Carol Mulholland—the editor of Flagship—stated that "most turn-based games are now available by email and online". The online Suspense & Decision Games Index, as of June 2021, listed 72 active PBM, PBEM, and turn-based games. In a multiple-article examination of various online turn-based games in 2004 titled "Turning Digital", Colin Forbes concluded that "the number and diversity of these games has been enough to convince me that turn-based gaming is far from dead". Advantages and disadvantages of PBM gaming Judith Proctor noted that play-by-mail games have a number of advantages. 
These include (1) plenty of time—potentially days—to plan a move, (2) never lacking players to face who have "new tactics and ideas", (3) the ability to play an "incredibly complex" game against live opponents, (4) meeting diverse gamers from far-away locations, and (5) relatively low costs. In 2019, Rick McDowell, designer of Alamaze, compared PBM costs favorably with the high cost of board games at Barnes & Noble, with many of the latter going for about $70, and a top-rated game, Nemesis, costing $189. Andrew Greenberg pointed to the high number of players possible in a PBM game, comparing it to his past failure at attempting once to host a live eleven-player Dungeons & Dragons Game. Flagship noted in 2005 that "It's normal to play these ... games with international firms and a global player base. Games have been designed that can involve large numbers of players – much larger than can gather for face-to-face gaming." Finally, some PBM games can be played for years, if desired. Greenberg identified a number of drawbacks for play-by-mail games. He stated that the clearest was the cost, because most games require a setup cost and a fee per turn, and some games can become expensive. Another drawback is the lack of face-to-face interaction inherent in play-by-mail games. Finally, game complexity in some cases and occasional turn processing delays can be negatives in the genre. Description PBM games can include Combat, Diplomacy, Politics, Exploration, Economics, and Role-Playing, with combat a usual feature and open-ended games typically the most comprehensive. Jim Townsend identifies the two key figures in PBM games as the players and the moderators, the latter of which are companies that charge "turn fees" to players—the cost for each game turn. In 1993, Paper Mayhem—a magazine for play-by-mail gamers—described play-by-mail games thusly: PBM Games vary in the size of the games, turn around time, length of time a game lasts, and prices. An average PBM game has 10–20 players in it, but there are also games that have hundreds of players. Turn around time is the length of time it takes to get your turn back from a company. ... Some games never end. They can go on virtually forever or until you decide to drop. Many games have victory conditions that can be achieved within a year or two. Prices vary for the different PBM games, but the average price per turn is about $5.00. The earliest PBM games were played using the postal services of the respective countries. In 1990, the average turn-around time for a turn was 2–3 weeks. However, in the 1990s, email was introduced to PBM games. This was known as play-by-email (PBEM). Some games used email solely, while others, such as Hyborian War, used email as options for a portion of turn transmittal, with postal service for the remainder. Other games use digital media or web applications to allow players to make turns at speeds faster than postal mail. Given these changes, the term "turn-based games" is now being used by some commentators. Mechanics After the initial setup of a PBM game, players begin submitting turn orders. In general, players fill out an order sheet for a game and return it to the gaming company. The company processes the orders and sends back turn results to the players so they can make subsequent moves. R. Danard further separates a typical PBM turn into four parts. First, the company informs players on the results of the last turn. Next players conduct diplomatic activities, if desired. 
Then, they send their next turns to the gamemaster (GM). Finally, the turns are processed and the cycle is repeated. This continues until the game or a player is done. Complexity Jim Townsend stated in a 1990 issue of White Wolf Magazine that the complexity of PBM games is much higher than other types on the average. He noted that PBM games at the extreme high end can have a thousand or more players as well as thousands of units to manage, while turn printouts can range from a simple one-page result to hundreds of pages (with three to seven as the average). According to John Kevin Loth, "Novices should appreciate that some games are best played by veterans." In 1986, he highlighted the complexity of Midgard with its 100-page instruction manual and 255 possible orders. A.D. Young stated in 1982 that computers could assist PBM gamers in various ways including accounting for records, player interactions, and movements, as well as computation or analysis specific to individual games. Reviewer Jim Townsend asserted that Empyrean Challenge was "the most complex game system on Earth". Other games, like Galactic Prisoners began simply and gradually increased in complexity. As of August 2021, Rick Loomis PBM Games' had four difficulty levels: easy, moderate, hard, and difficult, with games such as Nuclear Destruction and Heroic Fantasy on the easy end and Battleplan—a military strategy game—rated as difficult. Diplomacy According to Paper Mayhem assistant editor Jim Townsend, "The most important aspect of PBM games is the diplomacy. If you don't communicate with the other players you will be labeled a 'loner', 'mute', or just plain 'dead meat'. You must talk with the others to survive". The editors of Paper Mayhem add that "The interaction with other players is what makes PBM enjoyable." Commentator Rob Chapman in a 1983 Flagship article echoed this advice, recommending that players get to know their opponents. He also recommended asking direct questions of opponents on their future intentions, as their responses, true or false, provide useful information. However, he advises players to be truthful in PBM diplomacy, as a reputation for honesty is useful in the long-term. Chapman notes that "everything is negotiable" and advises players to "Keep your plans flexible, your options open – don't commit yourself, or your forces, to any long term strategy". Eric Stehle, owner and operator of Empire Games in 1997, stated that some games cannot be won alone and require diplomacy. He suggested considering the following diplomatic points during gameplay: (1) "Know Your Neighbors", (2) "Make Sure Potential Allies Share Your Goals", (3) "Be A Good Ally", (4) "Coordinate Carefully With Your Allies", (5) "Be A Vicious Enemy", and (6) "Fight One Enemy At A Time". Game types and player roles Jim Townsend noted in 1990 that hundreds of PBM games were available, ranging from "all science fiction and fantasy themes to such exotics as war simulations (generally more complex world war games than those which wargamers play), duelling games, humorous games, sports simulations, etc". In 1993, Steve Pritchard described PBM game types as ancient wargames, diplomacy games, fantasy wargames, power games, roleplaying games, and sports games. Some PBM games defy easy categorization, such as Firebreather, which Joey Browning, the editor of the U.S. Flagship described as a "Fantasy Exploration" game. Play-by-mail games also provide a wide array of possible roles to play. These include "trader, fighter, explorer, [and] diplomat". 
Roles range from pirates to space characters to "previously unknown creatures". In the game Monster Island, players assume the role of a monster which explores a massive island (see image). And the title of the PBM game You're An Amoeba, GO! indicates an unusual role as players struggle "in a 3D pool of primordial ooze [directing] the evolution of a legion of micro-organisms". Loth advises that closer identification with a role increases enjoyment, but prioritizing this aspect requires more time searching for the right PBM game. Closed versus open ended According to John Kevin Loth III, open-ended games do not end and there is no final objective or way to win the game. Jim Townsend adds that, "players come and go, powers grow and diminish, alliances form and dissolve and so forth". Since surviving, rather than winning, is primary, this type of game tends to attract players more interested in role-playing, and Townsend echoes that open-ended games are similar to long-term RPG campaigns. A drawback of this type is that mature games have powerful groups that can pose an unmanageable problem for the beginneralthough some may see this situation as a challenge of sorts. Examples of open ended games are Heroic Fantasy, Monster Island, and SuperNova: Rise of the Empire. Townsend noted in 1990 that some open-ended games had been in play for up to a decade. Townsend states that "closed-ended games are like Risk or Monopolyonce they're over, they're over". Loth notes that most players in closed end games start equally and the games are "faster paced, usually more intense... presenting frequent player confrontation; [and] the game terminates when a player or alliance of players has achieved specific conditions or eliminated all opposition". Townsend stated in 1990 that closed-end games can have as few as ten and as many as eighty turns. Examples of closed-end games are Hyborian War, It's a Crime, and Starweb. Companies in the early 1990s also offered games with both open- and closed-ended versions. Additionally, games could have elements of both versions; for example, in Kingdom, an open-ended PBM game published by Graaf Simulations, a player could win by accumulating 50,000 points. Computer versus human moderated In the 1980s, PBM companies began using computers to moderate games. This was in part for economic reasons, as computers allowed the processing of more turns than humans, but with less of a human touch in the prose of a turn result. According to John Kevin Loth III, one hundred percent computer-moderated games would also kill a player's character or empire emotionlessly, regardless of the effort invested. Alternatively, Loth noted that those preferring exquisite pages of prose would gravitate toward one hundred percent human moderation. Loth provided Beyond the Quadra Zone and Earthwood as popular computer-moderated examples in 1986 and Silverdawn and Sword Lords as one hundred percent human-moderated examples of the period. Borderlands of Khataj is an example of a game where the company transitioned from human- to computer-moderated to mitigate issues related to a growing player base. In 1984, there was a shift toward mixed moderation—human moderated games with computer-moderated aspects such as combat. Examples included Delenda est Carthago, Star Empires, and Starglobe. 
In 1990, the editors of Paper Mayhem noted that there were games with a mix of computer and hand moderation, where games "would have the numbers run by the computer and special actions in the game would receive attention from the game master". Cost and turn processing time Loth noted that, in 1986, $3–5 per turn was the most prevalent cost. At the time, some games were free, while others cost as much as $100 per turn. PBM magazine Paper Mayhem stated that the average turn processing time in 1987 was two weeks, and Loth noted that this was also the most common. Some companies offered longer turnaround times for overseas players or other reasons. In 1985, the publisher for Angrelmar: The Court of Kings scheduled three month turn processing times after a break in operations. In 1986, play-by-email was a nascent service only being offered by the largest PBM companies. By the 1990s, players had more options for online play-by-mail games. For example, in 1995, World Conquest was available to play with hourly turns. In the 21st century, many games of this genre are called turn-based games and are played via the Internet. Game turns can be processed simultaneously or serially. In simultaneously processed games, the publisher processes turns from all players together according to an established sequence. In serial-processed games, turns are processed when received within the determined turn processing window. Information sources Rick Loomis of Flying Buffalo Games stated in 1985 that the Nuts & Bolts of PBM (first called Nuts & Bolts of Starweb) was the first PBM magazine not published by a PBM company. The name changed to Nuts & Bolts of Gaming and it eventually went out of print. In 1983, the U.S. PBM magazines Paper Mayhem and Gaming Universal began publication as well as Flagship in the UK. Also in 1983, PBM games were featured in magazines like Games and Analog in 1984 as well as Australia's gaming magazine Breakout in 1992. By 1985, Nuts & Bolts of Gaming and Gaming Universal in the U.S. were out of print. John Kevin Loth identified that, in 1986, the "three major information sources in PBM" were Paper Mayhem, Flagship, and the Play By Mail Association. These sources were solely focused on play-by-mail gaming. Additional PBM information sources included company-specific publications, although Rick Loomis stated that interest was limited to individual companies". Finally, play-by-mail gamers could also draw from "alliances, associations, and senior players" for information. In the mid-1980s, other gaming magazines also began venturing into PBM. For example, White Wolf Magazine began a regular PBM column beginning in issue #11 as well as publishing an annual PBM issue beginning with issue #16. The Space Gamer also carried PBM articles and reviews. Additional minor information sources included gaming magazines such as "Different Worlds ... Game New, Imagine, and White Dwarf". Dragon Publishing's Ares, Dragon, and Strategy and Tactics magazines provided PBM coverage along with Flying Buffalo's Sorcerer's Apprentice. Gaming magazine Micro Adventurer, which closed in 1985, also featured PBM games. Other PBM magazines in the late 1980s in the UK included Thrust, and Warped Sense of Humour. In the early 1990s, Martin Popp also began publishing a quarterly PBM magazine in Sulzberg, Germany called Postspielbote. In 1995, Post & Play Unlimited stated that it was the only German-language PBM magazine. In its March 1992 issue, Flagship stated that it checked Simcoarum Bimonthly for PBM news. 
Shadis magazine stated in 1994 that it had begun carrying a 16-page PBM section. This section, called "Post Marque", was discontinued after the March/April 1995 issue (#18), after which PBM coverage was integrated into other magazine sections. In its January–February 1995 issue, Flagship's editor noted that their "main European competitor" PBM Scroll had gone out of print. Flagship ran into the 21st century, but ceased publication in 2010. In November 2013, online PBM journal Suspense & Decision, began publication. Fiction Besides articles and reviews on PBM games, authors have also published PBM fiction articles according to Shannon Muir. An early example called "Scapegoat" by Mike Horn appeared in the May–June 1984 issue of Paper Mayhem magazine. Examples include "A Loaf of Bread" by Suzanna Y. Snow about the game A Duel of a Different Color, "Dark Beginnings" by Dave Bennett about Darkness of Silverfall, and Chris Harvey's "It Was the Only Thing He Could Do...", about a conglomeration of PBM games. Simon Williams, the gamemaster of the PBM game Chaos Trail in 2004, also wrote an article in Flagship about the possibility of writing a PBM fiction novel. See also List of play-by-mail games Play-by-post role-playing game Turn-based game Notes References Bibliography Interview with John C. Muir, long-time PBM author. 1984 article on the prospects of PBEM with assembled evidence from PBM figures such as Rick Loomis. Article about PBM on email and Compuserve. Magazine date: December–January 2003/2004. Further reading Early reviews of one game each by two of the larger PBM publishers of the period. Fiction article about a space-based PBM game called Star Battle Forever. External links Game terminology History of role-playing games Play-by-email video games Play-by-mail games Role-playing games Strategy games Tabletop games Wargames
23658
https://en.wikipedia.org/wiki/Philip%20K.%20Dick%20Award
Philip K. Dick Award
The Philip K. Dick Award is an American science fiction award given annually at Norwescon and sponsored by the Philadelphia Science Fiction Society and (since 2005) the Philip K. Dick Trust. Named after science fiction writer Philip K. Dick, it has been awarded since 1983, the year after his death. It is awarded to the best original paperback published each year in the US. The award was founded by Thomas Disch with assistance from David G. Hartwell, Paul S. Williams, and Charles N. Brown. As of 2016, it is administered by Pat LoBrutto, John Silbersack, and Gordon Van Gelder. Past administrators include Algis Budrys, David G. Hartwell, and David Alexander Smith. Winners and nominees Winners are listed in bold. Authors of special citation entries are listed in italics. The year in the table below indicates the year the book was published; winners are announced the following year. References External links List of all winning and nominated novels Science fiction awards Awards established in 1983 American literary awards D Philip K. Dick
23659
https://en.wikipedia.org/wiki/Plug-in%20%28computing%29
Plug-in (computing)
In computing, a plug-in (or plugin, add-in, addin, add-on, or addon) is a software component that adds a specific feature to an existing computer program. When a program supports plug-ins, it enables customization. A theme or skin is a preset package containing additional or changed graphical appearance details for a graphical user interface (GUI). It can be applied to specific software and websites to suit the purpose, topic, or tastes of different users, customizing the look and feel of a piece of computer software or an operating system front-end (and window managers). Purpose and examples Applications may support plug-ins to: enable third-party developers to extend an application; support easily adding new features; reduce the size of an application by not loading unused features; and separate source code from an application because of incompatible software licenses. Types of applications and why they use plug-ins: Digital audio workstations and audio editing software use audio plug-ins to generate, process or analyze sound. Ardour, Audacity, Cubase, FL Studio, Logic Pro X and Pro Tools are examples of such systems. Email clients use plug-ins to decrypt and encrypt email. Pretty Good Privacy is an example of such plug-ins. Video game console emulators often use plug-ins to modularize the separate subsystems of the devices they seek to emulate. For example, the PCSX2 emulator makes use of video, audio, optical, etc. plug-ins for those respective components of the PlayStation 2. Graphics software uses plug-ins to support file formats and process images. A Photoshop plug-in may do this. Broadcasting and live-streaming software, such as the open-source OBS Studio, uses plug-ins to meet user-specific needs. Media players use plug-ins to support file formats and apply filters. foobar2000, GStreamer, Quintessential, VST, Winamp, XMMS are examples of such media players. Packet sniffers use plug-ins to decode packet formats. OmniPeek is an example of such packet sniffers. Remote sensing applications use plug-ins to process data from different sensor types; e.g., Opticks. Text editors and integrated development environments use plug-ins to support programming languages or enhance the development process; e.g., Visual Studio, RAD Studio, Eclipse, IntelliJ IDEA, jEdit and MonoDevelop support plug-ins. Visual Studio itself can be plugged into other applications via Visual Studio Tools for Office and Visual Studio Tools for Applications. Web browsers have historically used executables as plug-ins, though they are now mostly deprecated. Examples include the Adobe Flash Player, a Java virtual machine (for Java applets), QuickTime, Microsoft Silverlight and the Unity Web Player. (Browser extensions, which are a separate type of installable module, are still widely in use.) Mechanism The host application provides services which the plug-in can use, including a way for plug-ins to register themselves with the host application and a protocol for the exchange of data with plug-ins. Plug-ins depend on the services provided by the host application and do not usually work by themselves. Conversely, the host application operates independently of the plug-ins, making it possible for end-users to add and update plug-ins dynamically without needing to make changes to the host application. Programmers typically implement plug-ins as shared libraries, which get dynamically loaded at run time. 
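As a minimal illustration of this registration mechanism, the following Python sketch shows a hypothetical host that loads plug-in modules from a directory and lets each one register itself through a register() hook; the class, directory, and function names here are illustrative assumptions rather than a standard API.

# Minimal plug-in host sketch (hypothetical names): discovers modules in ./plugins
# and lets each one register itself with the host.
import importlib.util
import pathlib

class PluginHost:
    def __init__(self):
        self.commands = {}  # registry filled in by plug-ins

    def register_command(self, name, func):
        # Service offered to plug-ins: add a named command to the host.
        self.commands[name] = func

    def load_plugins(self, directory="plugins"):
        for path in sorted(pathlib.Path(directory).glob("*.py")):
            spec = importlib.util.spec_from_file_location(path.stem, path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)  # run the plug-in code
            if hasattr(module, "register"):
                module.register(self)  # the plug-in registers itself with the host

# A plug-in file plugins/hello.py would only need:
#     def register(host):
#         host.register_command("hello", lambda: print("Hello from a plug-in"))
# The host could then be used as:
#     host = PluginHost(); host.load_plugins(); host.commands["hello"]()

The host in this sketch still runs if the plug-in directory is empty, reflecting the point that the host application operates independently of its plug-ins.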
HyperCard supported a similar facility, but more commonly included the plug-in code in the HyperCard documents (called stacks) themselves. Thus the HyperCard stack became a self-contained application in its own right, distributable as a single entity that end-users could run without the need for additional installation steps. Programs may also implement plug-ins by loading a directory of simple script files written in a scripting language like Python or Lua. Mozilla definition In Mozilla Foundation definitions, the words "add-on", "extension" and "plug-in" are not synonyms. "Add-on" can refer to anything that extends the functions of a Mozilla application. Extensions comprise a subtype, albeit the most common and the most powerful one. Mozilla applications come with integrated add-on managers that, similar to package managers, install, update and manage extensions. The term "plug-in", however, strictly refers to NPAPI-based web content renderers. Mozilla has deprecated plug-ins for its products, but UXP-based applications, such as the web browsers Pale Moon and Basilisk, continue to support (NPAPI) plug-ins. Helper application A helper application is an external viewer program, such as IrfanView or Adobe Reader, that displays content retrieved using a web browser. Unlike a plug-in, whose code would be included in the browser's address space, a helper application is a standalone application. Web browsers choose an appropriate helper application based on a file's media type as indicated by the filename extension. History In the mid-1970s, the EDT text editor ran on the Unisys VS/9 operating system for the UNIVAC Series 90 mainframe computer. It allowed a program to be run from the editor that could access the in-memory edit buffer. The plug-in executable could call the editor to inspect and change the text. The University of Waterloo Fortran compiler used this to allow interactive compilation of Fortran programs. Early personal computer software with plug-in capability included HyperCard and QuarkXPress on the Apple Macintosh, both released in 1987. In 1988, Silicon Beach Software included plug-in capability in Digital Darkroom and SuperPaint. See also Applet Browser extension References Application programming interfaces Technology neologisms
23660
https://en.wikipedia.org/wiki/Pierre%20Teilhard%20de%20Chardin
Pierre Teilhard de Chardin
Pierre Teilhard de Chardin ( ) (1 May 1881 – 10 April 1955) was a French Jesuit, Catholic priest, scientist, paleontologist, theologian, philosopher, and teacher. He was Darwinian and progressive in outlook and the author of several influential theological and philosophical books. His mainstream scientific achievements included taking part in the discovery of Peking Man. His more speculative ideas, sometimes criticized as pseudoscientific, have included a vitalist conception of the Omega Point. Along with Vladimir Vernadsky, they also contributed to the development of the concept of a noosphere. In 1962, the Congregation for the Doctrine of the Faith condemned several of Teilhard's works based on their alleged ambiguities and doctrinal errors. Some eminent Catholic figures, including Pope Benedict XVI and Pope Francis, have made positive comments on some of his ideas since. The response to his writings by scientists has been divided. Teilhard served in World War I as a stretcher-bearer. He received several citations, and was awarded the Médaille militaire and the Legion of Honor, the highest French order of merit, both military and civil. Life Early years Pierre Teilhard de Chardin was born in the Château of Sarcenat, Orcines, about 2.5 miles north-west of Clermont-Ferrand, Auvergne, French Third Republic, on 1 May 1881, as the fourth of eleven children of librarian Emmanuel Teilhard de Chardin (1844–1932) and Berthe-Adèle, née de Dompierre d'Hornoys of Picardy. His mother was a great-grandniece of the famous philosopher Voltaire. He inherited the double surname from his father, who was descended on the Teilhard side from an ancient family of magistrates from Auvergne originating in Murat, Cantal, ennobled under Louis XVIII of France. His father, a graduate of the École Nationale des Chartes, served as a regional librarian and was a keen naturalist with a strong interest in natural science. He collected rocks, insects and plants and encouraged nature studies in the family. Pierre Teilhard's spirituality was awakened by his mother. When he was twelve, he went to the Jesuit college of Mongré in Villefranche-sur-Saône, where he completed the Baccalauréat in philosophy and mathematics. In 1899, he entered the Jesuit novitiate in Aix-en-Provence. In October 1900, he began his junior studies at the Collégiale Saint-Michel de Laval. On 25 March 1901, he made his first vows. In 1902, Teilhard completed a licentiate in literature at the University of Caen. In 1901 and 1902, due to an anti-clerical movement in the French Republic, the government banned the Jesuits and other religious orders from France. This forced the Jesuits to go into exile on the island of Jersey in the United Kingdom. While there, his brother and sister in France died of illnesses and another sister was incapacitated by illness. The unexpected losses of his siblings at young ages caused Teilhard to plan to discontinue his Jesuit studies in science, and change to studying theology. He wrote that he changed his mind after his Jesuit novice master encouraged him to follow science as a legitimate way to God. Due to his strength in science subjects, he was despatched to teach physics and chemistry at the Collège de la Sainte Famille in Cairo, Khedivate of Egypt from 1905 until 1908. From there he wrote in a letter: "[I]t is the dazzling of the East foreseen and drunk greedily ... in its lights, its vegetation, its fauna and its deserts." 
For the next four years he was a Scholastic at Ore Place in Hastings, East Sussex where he acquired his theological formation. There he synthesized his scientific, philosophical and theological knowledge in the light of evolution. At that time he read Creative Evolution by Henri Bergson, about which he wrote that "the only effect that brilliant book had upon me was to provide fuel at just the right moment, and very briefly, for a fire that was already consuming my heart and mind." Bergson was a French philosopher who was influential in the traditions of analytic philosophy and continental philosophy. His ideas were influential on Teilhard's views on matter, life, and energy. On 24 August 1911, aged 30, Teilhard was ordained a priest. In the ensuing years, Bergson’s protege, the mathematician and philosopher Édouard Le Roy, was appointed successor to Bergson at the College de France. In 1921, Le Roy and Teilhard became friends and met weekly for long discussions. Teilhard wrote: "I loved him like a father, and owed him a very great debt . . . he gave me confidence, enlarged my mind, and served as a spokesman for my ideas, then taking shape, on “hominization” and the “noosphere.” Le Roy later wrote in one of his books: "I have so often and for so long talked over with Pierre Teilhard the views expressed here that neither of us can any longer pick out his own contribution.” Academic and scientific career Geology His father's strong interest in natural science and geology instilled the same in Teilhard from an early age, and would continue throughout his lifetime. As a child, Teilhard was intensely interested in the stones and rocks on his family's land and the neighboring regions. His father helped him develop his skills of observation. At the University of Paris, he studied geology, botany and zoology. After the French government banned all religious orders from France and the Jesuits were exiled to the island of Jersey in the UK, Teilhard deepened his geology knowledge by studying the rocks and landscape of the island. In 1920, he became a lecturer in geology at the Catholic University of Paris, and later a professor. He earned his doctorate in 1922. In 1923 he was hired to do geological research on expeditions in China by the renowned Jesuitical scientist and priest Emile Licent. In 1914, Licent with the sponsorship of the Jesuits founded one of the first museums in China and the first museum of natural science: the Musée Hoangho Paiho. In its first eight years, the museum was housed in the Chongde Hall of the Jesuits. In 1922, with the support of the Catholic Church and the French Concession, Licent built a special building for the museum on the land adjacent to the Tsin Ku University, which was founded by the Jesuits in China. With help from Teilhard and others, Licent collected over 200,000 paleontology, animal, plant, ancient human, and rock specimens for the museum, which still make up more than half of its 380,000 specimens. Many of the publications and writings of the museum and its related institute were included in the world's database of zoological, botanical, and paleontological literature, which is still an important basis for examining the early scientific records of the various disciplines of biology in northern China. Teilhard and Licent were the first to discover and examine the Shuidonggou (水洞沟) (Ordos Upland, Inner Mongolia) archaeological site in northern China. 
Recent analysis of flaked stone artifacts from the most recent (1980) excavation at this site has identified an assemblage which constitutes the southernmost occurrence of an Initial Upper Paleolithic blade technology proposed to have originated in the Altai region of Southern Siberia. The lowest levels of the site are now dated from 40,000 to 25,000 years ago. Teilhard spent the periods 1926-1935 and 1939-1945 studying and researching the geology and paleontology of the region. Among other accomplishments, he improved understanding of China’s sedimentary deposits and established approximate ages for various layers. He also produced a geological map of China. It was during the period 1926-1935 that he joined the excavation that discovered Peking Man. Paleontology From 1912 to 1914, Teilhard began his paleontology education by working in the laboratory of the French National Museum of Natural History, studying the mammals of the middle Tertiary period. Later he studied elsewhere in Europe. This included spending 5 days over the course of a 3-month period in the middle of 1913 as a volunteer assistant helping to dig with Arthur Smith Woodward and Charles Dawson at the Piltdown site. Teilhard’s brief time assisting with digging there occurred many months after the discovery of the first fragments of the fraudulent "Piltdown Man". A few people have suggested that he participated in the hoax, despite a lack of any evidence. Most Teilhard experts (including all three Teilhard biographers) and many scientists (including the scientists who uncovered the hoax and investigated it) have extensively refuted the suggestion that he participated, and say that he did not. Anthropologist H. James Birx wrote that Teilhard "had questioned the validity of this fossil evidence from the very beginning, one positive result was that the young geologist and seminarian now became particularly interested in paleoanthropology as the science of fossil hominids." Marcellin Boule, a palaeontologist and anthropologist, who as early as 1915 had recognized the non-hominid origins of the Piltdown finds, gradually guided Teilhard towards human paleontology. Boule was the editor of the journal L’Anthropologie and the founder of two other scientific journals. He was also a professor at the Parisian Muséum National d’Histoire Naturelle for 34 years, and for many years director of the museum's Institute of Human Paleontology. It was there that Teilhard became a friend of Henri Breuil, a Catholic priest, archaeologist, anthropologist, ethnologist and geologist. In 1913, Teilhard and Breuil carried out excavations at the prehistoric painted Cave of El Castillo in Spain. The cave contains the oldest known cave painting in the world. The site is divided into about 19 archeological layers in a sequence beginning in the Proto-Aurignacian and ending in the Bronze Age. Later, after his return to China in 1926, Teilhard was hired by the Cenozoic Laboratory at the Peking Union Medical College. Starting in 1928, he joined other geologists and paleontologists to excavate the sedimentary layers in the Western Hills near Zhoukoudian. At this site, the scientists discovered the so-called Peking man (Sinanthropus pekinensis), a fossil hominid dating back at least 350,000 years, which is part of the Homo erectus phase of human evolution. Teilhard became known worldwide as a result of his accessible explanations of the Sinanthropus discovery. He himself also made major contributions to the geology of this site. 
Teilhard's long stay in China gave him more time to think and write about evolution, as well as continue his scientific research. After the Peking Man discoveries, Breuil joined Teilhard at the site in 1931 and confirmed the presence of stone tools. Scientific writings During his career, Teilhard published many dozens of scientific papers in scholarly scientific journals. When they were published in collections as books, they took up 11 volumes. John Allen Grim, the co-founder and co-director of the Yale Forum on Religion and Ecology, said: "I think you have to distinguish between the hundreds of papers that Teilhard wrote in a purely scientific vein, about which there is no controversy. In fact, the papers made him one of the top two or three geologists of the Asian continent. So this man knew what science was. What he's doing in The Phenomenon and most of the popular essays that have made him controversial is working pretty much alone to try to synthesize what he's learned about through scientific discovery - more than with scientific method - what scientific discoveries tell us about the nature of ultimate reality.” Grim said those writing were controversial to some scientists because Teilhard combined theology and metaphysics with science, and controversial to some religious leaders for the same reason. Service in World War I Mobilized in December 1914, Teilhard served in World War I as a stretcher-bearer in the 8th Moroccan Rifles. For his valor, he received several citations, including the Médaille militaire and the Legion of Honor. During the war, he developed his reflections in his diaries and in letters to his cousin, Marguerite Teillard-Chambon, who later published a collection of them. (See section below) He later wrote: "...the war was a meeting ... with the Absolute." In 1916, he wrote his first essay: La Vie Cosmique (Cosmic life), where his scientific and philosophical thought was revealed just as his mystical life. While on leave from the military he pronounced his solemn vows as a Jesuit in Sainte-Foy-lès-Lyon on 26 May 1918. In August 1919, in Jersey, he wrote Puissance spirituelle de la Matière (The Spiritual Power of Matter). At the University of Paris, Teilhard pursued three unit degrees of natural science: geology, botany, and zoology. His thesis treated the mammals of the French lower Eocene and their stratigraphy. After 1920, he lectured in geology at the Catholic Institute of Paris and after earning a science doctorate in 1922 became an assistant professor there. Research in China In 1923 he traveled to China with Father Émile Licent, who was in charge of a significant laboratory collaboration between the National Museum of Natural History and Marcellin Boule's laboratory in Tianjin. Licent carried out considerable basic work in connection with Catholic missionaries who accumulated observations of a scientific nature in their spare time. Teilhard wrote several essays, including La Messe sur le Monde (the Mass on the World), in the Ordos Desert. In the following year, he continued lecturing at the Catholic Institute and participated in a cycle of conferences for the students of the Engineers' Schools. 
Two theological essays on original sin were sent to a theologian at his request on a purely personal basis: Chute, Rédemption et Géocentrie (Fall, Redemption and Geocentry) (July 1920) Notes sur quelques représentations historiques possibles du Péché originel (Note on Some Possible Historical Representations of Original Sin) (Works, Tome X, Spring 1922) The Church required him to give up his lecturing at the Catholic Institute in order to continue his geological research in China. Teilhard traveled again to China in April 1926. He would remain there for about twenty years, with many voyages throughout the world. He settled in Tianjin with Émile Licent until 1932, then in Beijing. Teilhard made five geological research expeditions in China between 1926 and 1935. They enabled him to establish a general geological map of China. In 1926–27, after a missed campaign in Gansu, Teilhard traveled in the Sanggan River Valley near Kalgan (Zhangjiakou) and made a tour in Eastern Mongolia. He wrote Le Milieu Divin (The Divine Milieu). Teilhard prepared the first pages of his main work Le Phénomène Humain (The Phenomenon of Man). The Holy See refused the Imprimatur for Le Milieu Divin in 1927. He joined the ongoing excavations of the Peking Man Site at Zhoukoudian as an advisor in 1926 and continued in the role for the Cenozoic Research Laboratory of the China Geological Survey following its founding in 1928. Teilhard resided in Manchuria with Émile Licent, staying in western Shanxi and northern Shaanxi with the Chinese paleontologist Yang Zhongjian and with Davidson Black, Chairman of the China Geological Survey. After a tour in Manchuria in the area of Greater Khingan with Chinese geologists, Teilhard joined the team of the American Central Asiatic Expedition in the Gobi Desert, organized in June and July by the American Museum of Natural History with Roy Chapman Andrews. Henri Breuil and Teilhard discovered that the Peking Man, the nearest relative of Anthropopithecus from Java, was a faber (worker of stones and controller of fire). Teilhard wrote L'Esprit de la Terre (The Spirit of the Earth). Teilhard took part as a scientist in the Croisière Jaune (Yellow Cruise) financed by André Citroën in Central Asia. Northwest of Beijing in Kalgan, he joined the Chinese group, which met up with the second part of the team, the Pamir group, in Aksu City. He remained with his colleagues for several months in Ürümqi, capital of Xinjiang. In 1933, Rome ordered him to give up his post in Paris. Teilhard subsequently undertook several explorations in the south of China. He traveled in the valleys of the Yangtze and Sichuan in 1934, then, the following year, in Guangxi and Guangdong. During all these years, Teilhard contributed considerably to the constitution of an international network of research in human paleontology related to the whole of eastern and southeastern Asia. He would be particularly associated in this task with two friends, Davidson Black and the Scot George Brown Barbour. Often he would visit France or the United States, only to leave these countries for further expeditions. World travels From 1927 to 1928, Teilhard was based in Paris. He journeyed to Leuven, Belgium, and to Cantal and Ariège, France. Between several articles in reviews, he met new people, such as Paul Valéry, who were to help him in issues with the Catholic Church. 
Answering an invitation from Henry de Monfreid, Teilhard undertook a journey of two months in Obock, in Harar in the Ethiopian Empire, and in Somalia with his colleague Pierre Lamarre, a geologist, before embarking in Djibouti to return to Tianjin. While in China, Teilhard developed a deep and personal friendship with Lucile Swan. During 1930–1931, Teilhard stayed in France and in the United States. During a conference in Paris, Teilhard stated: "For the observers of the Future, the greatest event will be the sudden appearance of a collective humane conscience and a human work to make." From 1932 to 1933, he began to meet people to clarify issues with the Congregation for the Doctrine of the Faith regarding Le Milieu divin and L'Esprit de la Terre. He met Helmut de Terra, a German geologist, at the International Geological Congress in Washington, D.C. Teilhard participated in the 1935 Yale–Cambridge expedition in northern and central India with the geologist Helmut de Terra and Patterson, who verified their assumptions on Indian Paleolithic civilisations in Kashmir and the Salt Range Valley. He then made a short stay in Java, on the invitation of Dutch paleontologist Gustav Heinrich Ralph von Koenigswald, to the site of Java Man. A second cranium, more complete, was discovered. Professor von Koenigswald had also found a tooth in a Chinese apothecary shop in 1934 that he believed belonged to a three-meter-tall ape, Gigantopithecus, which lived between one hundred thousand and around a million years ago. Fossilized teeth and bone (dragon bones) are often ground into powder and used in some branches of traditional Chinese medicine. In 1937, Teilhard wrote Le Phénomène spirituel (The Phenomenon of the Spirit) on board the boat Empress of Japan, where he met Sylvia Brett, Ranee of Sarawak. The ship took him to the United States. He received the Mendel Medal granted by Villanova University during the Congress of Philadelphia, in recognition of his works on human paleontology. He made a speech about evolution, the origins and the destiny of man. The New York Times dated 19 March 1937 presented Teilhard as the Jesuit who held that man descended from monkeys. Some days later, he was to be granted the Doctor Honoris Causa distinction from Boston College. Rome banned his work L'Énergie Humaine in 1939. By this point Teilhard was based again in France, where he was immobilized by malaria. During his return voyage to Beijing he wrote L'Energie spirituelle de la Souffrance (Spiritual Energy of Suffering) (Complete Works, tome VII). In 1941, Teilhard submitted to Rome his most important work, Le Phénomène Humain. By 1947, Rome forbade him to write or teach on philosophical subjects. The next year, Teilhard was called to Rome by the Superior General of the Jesuits, who hoped to acquire permission from the Holy See for the publication of Le Phénomène Humain. However, the prohibition on publishing it, previously issued in 1944, was renewed. Teilhard was also forbidden to take a teaching post in the Collège de France. Another setback came in 1949, when permission to publish Le Groupe Zoologique was refused. Teilhard was nominated to the French Academy of Sciences in 1950. He was forbidden by his superiors to attend the International Congress of Paleontology in 1955. The Supreme Authority of the Holy Office, in a decree dated 15 November 1957, forbade the works of de Chardin to be retained in libraries, including those of religious institutes. 
His books were not to be sold in Catholic bookshops and were not to be translated into other languages. Further resistance to Teilhard's work arose elsewhere. In April 1958, all Jesuit publications in Spain ("Razón y Fe", "Sal Terrae","Estudios de Deusto", etc.) carried a notice from the Spanish Provincial of the Jesuits that Teilhard's works had been published in Spanish without previous ecclesiastical examination and in defiance of the decrees of the Holy See. A decree of the Holy Office dated 30 June 1962, under the authority of Pope John XXIII, warned: The Diocese of Rome on 30 September 1963 required Catholic booksellers in Rome to withdraw his works as well as those that supported his views. Death Teilhard died in New York City, where he was in residence at the Jesuit Church of St. Ignatius Loyola, Park Avenue. On 15 March 1955, at the house of his diplomat cousin Jean de Lagarde, Teilhard told friends he hoped he would die on Easter Sunday. On the evening of Easter Sunday, 10 April 1955, during an animated discussion at the apartment of Rhoda de Terra, his personal assistant since 1949, Teilhard suffered a heart attack and died. He was buried in the cemetery for the New York Province of the Jesuits at the Jesuit novitiate, St. Andrew-on-Hudson, in Hyde Park, New York. With the moving of the novitiate, the property was sold to the Culinary Institute of America in 1970. Teachings Teilhard de Chardin wrote two comprehensive works, The Phenomenon of Man and The Divine Milieu. His posthumously published book, The Phenomenon of Man, set forth a sweeping account of the unfolding of the cosmos and the evolution of matter to humanity, to ultimately a reunion with Christ. In the book, Teilhard abandoned literal interpretations of creation in the Book of Genesis in favor of allegorical and theological interpretations. The unfolding of the material cosmos is described from primordial particles to the development of life, human beings and the noosphere, and finally to his vision of the Omega Point in the future, which is "pulling" all creation towards it. He was a leading proponent of orthogenesis, the idea that evolution occurs in a directional, goal-driven way. Teilhard argued in Darwinian terms with respect to biology, and supported the synthetic model of evolution, but argued in Lamarckian terms for the development of culture, primarily through the vehicle of education. Teilhard made a total commitment to the evolutionary process in the 1920s as the core of his spirituality, at a time when other religious thinkers felt evolutionary thinking challenged the structure of conventional Christian faith. He committed himself to what he thought the evidence showed. Teilhard made sense of the universe by assuming it had a vitalist evolutionary process. He interpreted complexity as the axis of evolution of matter into a geosphere, a biosphere, into consciousness (in man), and then to supreme consciousness (the Omega Point). Jean Houston's story of meeting Teilhard illustrates this point. Teilhard's unique relationship to both paleontology and Catholicism allowed him to develop a highly progressive, cosmic theology which took into account his evolutionary studies. Teilhard recognized the importance of bringing the Church into the modern world, and approached evolution as a way of providing ontological meaning for Christianity, particularly creation theology. For Teilhard, evolution was "the natural landscape where the history of salvation is situated." 
Teilhard's cosmic theology is largely predicated on his interpretation of Pauline scripture, particularly Colossians 1:15-17 (especially verse 1:17b) and 1 Corinthians 15:28. He drew on the Christocentrism of these two Pauline passages to construct a cosmic theology which recognizes the absolute primacy of Christ. He understood creation to be "a teleological process towards union with the Godhead, effected through the incarnation and redemption of Christ, 'in whom all things hold together' (Colossians 1:17)." He further posited that creation would not be complete until each "participated being is totally united with God through Christ in the Pleroma, when God will be 'all in all' (1 Corinthians 15:28)." Teilhard's life work was predicated on his conviction that human spiritual development is moved by the same universal laws as material development. He wrote, "...everything is the sum of the past" and "...nothing is comprehensible except through its history. 'Nature' is the equivalent of 'becoming', self-creation: this is the view to which experience irresistibly leads us. ... There is nothing, not even the human soul, the highest spiritual manifestation we know of, that does not come within this universal law." The Phenomenon of Man represents Teilhard's attempt at reconciling his religious faith with his academic interests as a paleontologist. One particularly poignant observation in Teilhard's book entails the notion that evolution is becoming an increasingly optional process. Teilhard points to the societal problems of isolation and marginalization as huge inhibitors of evolution, especially since evolution requires a unification of consciousness. He states that "no evolutionary future awaits anyone except in association with everyone else." Teilhard argued that the human condition necessarily leads to the psychic unity of humankind, though he stressed that this unity can only be voluntary; this voluntary psychic unity he termed "unanimization". Teilhard also states that "evolution is an ascent toward consciousness", giving encephalization as an example of early stages, and therefore, signifies a continuous upsurge toward the Omega Point which, for all intents and purposes, is God. Teilhard also used his perceived correlation between spiritual and material to describe Christ, arguing that Christ not only has a mystical dimension but also takes on a physical dimension as he becomes the organizing principle of the universe—that is, the one who "holds together" the universe. For Teilhard, Christ formed not only the eschatological end toward which his mystical/ecclesial body is oriented, but he also "operates physically in order to regulate all things" becoming "the one from whom all creation receives its stability." In other words, as the one who holds all things together, "Christ exercises a supremacy over the universe which is physical, not simply juridical. He is the unifying center of the universe and its goal. The function of holding all things together indicates that Christ is not only man and God; he also possesses a third aspect—indeed, a third nature—which is cosmic." In this way, the Pauline description of the Body of Christ was not simply a mystical or ecclesial concept for Teilhard; it is cosmic. This cosmic Body of Christ "extend[s] throughout the universe and compris[es] all things that attain their fulfillment in Christ [so that] ... the Body of Christ is the one single thing that is being made in creation." 
Teilhard describes this cosmic amassing of Christ as "Christogenesis". According to Teilhard, the universe is engaged in Christogenesis as it evolves toward its full realization at Omega, a point which coincides with the fully realized Christ. It is at this point that God will be "all in all" (1 Corinthians 15:28c). Eugenics and racism Teilhard has been criticized for incorporating elements of scientific racism, social Darwinism, and eugenics into his optimistic thinking about unlimited human progress. He argued in 1929 that racial inequality was rooted in biological difference: "Do the yellows—[the Chinese]—have the same human value as the whites? [Fr.] Licent and many missionaries say that their present inferiority is due to their long history of Paganism. I'm afraid that this is only a 'declaration of pastors.' Instead, the cause seems to be the natural racial foundation…" In a letter from 1936 explaining his Omega Point conception, he rejected both the Fascist quest for particularistic hegemony and the Christian/Communist insistence on egalitarianism: "As not all ethnic groups have the same value, they must be dominated, which does not mean they must be despised—quite the reverse … In other words, at one and the same time there should be official recognition of: (1) the primacy/priority of the earth over nations; (2) the inequality of peoples and races. Now the second point is currently reviled by Communism … and the Church, and the first point is similarly reviled by the Fascist systems (and, of course, by less gifted peoples!)". In the essay 'Human Energy' (1937), he asked, "What fundamental attitude … should the advancing wing of humanity take to fixed or definitely unprogressive ethnical groups? The earth is a closed and limited surface. To what extent should it tolerate, racially or nationally, areas of lesser activity? More generally still, how should we judge the efforts we lavish in all kinds of hospitals on saving what is so often no more than one of life's rejects? … To what extent should not the development of the strong … take precedence over the preservation of the weak?" The theologian John P. Slattery interprets this last remark to suggest "genocidal practices for the sake of eugenics". Even after World War II Teilhard continued to argue for racial and individual eugenics in the name of human progress, and denounced the United Nations declaration of the Equality of Races (1950) as "scientifically useless" and "practically dangerous" in a letter to the agency's director Jaime Torres Bodet. In 1953, he expressed his frustration at the Church's failure to embrace the scientific possibilities for optimising human nature, including by the separation of sexuality from reproduction (a notion later developed e.g. by the second-wave feminist Shulamith Firestone in her 1970 book The Dialectic of Sex), and postulated "the absolute right … to try everything right to the end—even in the matter of human biology". The theologian John F. Haught has defended Teilhard from Slattery's charge of "persistent attraction to racism, fascism, and genocidal ideas" by pointing out that Teilhard's philosophy was not based on racial exclusion but rather on union through differentiation, and that Teilhard took seriously the human responsibility for continuing to remake the world. With regard to union through differentiation, he underlined the importance of understanding properly a quotation used by Slattery in which Teilhard writes, "I hate nationalism and its apparent regressions to the past. 
But I am very interested in the primacy it returns to the collective. Could a passion for 'the race' represent a first draft of the Spirit of the Earth?" Writing from China in October 1936, shortly after the outbreak of the Spanish Civil War, Teilhard expressed his stance towards the new political movements in Europe: "I am alarmed at the attraction that various kinds of Fascism exert on intelligent (?) people who can see in them nothing but the hope of returning to the Neolithic". He felt that the choice between what he called "the American, the Italian, or the Russian type" of political economy (i.e. liberal capitalism, Fascist corporatism, Bolshevik Communism) had only "technical" relevance to his search for overarching unity and a philosophy of action. Relationship with the Catholic Church In 1925, Teilhard was ordered by the Superior General of the Society of Jesus, Włodzimierz Ledóchowski, to leave his teaching position in France and to sign a statement withdrawing his controversial statements regarding the doctrine of original sin. Rather than quit the Society of Jesus, Teilhard obeyed and departed for China. This was the first of a series of condemnations by a range of ecclesiastical officials that would continue until after Teilhard's death. In August 1939, he was told by his Jesuit superior in Beijing, "Father, as an evolutionist and a Communist, you are undesirable here, and will have to return to France as soon as possible". The climax of these condemnations was a 1962 monitum (warning) of the Congregation for the Doctrine of the Faith cautioning about Teilhard's works. It said: The Holy Office did not, however, place any of Teilhard's writings on the Index Librorum Prohibitorum (Index of Forbidden Books), which still existed during Teilhard's lifetime and at the time of the 1962 decree. Shortly thereafter, prominent clerics mounted a strong theological defense of Teilhard's works. Henri de Lubac (later a Cardinal) wrote three comprehensive books on the theology of Teilhard de Chardin in the 1960s. While de Lubac mentioned that Teilhard was less than precise in some of his concepts, he affirmed the orthodoxy of Teilhard de Chardin and responded to Teilhard's critics: "We need not concern ourselves with a number of detractors of Teilhard, in whom emotion has blunted intelligence". Later that decade Joseph Ratzinger, a German theologian who became Pope Benedict XVI, spoke glowingly of Teilhard's Christology in his Introduction to Christianity: On 20 July 1981, the Holy See stated that, after consultation with Cardinals Casaroli and Šeper, the letter did not change the position of the warning issued by the Holy Office on 30 June 1962, which pointed out that Teilhard's work contained ambiguities and grave doctrinal errors. Cardinal Ratzinger in his book The Spirit of the Liturgy incorporates Teilhard's vision as a touchstone of the Catholic Mass: Cardinal Avery Dulles said in 2004: Cardinal Christoph Schönborn wrote in 2007: In July 2009, Vatican spokesman Federico Lombardi said, "By now, no one would dream of saying that [Teilhard] is a heterodox author who shouldn't be studied." Pope Francis refers to Teilhard's eschatological contribution in his encyclical Laudato si'. The philosopher Dietrich von Hildebrand severely criticized the work of Teilhard. According to Hildebrand, in a conversation after a lecture by Teilhard: "He (Teilhard) ignored completely the decisive difference between nature and supernature. 
After a lively discussion in which I ventured a criticism of his ideas, I had an opportunity to speak to Teilhard privately. When our talk touched on St. Augustine, he exclaimed violently: 'Don't mention that unfortunate man; he spoiled everything by introducing the supernatural.'" Von Hildebrand writes that Teilhardism is incompatible with Christianity, substitutes efficiency for sanctity, dehumanizes man, and describes love as merely cosmic energy. Evaluations by scientists Julian Huxley Julian Huxley, the evolutionary biologist, in the preface to the 1955 edition of The Phenomenon of Man, praised the thought of Teilhard de Chardin for looking at the way in which human development needs to be examined within a larger integrated universal sense of evolution, though admitting he could not follow Teilhard all the way. In the publication Encounter, Huxley wrote: "The force and purity of Teilhard's thought and expression ... has given the world a picture not only of rare clarity but pregnant with compelling conclusions." Theodosius Dobzhansky Theodosius Dobzhansky, writing in 1973, drew upon Teilhard's insistence that evolutionary theory provides the core of how man understands his relationship to nature, calling him "one of the great thinkers of our age". Dobzhansky was renowned as the president of four prestigious scientific associations: the Genetics Society of America, the American Society of Naturalists, the Society for the Study of Evolution and the American Society of Zoologists. He also called Teilhard "one of the greatest intellects of our time." Daniel Dennett Daniel Dennett claimed "it has become clear to the point of unanimity among scientists that Teilhard offered nothing serious in the way of an alternative to orthodoxy; the ideas that were peculiarly his were confused, and the rest was just bombastic redescription of orthodoxy." David Sloan Wilson In 2019, evolutionary biologist David Sloan Wilson praised Teilhard's book The Phenomenon of Man as "scientifically prophetic in many ways", and considers his own work an updated version of it, commenting that "[m]odern evolutionary theory shows that what Teilhard meant by the Omega Point is achievable in the foreseeable future." Robert Francoeur Robert Francoeur (1931-2012), the American biologist, said the Phenomenon of Man "will be one of the few books that will be remembered after the dust of the century has settled on many of its companions." Stephen Jay Gould In an essay published in the magazine Natural History (and later compiled as the 16th essay in his book Hen's Teeth and Horse's Toes), American biologist Stephen Jay Gould made a case for Teilhard's guilt in the Piltdown Hoax, arguing that Teilhard had made several compromising slips of the tongue in his correspondence with paleontologist Kenneth Oakley, in addition to what Gould termed his "suspicious silence" about Piltdown, despite the discovery having been, at that time, an important milestone in his career. In a later book, Gould claims that Steven Rose wrote that "Teilhard is revered as a mystic of genius by some, but among most biologists is seen as little more than a charlatan." Numerous scientists and Teilhard experts have refuted Gould's theories about Teilhard's guilt in the hoax, saying they are based on inaccuracies. In an article in New Scientist in September 1981, Peter Costello said claims that Teilhard had been silent were factually wrong: "Much else of what is said about Teilhard is also wrong. …. 
After the exposure of the hoax, he did not refuse to make a statement; he gave a statement to the press on 26 November, 1953, which was published in New York and London the next day. .... If questions needed to be asked about Teilhard's role in the Piltdown affair, they could have been asked when he was in London during the summer of 1953. They were not asked. But enough is now known to prove Teilhard innocent of all involvement in the hoax." Teilhard also wrote multiple letters about the hoax at the request of and in reply to Oakley, one of the three scientists who uncovered it, in an effort to help them get to the bottom of what occurred 40 years earlier. Another of the three scientists, S.J. Weiner, said he spoke to Teilhard extensively about Piltdown and "He (Teilhard) discussed all the points that I put to him perfectly frankly and openly." Weiner spent years investigating who was responsible for the hoax and concluded that Charles Dawson was the sole culprit. He also said: "Gould would have you accept that Oakley was the same mind (as himself); but it is not so. When Gould's article came out Oakley dissociated himself from it. ...I have seen Oakley recently and he has no reservations... about his belief that Teilhard had nothing to do with the planting of this material and manufacture of the fraud." In November 1981, Oakley himself published a letter in New Scientist saying: "There is no proved factual evidence known to me that supports the premise that Father Teilhard de Chardin gave Charles Dawson a piece of fossil elephant molar tooth as a souvenir of his time spent in North Africa. This faulty thread runs throughout the reconstruction ... After spending a year thinking about this accusation, I have at last become convinced that it is erroneous." Oakley also pointed out that after Teilhard got his degree in paleontology and gained experience in the field, he published scientific articles that show he found the scientific claims of the two Piltdown leaders to be incongruous, and that Teilhard did not agree they had discovered an ape-man that was a missing link between apes and humans. In a comprehensive rebuttal of Gould in America magazine, Mary Lukas said his claims about Teilhard were "patently ridiculous" and "wilder flights of fancy" that were easily disprovable and weak. For example, she notes Teilhard was only briefly and minimally involved in the Piltdown project, for four reasons: 1) He was only a student in his early days of studying paleontology. 2) His college was in France, and he was at the Piltdown site in Britain for a total of just five days over a short period of three months out of the seven-year project. 3) He was simply a volunteer assistant, helping with basic digging. 4) This limited involvement ended prior to the most important claimed discovery, due to his being conscripted to serve in the French army. She added: "Further, according to his letters, both published and unpublished, to friends, Teilhard's relationship to Dawson was anything but close." Lukas said Gould made the claims for selfish reasons: "The charge gained Mr. Gould two weeks of useful publicity and prepared reviewers to give a friendly reception to the collection of essays" that he was about to publish. She said Teilhard was "beyond doubt the most famous of" all the people who were involved in the excavations and "the one who could gather headlines most easily…. 
The shock value of the suggestion that the philosopher-hero was also a criminal was stunning.” Two years later, Lukas published a more detailed article in the British scholarly journal Antiquity in which she further refuted Gould, including an extensive timeline of events. Winifred McCulloch wrote a very detailed rebuttal of Gould, calling his claim “highly subjective,” “very idiosyncratic,” filled with clear “weaknesses” and “shown to be impossible.” She said Weiner had criticized Gould's accusations in a talk at Georgetown University in 1981. She also noted that Oakley wrote in a letter to Lukas in 1981 that her article in America constituted "a total refutation of Gould's interpretation of Teilhard's letters to me in 1953-1954. . . . You have . . . unearthed evidence that will seriously undermine Gould's confidence in having any evidence against Teilhard in regard to what he (Teilhard) said in his letters to me." She wrote: "Gould's method of presenting his main argument might be called inferred intent - projecting onto Teilhard ways of thinking and acting that have no evidential base and are completely foreign to all we know of Teilhard. With Gould it seems that the guilty verdict came first, then he created a persona to fit the crime.” Peter Medawar In 1961, British immunologist and Nobel laureate Peter Medawar wrote a scornful review of The Phenomenon of Man for the journal Mind: "the greater part of it [...] is nonsense, tricked out with a variety of metaphysical conceits, and its author can be excused of dishonesty only on the grounds that before deceiving others he has taken great pains to deceive himself. [...] Teilhard practiced an intellectually unexacting kind of science [...]. He has no grasp of what makes a logical argument or what makes for proof. He does not even preserve the common decencies of scientific writing, though his book is professedly a scientific treatise. [...] Teilhard habitually and systematically cheats with words [...], uses in metaphor words like energy, tension, force, impetus, and dimension as if they retained the weight and thrust of their special scientific usages. [...] It is the style that creates the illusion of content." In 2014, Donald Wayne Viney evaluated Medawar's review and concluded that the case made against Teilhard was "remarkably thin, marred by misrepresentations and elementary philosophical blunders." These defects, Viney noted, were uncharacteristic of Medawar's other work. In another response, John Allen Grim said when Teilhard "wrote The Phenomenon of Man … he was using science there in a very broad sense. What he was really looking for was to be actually more radically empirical than conventional science is. Conventional science leaves out so much that's really there, especially our own subjectivity and some of the other things that are qualitative and value laden that are going on in the world. That science … has abstracted from values, meaning, subjectivity, purpose, God, and talked only about physical causation. Teilhard knew this, because when he wrote his [science journal] papers, he didn't bring God, value and so forth into it. But when he wrote The Phenomenon, he was doing something different. But it's not against the spirit of science. It was to actually expand the empirical orientation of science to take into account things that science unfortunately leaves out, like consciousness, for example, which today, in a materialist worldview, doesn't even exist, and yet it's the most palpable experience that any of us has. 
So if you try to construct a worldview that leaves out something so vital and important as mind to subjectivity, then that's unempirical, that's irrelevant. What we need is a radically empirical approach to the world that includes within what he calls hyperphysics, the experience of consciousness and also the experiences of faith, religions." Richard Dawkins The evolutionary biologist and New Atheist Richard Dawkins called Medawar's review "devastating" and The Phenomenon of Man "the quintessence of bad poetic science". Karl Stern Karl Stern, the neurobiologist of the Montreal Neurological Institute, wrote: "It happens so rarely that science and wisdom are blended as they were in the person of Teilhard de Chardin." George Gaylord Simpson George Gaylord Simpson felt that if Teilhard were right, the lifework "of Huxley, Dobzhansky, and hundreds of others was not only wrong, but meaningless", and was mystified by their public support for him. He considered Teilhard a friend and his work in paleontology extensive and important, but expressed strongly adverse views of his contributions as scientific theorist and philosopher. William G. Pollard William G. Pollard, the physicist and founder of the prestigious Oak Ridge Institute of Nuclear Studies (and its director until 1974), praised Teilhard's work as "A fascinating and powerful presentation of the amazing fact of the emergence of man in the unfolding drama of the cosmos." John Barrow and Frank Tipler John Barrow and Frank Tipler, both physicists and cosmologists, base much of their work on Teilhard and use some of his key terms such as the Omega point. However, Manuel Alfonseca, author of 50 books and 200 technical articles, said in an article in the quarterly Popular Science: "Barrow and Tipler have not understood Teilhard (apparently they have just read 'The Phenomenon of Man', at least this is the only work by Teilhard they mention). In fact, they have got everything backwards." Wolfgang Smith Wolfgang Smith, an American scientist versed in Catholic theology, devotes an entire book to the critique of Teilhard's doctrine, which he considers neither scientific (assertions without proofs), nor Catholic (personal innovations), nor metaphysical (the "Absolute Being" is not yet absolute), and of which the following elements can be noted (all the words in quotation marks are Teilhard's, quoted by Smith): Evolution Smith claims that for Teilhard, evolution is not only a scientific theory but an irrefutable truth "immune from any subsequent contradiction by experience"; it constitutes the foundation of his doctrine. Matter becomes spirit and humanity moves towards a super-humanity thanks to complexification (physico-chemical, then biological, then human), socialization, scientific research and technological and cerebral development; the explosion of the first atomic bomb is one of its milestones, while waiting for "the vitalization of matter by the creation of super-molecules, the remodeling of the human organism by means of hormones, control of heredity and sex by manipulation of genes and chromosomes [...]". Matter and spirit Teilhard maintains that the human spirit (which he identifies with the anima and not with the spiritus) originates in a matter which becomes more and more complex until it produces life, then consciousness, then the consciousness of being conscious, holding that the immaterial can emerge from the material. 
At the same time, he supports the idea of the presence of embryos of consciousness from the very genesis of the universe: "We are logically forced to assume the existence [...] of some sort of psyche" infinitely diffuse in the smallest particle. Theology Smith believes that since Teilhard affirms that "God creates evolutively", he denies the Book of Genesis, not only because it attests that God created man, but that he created him in his own image, thus perfect and complete, then that man fell, that is to say the opposite of an ascending evolution. That which is metaphysically and theologically "above" - symbolically speaking - becomes for Teilhard "ahead", yet to come; even God, who is neither perfect nor timeless, evolves in symbiosis with the World, which Teilhard, a resolute pantheist, venerates as the equal of the Divine. As for Christ, not only is he there to activate the wheels of progress and complete the evolutionary ascent, but he himself evolves. New religion As he wrote to a cousin: "What dominates my interests increasingly is the effort to establish in me and define around me a new religion (call it a better Christianity, if you will)...", and elsewhere: "a Christianity re-incarnated for a second time in the spiritual energies of Matter". The more Teilhard refines his theories, the more he emancipates himself from established Christian doctrine: a "religion of the earth" must replace a "religion of heaven". By their common faith in Man, he writes, Christians, Marxists, Darwinists, materialists of all kinds will ultimately join around the same summit: the Christic Omega Point. Lucien Cuénot Lucien Cuénot, the biologist who proved that Mendelism applied to animals as well as plants through his experiments with mice, wrote: "Teilhard's greatness lay in this, that in a world ravaged by neurosis he provided an answer to our modern anguish and reconciled man with the cosmos and with himself by offering him an "ideal of humanity that, through a higher and consciously willed synthesis, would restore the instinctive equilibrium enjoyed in ages of primitive simplicity." Mendelism is a group of biological inheritance principles developed by the Catholic friar-scientist Gregor Mendel. Though for many years Mendelism was rejected by most biologists and other scientists, its principles - combined with the Boveri–Sutton chromosome theory of inheritance - eventually became the core of classical genetics. Legacy Brian Swimme wrote "Teilhard was one of the first scientists to realize that the human and the universe are inseparable. The only universe we know about is a universe that brought forth the human." George Gaylord Simpson named the most primitive and ancient genus of true primate, the Eocene genus Teilhardina. On June 25, 1947, Teilhard was honored by the French Ministry of Foreign Affairs for "Outstanding services to the intellectual and scientific influence of France" and was promoted to the rank of Officer in the Legion of Honor. In 1950, Teilhard was elected a member of the French Academy of Sciences. Influence on arts and culture Teilhard and his work continue to influence the arts and culture. Characters based on Teilhard appear in several novels, including Jean Telemond in Morris West's The Shoes of the Fisherman (mentioned by name and quoted by Oskar Werner playing Fr. Telemond in the movie version of the novel). In Dan Simmons' 1989–97 Hyperion Cantos, Teilhard de Chardin has been canonized a saint in the far future. 
His work inspires the anthropologist priest character, Paul Duré. When Duré becomes Pope, he takes Teilhard I as his regnal name. Teilhard appears as a minor character in the play Fake by Eric Simonson, staged by Chicago's Steppenwolf Theatre Company in 2009, involving a fictional solution to the infamous Piltdown Man hoax. References to Teilhard range from an auto mechanic quoting him in Philip K. Dick's A Scanner Darkly to his work providing a philosophical underpinning for the plot of Julian May's 1987–94 Galactic Milieu Series. Teilhard also plays a major role in Annie Dillard's 1999 For the Time Being. Teilhard is mentioned by name and the Omega Point briefly explained in Arthur C. Clarke's and Stephen Baxter's The Light of Other Days. The title of the short-story collection Everything That Rises Must Converge by Flannery O'Connor is a reference to Teilhard's work. The American novelist Don DeLillo's 2010 novel Point Omega borrows its title and some of its ideas from Teilhard de Chardin. Robert Wright, in his book Nonzero: The Logic of Human Destiny, compares his own naturalistic thesis that biological and cultural evolution are directional and, possibly, purposeful, with Teilhard's ideas. Teilhard's work also inspired philosophical ruminations by Italian laureate architect Paolo Soleri and Mexican writer Margarita Casasús Altamirano. In artworks: French painter Alfred Manessier's L'Offrande de la terre ou Hommage à Teilhard de Chardin and American sculptor Frederick Hart's acrylic sculpture The Divine Milieu: Homage to Teilhard de Chardin. A sculpture of the Omega Point by Henry Setter, with a quote from Teilhard de Chardin, can be found at the entrance to the Roesch Library at the University of Dayton. The Spanish painter Salvador Dalí was fascinated by Teilhard de Chardin and the Omega Point theory. His 1959 painting The Ecumenical Council is said to represent the "interconnectedness" of the Omega Point. Edmund Rubbra's 1968 Symphony No. 8 is titled Hommage à Teilhard de Chardin. The Embracing Universe, an oratorio for choir and 7 instruments, composed by Justin Grounds to a libretto by Fred LaHaye, saw its first performance in 2019. It is based on the life and thought of Teilhard de Chardin. College campuses: a building at the University of Manchester, residence dormitories at Gonzaga University, and residence dormitories at Seattle University. The De Chardin Project, a play celebrating Teilhard's life, ran from 20 November to 14 December 2014 in Toronto, Canada. The Evolution of Teilhard de Chardin, a documentary film on Teilhard's life, was scheduled for release in 2015. George Addair founded Omega Vector in 1978, basing much of it on Teilhard's work. The American physicist Frank J. Tipler has further developed Teilhard's Omega Point concept in two controversial books, The Physics of Immortality and the more theologically based Physics of Christianity. While keeping the central premise of Teilhard's Omega Point (i.e. a universe evolving towards a maximum state of complexity and consciousness), Tipler has supplanted some of the more mystical/theological elements of the OPT with his own scientific and mathematical observations (as well as some elements borrowed from Freeman Dyson's eternal intelligence theory). 
In 1972, the Uruguayan priest Juan Luis Segundo, in his five-volume series A Theology for Artisans of a New Humanity, wrote that Teilhard "noticed the profound analogies existing between the conceptual elements used by the natural sciences—all of them being based on the hypothesis of a general evolution of the universe." Influence of his cousin Marguerite Teillard-Chambon (alias Claude Aragonnès) was a French writer who edited and had published three volumes of correspondence with her cousin, Pierre Teilhard de Chardin, "La genèse d'une pensée" ("The Making of a Mind") being the last, published after her own death in 1959. She furnished each with an introduction. Marguerite, a year older than Teilhard, was considered among those who knew and understood him best. They had shared a childhood in Auvergne; it was she who encouraged him to undertake a doctorate in science at the Sorbonne; she eased his entry into the Catholic Institute through her connection to Emmanuel de Margerie, and she introduced him to the intellectual life of Paris. Throughout the First World War, she corresponded with him, acting as a "midwife" to his thinking, helping his thought to emerge and honing it. In September 1959 she participated in a gathering organised at Saint-Babel, near Issoire, devoted to Teilhard's philosophical contribution. On the way home to Chambon-sur-Lac, she was fatally injured in a road traffic accident. Her sister, Alice, completed the final preparations for the publication of the final volume of her cousin Teilhard's wartime letters. Influence on the New Age movement Teilhard has had a profound influence on the New Age movements and has been described as "perhaps the man most responsible for the spiritualization of evolution in a global and cosmic context". Other Fritjof Capra's systems theory book The Turning Point: Science, Society, and the Rising Culture positively contrasts Teilhard to Darwinian evolution. Bibliography The dates in parentheses are the dates of first publication in French and English. Most of these works were written years earlier, but Teilhard's ecclesiastical order forbade him to publish them because of their controversial nature. The essay collections are organized by subject rather than date, thus each one typically spans many years. Le Phénomène Humain (1955), written 1938–40, scientific exposition of Teilhard's theory of evolution. The Phenomenon of Man (1959), Harper Perennial 1976; reprint 2008. The Human Phenomenon (1999), Brighton: Sussex Academic, 2003. Letters From a Traveler (1956; English translation 1962), written 1923–55. Le Groupe Zoologique Humain (1956), written 1949, more detailed presentation of Teilhard's theories. Man's Place in Nature (English translation 1966). Le Milieu Divin (1957), spiritual book written 1926–27, in which the author seeks to offer a way for everyday life, i.e. the secular, to be divinized. The Divine Milieu (1960), Harper Perennial 2001. L'Avenir de l'Homme (1959), essays written 1920–52, on the evolution of consciousness (noosphere). The Future of Man (1964), Image 2004. Hymn of the Universe (1961; English translation 1965), Harper and Row, mystical/spiritual essays and thoughts written 1916–55. L'Energie Humaine (1962), essays written 1931–39, on morality and love. Human Energy (1969), Harcourt Brace Jovanovich. L'Activation de l'Energie (1963), sequel to Human Energy, essays written 1939–55 but not planned for publication, about the universality and irreversibility of human action. 
Activation of Energy (1970), Harvest/HBJ 2002. Je M'Explique (1966), Jean-Pierre Demoulin, editor, "The Essential Teilhard" — selected passages from his works. Let Me Explain (1970), Harper and Row; Collins/Fontana 1973. Christianity and Evolution, Harvest/HBJ 2002. The Heart of the Matter, Harvest/HBJ 2002. Toward the Future, Harvest/HBJ 2002. The Making of a Mind: Letters from a Soldier-Priest 1914–1919, Collins (1965), letters written during wartime. Writings in Time of War, Collins (1968), composed of spiritual essays written during wartime. One of the few books of Teilhard to receive an imprimatur. Vision of the Past, Collins (1966), composed of mostly scientific essays published in the French science journal Etudes. The Appearance of Man, Collins (1965), composed of mostly scientific writings published in the French science journal Etudes. Letters to Two Friends 1926–1952, Fontana (1968). Composed of personal letters on varied subjects including his understanding of death. See Letters to Léontine Zanta, Collins (1969). Correspondence / Pierre Teilhard de Chardin, Maurice Blondel, Herder and Herder (1967). This correspondence also has both the imprimatur and nihil obstat. See also Thomas Berry Henri Bergson Henri Breuil Henri de Lubac Law of Complexity/Consciousness Edouard Le Roy List of Jesuit scientists List of Roman Catholic scientist-clerics List of science and religion scholars Notes References Further reading Amir Aczel, The Jesuit and the Skull: Teilhard de Chardin, Evolution and the Search for Peking Man (Riverhead Hardcover, 2007) Pope Benedict XVI, The Spirit of the Liturgy (Ignatian Press 2000) Pope Benedict XVI, Introduction to Christianity (Ignatius Press, Revised edition, 2004) Paul Churchland, "Man and Cosmos" John Cowburn, Pierre Teilhard de Chardin, a Selective Summary of His Life (Mosaic Press 2013) (UK edition: London: Burns & Oates, 1965; original French: Pierre Teilhard de Chardin: les grandes étapes de son évolution, Paris: Plon, 1958) Andre Dupleix, 15 Days of Prayer with Teilhard de Chardin (New City Press, 2008) Enablers, T.C., 2015. 'Hominising – Realising Human Potential' Robert Faricy, Teilhard de Chardin's Theology of Christian in the World (Sheed and Ward 1968) Robert Faricy, The Spirituality of Teilhard de Chardin (Collins 1981, Harper & Row 1981) Robert Faricy and Lucy Rooney, Praying with Teilhard de Chardin (Queenship 1996) David Grumett, Teilhard de Chardin: Theology, Humanity and Cosmos (Peeters 2005) Dietrich von Hildebrand, Teilhard de Chardin: A False Prophet (Franciscan Herald Press 1970) Dietrich von Hildebrand, Trojan Horse in the City of God Dietrich von Hildebrand, Devastated Vineyard Thomas M. King, Teilhard's Mass; Approaches to "The Mass on the World" (Paulist Press, 2005) Ursula King, Spirit of Fire: The Life and Vision of Teilhard de Chardin (Orbis Books, 1996) Richard W. Kropf, Teilhard, Scripture and Revelation: A Study of Teilhard de Chardin's Reinterpretation of Pauline Themes (Associated University Press, 1980) David H. 
Lane, The Phenomenon of Teilhard: Prophet for a New Age (Mercer University Press) de Lubac, Henri, The Religion of Teilhard de Chardin (Image Books, 1968) de Lubac, Henri, The Faith of Teilhard de Chardin (Burns and Oates, 1965) de Lubac, Henri, The Eternal Feminine: A Study of the Text of Teilhard de Chardin (Collins, 1971) de Lubac, Henri, Teilhard Explained (Paulist Press, 1968) Helmut de Terra, Memories of Teilhard de Chardin (Harper and Row and Wm Collins Sons & Co., 1964) Mary and Ellen Lukas, Teilhard (Doubleday, 1977) Jean Maalouf, Teilhard de Chardin, Reconciliation in Christ (New City Press, 2002) George A. Maloney, The Cosmic Christ: From Paul to Teilhard (Sheed and Ward, 1968) Mooney, Christopher, Teilhard de Chardin and the Mystery of Christ (Image Books, 1968) Murray, Michael H., The Thought of Teilhard de Chardin (Seabury Press, N.Y., 1966) Robert J. O'Connell, Teilhard's Vision of the Past: The Making of a Method (Fordham University Press, 1982) Noel Keith Roberts, From Piltdown Man to Point Omega: the evolutionary theory of Teilhard de Chardin (New York, Peter Lang, 2000) James F. Salmon, 'Pierre Teilhard de Chardin' in The Blackwell Companion to Science and Christianity (Wiley-Blackwell, 2012) Louis M. Savory, Teilhard de Chardin – The Divine Milieu Explained: A Spirituality for the 21st Century (Paulist Press, 2007) Robert Speaight, The Life of Teilhard de Chardin (Harper and Row, 1967) K. D. Sethna, Teilhard de Chardin and Sri Aurobindo: a focus on fundamentals, Bharatiya Vidya Prakasan, Varanasi (1973) K. D. Sethna, The Spirituality of the Future: A search apropos of R. C. Zaehner's study in Sri Aurobindo and Teilhard De Chardin. Fairleigh Dickinson University, 1981. External links Pro Teilhard de Chardin—A site devoted to the ideas of Teilhard de Chardin The Teilhard de Chardin Foundation The American Teilhard Association Teilhard de Chardin—A personal website Contra Warning Regarding the Writings of Father Teilhard de Chardin The Sacred Congregation of the Holy Office, 1962 McCarthy, John F. "A review of Teilhardism and the New Religion by Wolfgang Smith", 1989 Other Web pages and timeline about the Piltdown forgery hosted by the British Geological Survey "Teilhard de Chardin: His Importance in the 21st Century" - Georgetown University - June 23, 2015 1881 births 1955 deaths 20th-century French Catholic theologians 20th-century French geologists 20th-century French Jesuits Burials at St. Andrew-on-Hudson Cemetery Christian writers about eschatology Empiricists French cosmologists French male non-fiction writers French military personnel of World War I French religious writers French transhumanists Jesuit philosophers Jesuit scientists Jesuit theologians Left-wing politics in France Liberation theology Members of the French Academy of Sciences Metaphysicians Officers of the Legion of Honour Ontologists Orthogenesis People from Puy-de-Dôme Philosophers of religion French philosophers of science French philosophers of technology Rationalists Religion and science Singularitarians Theistic evolutionists University of Paris alumni Utilitarians Science activists
23661
https://en.wikipedia.org/wiki/Phutball
Phutball
Phutball (short for Philosopher's Football) is a two-player abstract strategy board game described in Elwyn Berlekamp, John Horton Conway, and Richard K. Guy's Winning Ways for your Mathematical Plays. Rules Phutball is played on the intersections of a 19×15 grid using one white stone and as many black stones as needed. In this article the two players are named Ohs (O) and Eks (X). The board is labeled A through P (omitting I) from left to right and 1 to 19 from bottom to top from Ohs' perspective. Rows 0 and 20 represent "off the board" beyond rows 1 and 19 respectively. As specialized phutball boards are hard to come by, the game is usually played on a 19×19 Go board, with a white stone representing the football and black stones representing the men. The objective is to score goals by using the men (the black stones) to move the football (the white stone) onto or over the opponent's goal line (rows 1 or 19). Ohs tries to move the football to rows 19 or 20 and Eks to rows 1 or 0. At the start of the game the football is placed on the central point, unless one player gives the other a handicap, in which case the ball starts nearer one player's goal. Players alternate making moves. A move is either to add a man to any vacant point on the board or to move the ball. There is no difference between men played by Ohs and those played by Eks. The football is moved by a series of jumps over adjacent men. Each jump is to the first vacant point in a straight line horizontally, vertically, or diagonally over one or more men. The jumped men are then removed from the board (before any subsequent jump occurs). This process repeats for as long as there remain men available to be jumped and the player desires. Jumping is optional: there is no requirement to jump. In contrast to checkers, multiple men in a row are jumped and removed as a group. The following example illustrates a single move consisting of a series of jumps. Ohs moves the football from K6–G9. The men on J7 and H8 are removed. Ohs moves the football from G9–G11. The man on G10 is removed. Ohs moves the football from G11–J11. The man on H11 is removed. Note that the move consisting of K6–G9–J9–G7 would not be legal, as that would jump the man on H8 twice. If the football ends the move on or over the opponent's goal line then a goal has been scored. If the football passes through a goal line, but ends up elsewhere due to further jumps, the game continues. Strategy Carefully set-up sequences of jumps can be "spoiled" by extending them at critical moments. A jump to the left or right edge can be blocked by leaving no vacant points. When jumping, it is usually bad to leave an easily used return path for the opponent to "undo" one's progress. Computational complexity The game is sufficiently complex that checking whether there is a win in one (on an m×n board) is NP-complete. From the starting position, it is not known whether any player has a winning strategy or both players have a drawing strategy, but there exist other configurations from which both players have drawing strategies. Given an arbitrary board position, with initially a white stone placed in the center, determining whether the current player has a winning strategy is PSPACE-hard. References Further reading Abstract strategy games Mathematical games John Horton Conway Games played on Go boards
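The jump rule described above is essentially a small algorithm, so a code sketch may help make it concrete. The following Python fragment is not from Winning Ways or any published phutball program; it is a minimal illustration, under assumed conventions (0-based (column, row) coordinates, a set of occupied points, and hypothetical names such as jump), of how a single jump removes the whole line of jumped men and lands the ball on the first vacant point beyond them.

```python
# Minimal sketch of one phutball jump (illustrative only; the names and the
# 0-based coordinate convention are assumptions, not taken from the source text).

COLS, ROWS = 15, 19  # the standard 19x15 board; rows -1 and ROWS act as the goal areas

def jump(ball, men, direction):
    """Try one jump from `ball` in `direction` (dx, dy).
    Returns (new_ball, new_men), or None if the jump is illegal.
    Jumped men are removed as a group, as described in the rules."""
    dx, dy = direction
    x, y = ball[0] + dx, ball[1] + dy
    jumped = set()
    # Collect the contiguous line of men being jumped over.
    while (x, y) in men:
        jumped.add((x, y))
        x, y = x + dx, y + dy
    if not jumped:
        return None  # a jump must pass over at least one man
    # Landing square: the first vacant point beyond the jumped men.
    # It may lie one row past the board (a goal area), but not past the side edges.
    if x < 0 or x >= COLS or y < -1 or y > ROWS:
        return None
    return (x, y), men - jumped

# Example: the ball jumps two men diagonally and lands on the first vacant point.
men = {(1, 1), (2, 2), (3, 4)}
print(jump((0, 0), men, (1, 1)))  # ((3, 3), {(3, 4)})
```

A full move would repeat this step for as long as the player chooses to keep jumping, and a goal would be scored if the ball ends the move on or beyond the opponent's goal row.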
23664
https://en.wikipedia.org/wiki/Papyrus
Papyrus
Papyrus is a material similar to thick paper that was used in ancient times as a writing surface. It was made from the pith of the papyrus plant, Cyperus papyrus, a wetland sedge. Papyrus (plural: papyri or papyruses) can also refer to a document written on sheets of such material, joined side by side and rolled up into a scroll, an early form of a book. Papyrus was first known to have been used in Egypt (at least as far back as the First Dynasty), as the papyrus plant was once abundant across the Nile Delta. It was also used throughout the Mediterranean region. Apart from writing material, ancient Egyptians employed papyrus in the construction of other artifacts, such as reed boats, mats, rope, sandals, and baskets. History Papyrus was first manufactured in Egypt as far back as the third millennium BCE. The earliest archaeological evidence of papyrus was excavated in 2012 and 2013 at Wadi al-Jarf, an ancient Egyptian harbor located on the Red Sea coast. These documents, the Diary of Merer, date from about 2550 BCE (the end of the reign of Khufu). The papyrus rolls describe the last years of building the Great Pyramid of Giza. For multiple millennia, papyrus was commonly rolled into scrolls as a form of storage. However, at some point late in its history, papyrus began being collected together in the form of codices akin to the modern book. This may have been in imitation of the book form of codices created with parchment. Early Christian writers soon adopted the codex form, and in the Greco-Roman world, it became common to cut sheets from papyrus rolls to form codices. Codices were an improvement on the papyrus scroll, as the papyrus was not pliable enough to fold without cracking, and a long roll, or scroll, was required to create large-volume texts. Papyrus had the advantage of being relatively cheap and easy to produce, but it was fragile and susceptible to both moisture and excessive dryness. Unless the papyrus was of perfect quality, the writing surface was irregular, and the range of media that could be used was also limited. Papyrus was gradually overtaken in Europe by parchment, a rival writing surface made from animal skins that rose in prominence. By the beginning of the fourth century A.D., the most important books began to be manufactured in parchment, and works worth preserving were transferred from papyrus to parchment. Parchment had significant advantages over papyrus, including higher durability in moist climates and being more conducive to writing on both sides of the surface. The main advantage of papyrus had been its cheaper raw material — the papyrus plant is easy to cultivate in a suitable climate and produces more writing material than animal hides (the most expensive books, made from foetal vellum, could require dozens of bovine fetuses to produce). However, as trade networks declined, the availability of papyrus outside the range of the papyrus plant became limited and it thus lost its cost advantage. Papyrus' last appearance in the Merovingian chancery was with a document from 692 A.D., though it was known in Gaul until the middle of the following century. The latest certain dates for the use of papyrus in Europe are 1057 for a papal decree (typically conservative, all papal bulls were on papyrus until 1022), under Pope Victor II, and 1087 for an Arabic document. Its use in Egypt continued until it was replaced by less expensive paper introduced by the Islamic world, which originally learned of it from the Chinese. 
By the 12th century, parchment and paper were in use in the Byzantine Empire, but papyrus was still an option. Until the middle of the 19th century, only some isolated documents written on papyrus were known, and museums simply showed them as curiosities. They did not contain literary works. The first modern discovery of papyri rolls was made at Herculaneum in 1752. Until then, the only papyri known had been a few surviving from medieval times. Scholarly investigations began with the Dutch historian Caspar Jacob Christiaan Reuvens (1793–1835). He wrote about the content of the Leyden papyrus, published in 1830. The first publication has been credited to the British scholar Charles Wycliffe Goodwin (1817–1878), who published for the Cambridge Antiquarian Society, one of the Papyri Graecae Magicae V, translated into English with commentary in 1853. Varying quality Papyrus was made in several qualities and prices. Pliny the Elder and Isidore of Seville described six variations of papyrus that were sold in the Roman market of the day. These were graded by quality based on how fine, firm, white, and smooth the writing surface was. Grades ranged from the superfine Augustan, which was produced in sheets of 13 digits (10 inches) wide, to the least expensive and most coarse, measuring six digits (four inches) wide. Materials deemed unusable for writing or less than six digits were considered commercial quality and were pasted edge to edge to be used only for wrapping. Etymology The English word "papyrus" derives, via Latin, from Greek πάπυρος (papyros), a loanword of unknown (perhaps Pre-Greek) origin. Greek has a second word for it, βύβλος (byblos), said to derive from the name of the Phoenician city of Byblos. The Greek writer Theophrastus, who flourished during the 4th century BCE, uses papyros when referring to the plant used as a foodstuff and byblos for the same plant when used for nonfood products, such as cordage, basketry, or writing surfaces. The more specific term βίβλος biblos, which finds its way into English in such words as 'bibliography', 'bibliophile', and 'bible', refers to the inner bark of the papyrus plant. Papyrus is also the etymon of 'paper', a similar substance. In the Egyptian language, papyrus was called wadj (w3ḏ), tjufy (ṯwfy), or djet (ḏt). Documents written on papyrus The word for the material papyrus is also used to designate documents written on sheets of it, often rolled up into scrolls. The plural for such documents is papyri. Historical papyri are given identifying names – generally the name of the discoverer, first owner, or institution where they are kept – and numbered, such as "Papyrus Harris I". Often an abbreviated form is used, such as "pHarris I". These documents provide important information on ancient writings; they give us the only extant copy of Menander, the Egyptian Book of the Dead, Egyptian treatises on medicine (the Ebers Papyrus) and on surgery (the Edwin Smith papyrus), Egyptian mathematical treatises (the Rhind papyrus), and Egyptian folk tales (the Westcar Papyrus). When, in the 18th century, a library of ancient papyri was found in Herculaneum, ripples of expectation spread among the learned men of the time. However, since these papyri were badly charred, their unscrolling and deciphering are still going on today. Manufacture and use Papyrus was made from the stem of the papyrus plant, Cyperus papyrus. The outer rind was first removed, and the sticky fibrous inner pith is cut lengthwise into thin strips about long. 
The strips were then placed side by side on a hard surface with their edges slightly overlapping, and then another layer of strips was laid on top at right angles. The strips may have been soaked in water long enough for decomposition to begin, perhaps increasing adhesion, but this is not certain. The two layers possibly were glued together. While still moist, the two layers were hammered together, mashing the layers into a single sheet. The sheet was then dried under pressure. After drying, the sheet was polished with a rounded object, possibly a stone, seashell, or round hardwood. Sheets, or kollemata, could be cut to fit the obligatory size or glued together to create a longer roll. The point where the kollemata are joined with glue is called the kollesis. A wooden stick would be attached to the last sheet in a roll, making it easier to handle. To form the long strip scrolls required, several such sheets were united and placed so all the horizontal fibres parallel with the roll's length were on one side and all the vertical fibres on the other. Normally, texts were first written on the recto, the lines following the fibres, parallel to the long edges of the scroll. Secondarily, papyrus was often reused, writing across the fibres on the verso. One source for determining the method by which papyrus was created in antiquity is the examination of tombs in the ancient Egyptian city of Thebes, which housed a necropolis containing many murals displaying the process of papyrus-making. The Roman commander Pliny the Elder also describes the methods of preparing papyrus in his Naturalis Historia. In a dry climate, like that of Egypt, papyrus is stable, formed as it is of highly rot-resistant cellulose, but storage in humid conditions can result in molds attacking and destroying the material. Library papyrus rolls were stored in wooden boxes and chests made in the form of statues. Papyrus scrolls were organized according to subject or author and identified with clay labels that specified their contents without having to unroll the scroll. In European conditions, papyrus seems to have lasted only a matter of decades; a 200-year-old papyrus was considered extraordinary. Imported papyrus, once commonplace in Greece and Italy, has since deteriorated beyond repair, but papyri are still being found in Egypt; extraordinary examples include the Elephantine papyri and the famous finds at Oxyrhynchus and Nag Hammadi. The Villa of the Papyri at Herculaneum, containing the library of Lucius Calpurnius Piso Caesoninus, Julius Caesar's father-in-law, was preserved by the eruption of Mount Vesuvius but has only been partially excavated. Sporadic attempts to revive the manufacture of papyrus have been made since the mid-18th century. Scottish explorer James Bruce experimented in the late 18th century with papyrus plants from Sudan, for papyrus had become extinct in Egypt. Also in the 18th century, Sicilian Saverio Landolina manufactured papyrus at Syracuse, where papyrus plants had continued to grow in the wild. During the 1920s, when Egyptologist Battiscombe Gunn lived in Maadi, outside Cairo, he experimented with the manufacture of papyrus, growing the plant in his garden. He beat the sliced papyrus stalks between two layers of linen and produced successful examples of papyrus, one of which was exhibited in the Egyptian Museum in Cairo. 
The modern technique of papyrus production used in Egypt for the tourist trade was developed in 1962 by the Egyptian engineer Hassan Ragab using plants that had been reintroduced into Egypt in 1872 from France. Both Sicily and Egypt have centres of limited papyrus production. Papyrus is still used by communities living in the vicinity of swamps, to the extent that rural householders derive up to 75% of their income from swamp goods. Particularly in East and Central Africa, people harvest papyrus, which is used to manufacture items that are sold or used locally. Examples include baskets, hats, fish traps, trays or winnowing mats, and floor mats. Papyrus is also used to make roofs, ceilings, rope, and fences. Although alternatives, such as eucalyptus, are increasingly available, papyrus is still used as fuel. Collections of papyrus Amherst Papyri: this is a collection of William Tyssen-Amherst, 1st Baron Amherst of Hackney. It includes biblical manuscripts, early church fragments, and classical documents from the Ptolemaic, Roman, and Byzantine eras. The collection was edited by Bernard Grenfell and Arthur Hunt in 1900–1901. It is housed at the Morgan Library & Museum (New York). Archduke Rainer Collection, also known as the Vienna Papyrus Collection: is one of the world's largest collections of papyri (about 180,000 objects) in the Austrian National Library of Vienna. Berlin Papyri: housed in the Egyptian Museum and Papyrus Collection. Berliner Griechische Urkunden (BGU): a publishing project ongoing since 1895 Bodmer Papyri: this collection was purchased by Martin Bodmer in 1955–1956. Currently, it is housed in the Bibliotheca Bodmeriana in Cologny. It includes Greek and Coptic documents, classical texts, biblical books, and writing of the early churches. Chester Beatty Papyri: a collection of 11 codices acquired by Alfred Chester Beatty in 1930–1931 and 1935. It is housed at the Chester Beatty Library. The collection was edited by Frederic G. Kenyon. Colt Papyri, housed at the Morgan Library & Museum (New York). Former private collection of Grigol Tsereteli: a collection up to one hundred Greek papyri, currently housed at Georgian National Centre of Manuscripts. The Herculaneum papyri: these papyri were found in Herculaneum in the eighteenth century, carbonized by the eruption of Mount Vesuvius. After some tinkering, a method was found to unroll and to read them. Most of them are housed at the Naples National Archaeological Museum. The Heroninos Archive: a collection of around a thousand papyrus documents, dealing with the management of a large Roman estate, dating to the third century CE, found at the very end of the 19th century at Kasr El Harit, the site of ancient , in the Faiyum area of Egypt by Bernard Pyne Grenfell and Arthur Surridge Hunt. It is spread over many collections throughout the world. The Houghton's papyri: the collection at Houghton Library, Harvard University was acquired between 1901 and 1909 thanks to a donation from the Egypt Exploration Fund. Martin Schøyen Collection: biblical manuscripts in Greek and Coptic, Dead Sea Scrolls, classical documents Michigan Papyrus Collection: this collection contains above 10,000 papyri fragments. It is housed at the University of Michigan. Oxyrhynchus Papyri: these numerous papyri fragments were discovered by Grenfell and Hunt in and around Oxyrhynchus. The publication of these papyri is still in progress. 
A large part of the Oxyrhynchus papyri are housed at the Ashmolean Museum in Oxford, others in the British Museum in London, in the Egyptian Museum in Cairo, and many other places. Princeton Papyri: it is housed at the Princeton University Papiri della Società Italiana (PSI): a series, still in progress, published by the Società per la ricerca dei Papiri greci e latini in Egitto and from 1927 onwards by the succeeding Istituto Papirologico "G. Vitelli" in Florence. These papyri are situated at the institute itself and in the Biblioteca Laurenziana. Rylands Papyri: this collection contains above 700 papyri, with 31 ostraca and 54 codices. It is housed at the John Rylands University Library. Tebtunis Papyri: housed by the Bancroft Library at the University of California, Berkeley, this is a collection of more than 30,000 fragments dating from the 3rd century BCE through the 3rd century CE, found in the winter 1899–1900 at the site of ancient Tebtunis, Egypt, by an expedition team led by the British papyrologists Bernard P. Grenfell and Arthur S. Hunt. Washington University Papyri Collection: includes 445 manuscript fragments, dating from the first century BCE to the eighth century AD. Housed at the Washington University Libraries. Yale Papyrus Collection: housed by the Beinecke Library, it contains over six thousand inventoried items. It is cataloged, digitally scanned, and accessible online. Individual papyri Brooklyn Papyrus: this papyrus focuses mainly on snakebites and their remedies. It speaks of remedial methods for poisons obtained from snakes, scorpions, and tarantulas. The Brooklyn Papyrus currently resides in the Brooklyn Museum. Saite Oracle Papyrus: this papyrus located at the Brooklyn Museum records the petition of a man named Pemou on behalf of his father, Harsiese to ask their god for permission to change temples. Strasbourg papyrus Will of Naunakhte: found at Deir el-Medina and dating to the 20th dynasty, it is notable because it is a legal document for a non-noble woman. See also Other ancient writing materials: Palm leaf manuscript (India) Amate (Mesoamerica) Paper Ostracon Wax tablets Clay tablets Birch bark document Parchment Pliny the Elder Papyrology Papyrus sanitary pad Palimpsest For Egyptian papyri: List of ancient Egyptian papyri Other papyri: Elephantine papyri Magdalen papyrus Nag Hammadi library New Testament papyri The papyrus plant in Egyptian art Palmette References Citations Sources Leach, Bridget, and William John Tait. 2000. "Papyrus". In Ancient Egyptian Materials and Technology, edited by Paul T. Nicholson and Ian Shaw. Cambridge: Cambridge University Press. 227–253. Thorough technical discussion with extensive bibliography. Leach, Bridget, and William John Tait. 2001. "Papyrus". In The Oxford Encyclopedia of Ancient Egypt, edited by Donald Bruce Redford. Vol. 3 of 3 vols. Oxford, New York, and Cairo: Oxford University Press and The American University in Cairo Press. 22–24. Parkinson, Richard Bruce, and Stephen G. J. Quirke. 1995. Papyrus. Egyptian Bookshelf. London: British Museum Press. General overview for a popular reading audience. Further reading Horst Blanck: Das Buch in der Antike. Beck, München 1992, Rosemarie Drenkhahn: Papyrus. In: Wolfgang Helck, Wolfhart Westendorf (eds.): Lexikon der Ägyptologie. vol. IV, Wiesbaden 1982, Spalte 667–670 David Diringer, The Book before Printing: Ancient, Medieval and Oriental, Dover Publications, New York 1982, pp. 113–169, . Victor Martin (Hrsg.): Ménandre. Le Dyscolos. 
Bibliotheca Bodmeriana, Cologny – Genève 1958 Otto Mazal: Griechisch-römische Antike. Akademische Druck- und Verlagsanstalt, Graz 1999, (Geschichte der Buchkultur; vol. 1) External links Leuven Homepage of Papyrus Collections Ancient Egyptian Papyrus – Aldokkan Yale Papyrus Collection Database at the Beinecke Rare Book and Manuscript Library at Yale University Lund University Library Papyrus Collection Ghent University Library Papyrus Collection Finding aid to the Advanced Papyrological Information System records at Columbia University. Rare Book & Manuscript Library. Modern commercial Papyrus paper making (photos)– Elbardy Papyrus-making in Egypt (video), scidevnet, via youtube, April 2019. Egyptian artefact types Nile Delta Papyrology Textual scholarship Writing media Egyptian inventions
23665
https://en.wikipedia.org/wiki/Pixel
Pixel
In digital imaging, a pixel (abbreviated px), pel, or picture element is the smallest addressable element in a raster image, or the smallest addressable element in a dot matrix display device. In most digital display devices, pixels are the smallest elements that can be manipulated through software. Each pixel is a sample of an original image; more samples typically provide more accurate representations of the original. The intensity of each pixel is variable. In color imaging systems, a color is typically represented by three or four component intensities such as red, green, and blue, or cyan, magenta, yellow, and black. In some contexts (such as descriptions of camera sensors), pixel refers to a single scalar element of a multi-component representation (called a photosite in the camera sensor context, although sensel is sometimes used), while in yet other contexts (like MRI) it may refer to a set of component intensities for a spatial position. Software on early consumer computers was necessarily rendered at a low resolution, with large pixels visible to the naked eye; graphics made under these limitations may be called pixel art, especially in reference to video games. Modern computers and displays, however, can easily render orders of magnitude more pixels than was previously possible, necessitating the use of large measurements like the megapixel (one million pixels). Etymology The word pixel is a combination of pix (from "pictures", shortened to "pics") and el (for "element"); similar formations with 'el' include the words voxel and texel. The word pix appeared in Variety magazine headlines in 1932, as an abbreviation for the word pictures, in reference to movies. By 1938, "pix" was being used in reference to still pictures by photojournalists. The word "pixel" was first published in 1965 by Frederic C. Billingsley of JPL, to describe the picture elements of scanned images from space probes to the Moon and Mars. Billingsley had learned the word from Keith E. McFarland, at the Link Division of General Precision in Palo Alto, who in turn said he did not know where it originated. McFarland said simply it was "in use at the time". The concept of a "picture element" dates to the earliest days of television, for example as "Bildpunkt" (the German word for pixel, literally 'picture point') in the 1888 German patent of Paul Nipkow. According to various etymologies, the earliest publication of the term picture element itself was in Wireless World magazine in 1927, though it had been used earlier in various U.S. patents filed as early as 1911. Some authors explain pixel as picture cell, as early as 1972. In graphics and in image and video processing, pel is often used instead of pixel. For example, IBM used it in their Technical Reference for the original PC. Pixilation, spelled with a second i, is an unrelated filmmaking technique that dates to the beginnings of cinema, in which live actors are posed frame by frame and photographed to create stop-motion animation. An archaic British word meaning "possession by spirits (pixies)", the term has been used to describe the animation process since the early 1950s; various animators, including Norman McLaren and Grant Munro, are credited with popularizing it. Technical A pixel is generally thought of as the smallest single component of a digital image. However, the definition is highly context-sensitive. 
For example, there can be "printed pixels" in a page, or pixels carried by electronic signals, or represented by digital values, or pixels on a display device, or pixels in a digital camera (photosensor elements). This list is not exhaustive and, depending on context, synonyms include pel, sample, byte, bit, dot, and spot. Pixels can be used as a unit of measure such as: 2400 pixels per inch, 640 pixels per line, or spaced 10 pixels apart. The measures "dots per inch" (dpi) and "pixels per inch" (ppi) are sometimes used interchangeably, but have distinct meanings, especially for printer devices, where dpi is a measure of the printer's density of dot (e.g. ink droplet) placement. For example, a high-quality photographic image may be printed with 600 ppi on a 1200 dpi inkjet printer. Even higher dpi numbers, such as the 4800 dpi quoted by printer manufacturers since 2002, do not mean much in terms of achievable resolution. The more pixels used to represent an image, the closer the result can resemble the original. The number of pixels in an image is sometimes called the resolution, though resolution has a more specific definition. Pixel counts can be expressed as a single number, as in a "three-megapixel" digital camera, which has a nominal three million pixels, or as a pair of numbers, as in a "640 by 480 display", which has 640 pixels from side to side and 480 from top to bottom (as in a VGA display) and therefore has a total number of 640 × 480 = 307,200 pixels, or 0.3 megapixels. The pixels, or color samples, that form a digitized image (such as a JPEG file used on a web page) may or may not be in one-to-one correspondence with screen pixels, depending on how a computer displays an image. In computing, an image composed of pixels is known as a bitmapped image or a raster image. The word raster originates from television scanning patterns, and has been widely used to describe similar halftone printing and storage techniques. Sampling patterns For convenience, pixels are normally arranged in a regular two-dimensional grid. By using this arrangement, many common operations can be implemented by uniformly applying the same operation to each pixel independently. Other arrangements of pixels are possible, with some sampling patterns even changing the shape (or kernel) of each pixel across the image. For this reason, care must be taken when acquiring an image on one device and displaying it on another, or when converting image data from one pixel format to another. For example: LCD screens typically use a staggered grid, where the red, green, and blue components are sampled at slightly different locations. Subpixel rendering is a technology which takes advantage of these differences to improve the rendering of text on LCD screens. The vast majority of color digital cameras use a Bayer filter, resulting in a regular grid of pixels where the color of each pixel depends on its position on the grid. A clipmap uses a hierarchical sampling pattern, where the size of the support of each pixel depends on its location within the hierarchy. Warped grids are used when the underlying geometry is non-planar, such as images of the earth from space. The use of non-uniform grids is an active research area, attempting to bypass the traditional Nyquist limit. 
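The pixel-count and ppi arithmetic discussed above is easy to verify directly. The following sketch is only an illustration of that arithmetic (the function names and the 3000 × 2000 photo are my own examples, not figures from any cited source): it computes the total pixel count and megapixel figure for the 640 × 480 case and the physical print size of an image at a chosen ppi.

```python
def total_pixels(width: int, height: int) -> int:
    """Total number of pixels in a width x height raster."""
    return width * height

def megapixels(width: int, height: int) -> float:
    """Pixel count expressed in megapixels (millions of pixels)."""
    return total_pixels(width, height) / 1_000_000

def printed_size_inches(width: int, height: int, ppi: float) -> tuple[float, float]:
    """Physical print size, in inches, when printed at the given pixels per inch."""
    return width / ppi, height / ppi

if __name__ == "__main__":
    # The VGA example from the text: 640 x 480 = 307,200 pixels, about 0.3 megapixels.
    print(total_pixels(640, 480))    # 307200
    print(megapixels(640, 480))      # 0.3072
    # A hypothetical 3000 x 2000 photo printed at 600 ppi covers 5 x 3.33 inches,
    # regardless of the printer's dpi (its ink-dot density).
    print(printed_size_inches(3000, 2000, 600))
```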
Pixels on computer monitors are normally "square" (that is, have equal horizontal and vertical sampling pitch); pixels in other systems are often "rectangular" (that is, have unequal horizontal and vertical sampling pitch – oblong in shape), as are digital video formats with diverse aspect ratios, such as the anamorphic widescreen formats of the Rec. 601 digital video standard. Resolution of computer monitors Computers can use pixels to display an image, often an abstract image that represents a GUI. The resolution of this image is called the display resolution and is determined by the video card of the computer. LCD monitors also use pixels to display an image, and have a native resolution. Each pixel is made up of triads, with the number of these triads determining the native resolution. On some CRT monitors, the beam sweep rate may be fixed, resulting in a fixed native resolution. Most CRT monitors do not have a fixed beam sweep rate, meaning they do not have a native resolution at all; instead they have a set of resolutions that are equally well supported. To produce the sharpest images possible on an LCD, the user must ensure the display resolution of the computer matches the native resolution of the monitor. Resolution of telescopes The pixel scale used in astronomy is the angular distance between two objects on the sky that fall one pixel apart on the detector (CCD or infrared chip). The scale s measured in radians is the ratio of the pixel spacing p to the focal length f of the preceding optics, s = p/f. (The focal length is the product of the focal ratio and the diameter of the associated lens or mirror.) Because s is usually expressed in units of arcseconds per pixel, because 1 radian equals (180/π) × 3600 ≈ 206,265 arcseconds, and because focal lengths are often given in millimeters and pixel sizes in micrometers, which yields another factor of 1,000, the formula is often quoted as s = 206.265 p/f, with p in micrometers and f in millimeters. Bits per pixel The number of distinct colors that can be represented by a pixel depends on the number of bits per pixel (bpp). A 1 bpp image uses 1 bit for each pixel, so each pixel can be either on or off. Each additional bit doubles the number of colors available, so a 2 bpp image can have 4 colors, and a 3 bpp image can have 8 colors: 1 bpp, 2^1 = 2 colors (monochrome); 2 bpp, 2^2 = 4 colors; 3 bpp, 2^3 = 8 colors; 4 bpp, 2^4 = 16 colors; 8 bpp, 2^8 = 256 colors; 16 bpp, 2^16 = 65,536 colors ("Highcolor"); 24 bpp, 2^24 = 16,777,216 colors ("Truecolor"). For color depths of 15 or more bits per pixel, the depth is normally the sum of the bits allocated to each of the red, green, and blue components. Highcolor, usually meaning 16 bpp, normally has five bits for red and blue each, and six bits for green, as the human eye is more sensitive to errors in green than in the other two primary colors. For applications involving transparency, the 16 bits may be divided into five bits each of red, green, and blue, with one bit left for transparency. A 24-bit depth allows 8 bits per component. On some systems, 32-bit depth is available: this means that each 24-bit pixel has an extra 8 bits to describe its opacity (for purposes of combining with another image). Subpixels Many display and image-acquisition systems are not capable of displaying or sensing the different color channels at the same site. Therefore, the pixel grid is divided into single-color regions that contribute to the displayed or sensed color when viewed at a distance. 
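To make the pixel-scale relation concrete, here is a minimal sketch (my own illustration; the 9 µm pixel pitch and 1000 mm focal length are assumed example values, not figures from the article) that converts a pixel pitch in micrometers and a focal length in millimeters into arcseconds per pixel.

```python
import math

ARCSEC_PER_RADIAN = math.degrees(1) * 3600  # (180/pi) * 3600, about 206,265

def pixel_scale_arcsec(pixel_pitch_um: float, focal_length_mm: float) -> float:
    """Angular size of one pixel, in arcseconds, for the given optics.

    The scale in radians is the ratio of pixel spacing to focal length;
    converting micrometers/millimeters and radians/arcseconds gives the
    factor of about 206.265 quoted in the text.
    """
    scale_radians = (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)
    return scale_radians * ARCSEC_PER_RADIAN

# Hypothetical example: a 9 µm pixel behind 1000 mm of focal length
# subtends roughly 206.265 * 9 / 1000, or about 1.86 arcseconds.
print(round(pixel_scale_arcsec(9.0, 1000.0), 3))
```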
In some displays, such as LCD, LED, and plasma displays, these single-color regions are separately addressable elements, which have come to be known as subpixels, mostly RGB colors. For example, LCDs typically divide each pixel vertically into three subpixels. When the square pixel is divided into three subpixels, each subpixel is necessarily rectangular. In display industry terminology, subpixels are often referred to as pixels, as they are the basic addressable elements from a hardware point of view, and hence the term pixel circuits rather than subpixel circuits is used. Most digital camera image sensors use single-color sensor regions, for example using the Bayer filter pattern, and in the camera industry these are known as pixels just like in the display industry, not subpixels. For systems with subpixels, two different approaches can be taken: the subpixels can be ignored, with full-color pixels being treated as the smallest addressable imaging element; or the subpixels can be included in rendering calculations, which requires more analysis and processing time, but can produce apparently superior images in some cases. This latter approach, referred to as subpixel rendering, uses knowledge of pixel geometry to manipulate the three colored subpixels separately, producing an increase in the apparent resolution of color displays. While CRT displays use red-green-blue-masked phosphor areas, dictated by a mesh grid called the shadow mask, it would require a difficult calibration step to be aligned with the displayed pixel raster, and so CRTs do not use subpixel rendering. The concept of subpixels is related to samples. Logical pixel In graphic, web design, and user interfaces, a "pixel" may refer to a fixed length rather than a true pixel on the screen to accommodate different pixel densities. A typical definition, such as in CSS, is that a "physical" pixel is 1/96 of an inch. Doing so makes sure a given element will display at the same size no matter the screen resolution on which it is viewed. There may, however, be some further adjustments between a "physical" pixel and an on-screen logical pixel. As screens are viewed at different distances (consider a phone, a computer display, and a TV), the desired length (a "reference pixel") is scaled relative to a reference viewing distance (28 inches in CSS). In addition, as true screen pixel densities are rarely multiples of 96 dpi, some rounding is often applied so that a logical pixel is an integer number of actual pixels. Doing so avoids render artifacts. The final "pixel" obtained after these two steps becomes the "anchor" on which all other absolute measurements (e.g. the "centimeter") are based. Worked example, with a 2160p TV placed away from the viewer: Calculate the scaled pixel size as . Calculate the DPI of the TV as . Calculate the real-pixel count per logical-pixel as . A browser will then choose to use the 1.721× pixel size, or round to a 2× ratio. Megapixel A megapixel (MP) is a million pixels; the term is used not only for the number of pixels in an image but also to express the number of image sensor elements of digital cameras or the number of display elements of digital displays. For example, a camera that makes a 2048 × 1536 pixel image (3,145,728 finished image pixels) typically uses a few extra rows and columns of sensor elements and is commonly said to have "3.2 megapixels" or "3.4 megapixels", depending on whether the number reported is the "effective" or the "total" pixel count. Pixel is used to define the resolution of a photo. 
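The worked example above has lost its specific figures in this copy, so the sketch below redoes the same three steps (described under Logical pixel) with assumed values: a hypothetical 55-inch 3840 × 2160 TV viewed from about 2.5 m. It is illustrative only and relies on the 1/96-inch CSS pixel and 28-inch reference distance mentioned above.

```python
import math

CSS_PIXEL_INCHES = 1 / 96          # the CSS "physical" pixel
REFERENCE_DISTANCE_INCHES = 28     # CSS reference viewing distance

def logical_pixel_ratio(diagonal_in: float, width_px: int, height_px: int,
                        viewing_distance_in: float) -> float:
    """Device pixels per logical (CSS) pixel for a flat rectangular display."""
    # Step 1: scale the 1/96-inch pixel to the actual viewing distance.
    scaled_pixel_in = CSS_PIXEL_INCHES * (viewing_distance_in / REFERENCE_DISTANCE_INCHES)
    # Step 2: physical pixel density (DPI) of the display.
    width_in = diagonal_in * width_px / math.hypot(width_px, height_px)
    dpi = width_px / width_in
    # Step 3: how many real pixels fit across one scaled logical pixel.
    return scaled_pixel_in * dpi

# Hypothetical 55-inch 3840 x 2160 TV viewed from roughly 2.5 m (about 98 inches):
ratio = logical_pixel_ratio(55, 3840, 2160, 98.4)
print(round(ratio, 3))   # about 2.93; a browser would likely round this to a 3x ratio
```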
Photo resolution is calculated by multiplying the width and height of a sensor in pixels. Digital cameras use photosensitive electronics, either charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) image sensors, consisting of a large number of single sensor elements, each of which records a measured intensity level. In most digital cameras, the sensor array is covered with a patterned color filter mosaic having red, green, and blue regions in the Bayer filter arrangement so that each sensor element can record the intensity of a single primary color of light. The camera interpolates the color information of neighboring sensor elements, through a process called demosaicing, to create the final image. These sensor elements are often called "pixels", even though they only record one channel (only red or green or blue) of the final color image. Thus, two of the three color channels for each sensor must be interpolated and a so-called N-megapixel camera that produces an N-megapixel image provides only one-third of the information that an image of the same size could get from a scanner. Thus, certain color contrasts may look fuzzier than others, depending on the allocation of the primary colors (green has twice as many elements as red or blue in the Bayer arrangement). DxO Labs invented the Perceptual MegaPixel (P-MPix) to measure the sharpness that a camera produces when paired to a particular lens – as opposed to the MP a manufacturer states for a camera product, which is based only on the camera's sensor. The new P-MPix claims to be a more accurate and relevant value for photographers to consider when weighing up camera sharpness. As of mid-2013, the Sigma 35 mm f/1.4 DG HSM lens mounted on a Nikon D800 has the highest measured P-MPix. However, with a value of 23 MP, it still falls short of the D800's 36.3 MP sensor by more than one-third. In August 2019, Xiaomi released the Redmi Note 8 Pro as the world's first smartphone with a 64 MP camera. On December 12, 2019, Samsung released the Samsung A71, which also has a 64 MP camera. In late 2019, Xiaomi announced the first camera phone with a 108 MP sensor measuring 1/1.33 inches across, larger than the 1/2.3-inch sensors used in most bridge cameras. One new method to add megapixels has been introduced in a Micro Four Thirds System camera, which only uses a 16 MP sensor but can produce a 64 MP RAW (40 MP JPEG) image by making two exposures, shifting the sensor by a half pixel between them. Using a tripod to keep the camera level across the multiple shots, the 16 MP images are then combined into a unified 64 MP image. See also Computer display standard Dexel Gigapixel image Image resolution Intrapixel and Interpixel processing LCD crosstalk PenTile matrix family Pixel advertising Pixel art Pixel art scaling algorithms Pixel aspect ratio Pixelation Pixelization Point (typography) Glossary of video terms Voxel Vector graphics References External links A Pixel Is Not A Little Square: Microsoft Memo by computer graphics pioneer Alvy Ray Smith. "Pixels and Me", 2016 lecture by Richard F. Lyon at the Computer History Museum Square and non-Square Pixels: Technical info on pixel aspect ratios of modern video standards (480i, 576i, 1080i, 720p), plus software implications. How a TV Works in Slow Motion - The Slow Mo Guys – YouTube video by The Slow Mo Guys Computer graphics data structures Digital geometry Digital imaging Digital photography Display technology Image processing Television technology
23666
https://en.wikipedia.org/wiki/Prime%20number
Prime number
A prime number (or a prime) is a natural number greater than 1 that is not a product of two smaller natural numbers. A natural number greater than 1 that is not prime is called a composite number. For example, 5 is prime because the only ways of writing it as a product, 1 × 5 or 5 × 1, involve 5 itself. However, 4 is composite because it is a product (2 × 2) in which both numbers are smaller than 4. Primes are central in number theory because of the fundamental theorem of arithmetic: every natural number greater than 1 is either a prime itself or can be factorized as a product of primes that is unique up to their order. The property of being prime is called primality. A simple but slow method of checking the primality of a given number n, called trial division, tests whether n is a multiple of any integer between 2 and the square root of n. Faster algorithms include the Miller–Rabin primality test, which is fast but has a small chance of error, and the AKS primality test, which always produces the correct answer in polynomial time but is too slow to be practical. Particularly fast methods are available for numbers of special forms, such as Mersenne numbers. As of December 2018, the largest known prime number was a Mersenne prime with 24,862,048 decimal digits. There are infinitely many primes, as demonstrated by Euclid around 300 BC. No known simple formula separates prime numbers from composite numbers. However, the distribution of primes within the natural numbers in the large can be statistically modelled. The first result in that direction is the prime number theorem, proven at the end of the 19th century, which says that the probability of a randomly chosen large number being prime is inversely proportional to its number of digits, that is, to its logarithm. Several historical questions regarding prime numbers are still unsolved. These include Goldbach's conjecture, that every even integer greater than 2 can be expressed as the sum of two primes, and the twin prime conjecture, that there are infinitely many pairs of primes that differ by two. Such questions spurred the development of various branches of number theory, focusing on analytic or algebraic aspects of numbers. Primes are used in several routines in information technology, such as public-key cryptography, which relies on the difficulty of factoring large numbers into their prime factors. In abstract algebra, objects that behave in a generalized way like prime numbers include prime elements and prime ideals. Definition and examples A natural number (1, 2, 3, 4, 5, 6, etc.) is called a prime number (or a prime) if it is greater than 1 and cannot be written as the product of two smaller natural numbers. The numbers greater than 1 that are not prime are called composite numbers. In other words, a number n is prime if n items cannot be divided up into smaller equal-size groups of more than one item, or if it is not possible to arrange n dots into a rectangular grid that is more than one dot wide and more than one dot high. For example, among the numbers 1 through 6, the numbers 2, 3, and 5 are the prime numbers, as there are no other numbers that divide them evenly (without a remainder). 1 is not prime, as it is specifically excluded in the definition. 4 and 6 are both composite. The divisors of a natural number n are the natural numbers that divide n evenly. Every natural number has both 1 and itself as a divisor. If it has any other divisor, it cannot be prime. This leads to an equivalent definition of prime numbers: they are the numbers with exactly two positive divisors. 
Those two are 1 and the number itself. As 1 has only one divisor, itself, it is not prime by this definition. Yet another way to express the same thing is that a number n is prime if it is greater than one and if none of the numbers 2, 3, ..., n − 1 divides n evenly. The first 25 prime numbers (all the prime numbers less than 100) are: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97. No even number greater than 2 is prime because any such number can be expressed as the product 2 × n/2. Therefore, every prime number other than 2 is an odd number, and is called an odd prime. Similarly, when written in the usual decimal system, all prime numbers larger than 5 end in 1, 3, 7, or 9. The numbers that end with other digits are all composite: decimal numbers that end in 0, 2, 4, 6, or 8 are even, and decimal numbers that end in 0 or 5 are divisible by 5. The set of all primes is sometimes denoted by P (a boldface capital P) or by ℙ (a blackboard bold capital P). History The Rhind Mathematical Papyrus, from around 1550 BC, has Egyptian fraction expansions of different forms for prime and composite numbers. However, the earliest surviving records of the study of prime numbers come from the ancient Greek mathematicians, who called them (). Euclid's Elements (c. 300 BC) proves the infinitude of primes and the fundamental theorem of arithmetic, and shows how to construct a perfect number from a Mersenne prime. Another Greek invention, the Sieve of Eratosthenes, is still used to construct lists of primes. Around 1000 AD, the Islamic mathematician Ibn al-Haytham (Alhazen) found Wilson's theorem, characterizing the prime numbers as the numbers n that evenly divide (n − 1)! + 1. He also conjectured that all even perfect numbers come from Euclid's construction using Mersenne primes, but was unable to prove it. Another Islamic mathematician, Ibn al-Banna' al-Marrakushi, observed that the sieve of Eratosthenes can be sped up by considering only the prime divisors up to the square root of the upper limit. Fibonacci took the innovations from Islamic mathematics to Europe. His book Liber Abaci (1202) was the first to describe trial division for testing primality, again using divisors only up to the square root. In 1640 Pierre de Fermat stated (without proof) Fermat's little theorem (later proved by Leibniz and Euler). Fermat also investigated the primality of the Fermat numbers 2^(2^k) + 1, and Marin Mersenne studied the Mersenne primes, prime numbers of the form 2^p − 1 with p itself a prime. Christian Goldbach formulated Goldbach's conjecture, that every even number is the sum of two primes, in a 1742 letter to Euler. Euler proved Alhazen's conjecture (now the Euclid–Euler theorem) that all even perfect numbers can be constructed from Mersenne primes. He introduced methods from mathematical analysis to this area in his proofs of the infinitude of the primes and the divergence of the sum of the reciprocals of the primes. At the start of the 19th century, Legendre and Gauss conjectured that as x tends to infinity, the number of primes up to x is asymptotic to x/log x, where log x is the natural logarithm of x. A weaker consequence of this high density of primes was Bertrand's postulate, that for every n > 1 there is a prime between n and 2n, proved in 1852 by Pafnuty Chebyshev. Ideas of Bernhard Riemann in his 1859 paper on the zeta-function sketched an outline for proving the conjecture of Legendre and Gauss. 
Although the closely related Riemann hypothesis remains unproven, Riemann's outline was completed in 1896 by Hadamard and de la Vallée Poussin, and the result is now known as the prime number theorem. Another important 19th century result was Dirichlet's theorem on arithmetic progressions, that certain arithmetic progressions contain infinitely many primes. Many mathematicians have worked on primality tests for numbers larger than those where trial division is practicably applicable. Methods that are restricted to specific number forms include Pépin's test for Fermat numbers (1877), Proth's theorem (c. 1878), the Lucas–Lehmer primality test (originated 1856), and the generalized Lucas primality test. Since 1951 all the largest known primes have been found using these tests on computers. The search for ever larger primes has generated interest outside mathematical circles, through the Great Internet Mersenne Prime Search and other distributed computing projects. The idea that prime numbers had few applications outside of pure mathematics was shattered in the 1970s when public-key cryptography and the RSA cryptosystem were invented, using prime numbers as their basis. The increased practical importance of computerized primality testing and factorization led to the development of improved methods capable of handling large numbers of unrestricted form. The mathematical theory of prime numbers also moved forward with the Green–Tao theorem (2004) that there are arbitrarily long arithmetic progressions of prime numbers, and Yitang Zhang's 2013 proof that there exist infinitely many prime gaps of bounded size. Primality of one Most early Greeks did not even consider 1 to be a number, so they could not consider its primality. A few scholars in the Greek and later Roman tradition, including Nicomachus, Iamblichus, Boethius, and Cassiodorus, also considered the prime numbers to be a subdivision of the odd numbers, so they did not consider 2 to be prime either. However, Euclid and a majority of the other Greek mathematicians considered 2 as prime. The medieval Islamic mathematicians largely followed the Greeks in viewing 1 as not being a number. By the Middle Ages and Renaissance, mathematicians began treating 1 as a number, and some of them included it as the first prime number. In the mid-18th century Christian Goldbach listed 1 as prime in his correspondence with Leonhard Euler; however, Euler himself did not consider 1 to be prime. In the 19th century many mathematicians still considered 1 to be prime, and lists of primes that included 1 continued to be published as recently as the mid-20th century. If the definition of a prime number were changed to call 1 a prime, many statements involving prime numbers would need to be reworded in a more awkward way. For example, the fundamental theorem of arithmetic would need to be rephrased in terms of factorizations into primes greater than 1, because every number would have multiple factorizations with any number of copies of 1. Similarly, the sieve of Eratosthenes would not work correctly if it handled 1 as a prime, because it would eliminate all multiples of 1 (that is, all other numbers) and output only the single number 1. Some other more technical properties of prime numbers also do not hold for the number 1: for instance, the formulas for Euler's totient function or for the sum of divisors function are different for prime numbers than they are for 1. 
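To make the point about the sieve concrete, here is a minimal sieve of Eratosthenes (my own sketch, not code from a cited source). Starting the marking at 2 is exactly the step that would break if 1 were counted as prime, since every number is a multiple of 1.

```python
def sieve_of_eratosthenes(limit: int) -> list[int]:
    """Return all primes up to and including limit."""
    if limit < 2:
        return []
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False       # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Mark every multiple of p, starting from p * p, as composite.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(sieve_of_eratosthenes(100))
# [2, 3, 5, 7, 11, ..., 97]: the 25 primes below 100 listed earlier in the article
```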
By the early 20th century, mathematicians began to agree that 1 should not be listed as prime, but rather in its own special category as a "unit". Elementary properties Unique factorization Writing a number as a product of prime numbers is called a prime factorization of the number. For example: The terms in the product are called prime factors. The same prime factor may occur more than once; this example has two copies of the prime factor When a prime occurs multiple times, exponentiation can be used to group together multiple copies of the same prime number: for example, in the second way of writing the product above, denotes the square or second power of The central importance of prime numbers to number theory and mathematics in general stems from the fundamental theorem of arithmetic. This theorem states that every integer larger than 1 can be written as a product of one or more primes. More strongly, this product is unique in the sense that any two prime factorizations of the same number will have the same numbers of copies of the same primes, although their ordering may differ. So, although there are many different ways of finding a factorization using an integer factorization algorithm, they all must produce the same result. Primes can thus be considered the "basic building blocks" of the natural numbers. Some proofs of the uniqueness of prime factorizations are based on Euclid's lemma: If is a prime number and divides a product of integers and then divides or divides (or both). Conversely, if a number has the property that when it divides a product it always divides at least one factor of the product, then must be prime. Infinitude There are infinitely many prime numbers. Another way of saying this is that the sequence 2, 3, 5, 7, 11, 13, ... of prime numbers never ends. This statement is referred to as Euclid's theorem in honor of the ancient Greek mathematician Euclid, since the first known proof for this statement is attributed to him. Many more proofs of the infinitude of primes are known, including an analytical proof by Euler, Goldbach's proof based on Fermat numbers, Furstenberg's proof using general topology, and Kummer's elegant proof. Euclid's proof shows that every finite list of primes is incomplete. The key idea is to multiply together the primes in any given list and add If the list consists of the primes this gives the number By the fundamental theorem, has a prime factorization with one or more prime factors. is evenly divisible by each of these factors, but has a remainder of one when divided by any of the prime numbers in the given list, so none of the prime factors of can be in the given list. Because there is no finite list of all the primes, there must be infinitely many primes. The numbers formed by adding one to the products of the smallest primes are called Euclid numbers. The first five of them are prime, but the sixth, is a composite number. Formulas for primes There is no known efficient formula for primes. For example, there is no non-constant polynomial, even in several variables, that takes only prime values. However, there are numerous expressions that do encode all primes, or only primes. One possible formula is based on Wilson's theorem and generates the number 2 many times and all other primes exactly once. There is also a set of Diophantine equations in nine variables and one parameter with the following property: the parameter is prime if and only if the resulting system of equations has a solution over the natural numbers. 
This can be used to obtain a single formula with the property that all its positive values are prime. Other examples of prime-generating formulas come from Mills' theorem and a theorem of Wright. These assert that there are real constants and such that are prime for any natural number in the first formula, and any number of exponents in the second formula. Here represents the floor function, the largest integer less than or equal to the number in question. However, these are not useful for generating primes, as the primes must be generated first in order to compute the values of or Open questions Many conjectures revolving about primes have been posed. Often having an elementary formulation, many of these conjectures have withstood proof for decades: all four of Landau's problems from 1912 are still unsolved. One of them is Goldbach's conjecture, which asserts that every even integer greater than 2 can be written as a sum of two primes. , this conjecture has been verified for all numbers up to Weaker statements than this have been proven, for example, Vinogradov's theorem says that every sufficiently large odd integer can be written as a sum of three primes. Chen's theorem says that every sufficiently large even number can be expressed as the sum of a prime and a semiprime (the product of two primes). Also, any even integer greater than 10 can be written as the sum of six primes. The branch of number theory studying such questions is called additive number theory. Another type of problem concerns prime gaps, the differences between consecutive primes. The existence of arbitrarily large prime gaps can be seen by noting that the sequence consists of composite numbers, for any natural number However, large prime gaps occur much earlier than this argument shows. For example, the first prime gap of length 8 is between the primes 89 and 97, much smaller than It is conjectured that there are infinitely many twin primes, pairs of primes with difference 2; this is the twin prime conjecture. Polignac's conjecture states more generally that for every positive integer there are infinitely many pairs of consecutive primes that differ by Andrica's conjecture, Brocard's conjecture, Legendre's conjecture, and Oppermann's conjecture all suggest that the largest gaps between primes from to should be at most approximately a result that is known to follow from the Riemann hypothesis, while the much stronger Cramér conjecture sets the largest gap size at Prime gaps can be generalized to prime -tuples, patterns in the differences among more than two prime numbers. Their infinitude and density are the subject of the first Hardy–Littlewood conjecture, which can be motivated by the heuristic that the prime numbers behave similarly to a random sequence of numbers with density given by the prime number theorem. Analytic properties Analytic number theory studies number theory through the lens of continuous functions, limits, infinite series, and the related mathematics of the infinite and infinitesimal. This area of study began with Leonhard Euler and his first major result, the solution to the Basel problem. The problem asked for the value of the infinite sum which today can be recognized as the value of the Riemann zeta function. This function is closely connected to the prime numbers and to one of the most significant unsolved problems in mathematics, the Riemann hypothesis. Euler showed that . 
The reciprocal of this number, , is the limiting probability that two random numbers selected uniformly from a large range are relatively prime (have no factors in common). The distribution of primes in the large, such as the question how many primes are smaller than a given, large threshold, is described by the prime number theorem, but no efficient formula for the -th prime is known. Dirichlet's theorem on arithmetic progressions, in its basic form, asserts that linear polynomials with relatively prime integers and take infinitely many prime values. Stronger forms of the theorem state that the sum of the reciprocals of these prime values diverges, and that different linear polynomials with the same have approximately the same proportions of primes. Although conjectures have been formulated about the proportions of primes in higher-degree polynomials, they remain unproven, and it is unknown whether there exists a quadratic polynomial that (for integer arguments) is prime infinitely often. Analytical proof of Euclid's theorem Euler's proof that there are infinitely many primes considers the sums of reciprocals of primes, Euler showed that, for any arbitrary real number , there exists a prime for which this sum is bigger than . This shows that there are infinitely many primes, because if there were finitely many primes the sum would reach its maximum value at the biggest prime rather than growing past every . The growth rate of this sum is described more precisely by Mertens' second theorem. For comparison, the sum does not grow to infinity as goes to infinity (see the Basel problem). In this sense, prime numbers occur more often than squares of natural numbers, although both sets are infinite. Brun's theorem states that the sum of the reciprocals of twin primes, is finite. Because of Brun's theorem, it is not possible to use Euler's method to solve the twin prime conjecture, that there exist infinitely many twin primes. Number of primes below a given bound The prime-counting function is defined as the number of primes not greater than . For example, , since there are five primes less than or equal to 11. Methods such as the Meissel–Lehmer algorithm can compute exact values of faster than it would be possible to list each prime up to . The prime number theorem states that is asymptotic to , which is denoted as and means that the ratio of to the right-hand fraction approaches 1 as grows to infinity. This implies that the likelihood that a randomly chosen number less than is prime is (approximately) inversely proportional to the number of digits It also implies that the th prime number is proportional to and therefore that the average size of a prime gap is proportional to . A more accurate estimate for is given by the offset logarithmic integral Arithmetic progressions An arithmetic progression is a finite or infinite sequence of numbers such that consecutive numbers in the sequence all have the same difference. This difference is called the modulus of the progression. For example, 3, 12, 21, 30, 39, ..., is an infinite arithmetic progression with modulus 9. In an arithmetic progression, all the numbers have the same remainder when divided by the modulus; in this example, the remainder is 3. Because both the modulus 9 and the remainder 3 are multiples of 3, so is every element in the sequence. Therefore, this progression contains only one prime number, 3 itself. In general, the infinite progression can have more than one prime only when its remainder and modulus are relatively prime. 
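A quick computation illustrates the coprimality condition just stated. This sketch (my own, with arbitrarily chosen residues modulo 9) counts the primes appearing in two progressions: one whose remainder shares a factor with the modulus, and one whose remainder is relatively prime to it.

```python
from math import gcd, isqrt

def is_prime(n: int) -> bool:
    """Trial-division primality check, as described later in the article."""
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

def primes_in_progression(first: int, modulus: int, count: int) -> list[int]:
    """Primes among the first `count` terms of first, first + modulus, first + 2*modulus, ..."""
    terms = [first + k * modulus for k in range(count)]
    return [t for t in terms if is_prime(t)]

# Remainder 3 shares the factor 3 with the modulus 9: only the prime 3 ever appears.
print(primes_in_progression(3, 9, 1000))   # [3]
# Remainder 2 is relatively prime to 9 (gcd = 1): primes keep turning up.
print(gcd(2, 9))                           # 1
print(primes_in_progression(2, 9, 50))     # [2, 11, 29, 47, 83, 101, ...]
```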
If they are relatively prime, Dirichlet's theorem on arithmetic progressions asserts that the progression contains infinitely many primes. The Green–Tao theorem shows that there are arbitrarily long finite arithmetic progressions consisting only of primes. Prime values of quadratic polynomials Euler noted that the function yields prime numbers for , although composite numbers appear among its later values. The search for an explanation for this phenomenon led to the deep algebraic number theory of Heegner numbers and the class number problem. The Hardy–Littlewood conjecture F predicts the density of primes among the values of quadratic polynomials with integer coefficients in terms of the logarithmic integral and the polynomial coefficients. No quadratic polynomial has been proven to take infinitely many prime values. The Ulam spiral arranges the natural numbers in a two-dimensional grid, spiraling in concentric squares surrounding the origin with the prime numbers highlighted. Visually, the primes appear to cluster on certain diagonals and not others, suggesting that some quadratic polynomials take prime values more often than others. Zeta function and the Riemann hypothesis One of the most famous unsolved questions in mathematics, dating from 1859, and one of the Millennium Prize Problems, is the Riemann hypothesis, which asks where the zeros of the Riemann zeta function are located. This function is an analytic function on the complex numbers. For complex numbers with real part greater than one it equals both an infinite sum over all integers, and an infinite product over the prime numbers, This equality between a sum and a product, discovered by Euler, is called an Euler product. The Euler product can be derived from the fundamental theorem of arithmetic, and shows the close connection between the zeta function and the prime numbers. It leads to another proof that there are infinitely many primes: if there were only finitely many, then the sum-product equality would also be valid at , but the sum would diverge (it is the harmonic series ) while the product would be finite, a contradiction. The Riemann hypothesis states that the zeros of the zeta-function are all either negative even numbers, or complex numbers with real part equal to 1/2. The original proof of the prime number theorem was based on a weak form of this hypothesis, that there are no zeros with real part equal to 1, although other more elementary proofs have been found. The prime-counting function can be expressed by Riemann's explicit formula as a sum in which each term comes from one of the zeros of the zeta function; the main term of this sum is the logarithmic integral, and the remaining terms cause the sum to fluctuate above and below the main term. In this sense, the zeros control how regularly the prime numbers are distributed. If the Riemann hypothesis is true, these fluctuations will be small, and the asymptotic distribution of primes given by the prime number theorem will also hold over much shorter intervals (of length about the square root of for intervals near a number ). Abstract algebra Modular arithmetic and finite fields Modular arithmetic modifies usual arithmetic by only using the numbers , for a natural number called the modulus. Any other natural number can be mapped into this system by replacing it by its remainder after division by . 
Modular sums, differences and products are calculated by performing the same replacement by the remainder on the result of the usual sum, difference, or product of integers. Equality of integers corresponds to congruence in modular arithmetic: and are congruent (written mod ) when they have the same remainder after division by . However, in this system of numbers, division by all nonzero numbers is possible if and only if the modulus is prime. For instance, with the prime number as modulus, division by is possible: , because clearing denominators by multiplying both sides by gives the valid formula . However, with the composite modulus , division by is impossible. There is no valid solution to : clearing denominators by multiplying by causes the left-hand side to become while the right-hand side becomes either or . In the terminology of abstract algebra, the ability to perform division means that modular arithmetic modulo a prime number forms a field or, more specifically, a finite field, while other moduli only give a ring but not a field. Several theorems about primes can be formulated using modular arithmetic. For instance, Fermat's little theorem states that if (mod ), then (mod ). Summing this over all choices of gives the equation valid whenever is prime. Giuga's conjecture says that this equation is also a sufficient condition for to be prime. Wilson's theorem says that an integer is prime if and only if the factorial is congruent to mod . For a composite this cannot hold, since one of its factors divides both and , and so is impossible. p-adic numbers The -adic order of an integer is the number of copies of in the prime factorization of . The same concept can be extended from integers to rational numbers by defining the -adic order of a fraction to be . The -adic absolute value of any rational number is then defined as . Multiplying an integer by its -adic absolute value cancels out the factors of in its factorization, leaving only the other primes. Just as the distance between two real numbers can be measured by the absolute value of their distance, the distance between two rational numbers can be measured by their -adic distance, the -adic absolute value of their difference. For this definition of distance, two numbers are close together (they have a small distance) when their difference is divisible by a high power of . In the same way that the real numbers can be formed from the rational numbers and their distances, by adding extra limiting values to form a complete field, the rational numbers with the -adic distance can be extended to a different complete field, the -adic numbers. This picture of an order, absolute value, and complete field derived from them can be generalized to algebraic number fields and their valuations (certain mappings from the multiplicative group of the field to a totally ordered additive group, also called orders), absolute values (certain multiplicative mappings from the field to the real numbers, also called norms), and places (extensions to complete fields in which the given field is a dense set, also called completions). The extension from the rational numbers to the real numbers, for instance, is a place in which the distance between numbers is the usual absolute value of their difference. The corresponding mapping to an additive group would be the logarithm of the absolute value, although this does not meet all the requirements of a valuation. 
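As a small illustration of the p-adic order and absolute value just defined (a sketch of my own, not taken from a source), the following code counts the factors of p in a rational number and returns p raised to minus that count.

```python
from fractions import Fraction

def padic_order(x: Fraction, p: int) -> int:
    """Exponent of p in the factorization of a nonzero rational x (negative if p divides the denominator)."""
    if x == 0:
        raise ValueError("the p-adic order of 0 is undefined (or taken as infinity)")
    order = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        order += 1
    while den % p == 0:
        den //= p
        order -= 1
    return order

def padic_abs(x: Fraction, p: int) -> Fraction:
    """p-adic absolute value |x|_p = p ** (-order)."""
    return Fraction(1, p) ** padic_order(x, p)

# 45 = 3^2 * 5, so its 3-adic order is 2 and |45|_3 = 1/9.
print(padic_order(Fraction(45), 3), padic_abs(Fraction(45), 3))
# Two numbers are 3-adically close when their difference is divisible by a high power of 3:
print(padic_abs(Fraction(1) - Fraction(82), 3))   # |-81|_3 = 1/81
```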
According to Ostrowski's theorem, up to a natural notion of equivalence, the real numbers and p-adic numbers, with their orders and absolute values, are the only valuations, absolute values, and places on the rational numbers. The local-global principle allows certain problems over the rational numbers to be solved by piecing together solutions from each of their places, again underlining the importance of primes to number theory. Prime elements in rings A commutative ring is an algebraic structure where addition, subtraction and multiplication are defined. The integers are a ring, and the prime numbers in the integers have been generalized to rings in two different ways, prime elements and irreducible elements. An element p of a ring R is called prime if it is nonzero, has no multiplicative inverse (that is, it is not a unit), and satisfies the following requirement: whenever p divides the product xy of two elements of R, it also divides at least one of x or y. An element is irreducible if it is neither a unit nor the product of two other non-unit elements. In the ring of integers, the prime and irreducible elements form the same set, {…, −11, −7, −5, −3, −2, 2, 3, 5, 7, 11, …}. In an arbitrary ring, all prime elements are irreducible. The converse does not hold in general, but does hold for unique factorization domains. The fundamental theorem of arithmetic continues to hold (by definition) in unique factorization domains. An example of such a domain is the Gaussian integers Z[i], the ring of complex numbers of the form a + bi where i denotes the imaginary unit and a and b are arbitrary integers. Its prime elements are known as Gaussian primes. Not every number that is prime among the integers remains prime in the Gaussian integers; for instance, the number 2 can be written as a product of the two Gaussian primes 1 + i and 1 − i. Rational primes (the prime elements in the integers) congruent to 3 mod 4 are Gaussian primes, but rational primes congruent to 1 mod 4 are not. This is a consequence of Fermat's theorem on sums of two squares, which states that an odd prime p is expressible as the sum of two squares, p = x² + y², and therefore factorable as p = (x + iy)(x − iy), exactly when p is 1 mod 4. Prime ideals Not every ring is a unique factorization domain. For instance, in the ring of numbers a + b√−5 (for integers a and b) the number 21 has two factorizations 21 = 3 · 7 = (1 + 2√−5)(1 − 2√−5), where none of the four factors can be reduced any further, so it does not have a unique factorization. In order to extend unique factorization to a larger class of rings, the notion of a number can be replaced with that of an ideal, a subset of the elements of a ring that contains all sums of pairs of its elements, and all products of its elements with ring elements. Prime ideals, which generalize prime elements in the sense that the principal ideal generated by a prime element is a prime ideal, are an important tool and object of study in commutative algebra, algebraic number theory and algebraic geometry. The prime ideals of the ring of integers are the ideals (0), (2), (3), (5), (7), (11), ... The fundamental theorem of arithmetic generalizes to the Lasker–Noether theorem, which expresses every ideal in a Noetherian commutative ring as an intersection of primary ideals, which are the appropriate generalizations of prime powers. The spectrum of a ring is a geometric space whose points are the prime ideals of the ring. Arithmetic geometry also benefits from this notion, and many concepts exist in both geometry and number theory. 
For example, factorization or ramification of prime ideals when lifted to an extension field, a basic problem of algebraic number theory, bears some resemblance to ramification in geometry. These concepts can even assist with number-theoretic questions solely concerned with integers. For example, prime ideals in the ring of integers of quadratic number fields can be used in proving quadratic reciprocity, a statement that concerns the existence of square roots modulo integer prime numbers. Early attempts to prove Fermat's Last Theorem led to Kummer's introduction of regular primes, integer prime numbers connected with the failure of unique factorization in the cyclotomic integers. The question of how many integer prime numbers factor into a product of multiple prime ideals in an algebraic number field is addressed by Chebotarev's density theorem, which (when applied to the cyclotomic integers) has Dirichlet's theorem on primes in arithmetic progressions as a special case. Group theory In the theory of finite groups the Sylow theorems imply that, if a power of a prime number p^n divides the order of a group, then the group has a subgroup of order p^n. By Lagrange's theorem, any group of prime order is a cyclic group, and by Burnside's theorem any group whose order is divisible by only two primes is solvable. Computational methods For a long time, number theory in general, and the study of prime numbers in particular, was seen as the canonical example of pure mathematics, with no applications outside of mathematics other than the use of prime numbered gear teeth to distribute wear evenly. In particular, number theorists such as British mathematician G. H. Hardy prided themselves on doing work that had absolutely no military significance. This vision of the purity of number theory was shattered in the 1970s, when it was publicly announced that prime numbers could be used as the basis for the creation of public-key cryptography algorithms. These applications have led to significant study of algorithms for computing with prime numbers, and in particular of primality testing, methods for determining whether a given number is prime. The most basic primality testing routine, trial division, is too slow to be useful for large numbers. One group of modern primality tests is applicable to arbitrary numbers, while more efficient tests are available for numbers of special types. Most primality tests only tell whether their argument is prime or not. Routines that also provide a prime factor of composite arguments (or all of its prime factors) are called factorization algorithms. Prime numbers are also used in computing for checksums, hash tables, and pseudorandom number generators. Trial division The most basic method of checking the primality of a given integer n is called trial division. This method divides n by each integer from 2 up to the square root of n. Any such integer dividing n evenly establishes n as composite; otherwise it is prime. Integers larger than the square root do not need to be checked because, whenever n = a·b, one of the two factors a and b is less than or equal to the square root of n. Another optimization is to check only primes as factors in this range. For instance, to check whether 37 is prime, this method divides it by the primes in the range from 2 to √37, which are 2, 3, and 5. Each division produces a nonzero remainder, so 37 is indeed prime. 
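The procedure just described translates almost line for line into code. The following Python sketch is a plain illustration of trial division, not an optimized implementation; it tries every integer up to the square root rather than only the primes, which the text notes as a further optimization.

```python
def is_prime_trial_division(n: int) -> bool:
    """Check primality by dividing n by every integer from 2 up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:          # only divisors up to the square root matter
        if n % d == 0:
            return False       # a nontrivial divisor was found, so n is composite
        d += 1
    return True

print(is_prime_trial_division(37))   # True: every candidate divisor leaves a remainder
```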
Although this method is simple to describe, it is impractical for testing the primality of large integers, because the number of tests that it performs grows exponentially as a function of the number of digits of these integers. However, trial division is still used, with a smaller limit than the square root on the divisor size, to quickly discover composite numbers with small factors, before using more complicated methods on the numbers that pass this filter. Sieves Before computers, mathematical tables listing all of the primes or prime factorizations up to a given limit were commonly printed. The oldest known method for generating a list of primes is called the sieve of Eratosthenes. Another more asymptotically efficient sieving method for the same problem is the sieve of Atkin. In advanced mathematics, sieve theory applies similar methods to other problems. Primality testing versus primality proving Some of the fastest modern tests for whether an arbitrary given number is prime are probabilistic (or Monte Carlo) algorithms, meaning that they have a small random chance of producing an incorrect answer. For instance the Solovay–Strassen primality test on a given number p chooses a number a randomly from 2 through p − 2 and uses modular exponentiation to check whether a^((p−1)/2) ± 1 is divisible by p. If so, it answers yes and otherwise it answers no. If p really is prime, it will always answer yes, but if p is composite then it answers yes with probability at most 1/2 and no with probability at least 1/2. If this test is repeated n times on the same number, the probability that a composite number could pass the test every time is at most 2^(−n). Because this decreases exponentially with the number of tests, it provides high confidence (although not certainty) that a number that passes the repeated test is prime. On the other hand, if the test ever fails, then the number is certainly composite. A composite number that passes such a test is called a pseudoprime. In contrast, some other algorithms guarantee that their answer will always be correct: primes will always be determined to be prime and composites will always be determined to be composite. For instance, this is true of trial division. The algorithms with guaranteed-correct output include both deterministic (non-random) algorithms, such as the AKS primality test, and randomized Las Vegas algorithms where the random choices made by the algorithm do not affect its final answer, such as some variations of elliptic curve primality proving. When the elliptic curve method concludes that a number is prime, it provides a primality certificate that can be verified quickly. The elliptic curve primality test is the fastest in practice of the guaranteed-correct primality tests, but its runtime analysis is based on heuristic arguments rather than rigorous proofs. The AKS primality test has mathematically proven time complexity, but is slower than elliptic curve primality proving in practice. These methods can be used to generate large random prime numbers, by generating and testing random numbers until finding one that is prime; when doing this, a faster probabilistic test can quickly eliminate most composite numbers before a guaranteed-correct algorithm is used to verify that the remaining numbers are prime. The following table lists some of these tests. Their running time is given in terms of n, the number to be tested and, for probabilistic algorithms, the number k of tests performed. 
Moreover, ε is an arbitrarily small positive number, and log is the logarithm to an unspecified base. The big O notation means that each time bound should be multiplied by a constant factor to convert it from dimensionless units to units of time; this factor depends on implementation details such as the type of computer used to run the algorithm, but not on the input parameters n and k. Special-purpose algorithms and the largest known prime In addition to the aforementioned tests that apply to any natural number, some numbers of a special form can be tested for primality more quickly. For example, the Lucas–Lehmer primality test can determine whether a Mersenne number (one less than a power of two) is prime, deterministically, in the same time as a single iteration of the Miller–Rabin test. This is why, since 1992, the largest known prime has always been a Mersenne prime. It is conjectured that there are infinitely many Mersenne primes. The following table gives the largest known primes of various types. Some of these primes have been found using distributed computing. In 2009, the Great Internet Mersenne Prime Search project was awarded a US$100,000 prize for first discovering a prime with at least 10 million digits. The Electronic Frontier Foundation also offers $150,000 and $250,000 for primes with at least 100 million digits and 1 billion digits, respectively. Integer factorization Given a composite integer n, the task of providing one (or all) prime factors is referred to as factorization of n. It is significantly more difficult than primality testing, and although many factorization algorithms are known, they are slower than the fastest primality testing methods. Trial division and Pollard's rho algorithm can be used to find very small factors of n, and elliptic curve factorization can be effective when n has factors of moderate size. Methods suitable for arbitrarily large numbers that do not depend on the size of their factors include the quadratic sieve and general number field sieve. As with primality testing, there are also factorization algorithms that require their input to have a special form, including the special number field sieve. As of 2019, the largest number known to have been factored by a general-purpose algorithm is RSA-240, which has 240 decimal digits (795 bits) and is the product of two large primes. Shor's algorithm can factor any integer in a polynomial number of steps on a quantum computer. However, current technology can only run this algorithm for very small numbers. As of 2019, the largest number that has been factored by a quantum computer running Shor's algorithm is 21. Other computational applications Several public-key cryptography algorithms, such as RSA and the Diffie–Hellman key exchange, are based on large prime numbers (2048-bit primes are common). RSA relies on the assumption that it is much easier (that is, more efficient) to perform the multiplication of two (large) numbers x and y than to calculate x and y (assumed coprime) if only the product xy is known. The Diffie–Hellman key exchange relies on the fact that there are efficient algorithms for modular exponentiation (computing a^b mod c), while the reverse operation (the discrete logarithm) is thought to be a hard problem. Prime numbers are frequently used for hash tables. For instance the original method of Carter and Wegman for universal hashing was based on computing hash functions by choosing random linear functions modulo large prime numbers. 
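One standard way to realize such a family of random linear functions is h(x) = ((a·x + b) mod p) mod m, with p a prime larger than any key and a, b chosen at random. The Python sketch below is a generic textbook illustration of this idea, not Carter and Wegman's original code; the prime 2^61 − 1 and the function names are arbitrary choices made here for illustration.

```python
import random

P = 2**61 - 1   # a large prime (a Mersenne prime), chosen here only for convenience

def make_universal_hash(num_buckets: int):
    """Return a random member of the family h(x) = ((a*x + b) mod P) mod num_buckets."""
    a = random.randrange(1, P)   # a is nonzero
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % num_buckets

h = make_universal_hash(1024)
print(h(42), h(43))   # bucket indices in range(1024); distinct keys rarely collide on average
```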
Carter and Wegman generalized this method to k-independent hashing by using higher-degree polynomials, again modulo large primes. As well as in the hash function, prime numbers are used for the hash table size in quadratic probing based hash tables to ensure that the probe sequence covers the whole table. Some checksum methods are based on the mathematics of prime numbers. For instance the checksums used in International Standard Book Numbers are defined by taking the remainder of the number modulo 11, a prime number. Because 11 is prime this method can detect both single-digit errors and transpositions of adjacent digits. Another checksum method, Adler-32, uses arithmetic modulo 65521, the largest prime number less than 2^16. Prime numbers are also used in pseudorandom number generators including linear congruential generators and the Mersenne Twister. Other applications Prime numbers are of central importance to number theory but also have many applications to other areas within mathematics, including abstract algebra and elementary geometry. For example, it is possible to place prime numbers of points in a two-dimensional grid so that no three are in a line, or so that every triangle formed by three of the points has large area. Another example is Eisenstein's criterion, a test for whether a polynomial is irreducible based on divisibility of its coefficients by a prime number and its square. The concept of a prime number is so important that it has been generalized in different ways in various branches of mathematics. Generally, "prime" indicates minimality or indecomposability, in an appropriate sense. For example, the prime field of a given field is its smallest subfield that contains both 0 and 1. It is either the field of rational numbers or a finite field with a prime number of elements, whence the name. Often a second, additional meaning is intended by using the word prime, namely that any object can be, essentially uniquely, decomposed into its prime components. For example, in knot theory, a prime knot is a knot that is indecomposable in the sense that it cannot be written as the connected sum of two nontrivial knots. Any knot can be uniquely expressed as a connected sum of prime knots. The prime decomposition of 3-manifolds is another example of this type. Beyond mathematics and computing, prime numbers have potential connections to quantum mechanics, and have been used metaphorically in the arts and literature. They have also been used in evolutionary biology to explain the life cycles of cicadas. Constructible polygons and polygon partitions Fermat primes are primes of the form 2^(2^k) + 1 with k a nonnegative integer. They are named after Pierre de Fermat, who conjectured that all such numbers are prime. The first five of these numbers – 3, 5, 17, 257, and 65,537 – are prime, but 2^(2^5) + 1 = 4,294,967,297 is composite and so are all other Fermat numbers that have been verified as of 2017. A regular n-gon is constructible using straightedge and compass if and only if the odd prime factors of n (if any) are distinct Fermat primes. Likewise, a regular n-gon may be constructed using straightedge, compass, and an angle trisector if and only if the prime factors of n are any number of copies of 2 or 3 together with a (possibly empty) set of distinct Pierpont primes, primes of the form 2^a 3^b + 1. It is possible to partition any convex polygon into n smaller convex polygons of equal area and equal perimeter, when n is a power of a prime number, but this is not known for other values of n. 
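The statements about Fermat numbers above can be checked directly for small cases. The following Python sketch is a simple illustration (using naive trial division, which is adequate only because these particular numbers are small); it confirms that the first five Fermat numbers are prime while the sixth, 2^(2^5) + 1, is not.

```python
def is_prime(n: int) -> bool:
    """Deterministic check by trial division; fine for the small Fermat numbers below."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for k in range(6):
    f = 2**(2**k) + 1            # the k-th Fermat number
    print(k, f, is_prime(f))     # prime for k = 0..4; 4294967297 = 641 * 6700417 is composite
```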
Quantum mechanics Beginning with the work of Hugh Montgomery and Freeman Dyson in the 1970s, mathematicians and physicists have speculated that the zeros of the Riemann zeta function are connected to the energy levels of quantum systems. Prime numbers are also significant in quantum information science, thanks to mathematical structures such as mutually unbiased bases and symmetric informationally complete positive-operator-valued measures. Biology The evolutionary strategy used by cicadas of the genus Magicicada makes use of prime numbers. These insects spend most of their lives as grubs underground. They only pupate and then emerge from their burrows after 7, 13 or 17 years, at which point they fly about, breed, and then die after a few weeks at most. Biologists theorize that these prime-numbered breeding cycle lengths have evolved in order to prevent predators from synchronizing with these cycles. In contrast, the multi-year periods between flowering in bamboo plants are hypothesized to be smooth numbers, having only small prime numbers in their factorizations. Arts and literature Prime numbers have influenced many artists and writers. The French composer Olivier Messiaen used prime numbers to create ametrical music through "natural phenomena". In works such as La Nativité du Seigneur (1935) and Quatre études de rythme (1949–50), he simultaneously employs motifs with lengths given by different prime numbers to create unpredictable rhythms: the primes 41, 43, 47 and 53 appear in the third étude, "Neumes rythmiques". According to Messiaen this way of composing was "inspired by the movements of nature, movements of free and unequal durations". In his science fiction novel Contact, scientist Carl Sagan suggested that prime factorization could be used as a means of establishing two-dimensional image planes in communications with aliens, an idea that he had first developed informally with American astronomer Frank Drake in 1975. In the novel The Curious Incident of the Dog in the Night-Time by Mark Haddon, the narrator arranges the sections of the story by consecutive prime numbers as a way to convey the mental state of its main character, a mathematically gifted teen with Asperger syndrome. Prime numbers are used as a metaphor for loneliness and isolation in the Paolo Giordano novel The Solitude of Prime Numbers, in which they are portrayed as "outsiders" among integers. Notes References External links Caldwell, Chris, The Prime Pages at primes.utm.edu. Plus teacher and student package: prime numbers from Plus, the free online mathematics magazine produced by the Millennium Mathematics Project at the University of Cambridge. Generators and calculators Prime factors calculator can factorize any positive integer up to 20 digits. Fast Online primality test with factorization makes use of the Elliptic Curve Method (up to thousand-digits numbers, requires Java). Huge database of prime numbers Prime Numbers up to 1 trillion Articles containing proofs Integer sequences
23669
https://en.wikipedia.org/wiki/Piers%20Anthony
Piers Anthony
Piers Anthony Dillingham Jacob (born August 6, 1934) is an American author in the science fiction and fantasy genres, publishing under the name Piers Anthony. He is best known for his long-running novel series set in the fictional realm of Xanth. Many of his books have appeared on The New York Times Best Seller list, and he claims one of his greatest achievements has been to publish a book beginning with every letter of the alphabet, from Anthonology to Zombie Lover. Early life Anthony's parents, Alfred and Norma Jacob, were Quaker pacifists studying at Oxford University who interrupted their studies in 1936 to undertake relief work on behalf of the Quakers during the Spanish Civil War, establishing a food kitchen for children in Barcelona. Piers and his sister were left in England in the care of their maternal grandparents and a nanny. Alfred Jacob, although a British citizen, had been born in America near Philadelphia, and in 1940, after being forced out of Spain and with the situation in Britain deteriorating, the family sailed to the United States. In 1941 the family settled in a rustic "back to the land" utopian community near Winhall, Vermont, where a young Piers made the acquaintance of radical author Scott Nearing, a neighbor. Both parents resumed their academic studies, and Alfred eventually became a professor of Romance languages, teaching at a number of colleges in the Philadelphia area. Piers was moved around to a number of schools, eventually enrolling in Goddard College in Vermont where he graduated in 1956. On This American Life on July 27, 2012, Anthony revealed that his parents had divorced, he was bullied, and he had poor grades in school. Anthony referred to his high school as a "very fancy private school", and refuses to donate money to it. He recalls being part of "the lower crust", and that no one paid attention to, or cared about him. He said, "I didn't like being a member of the underclass, of the peons like that". Marriage and early career Anthony met his future wife, Carol Marble, while both were attending college. They were married in 1956, the same year he graduated from Goddard College, and he subsequently worked as a handyman. In 1957, Anthony decided to join the United States Army, as his wife was now pregnant and they needed both medical coverage and a steady source of income. During his two-year enlistment, he became a naturalized U.S. citizen in 1958 and was editor and cartoonist for his battalion's newspaper. After completing military service, he briefly taught at Admiral Farragut Academy in St. Petersburg, Florida before deciding to try to become a full-time writer. Anthony and his wife made a deal: if he could sell a piece of writing within one year, she would continue to work to support him. But if he could not sell anything in that year, then he would forever give up his dream of being a writer. At the end of the year, he managed to get a short story published. He credits his wife as the person who made his writing career possible, and he advises aspiring writers that they need to have a source of income other than their writing in order to get through the early years of a writing career. Writing On multiple occasions Anthony has moved from one publisher to another (taking a popular series with him) when he says he felt the editors were unduly tampering with his work. He has sued publishers for accounting malfeasance and won judgments in his favor. Anthony maintains an Internet Publishers Survey in the interest of helping aspiring writers. 
For this service, he won the 2003 "Friend of EPIC" award for service to the electronic publishing community. His website won the Special Recognition for Service to Writers award from Preditors and Editors, an author's guide to publishers and writing services. His popular novel series Xanth has been optioned for movies. It inspired the DOS video game Companions of Xanth, by Legend Entertainment. The same series also spawned the board game Xanth by Mayfair Games. Anthony's novels usually end with a chapter-long Author's Note, in which he talks about himself, his life, and his experiences as they related to the process of writing the novel. He often discusses correspondence with readers and any real-world issues that influenced the novel. Since about 2000, Anthony has written his novels in a Linux environment. Anthony's Xanth series was ranked No. 99 in a 2011 NPR readers' poll of best science fiction and fantasy books. In other media Act One of episode 470 of the radio program This American Life is an account of boyhood obsessions with Piers Anthony. The act is written and narrated by writer Logan Hill who, as a 12-year-old, was consumed with reading Anthony's novels. For a decade he felt he must have been Anthony's number one fan, until, when he was 22, he met "Andy" at a wedding and discovered their mutual interest in the writer. Andy is interviewed for the story and explains that, as a teenager, he had used escapist novels in order to cope with his alienating school and home life in Buffalo, New York. In 1987, at age 15, he decided to run away to Florida in order to try to live with Piers Anthony. The story includes Anthony's reflections on these events. Naomi King, the daughter of writer Stephen King, enjoyed reading books by Piers Anthony, which included things like pixies, imps and fairies. After she told her father she had, "very little interest in my vampires, Ghoulies and slushy crawling things", he wrote The Eyes of the Dragon which was originally published in 1984 and later in 1987 by Viking Press. But What of Earth? Early in Anthony's literary career, there was a dispute surrounding the original publication (1976) of But What of Earth?. Editor Roger Elwood commissioned the novel for his nascent science-fiction line Laser Books. According to Anthony, he completed But What of Earth?, and Elwood accepted and purchased it. Elwood then told Anthony that he wished to make several minor changes, and in order not to waste Anthony's time, he had hired copy editor (and author) Robert Coulson to retype the manuscript with the changes. Anthony described Coulson as a friend and was initially open to his contribution. However, Elwood told Coulson he was to be a full collaborator, free to make revisions to Anthony's text in line with suggestions made by other copy editors. Elwood promised Coulson a 50–50 split with Anthony on all future royalties. According to Anthony, the published novel was very different from his version, with changes to characters and dialog, and with scenes added and removed. Anthony felt the changes worsened the novel. Laser's ultimate publication of But What of Earth? listed Anthony and Coulson together as collaborators. Publication rights were reverted to Anthony under threat of legal action. In 1989, Anthony (re)published his original But What of Earth? in an annotated edition through Tor Books. 
This edition contains an introduction and conclusion setting out the story of the novel's permutations and roughly 60 pages of notes by Anthony giving examples of changes to plot and characters, and describing some of the comments made by copy editors on his manuscript. Criticism Some critics have described Anthony's portrayal of female characters as stereotypical and misogynist, particularly in the early parts of the Xanth series, and have taken issue with themes of underage sexuality and eroticism within Anthony's work. Anthony has argued in interviews that these critiques do not accurately reflect his work, and states that he gets more fan mail from female readers than male readers. Personal life He and his first wife, Carol Ann Marble Jacob, had two daughters, Penelope "Penny" Carolyn and Cheryl. Penny had one child and died in 2009 due to complications from skin cancer. Carol Ann died at home on October 3, 2019, from what is believed to have been heart-related complications of a 15-year battle with chronic inflammatory demyelinating polyneuropathy (CIDP). On April 22, 2020, he married MaryLee Boyance. Anthony lived on his tree farm in Florida until March 2023, at which time he sold his farm and moved to California. Anthony is a vegetarian. Religious beliefs Regarding his religious beliefs, Anthony wrote in the October 2004 entry of his personal website, "I'm agnostic, which means I regard the case as unproven, but I'm much closer to the atheist position than to the theist one." In 2017 he stated, "I am more certain about God and the Afterlife: they don't exist." Bibliography References External links Piers Anthony's page at Macmillan.com Extensive 2005 Interview Piers Anthony Collection. University of South Florida. Special Collections. 1934 births 20th-century American male writers 20th-century American novelists 20th-century American short story writers 21st-century American male writers 21st-century American novelists 21st-century American short story writers Admiral Farragut Academy alumni American agnostics American fantasy writers American male novelists American male short story writers American science fiction writers English agnostics English emigrants to the United States English fantasy writers English science fiction writers Goddard College alumni Living people Naturalized citizens of the United States Novelists from Florida United States Army soldiers Westtown School alumni Writers from Oxford
23670
https://en.wikipedia.org/wiki/Perfect%20number
Perfect number
In number theory, a perfect number is a positive integer that is equal to the sum of its positive proper divisors, that is, divisors excluding the number itself. For instance, 6 has proper divisors 1, 2 and 3, and 1 + 2 + 3 = 6, so 6 is a perfect number. The next perfect number is 28, since 1 + 2 + 4 + 7 + 14 = 28. The first four perfect numbers are 6, 28, 496 and 8128. The sum of proper divisors of a number is called its aliquot sum, so a perfect number is one that is equal to its aliquot sum. Equivalently, a perfect number is a number that is half the sum of all of its positive divisors; in symbols, σ(n) = 2n, where σ is the sum-of-divisors function. This definition is ancient, appearing as early as Euclid's Elements (VII.22) where it is called τέλειος ἀριθμός (perfect, ideal, or complete number). Euclid also proved a formation rule (IX.36) whereby q(q + 1)/2 is an even perfect number whenever q is a prime of the form 2^p − 1 for positive integer p—what is now called a Mersenne prime. Two millennia later, Leonhard Euler proved that all even perfect numbers are of this form. This is known as the Euclid–Euler theorem. It is not known whether there are any odd perfect numbers, nor whether infinitely many perfect numbers exist. History In about 300 BC Euclid showed that if 2^p − 1 is prime then 2^(p−1)(2^p − 1) is perfect. The first four perfect numbers were the only ones known to early Greek mathematics, and the mathematician Nicomachus noted 8128 as early as around AD 100. In modern language, Nicomachus states without proof that every perfect number is of the form 2^(n−1)(2^n − 1) where 2^n − 1 is prime. He seems to be unaware that n itself has to be prime. He also says (wrongly) that the perfect numbers end in 6 or 8 alternately. (The first 5 perfect numbers end with digits 6, 8, 6, 8, 6; but the sixth also ends in 6.) Philo of Alexandria in his first-century book "On the creation" mentions perfect numbers, claiming that the world was created in 6 days and the moon orbits in 28 days because 6 and 28 are perfect. Philo is followed by Origen, and by Didymus the Blind, who adds the observation that there are only four perfect numbers that are less than 10,000. (Commentary on Genesis 1. 14–19). St Augustine defines perfect numbers in City of God (Book XI, Chapter 30) in the early 5th century AD, repeating the claim that God created the world in 6 days because 6 is the smallest perfect number. The Egyptian mathematician Ismail ibn Fallūs (1194–1252) mentioned the next three perfect numbers (33,550,336; 8,589,869,056; and 137,438,691,328) and listed a few more which are now known to be incorrect. The first known European mention of the fifth perfect number is a manuscript written between 1456 and 1461 by an unknown mathematician. In 1588, the Italian mathematician Pietro Cataldi identified the sixth (8,589,869,056) and the seventh (137,438,691,328) perfect numbers, and also proved that every perfect number obtained from Euclid's rule ends with a 6 or an 8. Even perfect numbers Euclid proved that 2^(p−1)(2^p − 1) is an even perfect number whenever 2^p − 1 is prime (Elements, Prop. IX.36). For example, the first four perfect numbers are generated by the formula 2^(p−1)(2^p − 1), with p a prime number, as follows: for p = 2, 2 × 3 = 6; for p = 3, 4 × 7 = 28; for p = 5, 16 × 31 = 496; and for p = 7, 64 × 127 = 8128. Prime numbers of the form 2^p − 1 are known as Mersenne primes, after the seventeenth-century monk Marin Mersenne, who studied number theory and perfect numbers. For 2^p − 1 to be prime, it is necessary that p itself be prime. However, not all numbers of the form 2^p − 1 with p a prime are prime; for example, 2^11 − 1 = 2047 = 23 × 89 is not a prime number. In fact, Mersenne primes are very rare: of the primes p up to 68,874,199, 2^p − 1 is prime for only 48 of them. 
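Euclid's rule can be explored numerically: whenever 2^p − 1 is prime, the number 2^(p−1)(2^p − 1) is perfect. The following Python sketch is a naive illustration (trial division and a brute-force divisor sum, so it is only suitable for small p).

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_perfect(n: int) -> bool:
    """True if n equals the sum of its proper divisors."""
    return n > 1 and sum(d for d in range(1, n // 2 + 1) if n % d == 0) == n

for p in range(2, 8):
    m = 2**p - 1                    # candidate Mersenne number
    if is_prime(m):
        n = 2**(p - 1) * m          # Euclid's construction
        print(p, n, is_perfect(n))  # prints 6, 28, 496 and 8128, each confirmed perfect
```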
While Nicomachus had stated (without proof) that perfect numbers were of the form 2^(n−1)(2^n − 1) where 2^n − 1 is prime (though he stated this somewhat differently), Ibn al-Haytham (Alhazen) circa AD 1000 was unwilling to go that far, declaring instead (also without proof) only that every even perfect number is of that form. It was not until the 18th century that Leonhard Euler proved that the formula 2^(p−1)(2^p − 1) will yield all the even perfect numbers. Thus, there is a one-to-one correspondence between even perfect numbers and Mersenne primes; each Mersenne prime generates one even perfect number, and vice versa. This result is often referred to as the Euclid–Euler theorem. An exhaustive search by the GIMPS distributed computing project has shown that the first 48 even perfect numbers are 2^(p−1)(2^p − 1) for p = 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213, 19937, 21701, 23209, 44497, 86243, 110503, 132049, 216091, 756839, 859433, 1257787, 1398269, 2976221, 3021377, 6972593, 13466917, 20996011, 24036583, 25964951, 30402457, 32582657, 37156667, 42643801, 43112609 and 57885161. Three higher perfect numbers have also been discovered, namely those for which p = 74207281, 77232917, and 82589933. Although it is still possible there may be others within this range, initial but exhaustive tests by GIMPS have revealed no other perfect numbers for p below 109332539. As of 2019, 51 Mersenne primes are known, and therefore 51 even perfect numbers (the largest of which is 2^82589932 × (2^82589933 − 1), with 49,724,095 digits). It is not known whether there are infinitely many perfect numbers, nor whether there are infinitely many Mersenne primes. As well as having the form 2^(p−1)(2^p − 1), each even perfect number is the (2^p − 1)th triangular number (and hence equal to the sum of the integers from 1 to 2^p − 1) and the 2^(p−1)th hexagonal number. Furthermore, each even perfect number except for 6 is the ((2^p + 1)/3)th centered nonagonal number and is equal to the sum of the first 2^((p−1)/2) odd cubes (odd cubes up to the cube of 2^((p+1)/2) − 1): 28 = 1³ + 3³, 496 = 1³ + 3³ + 5³ + 7³, and 8128 = 1³ + 3³ + 5³ + ⋯ + 15³. Even perfect numbers (except 6) are of the form 1 + 9T for a triangular number T (obtained after subtracting 1 from the perfect number and dividing the result by 9) ending in 3 or 5, the sequence of these triangular numbers starting with 3, 55, 903, 3727815, ... It follows that adding the digits of any even perfect number (except 6), then adding the digits of the resulting number, and repeating this process until a single digit (called the digital root) is obtained, always produces the number 1. For example, the digital root of 8128 is 1, because 8 + 1 + 2 + 8 = 19, 1 + 9 = 10, and 1 + 0 = 1. This works with all perfect numbers 2^(p−1)(2^p − 1) with odd prime p and, in fact, with all numbers of the form 2^(m−1)(2^m − 1) for odd integer (not necessarily prime) m. Owing to their form, every even perfect number is represented in binary form as p ones followed by p − 1 zeros; for example, 6 = 110, 28 = 11100, and 496 = 111110000 in binary. Thus every even perfect number is a pernicious number. Every even perfect number is also a practical number (cf. Related concepts). Odd perfect numbers It is unknown whether any odd perfect numbers exist, though various results have been obtained. In 1496, Jacques Lefèvre stated that Euclid's rule gives all perfect numbers, thus implying that no odd perfect number exists. Euler stated: "Whether ... there are any odd perfect numbers is a most difficult question". More recently, Carl Pomerance has presented a heuristic argument suggesting that indeed no odd perfect number should exist. All perfect numbers are also harmonic divisor numbers, and it has been conjectured as well that there are no odd harmonic divisor numbers other than 1. 
Many of the properties proved about odd perfect numbers also apply to Descartes numbers, and Pace Nielsen has suggested that sufficient study of those numbers may lead to a proof that no odd perfect numbers exist. Any odd perfect number N must satisfy the following conditions: N > 10^1500. N is not divisible by 105. N satisfies N ≡ 1 (mod 12) or N ≡ 117 (mod 468) or N ≡ 81 (mod 324). The largest prime factor of N is greater than 10^8 and less than . The second largest prime factor is greater than 10^4, and is less than . The third largest prime factor is greater than 100, and less than . N has at least 101 prime factors and at least 10 distinct prime factors. If 3 is not one of the factors of N, then N has at least 12 distinct prime factors. N is of the form N = q^α p1^(2e1) ⋯ pk^(2ek), where: q, p1, ..., pk are distinct odd primes (Euler). q ≡ α ≡ 1 (mod 4) (Euler). The smallest prime factor of N is at most . At least one of the prime powers dividing N exceeds 10^62. Furthermore, several minor results are known about the exponents e1, ..., ek. Not all ei ≡ 1 (mod 3). Not all ei ≡ 2 (mod 5). If all ei ≡ 1 (mod 3) or 2 (mod 5), then the smallest prime factor of N must lie between 10^8 and 10^1000. More generally, if all 2ei+1 have a prime factor in a given finite set S, then the smallest prime factor of N must be smaller than an effectively computable constant depending only on S. If (e1, ..., ek) = (1, ..., 1, 2, ..., 2) with t ones and u twos, then . (e1, ..., ek) ≠ (1, ..., 1, 3), (1, ..., 1, 5), (1, ..., 1, 6). If e1 = ⋯ = ek = e, then e cannot be 3, 5, 24, 6, 8, 11, 14 or 18. In 1888, Sylvester stated: Minor results All even perfect numbers have a very precise form; odd perfect numbers either do not exist or are rare. There are a number of results on perfect numbers that are actually quite easy to prove but nevertheless superficially impressive; some of them also come under Richard Guy's strong law of small numbers: The only even perfect number of the form n³ + 1 is 28. 28 is also the only even perfect number that is a sum of two positive cubes of integers. The reciprocals of the divisors of a perfect number N must add up to 2 (to get this, take the definition of a perfect number, σ(n) = 2n, and divide both sides by n): For 6, we have 1/6 + 1/3 + 1/2 + 1/1 = 2; for 28, we have 1/28 + 1/14 + 1/7 + 1/4 + 1/2 + 1/1 = 2, etc. The number of divisors of a perfect number (whether even or odd) must be even, because N cannot be a perfect square. From these two results it follows that every perfect number is an Ore's harmonic number. The even perfect numbers are not trapezoidal numbers; that is, they cannot be represented as the difference of two positive non-consecutive triangular numbers. There are only three types of non-trapezoidal numbers: even perfect numbers, powers of two, and the numbers of the form 2^(n−1)(2^n + 1) formed as the product of a Fermat prime 2^n + 1 with a power of two in a similar way to the construction of even perfect numbers from Mersenne primes. The number of perfect numbers less than n is less than c√n, where c > 0 is a constant. In fact it is o(√n), using little-o notation. Every even perfect number ends in 6 or 28, base ten; and, with the only exception of 6, ends in 1 in base 9. Therefore, in particular the digital root of every even perfect number other than 6 is 1. The only square-free perfect number is 6. Related concepts The sum of proper divisors gives various other kinds of numbers. Numbers where the sum is less than the number itself are called deficient, and where it is greater than the number, abundant. These terms, together with perfect itself, come from Greek numerology. 
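The classification of numbers as deficient, perfect, or abundant by their aliquot sum can be computed directly. The following Python sketch is a naive illustration (a brute-force divisor sum, fine only for small inputs).

```python
def aliquot_sum(n: int) -> int:
    """Sum of the proper divisors of n (divisors excluding n itself)."""
    return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

def classify(n: int) -> str:
    s = aliquot_sum(n)
    if s < n:
        return "deficient"
    if s > n:
        return "abundant"
    return "perfect"

for n in (6, 12, 28, 35):
    print(n, classify(n))   # 6 perfect, 12 abundant, 28 perfect, 35 deficient
```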
A pair of numbers which are the sum of each other's proper divisors are called amicable, and larger cycles of numbers are called sociable. A positive integer such that every smaller positive integer is a sum of distinct divisors of it is a practical number. By definition, a perfect number is a fixed point of the restricted divisor function , and the aliquot sequence associated with a perfect number is a constant sequence. All perfect numbers are also -perfect numbers, or Granville numbers. A semiperfect number is a natural number that is equal to the sum of all or some of its proper divisors. A semiperfect number that is equal to the sum of all its proper divisors is a perfect number. Most abundant numbers are also semiperfect; abundant numbers which are not semiperfect are called weird numbers. See also Hyperperfect number Leinster group List of Mersenne primes and perfect numbers Multiply perfect number Superperfect numbers Unitary perfect number Harmonic divisor number Notes References Sources Euclid, Elements, Book IX, Proposition 36. See D.E. Joyce's website for a translation and discussion of this proposition and its proof. Further reading Nankar, M.L.: "History of perfect numbers," Ganita Bharati 1, no. 1–2 (1979), 7–8. Riele, H.J.J. "Perfect Numbers and Aliquot Sequences" in H.W. Lenstra and R. Tijdeman (eds.): Computational Methods in Number Theory, Vol. 154, Amsterdam, 1982, pp. 141–157. Riesel, H. Prime Numbers and Computer Methods for Factorisation, Birkhauser, 1985. External links David Moews: Perfect, amicable and sociable numbers Perfect numbers – History and Theory OddPerfect.org A projected distributed computing project to search for odd perfect numbers. Great Internet Mersenne Prime Search (GIMPS) Perfect Numbers, math forum at Drexel. Divisor function Integer sequences Unsolved problems in number theory Mersenne primes
23672
https://en.wikipedia.org/wiki/Parthenon
Parthenon
The Parthenon (; ; ) is a former temple on the Athenian Acropolis, Greece, that was dedicated to the goddess Athena. Its decorative sculptures are considered some of the high points of classical Greek art, and the Parthenon is considered an enduring symbol of Ancient Greece, democracy, and Western civilization. The Parthenon was built in the 5th century BC in thanksgiving for the Hellenic victory over Persian Empire invaders during the Greco-Persian Wars. Like most Greek temples, the Parthenon also served as the city treasury. Construction started in 447 BC when the Delian League was at the peak of its power. It was completed in 438 BC; work on the artwork and decorations continued until 432 BC. For a time, it served as the treasury of the Delian League, which later became the Athenian Empire. In the final decade of the 6th century AD, the Parthenon was converted into a Christian church dedicated to the Virgin Mary. After the Ottoman conquest in the mid-15th century, it became a mosque. In the Morean War, a Venetian bomb landed on the Parthenon, which the Ottomans had used as a munitions dump, during the 1687 siege of the Acropolis. The resulting explosion severely damaged the Parthenon. From 1800 to 1803, the 7th Earl of Elgin took down some of the surviving sculptures, now known as the Elgin Marbles or simply Greek Marbles, which, although he had the permission of the then Ottoman government, has subsequently become controversial. Since 1975, numerous large-scale restoration projects have been undertaken to preserve remaining artifacts and ensure its structural integrity. Etymology The origin of the word "Parthenon" comes from the Greek word (), meaning "maiden, girl" as well as "virgin, unmarried woman". The Liddell–Scott–Jones Greek–English Lexicon states that it may have referred to the "unmarried women's apartments" in a house, but that in the Parthenon it seems to have been used for a particular room of the temple. There is some debate as to which room that was. The lexicon states that this room was the western cella of the Parthenon. This has also been suggested by J.B. Bury. One theory is that the Parthenon was the room where the arrephoroi, a group of four young girls chosen to serve Athena each year, wove a peplos that was presented to Athena during Panathenaic Festivals. Christopher Pelling asserts that the name "Parthenon" means the "temple of the virgin goddess", referring to the cult of Athena Parthenos that was associated with the temple. It has also been suggested that the name of the temple alludes to the maidens (), whose supreme sacrifice guaranteed the safety of the city. In that case, the room originally known as the Parthenon could have been a part of the temple known today as the Erechtheion. In 5th-century BC accounts of the building, the structure is simply called (; lit. "the temple"). Douglas Frame writes that the name "Parthenon" was a nickname related to the statue of Athena Parthenos, and only appeared a century after construction. He contends that "Athena's temple was never officially called the Parthenon and she herself most likely never had the cult title parthénos". The ancient architects Iktinos and Callicrates appear to have called the building (; lit. "the hundred footer") in their lost treatise on Athenian architecture. Harpocration wrote that some people used to call the Parthenon the "Hekatompedos", not due to its size but because of its beauty and fine proportions. 
The first instance in which Parthenon definitely refers to the entire building comes from the fourth century BC orator Demosthenes. In the 4th century BC and later, the building was referred to as the or the as well as the Parthenon. Plutarch referred to the building during the first century AD as the . A 2020 study by Janric van Rookhuijzen supports the idea that the building known today as the Parthenon was originally called the Hekatompedon. Based on literary and historical research, he proposes that "the treasury called the Parthenon should be recognized as the west part of the building now conventionally known as the Erechtheion". Because the Parthenon was dedicated to the Greek goddess Athena it has sometimes been referred to as the Temple of Minerva, the Roman name for Athena, particularly during the 19th century. was also applied to the Virgin Mary (Parthénos Maria) when the Parthenon was converted to a Christian church dedicated to the Virgin Mary in the final decade of the 6th century. Function Although the Parthenon is architecturally a temple and is usually called so, some scholars have argued that it is not really a temple in the conventional sense of the word. A small shrine has been excavated within the building, on the site of an older sanctuary probably dedicated to Athena as a way to get closer to the goddess, but the Parthenon apparently never hosted the official cult of Athena Polias, patron of Athens. The cult image of Athena Polias, which was bathed in the sea and to which was presented the peplos, was an olive-wood xoanon, located in another temple on the northern side of the Acropolis, more closely associated with the Great Altar of Athena. The High Priestess of Athena Polias supervised the city cult of Athena based in the Acropolis, and was the chief of the lesser officials, such as the plyntrides, arrephoroi and kanephoroi. The colossal statue of Athena by Phidias was not specifically related to any cult attested by ancient authors and is not known to have inspired any religious fervour. Preserved ancient sources do not associate it with any priestess, altar or cult name. According to Thucydides, during the Peloponnesian War when Sparta's forces were first preparing to invade Attica, Pericles, in an address to the Athenian people, said that the statue could be used as a gold reserve if that was necessary to preserve Athens, stressing that it "contained forty talents of pure gold and it was all removable", but adding that the gold would afterward have to be restored. The Athenian statesman thus implies that the metal, obtained from contemporary coinage, could be used again if absolutely necessary without any impiety. According to Aristotle, the building also contained golden figures that he described as "Victories". The classicist Harris Rackham noted that eight of those figures were melted down for coinage during the Peloponnesian War. Other Greek writers have claimed that treasures such as Persian swords were also stored inside the temple. Some scholars, therefore, argue that the Parthenon should be viewed as a grand setting for a monumental votive statue rather than as a cult site. Archaeologist Joan Breton Connelly has argued for the coherency of the Parthenon's sculptural programme in presenting a succession of genealogical narratives that track Athenian identity through the ages: from the birth of Athena, through cosmic and epic battles, to the final great event of the Athenian Bronze Age, the war of Erechtheus and Eumolpos. 
She argues a pedagogical function for the Parthenon's sculptured decoration, one that establishes and perpetuates Athenian foundation myth, memory, values and identity. While some classicists, including Mary Beard, Peter Green, and Garry Wills have doubted or rejected Connelly's thesis, an increasing number of historians, archaeologists, and classical scholars support her work. They include: J.J. Pollitt, Brunilde Ridgway, Nigel Spivey, Caroline Alexander, and A. E. Stallings. Older Parthenon The first endeavour to build a sanctuary for Athena Parthenos on the site of the present Parthenon was begun shortly after the Battle of Marathon (–488 BC) upon a solid limestone foundation that extended and levelled the southern part of the Acropolis summit. This building replaced a Hekatompedon temple ("hundred-footer") and would have stood beside the archaic temple dedicated to Athena Polias ("of the city"). The Older or Pre-Parthenon, as it is frequently referred to, was still under construction when the Persians sacked the city in 480 BC razing the Acropolis. The existence of both the proto-Parthenon and its destruction were known from Herodotus, and the drums of its columns were visibly built into the curtain wall north of the Erechtheion. Further physical evidence of this structure was revealed with the excavations of Panagiotis Kavvadias of 1885–1890. The findings of this dig allowed Wilhelm Dörpfeld, then director of the German Archaeological Institute, to assert that there existed a distinct substructure to the original Parthenon, called Parthenon I by Dörpfeld, not immediately below the present edifice as previously assumed. Dörpfeld's observation was that the three steps of the first Parthenon consisted of two steps of Poros limestone, the same as the foundations, and a top step of Karrha limestone that was covered by the lowest step of the Periclean Parthenon. This platform was smaller and slightly to the north of the final Parthenon, indicating that it was built for a different building, now completely covered over. This picture was somewhat complicated by the publication of the final report on the 1885–1890 excavations, indicating that the substructure was contemporary with the Kimonian walls, and implying a later date for the first temple. If the original Parthenon was indeed destroyed in 480, it invites the question of why the site was left as a ruin for thirty-three years. One argument involves the oath sworn by the Greek allies before the Battle of Plataea in 479 BC declaring that the sanctuaries destroyed by the Persians would not be rebuilt, an oath from which the Athenians were only absolved with the Peace of Callias in 450. The cost of reconstructing Athens after the Persian sack is at least as likely a cause. The excavations of Bert Hodge Hill led him to propose the existence of a second Parthenon, begun in the period of Kimon after 468. Hill claimed that the Karrha limestone step Dörpfeld thought was the highest of Parthenon I was the lowest of the three steps of Parthenon II, whose stylobate dimensions Hill calculated at . One difficulty in dating the proto-Parthenon is that at the time of the 1885 excavation, the archaeological method of seriation was not fully developed; the careless digging and refilling of the site led to a loss of much valuable information. An attempt to make sense of the potsherds found on the Acropolis came with the two-volume study by Graef and Langlotz published in 1925–1933. 
This inspired American archaeologist William Bell Dinsmoor to give limiting dates for the temple platform and the five walls hidden under the re-terracing of the Acropolis. Dinsmoor concluded that the latest possible date for Parthenon I was no earlier than 495 BC, contradicting the early date given by Dörpfeld. He denied that there were two proto-Parthenons, and held that the only pre-Periclean temple was what Dörpfeld referred to as Parthenon II. Dinsmoor and Dörpfeld exchanged views in the American Journal of Archaeology in 1935. Present building In the mid-5th century BC, when the Athenian Acropolis became the seat of the Delian League, Pericles initiated the building project that lasted the entire second half of the century. The most important buildings visible on the Acropolis today – the Parthenon, the Propylaia, the Erechtheion and the temple of Athena Nike – were erected during this period. The Parthenon was built under the general supervision of Phidias, who also had charge of the sculptural decoration. The architects Ictinos and Callicrates began their work in 447, and the building was substantially completed by 432. Work on the decorations continued until at least 431. The Parthenon was built primarily by men who knew how to work marble. These quarrymen had exceptional skills and were able to cut the blocks of marble to very specific measurements. The quarrymen also knew how to avoid the faults, which were numerous in the Pentelic marble. If the marble blocks were not up to standard, the architects would reject them. The marble was worked with iron tools – picks, points, punches, chisels, and drills. The quarrymen would hold their tools against the marble block and firmly tap the surface of the rock. A big project like the Parthenon attracted stonemasons from far and wide who travelled to Athens to assist in the project. Slaves and foreigners worked together with the Athenian citizens in the building of the Parthenon, doing the same jobs for the same pay. Temple building was a specialized craft, and there were not many men in Greece qualified to build temples like the Parthenon, so these men would travel and work where they were needed. Other craftsmen were necessary for the building of the Parthenon, specifically carpenters and metalworkers. Unskilled labourers also had key roles in the building of the Parthenon. They loaded and unloaded the marble blocks and moved the blocks from place to place. In order to complete a project like the Parthenon, many different labourers were needed. Architecture The Parthenon is a peripteral octastyle Doric temple with Ionic architectural features. It stands on a platform or stylobate of three steps. In common with other Greek temples, it is of post and lintel construction and is surrounded by columns ('peripteral') carrying an entablature. There are eight columns at either end ('octastyle') and seventeen on the sides. There is a double row of columns at either end. The colonnade surrounds an inner masonry structure, the cella, which is divided into two compartments. The opisthodomos (the back room of the cella) contained the monetary contributions of the Delian League. At either end of the building, the gable is finished with a triangular pediment originally occupied by sculpted figures. The Parthenon has been described as "the culmination of the development of the Doric order". The Doric columns, for example, have simple capitals, fluted shafts, and no bases. 
Above the architrave of the entablature is a frieze of carved pictorial panels (metopes), separated by formal architectural triglyphs, also typical of the Doric order. The continuous frieze in low relief around the cella and across the lintels of the inner columns, in contrast, reflects the Ionic order. Architectural historian John R. Senseney suggests that this unexpected switch between orders was due to an aesthetic choice on the part of builders during construction, and was likely not part of the original plan of the Parthenon. Measured at the stylobate, the dimensions of the base of the Parthenon are . The cella was 29.8 metres long by 19.2 metres wide (97.8 × 63.0 ft). On the exterior, the Doric columns measure in diameter and are high. The corner columns are slightly larger in diameter. The Parthenon had 46 outer columns and 23 inner columns in total, each column having 20 flutes. (A flute is the concave shaft carved into the column form.) The roof was covered with large overlapping marble tiles known as imbrices and tegulae. The Parthenon is regarded as the finest example of Greek architecture. John Julius Cooper wrote that "even in antiquity, its architectural refinements were legendary, especially the subtle correspondence between the curvature of the stylobate, the taper of the naos walls, and the entasis of the columns". Entasis refers to the slight swelling, of , in the center of the columns to counteract the appearance of columns having a waist, as the swelling makes them look straight from a distance. The stylobate is the platform on which the columns stand. As in many other classical Greek temples, it has a slight parabolic upward curvature intended to shed rainwater and reinforce the building against earthquakes. The columns might therefore be supposed to lean outward, but they actually lean slightly inward so that if they carried on, they would meet almost exactly above the centre of the Parthenon. Since they are all the same height, the curvature of the outer stylobate edge is transmitted to the architrave and roof above: "All follow the rule of being built to delicate curves", Gorham Stevens observed when pointing out that, in addition, the west front was built at a slightly higher level than that of the east front. It is not universally agreed what the intended effect of these "optical refinements" was. They may serve as a sort of "reverse optical illusion". As the Greeks may have been aware, two parallel lines appear to bow, or curve outward, when intersected by converging lines. In this case, the ceiling and floor of the temple may seem to bow in the presence of the surrounding angles of the building. Striving for perfection, the designers may have added these curves, compensating for the illusion by creating their own curves, thus negating this effect and allowing the temple to be seen as they intended. It is also suggested that it was to enliven what might have appeared an inert mass in the case of a building without curves. But the comparison ought to be, according to Smithsonian historian Evan Hadingham, with the Parthenon's more obviously curved predecessors than with a notional rectilinear temple. Some studies of the Acropolis, including of the Parthenon and its facade, have conjectured that many of its proportions approximate the golden ratio. More recent studies have shown that the proportions of the Parthenon do not match the golden proportion. 
Sculpture The cella of the Parthenon housed the chryselephantine statue of Athena Parthenos sculpted by Phidias and dedicated in 439 or 438 BC. Its appearance is known from other images. The decorative stonework was originally highly coloured. The temple was dedicated to Athena at that time, though construction continued until almost the beginning of the Peloponnesian War in 432. By the year 438, the Doric metopes on the frieze above the exterior colonnade and the Ionic frieze around the upper portion of the walls of the cella had been completed. Only a small number of the original sculptures remain in situ. Most of the surviving sculptures are at the Acropolis Museum in Athens and (controversially) at the British Museum in London (see Elgin Marbles). Additional pieces are at the Louvre, the National Museum of Denmark, and Vienna. In March 2022, the Acropolis Museum launched a new website with "photographs of all the frieze blocks preserved today in the Acropolis Museum, the British Museum and the Louvre". Metopes The frieze of the Parthenon's entablature contained 92 metopes, 14 each on the east and west sides, 32 each on the north and south sides. They were carved in high relief, a practice employed until then only in treasuries (buildings used to keep votive gifts to the gods). According to the building records, the metope sculptures date to the years 446–440. The metopes of the east side of the Parthenon, above the main entrance, depict the Gigantomachy (the mythical battle between the Olympian gods and the Giants). The metopes of the west end show the Amazonomachy (the mythical battle of the Athenians against the Amazons). The metopes of the south side show the Thessalian Centauromachy (battle of the Lapiths aided by Theseus against the half-man, half-horse Centaurs). Metopes 13–21 are missing, but drawings from 1674 attributed to Jacques Carrey indicate a series of humans; these have been variously interpreted as scenes from the Lapith wedding, scenes from the early history of Athens, and various myths. On the north side of the Parthenon, the metopes are poorly preserved, but the subject seems to be the sack of Troy. The mythological figures of the metopes of the East, North, and West sides of the Parthenon had been deliberately mutilated by Christian iconoclasts in late antiquity. The metopes present examples of the Severe Style in the anatomy of the figures' heads, in the limitation of the corporal movements to the contours and not to the muscles, and in the presence of pronounced veins in the figures of the Centauromachy. Several of the metopes still remain on the building, but, with the exception of those on the northern side, they are severely damaged. Some of them are located at the Acropolis Museum, others are in the British Museum, and one is at the Louvre. In March 2011, archaeologists announced that they had discovered five metopes of the Parthenon in the south wall of the Acropolis, which had been extended when the Acropolis was used as a fortress. According to Eleftherotypia daily, the archaeologists claimed the metopes had been placed there in the 18th century when the Acropolis wall was being repaired. The experts discovered the metopes while processing 2,250 photos with modern photographic methods, as the white Pentelic marble they are made of differed from the other stone of the wall. It was previously presumed that the missing metopes were destroyed during the Morosini explosion of the Parthenon in 1687. 
Frieze The most characteristic feature in the architecture and decoration of the temple is the Ionic frieze running around the exterior of the cella walls. The bas-relief frieze was carved in situ and is dated to 442–438. One interpretation is that it depicts an idealized version of the Panathenaic procession from the Dipylon Gate in the Kerameikos to the Acropolis. In this procession held every year, with a special procession taking place every four years, Athenians and foreigners participated in honouring the goddess Athena by offering her sacrifices and a new peplos dress, woven by selected noble Athenian girls called ergastines. The procession is more crowded (appearing to slow in pace) as it nears the gods on the eastern side of the temple. Joan Breton Connelly offers a mythological interpretation for the frieze, one that is in harmony with the rest of the temple's sculptural programme which shows Athenian genealogy through a series of succession myths set in the remote past. She identifies the central panel above the door of the Parthenon as the pre-battle sacrifice of the daughter of King Erechtheus, a sacrifice that ensured Athenian victory over Eumolpos and his Thracian army. The great procession marching toward the east end of the Parthenon shows the post-battle thanksgiving sacrifice of cattle and sheep, honey and water, followed by the triumphant army of Erechtheus returning from their victory. This represents the first Panathenaia set in mythical times, the model on which historic Panathenaic processions were based. Connelly's interpretation has been rejected by William St Clair, who considers that the frieze shows the celebration of the birth of Ion, a descendant of Erechtheus. St Clair's interpretation has in turn been rejected by Catharine Titi, who agrees with him that the mood is one of celebration (rather than sacrifice) but argues that a celebration of the birth of Ion would require the presence of an infant, and there is no infant on the frieze. Pediments Two pediments rise above the portals of the Parthenon, one on the east front, one on the west. The triangular sections once contained massive sculptures that, according to the second-century geographer Pausanias, recounted the birth of Athena and the mythological battle between Athena and Poseidon for control of Athens. East pediment The east pediment originally contained 10 to 12 sculptures depicting the Birth of Athena. Most of those pieces were removed and lost during renovations in either the eighth or the twelfth century. Only two corners remain today with figures depicting the passage of time over the course of a full day. Tethrippa of Helios is in the left corner and Selene is on the right. The horses of Helios's chariot are shown with livid expressions as they ascend into the sky at the start of the day. Selene's horses struggle to stay on the pediment scene as the day comes to an end. West pediment The supporters of Athena are extensively illustrated at the back of the left chariot, while the defenders of Poseidon are shown trailing behind the right chariot. It is believed that the corners of the pediment are filled by Athenian water deities, such as the Kephisos river, the Ilissos river, and the nymph Kallirhoe. This belief emerges from the fluid character of the sculptures' body positions, which represents the effort of the artist to give the impression of a flowing river. Next to the left river god, there are the sculptures of the mythical king of Athens (Cecrops or Kekrops) with his daughters (Aglaurus, Pandrosos, Herse). 
The statue of Poseidon was the largest sculpture in the pediment until it broke into pieces during Francesco Morosini's effort to remove it in 1688. The posterior piece of the torso was found by Lusieri in the groundwork of a Turkish house in 1801 and is currently held in the British Museum. The anterior portion was revealed by Ross in 1835 and is now held in the Acropolis Museum of Athens. Every statue on the west pediment has a fully completed back, which would have been impossible to see when the sculpture was on the temple; this indicates that the sculptors put great effort into accurately portraying the human body. Athena Parthenos The only piece of sculpture from the Parthenon known to be from the hand of Phidias was the statue of Athena housed in the naos. This massive chryselephantine sculpture is now lost and known only from copies, vase painting, gems, literary descriptions, and coins. Later history Late antiquity A major fire broke out in the Parthenon shortly after the middle of the third century AD, which destroyed the roof and much of the sanctuary's interior. Heruli pirates sacked Athens in 276, and destroyed most of the public buildings there, including the Parthenon. Repairs were made in the fourth century AD, possibly during the reign of Julian the Apostate. A new wooden roof overlaid with clay tiles was installed to cover the sanctuary. It sloped at a greater angle than the original roof and left the building's wings exposed. The Parthenon survived as a temple dedicated to Athena for nearly 1,000 years until Theodosius II, during the Persecution of pagans in the late Roman Empire, decreed in 435 that all pagan temples in the Eastern Roman Empire be closed. It is debated exactly when during the 5th century the closure of the Parthenon as a temple was put into practice. It is suggested to have occurred around 484, on the order of Emperor Zeno, because the temple had been the focus of Pagan Hellenic opposition against Zeno in Athens in support of Illus, who had promised to restore Hellenic rites to the temples that were still standing. At some point in the fifth century, Athena's great cult image was looted by one of the emperors and taken to Constantinople, where it was later destroyed, possibly during the siege and sack of Constantinople during the Fourth Crusade in 1204 AD. Christian church The Parthenon was converted into a Christian church in the final decades of the fifth century to become the Church of the Parthenos Maria (Virgin Mary) or the Church of the Theotokos (Mother of God). The orientation of the building was changed to face towards the east; the main entrance was placed at the building's western end, and the Christian altar and iconostasis were situated towards the building's eastern side adjacent to an apse built where the temple's pronaos was formerly located. A large central portal with surrounding side-doors was made in the wall dividing the cella, which became the church's nave, from the rear chamber, which became the church's narthex. The spaces between the columns of the opisthodomos and the peristyle were walled up, though a number of doorways still permitted access. Icons were painted on the walls, and many Christian inscriptions were carved into the Parthenon's columns. These renovations inevitably led to the removal and dispersal of some of the sculptures. The Parthenon became the fourth most important Christian pilgrimage destination in the Eastern Roman Empire after Constantinople, Ephesos, and Thessaloniki. 
In 1018, the emperor Basil II went on a pilgrimage to Athens after his final victory over the First Bulgarian Empire for the sole purpose of worshipping at the Parthenon. In medieval Greek accounts it is called the Temple of Theotokos Atheniotissa and is often referred to indirectly as the famous temple, without the writers explaining exactly which temple they meant, which establishes that it was indeed well known. At the time of the Latin occupation, it became for about 250 years a Roman Catholic church of Our Lady. During this period a tower, used either as a watchtower or bell tower and containing a spiral staircase, was constructed at the southwest corner of the cella, and vaulted tombs were built beneath the Parthenon's floor. The rediscovery of the Parthenon as an ancient monument dates back to the period of Humanism; Cyriacus of Ancona was the first after antiquity to describe the Parthenon, about which he had read many times in ancient texts. Thanks to him, Western Europe obtained its first drawing of the monument, which Cyriacus called the "temple of the goddess Athena", unlike previous travellers, who had called it the "church of the Virgin Mary": ...mirabile Palladis Divae marmoreum templum, divum quippe opus Phidiae ("...the wonderful temple of the goddess Athena, a divine work of Phidias"). Islamic mosque In 1456, Ottoman Turkish forces invaded Athens and laid siege to a Florentine army defending the Acropolis until June 1458, when it surrendered to the Turks. The Turks may have briefly restored the Parthenon to the Greek Orthodox Christians for continued use as a church. Some time before the end of the fifteenth century, the Parthenon became a mosque. The precise circumstances under which the Turks appropriated it for use as a mosque are unclear; one account states that Mehmed II ordered its conversion as punishment for an Athenian plot against Ottoman rule. The apse was repurposed into a mihrab, the tower previously constructed during the Roman Catholic occupation of the Parthenon was extended upwards to become a minaret, a minbar was installed, the Christian altar and iconostasis were removed, and the walls were whitewashed to cover icons of Christian saints and other Christian imagery. Despite the alterations accompanying the Parthenon's conversion into a church and subsequently a mosque, its structure had remained basically intact. In 1667, the Turkish traveller Evliya Çelebi expressed marvel at the Parthenon's sculptures and figuratively described the building as "like some impregnable fortress not made by human agency". He composed a poetic supplication stating that, as "a work less of human hands than of Heaven itself, [it] should remain standing for all time". In 1674 the French artist Jacques Carrey visited the Acropolis and sketched the Parthenon's sculptural decorations. Early in 1687, an engineer named Plantier sketched the Parthenon for the Frenchman Graviers d'Ortières. These depictions, particularly Carrey's, provide important, and sometimes the only, evidence of the condition of the Parthenon and its various sculptures prior to the devastation it suffered in late 1687 and the subsequent looting of its art objects. Destruction As part of the Morean War (1684–1699), the Venetians sent an expedition led by Francesco Morosini to attack Athens and capture the Acropolis. 
The Ottoman Turks fortified the Acropolis and used the Parthenon as a gunpowder magazine – despite having been forewarned of the dangers of this use by the 1656 explosion that severely damaged the Propylaea – and as a shelter for members of the local Turkish community. On 26 September 1687 a Venetian mortar round, fired from the Hill of Philopappos, blew up the magazine. The explosion blew out the building's central portion and caused the cella's walls to crumble into rubble. About three hundred people were killed in the explosion, which showered marble fragments over nearby Turkish defenders and sparked fires that destroyed many homes. Accounts written at the time conflict over whether this destruction was deliberate or accidental; one such account, written by the German officer Sobievolski, states that a Turkish deserter revealed to Morosini the use to which the Turks had put the Parthenon, expecting that the Venetians would not target a building of such historic importance. Morosini was said to have responded by directing his artillery to aim at the Parthenon. Subsequently, Morosini sought to loot sculptures from the ruin and caused further damage in the process. Sculptures of Poseidon and Athena's horses fell to the ground and smashed as his soldiers tried to detach them from the building's west pediment. In 1688 the Venetians abandoned Athens to avoid a confrontation with a large force the Turks had assembled at Chalcis; at that time, the Venetians had considered blowing up what remained of the Parthenon along with the rest of the Acropolis to deny its further use as a fortification to the Turks, but that idea was not pursued. Once the Turks had recaptured the Acropolis, they used some of the rubble produced by this explosion to erect a smaller mosque within the shell of the ruined Parthenon. For the next century and a half, parts of the remaining structure were looted for building material and for any especially valuable objects. The 18th century was a period of Ottoman stagnation; as a result, many more Europeans found access to Athens, and the picturesque ruins of the Parthenon were much drawn and painted, spurring a rise in philhellenism and helping to arouse sympathy in Britain and France for Greek independence. Amongst those early travellers and archaeologists were James Stuart and Nicholas Revett, who were commissioned by the Society of Dilettanti to survey the ruins of classical Athens. They produced the first measured drawings of the Parthenon, published in 1787 in the second volume of Antiquities of Athens Measured and Delineated. In 1801, the British Ambassador at Constantinople, the Earl of Elgin, claimed that he had obtained a firman (edict) from the kaymakam, a document whose existence or legitimacy has not been proved to this day, permitting him to make casts and drawings of the antiquities on the Acropolis and to remove sculptures that were lying on the ground. Independent Greece When independent Greece gained control of Athens in 1832, the visible section of the minaret was demolished; only its base and spiral staircase up to the level of the architrave remain intact. Soon all the medieval and Ottoman buildings on the Acropolis were destroyed. The image of the small mosque within the Parthenon's cella has been preserved in Joly de Lotbinière's photograph, published in Lerebours's Excursions Daguerriennes in 1842: the first photograph of the Acropolis. The area became a historical precinct controlled by the Greek government. 
In the later 19th century, the Parthenon was widely considered by Americans and Europeans to be the pinnacle of human architectural achievement, and became a popular destination and subject of artists, including Frederic Edwin Church and Sanford Robinson Gifford. Today it attracts millions of tourists every year, who travel up the path at the western end of the Acropolis, through the restored Propylaea, and up the Panathenaic Way to the Parthenon, which is surrounded by a low fence to prevent damage. Dispute over the marbles The dispute centres on the Parthenon Marbles removed by Thomas Bruce, 7th Earl of Elgin, from 1801 to 1803, which are now in the British Museum. A few sculptures from the Parthenon are also in the Louvre in Paris, in Copenhagen, and elsewhere, while more than half are in the Acropolis Museum in Athens. A few can still be seen on the building itself. The Greek government has campaigned since 1983 for the British Museum to return the sculptures to Greece. The British Museum has consistently refused to return the sculptures, and successive British governments have been unwilling to force the museum to do so (which would require legislation). Talks between senior representatives from Greek and British cultural ministries and their legal advisors took place in London on 4 May 2007. These were the first serious negotiations for several years, and there were hopes that the two sides might move a step closer to a resolution. In December 2022, the British newspaper The Guardian published a story with quotes from Greek government officials that suggested negotiations to return the marbles were underway and a "credible" solution was being discussed. Four pieces of the sculptures have been repatriated to Greece: three from the Vatican and one from a museum in Sicily. Restoration An organized effort to preserve and restore buildings on the Acropolis began in 1975, when the Greek government established the Committee for the Conservation of the Acropolis Monuments (ESMA). That group of interdisciplinary specialist scholars oversees the academic understanding of the site to guide restoration efforts. The project later attracted funding and technical assistance from the European Union. An archaeological committee thoroughly documented every artefact remaining on the site, and architects assisted with computer models to determine their original locations. Particularly important and fragile sculptures were transferred to the Acropolis Museum. A crane was installed for moving marble blocks; the crane was designed to fold away beneath the roofline when not in use. In some cases, prior reconstructions were found to be incorrect. These were dismantled, and a careful process of restoration began. Originally, various blocks were held together by elongated iron H pins that were completely coated in lead, which protected the iron from corrosion. Stabilizing pins added in the 19th century were not so coated, and corroded. Since the corrosion product (rust) is expansive, the expansion caused further damage by cracking the marble. In 2019, Greece's Central Archaeological Council approved a restoration of the interior cella's north wall (along with parts of others). The project will reinstate as many as 360 ancient stones, and install 90 new pieces of Pentelic marble, minimizing the use of new material as much as possible. The eventual result of this work will be the partial restoration of some or most of each wall of the interior cella. 
See also Ancient Greek architecture Knossos List of Ancient Greek temples National Monument of Scotland, Edinburgh Palermo Fragment Parthenon, Nashville – Full-scale replica Stripped Classicism Temple of Hephaestus Walhalla temple, Regensburg – Exterior modelled on the Parthenon, but the interior is a hall of fame for distinguished Germans References Sources Printed sources Online sources Further reading Beard, Mary. The Parthenon. Harvard University: 2003. . Vinzenz Brinkmann (ed.): Athen. Triumph der Bilder. Exhibition catalogue Liebieghaus Skulpturensammlung, Frankfurt, Germany, 2016, . Connelly, Joan Breton. "The Parthenon Enigma: A New Understanding of the West's Most Iconic Building and the People Who Made It." Knopf: 2014. . Cosmopoulos, Michael (editor). The Parthenon and its Sculptures. Cambridge University: 2004. . King, Dorothy. "The Elgin Marbles." Hutchinson / Random House, 2006. Osada, T. (ed.) The Parthenon Frieze. The Ritual Communication between the Goddess and the Polis. Parthenon Project Japan 2011–2014 Phoibos Verlag, Wien, Austria 2016, . . Papachatzis, Nikolaos D. Pausaniou Ellados Periegesis – Attika Athens, 1974. Tournikiotis, Panayotis. Parthenon. Abrams: 1996. . Traulos, Ioannis N. I Poleodomike ekselikses ton Athinon Athens, 1960. Woodford, Susan. The Parthenon. Cambridge University, 1981. Catharine Titi, The Parthenon Marbles and International Law, Springer, 2023, . External links The Acropolis of Athens: The Parthenon (official site with a schedule of its opening hours, tickets, and contact information) (Hellenic Ministry of Culture) The Acropolis Restoration Project (Hellenic Ministry of Culture) The Parthenon Frieze UNESCO World Heritage Centre – Acropolis, Athens Metropolitan Government of Nashville and Davidson County – The Parthenon The Athenian Acropolis by Livio C. Stecchini (Takes the heterodox view of the date of the proto-Parthenon, but a useful summary of the scholarship) (archived) The Friends of the Acropolis Illustrated Parthenon Marbles – Janice Siegel, Department of Classics, Hampden–Sydney College, Virginia Parthenon: description, photo album View a digital reconstruction of the Parthenon in virtual reality from Sketchfab Videos A Wikimedia video of the main sights of the Athenian Acropolis Secrets of the Parthenon video by Public Broadcasting Service, on YouTube Parthenon by Costas Gavras The history of Acropolis and Parthenon from the Greek tv show Η Μηχανή του Χρόνου (Time machine), on YouTube The Acropolis of Athens in ancient Greece – Dimensions and proportions of Parthenon on Youtube Institute for Advanced Study: The Parthenon Sculptures 438 BC 5th-century BC religious buildings and structures Temples of Athena Acropolis of Athens Ancient Greek buildings and structures in Athens Landmarks in Athens Destroyed Greek temples Temples in ancient Athens Sculptures by Phidias Greek temples Conversion of non-Christian religious buildings and structures into churches Former churches in Greece Religious buildings and structures converted into mosques Former mosques in Greece Ruins in Greece World Heritage Sites in Greece Persecution of pagans in the late Roman Empire
23673
https://en.wikipedia.org/wiki/Pachomius%20the%20Great
Pachomius the Great
Pachomius (Pakhomios; c. 292 – 9 May 348 AD), also known as Saint Pachomius the Great, is generally recognized as the founder of Christian cenobitic monasticism. Coptic churches celebrate his feast day on 9 May, and Eastern Orthodox and Catholic churches mark his feast on 15 May or 28 May. In Lutheranism, he is remembered as a renewer of the church, along with his contemporary and fellow desert saint Anthony of Egypt, on 17 January. Name The name Pachomius is of Coptic origin: ⲡⲁϧⲱⲙ pakhōm from ⲁϧⲱⲙ akhōm "eagle or falcon" (ⲡ p- at the beginning is the Coptic definite article), from Middle Egyptian ꜥẖm "falcon", originally "divine image". Into Greek, it was adopted as Παχούμιος and Παχώμιος. By Greek folk etymology, it was sometimes interpreted as "broad-shouldered" from παχύς "thick, large" and ὦμος "shoulder". Life Pachomius was born c. 292 in the Thebaid (near modern-day Luxor, Egypt) to pagan parents. According to his hagiography, at age 21, Pachomius was swept up against his will in a Roman army recruitment drive, a common occurrence during this period of turmoil and civil war. With several other youths, he was put onto a ship that floated down the Nile and arrived at Thebes in the evening. Here he first encountered local Christians, who customarily brought food and comfort daily to the conscripted troops. This made a lasting impression, and Pachomius vowed to investigate Christianity further when he got out. He was able to leave the army without ever having to fight. He moved to the village of Sheneset (Chenoboskion) in Upper Egypt and was converted and baptized in 314. Pachomius then came into contact with several well-known ascetics and decided to pursue that path under the guidance of a hermit named Palaemon (317). One of his devotions, popular at the time, was praying with his arms stretched out in the form of a cross. After studying seven years with Palaemon, Pachomius set out to lead the life of a hermit near St. Anthony of Egypt, whose practices he imitated until Pachomius heard a voice in Tabennisi that told him to build a dwelling for the hermits to come to. An earlier ascetic named Macarius had created a number of proto-monasteries called lavra, or cells, where holy men who were physically or mentally unable to achieve the rigors of Anthony's solitary life would live in a community setting. According to the Bohairic Life of Pachomius (17), while Pachomius was praying at the deserted village of Tabennisi, he heard a voice calling him, saying, "Pachomius, Pachomius, struggle, dwell in this place and build a monastery; for many will come to you to become monks with you, and they will profit their souls." Later, while praying at night after a day of harvesting reeds with his brother on a small island, Pachomius had another vision of an angel saying to him three times, "Pachomius, Pachomius, the Lord's will is [for you] to minister to the race of men and to unite them to himself" (Bohairic Life of Pachomius 22). Pachomius established his first monastery between 318 and 323 at Tabennisi, Egypt. His elder brother John joined him, and soon more than 100 monks lived nearby. Pachomius set about organizing these cells into a formal organization. Until then, Christian asceticism had been solitary or eremitic, with male or female monastics living in individual huts or caves and meeting only for occasional worship services. 
Pachomius created the community or cenobitic organization, in which male or female monastics lived together and held their property in common under the leadership of an abbot or abbess. Pachomius realized that some men, acquainted only with the eremitical life, might speedily become disgusted if the distracting cares of the cenobitical life were thrust too abruptly upon them. He therefore allowed them to devote their whole time to spiritual exercises, undertaking all the community's administrative tasks himself. The community hailed Pachomius as "Abba" ("father" in Aramaic), from which "Abbot" derives. The monastery at Tabennisi, though enlarged several times, soon became too small and a second was founded at Pbow. This monastery at Pbow would go on to become the center for monasteries springing up along the Nile in Upper Egypt. Both of these are believed to have initially been abandoned villages, which were then repurposed for Pachomius’ vision of his Koinonia (network of monasteries). After 336, Pachomius spent most of his time at Pbow. Though Pachomius sometimes acted as lector for nearby shepherds, neither he nor any of his monks became priests. St. Athanasius visited and wished to ordain him in 333, but Pachomius fled from him. Athanasius' visit was probably a result of Pachomius' zealous defence of orthodoxy against Arianism. Basil of Caesarea visited, then took many of Pachomius' ideas, which he adapted and implemented in Caesarea. This ascetic rule, or Ascetica, is still used today by the Eastern Orthodox Church, and is comparable to the Rule of St. Benedict in the West. Pachomian monasteries Rule of St. Pachomius Pachomius was the first to set down a written monastic rule. The first rule was composed of prayers generally known and in general use, such as the Lord's Prayer. The monks were to pray them every day. As the community developed, the rules were elaborated with precepts taken from the Bible. He drew up a rule which made things easier for the less proficient, but did not check the most extreme asceticism in the more proficient. The Rule sought to balance prayer with work, the communal life with solitude. The day was organised around the liturgy, with time for manual work and devotional reading. Fasts and work were apportioned according to the individual's strength. Each monk received the same food and clothing. Common meals were provided, but those who wished to absent themselves from them were encouraged to do so, and bread, salt, and water were placed in their cells. In the Pachomian monasteries it was left very much to the individual taste of each monk to fix the order of life for himself. Thus the hours for meals and the extent of his fasting were settled by him alone; he might eat with the others in common or have bread and salt provided in his own cell every day or every second day. His rule was translated into Latin by Jerome. Honoratus of Lérins followed the Rule of St. Pachomius. Basil the Great and Benedict of Nursia adapted and incorporated parts of it in their rules. Death and legacy Pachomius continued as abbot to the cenobites for some thirty years. During an epidemic (probably plague), Pachomius called the monks, strengthened their faith, and appointed his successor. Pachomius then died on 14 Pashons, 64 AM (9 May 348 AD). By the time Pachomius died, eight monasteries and several hundred monks followed his guidance. 
Within a generation, cenobitic practices spread from Egypt to Palestine and the Judean Desert, Syria, North Africa and eventually Western Europe. The number of monks, rather than the number of monasteries, may have reached 7000. His reputation as a holy man has endured. As mentioned above, several liturgical calendars commemorate Pachomius. Among the many miracles attributed to Pachomius is the claim that, though he had never learned the Greek or Latin tongues, he sometimes miraculously spoke them. Pachomius is also credited with being the first Christian to use and recommend the use of a prayer rope. See also Anthony of Egypt St. Benedict Book of the First Monks Coptic monasticism Coptic saints Desert Fathers Pachomian monasteries References Further reading Goehring, J. E. (1999). Ascetics, Society, and the Desert: Studies in Early Egyptian Monasticism. Trinity Press International ISBN 9781563382697 Harmless, W. (2004). Desert Christians: An Introduction to the Literature of Early Monasticism. Oxford University Press ISBN 9780195162233 Hedstrom, D. L. B. (2021). The Monastic Landscape of Late Antique Egypt: an Archaeological Reconstruction. Cambridge University Press ISBN 9781316614082 External links The Rule Of Pachomius: Part 1, Part 2, Part 3, & Part 4 Coptic Orthodox Synaxarium (Book of Saints) Page of the "Saint Pachomius Library" (contains sources in full text) Evansville.edu Earlychurch.org.uk Catholic-forum.com Opera Omnia by Migne Patrologia Latina with analytical indexes Hypothetical reconstruction of a Pachomian monastery 292 births 348 deaths 4th-century Christian saints 4th-century Christian theologians 4th-century Romans Egyptian Christian monks Founders of Christian monasteries Saints from Roman Egypt Miracle workers Date of birth unknown Desert Fathers
23674
https://en.wikipedia.org/wiki/Philosophical%20Investigations
Philosophical Investigations
Philosophical Investigations (German: Philosophische Untersuchungen) is a work by the philosopher Ludwig Wittgenstein, published posthumously in 1953. Philosophical Investigations is divided into two parts, consisting of what Wittgenstein calls, in the preface, Bemerkungen, translated by G. E. M. Anscombe as "remarks". A survey among American university and college teachers ranked the Investigations as the most important book of 20th-century philosophy. Relation to Wittgenstein's body of work In its preface, Wittgenstein says that Philosophical Investigations can be understood "only by contrast with and against the background of my old way of thinking". That "old way of thinking" is to be found in the only book Wittgenstein published in his lifetime, the Tractatus Logico-Philosophicus. Many of the ideas developed in the Tractatus are criticised in the Investigations, while other ideas are further developed. The Blue and Brown Books, a set of notes dictated to his class at Cambridge in 1933–1934, contain the seeds of Wittgenstein's later thoughts on language and are widely read as a turning point in his philosophy of language. Themes Language-games Wittgenstein develops his discussion of games into the key notion of a language-game. For Wittgenstein, his use of the term language-game "is meant to bring into prominence the fact that the speaking of language is part of an activity, or of a form of life." A central feature of language-games is that language is used in context and cannot be understood outside of that context. Wittgenstein lists the following as examples of language-games: "Giving orders, and obeying them"; "describing the appearance of an object, or giving its measurements"; "constructing an object from a description (a drawing)"; "reporting an event"; "speculating about an event". The famous example is the meaning of the word "game". We speak of various kinds of games: board games, betting games, sports, and "war games". These are all different uses of the word "games". Wittgenstein also gives the example of "Water!", which can be used as an exclamation, an order, a request, or an answer to a question. The meaning of the word depends on the language-game in which it is used. Another way Wittgenstein makes the point is that the word "water" has no meaning apart from its use within a language-game. One might use the word as an order to have someone else bring you a glass of water. But it can also be used to warn someone that the water has been poisoned. The word might even be used as a code by members of a secret society. Wittgenstein does not limit the application of his concept of language games to word meaning. He also applies it to sentence meaning. For example, the sentence "Moses did not exist" (§79) can mean various things. Wittgenstein argues that, independent of use, the sentence does not yet 'say' anything. It is 'meaningless' in the sense of not being significant for a particular purpose. It acquires significance only if we use it within a context; the sentence by itself does not determine its meaning but becomes meaningful only when it is used to say something. For instance, it can be used to say that no person or historical figure fits the descriptions attributed to the person who goes by the name of "Moses". But it can also mean that the leader of the Israelites was not called Moses. Or that there cannot have been anyone who accomplished all that the Bible relates about Moses, etc. What the sentence means thus depends on its use in a context. 
Meaning as use The Investigations deal largely with the difficulties of language and meaning. Wittgenstein viewed the tools of language as being fundamentally simple, and he believed that philosophers had obscured this simplicity by misusing language and by asking meaningless questions. He attempted in the Investigations to make things clear: "Der Fliege den Ausweg aus dem Fliegenglas zeigen"—to show the fly the way out of the fly bottle. Wittgenstein claims that the meaning of a word is based on how the word is understood within the language-game. A common summary of his argument is that meaning is use. According to the use theory of meaning, words are not defined by reference to the objects they designate or by the mental representations one might associate with them, but by how they are used. For example, this means there is no need to postulate that there is something called good that exists independently of any good deed. Wittgenstein's theory of meaning contrasts with Platonic realism and with Gottlob Frege's notions of sense and reference. This argument has been labeled by some authors as "anthropological holism". Section 43 in Wittgenstein's Philosophical Investigations reads: "For a large class of cases—though not for all—in which we employ the word "meaning," it can be defined thus: the meaning of a word is its use in the language." Wittgenstein begins Philosophical Investigations with a quote from Augustine's Confessions, which represents the view that language serves to point out objects in the world, the view that he will be criticizing: "The individual words in a language name objects—sentences are combinations of such names. In this picture of language, we find the roots of the following idea: Every word has a meaning. This meaning is correlated with the word. It is the object for which the word stands." Wittgenstein rejects a variety of ways of thinking about what the meaning of a word is or how meanings can be identified. He shows how, in each case, the meaning of the word presupposes our ability to use it. He first asks the reader to perform a thought experiment: come up with a definition of the word "game". While this may at first seem like a simple task, he then goes on to lead us through the problems with each of the possible definitions of the word "game". Any definition that focuses on amusement leaves us unsatisfied since the feelings experienced by a world-class chess player are very different from those of a circle of children playing Duck Duck Goose. Any definition that focuses on competition will fail to explain the game of catch, or the game of solitaire. And a definition of the word "game" that focuses on rules will fall into similar difficulties. The essential point of this exercise is often missed. Wittgenstein's point is not that it is impossible to define "game", but that even if we don't have a definition, we can still use the word successfully. Everybody understands what we mean when we talk about playing a game, and we can even clearly identify and correct inaccurate uses of the word, all without reference to any definition that consists of necessary and sufficient conditions for the application of the concept of a game. The German word for "game", "Spiel" (plural "Spiele"), has a broader sense than the English word; its meaning also extends to the concepts of "play" and "playing." This German sense of the word may help readers better understand Wittgenstein's context in his remarks regarding games. 
Wittgenstein argues that definitions emerge from what he termed "forms of life", roughly the culture and society in which they are used. Wittgenstein stresses the social aspects of cognition; to see how language works in most cases, we have to see how it functions in a specific social situation. It is this emphasis on becoming attentive to the social backdrop against which language is rendered intelligible that explains Wittgenstein's elliptical comment that "If a lion could talk, we could not understand him." However, in proposing the thought experiment involving the fictional character Robinson Crusoe, a castaway shipwrecked on a desolate island with no other inhabitant, Wittgenstein shows that language is not in all cases a social phenomenon (although it is in most cases); instead, the criterion for a language is grounded in a set of interrelated normative activities: teaching, explanations, techniques, and criteria of correctness. In short, it is essential that a language be shareable, but this does not imply that for a language to function, it must be already shared. Wittgenstein rejects the idea that ostensive definitions can provide us with the meaning of a word. For Wittgenstein, the thing that the word stands for does not give the meaning of the word. Wittgenstein argues for this by making a series of moves to show that understanding an ostensive definition presupposes an understanding of the way the word being defined is used. So, for instance, there is no difference between pointing to a piece of paper, to its colour, or to its shape, but understanding the difference is crucial to using the paper in an ostensive definition of a shape or of a colour. Family resemblances Why is it that we are sure a particular activity—e.g. Olympic target shooting—is a game while a similar activity—e.g. military sharpshooting—is not? Wittgenstein's explanation is tied up with an important analogy. How do we recognize that two people we know are related to one another? We may see similar height, weight, eye color, hair, nose, mouth, patterns of speech, social or political views, mannerisms, body structure, last names, etc. If we see enough matches we say we've noticed a family resemblance. It is perhaps important to note that this is not always a conscious process—generally we don't catalog various similarities until we reach a certain threshold; we just intuitively see the resemblances. Wittgenstein suggests that the same is true of language. We are all familiar (i.e. socially) with enough things that are games and enough things that are not games that we can categorize new activities as either games or not. This brings us back to Wittgenstein's reliance on indirect communication, and his reliance on thought-experiments. Some philosophical confusions come about because we aren't able to see family resemblances. We've made a mistake in understanding the vague and intuitive rules that language uses and have thereby tied ourselves up in philosophical knots. He suggests that an attempt to untangle these knots requires more than simple deductive arguments pointing out the problems with some particular position. Instead, Wittgenstein's larger goal is to try to divert us from our philosophical problems long enough to become aware of our intuitive ability to see the family resemblances. Rules and rule-following Wittgenstein's discussion of rules and rule-following ranges from § 138 through § 242. 
Wittgenstein begins his discussion of rules with the example of one person giving orders to another "to write down a series of signs according to a certain formation rule." The series of signs consists of the natural numbers. Wittgenstein draws a distinction between merely following orders by copying the numbers as instructed and understanding the construction of the series of numbers. One general characteristic of games that Wittgenstein considers in detail is the way in which they consist in following rules. Rules constitute a family, rather than a class that can be explicitly defined. As a consequence, it is not possible to provide a definitive account of what it is to follow a rule. Indeed, he argues that any course of action can be made out to accord with some particular rule, and that therefore a rule cannot be used to explain an action. Rather, whether one is following a rule or not is to be decided by looking to see if the actions conform to the expectations in the particular form of life in which one is involved. Following a rule is a social activity. Saul Kripke provides an influential discussion of Wittgenstein's remarks on rules. For Kripke, Wittgenstein's discussion of rules "may be regarded as a new form of philosophical scepticism." He starts his discussion of Wittgenstein by quoting what he describes as Wittgenstein's sceptical paradox: "This was our paradox: no course of action could be determined by a rule, because every course of action can be made out to accord with the rule. The answer was: if everything can be made out to accord with the rule, then it can also be made out to conflict with it. And so there would be neither accord nor conflict here." Kripke argues that the implication of Wittgenstein's discussion of rules is that no person can mean something by the language that they use or correctly follow (or fail to follow) a rule. In his 1984 book, Wittgenstein on Meaning, Colin McGinn disputed Kripke's interpretation. Private language Wittgenstein also ponders the possibility of a language that talks about those things that are known only to the user, whose content is inherently private. The usual example is that of a language in which one names one's sensations and other subjective experiences, such that the meaning of the term is decided by the individual alone. For example, the individual names a particular sensation, on some occasion, 'S', and intends to use that word to refer to that sensation. Such a language Wittgenstein calls a private language. Wittgenstein presents several perspectives on the topic. One point he makes is that it is incoherent to talk of knowing that one is in some particular mental state. Whereas others can learn of my pain, for example, I simply have my own pain; it follows that one does not know of one's own pain; one simply has a pain. For Wittgenstein, this is a grammatical point, part of the way in which the language-game involving the word "pain" is played. Although Wittgenstein certainly argues that the notion of private language is incoherent, because of the way in which the text is presented the exact nature of the argument is disputed. First, he argues that a private language is not really a language at all. This point is intimately connected with a variety of other themes in his later works, especially his investigations of "meaning". For Wittgenstein, there is no single, coherent "sample" or "object" that we can call "meaning". Rather, the supposition that there are such things is the source of many philosophical confusions. 
Meaning is a complicated phenomenon that is woven into the fabric of our lives. A good first approximation of Wittgenstein's point is that meaning is a social event; meaning happens between language users. As a consequence, it makes no sense to talk about a private language, with words that mean something in the absence of other users of the language. Wittgenstein also argues that one couldn't possibly use the words of a private language. He invites the reader to consider a case in which someone decides that each time she has a particular sensation she will place a sign S in a diary. Wittgenstein points out that in such a case one could have no criteria for the correctness of one's use of S. Again, several examples are considered. One is that perhaps using S involves mentally consulting a table of sensations, to check that one has associated S correctly; but in this case, how could the mental table be checked for its correctness? It is "[a]s if someone were to buy several copies of the morning paper to assure himself that what it said was true", as Wittgenstein puts it. One common interpretation of the argument is that while one may have direct or privileged access to one's current mental states, there is no such infallible access to identifying previous mental states that one had in the past. That is, the only way to check to see if one has applied the symbol S correctly to a certain mental state is to introspect and determine whether the current sensation is identical to the sensation previously associated with S. And while identifying one's current mental state of remembering may be infallible, whether one remembered correctly is not infallible. Thus, for a language to be used at all it must have some public criterion of identity. Often, what is widely regarded as a deep philosophical problem will vanish, argues Wittgenstein, and eventually be seen as a confusion about the significance of the words that philosophers use to frame such problems and questions. It is only in this way that it is interesting to talk about something like a "private language" — i.e., it is helpful to see how the "problem" results from a misunderstanding. To sum up: Wittgenstein asserts that, if something is a language, it cannot be (logically) private; and if something is private, it is not (and cannot be) a language. Wittgenstein's beetle Another point that Wittgenstein makes against the possibility of a private language involves the beetle-in-a-box thought experiment. He asks the reader to imagine that each person has a box, inside which is something that everyone intends to refer to with the word "beetle". Further, suppose that no one can look inside another's box, and each claims to know what a "beetle" is only by examining their own box. Wittgenstein suggests that, in such a situation, the word "beetle" could not be the name of a thing, because supposing that each person has something completely different in their boxes (or nothing at all) does not change the meaning of the word; the beetle as a private object "drops out of consideration as irrelevant". Thus, Wittgenstein argues, if we can talk about something, then it is not private, in the sense considered. And, contrapositively, if we consider something to be indeed private, it follows that we cannot talk about it. Mind Wittgenstein's investigations of language lead to several issues concerning the mind. His key target of criticism is any form of extreme mentalism that posits mental states that are entirely unconnected to the subject's environment. 
For Wittgenstein, thought is inevitably tied to language, which is inherently social. Part of Wittgenstein's credo is captured in the following proclamation: "An 'inner process' stands in need of outward criteria." This follows primarily from his conclusions about private languages: a private mental state (a sensation of pain, for example) cannot be adequately discussed without public criteria for identifying it. According to Wittgenstein, those who insist that consciousness (or any other apparently subjective mental state) is conceptually unconnected to the external world are mistaken. Wittgenstein explicitly criticizes so-called conceivability arguments: "Could one imagine a stone's having consciousness? And if anyone can do so—why should that not merely prove that such image-mongery is of no interest to us?" He considers and rejects the following reply as well: "But if I suppose that someone is in pain, then I am simply supposing that he has just the same as I have so often had." — That gets us no further. It is as if I were to say: "You surely know what 'It is 5 o'clock here' means; so you also know what 'It's 5 o'clock on the sun' means. It means simply that it is just the same there as it is here when it is 5 o'clock." — The explanation by means of identity does not work here. Thus, according to Wittgenstein, mental states are intimately connected to a subject's environment, especially to his or her linguistic environment; conceivability or imaginability arguments that claim otherwise are misguided. Seeing that vs. seeing as In addition to ambiguous sentences, Wittgenstein discussed figures that can be seen and understood in two different ways. Often one can see something in a straightforward way — seeing that it is a rabbit, perhaps. But, at other times, one notices a particular aspect — seeing it as something. An example Wittgenstein uses is the "duck-rabbit", an ambiguous image that can be seen as either a duck or a rabbit. When one looks at the duck-rabbit and sees a rabbit, one is not interpreting the picture as a rabbit, but rather reporting what one sees. One just sees the picture as a rabbit. But what occurs when one sees it first as a duck, then as a rabbit? As the gnomic remarks in the Investigations indicate, Wittgenstein isn't sure. However, he is sure that it could not be the case that the external world stays the same while an "internal" cognitive change takes place. Response and influence Bertrand Russell made the following comment on the Philosophical Investigations in his book My Philosophical Development: I have not found in Wittgenstein's Philosophical Investigations anything that seemed to me interesting and I do not understand why a whole school finds important wisdom in its pages. Psychologically this is surprising. The earlier Wittgenstein, whom I knew intimately, was a man addicted to passionately intense thinking, profoundly aware of difficult problems of which I, like him, felt the importance, and possessed (or at least so I thought) of true philosophical genius. The later Wittgenstein, on the contrary, seems to have grown tired of serious thinking and to have invented a doctrine which would make such an activity unnecessary. I do not for one moment believe that the doctrine which has these lazy consequences is true. I realize, however, that I have an overpoweringly strong bias against it, for, if it is true, philosophy is, at best, a slight help to lexicographers, and at worst, an idle tea-table amusement. 
In his book Words and Things, Ernest Gellner was fiercely critical of the work of Ludwig Wittgenstein, J. L. Austin, Gilbert Ryle, Antony Flew, and many others. Ryle refused to have the book reviewed in the philosophical journal Mind (which he edited), and Bertrand Russell (who had written an approving foreword) protested in a letter to The Times. A response from Ryle and a lengthy correspondence ensued. While many readings stress the differences between the Investigations and the Tractatus, some critical approaches have claimed there to be more continuity and similarity between the two works than many suppose. One of these is the New Wittgenstein approach. Kripkenstein The discussion of private languages was revitalized in 1982 with the publication of Kripke's book Wittgenstein on Rules and Private Language. In this work, Kripke uses Wittgenstein's text to develop a particular type of skepticism about rules that stresses the communal nature of language-use as grounding meaning. Critics of Kripke's version of Wittgenstein have facetiously referred to it as "Kripkenstein"; scholars such as Gordon Baker, Peter Hacker, Colin McGinn, and John McDowell see it as a radical misinterpretation of Wittgenstein's text. Other philosophers – such as Martin Kusch – have defended Kripke's views. Editions Philosophical Investigations was not ready for publication when Wittgenstein died in 1951. G. E. M. Anscombe translated Wittgenstein's manuscript into English, and it was first published in 1953. There are multiple editions of Philosophical Investigations, with the popular third edition and the 50th anniversary edition having been edited by Anscombe: First Edition: Blackwell Publishers 1953. () German-English Edition, translation by G. E. M. Anscombe. Second Edition: Blackwell Publishers, 1958. Third Edition: Prentice Hall, 1973 (). 50th Anniversary Edition: Blackwell Publishers, 2001 (). This edition includes the original German text in addition to the English translation. Fourth Edition: Wiley-Blackwell, 2009 (). This edition includes the original German text in addition to the English translation. See also Prior's tonk Notes References Sources External links The first 100 remarks from Wittgenstein's Philosophical Investigations with Commentary by Lois Shawver (archived 13 March 2016) Wittgenstein's Beetle – description of the thought experiment from Philosophy Online (archived 4 February 2012) As The Hammer Strikes in Fillip Original German text of the Philosophical Investigations at the Ludwig Wittgenstein Project 1953 non-fiction books Analytic philosophy literature Books by Ludwig Wittgenstein Epistemology books Philosophy of language literature Thought experiments in philosophy
23677
https://en.wikipedia.org/wiki/Poul%20Anderson
Poul Anderson
Poul William Anderson (November 25, 1926 – July 31, 2001) was an American fantasy and science fiction author who was active from the 1940s until his death in 2001. Anderson also wrote historical novels. He won the Hugo Award seven times and the Nebula Award three times, and was nominated many more times for awards. Biography Poul Anderson was born on November 25, 1926, in Bristol, Pennsylvania, to Danish parents. Soon after his birth, his father, Anton Anderson, relocated the family to Texas, where they lived for more than ten years. After Anton Anderson's death, his widow took the children to Denmark. The family returned to the United States after the beginning of World War II, settling eventually on a Minnesota farm. While he was an undergraduate student at the University of Minnesota, Anderson's first stories were published by editor John W. Campbell in the magazine Astounding Science Fiction: "Tomorrow's Children" by Anderson and F. N. Waldrop in March 1947 and a sequel, "Chain of Logic" by Anderson alone, in July. He earned his BA in physics with honors but became a freelance writer after he graduated in 1948. His third story was printed in the December issue of Astounding. Anderson married Karen Kruse in 1953 and relocated with her to the San Francisco Bay area. Their daughter Astrid (later married to science fiction author Greg Bear) was born in 1954. They made their home in Orinda, California. Over the years Anderson gave many readings at The Other Change of Hobbit bookstore in Berkeley; his widow later donated his typewriter and desk to the store. In 1954, he published the fantasy novel The Broken Sword, one of his best-known works. In 1965, Algis Budrys said that Anderson "has for some time been science fiction's best storyteller". He was a founding member of the Society for Creative Anachronism (SCA) in 1966 and of the Swordsmen and Sorcerers' Guild of America (SAGA), also during the mid-1960s. The latter was a group of Heroic fantasy authors organized by Lin Carter, originally eight in number, with entry by credentials as a fantasy writer alone. Anderson was the sixth President of the Science Fiction and Fantasy Writers of America, taking office in 1972. Robert A. Heinlein dedicated his 1985 novel The Cat Who Walks Through Walls to Anderson and eight of the other members of the Citizens' Advisory Council on National Space Policy. The Science Fiction Writers of America made Anderson its 16th SFWA Grand Master in 1998. In 2000's fifth class, he was inducted into the Science Fiction and Fantasy Hall of Fame as one of two deceased and two living writers. He died of prostate cancer on July 31, 2001, after a month in the hospital. A few of his novels were first published posthumously. Awards, honors and nominations Gandalf Grand Master of Fantasy (1978) Hugo Award (seven wins) John W. Campbell Memorial Award (2000) Inkpot Award (1986) Locus Award (41 nominations; one win, 1972) Mythopoeic Fantasy Award (one win, 1975) Nebula Award (three wins) Pegasus Award (best adaptation, with Anne Passovoy) (1998) Prometheus Award (five wins, including the Hall of Fame award and a Special Prometheus Award for Lifetime Achievement in 2001) SFWA Grand Master (1997) Science Fiction and Fantasy Hall of Fame (2000) Asteroid 7758 Poulanderson, discovered by Eleanor Helin at Palomar in 1990, was named in his honor. The official naming citation was published by the Minor Planet Center on September 2, 2001, a month after his death. 
Bibliography See also Explanatory notes References Sources External links Bio, bibliography and book covers at FantasticFiction Obituary and tributes from the SFWA Poul Anderson Appreciation, by Dr. Paul Shackley Poul Anderson, an essay by William Tenn The Society for Creative Anachronism, of which Poul Anderson was a founding member The King of Ys review at FantasyLiterature.net By Poul Anderson On Thud and Blunder, an essay by Anderson on fantasy fiction, from the SFWA Poul Anderson's online fiction at Free Speculative Fiction Online SFWA directory of literary estates 1926 births 2001 deaths 20th-century American male writers 20th-century American novelists 21st-century American novelists American alternate history writers American fantasy writers American libertarians American male novelists American people of Danish descent American science fiction writers Analog Science Fiction and Fact people Caedmon Records artists Conan the Barbarian novelists Filkers Inkpot Award winners Novelists from Pennsylvania People from Bristol, Pennsylvania People from Orinda, California Pulp fiction writers Science Fiction Hall of Fame inductees SFWA Grand Masters Society for Creative Anachronism University of Minnesota alumni Writers from the San Francisco Bay Area 21st-century American male writers Presidents of the Science Fiction and Fantasy Writers Association
23678
https://en.wikipedia.org/wiki/Panspermia
Panspermia
Panspermia is the hypothesis that life exists throughout the Universe, distributed by space dust, meteoroids, asteroids, comets, and planetoids, as well as by spacecraft carrying unintended contamination by microorganisms; the deliberate transfer of life by an advanced civilization is known as directed panspermia. The theory argues that life did not originate on Earth, but instead evolved somewhere else and seeded life as we know it. Panspermia comes in many forms, such as radiopanspermia, lithopanspermia, and directed panspermia. Regardless of its form, the theories generally propose that microbes able to survive in space (such as certain types of bacteria or plant spores) can become trapped in debris ejected into space after collisions between planets and small Solar System bodies that harbor life. This debris containing the lifeforms is then transported by meteors between bodies in a solar system, or even across solar systems within a galaxy. In this way, panspermia studies concentrate not on how life began but on methods that may distribute it within the Universe. This point is often used as a criticism of the theory. Panspermia is a fringe theory with little support amongst mainstream scientists. Critics argue that it does not answer the question of the origin of life but merely places it on another celestial body. It is also criticized because it cannot be tested experimentally. Historically, disputes over the merit of this theory centered on whether life is ubiquitous or emergent throughout the Universe. Due to its long history, the theory maintains support today, with some work being done to develop mathematical treatments of how life might migrate naturally throughout the Universe. Its long history also lends itself to extensive speculation and hoaxes that have arisen from meteoritic events. History Panspermia has a long history, dating back to the 5th century BCE and the natural philosopher Anaxagoras. Classicists came to agree that Anaxagoras maintained the Universe (or Cosmos) was full of life, and that life on Earth started from the fall of such extra-terrestrial seeds. Panspermia as it is known today, however, is not identical to this original theory. The name, as applied to this theory, was first coined in 1908 by the Swedish scientist Svante Arrhenius. Scientific interest in the idea had been building since around the 1860s, and prominent later proponents included Fred Hoyle and Chandra Wickramasinghe. Starting in the 1860s, scientists began to wonder about the origin of life, as opposed to leaving it to the philosophers. There were three scientific developments that began to bring the focus of the scientific community to the problem of the origin of life. Firstly, the Kant-Laplace Nebular theory of solar system and planetary formation was gaining favor, and implied that when the Earth first formed, the surface conditions would have been inhospitable to life as we know it. This meant that life could not have evolved in parallel with the Earth, and must have evolved at a later date, without biological precursors. Secondly, Charles Darwin's famous theory of evolution implied some elusive origin, because in order for something to evolve, it must start somewhere. In his Origin of Species, Darwin was unable or unwilling to touch on this issue. 
Third and finally, Louis Pasteur and John Tyndall experimentally disproved the (now superseded) theory of spontaneous generation, which suggested that life was constantly evolving from non-living matter and did not have a common ancestor, as suggested by Darwin's theory of evolution. Altogether, these three developments in science presented the wider scientific community with a seemingly paradoxical situation regarding the origin of life: life must have evolved from non-biological precursors after the Earth was formed, and yet spontaneous generation as a theory had been experimentally disproved. From here, the study of the origin of life branched. Those who accepted Pasteur's rejection of spontaneous generation began to develop the theory that under (unknown) conditions on a primitive Earth, life must have gradually evolved from organic material. This theory became known as abiogenesis, and is the currently accepted one. On the other side of this are those scientists of the time who rejected Pasteur's results and instead supported the idea that life on Earth came from existing life. This necessarily requires that life has always existed somewhere on some planet, and that it has a mechanism of transferring between planets. Thus, the modern treatment of panspermia began in earnest. Lord Kelvin, in a presentation to The British Association for the Advancement of Science in 1871, proposed the idea that, just as seeds can be transferred through the air by winds, so too could life be brought to Earth by the infall of a life-bearing meteorite. He further proposed the idea that life can only come from life, and that this principle is invariant under philosophical uniformitarianism, similar to how matter can neither be created nor destroyed. This argument was heavily criticized because of its boldness, and additionally due to technical objections from the wider community. In particular, Johann Zollner from Germany argued against Kelvin by saying that organisms carried in meteorites to Earth would not survive the descent through the atmosphere due to friction heating. The arguments went back and forth until Svante Arrhenius gave the theory its modern treatment and designation. Arrhenius argued against abiogenesis on the basis that it had no experimental foundation at the time, and believed that life had always existed somewhere in the Universe. He focused his efforts on developing the mechanism(s) by which this pervasive life may be transferred through the Universe. At this time, it had recently been discovered that solar radiation can exert pressure, and thus force, on matter. Arrhenius thus concluded that it is possible that very small organisms such as bacterial spores could be moved around due to this radiation pressure. At this point, panspermia as a theory now had a potentially viable transport mechanism, as well as a vehicle for carrying life from planet to planet. The theory still faced criticism mostly due to doubts about how long spores would actually survive under the conditions of their transport from one planet, through space, to another. Despite all the emphasis placed on trying to establish the scientific legitimacy of this theory, it still lacked testability; that was and still is a serious problem the theory has yet to overcome. Support for the theory persisted, however, with Fred Hoyle and Chandra Wickramasinghe offering two reasons why an extra-terrestrial origin of life might be preferred. 
The first is that the required conditions for the origin of life may have been more favorable somewhere other than Earth, and the second is that life on Earth exhibits properties that are not accounted for by assuming an endogenic origin. Hoyle studied spectra of interstellar dust, and came to the conclusion that space contained large amounts of organics, which he suggested were the building blocks of the more complex chemical structures. Critically, Hoyle argued that this chemical evolution was unlikely to have taken place on a prebiotic Earth, and instead the most likely candidate was a comet. Furthermore, Hoyle and Wickramasinghe concluded that the evolution of life requires a large increase in genetic information and diversity, which might have resulted from the influx of viral material from space via comets. Hoyle also noted an apparent coincidence between the arrival of major epidemics and close encounters with comets, which led him to suggest that the epidemics were a direct result of material raining down from these comets. This claim in particular garnered criticism from biologists. Since the 1970s, a new era of planetary exploration has meant that data could be used to test panspermia and potentially transform it from conjecture to a testable theory. Though it has yet to be tested, panspermia is still explored today in some mathematical treatments, and as its long history suggests, the appeal of the theory has stood the test of time. Overview Core requirements Panspermia requires: that life has always existed somewhere in the Universe; that organic molecules originated in space (perhaps to be distributed to Earth); that life originated from these molecules extraterrestrially; and that this extraterrestrial life was transported to Earth. The creation and distribution of organic molecules from space is now uncontroversial; it is known as pseudo-panspermia. The jump from organic materials to life originating from space, however, is hypothetical and currently untestable. Transport vessels Bacterial spores and plant seeds are two common proposed vessels for panspermia. According to the theory, they could be encased in a meteorite and transported to another planet from their origin, subsequently descend through the atmosphere and populate the surface with life (see lithopanspermia below). This naturally requires that these spores and seeds formed somewhere else, perhaps even in space itself in the case of bacterial spores. Understanding of planetary formation theory and meteorites has led to the idea that some rocky bodies originating from undifferentiated parent bodies could be able to generate local conditions conducive to life. Hypothetically, internal heating from radiogenic isotopes could melt ice to provide water as well as energy. In fact, some meteorites have been found to show signs of aqueous alteration which may indicate that this process has taken place. Given that there are such large numbers of these bodies found within the Solar System, an argument can be made that they each provide a potential site for life to develop. A collision occurring in the asteroid belt could alter the orbit of one such site, and eventually deliver it to Earth. Plant seeds can be an alternative transport vessel. Some plants produce seeds that are resistant to the conditions of space, and they have been shown to lie dormant in extreme cold and vacuum and to resist short-wavelength UV radiation. However, they are not typically proposed to have originated in space, but on another planet. 
Theoretically, even if a plant is partially damaged during its travel in space, the pieces could still seed life in a sterile environment. Sterility of the environment is relevant because it is unclear if the novel plant could out-compete existing life forms. This idea is based on previous evidence showing that cellular reconstruction can occur from cytoplasms released from damaged algae. Furthermore, plant cells contain obligate endosymbionts, which could be released into a new environment. Though both plant seeds and bacterial spores have been proposed as potentially viable vehicles, their ability to not only survive in space for the required time, but also survive atmospheric entry is debated. Variations of panspermia theory Panspermia is generally subdivided into two classes: either transfer occurs between planets of the same system (interplanetary) or between stellar systems (interstellar). Further classifications are based on different proposed transport mechanisms, as follows. Space probes may be a viable transport mechanism for interplanetary cross-pollination within the Solar System. Space agencies have implemented planetary protection procedures to reduce the risk of planetary contamination, but microorganisms such as Tersicoccus phoenicis may be resistant to spacecraft assembly cleaning. Radiopanspermia In 1903, Svante Arrhenius proposed radiopanspermia, the theory that single microscopic forms of life can be propagated in space, driven by the radiation pressure from stars. Radiation pressure is the mechanism by which light exerts a force on matter. Arrhenius argued that particles at a critical size below 1.5 μm would be propelled at high speed by the radiation pressure of a star. However, because its effectiveness decreases with increasing size of the particle, this mechanism holds for very tiny particles only, such as single bacterial spores. Counterarguments The main criticism of radiopanspermia came from Iosif Shklovsky and Carl Sagan, who cited evidence for the lethal action of space radiation (UV and X-rays) in the cosmos. If enough of these microorganisms are ejected into space, some may rain down on a planet in a new star system after a million years wandering interstellar space. There would be enormous death rates of the organisms due to radiation and the generally hostile conditions of space, but nonetheless this theory is considered potentially viable by some. Data gathered by the orbital experiments ERA, BIOPAN, EXOSTACK and EXPOSE showed that isolated spores, including those of B. subtilis, were rapidly killed if exposed to the full space environment for merely a few seconds, but if shielded against solar UV, the spores were capable of surviving in space for up to six years while embedded in clay or meteorite powder (artificial meteorites). Spores would therefore need to be heavily protected against UV radiation: exposure of unprotected DNA to solar UV and cosmic ionizing radiation would break it up into its constituent bases. Rocks at least 1 meter in diameter are required to effectively shield resistant microorganisms, such as bacterial spores, against galactic cosmic radiation. Additionally, exposing DNA to the ultrahigh vacuum of space alone is sufficient to cause DNA damage, so the transport of unprotected DNA or RNA during interplanetary flights powered solely by light pressure is extremely unlikely. The feasibility of other means of transport for the more massive shielded spores into the outer Solar System—for example, through gravitational capture by comets—is unknown. 
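As a rough illustration of Arrhenius's size argument above (a minimal sketch, not taken from the sources discussed in this article; the grain density and radiation-pressure efficiency are assumed values, so only the order of magnitude is meaningful), the ratio of radiation pressure to solar gravity on an idealized spherical grain can be estimated as follows:

```python
# Illustrative only: order-of-magnitude check of the radiation-pressure argument.
# Assumes an idealized spherical grain of density RHO (a guess for a dried spore)
# with radiation-pressure efficiency Q = 1; real spores are neither spherical nor
# uniform, so treat the numbers as rough.
import math

L_SUN = 3.828e26   # solar luminosity, W
M_SUN = 1.989e30   # solar mass, kg
G     = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C     = 2.998e8    # speed of light, m/s
RHO   = 1000.0     # assumed grain density, kg/m^3
Q     = 1.0        # assumed radiation-pressure efficiency

def beta(radius_m):
    """Ratio of radiation-pressure force to solar gravity for a spherical grain.

    Both forces fall off as 1/d^2, so the ratio is independent of distance:
        F_rad  = (L_SUN / (4*pi*d^2*c)) * pi*r^2 * Q
        F_grav = G * M_SUN * (4/3*pi*r^3*RHO) / d^2
        beta   = 3*L_SUN*Q / (16*pi*G*M_SUN*c*RHO*r)
    """
    return 3 * L_SUN * Q / (16 * math.pi * G * M_SUN * C * RHO * radius_m)

for diameter_um in (0.2, 0.5, 1.0, 1.5, 5.0):
    r = 0.5 * diameter_um * 1e-6
    print(f"diameter {diameter_um:4.1f} um  ->  beta = {beta(r):6.2f}")
# Only grains around a micrometre across or smaller give beta near or above 1,
# i.e. radiation pressure comparable to, or exceeding, solar gravity.
```

Under these assumptions the two forces balance at a diameter of roughly one micrometre, broadly consistent with the sub-1.5 μm figure quoted above; larger grains are dominated by gravity. 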
There is little evidence in full support of the radiopanspermia hypothesis. Lithopanspermia This transport mechanism gained prominence following the discovery of exoplanets and the sudden availability of data brought by the growth of planetary science. Lithopanspermia is the proposed transfer of organisms in rocks from one planet to another, carried in objects such as comets or asteroids, and remains speculative. A variant would be for organisms to travel between star systems on nomadic exoplanets or exomoons. Although there is no concrete evidence that lithopanspermia has occurred in the Solar System, the various stages have become amenable to experimental testing. Planetary ejection – For lithopanspermia to occur, microorganisms must first survive ejection from a planetary surface (assuming they do not form on meteorites, as some have suggested), which involves extreme forces of acceleration and shock with associated temperature rises. Hypothetical values of shock pressures experienced by ejected rocks are obtained from Martian meteorites, which suggest pressures of approximately 5 to 55 GPa, acceleration of 3 Mm/s², jerk of 6 Gm/s³ and post-shock temperature increases of about 1 K to 1000 K. Though these conditions are extreme, some organisms appear able to survive them. Survival in transit – Now in space, the microorganisms have to make it to their next destination for lithopanspermia to be successful. The survival of microorganisms has been studied extensively using both simulation facilities and experiments in low Earth orbit. A large number of microorganisms have been selected for exposure experiments, both human-borne microbes (significant for future crewed missions) and extremophiles (significant for determining the physiological requirements of survival in space). Bacteria in particular can exhibit a survival mechanism whereby a colony generates a biofilm that enhances its protection against UV radiation. Atmospheric entry – The final stage of lithopanspermia is re-entry onto a viable planet via its atmosphere. This requires that the organisms are able to further survive potential atmospheric ablation. Tests of this stage could use sounding rockets and orbital vehicles. B. subtilis spores inoculated onto granite domes were twice subjected to hypervelocity atmospheric transit by launch to a ~120 km altitude on an Orion two-stage rocket. The spores survived on the sides of the rock, but not on the forward-facing surface that reached 145 °C. As photosynthetic organisms must be close to the surface of a rock to obtain sufficient light energy, atmospheric transit might act as a filter against them by ablating the surface layers of the rock. Although cyanobacteria can survive the desiccating, freezing conditions of space, the STONE experiment showed that they cannot survive atmospheric entry. Small non-photosynthetic organisms deep within rocks might survive the exit and entry process, including impact survival. Lithopanspermia, described by the mechanism above, can exist as either interplanetary or interstellar. It is possible to quantify panspermia models and treat them as viable mathematical theories. For example, a recent study of planets of the Trappist-1 planetary system presents a model for estimating the probability of interplanetary panspermia, similar to studies in the past done about Earth-Mars panspermia. This study found that lithopanspermia is 'orders of magnitude more likely to occur' in the Trappist-1 system as opposed to the Earth-to-Mars scenario. 
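The quantitative treatments mentioned above generally chain together stage-by-stage probabilities (ejection, survival in transit, capture, atmospheric entry). The sketch below is a minimal illustration of that bookkeeping with entirely hypothetical placeholder numbers; it is not the model used in the Trappist-1 study cited above.

```python
# Minimal illustration of how lithopanspermia estimates are assembled: the
# expected number of successful transfers is the number of ejected,
# life-bearing rocks multiplied by per-stage survival probabilities.
# Every number below is a hypothetical placeholder, not a published value.

stages = {
    "survives ejection shock":        0.10,
    "survives radiation in transit":  0.01,
    "is captured by the target body": 1e-4,
    "survives atmospheric entry":     0.05,
}

n_viable_rocks_ejected = 1e9  # hypothetical count of life-bearing fragments

p_total = 1.0
for stage, p in stages.items():
    p_total *= p
    print(f"{stage:32s} p = {p:g}")

expected_transfers = n_viable_rocks_ejected * p_total
print(f"combined per-rock probability: {p_total:g}")
print(f"expected successful transfers: {expected_transfers:g}")
# With these placeholders the expectation is about 5 transfers; the point of
# such models is to see how the result shifts as each stage's probability is
# varied (for example, shorter transit times between closely packed planets).
```

Tightly packed systems plausibly raise the transit-survival and capture terms, which is the kind of effect such models are built to quantify. 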
According to their analysis, the increase in probability of lithopanspermia is linked to an increased probability of abiogenesis amongst the Trappist-1 planets. In a way, these modern treatments attempt to keep panspermia as a contributing factor to abiogenesis, as opposed to a theory that directly opposes it. In line with this, it is suggested that if biosignatures could be detected on two (or more) adjacent planets, that would provide evidence that panspermia is a potentially required mechanism for abiogenesis. As of yet, no such discovery has been made. Lithopanspermia has also been hypothesized to operate between stellar systems. One mathematical analysis, estimating the total number of rocky or icy objects that could potentially be captured by planetary systems within the Milky Way, has concluded that lithopanspermia is not necessarily bound to a single stellar system. This not only requires that these objects carry life in the first place, but also that it survives the journey. Thus intragalactic lithopanspermia is heavily dependent on the survival lifetime of organisms, as well as the velocity of the transporter. Again, there is no evidence that such a process has occurred, or can occur. Counterarguments The complex nature of the requirements for lithopanspermia, as well as evidence against the longevity of bacteria being able to survive under these conditions, makes lithopanspermia a difficult theory to support. That said, impact events were frequent in the early stages of Solar System formation, and still occur to a lesser degree today within the asteroid belt. Directed panspermia First proposed in 1972 by Nobel prize winner Francis Crick, along with Leslie Orgel, directed panspermia is the theory that life was deliberately brought to Earth by a higher intelligent being from another planet. In light of evidence at the time suggesting it was unlikely that an organism could have been delivered to Earth via radiopanspermia or lithopanspermia, Crick and Orgel proposed this as an alternative theory, though it is worth noting that Orgel was less serious about the claim. They do acknowledge that the scientific evidence is lacking, but discuss what kinds of evidence would be needed to support the theory. In a similar vein, Thomas Gold suggested that life on Earth might have originated accidentally from a pile of 'Cosmic Garbage' dumped on Earth long ago by extraterrestrial beings. These theories are often considered closer to science fiction; however, Crick and Orgel used the principle of cosmic reversibility to argue for the idea. This principle is based on the observation that if our species is capable of infecting a sterile planet with life, nothing in principle prevents another technological society from having done the same to Earth in the past. They concluded that it would be possible to deliberately infect another planet in the foreseeable future. As far as evidence goes, Crick and Orgel argued that given the universality of the genetic code, it follows that an infective theory for life is viable. Directed panspermia could, in theory, be demonstrated by finding that a distinctive 'signature' message had been deliberately implanted into either the genome or the genetic code of the first microorganisms by our hypothetical progenitor, some 4 billion years ago. However, there is no known mechanism that could prevent mutation and natural selection from removing such a message over long periods of time. Counterarguments In 1972, both abiogenesis and panspermia were seen as viable theories by different experts. 
Given this, Crick and Orgel argued that the experimental evidence required to validate one theory over the other was lacking. That being said, evidence strongly in favor of abiogenesis over panspermia exists today, whereas evidence for panspermia, particularly directed panspermia, is decidedly lacking. Origination and distribution of organic molecules: Pseudo-panspermia Pseudo-panspermia is the well-supported hypothesis that many of the small organic molecules used for life originated in space, and were distributed to planetary surfaces. Life then emerged on Earth, and perhaps on other planets, by the processes of abiogenesis. Evidence for pseudo-panspermia includes the discovery of organic compounds such as sugars, amino acids, and nucleobases in meteorites and other extraterrestrial bodies, and the formation of similar compounds in the laboratory under outer space conditions. A prebiotic polyester system has been explored as an example. Hoaxes & speculations Orgueil meteorite On May 14, 1864, twenty fragments of a meteorite fell near the French town of Orgueil. A separate fragment of the Orgueil meteorite (kept in a sealed glass jar since its discovery) was found in 1965 to have a seed capsule embedded in it, while the original glassy layer on the outside remained undisturbed. Despite great initial excitement, the seed was found to be that of a European Juncaceae or rush plant that had been glued into the fragment and camouflaged using coal dust. The outer "fusion layer" was in fact glue. While the perpetrator of this hoax is unknown, it is thought that they sought to influence the 19th-century debate on spontaneous generation—rather than panspermia—by demonstrating the transformation of inorganic to biological matter. Oumuamua In 2017, the Pan-STARRS telescope in Hawaii detected a reddish object up to 400 meters in length. Analysis of its orbit provided evidence that it was an interstellar object, originating from outside our Solar System. From this, Avi Loeb speculated that the object might instead be an artifact from an alien civilization and could potentially be evidence for directed panspermia. This claim has been considered unlikely by other authors. See also References Further reading External links Cox, Brian. "Are we thinking about alien life all wrong?". BBC Ideas, video made by Pomona Pictures, 29 November 2021. Loeb, Abraham. "Did Life from Earth Escape the Solar System Eons Ago?". Scientific American, 4 November 2019 Loeb, Abraham. "Noah's Spaceship" Scientific American, 29 November 2020 Astrobiology Origin of life Biological hypotheses Prebiotic chemistry Fringe science 1900s neologisms
23680
https://en.wikipedia.org/wiki/There%27s%20Plenty%20of%20Room%20at%20the%20Bottom
There's Plenty of Room at the Bottom
"There's Plenty of Room at the Bottom: An Invitation to Enter a New Field of Physics" was a lecture given by physicist Richard Feynman at the annual American Physical Society meeting at Caltech on December 29, 1959. Feynman considered the possibility of direct manipulation of individual atoms as a more robust form of synthetic chemistry than those used at the time. Versions of the talk were reprinted in a few popular magazines, but it went largely unnoticed until the 1980s. Conception Feynman considered some ramifications of a general ability to manipulate matter on an atomic scale. He was particularly interested in the possibilities of denser computer circuitry and microscopes that could see things much smaller than is possible with scanning electron microscopes. These ideas were later realized by the use of the scanning tunneling microscope, the atomic force microscope and other examples of scanning probe microscopy and storage systems such as Millipede. Feynman also suggested that it should be possible, in principle, to make nanoscale machines that "arrange the atoms the way we want" and do chemical synthesis by mechanical manipulation. He also presented the possibility of "swallowing the doctor", an idea that he credited in the essay to his friend and graduate student Albert Hibbs. This concept involved building a tiny, swallowable surgical robot. As a thought experiment, he proposed developing a set of one-quarter-scale manipulator hands controlled by the hands of a human operator, to build one-quarter scale machine tools analogous to those found in any machine shop. This set of small tools would then be used by the small hands to build and operate ten sets of one-sixteenth-scale hands and tools, and so forth, culminating in perhaps a billion tiny factories to achieve massively parallel operations. He uses the analogy of a pantograph as a way of scaling down items. This idea was anticipated in part, down to the microscale, by science fiction author Robert A. Heinlein in his 1942 story Waldo. As the sizes got smaller, one would have to redesign tools because the relative strength of various forces would change. Gravity would become less important, and Van der Waals forces such as surface tension would become more important. Feynman mentioned these scaling issues during his talk. Nobody has yet attempted to implement this thought experiment; some types of biological enzymes and enzyme complexes (especially ribosomes) function chemically in a way close to Feynman's vision. Feynman also mentioned in his lecture that it might be better eventually to use glass or plastic because their greater uniformity would avoid problems in the very small scale (metals and crystals are separated into domains where the lattice structure prevails). This could be a good reason to make machines and electronics out of glass and plastic. At present, there are electronic components made of both materials. In glass, there are optical fiber cables that carry and amplify light. In plastic, field effect transistors are being made with polymers, such as polythiophene that becomes an electrical conductor when oxidized. Challenges At the meeting Feynman concluded his talk with two challenges, and offered a prize of $1000 for the first to solve each one. The first challenge involved the construction of a tiny motor, which, to Feynman's surprise, was achieved by November 1960 by Caltech graduate William McLellan, a meticulous craftsman, using conventional tools. The motor met the conditions, but did not advance the art. 
The second challenge involved the possibility of scaling down letters small enough so as to be able to fit the entire Encyclopædia Britannica on the head of a pin, by writing the information from a book page on a surface 1/25,000 smaller in linear scale. In 1985, Tom Newman, a Stanford graduate student, successfully reduced the first paragraph of A Tale of Two Cities by 1/25,000, and collected the second Feynman prize. Newman's thesis adviser, R. Fabian Pease, had read the paper in 1966, but it was another graduate student in the lab, Ken Polasko, who had recently read it and suggested attempting the challenge. Newman was looking for an arbitrary pattern to demonstrate their technology. Newman said, "Text was ideal because it has so many different shapes." Reception The New Scientist reported "the scientific audience was captivated." Feynman had "spun the idea off the top of his mind" without even "notes from beforehand". There were no copies of the speech available. A "foresighted admirer" brought a tape recorder, and an edited transcript, without Feynman's jokes, was made for publication by Caltech. In February 1960, Caltech's Engineering and Science published the speech. In addition to excerpts in The New Scientist, versions were printed in The Saturday Review and Popular Science. Newspapers announced the winning of the first challenge. The lecture was included as the final chapter in the 1961 book, Miniaturization. Impact K. Eric Drexler later took the Feynman concept of a billion tiny factories and added the idea that they could make more copies of themselves, via computer control instead of control by a human operator, in his 1986 book Engines of Creation: The Coming Era of Nanotechnology. After Feynman's death, scholars studying the historical development of nanotechnology have concluded that his role in catalyzing nanotechnology research was not highly rated by many people active in the nascent field in the 1980s and 1990s. Chris Toumey, a cultural anthropologist at the University of South Carolina, has reconstructed the history of the publication and republication of Feynman's talk, along with the record of citations to "Plenty of Room" in the scientific literature. In Toumey's 2008 article "Reading Feynman into Nanotechnology", he found 11 versions of the publication of "Plenty of Room", plus two instances of a closely related talk by Feynman, "Infinitesimal Machinery", which Feynman called "Plenty of Room, Revisited" (published under the name "Infinitesimal Machinery"). Also in Toumey's references are videotapes of that second talk. The journal Nature Nanotechnology dedicated an issue in 2009 to the subject. Toumey found that the published versions of Feynman's talk had a negligible influence in the twenty years after it was first published, as measured by citations in the scientific literature, and not much more influence in the decade after the scanning tunneling microscope was invented in 1981. Interest in "Plenty of Room" in the scientific literature greatly increased in the early 1990s. This is probably because the term "nanotechnology" gained serious attention just before that time, following its use by Drexler in his 1986 book, Engines of Creation: The Coming Era of Nanotechnology, which cited Feynman, and in a cover article headlined "Nanotechnology", published later that year in a mass-circulation science-oriented magazine, OMNI. 
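As a sense of scale for the second challenge described above, the sketch below works out what a 1/25,000 linear reduction implies. The assumed letter height, atomic spacing, and pinhead diameter are illustrative round numbers, not values taken from the talk.

```python
# What a 1/25,000 linear reduction means in practice.  The printed letter
# height, atomic spacing and pinhead diameter below are assumed round numbers.

reduction       = 25_000    # linear demagnification from the challenge
letter_height_m = 2e-3      # assume ~2 mm printed capital letters
atom_spacing_m  = 2.5e-10   # assume ~0.25 nm between atoms in a solid
pinhead_m       = 1.5e-3    # assume a ~1.5 mm pinhead

scaled_height_m  = letter_height_m / reduction
atoms_per_letter = scaled_height_m / atom_spacing_m

print(f"scaled letter height: {scaled_height_m * 1e9:.0f} nm")
print(f"atoms across one letter: {atoms_per_letter:.0f}")
print(f"pinhead rescaled to original print size: {pinhead_m * reduction:.1f} m")
# Letters shrink to tens of nanometres -- a few hundred atoms tall -- while a
# pinhead, magnified by the same factor, spans tens of metres, which is the
# scale at which the challenge supposed the Encyclopaedia Britannica would fit.
```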
The journal Nanotechnology was launched in 1989; the famous Eigler-Schweizer experiment, precisely manipulating 35 xenon atoms, was published in Nature in April 1990; and Science had a special issue on nanotechnology in November 1991. These and other developments hint that the retroactive rediscovery of "Plenty of Room" gave nanotechnology a packaged history that provided an early date of December 1959, plus a connection to Richard Feynman. Toumey's analysis also includes comments from scientists in nanotechnology who say that "Plenty of Room" did not influence their early work, and most of them had not read it until a later date. Feynman's stature as a Nobel laureate and an important figure in 20th-century science helped advocates of nanotechnology. It provided a valuable intellectual link to the past. More concretely, his stature and concept of atomically precise fabrication played a role in securing funding for nanotechnology research, illustrated by President Clinton's January 2000 speech calling for a Federal program. The version of the Nanotechnology Research and Development Act that the House passed in May 2003 called for a study of the technical feasibility of molecular manufacturing, but this study was removed to safeguard funding of less controversial research before it was passed by the Senate and signed into law by President George W. Bush on December 3, 2003. In 2016, a group of researchers from TU Delft and INL reported the storage of a paragraph of Feynman's talk using binary code where every bit was made with a single atomic vacancy. Using a scanning tunnelling microscope to manipulate thousands of atoms, the researchers crafted the text. The text uses exactly 1 kibibyte, i.e., 8192 bits, each encoded by a single atomic vacancy, thereby constituting the first atomic kibibyte, with a storage density 500 times larger than state-of-the-art approaches. Writing the text required the researchers to "arrange the atoms the way we want", in a checkerboard pattern. This self-referential tribute to Feynman's vision was covered both by scientific journals and mainstream media. Fiction byproducts In "The Tree of Time", a short story published in 1964, Damon Knight uses the idea of a barrier that has to be constructed atom by atom (a time barrier, in the story). Editions A condensed version of the talk. A reprint of the talk. A sequel to his first talk. See also Foresight Nanotech Institute Feynman Prize Moore's law Nanocar References External links Feynman's classic 1959 talk "There's Plenty of Room at the Bottom" "There's Plenty of Room at the Bottom" in February 1960 Engineering and Science Caltech magazine Nanotechnology publications Physics papers Works by Richard Feynman 1959 speeches California Institute of Technology American Physical Society Thought experiments Lectures
23681
https://en.wikipedia.org/wiki/Philately
Philately
Philately is the study of postage stamps and postal history. It also refers to the collection and appreciation of stamps and other philatelic products. While closely associated with stamp collecting and the study of postage, it is possible to be a philatelist without owning any stamps. For instance, the stamps being studied may be very rare or reside only in museums. Etymology The word "philately" is the English transliteration of the French "philatélie", coined by Georges Herpin in 1864. Herpin stated that stamps had been collected and studied for the previous six or seven years and a better name was required for the new hobby than timbromanie (roughly "stamp mania"), which was disliked. The alternative terms "timbromania", "timbrophily", and "timbrology" gradually fell out of use as philately gained acceptance during the 1860s. Herpin took the Greek root word φιλ(ο)- phil(o)-, meaning "an attraction or affinity for something", and ateleia, meaning "exempt from duties and taxes", to form the neologism "philatélie". History Nineteenth century As a field of collecting, philately appeared after the introduction of postage stamps in 1840, but did not gain wide popularity until the mid-1850s. In the U.S., early collectors of stamps were known as "stamp gatherers". The United States Postal Service re-issued stamps in 1875 due to public demand for 'old stamps', including those from before the American Civil War. Some authors believe that the first philatelist appeared on the day of the release of the world's first postage stamp, dated to 6 May 1840, when the Liverson, Denby and Lavie London law office sent a letter to Scotland franked with ten uncut Penny Blacks, stamped with the postmark "LS.6MY6. 1840." In 1992 at an auction in Zürich, this envelope was sold for 690,000 francs. As early as 1846, cases of collecting stamps in large numbers were known in England; however, lacking any systematic purpose, such accumulations were sometimes simply pasted up as wallpaper. The first philatelist is considered to be a postmaster going by the name Mansen, who lived in Paris, and in 1855 had sold his collection, which contained almost all the postage stamps issued by that time. The stamp merchant and second-hand book dealer Edard de Laplante bought it, recognizing the postage stamp's value to collectors. Due to the boom in popularity and news of this transaction, stamp merchants like Laplante began to emerge. Towards the end of the 19th century, stamp collecting reached hundreds of thousands of people of all classes. Some countries held national collections of postage stamps – for example, England, Germany, France, Bavaria, and Bulgaria. In countries which held national collections, museums dedicated to the nation's history with philately were founded, and the first such appeared in Germany, France, and Bulgaria. Reportedly, one of the earliest such collections was that of the British Museum, assembled by MP Thomas Tapling and bequeathed to the Museum in 1891. The Museum für Kommunikation Berlin also had an extensive collection of stamps. The largest private collection of the time belonged to Philipp von Ferrary in Paris. As the number of postage stamp issues increased every year, collection became progressively difficult. Therefore, from the early 1880s, "collector experts" appeared, limiting their collections to one part of the world, a group of countries, or even a single country. Twentieth century Philately as one of the most popular types of collecting continued to develop in the 20th century. 
Along with the "Scott", "Stanley Gibbons", and "Yvert et Tellier" catalogs, the "Zumstein" (first published in Switzerland, 1909), and the "Michel" (first published in Germany, 1910) catalogs began publication. In 1934, the idea to celebrate an annual Postage Stamp Day was suggested by Hans von Rudolphi, a German philatelist. The idea was adopted rapidly in Germany and later spread to other countries. Stamp Day is a commemorative day established by a country's postal administration and celebrated annually; it is designed to attract public attention to postal correspondence, popularize its use and expand its reach, and contribute to the development of philately. In 1968, Cuba dedicated a postage stamp for Stamp Day with an image of G. Sciltian's "El filatelista". In 1926, the Fédération Internationale de Philatélie (FIP) was founded, and international philatelic exhibitions have been regularly organized under its auspices since 1929. The first World Philatelic Exhibition in Prague was held between August and September 1962; in 1976, the FIP brought together national societies from 57 countries, which held over 100 exhibitions, and by 1987, over 60 countries had joined the FIP. Since the middle of the 20th century, philately has become the most widespread field of amateur collecting, facilitated by: significantly expanded postal exchanges between countries; the issuing by many countries' post offices of commemorative emissions, multicolor series of stamps devoted to history, major contemporary events, art, fauna, flora, sports and other themes, as well as individual stamps, souvenir sheets (sheets with one or more printed stamps and an inscription in the margins) and items intended specifically for philatelists; the widespread sale of collectible postage stamps (including on commission), albums, stockbooks and other philatelic items; the publication of stamp catalogs; and national and international exhibitions organized by philatelic societies, domestic and international exchanges, and the promotion of philately through specialized magazines and other periodicals. Philately magazines, at this time, were published as far east as Poland, and as far west as North America. In Canada, Canadian Stamp News was established in 1976 as an offshoot of Canadian Coin News, which was launched about a decade earlier. Philately was advanced in particular by the USSR and nations within its sphere of influence, as well as by the United States, France, the UK, and Austria. The British Library Philatelic Collections and the postal museums in Stockholm, Paris, and Bern had unique national philately collections at that time, and among the famous private collections were those of the Royal Philatelic Collection, F. Ferrari (Austria), M. Burrus (Switzerland), A. Lichtenstein, A. Hind, J. Boker (U.S.), and H. Kanai (Japan). In the mid-1970s, national philately organizations and associations existed in most countries, and an estimated 150–200 million people were involved in philately. Twenty-first century From 28 August to 1 September 2004, the first World Stamp Championship in the history of world philately was held in Singapore. 
Types Traditional philately is the study of the technical aspects of stamp production and stamp identification, including: The stamp design process The paper used (wove, laid and including watermarks) The method of printing (engraving, typography) The gum The method of separation (perforation, rouletting) Any overprints on the stamp Any security markings, underprints or perforated initials ("perfins") The study of philatelic fakes and forgeries Diversification Expanding range of activity: Thematic philately, also known as topical philately, is the study of what is depicted on individual stamps. There are hundreds of popular subjects, such as birds, and ships, poets, presidents, monarchs, maps, aircraft, spacecraft, sports, and insects on stamps. Stamps depicted on stamps also constitute a topical area of collecting. Interesting aspects of topical philately include design mistakes and alterations; for instance, the recent editing out of cigarettes from the pictures used for United States stamps, and the stories of how particular images came to be used. Postal history studies the postal systems and how they operate and, or, the study of postage stamps and covers and associated material illustrating historical episodes of postal systems both before and after the introduction of the adhesive stamps. It includes the study of postmarks, post offices, postal authorities, postal rates and regulations and the process by which letters are moved from sender to recipient, including routes and choice of conveyance. A classic example is the Pony Express, which was the fastest way to send letters across the United States during the few months that it operated. Covers that can be proven to have been sent by the Pony Express are highly prized by collectors. Aerophilately is the branch of postal history that specializes in the study of airmail. Philatelists have observed the development of mail transport by air from its beginning, and all aspects of airmail services have been extensively studied and documented by specialists. Astrophilately is the branch of postal history that specializes in the study of stamps and postmarked envelopes that are connected to outer space. Postal stationery includes stamped envelopes, postal cards, letter sheets, aérogrammes (airletter sheets) and wrappers, most of which have an embossed or imprinted stamp or indicia indicating the prepayment of postage. Erinnophilia is the study of objects (cinderella stamps) that look like stamps, but are not postage stamps. Examples include Easter Seals, Christmas Seals, propaganda labels, and so forth. Philatelic literature documents the results of the philatelic study and includes thousands of books and periodicals. Revenue philately is the study of stamps used to collect taxes or fees on such things as legal documents, court fees, receipts, tobacco, alcoholic drinks, drugs and medicines, playing cards, hunting licenses and newspapers. Maximaphily is the study of Maximum Cards. Maximum Cards can be defined as a picture postcard with a postage stamp on the same theme and cancellation, with a maximum concordance between all three. Letterlocking includes "the process of folding and securing of letter substrates to become their own envelopes" or to create a form of "tamper-evident locking mechanism." Tools Philately uses several tools, including stamp tongs (a specialized form of tweezers) to safely handle the stamps, a strong magnifying glass and a perforation gauge (odontometer) to measure the perforation gauge of the stamp. 
The identification of watermarks is equally important and may be done with the naked eye by turning the stamp over or holding it up to the light. If this fails, watermark fluid may be used, which "wets" the stamp to reveal the mark. Other common tools include stamp catalogs, stamp stock books and stamp hinges. Organizations Philatelic organizations sprang up soon after people started collecting and studying stamps. They include local, national and international clubs and societies where collectors come together to share the various aspects of their hobby. The world's oldest philatelic society is the Royal Philatelic Society London, which was founded on 10 April 1869, as the Philatelic Society. In North America, the major national societies include the American Philatelic Society; the Royal Philatelic Society of Canada; and the Mexico-Elmhurst Philatelic Society, International. Local clubs and societies have been established in many cities of the world. The International Philatelic Federation (FIP), formed in 1926 and originally based in Zürich, Switzerland, is the world federation for philately. See also List of notable postage stamps List of philatelic topics List of philatelists List of philatelic awards Numismatics – the study and collection of coinage and currency References Further reading Sefi, A.J. An Introduction to Advanced Philately, with special reference to typical methods of stamp production. London: Rowley & Rowley, 1926 (2nd edition 1932) (Electronic facsimile edition Royal Philatelic Society London 2010). Sutton, R.J. & K.W. Anthony. The Stamp Collector's Encyclopaedia. 6th edition. London: Stanley Paul, 1966. Williams, L.N. & M. Fundamentals of Philately. State College: The American Philatelic Society, 1971. External links Can Plastic Films Damage My Stamps? Translated from an article by Ib Krarup Rasmussen published in Dansk Filatelistisk Tidsskrift Number 4, 2008. Stamps and Plastics – the Good and the Bad by Roger Rhoads, 2009. 1948 Olympic Stamp - UK Parliament Living Heritage
23682
https://en.wikipedia.org/wiki/Puget%20Sound
Puget Sound
Puget Sound is a sound on the northwestern coast of the U.S. state of Washington. It is a complex estuarine system of interconnected marine waterways and basins. A part of the Salish Sea, Puget Sound has one major and two minor connections to the Strait of Juan de Fuca, which in turn connects to the open Pacific Ocean. The major connection is Admiralty Inlet; the minor connections are Deception Pass and the Swinomish Channel. Puget Sound extends approximately from Deception Pass in the north to Olympia in the south. Its average depth is and its maximum depth, off Jefferson Point between Indianola and Kingston, is . The depth of the main basin, between the southern tip of Whidbey Island and Tacoma, is approximately . In 2009, the term Salish Sea was established by the United States Board on Geographic Names as the name for the collective waters of Puget Sound, the Strait of Juan de Fuca, and the Strait of Georgia. Sometimes the terms "Puget Sound" and "Puget Sound and adjacent waters" are used not only for Puget Sound proper but also for waters to the north, such as Bellingham Bay and the San Juan Islands region. The term "Puget Sound" is used not just for the body of water but also the Puget Sound region centered on the sound. Major cities on the sound include Seattle, Tacoma, Olympia, and Everett. Puget Sound is also the second-largest estuary in the United States, after Chesapeake Bay in Maryland and Virginia. Names In 1792, George Vancouver gave the name "Puget's Sound" to the waters south of the Tacoma Narrows, in honor of Peter Puget, a Huguenot lieutenant accompanying him on the Vancouver Expedition. This name later came to be used for the waters north of Tacoma Narrows as well. An alternative term for Puget Sound, used by a number of Native Americans and environmental groups, is Whulge (or Whulj), an Anglicization of the Lushootseed name for Puget Sound, , which literally means "sea, salt water, ocean, or sound". The name for the Lushootseed language, , is derived from the root word , an alternative name for Puget Sound. Definitions The USGS defines Puget Sound as all the waters south of three entrances from the Strait of Juan de Fuca. The main entrance at Admiralty Inlet is defined as a line between Point Wilson on the Olympic Peninsula, and Point Partridge on Whidbey Island. The second entrance is at Deception Pass along a line from West Point on Whidbey Island, to Deception Island, then to Rosario Head on Fidalgo Island. The third entrance is at the south end of the Swinomish Channel, which connects Skagit Bay and Padilla Bay. Under this definition, Puget Sound includes the waters of Hood Canal, Admiralty Inlet, Possession Sound, Saratoga Passage, and others. It does not include Bellingham Bay, Padilla Bay, the waters of the San Juan Islands or anything farther north. Another definition, given by NOAA, subdivides Puget Sound into five basins or regions. Four of these (including South Puget Sound) correspond to areas within the USGS definition, but the fifth, called "Northern Puget Sound", includes a large additional region. It is defined as bounded to the north by the international boundary with Canada, and to the west by a line running north from the mouth of the Sekiu River on the Olympic Peninsula. Under this definition, significant parts of the Strait of Juan de Fuca and the Strait of Georgia are included in Puget Sound, with the international boundary marking an abrupt and hydrologically arbitrary limit. 
According to Arthur Kruckeberg, the term "Puget Sound" is sometimes used for waters north of Admiralty Inlet and Deception Pass, especially for areas along the north coast of Washington and the San Juan Islands, essentially equivalent to NOAA's "Northern Puget Sound" subdivision described above. Kruckeberg uses the term "Puget Sound and adjacent waters". Kruckeberg's 1991 text, however, does not reflect the 2009 decision of the United States Board on Geographic Names to use the term Salish Sea to refer to the greater maritime environment. Geology Continental ice sheets have repeatedly advanced and retreated from the Puget Sound region. The most recent glacial period, called the Fraser Glaciation, had three phases, or stades. During the third, or Vashon Glaciation, a lobe of the Cordilleran Ice Sheet, called the Puget Lobe, spread south about 15,000 years ago, covering the Puget Sound region with an ice sheet about thick near Seattle, and nearly at the present Canada-U.S. border. Since each new advance and retreat of ice erodes away much of the evidence of previous ice ages, the most recent Vashon phase has left the clearest imprint on the land. At its maximum extent the Vashon ice sheet extended south of Olympia to near Tenino, and covered the lowlands between the Olympic and Cascade mountain ranges. About 14,000 years ago the ice began to retreat. By 11,000 years ago it survived only north of the Canada–US border. The melting retreat of the Vashon Glaciation eroded the land, creating a drumlin field of hundreds of aligned drumlin hills. Lake Washington and Lake Sammamish (which are ribbon lakes), Hood Canal, and the main Puget Sound basin were altered by glacial forces. These glacial forces did not so much "carve" the landscape, in the sense of ice cutting directly into it, as erode it through meltwater from the Vashon Glacier, creating the drumlin field. As the ice retreated, vast amounts of glacial till were deposited throughout the Puget Sound region. The soils of the region, less than ten thousand years old, are still characterized as immature. As the Vashon glacier receded, a series of proglacial lakes formed, filling the main trough of Puget Sound and inundating the southern lowlands. Glacial Lake Russell was the first such large recessional lake. From the vicinity of Seattle in the north the lake extended south to the Black Hills, where it drained south into the Chehalis River. Sediments from Lake Russell form the blue-gray clay identified as the Lawton Clay. The second major recessional lake was Glacial Lake Bretz. It also drained to the Chehalis River until the , in the northeast Olympic Peninsula, melted, allowing the lake's water to rapidly drain north into the marine waters of the Strait of Juan de Fuca, which was rising as the ice sheet retreated. As icebergs calved off the toe of the glacier, their embedded gravels and boulders were deposited in the chaotic mix of unsorted till geologists call glaciomarine drift. Many beaches about the Sound display glacial erratics, rendered more prominent than those in coastal woodland solely by their exposed position; submerged glacial erratics sometimes cause hazards to navigation. The sheer weight of glacial-age ice depressed the landforms, which experienced post-glacial rebound after the ice sheets had retreated. Because the rate of rebound was not synchronous with the post-ice age rise in sea levels, the bed of what is now Puget Sound filled alternately with fresh and with sea water. 
The upper level of the lake-sediment Lawton Clay now lies about above sea level. The Puget Sound system consists of four deep basins connected by shallower sills. The four basins are Hood Canal, west of the Kitsap Peninsula, Whidbey Basin, east of Whidbey Island, South Sound, south of the Tacoma Narrows, and the Main Basin, which is further subdivided into Admiralty Inlet and the Central Basin. Puget Sound's sills, a kind of submarine terminal moraine, separate the basins from one another, and Puget Sound from the Strait of Juan de Fuca. Three sills are particularly significant—the one at Admiralty Inlet which checks the flow of water between the Strait of Juan de Fuca and Puget Sound, the one at the entrance to Hood Canal (about below the surface), and the one at the Tacoma Narrows (about ). Other sills that present less of a barrier include the ones at Blake Island, Agate Pass, Rich Passage, and Hammersley Inlet. The depth of the basins is a result of the Sound being part of the Cascadia subduction zone, where the terranes accreted at the edge of the Juan de Fuca Plate are being subducted under the North American Plate. There has not been a major subduction zone earthquake here since the magnitude nine Cascadia earthquake; according to Japanese records, it occurred on January 26, 1700. Lesser Puget Sound earthquakes with shallow epicenters, caused by the fracturing of stressed oceanic rocks as they are subducted, still cause great damage. The Seattle Fault cuts across Puget Sound, crossing the southern tip of Bainbridge Island and passing under Elliott Bay. To the south, the existence of a second fault, the Tacoma Fault, has buckled the intervening strata in the Seattle Uplift. Typical Puget Sound profiles of dense glacial till overlying permeable glacial outwash of gravels above an impermeable bed of silty clay may become unstable after periods of unusually wet weather and slump in landslides. Hydrology The United States Geological Survey (USGS) defines Puget Sound as a bay with numerous channels and branches; more specifically, it is a fjord system of flooded glacial valleys. Puget Sound is part of a larger physiographic structure termed the Puget Trough, which is a physiographic section of the larger Pacific Border province, which in turn is part of the larger Pacific Mountain System. Puget Sound is a large salt water estuary, or system of many estuaries, fed by highly seasonal freshwater from the Olympic and Cascade Mountain watersheds. The mean annual river discharge into Puget Sound is , with a monthly average maximum of about and minimum of about . Puget Sound's shoreline is long, encompassing a water area of and a total volume of at mean high water. The average volume of water flowing in and out of Puget Sound during each tide is . The maximum tidal currents, in the range of 9 to 10 knots, occur at Deception Pass. Water flow through Deception Pass is approximately equal to 2% of the total tidal exchange between Puget Sound and the Strait of Juan de Fuca. The size of Puget Sound's watershed is . "Northern Puget Sound" is frequently considered part of the Puget Sound watershed, which enlarges its size to . The USGS uses the name "Puget Sound" for its hydrologic unit subregion 1711, which includes areas draining to Puget Sound proper as well as the Strait of Juan de Fuca, the Strait of Georgia, and the Fraser River. Significant rivers that drain to "Northern Puget Sound" include the Nooksack, Dungeness, and Elwha Rivers. 
The Nooksack empties into Bellingham Bay, the Dungeness and Elwha into the Strait of Juan de Fuca. The Chilliwack River flows north to the Fraser River in Canada. Tides in Puget Sound are of the mixed type with two high and two low tides each tidal day. These are called Higher High Water (HHW), Lower Low Water (LLW), Lower High Water (LHW), and Higher Low Water (HLW). The configuration of basins, sills, and interconnections cause the tidal range to increase within Puget Sound. The difference in height between the Higher High Water and the Lower Low Water averages about at Port Townsend on Admiralty Inlet, but increases to about at Olympia, the southern end of Puget Sound. Puget Sound is generally accepted as the start of the Inside Passage. Flora and fauna Important marine flora of Puget Sound include eelgrass (Zostera marina) and various kelp, important kelps include canopy forming bull kelp (Nereocystis luetkeana). and edible kelps like kombu (Saccharina latissima) Among the marine mammals species found in Puget Sound are harbor seals (Phoca vitulina). Orca (Orcinus orca), or "killer whales" are famous throughout the Sound, and are a large tourist attraction. Although orca are sometimes seen in Puget Sound proper they are far more prevalent around the San Juan Islands north of Puget Sound. Many fish species occur in Puget Sound. The various salmonid species, including salmon, trout, and char are particularly well-known and studied. Salmonid species of Puget Sound include chinook salmon (Oncorhynchus tshawytscha), chum salmon (O. keta), coho salmon (O. kisutch), pink salmon (O. gorbuscha), sockeye salmon (O. nerka), sea-run coastal cutthroat trout (O. clarki clarki), steelhead (O. mykiss irideus), sea-run bull trout (Salvelinus confluentus), and Dolly Varden trout (Salvelinus malma malma). Common forage fishes found in Puget Sound include Pacific herring (Clupea pallasii), surf smelt (Hypomesus pretiosus), and Pacific sand lance (Ammodytes hexapterus). Important benthopelagic fish of Puget Sound include North Pacific hake (Merluccius productus), Pacific cod (Gadus macrocelhalus), walleye pollock (Theragra chalcogramma), and the spiny dogfish (Squalus acanthias). There are about 28 species of Sebastidae (rockfish), of many types, found in Puget Sound. Among those of special interest are copper rockfish (Sebastes caurinus), quillback rockfish (S. maliger), black rockfish (S. melanops), yelloweye rockfish (S. ruberrimus), bocaccio rockfish (S. paucispinis), canary rockfish (S. pinniger), and Puget Sound rockfish (S. emphaeus). Many other fish species occur in Puget Sound, such as sturgeons, lampreys, various sharks, rays, and skates. Puget Sound is home to numerous species of marine invertebrates, including sponges, sea anemones, chitons, clams, sea snails, limpets, crabs, barnacles, starfish, sea urchins, and sand dollars. Dungeness crabs (Metacarcinus magister) occur throughout Washington waters, including Puget Sound. Many bivalves occur in Puget Sound, such as Pacific oysters (Crassostrea gigas) and geoduck clams (Panopea generosa). The Olympia oyster (Ostreola conchaphila), once common in Puget Sound, was depleted by human activities during the 20th century. There are ongoing efforts to restore Olympia oysters in Puget Sound. In 1967, an initial scuba survey estimated that were "about 110 million pounds of geoducks" (pronounced "gooey ducks") situated in Puget Sound's sediments. Also known as "king clam", geoducks are considered to be a delicacy in Asian countries. 
There are many seabird species of Puget Sound. Among these are grebes such as the western grebe (Aechmophorus occidentalis); loons such as the common loon (Gavia immer); auks such as the pigeon guillemot (Cepphus columba), rhinoceros auklet (Cerorhinca monocerata), common murre (Uria aalge), and marbled murrelet (Brachyramphus marmoratus); the brant goose (Branta bernicla); seaducks such as the long-tailed duck (Clangula hyemalis), harlequin duck (Histrionicus histrionicus), and surf scoter (Melanitta perspicillata); and cormorants such as the double-crested cormorant (Phalacrocorax auritus). Puget Sound is home to a non-migratory and marine-oriented subspecies of great blue herons (Ardea herodias fannini). Bald eagles (Haliaeetus leucocephalus) occur in relative high densities in the Puget Sound region. History Puget Sound has been home to many Indigenous peoples, such as the Lushootseed-speaking peoples, as well as the Twana, Chimakum, and Klallam, for millennia. The earliest known presence of Indigenous inhabitants in the Puget Sound region is between 14,000 BCE to 6,000 BCE. Dispatched in an attempt to locate the fabled Northwest Passage, British Royal Navy captain George Vancouver anchored on May 19, 1792, on the shores of Seattle, explored Puget Sound, and claimed it for Great Britain on June 4 the same year, naming it for one of his officers, Lieutenant Peter Puget. He further named the entire region; New Georgia, after King George III. After 1818 Britain and the United States, which both claimed the Oregon Country, agreed to "joint occupancy", deferring resolution of the Oregon boundary dispute until the 1846 Oregon Treaty. Puget Sound was part of the disputed region until 1846, after which it became US territory. American maritime fur traders visited Puget Sound in the early 19th century. An Hudson's Bay Company expedition led by James McMillan in late 1824 was first non-Indigenous group to enter Puget Sound since George Vancouver in 1792. The expedition went on to reach the Fraser River, first again to reach the lower Fraser since Fraser himself in 1808. The first non-Indigenous settlement in the Puget Sound area was Fort Nisqually, a fur trade post of the Hudson's Bay Company (HBC) built in 1833. Fort Nisqually was part of the HBC's Columbia District, headquartered at Fort Vancouver. In 1838, the HBC's subsidy operation, the Puget Sound Agricultural Company was established in part to procure resources and trade, as well as to further establish British claim to the region. Missionaries J.P. Richmond and W.H. Wilson were attending Fort Nisqually for two years by 1840. British ships, such as the Beaver'', exported foodstuffs and provisions from Fort Nisqually, and would eventually export Puget Sound lumber, an industry that would soon outpace the dominant fur trading market and drive the early Puget Sound economy. The first organized American expedition took place under the helm of Commander Charles Wilkes, whose exploring party sailed up Puget Sound in 1841. The first permanent American settlement on Puget Sound was Tumwater, founded in 1845 by Americans who had come via the Oregon Trail. The decision to settle north of the Columbia River was made in part because one of the settlers, George Washington Bush, was considered black and the Provisional Government of Oregon banned the residency of mulattoes but did not actively enforce the restriction north of the river. In 1853 Washington Territory was formed from part of Oregon Territory. 
In 1888 the Northern Pacific railroad line reached Puget Sound, linking the region to eastern states. Washington State was admitted to the union in 1889 as part of the Enabling Act, and the region's borders have since remained unchanged. Transportation The Washington State Ferries (WSF) are a state-run ferry system that connects the larger islands of Puget Sound, the Washington mainland, and the Olympic and Kitsap Peninsulas. Its vessels carry both passengers and vehicular traffic. The system averaged 24.3 million passengers per year in the 2010s and carried 17.2 million in 2022, reflecting the COVID-19 pandemic. It is the largest ferry operator in the United States. Environmental issues Over the past 30 years, as the human population of the region has increased, there has been a corresponding decrease in various plant and animal species which inhabit Puget Sound. The decline has been seen in numerous populations including forage fish, salmonids, bottom fish, marine birds, harbor porpoise, and orcas. The decline is attributed to a variety of issues, including human population growth, pollution, and climate change. Because of this population decline, there have been changes to fishery practices and an increase in petitioning to add species to the Endangered Species Act. There has also been an increase in recovery and management plans for many different area species. The causes of these environmental issues are toxic contamination, eutrophication (low oxygen due to excess nutrients), and nearshore habitat changes. On May 22, 1978, a valve was mistakenly opened aboard the submarine USS Puffer, releasing up to 500 US gallons (1,900 L; 420 imp gal) of radioactive water into Puget Sound during an overhaul in drydock at Bremerton Naval Shipyard. See also References Sources Further reading External links University of Washington Libraries Digital Collections – Oliver S. Van Olinda Photographs A collection of 420 photographs depicting life on Vashon Island, Whidbey Island, Seattle, and other communities of Washington State's Puget Sound from the 1880s through the 1930s. Bodies of water of Island County, Washington Bodies of water of Jefferson County, Washington Bodies of water of King County, Washington Bodies of water of Kitsap County, Washington Bodies of water of Mason County, Washington Bodies of water of Pierce County, Washington Bodies of water of Skagit County, Washington Bodies of water of Snohomish County, Washington Bodies of water of Thurston County, Washington Estuaries of Washington (state) Fjords of Washington (state) Physiographic sections Sounds of the United States
23688
https://en.wikipedia.org/wiki/Perjury
Perjury
Perjury (also known as foreswearing) is the intentional act of swearing a false oath or falsifying an affirmation to tell the truth, whether spoken or in writing, concerning matters material to an official proceeding. Like most other crimes in the common law system, to be convicted of perjury one must have had the intention (mens rea) to commit the act and to have actually committed the act (actus reus). Further, statements that are facts cannot be considered perjury, even if they might arguably constitute an omission, and it is not perjury to lie about matters that are immaterial to the legal proceeding. Statements that entail an interpretation of fact are not perjury because people often draw inaccurate conclusions unwittingly or make honest mistakes without the intent to deceive. Individuals may have honest but mistaken beliefs about certain facts or their recollection may be inaccurate, or may have a different perception of what is the accurate way to state the truth. In some jurisdictions, no crime has occurred when a false statement is (intentionally or unintentionally) made while under oath or subject to penalty. Instead, criminal culpability attaches only at the instant the declarant falsely asserts the truth of statements (made or to be made) that are material to the outcome of the proceeding. It is not perjury, for example, to lie about one's age except if age is a fact material to influencing the legal result, such as eligibility for old age retirement benefits or whether a person was of an age to have legal capacity. Perjury is considered a serious offence, as it can be used to usurp the power of the courts, resulting in miscarriages of justice. In Canada, those who commit perjury are guilty of an indictable offence and liable to imprisonment for a term not exceeding fourteen years. Perjury is a statutory offence in England and Wales. A person convicted of perjury is liable to imprisonment for a term not exceeding seven years, or to a fine, or to both. In the United States, the general perjury statute under federal law classifies perjury as a felony and provides for a prison sentence of up to five years. The California Penal Code allows for perjury to be a capital offense in cases causing wrongful execution. Perjury which caused the wrongful execution of another or in the pursuit of causing the wrongful execution of another is respectively construed as murder or attempted murder, and is normally itself punishable by execution in countries that retain the death penalty. Perjury is considered a felony in most U.S. states. However, prosecutions for perjury are rare. The rules for perjury also apply when a person has made a statement under penalty of perjury even if the person has not been sworn or affirmed as a witness before an appropriate official. An example is the US income tax return, which, by law, must be signed as true and correct under penalty of perjury (see ). Federal tax law provides criminal penalties of up to three years in prison for violation of the tax return perjury statute. See: In the United States, Kenya, Scotland and several other English-speaking Commonwealth nations, subornation of perjury, which is attempting to induce another person to commit perjury, is itself a crime. Perjury law by jurisdiction Australia Perjury is punishable by imprisonment in various states and territories of Australia. 
In several jurisdictions, longer prison sentences are possible if perjury was committed with the intent of convicting or acquitting a person charged with a serious offence. Australian Capital Territory: Perjury is punishable by a fine of up to AU$112,000 or 7 years imprisonment or both. If perjury was committed with the intent of convicting or acquitting someone of an offence which carries a prison sentence, the maximum penalty is AU$224,000 or 14 years imprisonment or both. New South Wales: Under Section 327 of the Crimes Act 1900, perjury is punishable by imprisonment of up to 10 years. Under Section 328, if a person commits perjury with the aim of convicting or acquitting a person charged with an offence that carries a prison sentence of 5 years or more, perjury is punishable by imprisonment of up to 14 years. Northern Territory: Perjury is punishable by imprisonment of up to 14 years. If perjury was committed to convict someone of an offence that carries life imprisonment, the perjurer can be imprisoned for life. Queensland: Perjury is punishable by imprisonment of up to 14 years. If perjury was committed to convict someone of an offence that carries life imprisonment, the perjurer can be imprisoned for life. South Australia: Perjury and subornation of perjury is punishable by imprisonment of up to 7 years. Tasmania: Perjury is a crime in Tasmania. Victoria: Perjury and subornation of perjury is punishable by imprisonment of up to 15 years. Western Australia: Under Section 125 of the Criminal Code Act Compilation Act 1913, perjury is punishable by imprisonment of up to 14 years. If perjury was committed to convict someone of an offence that carries life imprisonment, the perjurer can be imprisoned for life. Canada The offence of perjury is codified by section 132 of the Criminal Code. It is defined by section 131, which provides: As to corroboration, see section 133. Everyone who commits perjury is guilty of an indictable offence and liable to imprisonment for a term not exceeding fourteen years. European Union A person who, before the Court of Justice of the European Union, swears anything which he knows to be false or does not believe to be true is, whatever his nationality, guilty of perjury. Proceedings for this offence may be taken in any place in the State and the offence may for all incidental purposes be treated as having been committed in that place. India "The offence of perjury finds its place in law by virtue of Section 191 to Section 203 of the Indian Penal Code, 1860 ('IPC'). Unlike many other countries, the offence of perjury is muted on account of Section 195 of the Code of Criminal Procedure, 1973 ("Cr.P.C"). Section 195(1)(b)(i) of the Cr.P.C. restricts any court to take cognisance of an offence of perjury unless the same is by way of a complaint in writing by the court before which the offence is committed or by a superior court." New Zealand Punishment for perjury is defined under Section 109 of the Crimes Act 1961. A person who commits perjury may be imprisoned for up to 7 years. If a person commits perjury to procure the conviction of someone charged with an offence that carries a maximum sentence of not less than 3 years' imprisonment, the perjurer may be imprisoned for up to 14 years. Nigeria United Kingdom England and Wales Perjury is a statutory offence in England and Wales. It is created by section 1(1) of the Perjury Act 1911. Section 1 of that Act reads: The words omitted from section 1(1) were repealed by section 1(2) of the Criminal Justice Act 1948. 
A person guilty of an offence under section 11(1) of the European Communities Act 1972 (i.e. perjury before the Court of Justice of the European Union) may be proceeded against and punished in England and Wales as for an offence under section 1(1). Section 1(4) has effect in relation to proceedings in the Court of Justice of the European Union as it has effect in relation to a judicial proceeding in a tribunal of a foreign state. Section 1(4) applies in relation to proceedings before a relevant convention court under the European Patent Convention as it applies to a judicial proceeding in a tribunal of a foreign state. A statement made on oath by a witness outside the United Kingdom and given in evidence through a live television link by virtue of section 32 of the Criminal Justice Act 1988 must be treated for the purposes of section 1 as having been made in the proceedings in which it is given in evidence. Section 1 applies in relation to a person acting as an intermediary as it applies in relation to a person lawfully sworn as an interpreter in a judicial proceeding; and for this purpose, where a person acts as an intermediary in any proceeding which is not a judicial proceeding for the purposes of section 1, that proceeding must be taken to be part of the judicial proceeding in which the witness's evidence is given. Where any statement made by a person on oath in any proceeding which is not a judicial proceeding for the purposes of section 1 is received in evidence in pursuance of a special measures direction, that proceeding must be taken for the purposes of section 1 to be part of the judicial proceeding in which the statement is so received in evidence. Judicial proceeding The definition in section 1(2) is not "comprehensive". The book Archbold says that it appears to be immaterial whether the court before which the statement is made has jurisdiction in the particular cause in which the statement is made, because there is no express requirement in the Act that the court be one of "competent jurisdiction" and because the definition in section 1(2) does not appear to require this by implication either. Actus reus The actus reus of perjury might be considered to be the making of a statement, whether true or false, on oath in a judicial proceeding, where the person knows the statement to be false or believes it to be false. Perjury is a conduct crime. Mode of trial Perjury is triable only on indictment. Sentence A person convicted of perjury is liable to imprisonment for a term not exceeding seven years, or to a fine, or to both. The following cases are relevant: R v Hall (1982) 4 Cr App R (S) 153 R v Knight, 6 Cr App R (S) 31, [1984] Crim LR 304, CA R v Healey (1990) 12 Cr App R (S) 297 R v Dunlop [2001] 2 Cr App R (S) 27 R v Archer [2002] EWCA Crim 1996, [2003] 1 Cr App R (S) 86 R v Adams [2004] 2 Cr App R (S) 15 R v Cunningham [2007] 2 Cr App R (S) 61 See also the Crown Prosecution Service sentencing manual. History In Anglo-Saxon legal procedure, the offence of perjury could only be committed by both jurors and by compurgators. With time witnesses began to appear in court they were not so treated despite the fact that their functions were akin to that of modern witnesses. This was due to the fact that their role were not yet differentiated from those of the juror and so evidence or perjury by witnesses was not made a crime. Even in the 14th century, when witnesses started appearing before the jury to testify, perjury by them was not made a punishable offence. 
The maxim then was that every witness's evidence on oath was true. Perjury by witnesses began to be punished before the end of the 15th century by the Star Chamber. The immunity enjoyed by witnesses began also to be whittled down or interfered with by the Parliament in England in 1540 with subornation of perjury and, in 1562, with perjury proper. The punishment for the offence then was in the nature of monetary penalty, recoverable in a civil action and not by penal sanction. In 1613, the Star Chamber declared perjury by a witness to be a punishable offence at common law. Prior to the 1911 Act, perjury was governed by section 3 of the Maintenance and Embracery Act 1540 5 Eliz 1 c. 9 (; repealed 1967) and the Perjury Act 1728. Materiality The requirement that the statement be material can be traced back to and has been credited to Edward Coke, who said: Northern Ireland Perjury is a statutory offence in Northern Ireland. It is created by article 3(1) of the Perjury (Northern Ireland) Order 1979 (S.I. 1979/1714 (N.I. 19)). This replaces the Perjury Act (Northern Ireland) 1946 (c. 13) (N.I.). United States Perjury operates in American law as an inherited principle of the common law of England, which defined the act as the "willful and corrupt giving, upon a lawful oath, or in any form allowed by law to be substituted for an oath, in a judicial proceeding or course of justice, of a false testimony material to the issue or matter of inquiry". William Blackstone touched on the subject in his Commentaries on the Laws of England, establishing perjury as "a crime committed when a lawful oath is administered, in some judicial proceeding, to a person who swears willfully, absolutely, and falsely, in a matter material to the issue or point in question". The punishment for perjury under the common law has varied from death to banishment and has included such grotesque penalties as severing the tongue of the perjurer. The definitional structure of perjury provides an important framework for legal proceedings, as the component parts of this definition have permeated jurisdictional lines, finding a home in American legal constructs. As such, the main tenets of perjury, including mens rea, a lawful oath, occurring during a judicial proceeding, a false testimony have remained necessary pieces of perjury's definition in the United States. Statutory definitions Perjury's current position in the American legal system takes the form of state and federal statutes. Most notably, the United States Code prohibits perjury, which is defined in two senses for federal purposes as someone who: The above statute provides for a fine and/or up to five years in prison as punishment. Within federal jurisdiction, statements made in two broad categories of judicial proceedings may qualify as perjurious: 1) Federal official proceedings, and 2) Federal Court or Grand Jury proceedings. A third type of perjury entails the procurement of perjurious statements from another person. More generally, the statement must occur in the "course of justice," but this definition leaves room open for interpretation. One particularly precarious aspect of the phrasing is that it entails knowledge of the accused person's perception of the truthful nature of events and not necessarily the actual truth of those events. It is important to note the distinction here, between giving a false statement under oath and merely misstating a fact accidentally, but the distinction can be especially difficult to discern in court of law. 
Precedents The development of perjury law in the United States centers on United States v. Dunnigan, a seminal case that set out the parameters of perjury within United States law. The court uses the Dunnigan-based legal standard to determine if an accused person: "testifying under oath or affirmation violates this section if she gives false testimony concerning a material matter with the willful intent to provide false testimony, rather than as a result of confusion, mistake, or faulty memory." However, a defendant shown to be willfully ignorant may in fact be eligible for perjury prosecution. Dunnigan distinction manifests its importance with regard to the relation between two component parts of perjury's definition: in willfully giving a false statement, a person must understand that she is giving a false statement to be considered a perjurer under the Dunnigan framework. Deliberation on the part of the defendant is required for a statement to constitute perjury. Jurisprudential developments in the American law of perjury have revolved around the facilitation of "perjury prosecutions and thereby enhance the reliability of testimony before federal courts and grand juries". With that goal in mind, Congress has sometimes expanded the grounds on which an individual may be prosecuted for perjury, with section 1623 of the United States Code recognizing the utterance of two mutually incompatible statements as grounds for perjury indictment even if neither can unequivocally be proven false. However, the two statements must be so mutually incompatible that at least one must necessarily be false; it is irrelevant whether the false statement can be specifically identified from among the two. It thus falls on the government to show that a defendant (a) knowingly made a (b) false (c) material statement (d) under oath (e) in a legal proceeding. The proceedings can be ancillary to normal court proceedings, and thus, even such menial interactions as bail hearings can qualify as protected proceedings under this statute. Wilfulness is an element of the offense. The mere existence of two mutually-exclusive factual statements is not sufficient to prove perjury; the prosecutor nonetheless has the duty to plead and prove that the statement was willfully made. Mere contradiction will not sustain the charge; there must be strong corroborative evidence of the contradiction. One significant legal distinction lies in the specific realm of knowledge necessarily possessed by a defendant for her statements to be properly called perjury. Though the defendant must knowingly render a false statement in a legal proceeding or under federal jurisdiction, the defendant need not know that they are speaking under such conditions for the statement to constitute perjury. All tenets of perjury qualification persist: the "knowingly" aspect of telling the false statement simply does not apply to the defendant's knowledge about the person whose deception is intended. Materiality The evolution of United States perjury law has experienced the most debate with regards to the materiality requirement. Fundamentally, statements that are literally true cannot provide the basis for a perjury charge (as they do not meet the falsehood requirement) just as answers to truly ambiguous statements cannot constitute perjury. However, such fundamental truths of perjury law become muddled when discerning the materiality of a given statement and the way in which it was material to the given case. In United States v. 
Brown, the court defined material statements as those with "a natural tendency to influence, or is capable of influencing, the decision of the decision-making body to be addressed," such as a jury or grand jury. While courts have specifically made clear certain instances that have succeeded or failed to meet the nebulous threshold for materiality, the topic remains unresolved in large part, except in certain legal areas where intent manifests itself in an abundantly clear fashion, such as with the so-called perjury trap, a specific situation in which a prosecutor calls a person to testify before a grand jury with the intent of drawing a perjurious statement from the person being questioned. Defense of recantation Despite a tendency of US perjury law toward broad prosecutory power under perjury statutes, American perjury law has afforded potential defendants a new form of defense not found in the British Common Law. This defense requires that an individual admit to making a perjurious statement during that same proceeding and recanting the statement. Though this defensive loophole slightly narrows the types of cases which may be prosecuted for perjury, the effect of this statutory defense is to promote a truthful retelling of facts by witnesses, thus helping to ensure the reliability of American court proceedings just as broadened perjury statutes aimed to do. Subornation of perjury Subornation of perjury stands as a subset of US perjury laws and prohibits an individual from inducing another to commit perjury. Subornation of perjury entails equivalent possible punishments as perjury on the federal level. The crime requires an extra level of satisfactory proof, as prosecutors must show not only that perjury occurred but also that the defendant positively induced said perjury. Furthermore, the inducing defendant must know that the suborned statement is a false, perjurious statement. Notable convicted perjurers Jonathan Aitken, British politician, was sentenced to 18 months' imprisonment in 1999 for perjury. Jeffrey Archer, British novelist and politician, was sentenced to 4 years' imprisonment for perjury in 2001. Kwame Kilpatrick, Detroit mayor was convicted of perjury in 2008. Marion Jones, American track and field athlete, was sentenced to 6 months' imprisonment after being found guilty of two counts of perjury in 2008. Mark Fuhrman, Los Angeles Police Department detective, entered a no contest plea to a perjury charge relating to his testimony in the murder trial of O. J. Simpson. This was one of the seminal occurrences of perjury by a police officer. Alger Hiss, American government official who was accused of being a Soviet spy in 1948 and convicted of perjury in connection with this charge in 1950. Lil' Kim, American rapper was convicted of perjury in 2005 after lying to a grand jury in 2003 about a February 2001 shooting. She was sentenced to one year and one day of imprisonment. Lewis "Scooter" Libby, was convicted in 2007 of two counts of perjury in connection with the Plame affair. Bernie Madoff, the former Chairman of the NASDAQ stock exchange, in 2009 was found guilty of perjury in relation to investment fraud arising from his operating a Ponzi scheme. Michele Sindona, convicted of perjury related to a bogus kidnapping in August 1979. Tommy Sheridan, Scottish politician, found guilty of lying on affirmation in a trial in 2010. 
John Waller, British highwayman, known for his death while being pilloried for perjury in 1732 Allegations of perjury Notable people who have been accused of perjury include: Barry Bonds was indicted by a federal grand jury for allegedly perjuring himself in testimony denying the use of performance-enhancing drugs. The perjury charges were later dropped after a deadlock by the trial jury. Former U.S. President Bill Clinton was accused of perjury in the Clinton-Lewinsky scandal and as a result was impeached by the House of Representatives on 19 December 1998. No criminal charges were ever brought and upon leaving office he accepted immunity. Andy Coulson, British journalist and political aide, was cleared of perjury charges in the News International phone hacking scandal, because his questioned testimony was ruled immaterial. Michael Hayden, the former director of the Central Intelligence Agency (CIA), has been accused of lying to Congress during his 2007 testimony about the CIA's enhanced interrogation techniques. Keith B. Alexander, the former director of the National Security Agency (NSA), had told Congress in 2012 that "we don't hold data on US citizens". James R. Clapper, the former Director of National Intelligence, was accused of perjury for telling a congressional committee in March 2013, that the National Security Agency does not collect any type of data at all on millions of Americans. See also Brady material False confession Forced confession Horkos Lies (evidence) Making false statements Obstruction of justice Performativity Pitchess motion Statutory declaration Testilying References Notes External links Bryan Druzin, and Jessica Li, The Criminalization of Lying: Under what Circumstances, if any, should Lies be made Criminal?, 101 JOURNAL OF CRIMINAL LAW AND CRIMINOLOGY (Northwestern University) (forthcoming 2011). Gabriel J. Chin and Scott Wells, The "Blue Wall of Silence" as Evidence of Bias and Motive to Lie: A New Approach to Police Perjury, 59 University of Pittsburgh Law Review 233 (1998). Perjury Under Federal Law: A Brief Overview Congressional Research Service Crimes Lying Legal terminology Abuse of the legal system Negative Mitzvoth Articles containing video clips
23689
https://en.wikipedia.org/wiki/Phoenix
Phoenix
Phoenix most often refers to: Phoenix (mythology), an immortal bird in ancient Greek mythology Phoenix, Arizona, the capital of the U.S. state of Arizona and the most populated state capital in the United States Phoenix may also refer to: Greek mythology Phoenix (son of Amyntor), king of the Dolopians who raises Achilles Phoenix (son of Agenor), brother or father of Europa Phoenix, a chieftain who came as Guardian of the young Hymenaeus when they joined Dionysus in his campaign against India (see Phoenix (Greek myth)) Places Canada Phoenix, Alberta, a ghost town Phoenix, British Columbia, a ghost town United States Phoenix, Arizona, capital of Arizona and most populous city in the state Phoenix metropolitan area, Arizona Phoenix, Georgia, an unincorporated community Phoenix, Illinois, a village Phoenix, Louisiana, an unincorporated community Phoenix, Maryland, an unincorporated community Phoenix, Michigan, an unincorporated community Phoenix, Mississippi, an unincorporated community Phoenix, Edison, New Jersey, a neighborhood of the township of Edison Phoenix, Sayreville, New Jersey, a neighborhood of the borough of Sayreville Phoenix, New York, a village Phoenix, Oregon, a city Elsewhere Phoenix (Caria), a town of ancient Caria, now in Turkey Phoenix (Crete), a town of ancient Crete mentioned in the Bible Phoenix (Lycia), a town of ancient Lycia, now in Turkey Phoenix Park, Dublin, Ireland, an urban park Phoenix Islands, in the Republic of Kiribati Phoenix, KwaZulu-Natal, in South Africa Phoenix City, a nickname for Warsaw, the capital of Poland Phoenix, a river of Thessaly, Greece, that flowed at the ancient city of Anthela Arts and entertainment Fictional entities Characters Phoenix (comics), alias used by several comics characters Phoenix Force (comics), a Marvel Comics entity Jean Grey, also known as Phoenix and Dark Phoenix, an X-Men character Rachel Summers, a Marvel Comics character also known as Phoenix Phoenix (Transformers) Phoenix Hathaway, a character in the British soap opera Hollyoaks Phoenix Raynor, a Shortland Street character Phoenix Wright, an Ace Attorney character Aster Phoenix (or Edo Phoenix), a Yu-Gi-Oh! GX character Paul Phoenix (Tekken), a Tekken character Simon Phoenix, a Demolition Man character Stefano DiMera, also known as The Phoenix, a Days of our Lives character Phoenix, female protagonist of the film Phantom of the Paradise, played by Jessica Harper Phoenix Buchanan, a fictional actor and the main antagonist of Paddington 2 Phoenix Jackson, female protagonist of "A Worn Path" by Eudora Welty Organizations Phoenix Foundation (MacGyver) Phoenix Organization, an organization in John Doe Order of the Phoenix (fictional organisation), a secret society in Harry Potter Vessels Phoenix (Star Trek), a spacecraft Film Fushichō (English: Phoenix), a 1947 film by Keisuke Kinoshita The Phoenix (1959 film), by Robert Aldrich Phoenix (1978 film), a jidaigeki film by Kon Ichikawa Phoenix (1998 film), a crime film by Danny Cannon Phoenix (2006 film), a gay-related film by Michael Akers Phoenix (2014 film), a film by Christian Petzold Phoenix (2023 film), a Malayalam film Literature Books Phoenix: The Posthumous Papers of D. H. Lawrence (1885–1930), an anthology of work by D. H. 
Lawrence Phoenix (novel), by Stephen Brust The Phoenix (novel), by Henning Boëtius Phoenix IV: The History of the Videogame Industry, by Leonard Herman Comics Phoenix (manga) (Hi no Tori), by Osamu Tezuka The Phoenix (comics), a weekly British comics anthology Periodicals The Phoenix (magazine), Ireland The Phoenix (newspaper), United States' Phoenix (classics journal), originally The Phoenix, a journal of the Classical Association of Canada Project Phoenix, codename of the aborted BBC Newsbrief magazine List of periodicals named Phoenix Other literature The Phoenix (play), by Thomas Middleton The Phoenix (Old English poem) The Phoenix, a play by Morgan Spurlock The Phoenix, a poem attributed to Lactantius Music Musicians Phoenix (band), a French alternative rock band Transsylvania Phoenix, also known as Phoenix, a Romanian rock band Dave Farrell (born 1977), stage name Phoenix, American bass guitarist in the band Linkin Park Albums Phoenix (Agathodaimon album) Phoenix (Asia album) Phoenix (Vince Bell album) Phoenix, a 2003 EP by Breaking Pangaea Phoenix (Charlotte Cardin album) Phoenix (Carpark North album) The Phoenix (CKY album) Phoenix (Clan of Xymox album) Phoenix (Classic Crime album) Phoenix (Dreamtale album) Phoenix (Emil Bulls album) Phoenix (Everything in Slow Motion album) The Phoenix (EP), an EP by Flipsyde Phoenix (Dan Fogelberg album) Phoenix (Grand Funk Railroad album) Phoenix: The Very Best of InMe, a 2010 greatest hits collection The Phoenix (Lyfe Jennings album) Phoenix (Just Surrender album) Phoenix (Labelle album) The Phoenix (Mastercastle album) Phoenix (Nocturnal Rites album) Phoenix (Rita Ora album) Phoenix, an album by Pink Turns Blue The Phoenix (Raghav album) Phoenix (Warlocks album) Phoenix (EP), by the Warlocks Phoenix (Zebrahead album) Songs List of songs named for the phoenix Television The Phoenix (1982 TV series), an American science fiction series Phoenix (Australian TV series), an Australian police drama Phoenix (South Korean TV series), a 2004 Korean drama Phoenix (anime), a 2004 Japanese series based on the manga "Phoenix", the 1986 premiere episode of The Adventures of the Galaxy Rangers "The Phoenix", a 1995 episode of Lois & Clark: The New Adventures of Superman "Phoenix", a 2003 episode of Smallville "Phoenix" (Breaking Bad), a 2009 episode of Breaking Bad "Phoenix" (NCIS), a 2012 episode of NCIS Video gaming Phoenix (1980 video game), a shoot 'em up arcade game Phoenix (1987 video game), a space combat simulation developed by ERE Informatique Phoenix Games (American company), a video game company Phoenix1, a League of Legends team Other uses in arts and entertainment Atlanta from the Ashes (The Phoenix), an Atlanta, Georgia, monument Phoenix Art Museum, the Southwest United States' largest art museum for visual art Phoenix (chess), a fairy chess piece Phoenix (roller coaster) Phoenix, a Looping Starship ride at Busch Gardens Tampa Bay Business Phoenix company, a commercial entity which has emerged from the collapse of another through insolvency Airlines Phoenix Air, an airline operating from Georgia, United States Phoenix Aviation, a UAE-Kyrgyzstan airline Finance companies The Phoenix Companies, a Hartford-based financial services company Phoenix Finance, a financial company which attempted to enter into Formula One racing Phoenix Fire Office, a former British insurance company Media companies Phoenix (German TV station) Phoenix (St. 
Paul's Churchyard), a historical bookseller in London Phoenix Press Phoenix Games (American company) Phoenix Television, a Hong Kong broadcaster Theatres Phoenix Theatre (disambiguation) Phoenix Theatre, London, a West End theatre Phoenix Concert Theatre, a concert venue and nightclub in Toronto, Ontario, Canada Manufacturers Vehicle manufacturers Phoenix (bicycles), a Chinese company Phoenix (British automobile company), an early 1900s company Phoenix Industries, an American aircraft manufacturer Phoenix Motorcars, a manufacturer of electric vehicles Phoenix Venture Holdings, owner of the MG Rover Group Other manufacturers Phoenix (nuclear technology company), specializing in neutron generator technology Phoenix AG, a German rubber products company Phoenix Beverages, a brewery in Mauritius Phoenix Contact, a manufacturer of industrial automation, interconnection, and interface solutions Phoenix Iron Works (Phoenixville, Pennsylvania), owner of the Phoenix Bridge Company Phoenix Petroleum Philippines, Inc., a Philippine oil and gas company Military AIM-54 Phoenix, a missile BAE Systems Phoenix, an unmanned air vehicle HMHT-302 ("Phoenix"), a United States Marine Corps helicopter squadron Phoenix breakwaters, a set of World War II caissons Phoenix Program, a Vietnam War military operation Project Phoenix (South Africa), a National Defence Force program People Phoenix (given name) Phoenix (surname), multiple people Phoenix (drag queen), American drag performer Dave Farrell (born 1977), American bass guitarist, stage name Phoenix, in the band Linkin Park Nahshon Even-Chaim (born 1971), or "Phoenix", convicted Australian computer hacker Jody Fleisch (born 1980), professional wrestler nicknamed "The Phoenix" Vishnuvardhan (actor) (1950–2009), Indian actor, known as the "Phoenix of Indian cinema" Schools University of Phoenix, United States Phoenix Academy (disambiguation), including several private schools Phoenix High School (disambiguation) Science and technology Astronomy Phoenix Cluster, a galaxy cluster Phoenix (Chinese astronomy) Phoenix (constellation) Phoenix stream, a stream of very old stars found in the constellation Phoenix Dwarf, a galaxy Project Phoenix (SETI), a search for extraterrestrial intelligence Biology Phoenix (chicken) Phoenix (grape) Phoenix (moth) Phoenix (plant), a genus of palms Computing Phoenix (computer), an IBM mainframe at the University of Cambridge Phoenix (tkWWW-based browser), a web browser and HTML editor discontinued in 1995 Phoenix (web framework), a web development framework Phoenix Network Coordinates, used to compute network latency Phoenix Technologies, a BIOS manufacturer Apache Phoenix, a relational database engine Microsoft Phoenix, a compiler framework Mozilla Phoenix, the original name for the Firefox web browser Phoenix pay system, a payroll processing system Vehicles Phoenix (spacecraft), a NASA mission to Mars BAE Systems Phoenix, an unmanned air vehicle EADS Phoenix, a prototype launch vehicle Bristol Phoenix, an aircraft engine Chrysler Phoenix engine, an automotive engine series Dodge Dart Phoenix, an American car produced 1960–1961 Dodge Phoenix, Australian car produced 1960–1973 Pontiac Phoenix, an American car produced 1977–1984 Phoenix Air Phoenix, a Czech glider Other technologies Phoenix (ATC), an air traffic control system Ships , several Royal Navy ships , several ships that sailed for the British East India Company between 1680 and 1821 , several U.S. 
Navy ships Phoenix, involved in the 1688 Siege of Derry , involved in the sea otter trade , the first ship built in Russian America , made one voyage in 1824 carrying convicts to Tasmania; grounded, condemned, and turned into a prison hulk; broken up in 1837 , a steamboat built 1806–1807 , built in France in 1809; captured by the British Royal Navy in 1810; employed as a whaling ship from 1811 to 1829 , a merchant vessel launched in 1810; made one voyage to India for the British East India Company; made three voyages transporting convicts to Australia; wrecked in 1829 , a steamboat that burned on Lake Champlain in 1819; its wreck is a Vermont state historic site , a Nantucket whaling vessel in operation 1821–1858 , a steamship that burned on Lake Michigan in 1847 with the loss of at least 190 lives , a U.S. Coast Survey ship in service from 1845 to 1858 , a Danish ship built in 1929 , which went by the name Phoenix from 1946 to 1948 , a 1955 fireboat operating in San Francisco, California , a rescue vessel used to save migrants, refugees and other people in distress in the Mediterranean Sea Sports Phoenix (sports team), a list of sports teams named after the mythological creature or Phoenix, Arizona Phoenix club (sports), a team that closes and is rebuilt under a new structure and often a new name Phoenix Finance, a Formula One entrant Phoenix Raceway, Avondale, Arizona Phoenix, an annual sports festival at the National Institute of Technology Karnataka Other uses Phoenix (currency), the first currency of modern Greece Phoenix LRT station, Singapore Phoenix codes, radio shorthand used by British police The Phoenix Patrol Challenge, a Scoutcraft competition Phoenix Pay System, a Canadian federal employee payroll system The Phoenix – S K Club, a social club at Harvard College Phoenix National and Literary Society, 1856–1858 precursor of the Irish Republican Brotherhood See also Phoenix Marketcity (disambiguation), a brand of shopping malls in India De Phoenix (disambiguation) La Fenice (The Phoenix), an opera house in Venice, Italy Feniks (disambiguation) Fenix (disambiguation) Phenex, in demonology, a Great Marquis of Hell Phenix (disambiguation) Phönix (disambiguation) Fengcheng (disambiguation), various Chinese locations whose names mean "Phoenix" or "Phoenix City"
23690
https://en.wikipedia.org/wiki/Phosphate
Phosphate
In chemistry, a phosphate is an anion, salt, functional group or ester derived from a phosphoric acid. It most commonly means orthophosphate, a derivative of orthophosphoric acid, phosphoric acid H3PO4. The phosphate or orthophosphate ion PO4^3− is derived from phosphoric acid by the removal of three protons H+. Removal of one proton gives the dihydrogen phosphate ion H2PO4^−, while removal of two protons gives the hydrogen phosphate ion HPO4^2−. These names are also used for salts of those anions, such as ammonium dihydrogen phosphate and trisodium phosphate. In organic chemistry, phosphate or orthophosphate is an organophosphate, an ester of orthophosphoric acid of the form OP(OR)(OR′)(OR″), where one or more hydrogen atoms are replaced by organic groups. An example is trimethyl phosphate, OP(OCH3)3. The term also refers to the trivalent functional group OP(O−)3 in such esters. Phosphates may contain sulfur in place of one or more oxygen atoms (thiophosphates and organothiophosphates). Orthophosphates are especially important among the various phosphates because of their key roles in biochemistry, biogeochemistry, and ecology, and their economic importance for agriculture and industry. The addition and removal of phosphate groups (phosphorylation and dephosphorylation) are key steps in cell metabolism. Orthophosphates can condense to form pyrophosphates. Chemical properties The phosphate ion has a molar mass of 94.97 g/mol, and consists of a central phosphorus atom surrounded by four oxygen atoms in a tetrahedral arrangement. It is the conjugate base of the hydrogen phosphate ion HPO4^2−, which in turn is the conjugate base of the dihydrogen phosphate ion H2PO4^−, which in turn is the conjugate base of orthophosphoric acid, H3PO4. Many phosphates are soluble in water at standard temperature and pressure. The sodium, potassium, rubidium, caesium, and ammonium phosphates are all water-soluble. Most other phosphates are only slightly soluble or are insoluble in water. As a rule, the hydrogen and dihydrogen phosphates are slightly more soluble than the corresponding phosphates. Equilibria in solution In water solution, orthophosphoric acid and its three derived anions coexist according to the dissociation and recombination equilibria below: H3PO4 ⇌ H2PO4^− + H+ (pKa1 ≈ 2.14), H2PO4^− ⇌ HPO4^2− + H+ (pKa2 ≈ 7.20), HPO4^2− ⇌ PO4^3− + H+ (pKa3 ≈ 12.37). Values are at 25 °C and 0 ionic strength. The pKa values are the pH values where the concentration of each species is equal to that of its conjugate base. At pH 1 or lower, the phosphoric acid is practically undissociated. Around pH 4.7 (mid-way between the first two pKa values) the dihydrogen phosphate ion, H2PO4^−, is practically the only species present. Around pH 9.8 (mid-way between the second and third pKa values) the monohydrogen phosphate ion, HPO4^2−, is the only species present. At pH 13 or higher, the acid is completely dissociated as the phosphate ion, PO4^3−. This means that salts of the mono- and di-phosphate ions can be selectively crystallised from aqueous solution by setting the pH value to either 4.7 or 9.8. In effect, H3PO4, H2PO4^− and HPO4^2− behave as separate weak acids because the successive pKa differ by more than 4. Phosphate can form many polymeric ions such as pyrophosphate, P2O7^4−, and triphosphate, P3O10^5−. The various metaphosphate ions (which are usually long linear polymers) have an empirical formula of PO3^− and are found in many compounds. Biochemistry of phosphates In biological systems, phosphorus can be found as free phosphate anions in solution (inorganic phosphate) or bound to organic molecules as various organophosphates. Inorganic phosphate is generally denoted Pi and at physiological (homeostatic) pH primarily consists of a mixture of HPO4^2− and H2PO4^− ions.
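Because the three pKa values are well separated, the fraction of each species at a given pH follows directly from the equilibria above. The short Python sketch below is an illustration only: the function name phosphate_speciation is hypothetical, and the pKa values are the approximate ones quoted above.

# Sketch: fraction of each orthophosphate species as a function of pH,
# using the approximate dissociation constants quoted above
# (pKa1 = 2.14, pKa2 = 7.20, pKa3 = 12.37 at 25 C and zero ionic strength).

def phosphate_speciation(pH, pKa=(2.14, 7.20, 12.37)):
    """Return mole fractions of (H3PO4, H2PO4-, HPO4 2-, PO4 3-) at the given pH."""
    h = 10.0 ** (-pH)                                  # [H+]
    k1, k2, k3 = (10.0 ** (-p) for p in pKa)
    # Concentrations relative to [H3PO4] = 1, from the stepwise equilibria.
    relative = [1.0, k1 / h, k1 * k2 / h ** 2, k1 * k2 * k3 / h ** 3]
    total = sum(relative)
    return [r / total for r in relative]

if __name__ == "__main__":
    labels = ("H3PO4", "H2PO4-", "HPO4 2-", "PO4 3-")
    for pH in (1.0, 4.7, 7.0, 7.4, 9.8, 13.0):
        fractions = phosphate_speciation(pH)
        summary = ", ".join(f"{label} {frac:.1%}" for label, frac in zip(labels, fractions))
        print(f"pH {pH:4.1f}: {summary}")

Run at pH 4.7 and 9.8 it shows the nearly pure H2PO4^− and HPO4^2− regions used for selective crystallisation, and at pH 7.0 and 7.4 it reproduces the cytosol and extracellular-fluid proportions given in the next paragraph.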
At a neutral pH, as in the cytosol (pH = 7.0), the concentrations of orthophosphoric acid and its three anions have ratios set by the equilibria above: [H2PO4^−]/[H3PO4] ≈ 10^(7.0 − 2.14) ≈ 7 × 10^4, [HPO4^2−]/[H2PO4^−] ≈ 10^(7.0 − 7.20) ≈ 0.6, and [PO4^3−]/[HPO4^2−] ≈ 10^(7.0 − 12.37) ≈ 4 × 10^−6. Thus, only H2PO4^− and HPO4^2− ions are present in significant amounts in the cytosol (62% H2PO4^−, 38% HPO4^2−). In extracellular fluid (pH = 7.4), this proportion is inverted (61% HPO4^2−, 39% H2PO4^−). Inorganic phosphate can also be present as pyrophosphate anions P2O7^4−, which give orthophosphate by hydrolysis: P2O7^4− + H2O → 2 HPO4^2−. Organic phosphates are commonly found in the form of esters as nucleotides (e.g. AMP, ADP, and ATP) and in DNA and RNA. Free orthophosphate anions can be released by the hydrolysis of the phosphoanhydride bonds in ATP or ADP. These phosphorylation and dephosphorylation reactions are the immediate storage and source of energy for many metabolic processes. ATP and ADP are often referred to as high-energy phosphates, as are the phosphagens in muscle tissue. Similar reactions exist for the other nucleoside diphosphates and triphosphates. Bones and teeth An important occurrence of phosphates in biological systems is as the structural material of bone and teeth. These structures are made of crystalline calcium phosphate in the form of hydroxyapatite. The hard dense enamel of mammalian teeth may contain fluoroapatite, a hydroxy calcium phosphate where some of the hydroxyl groups have been replaced by fluoride ions. Medical and biological research uses Phosphates are medicinal salts of phosphorus. Some phosphates are used to make urine more acidic, which helps treat many urinary tract infections. Some phosphates are used to avoid the development of calcium stones in the urinary tract. For patients who are unable to get enough phosphorus in their daily diet, phosphates are used as dietary supplements, usually because of certain disorders or diseases. Injectable phosphates can only be handled by qualified health care providers. Plant metabolism Plants take up phosphorus through several pathways: the arbuscular mycorrhizal pathway and the direct uptake pathway. Adverse health effects Hyperphosphatemia, or a high blood level of phosphates, is associated with elevated mortality in the general population. The most common cause of hyperphosphatemia in people, dogs, and cats is kidney failure. In cases of hyperphosphatemia, it is advisable to limit consumption of phosphate-rich foods, such as some meats and dairy items, and of foods with a high phosphate-to-protein ratio, such as soft drinks, fast food, processed foods, condiments, and other products containing phosphate-salt additives. Phosphates induce vascular calcification, and a high concentration of phosphates in blood was found to be a predictor of cardiovascular events. Production Geological occurrence Phosphates are the naturally occurring form of the element phosphorus, found in many phosphate minerals. In mineralogy and geology, phosphate refers to a rock or ore containing phosphate ions. Inorganic phosphates are mined to obtain phosphorus for use in agriculture and industry. The largest global producer and exporter of phosphates is Morocco. Within North America, the largest deposits lie in the Bone Valley region of central Florida, the Soda Springs region of southeastern Idaho, and the coast of North Carolina. Smaller deposits are located in Montana, Tennessee, Georgia, and South Carolina. The small island nation of Nauru and its neighbor Banaba Island, which used to have massive phosphate deposits of the best quality, have been mined excessively.
Rock phosphate can also be found in Egypt, Israel, Palestine, Western Sahara, Navassa Island, Tunisia, Togo, and Jordan, countries that have large phosphate-mining industries. Phosphorite mines are primarily found in: North America: United States, especially Florida, with lesser deposits in North Carolina, Idaho, and Tennessee Africa: Morocco, Algeria, Egypt, Niger, Senegal, Togo, Tunisia, Mauritania Middle East: Saudi Arabia, Jordan, Israel, Syria, Iran and Iraq, at the town of Akashat, near the Jordanian border. Central Asia: Kazakhstan Oceania: Australia, Makatea, Nauru, and Banaba Island In 2007, at the current rate of consumption, the supply of phosphorus was estimated to run out in 345 years. However, some scientists thought that a "peak phosphorus" would occur in 30 years and Dana Cordell from Institute for Sustainable Futures said that at "current rates, reserves will be depleted in the next 50 to 100 years". Reserves refer to the amount assumed recoverable at current market prices. In 2012 the USGS estimated world reserves at 71 billion tons, while 0.19 billion tons were mined globally in 2011. Phosphorus comprises 0.1% by mass of the average rock (while, for perspective, its typical concentration in vegetation is 0.03% to 0.2%), and consequently there are quadrillions of tons of phosphorus in Earth's 3×1019-ton crust, albeit at predominantly lower concentration than the deposits counted as reserves, which are inventoried and cheaper to extract. If it is assumed that the phosphate minerals in phosphate rock are mainly hydroxyapatite and fluoroapatite, phosphate minerals contain roughly 18.5% phosphorus by weight. If phosphate rock contains around 20% of these minerals, the average phosphate rock has roughly 3.7% phosphorus by weight. Some phosphate rock deposits, such as Mulberry in Florida, are notable for their inclusion of significant quantities of radioactive uranium isotopes. This is a concern because radioactivity can be released into surface waters from application of the resulting phosphate fertilizer. In December 2012, Cominco Resources announced an updated JORC compliant resource of their Hinda project in Congo-Brazzaville of 531 million tons, making it the largest measured and indicated phosphate deposit in the world. Around 2018, Norway discovered phosphate deposits almost equal to those in the rest of Earth combined. In July 2022 China announced quotas on phosphate exportation. The largest importers in millions of metric tons of phosphate are Brazil 3.2, India 2.9 and the USA 1.6. Mining The three principal phosphate producer countries (China, Morocco and the United States) account for about 70% of world production. Ecology In ecological terms, because of its important role in biological systems, phosphate is a highly sought after resource. Once used, it is often a limiting nutrient in environments, and its availability may govern the rate of growth of organisms. This is generally true of freshwater environments, whereas nitrogen is more often the limiting nutrient in marine (seawater) environments. Addition of high levels of phosphate to environments and to micro-environments in which it is typically rare can have significant ecological consequences. For example, blooms in the populations of some organisms at the expense of others, and the collapse of populations deprived of resources such as oxygen (see eutrophication) can occur. 
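The phosphorus-content figures above can be checked with a few lines of arithmetic. The sketch below is only an estimate under the stated assumptions: that the phosphate minerals are mainly hydroxyapatite Ca5(PO4)3(OH) and fluorapatite Ca5(PO4)3F, and that standard atomic masses apply.

# Sketch: back-of-the-envelope check of the phosphorus-content figures above,
# assuming the phosphate minerals are hydroxyapatite Ca5(PO4)3(OH) and
# fluorapatite Ca5(PO4)3F, and using standard atomic masses.

ATOMIC_MASS = {"Ca": 40.078, "P": 30.974, "O": 15.999, "H": 1.008, "F": 18.998}

def molar_mass(composition):
    """Molar mass (g/mol) of a composition given as {element: atom count}."""
    return sum(ATOMIC_MASS[element] * count for element, count in composition.items())

MINERALS = {
    "hydroxyapatite Ca5(PO4)3(OH)": {"Ca": 5, "P": 3, "O": 13, "H": 1},
    "fluorapatite Ca5(PO4)3F":      {"Ca": 5, "P": 3, "O": 12, "F": 1},
}

if __name__ == "__main__":
    for name, composition in MINERALS.items():
        p_fraction = ATOMIC_MASS["P"] * composition["P"] / molar_mass(composition)
        print(f"{name}: {p_fraction:.1%} phosphorus by mass")
    # A rock that is about 20% apatite minerals therefore holds roughly:
    print(f"phosphate rock at 20% apatite: about {0.20 * 0.185:.1%} phosphorus by mass")

This reproduces the roughly 18.5% phosphorus by weight for the pure minerals and about 3.7% for rock that is 20% apatite.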
In the context of pollution, phosphates are one component of total dissolved solids, a major indicator of water quality, but not all phosphorus is in a molecular form that algae can break down and consume. Calcium hydroxyapatite and calcite precipitates can be found around bacteria in alluvial topsoil. As clay minerals promote biomineralization, the presence of bacteria and clay minerals resulted in calcium hydroxyapatite and calcite precipitates. Phosphate deposits can contain significant amounts of naturally occurring heavy metals. Mining operations processing phosphate rock can leave tailings piles containing elevated levels of cadmium, lead, nickel, copper, chromium, and uranium. Unless carefully managed, these waste products can leach heavy metals into groundwater or nearby estuaries. Uptake of these substances by plants and marine life can lead to concentration of toxic heavy metals in food products. See also Diammonium phosphate - (NH4)2HPO4 Disodium phosphate – Na2HPO4 Fertilizer Hypophosphite – Metaphosphate – Monosodium phosphate – NaH2PO4 Organophosphorus compounds Ouled Abdoun Basin Phosphate – OP(OR)3, such as triphenyl phosphate Phosphate conversion coating Phosphate soda, a soda fountain beverage Phosphinate – OP(OR)R2 Phosphine – PR3 Phosphine oxide – OPR3 Phosphinite – P(OR)R2 Phosphite – P(OR)3 Phosphogypsum Phosphonate – OP(OR)2R Phosphonite – P(OR)2R Phosphorylation Polyphosphate – Pyrophosphate – Sodium tripolyphosphate – Na5P3O10 References External links US Minerals Databrowser provides data graphics covering consumption, production, imports, exports and price for phosphate and 86 other minerals Phosphate: analyte monograph – The Association for Clinical Biochemistry and Laboratory Medicine Functional groups Phosphorus oxyanions Industrial minerals Concrete admixtures Phosphorus(V) compounds
23692
https://en.wikipedia.org/wiki/Prime%20number%20theorem
Prime number theorem
In mathematics, the prime number theorem (PNT) describes the asymptotic distribution of the prime numbers among the positive integers. It formalizes the intuitive idea that primes become less common as they become larger by precisely quantifying the rate at which this occurs. The theorem was proved independently by Jacques Hadamard and Charles Jean de la Vallée Poussin in 1896 using ideas introduced by Bernhard Riemann (in particular, the Riemann zeta function). The first such distribution found is π(N) ~ N/log(N), where π(N) is the prime-counting function (the number of primes less than or equal to N) and log(N) is the natural logarithm of N. This means that for large enough N, the probability that a random integer not greater than N is prime is very close to 1/log(N). Consequently, a random integer with at most 2n digits (for large enough n) is about half as likely to be prime as a random integer with at most n digits. For example, among the positive integers of at most 1000 digits, about one in 2300 is prime (log(10^1000) ≈ 2302.6), whereas among positive integers of at most 2000 digits, about one in 4600 is prime (log(10^2000) ≈ 4605.2). In other words, the average gap between consecutive prime numbers among the first N integers is roughly log(N). Statement Let π(x) be the prime-counting function defined to be the number of primes less than or equal to x, for any real number x. For example, π(10) = 4 because there are four prime numbers (2, 3, 5 and 7) less than or equal to 10. The prime number theorem then states that x/log(x) is a good approximation to π(x) (where log here means the natural logarithm), in the sense that the limit of the quotient of the two functions π(x) and x/log(x) as x increases without bound is 1: lim(x→∞) π(x) / (x/log x) = 1, known as the asymptotic law of distribution of prime numbers. Using asymptotic notation this result can be restated as π(x) ~ x/log x. This notation (and the theorem) does not say anything about the limit of the difference of the two functions as x increases without bound. Instead, the theorem states that x/log x approximates π(x) in the sense that the relative error of this approximation approaches 0 as x increases without bound. The prime number theorem is equivalent to the statement that the nth prime number p_n satisfies p_n ~ n log(n), meaning, again, that the relative error of this approximation approaches 0 as n increases without bound. For example, at a prime index on the order of 10^17 the approximation n log(n) still has a relative error of about 6.4%. On the other hand, the asymptotic relations π(x) ~ x/log x and p_n ~ n log n are logically equivalent. As outlined below, the prime number theorem is also equivalent to lim(x→∞) ϑ(x)/x = lim(x→∞) ψ(x)/x = 1, where ϑ(x) and ψ(x) are the first and the second Chebyshev functions respectively, and to lim(x→∞) M(x)/x = 0, where M(x) is the Mertens function. History of the proof of the asymptotic law of prime numbers Based on the tables by Anton Felkel and Jurij Vega, Adrien-Marie Legendre conjectured in 1797 or 1798 that π(a) is approximated by the function a/(A log(a) + B), where A and B are unspecified constants. In the second edition of his book on number theory (1808) he then made a more precise conjecture, with A = 1 and B = −1.08366. Carl Friedrich Gauss considered the same question at age 15 or 16 "in the year 1792 or 1793", according to his own recollection in 1849. In 1838 Peter Gustav Lejeune Dirichlet came up with his own approximating function, the logarithmic integral li(x) (under the slightly different form of a series, which he communicated to Gauss). Both Legendre's and Dirichlet's formulas imply the same conjectured asymptotic equivalence of π(x) and x/log(x) stated above, although it turned out that Dirichlet's approximation is considerably better if one considers the differences instead of quotients.
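A short numerical check of the asymptotic law is easy to run. The sketch below is only a minimal illustration using a basic sieve of Eratosthenes (the helper name primes_up_to is arbitrary); it compares π(x) with x/log x and prints their ratio.

# Compare the prime-counting function pi(x) with the approximation x/log(x).
import math

def primes_up_to(n):
    """Return all primes <= n with a basic sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p:n + 1:p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

for x in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
    pi_x = len(primes_up_to(x))
    approx = x / math.log(x)
    print(f"x = {x:>9,}   pi(x) = {pi_x:>7,}   x/log x = {approx:>11,.1f}   ratio = {pi_x / approx:.4f}")

The ratio column drifts toward 1, but only slowly, which is consistent with the error estimates discussed later in the article.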
In two papers from 1848 and 1850, the Russian mathematician Pafnuty Chebyshev attempted to prove the asymptotic law of distribution of prime numbers. His work is notable for the use of the zeta function , for real values of the argument "", as in works of Leonhard Euler, as early as 1737. Chebyshev's papers predated Riemann's celebrated memoir of 1859, and he succeeded in proving a slightly weaker form of the asymptotic law, namely, that if the limit as goes to infinity of exists at all, then it is necessarily equal to one. He was able to prove unconditionally that this ratio is bounded above and below by two explicitly given constants near 1, for all sufficiently large . Although Chebyshev's paper did not prove the Prime Number Theorem, his estimates for were strong enough for him to prove Bertrand's postulate that there exists a prime number between and for any integer . An important paper concerning the distribution of prime numbers was Riemann's 1859 memoir "On the Number of Primes Less Than a Given Magnitude", the only paper he ever wrote on the subject. Riemann introduced new ideas into the subject, chiefly that the distribution of prime numbers is intimately connected with the zeros of the analytically extended Riemann zeta function of a complex variable. In particular, it is in this paper that the idea to apply methods of complex analysis to the study of the real function originates. Extending Riemann's ideas, two proofs of the asymptotic law of the distribution of prime numbers were found independently by Jacques Hadamard and Charles Jean de la Vallée Poussin and appeared in the same year (1896). Both proofs used methods from complex analysis, establishing as a main step of the proof that the Riemann zeta function is nonzero for all complex values of the variable that have the form with . During the 20th century, the theorem of Hadamard and de la Vallée Poussin also became known as the Prime Number Theorem. Several different proofs of it were found, including the "elementary" proofs of Atle Selberg and Paul Erdős (1949). Hadamard's and de la Vallée Poussin's original proofs are long and elaborate; later proofs introduced various simplifications through the use of Tauberian theorems but remained difficult to digest. A short proof was discovered in 1980 by the American mathematician Donald J. Newman. Newman's proof is arguably the simplest known proof of the theorem, although it is non-elementary in the sense that it uses Cauchy's integral theorem from complex analysis. Proof sketch Here is a sketch of the proof referred to in one of Terence Tao's lectures. Like most proofs of the PNT, it starts out by reformulating the problem in terms of a less intuitive, but better-behaved, prime-counting function. The idea is to count the primes (or a related set such as the set of prime powers) with weights to arrive at a function with smoother asymptotic behavior. The most common such generalized counting function is the Chebyshev function , defined by This is sometimes written as where is the von Mangoldt function, namely It is now relatively easy to check that the PNT is equivalent to the claim that Indeed, this follows from the easy estimates and (using big notation) for any , The next step is to find a useful representation for . Let be the Riemann zeta function. 
It can be shown that is related to the von Mangoldt function , and hence to , via the relation A delicate analysis of this equation and related properties of the zeta function, using the Mellin transform and Perron's formula, shows that for non-integer the equation holds, where the sum is over all zeros (trivial and nontrivial) of the zeta function. This striking formula is one of the so-called explicit formulas of number theory, and is already suggestive of the result we wish to prove, since the term (claimed to be the correct asymptotic order of ) appears on the right-hand side, followed by (presumably) lower-order asymptotic terms. The next step in the proof involves a study of the zeros of the zeta function. The trivial zeros −2, −4, −6, −8, ... can be handled separately: which vanishes for large . The nontrivial zeros, namely those on the critical strip , can potentially be of an asymptotic order comparable to the main term if , so we need to show that all zeros have real part strictly less than 1. Non-vanishing on To do this, we take for granted that is meromorphic in the half-plane , and is analytic there except for a simple pole at , and that there is a product formula for . This product formula follows from the existence of unique prime factorization of integers, and shows that is never zero in this region, so that its logarithm is defined there and Write ; then Now observe the identity so that for all . Suppose now that . Certainly is not zero, since has a simple pole at . Suppose that and let tend to 1 from above. Since has a simple pole at and stays analytic, the left hand side in the previous inequality tends to 0, a contradiction. Finally, we can conclude that the PNT is heuristically true. To rigorously complete the proof there are still serious technicalities to overcome, due to the fact that the summation over zeta zeros in the explicit formula for does not converge absolutely but only conditionally and in a "principal value" sense. There are several ways around this problem but many of them require rather delicate complex-analytic estimates. Edwards's book provides the details. Another method is to use Ikehara's Tauberian theorem, though this theorem is itself quite hard to prove. D.J. Newman observed that the full strength of Ikehara's theorem is not needed for the prime number theorem, and one can get away with a special case that is much easier to prove. Newman's proof of the prime number theorem D. J. Newman gives a quick proof of the prime number theorem (PNT). The proof is "non-elementary" by virtue of relying on complex analysis, but uses only elementary techniques from a first course in the subject: Cauchy's integral formula, Cauchy's integral theorem and estimates of complex integrals. Here is a brief sketch of this proof. See for the complete details. The proof uses the same preliminaries as in the previous section except instead of the function , the Chebyshev function is used, which is obtained by dropping some of the terms from the series for . It is easy to show that the PNT is equivalent to . Likewise instead of the function is used, which is obtained by dropping some terms in the series for . The functions and differ by a function holomorphic on . Since, as was shown in the previous section, has no zeroes on the line , has no singularities on . One further piece of information needed in Newman's proof, and which is the key to the estimates in his simple method, is that is bounded. This is proved using an ingenious and easy method due to Chebyshev. 
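Two ingredients used above, the reformulation in terms of the Chebyshev function ψ and the elementary trigonometric inequality behind the non-vanishing argument, can be probed numerically. The following sketch is only an illustration (the helper names are arbitrary): it evaluates the von Mangoldt function by trial division, checks that ψ(x)/x approaches 1, and verifies that 3 + 4 cos φ + cos 2φ = 2(1 + cos φ)² is non-negative for all φ.

# Chebyshev's function psi(x) = sum of Lambda(n) for n <= x, and the 3-4-1 inequality.
import math

def von_mangoldt(n):
    """Lambda(n) = log p if n is a power of a single prime p, else 0."""
    if n < 2:
        return 0.0
    p = 2
    while p * p <= n:
        if n % p == 0:
            break
        p += 1
    else:
        return math.log(n)          # no prime factor <= sqrt(n), so n itself is prime
    m = n
    while m % p == 0:
        m //= p
    return math.log(p) if m == 1 else 0.0

def psi(x):
    return sum(von_mangoldt(n) for n in range(2, int(x) + 1))

for x in (100, 1_000, 10_000, 50_000):   # the naive loop takes a few seconds at the top end
    print(f"x = {x:>7,}   psi(x)/x = {psi(x) / x:.4f}")

# The identity 3 + 4*cos(phi) + cos(2*phi) = 2*(1 + cos(phi))**2 >= 0 for all real phi.
for k in range(0, 360, 5):
    phi = math.radians(k)
    value = 3 + 4 * math.cos(phi) + math.cos(2 * phi)
    assert abs(value - 2 * (1 + math.cos(phi)) ** 2) < 1e-12 and value >= -1e-12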
Integration by parts shows how and are related. For , Newman's method proves the PNT by showing the integral converges, and therefore the integrand goes to zero as , which is the PNT. In general, the convergence of the improper integral does not imply that the integrand goes to zero at infinity, since it may oscillate, but since is increasing, it is easy to show in this case. To show the convergence of , for let and where then which is equal to a function holomorphic on the line . The convergence of the integral , and thus the PNT, is proved by showing that . This involves change of order of limits since it can be written and therefore classified as a Tauberian theorem. The difference is expressed using Cauchy's integral formula and then shown to be small for large by estimating the integrand. Fix and such that is holomorphic in the region where , and let be the boundary of this region. Since 0 is in the interior of the region, Cauchy's integral formula gives where is the factor introduced by Newman, which does not change the integral since is entire and . To estimate the integral, break the contour into two parts, where and . Then where . Since , and hence , is bounded, let be an upper bound for the absolute value of . This bound together with the estimate for gives that the first integral in absolute value is . The integrand over in the second integral is entire, so by Cauchy's integral theorem, the contour can be modified to a semicircle of radius in the left half-plane without changing the integral, and the same argument as for the first integral gives the absolute value of the second integral is . Finally, letting , the third integral goes to zero since and hence goes to zero on the contour. Combining the two estimates and the limit get This holds for any so , and the PNT follows. Prime-counting function in terms of the logarithmic integral In a handwritten note on a reprint of his 1838 paper "", which he mailed to Gauss, Dirichlet conjectured (under a slightly different form appealing to a series rather than an integral) that an even better approximation to is given by the offset logarithmic integral function , defined by Indeed, this integral is strongly suggestive of the notion that the "density" of primes around should be . This function is related to the logarithm by the asymptotic expansion So, the prime number theorem can also be written as . In fact, in another paper in 1899 de la Vallée Poussin proved that for some positive constant , where is the big notation. This has been improved to where . In 2016, Trudgian proved an explicit upper bound for the difference between and : for . The connection between the Riemann zeta function and is one reason the Riemann hypothesis has considerable importance in number theory: if established, it would yield a far better estimate of the error involved in the prime number theorem than is available today. More specifically, Helge von Koch showed in 1901 that if the Riemann hypothesis is true, the error term in the above relation can be improved to (this last estimate is in fact equivalent to the Riemann hypothesis). The constant involved in the big notation was estimated in 1976 by Lowell Schoenfeld: assuming the Riemann hypothesis, for all . He also derived a similar bound for the Chebyshev prime-counting function : for all  . This latter bound has been shown to express a variance to mean power law (when regarded as a random function over the integers) and  noise and to also correspond to the Tweedie compound Poisson distribution. 
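The offset logarithmic integral introduced above can be approximated with any quadrature rule. The following sketch is a minimal illustration (composite Simpson's rule with a crude step count, and arbitrarily named helpers); it compares π(x), Li(x) and x/log x and makes Dirichlet's observation about the quality of Li(x) visible even at modest x.

# Offset logarithmic integral Li(x) = integral of dt/log(t) from 2 to x, via Simpson's rule,
# compared with pi(x) and x/log(x).  Increase `steps` for more accurate digits.
import math

def offset_li(x, steps=200_000):
    if steps % 2:
        steps += 1
    h = (x - 2) / steps
    integrand = lambda t: 1.0 / math.log(t)
    total = integrand(2) + integrand(x)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * integrand(2 + i * h)
    return total * h / 3

def prime_count(x):
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p:x + 1:p] = bytearray(len(range(p * p, x + 1, p)))
    return sum(sieve)

for x in (1_000, 100_000, 1_000_000):
    print(f"x = {x:>9,}   pi(x) = {prime_count(x):>7,}   Li(x) = {offset_li(x):>10,.1f}   x/log x = {x / math.log(x):>10,.1f}")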
(The Tweedie distributions represent a family of scale invariant distributions that serve as foci of convergence for a generalization of the central limit theorem.) The logarithmic integral is larger than for "small" values of . This is because it is (in some sense) counting not primes, but prime powers, where a power of a prime is counted as of a prime. This suggests that should usually be larger than by roughly and in particular should always be larger than . However, in 1914, J. E. Littlewood proved that changes sign infinitely often. The first value of where exceeds is probably around ; see the article on Skewes' number for more details. (On the other hand, the offset logarithmic integral is smaller than already for ; indeed, , while .) Elementary proofs In the first half of the twentieth century, some mathematicians (notably G. H. Hardy) believed that there exists a hierarchy of proof methods in mathematics depending on what sorts of numbers (integers, reals, complex) a proof requires, and that the prime number theorem (PNT) is a "deep" theorem by virtue of requiring complex analysis. This belief was somewhat shaken by a proof of the PNT based on Wiener's tauberian theorem, though Wiener's proof ultimately relies on properties of the Riemann zeta function on the line , where complex analysis must be used. In March 1948, Atle Selberg established, by "elementary" means, the asymptotic formula where for primes . By July of that year, Selberg and Paul Erdős had each obtained elementary proofs of the PNT, both using Selberg's asymptotic formula as a starting point. These proofs effectively laid to rest the notion that the PNT was "deep" in that sense, and showed that technically "elementary" methods were more powerful than had been believed to be the case. On the history of the elementary proofs of the PNT, including the Erdős–Selberg priority dispute, see an article by Dorian Goldfeld. There is some debate about the significance of Erdős and Selberg's result. There is no rigorous and widely accepted definition of the notion of elementary proof in number theory, so it is not clear exactly in what sense their proof is "elementary". Although it does not use complex analysis, it is in fact much more technical than the standard proof of PNT. One possible definition of an "elementary" proof is "one that can be carried out in first-order Peano arithmetic." There are number-theoretic statements (for example, the Paris–Harrington theorem) provable using second order but not first-order methods, but such theorems are rare to date. Erdős and Selberg's proof can certainly be formalized in Peano arithmetic, and in 1994, Charalambos Cornaros and Costas Dimitracopoulos proved that their proof can be formalized in a very weak fragment of PA, namely . However, this does not address the question of whether or not the standard proof of PNT can be formalized in PA. A more recent "elementary" proof of the prime number theorem uses ergodic theory, due to Florian Richter. The prime number theorem is obtained there in an equivalent form that the Cesaro sum of the values of the Liouville function is zero. The Liouville function is where is the number of prime factors, with multiplicity, of the integer . Bergelson and Richter (2022) then obtain this form of the prime number theorem from an ergodic theorem which they prove: Let be a compact metric space, a continuous self-map of , and a -invariant Borel probability measure for which is uniquely ergodic. 

Then, for every , This ergodic theorem can also be used to give "soft" proofs of results related to the prime number theorem, such as the Pillai–Selberg theorem and Erdős–Delange theorem. Computer verifications In 2005, Avigad et al. employed the Isabelle theorem prover to devise a computer-verified variant of the Erdős–Selberg proof of the PNT. This was the first machine-verified proof of the PNT. Avigad chose to formalize the Erdős–Selberg proof rather than an analytic one because while Isabelle's library at the time could implement the notions of limit, derivative, and transcendental function, it had almost no theory of integration to speak of. In 2009, John Harrison employed HOL Light to formalize a proof employing complex analysis. By developing the necessary analytic machinery, including the Cauchy integral formula, Harrison was able to formalize "a direct, modern and elegant proof instead of the more involved 'elementary' Erdős–Selberg argument". Prime number theorem for arithmetic progressions Let π(x; q, a) denote the number of primes in the arithmetic progression a, a + q, a + 2q, a + 3q, ... that are less than x. Dirichlet and Legendre conjectured, and de la Vallée Poussin proved, that if a and q are coprime, then π(x; q, a) ~ Li(x)/φ(q), where φ is Euler's totient function. In other words, the primes are distributed evenly among the residue classes a modulo q with gcd(a, q) = 1. This is stronger than Dirichlet's theorem on arithmetic progressions (which only states that there is an infinity of primes in each class) and can be proved using similar methods used by Newman for his proof of the prime number theorem. The Siegel–Walfisz theorem gives a good estimate for the distribution of primes in residue classes. Bennett et al. proved the following estimate that has explicit constants and (Theorem 1.3): Let be an integer and let be an integer that is coprime to . Then there are positive constants and such that where and Prime number race Although we have in particular π(x; 4, 1) ~ π(x; 4, 3), empirically the primes congruent to 3 are more numerous and are nearly always ahead in this "prime number race"; the first reversal occurs at . However Littlewood showed in 1914 that there are infinitely many sign changes for the function π(x; 4, 3) − π(x; 4, 1), so the lead in the race switches back and forth infinitely many times. The phenomenon that π(x; 4, 3) is ahead most of the time is called Chebyshev's bias. The prime number race generalizes to other moduli and is the subject of much research; Pál Turán asked whether it is always the case that π(x; c, a) and π(x; c, b) change places when a and b are coprime to c. Granville and Martin give a thorough exposition and survey. Another example is the distribution of the last digit of prime numbers. Except for 2 and 5, all prime numbers end in 1, 3, 7, or 9. Dirichlet's theorem states that asymptotically, 25% of all primes end in each of these four digits. However, empirical evidence shows that the number of primes that end in 3 or 7 less than n tends to be slightly bigger than the number of primes that end in 1 or 9 less than n (a generalization of Chebyshev's bias). This follows from the fact that 1 and 9 are quadratic residues modulo 10, while 3 and 7 are quadratic nonresidues modulo 10. Non-asymptotic bounds on the prime-counting function The prime number theorem is an asymptotic result. It gives an ineffective bound on π(x) as a direct consequence of the definition of the limit: for all ε > 0, there is an S such that for all x > S, (1 − ε) x/log x < π(x) < (1 + ε) x/log x. However, better bounds on π(x) are known, for instance Pierre Dusart's The first inequality holds for all and the second one for .
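The race described above between the residue classes 1 and 3 modulo 4 can be watched directly. The following sketch is only an illustration (the helper names are arbitrary); it tallies π(x; 4, 1) and π(x; 4, 3) with a sieve, and the class of 3 is typically, though not always, ahead.

# The "prime number race": primes congruent to 1 versus 3 modulo 4.
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p:n + 1:p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

for x in (1_000, 10_000, 100_000, 1_000_000):
    race = {1: 0, 3: 0}
    for p in primes_up_to(x):
        if p % 4 in race:
            race[p % 4] += 1
    print(f"x = {x:>9,}   pi(x;4,1) = {race[1]:>6,}   pi(x;4,3) = {race[3]:>6,}   lead of 3: {race[3] - race[1]:+d}")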
A weaker but sometimes useful bound for is In Pierre Dusart's thesis there are stronger versions of this type of inequality that are valid for larger . Later in 2010, Dusart proved: The proof by de la Vallée Poussin implies the following: For every , there is an such that for all , Approximations for the th prime number As a consequence of the prime number theorem, one gets an asymptotic expression for the th prime number, denoted by : A better approximation is Again considering the th prime number , this gives an estimate of ; the first 5 digits match and relative error is about 0.00005%. Rosser's theorem states that This can be improved by the following pair of bounds: Table of , , and The table compares exact values of to the two approximations and . The last column, , is the average prime gap below . {| class="wikitable" style="text-align: right" ! ! ! ! ! % error ! % error ! |- | 10 | 4 | 0 | 2 |8.22% |42.606% | 2.500 |- | 102 | 25 | 3 | 5 |14.06% |18.597% | 4.000 |- | 103 | 168 | 23 | 10 |14.85% |5.561% | 5.952 |- | 104 | 1,229 | 143 | 17 |12.37% |1.384% | 8.137 |- | 105 | 9,592 | 906 | 38 |9.91% |0.393% | 10.425 |- | 106 | 78,498 | 6,116 | 130 |8.11% |0.164% | 12.739 |- | 107 | 664,579 | 44,158 | 339 |6.87% |0.051% | 15.047 |- | 108 | 5,761,455 | 332,774 | 754 |5.94% |0.013% | 17.357 |- | 109 | 50,847,534 | 2,592,592 | 1,701 |5.23% |3.34 % | 19.667 |- | 1010 | 455,052,511 | 20,758,029 | 3,104 |4.66% |6.82 % | 21.975 |- | 1011 | 4,118,054,813 | 169,923,159 | 11,588 |4.21% |2.81 % | 24.283 |- | 1012 | 37,607,912,018 | 1,416,705,193 | 38,263 |3.83% |1.02 % | 26.590 |- | 1013 | 346,065,536,839 | 11,992,858,452 | 108,971 |3.52% |3.14 % | 28.896 |- | 1014 | 3,204,941,750,802 | 102,838,308,636 | 314,890 |3.26% |9.82 % | 31.202 |- | 1015 | 29,844,570,422,669 | 891,604,962,452 | 1,052,619 |3.03% |3.52 % | 33.507 |- | 1016 | 279,238,341,033,925 | 7,804,289,844,393 | 3,214,632 |2.83% |1.15 % | 35.812 |- | 1017 | 2,623,557,157,654,233 | 68,883,734,693,928 | 7,956,589 |2.66% |3.03 % | 38.116 |- | 1018 | 24,739,954,287,740,860 | 612,483,070,893,536 | 21,949,555 |2.51% |8.87 % | 40.420 |- | 1019 | 234,057,667,276,344,607 | 5,481,624,169,369,961 | 99,877,775 |2.36% |4.26 % | 42.725 |- | 1020 | 2,220,819,602,560,918,840 | 49,347,193,044,659,702 | 222,744,644 |2.24% |1.01 % | 45.028 |- | 1021 | 21,127,269,486,018,731,928 | 446,579,871,578,168,707 | 597,394,254 |2.13% |2.82 % | 47.332 |- | 1022 | 201,467,286,689,315,906,290 | 4,060,704,006,019,620,994 | 1,932,355,208 |2.03% |9.59 % | 49.636 |- | 1023 | 1,925,320,391,606,803,968,923 | 37,083,513,766,578,631,309 | 7,250,186,216 |1.94% |3.76 % | 51.939 |- | 1024 | 18,435,599,767,349,200,867,866 | 339,996,354,713,708,049,069 | 17,146,907,278 |1.86% |9.31 % | 54.243 |- | 1025 | 176,846,309,399,143,769,411,680 | 3,128,516,637,843,038,351,228 | 55,160,980,939 |1.78% |3.21 % | 56.546 |- | 1026 | 1,699,246,750,872,437,141,327,603 | 28,883,358,936,853,188,823,261 | 155,891,678,121 |1.71% |9.17 % | 58.850 |- | 1027 | 16,352,460,426,841,680,446,427,399 | 267,479,615,610,131,274,163,365 | 508,666,658,006 |1.64% |3.11 % | 61.153 |- | 1028 | 157,589,269,275,973,410,412,739,598 | 2,484,097,167,669,186,251,622,127 | 1,427,745,660,374 |1.58% |9.05 % | 63.456 |- | 1029 | 1,520,698,109,714,272,166,094,258,063 | 23,130,930,737,541,725,917,951,446 | 4,551,193,622,464 |1.53% |2.99 % | 65.759 |} The value for was originally computed assuming the Riemann hypothesis; it has since been verified unconditionally. 
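Both approximations for the nth prime quoted above are easy to tabulate. The sketch below is a minimal illustration (the helper name is arbitrary, and the sieve bound is simply a deliberately generous estimate, not a statement from the article).

# The n-th prime p_n versus n*log(n) and the sharper n*(log(n) + log(log(n)) - 1).
import math

def nth_prime(n):
    """Return the n-th prime (1-indexed) by sieving up to a generous bound."""
    bound = 15 if n < 6 else int(n * (math.log(n) + math.log(math.log(n)) + 1))
    sieve = bytearray([1]) * (bound + 1)
    sieve[0] = sieve[1] = 0
    count = 0
    for i in range(2, bound + 1):
        if sieve[i]:
            count += 1
            if count == n:
                return i
            sieve[i * i:bound + 1:i] = bytearray(len(range(i * i, bound + 1, i)))
    raise ValueError("sieve bound too small")

for n in (100, 10_000, 1_000_000):
    p = nth_prime(n)
    crude = n * math.log(n)
    sharper = n * (math.log(n) + math.log(math.log(n)) - 1)
    print(f"n = {n:>9,}   p_n = {p:>10,}   n log n = {crude:>12,.0f}   sharper = {sharper:>12,.0f}")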
Analogue for irreducible polynomials over a finite field There is an analogue of the prime number theorem that describes the "distribution" of irreducible polynomials over a finite field; the form it takes is strikingly similar to the case of the classical prime number theorem. To state it precisely, let be the finite field with elements, for some fixed , and let be the number of monic irreducible polynomials over whose degree is equal to . That is, we are looking at polynomials with coefficients chosen from , which cannot be written as products of polynomials of smaller degree. In this setting, these polynomials play the role of the prime numbers, since all other monic polynomials are built up of products of them. One can then prove that If we make the substitution , then the right hand side is just which makes the analogy clearer. Since there are precisely monic polynomials of degree (including the reducible ones), this can be rephrased as follows: if a monic polynomial of degree is selected randomly, then the probability of it being irreducible is about . One can even prove an analogue of the Riemann hypothesis, namely that The proofs of these statements are far simpler than in the classical case. It involves a short, combinatorial argument, summarised as follows: every element of the degree extension of is a root of some irreducible polynomial whose degree divides ; by counting these roots in two different ways one establishes that where the sum is over all divisors of . Möbius inversion then yields where is the Möbius function. (This formula was known to Gauss.) The main term occurs for , and it is not difficult to bound the remaining terms. The "Riemann hypothesis" statement depends on the fact that the largest proper divisor of can be no larger than . See also Abstract analytic number theory for information about generalizations of the theorem. Landau prime ideal theorem for a generalization to prime ideals in algebraic number fields. Riemann hypothesis Citations References External links Table of Primes by Anton Felkel. Short video visualizing the Prime Number Theorem. Prime formulas and Prime number theorem at MathWorld. How Many Primes Are There? and The Gaps between Primes by Chris Caldwell, University of Tennessee at Martin. Tables of prime-counting functions by Tomás Oliveira e Silva Eberl, Manuel and Paulson, L. C. The Prime Number Theorem (Formal proof development in Isabelle/HOL, Archive of Formal Proofs) The Prime Number Theorem: the "elementary" proof − An exposition of the elementary proof of the Prime Number Theorem of Atle Selberg and Paul Erdős at www.dimostriamogoldbach.it/en/ Logarithms Theorems about prime numbers Theorems in analytic number theory
23693
https://en.wikipedia.org/wiki/Conflict%20of%20laws
Conflict of laws
Conflict of laws (also called private international law) is the set of rules or laws a jurisdiction applies to a case, transaction, or other occurrence that has connections to more than one jurisdiction. This body of law deals with three broad topics: jurisdiction, rules regarding when it is appropriate for a court to hear such a case; foreign judgments, dealing with the rules by which a court in one jurisdiction mandates compliance with a ruling of a court in another jurisdiction; and choice of law, which addresses the question of which substantive laws will be applied in such a case. These issues can arise in any private-law context, but they are especially prevalent in contract law and tort law. Scope and terminology The term conflict of laws is primarily used in the United States and Canada, though it has also come into use in the United Kingdom. Elsewhere, the term private international law is commonly used. Some scholars from countries that use conflict of laws consider the term private international law confusing because this body of law does not consist of laws that apply internationally, but rather is solely composed of domestic laws; the calculus only includes international law when the nation has treaty obligations (and even then, only to the extent that domestic law renders the treaty obligations enforceable). The term private international law comes from the private law/public law dichotomy in civil law systems. In this form of legal system, the term private international law does not imply an agreed upon international legal corpus, but rather refers to those portions of domestic private law that apply to international issues. Importantly, while conflict of laws generally deals with disputes of an international nature, the applicable law itself is domestic law. This is because, unlike public international law (better known simply as international law), conflict of laws does not regulate the relation between countries but rather how individual countries regulate internally the affairs of individuals with connections to more than one jurisdiction. To be sure, as in other contexts, domestic law can be affected by international treaties to which a country is party. Moreover, in federal republics where substantial lawmaking occurs at the subnational level—notably in the United States—issues within conflict of laws often arise in wholly domestic contexts, relating to the laws of different states (or provinces, etc.) rather than of foreign countries. History Western legal systems first recognized a core underpinning of conflict of laws—namely, that "foreign law, in appropriate instances, should be applied to foreign cases"—in the twelfth century. Prior to that, the prevailing system was that of personal law, in which the laws applicable to each individual were dictated by the group to which he or she belonged. Initially, the mode of this body of law was simply to determine which jurisdiction's law would be most fair to apply; over time, however, the law came to favor more well-defined rules. These rules were systematically summarized by law professor Bartolus de Saxoferrato in the middle of the fourteenth century, a work that came to be cited repeatedly for the next several centuries. Later, in the seventeenth century, several Dutch legal scholars, including Christian Rodenburg, Paulus Voet, Johannes Voet, and Ulrik Huber, further expounded the jurisprudence of conflict of laws. 
Their key conceptual contributions were twofold: First, nations are wholly sovereign within their borders and therefore cannot be compelled to enforce foreign law in their own courts. Second, in order for international conflicts of law to work rationally, nations must exercise comity in enforcing others' laws, because it is in their mutual interest to do so.Scholars began to consider ways to resolve the question of how and when formally equal sovereign States ought to recognize each other's authority. The doctrine of comity was introduced as one of the means to answer these questions. Comity has undergone various changes since its creation. However, it still refers to the idea that every State is sovereign; often, the most just exercise of one State's authority is by recognizing the authority of another through the recognition and enforcement of another state's laws and judgments. Many states continue to recognize the principle of comity as the underpinning of private international law such as in Canada. In some countries, such as the United States of America and Australia, the principle of comity is written into the State's constitution. In the United States, salient issues in the field of conflict of laws date back at least to the framing of the Constitution. There was concern, for example, about what body of law the newly created federal courts would apply when handling cases between parties from different states (a type of case specifically assigned to the federal courts). Within the first two decades following ratification of the Constitution, over one hundred cases dealt with these issues, though the term conflict of laws was not yet used. The Constitution created a "plurilegal federal union" in which conflicts are inherently abundant, and as a result, American judges encounter conflicts cases far more often—about 5,000 per year as of the mid-2010s—and have accumulated far more experience in resolving them than anywhere else in the world. Alongside domestic developments relating to conflict of laws, the nineteenth century also saw the beginnings of substantial international collaboration in the field. The first international meeting on the topic took place in Lima in 1887 and 1888; delegates from five South American countries attended, but failed to produce an enforceable agreement. The first major multilateral agreements on the topic of conflict of laws arose from the First South American Congress of Private International Law, which was held in Montevideo from August 1888 to February 1889. The seven South American nations represented at the Montevideo conference agreed on eight treaties, which broadly adopted the ideas of Friedrich Carl von Savigny, determining applicable law on the basis of four types of factual relations (domicile, location of object, location of transaction, location of court). Soon after, European nations gathered for a conference in The Hague organized by Tobias Asser in 1893. This was followed by successive conferences in 1894, 1900, and 1904. Like their counterparts in Montevideo, these conferences produced several multilateral agreements on various topics within conflict of laws. Thereafter, the pace of these meetings slowed, with the next conventions occurring in 1925 and 1928. The seventh meeting at The Hague occurred in 1951, at which point the sixteen involved states established a permanent institution for international collaboration on conflict-of-laws issues. The organization is known today as the Hague Conference on Private International Law (HCCH). 
The HCCH has since grown to include eighty-six member states. As attention to the field became more widespread in the second half of the twentieth century, the European Union began to take action to harmonize conflict of laws jurisprudence across its member states. The first of these was the Brussels Convention agreed in 1968, which addressed questions of jurisdiction for cross-border cases. This was followed in 1980 by the Rome Convention, which addressed choice-of-law rules for contract disputes within EU member states. In 2009 and 2010, respectively, the EU enacted the Rome II Regulation to address choice-of-law in tort cases and the Rome III Regulation to address choice-of-law in divorce matters. Jurisdiction One of the key questions addressed within conflict of laws is the determination of when the legislature of a given jurisdiction may legislate, or the court of a given jurisdiction can properly adjudicate, regarding a matter that has extra-jurisdictional dimensions. This is known as jurisdiction (sometimes subdivided into adjudicative jurisdiction, the authority to hear a certain case, and prescriptive jurisdiction, the authority of a legislature to pass laws covering certain conduct). Like all aspects of conflict of laws, this question is in the first instance resolved by domestic law, which may or may not incorporate relevant international treaties or other supranational legal concepts. That said, relative to the other two main subtopics of conflict of laws (enforcement of judgments, and choice of law, which are both discussed below), the theory regarding jurisdiction has developed consistent international norms. This is perhaps because, unlike the other subtopics, jurisdiction relates to the particularly thorny question of when it is appropriate for a country to exercise its coercive power at all, rather than merely how it should do so. There are five bases of jurisdiction generally recognized in international law. These are not mutually exclusive; an individual or an occurrence may be subject to simultaneous jurisdiction in more than one place. They are as follows: Territoriality—A country has jurisdiction to regulate whatever occurs within its territorial boundaries. Of all bases of jurisdiction, the territoriality principle garners the strongest consensus in international law (subject to various complexities relating to actions that did not obviously occur wholly in one country). Passive personality—A country has jurisdiction over an occurrence that harmed its national. Nationality (or active personality)—A country has jurisdiction over a wrong of which its national is the perpetrator. Protective—A country has jurisdiction to address threats to its own security (such as by pursuing counterfeiters of official documents). Universal—A country has jurisdiction over certain acts based on their intrinsic rejection by the international community (such as violent deprivations of basic human rights). This is the most controversial of the five bases of jurisdiction. Countries have also developed bodies of law for adjudicating jurisdiction disputes between subnational entities. For example, in the United States, the minimum contacts rule derived from the Due Process Clause of the Fourteenth Amendment to the U.S. Constitution regulates the extent to which one state can exercise jurisdiction over people domiciled in other states, or occurrences that took place in other states.
Choice of law Courts faced with a choice of law issue have a two-stage process: the court will apply the law of the forum (lex fori) to all procedural matters (including the choice of law rules); it counts the factors that connect or link the legal issues to the laws of potentially relevant states and applies the laws that have the greatest connection, e.g. the law of nationality (lex patriae) or the law of habitual residence (lex domicilii). (See also 'European Harmonization Provisions': "The concept of habitual residence is the civil law equivalent of the common law test of lex domicilii".) The court will determine the plaintiffs' legal status and capacity. The court will determine the law of the state in which land is situated (lex situs) that will be applied to determine all questions of title. The law of the place where a transaction physically takes place or of the occurrence that gave rise to the litigation (lex loci actus) will often be the controlling law selected when the matter is substantive, but the proper law has become a more common choice. Contracts Many contracts and other forms of legally binding agreement include a jurisdiction or arbitration clause specifying the parties' choice of venue for any litigation (called a forum selection clause). In the EU, this is governed by the Rome I Regulation. Choice of law clauses may specify which laws the court or tribunal should apply to each aspect of the dispute. This matches the substantive policy of freedom of contract and will be determined by the law of the state where the choice of law clause confers its competence. Oxford Professor Adrian Briggs suggests that this is doctrinally problematic as it is emblematic of 'pulling oneself up by the bootstraps'. Judges have accepted that the principle of party autonomy allows the parties to select the law most appropriate to their transaction. This judicial acceptance of subjective intent excludes the traditional reliance on objective connecting factors; it also harms consumers as vendors often impose one-sided contractual terms selecting a venue far from the buyer's home or workplace. Contractual clauses relating to consumers, employees, and insurance beneficiaries are regulated under additional terms set out in Rome I, which may modify the contractual terms imposed by vendors. See also A. V. Dicey Comity List of Hague Conventions on Private International Law Place of the Relevant Intermediary Approach Microsoft Corp. v. Motorola Inc. Notes References and further reading CILE Studies (Center for International Legal Education – University of Pittsburgh School of Law) Private Law, Private International Law, and Judicial cooperation in the EU-US Relationship. 
External links American Society of Comparative Law Official website ASIL Guide to Electronic Resources for International Law British Institute of International and Comparative Law CONFLICT OF LAWS.NET – News and Views in Private International Law EEC Rome convention 1980 European Institute for International Law and International Relations Hague Conference on Private International Law International & Foreign Law Community International Chamber of Commerce International Court of Arbitration International Institute for the Unification of Private Law (UNIDROIT) Max Planck Institute – for Comparative and International Private Law Private International Law, Research Guide , Peace Palace Library Republic of Argentina v NML Capital Ltd [2010] EWCA Civ 41, regarding a hedge fund's enforcement of claim against Argentina United Nations Commission for International Trade Law U.S. State Department Private International Law Database Why the Hague Convention on jurisdiction threatens to strangle e-commerce and Internet free speech, by Chris Sprigman
23696
https://en.wikipedia.org/wiki/Timeline%20of%20programming%20languages
Timeline of programming languages
This is a record of notable programming languages, by decade. Pre-1950 1950s 1960s 1970s 1980s 1990s 2000s 2010s 2020s See also History of computing hardware History of programming languages Programming language Timeline of computing Timeline of programming language theory References External links Online Historical Encyclopaedia of Programming Languages Diagram & history of programming languages Eric Levenez's timeline diagram of computer languages history Programming Lists of programming languages History of computer science
23698
https://en.wikipedia.org/wiki/International%20Fixed%20Calendar
International Fixed Calendar
The International Fixed Calendar (also known as the Cotsworth plan, the Cotsworth calendar, the Eastman plan or the Yearal) was a proposed reform of the Gregorian calendar designed by Moses B. Cotsworth, first presented in 1902. The International Fixed Calendar divides the year into 13 months of 28 days each. As a type of perennial calendar, it fixes every date to the same weekday every year. Though it was never officially adopted at the country level, the entrepreneur George Eastman instituted its use at the Eastman Kodak Company in 1928, where it was used until 1989. While it is sometimes described as the 13-month calendar or the equal-month calendar, various alternative calendar designs share these features. Rules The calendar year has 13 months with 28 days each, divided into exactly 4 weeks (13 × 28 = 364). An extra day added as a holiday at the end of the year (after December 28, i.e. equal to December 31 Gregorian), sometimes called "Year Day", does not belong to any week and brings the total to 365 days. Each year coincides with the corresponding Gregorian year, so January 1 in the Cotsworth calendar always falls on Gregorian January 1. Twelve months are named and ordered the same as those of the Gregorian calendar, except that the extra month is inserted between June and July, and called Sol. Situated in mid-summer (from the point of view of its Northern Hemisphere authors) and including the mid-year solstice, the name of the new month was chosen in homage to the sun. Leap years in the International Fixed Calendar contain 366 days, and their occurrence follows the Gregorian rule: there is a leap year in every year whose number is divisible by 4, but not if the year number is divisible by 100, unless it is also divisible by 400. So although the year 2000 was a leap year, the years 1700, 1800, and 1900 were common years. The International Fixed Calendar inserts the extra day in leap years as June 29 – between Saturday June 28 and Sunday Sol 1. Each month begins on a Sunday, and ends on a Saturday; consequently, every year begins on Sunday. Neither Year Day nor Leap Day is considered to be part of any week; they are preceded by a Saturday and are followed by a Sunday, making a long weekend. As a result, a particular day usually has a different day of the week in the IFC than in all traditional calendars that contain a seven-day week. The IFC is, however, almost compatible with the World Calendar in this regard, because it also starts Sunday and has the extra day at the end of the year and the leap day in the middle, except that the IFC leaps on Gregorian June 17 and the World Calendar leaps two weeks later on July 1. Since this break of the ancient week cycle has been a major concern raised against its adoption, various leap week calendars have been proposed as a solution. * The two special dates have been recorded as either the 29th day of the month ending or the 0th day of the month beginning, or, more correctly, as outside any month and week with no ordinal number. The following table shows how the 13 months and extra days of the International Fixed Calendar occur in relation to the dates of the Gregorian calendar: * In a leap year, these Gregorian dates between March and June are a day earlier. March in the Fixed Calendar always has a fixed number of days (28), and includes the Gregorian February 29 when there is one. The rule for finding leap years is the same in both calendars.
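The rules above are simple enough to express in a few lines of code. The following sketch is only an illustration, assuming the proleptic Gregorian calendar supplied by Python's datetime module; the helper names are arbitrary.

# Convert a Gregorian date to the International Fixed Calendar, following the rules above
# (13 months of 28 days, Year Day after December 28, Leap Day inserted as June 29 in leap years).
from datetime import date

MONTHS = ["January", "February", "March", "April", "May", "June", "Sol",
          "July", "August", "September", "October", "November", "December"]

def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def to_ifc(gregorian):
    doy = gregorian.timetuple().tm_yday          # 1..365, or 1..366 in a leap year
    leap = is_leap(gregorian.year)
    if leap and doy == 169:                      # Gregorian June 17 in a leap year
        return (gregorian.year, "Leap Day (June 29)")
    if leap and doy > 169:                       # skip the inserted day
        doy -= 1
    if doy == 365:                               # Gregorian December 31
        return (gregorian.year, "Year Day")
    month, day = divmod(doy - 1, 28)
    return (gregorian.year, f"{MONTHS[month]} {day + 1}")

for d in (date(2024, 1, 1), date(2024, 6, 17), date(2024, 7, 27), date(2024, 12, 31)):
    print(d.isoformat(), "->", to_ifc(d))

Running it on 2024-06-17, a Gregorian leap year, reports Leap Day, matching the correspondence noted above.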
History Lunisolar calendars, with fixed weekdays, existed in many ancient cultures, with certain holidays always falling on the same dates of the month and days of the week. The idea of a 13-month perennial calendar has been around since at least the middle of the 18th century. Versions of the idea differ mainly on how the months are named, and the treatment of the extra day in leap year. The "Georgian calendar" was proposed in 1745 by Reverend Hugh Jones, an American colonist from Maryland writing under the pen name Hirossa Ap-Iccim. The author named the plan, and the thirteenth month, after King George II of Great Britain. The 365th day each year was to be set aside as Christmas. The treatment of leap year varied from the Gregorian rule, however, and the year would begin closer to the winter solstice. In a later version of the plan, published in 1753, the 13 months were all renamed for Christian saints. In 1849 the French philosopher Auguste Comte (1798–1857) proposed the 13-month Positivist Calendar, naming the months: Moses, Homer, Aristotle, Archimedes, Caesar, St Paul, Charlemagne, Dante, Gutenberg, Shakespeare, Descartes, Frederic and Bichat. The days of the year were likewise dedicated to "saints" in the Positivist Religion of Humanity. Positivist weeks, months, and years begin with Monday instead of Sunday. Comte also reset the year number, beginning the era of his calendar (year 1) with the Gregorian year 1789. For the extra days of the year not belonging to any week or month, Comte followed the pattern of Ap-Iccim (Jones), ending each year with a festival on the 365th day, followed by a subsequent feast day occurring only in leap years. Whether Moses Cotsworth was familiar with the 13-month plans that preceded his International Fixed Calendar is not known. He did follow Ap-Iccim (Jones) in designating the 365th day of the year as Christmas. His suggestion was that this last day of the year should be designated a Sunday, and hence, because the following day would be New Year's Day and a Sunday also, he called it a Double Sunday. Since Cotsworth's goal was a simplified, more "rational" calendar for business and industry, he would carry over all the features of the Gregorian calendar consistent with this goal, including the traditional month names, the week beginning on Sunday (still traditionally used in US, but uncommon in Europe and in the ISO week standard, starting their weeks on Monday), and the Gregorian leap-year rule. To promote Cotsworth's calendar reform the International Fixed Calendar League was founded in 1923, just after the plan was selected by the League of Nations as the best of 130 calendar proposals put forward. Sir Sandford Fleming, the inventor and driving force behind worldwide adoption of standard time, became the first president of the IFCL. The League opened offices in London and later in Rochester, New York. George Eastman, of the Eastman Kodak Company, became a fervent supporter of the IFC, and instituted its use at Kodak. Some organized opposition to the proposed reform came from rabbi Joseph Hertz, who objected to the way that the Jewish Sabbath would move throughout the week. The International Fixed Calendar League ceased operations shortly after the calendar plan failed to win final approval of the League of Nations in 1937. 
See also Calendar reform ISO week date Leap week calendar Positivist calendar World Calendar References Notes Citations Sources External links Cotsworth Calendar of George Eastman NUCAL New Universal Calendar Project International Fixed Calendar League Proposed calendars Specific calendars 1902 in science 1902 works
23703
https://en.wikipedia.org/wiki/Potential%20energy
Potential energy
In physics, potential energy is the energy held by an object because of its position relative to other objects, stresses within itself, its electric charge, or other factors. The term potential energy was introduced by the 19th-century Scottish engineer and physicist William Rankine, although it has links to the ancient Greek philosopher Aristotle's concept of potentiality. Common types of potential energy include the gravitational potential energy of an object, the elastic potential energy of a deformed spring, and the electric potential energy of an electric charge in an electric field. The unit for energy in the International System of Units (SI) is the joule (symbol J). Potential energy is associated with forces that act on a body in a way that the total work done by these forces on the body depends only on the initial and final positions of the body in space. These forces, whose total work is path independent, are called conservative forces. If the force acting on a body varies over space, then one has a force field; such a field is described by vectors at every point in space, which is in-turn called a vector field. A conservative vector field can be simply expressed as the gradient of a certain scalar function, called a scalar potential. The potential energy is related to, and can be obtained from, this potential function. Overview There are various types of potential energy, each associated with a particular type of force. For example, the work of an elastic force is called elastic potential energy; work of the gravitational force is called gravitational potential energy; work of the Coulomb force is called electric potential energy; work of the strong nuclear force or weak nuclear force acting on the baryon charge is called nuclear potential energy; work of intermolecular forces is called intermolecular potential energy. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of configurations of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motions of particles and the potential energy of their configuration. Forces derivable from a potential are also called conservative forces. The work done by a conservative force is where is the change in the potential energy associated with the force. The negative sign provides the convention that work done against a force field increases potential energy, while work done by the force field decreases potential energy. Common notations for potential energy are PE, U, V, and Ep. Potential energy is the energy by virtue of an object's position relative to other objects. Potential energy is often associated with restoring forces such as a spring or the force of gravity. The action of stretching a spring or lifting a mass is performed by an external force that works against the force field of the potential. This work is stored in the force field, which is said to be stored as potential energy. If the external force is removed the force field acts on the body to perform the work as it moves the body back to the initial position, reducing the stretch of the spring or causing a body to fall. Consider a ball whose mass is and whose height is . The acceleration of free fall is approximately constant, so the weight force of the ball is constant. 
The product of force and displacement gives the work done, which is equal to the gravitational potential energy, thus U = Fh = mgh. The more formal definition is that potential energy is the energy difference between the energy of an object in a given position and its energy at a reference position. History From around 1840 scientists sought to define and understand energy and work. The term "potential energy" was coined by William Rankine, a Scottish engineer and physicist, in 1853 as part of a specific effort to develop terminology. He chose the term as part of the pair "actual" vs "potential" going back to work by Aristotle. In his 1867 discussion of the same topic Rankine describes potential energy as 'energy of configuration' in contrast to actual energy as 'energy of activity'. Also in 1867, William Thomson introduced "kinetic energy" as the opposite of "potential energy", asserting that all actual energy took the form of ½mv². Once this hypothesis became widely accepted, the term "actual energy" gradually faded. Work and potential energy Potential energy is closely linked with forces. If the work done by a force on a body that moves from A to B does not depend on the path between these points (if the work is done by a conservative force), then the work of this force measured from A assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field. If the work for an applied force is independent of the path, then the work done by the force is evaluated from the start to the end of the trajectory of the point of application. This means that there is a function U(x), called a "potential", that can be evaluated at the two points xA and xB to obtain the work over any trajectory between these two points. It is tradition to define this function with a negative sign so that positive work is a reduction in the potential, that is W = ∫C F · dx = U(xA) − U(xB), where C is the trajectory taken from A to B. Because the work done is independent of the path taken, this expression is true for any trajectory, C, from A to B. The function U(x) is called the potential energy associated with the applied force. Examples of forces that have potential energies are gravity and spring forces. Derivable from a potential In this section the relationship between work and potential energy is presented in more detail. The line integral that defines work along curve C takes a special form if the force F is related to a scalar field U′(x) so that F = ∇U′(x). This means that the units of U′ must be those of work or energy (joules in SI). In this case, work along the curve is given by W = ∫C F · dx = ∫C ∇U′ · dx, which can be evaluated using the gradient theorem to obtain W = U′(xB) − U′(xA). This shows that when forces are derivable from a scalar field, the work of those forces along a curve C is computed by evaluating the scalar field at the start point A and the end point B of the curve. This means the work integral does not depend on the path between A and B and is said to be independent of the path. Potential energy is traditionally defined as the negative of this scalar field, U(x) = −U′(x), so that work by the force field decreases potential energy, that is W = U(xA) − U(xB). In this case, the application of the del operator to the work function yields ∇W = −∇U = F, and the force F is said to be "derivable from a potential". This also necessarily implies that F must be a conservative vector field. The potential U defines a force F at every point x in space, so the set of forces is called a force field.
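The sign convention above can be checked numerically for any one-dimensional potential. The following sketch is only an illustration, using a central finite difference and the spring potential U(x) = ½kx² as the example (names and the value of k are arbitrary); it confirms that the force recovered from −dU/dx is the familiar restoring force −kx.

# Numerical check that F = -dU/dx for a one-dimensional potential.
def spring_potential(x, k=50.0):
    """Elastic potential energy of a linear spring, U = 1/2 * k * x**2 (joules)."""
    return 0.5 * k * x ** 2

def force_from_potential(potential, x, h=1e-6):
    """Central-difference approximation of F = -dU/dx."""
    return -(potential(x + h) - potential(x - h)) / (2 * h)

k = 50.0
for x in (-0.2, 0.0, 0.1, 0.3):
    numeric = force_from_potential(spring_potential, x)
    exact = -k * x
    print(f"x = {x:+.2f} m   F (numeric) = {numeric:+8.3f} N   F = -kx = {exact:+8.3f} N")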
Computing potential energy Given a force field F(x), evaluation of the work integral using the gradient theorem can be used to find the scalar function associated with potential energy. This is done by introducing a parameterized curve from to , and computing, For the force field F, let , then the gradient theorem yields, The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity v of the point of application, that is Examples of work that can be computed from potential functions are gravity and spring forces. Potential energy for near-Earth gravity For small height changes, gravitational potential energy can be computed using where m is the mass in kilograms, g is the local gravitational field (9.8 metres per second squared on Earth), h is the height above a reference level in metres, and U is the energy in joules. In classical physics, gravity exerts a constant downward force on the center of mass of a body moving near the surface of the Earth. The work of gravity on a body moving along a trajectory , such as the track of a roller coaster is calculated using its velocity, , to obtain where the integral of the vertical component of velocity is the vertical distance. The work of gravity depends only on the vertical movement of the curve . Potential energy for a linear spring A horizontal spring exerts a force that is proportional to its deformation in the axial or x direction. The work of this spring on a body moving along the space curve , is calculated using its velocity, , to obtain For convenience, consider contact with the spring occurs at , then the integral of the product of the distance x and the x-velocity, xvx, is x2/2. The function is called the potential energy of a linear spring. Elastic potential energy is the potential energy of an elastic object (for example a bow or a catapult) that is deformed under tension or compression (or stressed in formal terminology). It arises as a consequence of a force that tries to restore the object to its original shape, which is most often the electromagnetic force between the atoms and molecules that constitute the object. If the stretch is released, the energy is transformed into kinetic energy. Potential energy for gravitational forces between two bodies The gravitational potential function, also known as gravitational potential energy, is: The negative sign follows the convention that work is gained from a loss of potential energy. Derivation The gravitational force between two bodies of mass M and m separated by a distance r is given by Newton's law of universal gravitation where is a vector of length 1 pointing from M to m and G is the gravitational constant. Let the mass m move at the velocity then the work of gravity on this mass as it moves from position to is given by The position and velocity of the mass m are given by where er and et are the radial and tangential unit vectors directed relative to the vector from M to m. Use this to simplify the formula for work of gravity to, This calculation uses the fact that Potential energy for electrostatic forces between two bodies The electrostatic force exerted by a charge Q on another charge q separated by a distance r is given by Coulomb's Law where is a vector of length 1 pointing from Q to q and ε0 is the vacuum permittivity. 
The work W required to move q from A to any point B in the electrostatic force field is given by the potential function U_E(r) = (1/(4πε₀)) (Qq/r). Reference level The potential energy is a function of the state a system is in, and is defined relative to that for a particular state. This reference state is not always a real state; it may also be a limit, such as with the distances between all bodies tending to infinity, provided that the energy involved in tending to that limit is finite, such as in the case of inverse-square law forces. Any arbitrary reference state could be used; therefore it can be chosen based on convenience. Typically the potential energy of a system depends on the relative positions of its components only, so the reference state can also be expressed in terms of relative positions. Gravitational potential energy Gravitational energy is the potential energy associated with gravitational force, as work is required to elevate objects against Earth's gravity. The potential energy due to elevated positions is called gravitational potential energy, and is evidenced by water in an elevated reservoir or kept behind a dam. If an object falls from one point to another point inside a gravitational field, the force of gravity will do positive work on the object, and the gravitational potential energy will decrease by the same amount. Consider a book placed on top of a table. As the book is raised from the floor to the table, some external force works against the gravitational force. If the book falls back to the floor, the "falling" energy the book receives is provided by the gravitational force. Thus, if the book falls off the table, this potential energy goes to accelerate the mass of the book and is converted into kinetic energy. When the book hits the floor this kinetic energy is converted into heat, deformation, and sound by the impact. The factors that affect an object's gravitational potential energy are its height relative to some reference point, its mass, and the strength of the gravitational field it is in. Thus, a book lying on a table has less gravitational potential energy than the same book on top of a taller cupboard and less gravitational potential energy than a heavier book lying on the same table. An object at a certain height above the Moon's surface has less gravitational potential energy than at the same height above the Earth's surface because the Moon's gravity is weaker. "Height" in the common sense of the term cannot be used for gravitational potential energy calculations when gravity is not assumed to be a constant. The following sections provide more detail. Local approximation The strength of a gravitational field varies with location. However, when the change of distance is small in relation to the distances from the center of the source of the gravitational field, this variation in field strength is negligible and we can assume that the force of gravity on a particular object is constant. Near the surface of the Earth, for example, we assume that the acceleration due to gravity is a constant (standard gravity). In this case, a simple expression for gravitational potential energy can be derived using the equation for work, W = Fd, and the equation F = mg. The amount of gravitational potential energy held by an elevated object is equal to the work done against gravity in lifting it. The work done equals the force required to move it upward multiplied with the vertical distance it is moved (remember W = Fd).
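As a quick numerical illustration of the electrostatic potential function above, the sketch below (Python; the two charges and the separation are invented example values) evaluates Qq/(4πε₀r) directly and also checks it against a numerical integral of the Coulomb force out to a large but finite cutoff standing in for infinity.

```python
import numpy as np

EPS0 = 8.8541878128e-12          # vacuum permittivity, F/m
K = 1.0 / (4.0 * np.pi * EPS0)   # Coulomb constant

Q = 2.0e-6     # charge Q in coulombs (illustrative)
q = -1.0e-6    # charge q in coulombs (illustrative)
r = 0.05       # separation in metres (illustrative)

# Closed-form electrostatic potential energy, zero at infinite separation.
U_closed = K * Q * q / r

# Numerical check: the work needed to bring q in from "infinity" (50 m cutoff) to r,
# i.e. the integral of the Coulomb force K Q q / r'^2 from r out to the cutoff.
r_grid = np.linspace(r, 50.0, 500001)
F = K * Q * q / r_grid**2
U_numeric = np.sum(0.5 * (F[:-1] + F[1:]) * np.diff(r_grid))   # trapezoidal rule

print(f"closed form : {U_closed:.6e} J")
print(f"numerical   : {U_numeric:.6e} J")   # agrees to about 0.1%; the error is the finite cutoff
```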
The upward force required while moving at a constant velocity is equal to the weight, mg, of an object, so the work done in lifting it through a height h is the product mgh. Thus, when accounting only for mass, gravity, and altitude, the equation is: U = mgh, where U is the potential energy of the object relative to its being on the Earth's surface, m is the mass of the object, g is the acceleration due to gravity, and h is the altitude of the object. Hence, the potential difference is ΔU = mg Δh. General formula However, over large variations in distance, the approximation that g is constant is no longer valid, and we have to use calculus and the general mathematical definition of work to determine gravitational potential energy. For the computation of the potential energy, we can integrate the gravitational force, whose magnitude is given by Newton's law of gravitation, with respect to the distance between the two bodies. Using that definition, the gravitational potential energy of a system of masses m and M at a distance r using the Newtonian constant of gravitation G is U = −GmM/r + K, where K is an arbitrary constant dependent on the choice of datum from which potential is measured. Choosing the convention that K = 0 (i.e. in relation to a point at infinity) makes calculations simpler, albeit at the cost of making U negative; for why this is physically reasonable, see below. Given this formula for U, the total potential energy of a system of bodies is found by summing, for all pairs of two bodies, the potential energy of the system of those two bodies. Considering the system of bodies as the combined set of small particles the bodies consist of, and applying the previous on the particle level we get the negative gravitational binding energy. This potential energy is more strongly negative than the total potential energy of the system of bodies as such since it also includes the negative gravitational binding energy of each body. The potential energy of the system of bodies as such is the negative of the energy needed to separate the bodies from each other to infinity, while the gravitational binding energy is the energy needed to separate all particles from each other to infinity. Negative gravitational energy As with all potential energies, only differences in gravitational potential energy matter for most physical purposes, and the choice of zero point is arbitrary. Given that there is no reasonable criterion for preferring one particular finite r over another, there seem to be only two reasonable choices for the distance at which U becomes zero: r = 0 and r = ∞. The choice of U = 0 at infinity may seem peculiar, and the consequence that gravitational energy is always negative may seem counterintuitive, but this choice allows gravitational potential energy values to be finite, albeit negative. The singularity at r = 0 in the formula for gravitational potential energy means that the only other apparently reasonable alternative choice of convention, with U = 0 for r = 0, would result in potential energy being positive, but infinitely large for all nonzero values of r, and would make calculations involving sums or differences of potential energies beyond what is possible with the real number system. Since physicists abhor infinities in their calculations, and r is always non-zero in practice, the choice of U = 0 at infinity is by far the more preferable choice, even if the idea of negative energy in a gravity well appears to be peculiar at first.
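The relationship between the local U = mgh approximation and the general −GmM/r formula can also be checked numerically. In the sketch below (Python), the 1 kg test mass and 100 m lift are arbitrary illustration values, while the Earth mass and radius are standard reference figures; it compares mgΔh with the exact difference of −GmM/r at the two radii.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # mass of the Earth, kg
R = 6.371e6        # mean radius of the Earth, m

m_test = 1.0       # test mass, kg (illustrative)
dh = 100.0         # height change, m (illustrative)

g = G * M / R**2                          # local gravitational field at the surface

U = lambda r: -G * m_test * M / r         # general formula, zero at infinity
dU_exact = U(R + dh) - U(R)               # exact change in potential energy
dU_approx = m_test * g * dh               # near-Earth approximation, m g Δh

print(f"g at the surface     : {g:.4f} m/s^2")
print(f"exact ΔU             : {dU_exact:.4f} J")
print(f"m g Δh approximation : {dU_approx:.4f} J")
print(f"relative error       : {abs(dU_exact - dU_approx) / dU_exact:.2e}")   # roughly Δh/R
```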
The negative value for gravitational energy also has deeper implications that make it seem more reasonable in cosmological calculations where the total energy of the universe can meaningfully be considered; see inflation theory for more on this. Uses Gravitational potential energy has a number of practical uses, notably the generation of pumped-storage hydroelectricity. For example, in Dinorwig, Wales, there are two lakes, one at a higher elevation than the other. At times when surplus electricity is not required (and so is comparatively cheap), water is pumped up to the higher lake, thus converting the electrical energy (running the pump) to gravitational potential energy. At times of peak demand for electricity, the water flows back down through electrical generator turbines, converting the potential energy into kinetic energy and then back into electricity. The process is not completely efficient and some of the original energy from the surplus electricity is in fact lost to friction. Gravitational potential energy is also used to power clocks in which falling weights operate the mechanism. It is also used by counterweights for lifting up an elevator, crane, or sash window. Roller coasters are an entertaining way to utilize potential energy – chains are used to move a car up an incline (building up gravitational potential energy), to then have that energy converted into kinetic energy as it falls. Another practical use is utilizing gravitational potential energy to descend (perhaps coast) downhill in transportation such as the descent of an automobile, truck, railroad train, bicycle, airplane, or fluid in a pipeline. In some cases the kinetic energy obtained from the potential energy of descent may be used to start ascending the next grade such as what happens when a road is undulating and has frequent dips. The commercialization of stored energy (in the form of rail cars raised to higher elevations) that is then converted to electrical energy when needed by an electrical grid, is being undertaken in the United States in a system called Advanced Rail Energy Storage (ARES). Chemical potential energy Chemical potential energy is a form of potential energy related to the structural arrangement of atoms or molecules. This arrangement may be the result of chemical bonds within a molecule or otherwise. Chemical energy of a chemical substance can be transformed to other forms of energy by a chemical reaction. As an example, when a fuel is burned the chemical energy is converted to heat, same is the case with digestion of food metabolized in a biological organism. Green plants transform solar energy to chemical energy through the process known as photosynthesis, and electrical energy can be converted to chemical energy through electrochemical reactions. The similar term chemical potential is used to indicate the potential of a substance to undergo a change of configuration, be it in the form of a chemical reaction, spatial transport, particle exchange with a reservoir, etc. Electric potential energy An object can have potential energy by virtue of its electric charge and several forces related to their presence. There are two main types of this kind of potential energy: electrostatic potential energy, electrodynamic potential energy (also sometimes called magnetic potential energy). 
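Since the pumped-storage example rests entirely on U = mgh (with the mass given by the density of water times the stored volume), a rough back-of-the-envelope sketch may help. The head and water volume below are assumed purely for illustration and are not figures quoted in the article.

```python
# Back-of-the-envelope estimate of the energy stored in a pumped-storage plant,
# using U = m g h with m = (density of water) x (volume). All plant figures below
# are assumed for illustration only and are not taken from the article.
RHO_WATER = 1000.0       # kg/m^3
G = 9.81                 # m/s^2

head_m = 500.0           # assumed height difference between the two reservoirs, m
volume_m3 = 6.0e6        # assumed usable water volume, m^3

mass_kg = RHO_WATER * volume_m3
energy_j = mass_kg * G * head_m              # gravitational potential energy

energy_gwh = energy_j / 3.6e12               # 1 GWh = 3.6e12 J
print(f"stored energy ~ {energy_j:.3e} J ~ {energy_gwh:.1f} GWh (before conversion losses)")
```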
Electrostatic potential energy Electrostatic potential energy between two bodies in space is obtained from the force exerted by a charge Q on another charge q, which is given by F = (1/(4πε₀)) (Qq/r²) r̂, where r̂ is a vector of length 1 pointing from Q to q and ε₀ is the vacuum permittivity. If the electric charge of an object can be assumed to be at rest, then it has potential energy due to its position relative to other charged objects. The electrostatic potential energy is the energy of an electrically charged particle (at rest) in an electric field. It is defined as the work that must be done to move it from an infinite distance away to its present location, adjusted for non-electrical forces on the object. This energy will generally be non-zero if there is another electrically charged object nearby. The work W required to move q from A to any point B in the electrostatic force field is given by U_E(r) = (1/(4πε₀)) (Qq/r), typically given in joules (J). A related quantity called electric potential (commonly denoted with a V for voltage) is equal to the electric potential energy per unit charge. Magnetic potential energy The energy of a magnetic moment m in an externally produced magnetic B-field B has potential energy U = −m · B. The energy of a magnetization M in a field is E = −½ ∫ M · B dV, where the integral can be over all space or, equivalently, over the region where M is nonzero. Magnetic potential energy is the form of energy related not only to the distance between magnetic materials, but also to the orientation, or alignment, of those materials within the field. For example, the needle of a compass has the lowest magnetic potential energy when it is aligned with the north and south poles of the Earth's magnetic field. If the needle is moved by an outside force, torque is exerted on the magnetic dipole of the needle by the Earth's magnetic field, causing it to move back into alignment. The magnetic potential energy of the needle is highest when its magnetic moment points opposite to the Earth's magnetic field. Two magnets will have potential energy in relation to each other and the distance between them, but this also depends on their orientation. If the opposite poles are held apart, the potential energy will be higher the further they are apart and lower the closer they are. Conversely, like poles will have the highest potential energy when forced together, and the lowest when they spring apart. Nuclear potential energy Nuclear potential energy is the potential energy of the particles inside an atomic nucleus. The nuclear particles are bound together by the strong nuclear force. Weak nuclear forces provide the potential energy for certain kinds of radioactive decay, such as beta decay. Nuclear particles like protons and neutrons are not destroyed in fission and fusion processes, but collections of them can have less mass than if they were individually free, in which case this mass difference can be liberated as heat and radiation in nuclear reactions (the heat and radiation have the missing mass, but it often escapes from the system, where it is not measured). The energy from the Sun is an example of this form of energy conversion. In the Sun, the process of hydrogen fusion converts about 4 million tonnes of solar matter per second into electromagnetic energy, which is radiated into space. Forces and potential energy Potential energy is closely linked with forces.
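A tiny numerical check of U = −m · B makes the compass statement concrete: the energy is lowest when the moment is aligned with the field and highest when it is anti-aligned. The moment and field magnitudes below are arbitrary illustration values.

```python
import numpy as np

B = np.array([0.0, 0.0, 50e-6])   # Earth-like field of ~50 microtesla along +z (illustrative)
m_mag = 0.02                      # magnetic moment of a small needle, A·m^2 (illustrative)

def dipole_energy(theta):
    """U = -m · B for a moment tilted by angle theta away from the field direction."""
    m = m_mag * np.array([np.sin(theta), 0.0, np.cos(theta)])
    return -np.dot(m, B)

for deg in (0, 45, 90, 135, 180):
    print(f"angle {deg:3d} deg  U = {dipole_energy(np.radians(deg)):+.3e} J")
# The energy runs from its minimum at 0 deg (aligned) to its maximum at 180 deg (anti-aligned).
```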
If the work done by a force on a body that moves from A to B does not depend on the path between these points, then the work of this force measured from A assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field. For example, gravity is a conservative force. The associated potential is the gravitational potential, often denoted by ϕ or V, corresponding to the energy per unit mass as a function of position. The gravitational potential energy of two particles of mass M and m separated by a distance r is U = −GMm/r. The gravitational potential (specific energy) of the two bodies is ϕ = −(GM/r + Gm/r) = −G(M + m)/r = U/μ, where μ is the reduced mass. The work done against gravity by moving an infinitesimal mass from point A with U = a to point B with U = b is (b − a), and the work done going back the other way is (a − b), so that the total work done in moving from A to B and returning to A is (b − a) + (a − b) = 0. If the potential is redefined at A to be a + c and the potential at B to be b + c, where c is a constant (i.e. c can be any number, positive or negative, but it must be the same at A as it is at B) then the work done going from A to B is (b + c) − (a + c) = b − a as before. In practical terms, this means that one can set the zero of ϕ and U anywhere one likes. One may set it to be zero at the surface of the Earth, or may find it more convenient to set zero at infinity (as in the expressions given earlier in this section). A conservative force can be expressed in the language of differential geometry as a closed form. As Euclidean space is contractible, its de Rham cohomology vanishes, so every closed form is also an exact form, and can be expressed as the gradient of a scalar field. This gives a mathematical justification of the fact that all conservative forces are gradients of a potential field. Notes References External links What is potential energy? Energy (physics) Forms of energy Mechanical quantities
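The bookkeeping in this section (U = −GMm/r, the specific potential U/μ, and the fact that adding a constant to the potential changes nothing observable) can be sanity-checked in a few lines of Python; the two masses, the separation, and the offset c are arbitrary illustration values.

```python
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2

M, m = 5.0e24, 7.0e22    # two masses in kg (illustrative values)
r = 4.0e8                # separation in metres (illustrative value)

U = -G * M * m / r       # gravitational potential energy of the pair, zero at infinity
mu = M * m / (M + m)     # reduced mass
phi = -G * (M + m) / r   # gravitational potential (specific energy) of the two bodies

print(f"U      = {U:.4e} J")
print(f"U / mu = {U / mu:.4e} J/kg")
print(f"phi    = {phi:.4e} J/kg   (equals U / mu)")

# Adding a constant c to the potential at both A and B leaves the work unchanged:
a, b, c = 1.0e9, 3.0e9, 12345.0          # potential values at A and B plus an offset (illustrative)
print(f"work A->B without offset: {b - a:.1f} J")
print(f"work A->B with offset   : {(b + c) - (a + c):.1f} J")
```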
23704
https://en.wikipedia.org/wiki/Pyramid
Pyramid
A pyramid () is a structure whose visible surfaces are triangular in broad outline and converge toward the top, making the appearance roughly a pyramid in the geometric sense. The base of a pyramid can be of any polygon shape, such as triangular or quadrilateral, and its lines either filled or stepped. A pyramid has the majority of its mass closer to the ground with less mass towards the pyramidion at the apex. This is due to the gradual decrease in the cross-sectional area along the vertical axis with increasing elevation. This offers a weight distribution that allowed early civilizations to create monumental structures. Civilizations in many parts of the world have built pyramids. The largest pyramid by volume is the Mesoamerican Great Pyramid of Cholula, in the Mexican state of Puebla. For millennia, the largest structures on Earth were pyramids—first the Red Pyramid in the Dashur Necropolis and then the Great Pyramid of Khufu, both in Egypt; the latter is the only extant example of the Seven Wonders of the Ancient World. Ancient monuments West Asia Mesopotamia The Mesopotamians built the earliest pyramidal structures, called ziggurats. In ancient times, these were brightly painted in gold/bronze. They were constructed of sun-dried mud-brick, and little remains of them. Ziggurats were built by the Sumerians, Babylonians, Elamites, Akkadians, and Assyrians. Each ziggurat was part of a temple complex that included other buildings. The ziggurat's precursors were raised platforms that date from the Ubaid period of the fourth millennium BC. The earliest ziggurats began near the end of the Early Dynastic Period. The original pyramidal structure, the Anu ziggurat, dates to around 4000 BC. The White Temple was built on top of it circa 3500 BC. Built in receding tiers upon a rectangular, oval, or square platform, the ziggurat was a pyramidal structure with a flat top. Sun-baked bricks made up the core of the ziggurat with facings of fired bricks on the outside. The facings were often glazed in different colors and may have had astrological significance. Kings sometimes had their names engraved on them. The number of tiers ranged from two to seven. It is assumed that they had shrines at the top, but no archaeological evidence supports this and the only textual evidence is from Herodotus. Access to the shrine would have been by a series of ramps on one side of the ziggurat or by a spiral ramp from base to summit. Africa Egypt The most famous African pyramids are in Egypt — huge structures built of bricks or stones, some of which are among the world's largest constructions. They are shaped in reference to the sun's rays. Most had a smoothed white limestone surface. Many of the facing stones have fallen or were removed and used for construction in Cairo. The capstone was usually made of limestone, granite or basalt, and some were plated with electrum. Ancient Egyptians built pyramids from 2700 BC until around 1700 BC. The first pyramid was erected during the Third Dynasty by the Pharaoh Djoser and his architect Imhotep. This step pyramid consisted of six stacked mastabas. Early kings such as Snefru built pyramids, with subsequent kings adding to the number until the end of the Middle Kingdom. The age of the pyramids reached its zenith at Giza in 2575–2150 BC. The last king to build royal pyramids was Ahmose, with later kings hiding their tombs in the hills, such as those in the Valley of the Kings in Luxor's West Bank. In Medinat Habu and Deir el-Medina, smaller pyramids were built by individuals.
Smaller pyramids with steeper sides were built by the Nubians who ruled Egypt in the Late Period. The Great Pyramid of Giza is the largest in Egypt and one of the largest in the world. At it was the tallest structure in the world until the Lincoln Cathedral was finished in 1311 AD. Its base covers an area of around . The Great Pyramid is the only extant one of the Seven Wonders of the Ancient World. Ancient Egyptian pyramids were, in most cases, placed west of the river Nile because the divine pharaoh's soul was meant to join with the sun during its descent before continuing with the sun in its eternal round. As of 2008, some 135 pyramids had been discovered in Egypt, most located near Cairo. Sudan While African pyramids are commonly associated with Egypt, Sudan has 220 extant pyramids, the most in the world. Nubian pyramids were constructed (roughly 240 of them) at three sites in Sudan to serve as tombs for the kings and queens of Napata and Meroë. The pyramids of Kush, also known as Nubian Pyramids, have different characteristics than those of Egypt. The Nubian pyramids had steeper sides than the Egyptian ones. Pyramids were built in Sudan as late as 200 AD. Sahel The Tomb of Askia, in Gao, Mali, is believed to be the burial place of Askia Mohammad I, one of the Songhai Empire's most prolific emperors. It was built at the end of the fifteenth century and is designated as a UNESCO World Heritage Site. UNESCO describes the tomb as an example of the monumental mud-building traditions of the West African Sahel. The complex includes the pyramidal tomb, two mosques, a cemetery and an assembly ground. At 17 metres (56 ft) in height it is the largest pre-colonial architectural monument in Gao. It is a notable example of the Sudano-Sahelian architectural style that later spread throughout the region. Nigeria One of the unique structures of Igbo culture was the Nsude pyramids, in the Nigerian town of Nsude, northern Igboland. Ten pyramidal structures were built of clay/mud. The first base section was in circumference and in height. The next stack was in circumference. Circular stacks continued to the top. The structures were temples for the god Ala, who was believed to reside there. A stick was placed at the top to represent the god's residence. The structures were laid in groups of five parallel to each other. Because it was built of clay/mud like the Deffufa of Nubia, over time periodic reconstruction has been required. Europe Greece Pausanias (2nd century AD) mentions two buildings resembling pyramids, one, 19 kilometres (12 mi) southwest of a still standing structure at Hellenikon, a common tomb for soldiers who died in a legendary struggle for the throne of Argos and another that he was told was the tomb of Argives killed in a battle around 669/8 BC. Neither survives and no evidence indicates that they resembled Egyptian pyramids. At least two surviving pyramid-like structures are available to study, one at Hellenikon and the other at Ligourio/Ligurio, a village near the ancient theatre Epidaurus. These buildings have inwardly sloping walls, but bear no other resemblance to Egyptian pyramids. They had large central rooms (unlike Egyptian pyramids) and the Hellenikon structure is rectangular rather than square, which means that the sides could not have met at a point. The stone used to build these structures was limestone quarried locally and was cut to fit, not into freestanding blocks like the Great Pyramid of Giza. 
These structures were dated from pot shards excavated from the floor and grounds. The latest estimates are around the 5th and 4th centuries. Normally this technique is used for dating pottery, but researchers used it to try to date stone flakes from the structure walls. This launched debate about whether or not these structures are actually older than those of Egypt, part of the Black Athena controversy. Lefkowitz criticised this research, suggesting that some of the research was done not to determine the reliability of the dating method, as was suggested, but to back up a claim and to make points about pyramids and Greek civilization. She claimed that not only were the results imprecise, but that other structures mentioned in the research are not in fact pyramids, e.g. a tomb alleged to be the tomb of Amphion and Zethus near Thebes, a structure at Stylidha (Thessaly) which is a long wall, etc. She pushed the possibility that the stones that were dated might have been recycled from earlier constructions. She claimed that earlier research from the 1930s, confirmed in the 1980s by Fracchia, was ignored. Liritzis responded that Lefkowitz failed to understand and misinterpreted the methodology. Spain The Pyramids of Güímar refer to six rectangular pyramid-shaped, terraced structures, built from lava without mortar. They are located in the district of Chacona, part of the town of Güímar on the island of Tenerife in the Canary Islands. The structures were dated to the 19th century and their function explained as a byproduct of contemporary agricultural techniques. Autochthonous Guanche traditions as well as surviving images indicate that similar structures (also known as "Morras", "Majanos", "Molleros", or "Paredones") were built in many locations on the island. However, over time they were dismantled and used as building material. Güímar hosted nine pyramids, only six of which survive. Roman Empire The 27-metre-high Pyramid of Cestius was built by the end of the 1st century BC and survives close to the Porta San Paolo. Another, named Meta Romuli, stood in the Ager Vaticanus (today's Borgo), but was destroyed at the end of the 15th century. Medieval Europe Pyramids were occasionally used in Christian architecture during the feudal era, e.g. as the tower of Oviedo's Gothic Cathedral of San Salvador. Americas Peru Andean cultures used pyramids in various architectural structures such as the ones in Caral, Túcume and Chavín de Huantar, constructed around the same time as early Egyptian pyramids. Mesoamerica Several Mesoamerican cultures built pyramid-shaped structures. Mesoamerican pyramids were usually stepped, with temples on top, more similar to the Mesopotamian ziggurat than the Egyptian pyramid. The largest by volume is the Great Pyramid of Cholula, in the Mexican state of Puebla. Constructed from the 3rd century BC to the 9th century AD, this pyramid is the world's largest monument, and is still not fully excavated. The third largest pyramid in the world, the Pyramid of the Sun, at Teotihuacan, is also located in Mexico. An unusual pyramid with a circular plan survives at the site of Cuicuilco, now inside Mexico City and mostly covered with lava from an eruption of the Xitle Volcano in the 1st century BC. Several circular stepped pyramids called Guachimontones survive in Teuchitlán, Jalisco. Pyramids in Mexico were often used for human sacrifice.
Harner stated that for the dedication of the Great Pyramid of Tenochtitlan in 1487, "one source states 20,000, another 72,344, and several give 80,400" as the number of humans sacrificed. United States Many pre-Columbian Native American societies of ancient North America built large pyramidal earth structures known as platform mounds. Among the largest and best-known of these structures is Monks Mound at the site of Cahokia in what became Illinois, completed around 1100 AD. It has a base larger than that of the Great Pyramid. Many mounds underwent repeated episodes of expansion. They are believed to have played a central role in the mound-building peoples' religious life. Documented uses include semi-public chief's house platforms, public temple platforms, mortuary platforms, charnel house platforms, earth lodge/town house platforms, residence platforms, square ground and rotunda platforms, and dance platforms. Cultures that built substructure mounds include the Troyville culture, Coles Creek culture, Plaquemine culture and Mississippian cultures. Asia There are many square flat-topped mound tombs in China. The first emperor Qin Shi Huang (who unified the seven pre-imperial kingdoms) was buried under a large mound outside modern-day Xi'an. In the following centuries about a dozen more Han dynasty royal persons were also buried under flat-topped pyramidal earthworks. India Numerous giant, granite temple pyramids were built in South India during the Chola Empire, many of which remain in use. Examples include Brihadisvara Temple at Thanjavur, Brihadisvara Temple at Gangaikonda Cholapuram, and the Airavatesvara Temple at Darasuram. However, the largest temple (by area) is the Ranganathaswamy Temple in Srirangam, Tamil Nadu. The Thanjavur temple was built by Raja Raja Chola in the 11th century. The Brihadisvara Temple was declared a World Heritage Site by UNESCO in 1987; the Temple of Gangaikondacholapuram and the Airavatesvara Temple at Darasuram were added in 2004. Indonesia Austronesian megalithic culture in Indonesia featured earth and stone step pyramid structures called punden berundak. These were discovered in Pangguyangan near Cisolok and in Cipari near Kuningan. The stone pyramids were based on beliefs that mountains and high places were the abode for the spirit of the ancestors. The step pyramid is the basic design of the 8th century Borobudur Buddhist monument in Central Java. However, later Java temples were influenced by Indian Hindu architecture, as exemplified by the spires of Prambanan temple. In the 15th century, during the late Majapahit period, Java saw the revival of indigenous Austronesian elements as displayed by Sukuh temple, which somewhat resembles Mesoamerican pyramids, and also by the stepped pyramids of Mount Penanggungan. East Asia, Southeast Asia and Central Asia In east Asia, Buddhist stupas were usually represented as tall pagodas. However, some pyramidal stupas survive. One theory is that these pyramids were inspired by the Borobudur monument through Sumatran and Javanese monks. A similar Buddhist monument survives in Vrang, Tajikistan. At least nine Buddhist step pyramids survive: four from the former Gyeongsang Province of Korea, three from Japan, one from Indonesia (Borobudur) and one from Tajikistan. Oceania Several pyramids were erected throughout the Pacific islands, such as Puʻukoholā Heiau in Hawaii, the Pulemelei Mound in Samoa, and Nan Madol in Pohnpei. Modern pyramids Two pyramid-shaped tombs were erected in Maudlin's Cemetery, Ireland, c. 1840, belonging to the De Burgh family.
The Louvre Pyramid in Paris, France, in the court of the Louvre Museum, is a 20.6 metre (about 70 foot) glass structure that acts as a museum entrance. It was designed by American architect I. M. Pei and completed in 1989. The Pyramide Inversée (Inverted Pyramid) is displayed in the underground Louvre shopping mall. The Tama-Re village was an Egyptian-themed set of buildings and monuments built near Eatonton, Georgia by Nuwaubians in 1993 that was mostly demolished after it was sold in 2005. The Luxor Hotel in Las Vegas, United States, is a 30-story pyramid. The 32-story Memphis Pyramid (Memphis was named after the Egyptian capital, whose name was derived from the name of one of its pyramids) was built in 1991. It was the home court for the University of Memphis men's basketball program, and the National Basketball Association's Memphis Grizzlies until 2004. It was not regularly used as a sports or entertainment venue after 2007, and in 2015 was re-purposed as a Bass Pro Shops megastore. The Walter Pyramid, home of the basketball and volleyball teams of the California State University, Long Beach, campus in California, United States, is an 18-story-tall blue true pyramid. The 48-story Transamerica Pyramid in San Francisco, California, designed by William Pereira, is a city symbol. The 105-story Ryugyong Hotel is in Pyongyang, North Korea. A former museum/monument in Tirana, Albania is commonly known as the "Pyramid of Tirana". It differs from typical pyramids in having a radial rather than square or rectangular shape, and gently sloped sides that make it short in comparison to the size of its base. The Slovak Radio Building in Bratislava, Slovakia is an inverted pyramid. The Palace of Peace and Reconciliation is in Astana, Kazakhstan. The three pyramids of Moody Gardens are in Galveston, Texas. The Co-Op Bank Pyramid or Stockport Pyramid in Stockport, England is a large pyramid-shaped office block. (The surrounding part of the valley of the upper Mersey has sometimes been called the "Kings Valley" after Egypt's Valley of the Kings.) The Ames Monument in southeastern Wyoming honors the brothers who financed the Union Pacific Railroad. The Trylon, a triangular pyramid, was erected for the 1939 World's Fair in Flushing, Queens and demolished after the Fair closed. The Ballandean Pyramid, at Ballandean in rural Queensland, is a 15-metre folly pyramid made from blocks of local granite. The Karlsruhe Pyramid is a pyramid made of red sandstone, located in the centre of the market square of Karlsruhe, Germany. It was erected in 1823–1825 over the vault of the city's founder, Margrave Charles III William (1679–1738). Muttart Conservatory greenhouses are in Edmonton, Alberta. Sunway Pyramid shopping mall is in Selangor, Malaysia. Hanoi Museum has an overall design of a reversed pyramid. The Ha! Ha! Pyramid by artist Jean-Jules Soucy in La Baie, Quebec is made out of 3,000 give way signs. A pyramid-shaped culture-entertainment complex is in Kazan, Russia. The Time pyramid in Wemding, Germany is a pyramid begun in 1993 and scheduled for completion in the year 3183. Triangle is a proposed skyscraper in Paris. The Shimizu Mega-City Pyramid is a proposed project for construction of a massive pyramid over Tokyo Bay in Japan. The Donkin Memorial was erected on a Xhosa reserve in 1820 by Cape Governor Sir Rufane Shaw Donkin in memory of his late wife Elizabeth, in Port Elizabeth, South Africa. The pyramid is used in many different coats-of-arms associated with Port Elizabeth.
Modern mausoleums With the Egyptian Revival movement in the nineteenth and early twentieth century, pyramids became more common in funerary architecture. The tomb of Quintino Sella, outside the monumental cemetery of Oropa, is pyramid-shaped. This style was popular with tycoons in the US. The Schoenhofen Pyramid Mausoleum (1889) in Chicago and Hunt's Tomb (1930) in Phoenix, Arizona are notable examples. Some people build pyramid tombs for themselves. Nicolas Cage bought a pyramid tomb for himself in a famed New Orleans graveyard. See also List of largest monoliths Lists of pyramids List of pyramid mausoleums in North America Mound Pyramid power Stupa Triadic pyramid Tumulus (burial mound) References Types of monuments and memorials
23705
https://en.wikipedia.org/wiki/Predestination
Predestination
Predestination, in theology, is the doctrine that all events have been willed by God, usually with reference to the eventual fate of the individual soul. Explanations of predestination often seek to address the paradox of free will, whereby God's omniscience seems incompatible with human free will. In this usage, predestination can be regarded as a form of religious determinism; and usually predeterminism, also known as theological determinism. History Pre-Christian period Some have argued that the Book of Enoch contains a deterministic worldview that is combined with dualism. The book of Jubilees seems to harmonize or mix together a doctrine of free will and determinism. Ben Sira affirms free will, where God allows a choice of bad or good before the human and thus they can choose which one to follow. New Testament period There is some disagreement among scholars regarding the views on predestination of first-century AD Judaism, out of which Christianity came. Josephus wrote during the first century that the three main Jewish sects differed on this question. He argued that the Essenes and Pharisees argued that God's providence orders all human events, but the Pharisees still maintained that people are able to choose between right and wrong. He wrote that the Sadducees did not have a doctrine of providence. Biblical scholar N. T. Wright argues that Josephus's portrayal of these groups is incorrect, and that the Jewish debates referenced by Josephus should be seen as having to do with God's work to liberate Israel rather than philosophical questions about predestination. Wright asserts that Essenes were content to wait for God to liberate Israel while Pharisees believed Jews needed to act in cooperation with God. John Barclay responded that Josephus's description was an over-simplification and there were likely to be complex differences between these groups which may have been similar to those described by Josephus. Francis Watson has also argued on the basis of 4 Ezra, a document dated to the first century AD, that Jewish beliefs in predestination are primarily concerned with God's choice to save some individual Jews. However some in the Qumran community possibly believed in predestination, for example 1QS states that "God has caused (his chosen ones) to inherit the lot of the Holy Ones". In the New Testament, Romans 8–11 presents a statement on predestination. In Romans 8:28–30, Paul writes, Biblical scholars have interpreted this passage in several ways. Many say this only has to do with service, and is not about salvation. The Catholic biblical commentator Brendan Byrne wrote that the predestination mentioned in this passage should be interpreted as applied to the Christian community corporately rather than individuals. Another Catholic commentator, Joseph Fitzmyer, wrote that this passage teaches that God has predestined the salvation of all humans. Douglas Moo, a Protestant biblical interpreter, reads the passage as teaching that God has predestined a certain set of people to salvation, and predestined the remainder of humanity to reprobation (damnation). Similarly, Wright's interpretation is that in this passage Paul teaches that God will save those whom he has chosen, but Wright also emphasizes that Paul does not intend to suggest that God has eliminated human free will or responsibility. Instead, Wright asserts, Paul is saying that God's will works through that of humans to accomplish salvation. 
Patristic period Pre-Nicene period Origen, writing in the third century, taught that God's providence extends to every individual. He believed God's predestination was based on God's foreknowledge of every individual's merits, whether in their current life or a previous life. Gill and Gregg Alisson argued that Clement of Rome held to a predestinarian view of salvation. Some verses in the Odes of Solomon, which was made by an Essene convert into Christianity, might possibly suggest a predestinarian worldview, where God chooses who are saved and go into heaven, although there is controversy about what it teaches. The Odes of Solomon talks about God "imprinting a seal on the face of the elect before they existed". The Thomasines saw themselves as children of the light, but the ones who were not part of the elect community were sons of darkness. The Thomasines thus had a belief in a type of election or predestination, they saw themselves as elect because they were born from the light. Valentinus believed in a form of predestination, in his view humans are born into one of three natures, depending on which elements prevail in the person. In the views of Valentinus, a person born with a bad nature can never be saved because they are too inclined into evil, some people have a nature which is a mixture of good and evil, thus they can choose salvation, and others have a good nature, who will be saved, because they will be inclined into good. Irenaeus also attacked the doctrine of predestination set out by Valentinus, arguing that it is unfair. For Irenaeus, humans were free to choose salvation or not. Justin Martyr attacked predestinarian views held by some Greek philosophers. Post-Nicene period Later in the fourth and fifth centuries, Augustine of Hippo (354–430) also taught that God orders all things while preserving human freedom. Prior to 396, Augustine believed that predestination was based on God's foreknowledge of whether individuals would believe, that God's grace was "a reward for human assent". Later, in response to Pelagius, Augustine said that the sin of pride consists in assuming that "we are the ones who choose God or that God chooses us (in his foreknowledge) because of something worthy in us", and argued that it is God's grace that causes the individual act of faith. Scholars are divided over whether Augustine's teaching implies double predestination, or the belief that God chooses some people for damnation as well as some for salvation. Catholic scholars tend to deny that he held such a view while some Protestants and secular scholars affirm that Augustine did believe in double predestination. Augustine's position raised objections. Julian of Eclanum expressed the view that Augustine was bringing Manichean thoughts into the church. For Vincent of Lérins, this was a disturbing innovation. This new tension eventually became obvious with the confrontation between Augustine and Pelagius culminating in condemnation of Pelagianism (as interpreted by Augustine) at the Council of Ephesus in 431. Pelagius denied Augustine's view of predestination in order to affirm that salvation is achieved by an act of free will. The Council of Arles in the late fifth century condemned the position "that some have been condemned to death, others have been predestined to life", though this may seem to follow from Augustine's teaching. The Second Council of Orange in 529 also condemned the position that "some have been truly predestined to evil by divine power". 
In the eighth century, John of Damascus emphasized the freedom of the human will in his doctrine of predestination, and argued that acts arising from peoples' wills are not part of God's providence at all. Damascene teaches that people's good actions are done in cooperation with God, but are not caused by him. Prosper of Aquitaine (390 – c. 455 AD) defended Augustine's view of predestination against semi-Pelagians. Marius Mercator, who was a pupil of Augustine, wrote five books against Pelagianism and one book about predestination. Fulgentius of Ruspe and Caesarius of Arles rejected the view that God gives free choice to believe and instead believed in predestination. Cassian believed that despite predestination being a work that God does, God only decides to predestinate based on how human beings will respond. Augustine himself stated thus:And thus Christ's Church has never failed to hold the faith of this predestination, which is now being defended with new solicitude against these modern heretics – Augustine. Middle Ages Gottschalk of Orbais, a ninth-century Saxon monk, argued that God predestines some people to hell as well as predestining some to heaven, a view known as double predestination. He was condemned by several synods, but his views remained popular. Irish theologian John Scotus Eriugena wrote a refutation of Gottschalk. Eriugena abandoned Augustine's teaching on predestination. He wrote that God's predestination should be equated with his foreknowledge of people's choices. In the thirteenth century, Thomas Aquinas taught that God predestines certain people to the beatific vision based solely on his own goodness rather than that of creatures. Aquinas also believed that people are free in their choices, fully cause their own sin, and are solely responsible for it. According to Aquinas, there are several ways in which God wills actions. He directly wills the good, indirectly wills evil consequences of good things, and only permits evil. Aquinas held that in permitting evil, God does not will it to be done or not to be done. In the thirteenth century, William of Ockham taught that God does not cause human choices and equated predestination with divine foreknowledge. Though Ockham taught that God predestines based on people's foreseen works, he maintained that God's will was not constrained to do this. Medieval theologians who believed in predestination include: Ratramnus (died 868), Thomas Bradwardine (1300–1349), Gregory of Rimini (1300–1358), John Wycliffe (1320s–1384), Johann Ruchrat von Wesel (died 1481), Girolamo Savonarola (1452–1498) and Johannes von Staupitz (1460–1524). The medieval Cathars denied the free will of humans. Reformation John Calvin rejected the idea that God permits rather than actively decrees the damnation of sinners, as well as other evil. Calvin did not believe God to be guilty of sin, but rather he considered God inflicting sin upon his creations to be an unfathomable mystery. Though he maintained God's predestination applies to damnation as well as salvation, he taught that the damnation of the damned is caused by their sin, but that the salvation of the saved is solely caused by God. Other Protestant Reformers, including Huldrych Zwingli, also held double predestinarian views. Views of Christian branches Eastern Orthodoxy The Eastern Orthodox view was summarized by Bishop Theophan the Recluse in response to the question, "What is the relationship between the Divine provision and our free will?" 
Roman Catholicism Roman Catholicism teaches the doctrine of predestination. The Catechism of the Catholic Church says, "To God, all moments of time are present in their immediacy. When therefore He establishes His eternal plan of 'predestination', He includes in it each person's free response to his grace." Therefore, in the Roman Catholic conception of predestination, free will is not denied. However, Roman Catholic theology has discouraged beliefs that it is possible for anyone to know or predict anything about the operation and outcomes of predestination, and therefore it normally plays a very small role in Roman Catholic thinking. The heretical seventeenth and eighteenth centuries sect within Roman Catholicism known as Jansenism preached the doctrine of double predestination, although Jansenism claimed that even members of the saved elect could lose their salvation by doing sinful, un-repented deeds, as implied in Ezekiel 18:21–28 in the Old Testament of the Bible. According to the Roman Catholic Church, God does not will anyone to mortally sin and so to deserve punishment in hell. Pope John Paul II wrote: Augustine of Hippo laid the foundation for much of the later Roman Catholic teaching on predestination. His teachings on grace and free will were largely adopted by the Second Council of Orange (529), whose decrees were directed against the Semipelagians. Augustine wrote, Augustine also teaches that people have free will. For example, in "On Grace and Free Will", (see especially chapters II–IV) Augustine states that "He [God] has revealed to us, through His Holy Scriptures, that there is in man a free choice of will," and that "God's precepts themselves would be of no use to a man unless he had free choice of will, so that by performing them he might obtain the promised rewards." (chap. II) Thomas Aquinas' views concerning predestination are largely in agreement with Augustine and can be summarized by many of his writings in his Summa Theologiæ: Protestantism Comparison This table summarizes the classical views of three different Protestant beliefs. Lutheranism Lutherans historically hold to unconditional election to salvation. However, some do not believe that there are certain people that are predestined to salvation, but salvation is predestined for those who seek God. Lutherans believe Christians should be assured that they are among the predestined. However, they disagree with those who make predestination the source of salvation rather than Christ's suffering, death, and resurrection. Unlike some Calvinists, Lutherans do not believe in a predestination to damnation. Instead, Lutherans teach eternal damnation is a result of the unbeliever's rejection of the forgiveness of sins and unbelief. Martin Luther's attitude towards predestination is set out in his On the Bondage of the Will, published in 1525. This publication by Luther was in response to the published treatise by Desiderius Erasmus in 1524 known as On Free Will. Calvinism The Belgic Confession of 1561 affirmed that God "delivers and preserves" from perdition "all whom he, in his eternal and unchangeable council, of mere goodness hath elected in Christ Jesus our Lord, without respect to their works" (Article XVI). Calvinists believe that God picked those whom he will save and bring with him to Heaven before the world was created. They also believe that those people God does not save will go to Hell. 
John Calvin thought people who were saved could never lose their salvation and the "elect" (those God saved) would know they were saved because of their actions. In this common, loose sense of the term, to affirm or to deny predestination has particular reference to the Calvinist doctrine of unconditional election. In the Calvinist interpretation of the Bible, this doctrine normally has only pastoral value related to the assurance of salvation and the absolution of salvation by grace alone. However, the philosophical implications of the doctrine of election and predestination are sometimes discussed beyond these systematic bounds. Under the topic of the doctrine of God (theology proper), the predestinating decision of God cannot be contingent upon anything outside of himself, because all other things are dependent upon him for existence and meaning. Under the topic of the doctrines of salvation (soteriology), the predestinating decision of God is made from God's knowledge of his own will (Romans 9:15), and is therefore not contingent upon human decisions (rather, free human decisions are outworkings of the decision of God, which sets the total reality within which those decisions are made in exhaustive detail: that is, nothing left to chance). Calvinists do not pretend to understand how this works; but they are insistent that the Scriptures teach both the sovereign control of God and the responsibility and freedom of human decisions. Calvinist groups use the term Hyper-Calvinism to describe Calvinistic systems that assert without qualification that God's intention to destroy some is equal to his intention to save others. Some forms of Hyper-Calvinism have racial implications, as when Dutch Calvinist theologian Franciscus Gomarus argued that Jews, because of their refusal to worship Jesus Christ, were members of the non-elect, as also argued by John Calvin himself, based on I John 2:22–23 in The New Testament of the Bible. Some Dutch settlers in South Africa argued that black people were sons of Ham, whom Noah had cursed to be slaves, according to Genesis 9:18–19, or drew analogies between them and the Canaanites, suggesting a "chosen people" ideology similar to that espoused by proponents of the Jewish nation. This justified racial hierarchy on earth, as well as racial segregation of congregations, but did not exclude blacks from being part of the elect. Other Calvinists vigorously objected to these arguments (see Afrikaner Calvinism). Expressed sympathetically, the Calvinist doctrine is that God has mercy or withholds it, with particular consciousness of who are to be the recipients of mercy in Christ. Therefore, the particular persons are chosen, out of the total number of human beings, who will be rescued from enslavement to sin and the fear of death, and from punishment due to sin, to dwell forever in his presence. Those who are being saved are assured through the gifts of faith, the sacraments, and communion with God through prayer and increase of good works, that their reconciliation with him through Christ is settled by the sovereign determination of God's will. God also has particular consciousness of those who are passed over by his selection, who are without excuse for their rebellion against him, and will be judged for their sins. Calvinists typically divide on the issue of predestination into infralapsarians (sometimes called 'sublapsarians') and supralapsarians. 
Infralapsarians interpret the biblical election of God to highlight his love (1 John 4:8; Ephesians 1:4b–5a) and chose his elect considering the situation after the Fall, while supralapsarians interpret biblical election to highlight God's sovereignty (Romans 9:16) and that the Fall was ordained by God's decree of election. In infralapsarianism, election is God's response to the Fall, while in supralapsarianism the Fall is part of God's plan for election. In spite of the division, many Calvinist theologians would consider the debate surrounding the infra- and supralapsarian positions one in which scant Scriptural evidence can be mustered in either direction, and that, at any rate, has little effect on the overall doctrine. Some Calvinists decline to describe the eternal decree of God in terms of a sequence of events or thoughts, and many caution against the simplifications involved in describing any action of God in speculative terms. Most make distinctions between the positive manner in which God chooses some to be recipients of grace, and the manner in which grace is consciously withheld so that some are destined for everlasting punishments. Debate concerning predestination according to the common usage concerns the destiny of the damned: whether God is just if that destiny is settled prior to the existence of any actual volition of the individual, and whether the individual is in any meaningful sense responsible for his destiny if it is settled by the eternal action of God. Arminianism At the beginning of the 17th century, the Dutch theologian Jacobus Arminius formulated Arminianism and disagreed with Calvin in particular on election and predestination. Arminianism is defined by God's limited mode of providence. This mode of providence affirms the compatibility between human free will and divine foreknowledge, but its incompatibility with theological determinism. Thus predestination in Arminianism is based on divine foreknowledge, unlike in Calvinism. It is therefore a predestination by foreknowledge. From this perspective, comes the notion of a conditional election on the one who wills to have faith in God for salvation. This means that God does not predetermine, but instead infallibly knows who will believe and perseveringly be saved. Although God knows from the beginning of the world who will go where, the choice is still with the individual. The Church of Jesus Christ of Latter-day Saints The Church of Jesus Christ of Latter-day Saints (LDS Church) rejects the doctrine of predestination, but does believe in foreordination. Foreordination, an important doctrine of the LDS Church, teaches that during the pre-mortal existence, God selected ("foreordained") particular people to fulfill certain missions ("callings") during their mortal lives. For example, prophets were foreordained to be the Lord's servants (see Jeremiah 1:5), all who receive the priesthood were foreordained to that calling, and Jesus was foreordained to enact the atonement. However, all such persons foreordained retain their agency in mortality to fulfill that foreordination or not. The LDS Church teaches the doctrine of moral agency, the ability to choose and act for oneself, and decide whether to accept Christ's atonement. Types of predestination Conditional election Conditional election is the belief that God chooses for eternal salvation those whom he foresees will have faith in Christ. This belief emphasizes the importance of a person's free will. 
The counter-view is known as unconditional election, and is the belief that God chooses whomever he will, based solely on his purposes and apart from an individual's free will. It has long been an issue in Calvinist–Arminian debate. An alternative viewpoint is Corporate election, which distinguishes God's election and predestination for corporate entities such as the community "in Christ," and individuals who can benefit from that community's election and predestination so long as they continue belonging to that community. Supralapsarianism and infralapsarianism Infralapsarianism (also called sublapsarianism) holds that predestination logically coincides with the preordination of Man's fall into sin. That is, God predestined sinful men for salvation. Therefore, according to this view, God is the ultimate cause, but not the proximate source or "author" of sin. Infralapsarians often emphasize a difference between God's decree (which is inviolable and inscrutable), and his revealed will (against which man is disobedient). Proponents also typically emphasize the grace and mercy of God toward all men, although teaching also that only some are predestined for salvation. In common English parlance, the doctrine of predestination often has particular reference to the doctrines of Calvinism. The version of predestination espoused by John Calvin, after whom Calvinism is named, is sometimes referred to as "double predestination" because in it God predestines some people for salvation (i.e. unconditional election) and some for condemnation (i.e. Reprobation) which results by allowing the individual's own sins to condemn them. Calvin himself defines predestination as "the eternal decree of God, by which he determined with himself whatever he wished to happen with regard to every man. Not all are created on equal terms, but some are preordained to eternal life, others to eternal damnation; and, accordingly, as each has been created for one or other of these ends, we say that he has been predestined to life or to death." On the spectrum of beliefs concerning predestination, Calvinism is the strongest form among Christians. It teaches that God's predestining decision is based on the knowledge of his own will rather than foreknowledge, concerning every particular person and event; and, God continually acts with entire freedom, in order to bring about his will in completeness, but in such a way that the freedom of the creature is not violated, "but rather, established". Calvinists who hold the infralapsarian view of predestination usually prefer that term to "sublapsarianism," perhaps with the intent of blocking the inference that they believe predestination is on the basis of foreknowledge (sublapsarian meaning, assuming the fall into sin). The different terminology has the benefit of distinguishing the Calvinist double predestination version of infralapsarianism from Lutheranism's view that predestination is a mystery, which forbids the unprofitable intrusion of prying minds since God only reveals partial knowledge to the human race. Supralapsarianism is the doctrine that God's decree of predestination for salvation and reprobation logically precedes his preordination of the human race's fall into sin. That is, God decided to save, and to damn; he then determined the means by which that would be made possible. It is a matter of controversy whether or not Calvin himself held this view, but most scholars link him with the infralapsarian position. 
It is known, however, that Calvin's successor in Geneva, Theodore Beza, held to the supralapsarian view.

Double predestination

Double predestination, or the double decree, is the doctrine that God actively reprobates, or decrees the damnation of, some, as well as decreeing salvation for those whom he has elected. During the Protestant Reformation, John Calvin held this double-predestinarian view: "By predestination we mean the eternal decree of God, by which he determined with himself whatever he wished to happen with regard to every man. All are not created on equal terms, but some are preordained to eternal life, others to eternal damnation; and, accordingly, as each has been created for one or other of these ends, we say that he has been predestinated to life or to death."

Gottschalk of Orbais taught double predestination explicitly in the ninth century, and Gregory of Rimini in the fourteenth. Some trace the doctrine to statements made by Augustine in the early fifth century that on their own also seem to teach double predestination, but in the context of his other writings it is not clear whether he held this view. In The City of God, Augustine describes all of humanity as predestined for salvation (i.e., the city of God) or damnation (i.e., the earthly city of man); but he also held that all human beings were born "reprobate" but "need not necessarily remain" in that state of reprobation.

Corporate election

Corporate election is a non-traditional Arminian view of election. In corporate election, God does not choose which individuals he will save prior to creation, but rather chooses the church as a whole; put differently, God chooses what type of individuals he will save. Another way the New Testament puts this is to say that God chose the church in Christ (Eph. 1:4). In other words, God chose from all eternity to save all those who would be found in Christ, by faith in God. This choosing is not primarily about salvation from eternal destruction but about God's chosen agency in the world. Individuals therefore have full freedom in terms of whether they become members of the church or not. Corporate election is thus consistent with the open view's position on God's omniscience, which states that God's foreknowledge does not determine the outcomes of individual free will.

Middle Knowledge

Middle Knowledge is a concept developed by the Jesuit theologian Luis de Molina and belongs to the doctrine called Molinism. It attempts to deal with the topic of predestination by reconciling God's sovereign providence with the notion of libertarian free will. The concept of Middle Knowledge holds that God has knowledge of true pre-volitional counterfactuals for all free creatures; that is, God knows what any individual creature with free will (e.g. a human) would do under any given circumstance. God's knowledge of counterfactuals is reasoned to occur logically prior to his divine creative decree (that is, prior to creation) and after his knowledge of necessary truths. Thus, Middle Knowledge holds that before the world was created, God knew what every creature capable of libertarian freedom (e.g. every individual human) would freely choose to do in all possible circumstances. It then holds that, based on this information, God elected from among these possible worlds the world most consistent with his ultimate will, which is the actual world we live in.
For example: if Free Creature A were placed in Circumstance B, God, via his Middle Knowledge, would know that Free Creature A would freely choose option Y over option Z; if Free Creature A were placed in Circumstance C, God would know that Free Creature A would freely choose option Z over option Y. Based on this Middle Knowledge, God has the ability to actualise a world in which A is placed in a circumstance in which he freely chooses what is consistent with God's ultimate will. If God determined that the world most suited to his purposes is one in which A would freely choose Y instead of Z, God can actualise a world in which Free Creature A finds himself in Circumstance B.

In this way, Middle Knowledge is thought by its proponents to be consistent with theological doctrines that assert both divine providence and libertarian human freedom (e.g. Calvinism, Catholicism, Lutheranism), and to offer a potential solution to the concern that God's providence somehow nullifies man's true liberty in his choices.

See also

Biochemical Predestination
Clockwork universe
Eternal security
Fatalism
Jansenism
Oedipus Tyrannus and Greek tragedy
Predestination (film)
Predestination in Calvinism
Predestination in Islam
Providentialism
Theological determinism
The Protestant Ethic and the Spirit of Capitalism
Vocation

References

Citations

Sources

Further reading

Leif Dixon, Practical Predestinarians in England, c. 1590–1640. Farnham: Ashgate, 2013. Book review at Reviews in History.
Akin, James. The Salvation Controversy. San Diego, Calif.: Catholic Answers, 2001. Vid. pp. 77, 83–87, explaining the resemblances of this Catholic dogma with, and the divergences from, the teaching of Calvin and Luther on this matter.
Garrigou-Lagrange, Réginald. Predestination. Rockford, Ill.: TAN Books, 1998, cop. 1939. N.B.: Trans. of the author's La Prédestination des saints et la grâce; reprint of the 1939 ed. of the trans. published by G. Herder Book Co., Saint Louis, Mo.
___. "John Plaifere (d.1632) on Conditional Predestination: A Well-mixed Version of scientia media and Resistible Grace." Reformation & Renaissance Review 18.2 (2016): 155–173.

External links

"Determinism in Theology: Predestination" by Robert M. Kindon in The Dictionary of the History of Ideas (1973–1974)
Predestination – "The question asked was does God know the future and how we will turn out."
Understanding Predestination in Islam
Detailed Lecture on Islamic Perspective on Fate
Occurrences of "predestination" in the Bible text (ESV)
The Reformed Doctrine of Predestination (1932) by Loraine Boettner (conservative Calvinist perspective)
The Biblical Doctrine Of Predestination, Foreordination, and Election by F. Furman Kearley (Arminian perspective)
"Predestination" from The Catholic Encyclopedia (1913)
Academic articles on predestination and election (Lutheran perspective)
Predestination and Free Will – Overview of the concept of predestination from the Protestant and Catholic perspectives
On the Presuppositions of our Personal Salvation – Grace and predestination from the Orthodox perspective

Calvinist theology
Catholic theology and doctrine
Christian philosophy
Christian soteriology
Christian terminology
Destiny
Religious philosophical concepts
Determinism
23706
https://en.wikipedia.org/wiki/Primitive%20notion
Primitive notion
In mathematics, logic, philosophy, and formal systems, a primitive notion is a concept that is not defined in terms of previously defined concepts. It is often motivated informally, usually by an appeal to intuition and everyday experience. In an axiomatic theory, relations between primitive notions are restricted by axioms. Some authors refer to the latter as "defining" primitive notions by one or more axioms, but this can be misleading. Formal theories cannot dispense with primitive notions, under pain of infinite regress (per the regress problem).

For example, in contemporary geometry, point, line, and contains are some primitive notions. Instead of attempting to define them, their interplay is ruled (in Hilbert's axiom system) by axioms like "For every two points there exists a line that contains them both" (a formal sketch of this example is given at the end of this article).

Details

Alfred Tarski explained the role of primitive notions as follows:

When we set out to construct a given discipline, we distinguish, first of all, a certain small group of expressions of this discipline that seem to us to be immediately understandable; the expressions in this group we call PRIMITIVE TERMS or UNDEFINED TERMS, and we employ them without explaining their meanings. At the same time we adopt the principle: not to employ any of the other expressions of the discipline under consideration, unless its meaning has first been determined with the help of primitive terms and of such expressions of the discipline whose meanings have been explained previously. The sentence which determines the meaning of a term in this way is called a DEFINITION, ...

An inevitable regress to primitive notions in the theory of knowledge was explained by Gilbert de B. Robinson:

To a non-mathematician it often comes as a surprise that it is impossible to define explicitly all the terms which are used. This is not a superficial problem but lies at the root of all knowledge; it is necessary to begin somewhere, and to make progress one must clearly state those elements and relations which are undefined and those properties which are taken for granted.

Examples

The necessity for primitive notions is illustrated in several axiomatic foundations in mathematics:

Set theory: The concept of the set is an example of a primitive notion. As Mary Tiles writes: "[The] 'definition' of 'set' is less a definition than an attempt at explication of something which is being given the status of a primitive, undefined, term." As evidence, she quotes Felix Hausdorff: "A set is formed by the grouping together of single objects into a whole. A set is a plurality thought of as a unit."
Naive set theory: The empty set is a primitive notion. To assert that it exists would be an implicit axiom.
Peano arithmetic: The successor function and the number zero are primitive notions. Since Peano arithmetic concerns the properties of the numbers, the objects that the primitive notions represent may not strictly matter.
Arithmetic of real numbers: Typically, the primitive notions are: real number; the two binary operations, addition and multiplication; the numbers 0 and 1; and the ordering <.
Axiomatic systems: The primitive notions depend upon the set of axioms chosen for the system. Alessandro Padoa discussed this selection at the International Congress of Philosophy in Paris in 1900. The notions themselves may not necessarily need to be stated; Susan Haack (1978) writes, "A set of axioms is sometimes said to give an implicit definition of its primitive terms."
Euclidean geometry: Under Hilbert's axiom system the primitive notions are point, line, plane, congruence, betweenness, and incidence.
Euclidean geometry: Under Peano's axiom system the primitive notions are point, segment, and motion.

Russell's primitives

In his book on the philosophy of mathematics, The Principles of Mathematics, Bertrand Russell used the following notions: for the class-calculus (set theory), he used relations, taking set membership as a primitive notion. To establish sets, he also takes propositional functions as primitive, as well as the phrase "such that" as used in set-builder notation. (pp. 18–19) Regarding relations, Russell takes as primitive notions the converse relation and the complementary relation of a given xRy. Furthermore, logical products of relations and relative products of relations are primitive. (p. 25) As for the denotation of objects by description, Russell acknowledges that a primitive notion is involved. (p. 27)

The thesis of Russell's book is "Pure mathematics uses only a few notions, and these are logical constants." (p. xxi)

See also

Axiomatic set theory
Foundations of geometry
Foundations of mathematics
Logical atomism
Logical constant
Mathematical logic
Notion (philosophy)
Natural semantic metalanguage

References

Philosophy of logic
Set theory
Concepts in logic
Mathematical concepts
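The following sketch is not drawn from the article's sources; it assumes Lean 4 syntax, and the names Point, Line, contains, and line_through_two_points are chosen only for illustration. It shows the point made in the lead in concrete form: primitive notions are introduced without definition, and an axiom merely constrains how they interrelate.

-- Primitive notions declared as undefined constants (Lean 4 sketch):
axiom Point : Type                     -- "point" is left undefined
axiom Line : Type                      -- "line" is left undefined
axiom contains : Line → Point → Prop   -- the primitive incidence relation

-- The axiom restricts how the primitives interrelate without defining them,
-- mirroring "For every two points there exists a line that contains them both":
axiom line_through_two_points :
  ∀ p q : Point, ∃ l : Line, contains l p ∧ contains l q

Nothing in these declarations fixes what points or lines "are"; any structure supplying a Point type, a Line type, and a contains relation that satisfies the axiom is an admissible interpretation, which is the sense in which the notions remain primitive.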