Dataset Viewer
Columns: source (string, lengths 33–227), text (string, lengths 12–2k), categories (string, lengths 2–5.87k)
https://en.wikipedia.org/wiki/International%20Atomic%20Time
International Atomic Time (abbreviated TAI, from its French name temps atomique international) is a high-precision atomic coordinate time standard based on the notional passage of proper time on Earth's geoid. TAI is a weighted average of the time kept by over 450 atomic clocks in over 80 national laboratories worldwide. It is a continuous scale of time, without leap seconds, and it is the principal realisation of Terrestrial Time (with a fixed offset of epoch). It is the basis for Coordinated Universal Time (UTC), which is used for civil timekeeping all over the Earth's surface and which has leap seconds. UTC deviates from TAI by a number of whole seconds. Since the most recent leap second was put into effect, UTC has been exactly 37 seconds behind TAI. The 37 seconds result from the initial difference of 10 seconds at the start of 1972, plus 27 leap seconds in UTC since 1972. In 2022, the General Conference on Weights and Measures decided to abandon the leap second by or before 2035, at which point the difference between TAI and UTC will remain fixed. TAI may be reported using traditional means of specifying days, carried over from non-uniform time standards based on the rotation of the Earth. Specifically, both Julian days and the Gregorian calendar are used. TAI in this form was synchronised with Universal Time at the beginning of 1958, and the two have drifted apart ever since, due primarily to the slowing rotation of the Earth. Operation TAI is a weighted average of the time kept by over 450 atomic clocks in over 80 national laboratories worldwide. The majority of the clocks involved are caesium clocks; the International System of Units (SI) definition of the second is based on caesium. The clocks are compared using GPS signals and two-way satellite time and frequency transfer. Due to the signal averaging, TAI is an order of magnitude more stable than its best constituent clock. The participating institutions each broadcast, in real time, a frequency signal with timeco
Time scales
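A minimal sketch of applying the fixed whole-second offset described in the entry above. The 37-second value (10 s initial difference plus 27 leap seconds) comes from the text; the helper function and its name are illustrative, and a real conversion would consult the full leap-second table.

```python
from datetime import datetime, timedelta

# Whole-second offset TAI - UTC described in the entry above:
# 10 s initial difference at the start of 1972 plus 27 leap seconds since.
TAI_MINUS_UTC_SECONDS = 10 + 27  # = 37 s, valid after the most recent leap second

def utc_to_tai(utc: datetime) -> datetime:
    """Convert a UTC timestamp to TAI using the current fixed offset.

    Illustrative only: a real implementation needs the full leap-second
    table, since the offset was smaller before each past leap second.
    """
    return utc + timedelta(seconds=TAI_MINUS_UTC_SECONDS)

if __name__ == "__main__":
    utc_now = datetime(2024, 1, 1, 0, 0, 0)
    print("UTC:", utc_now.isoformat())
    print("TAI:", utc_to_tai(utc_now).isoformat())  # 37 s ahead of UTC
```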
https://en.wikipedia.org/wiki/Astronomer
An astronomer is a scientist in the field of astronomy who focuses on a specific question or field outside the scope of Earth. Astronomers observe astronomical objects, such as stars, planets, moons, comets and galaxies – in either observational (by analyzing the data) or theoretical astronomy. Examples of topics or fields astronomers study include planetary science, solar astronomy, the origin or evolution of stars, or the formation of galaxies. A related but distinct subject is physical cosmology, which studies the Universe as a whole. Types Astronomers typically fall under either of two main types: observational and theoretical. Observational astronomers make direct observations of celestial objects and analyze the data. In contrast, theoretical astronomers create and investigate models of things that cannot be observed. Because it takes millions to billions of years for a system of stars or a galaxy to complete a life cycle, astronomers must observe snapshots of different systems at unique points in their evolution to determine how they form, evolve, and die. They use this data to create models or simulations to theorize how different celestial objects work. Further subcategories under these two main branches of astronomy include planetary astronomy, astrobiology, stellar astronomy, astrometry, galactic astronomy, extragalactic astronomy, or physical cosmology. Astronomers can also specialize in certain specialties of observational astronomy, such as infrared astronomy, neutrino astronomy, x-ray astronomy, and gravitational-wave astronomy. Academic History Historically, astronomy was more concerned with the classification and description of phenomena in the sky, while astrophysics attempted to explain these phenomena and the differences between them using physical laws. Today, that distinction has mostly disappeared and the terms "astronomer" and "astrophysicist" are interchangeable. Professional astronomers are highly educated individuals who typically hav
;Astronomy;Science occupations
https://en.wikipedia.org/wiki/Atomic%20number
The atomic number or nuclear charge number (symbol Z) of a chemical element is the charge number of its atomic nucleus. For ordinary nuclei composed of protons and neutrons, this is equal to the proton number (np) or the number of protons found in the nucleus of every atom of that element. The atomic number can be used to uniquely identify ordinary chemical elements. In an ordinary uncharged atom, the atomic number is also equal to the number of electrons. For an ordinary atom which contains protons, neutrons and electrons, the sum of the atomic number Z and the neutron number N gives the atom's atomic mass number A. Since protons and neutrons have approximately the same mass (and the mass of the electrons is negligible for many purposes) and the mass defect of the nucleon binding is always small compared to the nucleon mass, the atomic mass of any atom, when expressed in daltons (making a quantity called the "relative isotopic mass"), is within 1% of the whole number A. Atoms with the same atomic number but different neutron numbers, and hence different mass numbers, are known as isotopes. A little more than three-quarters of naturally occurring elements exist as a mixture of isotopes (see monoisotopic elements), and the average isotopic mass of an isotopic mixture for an element (called the relative atomic mass) in a defined environment on Earth determines the element's standard atomic weight. Historically, it was these atomic weights of elements (in comparison to hydrogen) that were the quantities measurable by chemists in the 19th century. The conventional symbol Z comes from the German word Zahl ('number'), which, before the modern synthesis of ideas from chemistry and physics, merely denoted an element's numerical place in the periodic table, whose order was then approximately, but not completely, consistent with the order of the elements by atomic weights. Only after 1915, with the suggestion and evidence that this Z number was also the nuclear charge and a phys
Atoms;Chemical properties;Dimensionless numbers of chemistry;Nuclear physics;Numbers
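A small worked check of the relations in the atomic-number entry above: the mass number A = Z + N, and the measured isotopic mass in daltons falls within 1% of A. Oxygen-16 and its reference mass are assumptions chosen for illustration.

```python
# Mass number A is the sum of the proton number Z and neutron number N.
def mass_number(Z: int, N: int) -> int:
    return Z + N

# Oxygen-16: Z = 8, N = 8, measured relative isotopic mass ~15.9949 Da (reference value).
Z, N = 8, 8
A = mass_number(Z, N)
measured_mass_da = 15.9949

deviation = abs(measured_mass_da - A) / A
print(f"A = {A}, measured mass = {measured_mass_da} Da")
print(f"relative deviation = {deviation:.4%}")  # well within 1%, as the entry states
```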
https://en.wikipedia.org/wiki/Aardwolf
The aardwolf (Proteles cristatus) is an insectivorous hyaenid species, native to East and Southern Africa. Its name means "earth-wolf" in Afrikaans and Dutch. It is also called the maanhaar-jackal (Afrikaans for "mane-jackal"), termite-eating hyena and civet hyena, based on its habit of secreting substances from its anal gland, a characteristic shared with the African civet. Unlike many of its relatives in the order Carnivora, the aardwolf does not hunt large animals. It eats insects and their larvae, mainly termites; one aardwolf can lap up as many as 300,000 termites during a single night using its long, sticky tongue. The aardwolf's tongue has adapted to be tough enough to withstand the strong bite of termites. The aardwolf lives in the shrublands of eastern and southern Africa – open lands covered with stunted trees and shrubs. It is nocturnal, resting in burrows during the day and emerging at night to seek food. Taxonomy The aardwolf is generally classified as part of the hyena family Hyaenidae. However, it was formerly placed in its own family Protelidae. Early on, scientists felt that it was merely mimicking the striped hyena, which subsequently led to the creation of Protelidae. Recent studies have suggested that the aardwolf probably diverged from other hyaenids early on; how early is still unclear, as the fossil record and genetic studies disagree by 10 million years. The aardwolf is the only surviving species in the subfamily Protelinae. There is disagreement as to whether the species is monotypic, or can be divided into subspecies. A 2021 study found the genetic differences in eastern and southern aardwolves may be pronounced enough to categorize them as separate species. A 2006 molecular analysis indicates it is phylogenetically the most basal of the four extant Hyaenidae species. Etymology The generic name Proteles is derived from two words of Greek origin, prōtos and téleios, which combined mean "complete in front", referring to the aardwolf's five toe
Carnivorans of Africa;Fauna of East Africa;Hyenas;Mammals described in 1783;Mammals of Africa;Mammals of Southern Africa;Myrmecophagous mammals;Nocturnal animals;Taxa named by Anders Sparrman
https://en.wikipedia.org/wiki/Adobe
Adobe is a building material made from earth and organic materials. Adobe is Spanish for mudbrick. In some English-speaking regions of Spanish heritage, such as the Southwestern United States, the term is used to refer to any kind of earthen construction, or various architectural styles like Pueblo Revival or Territorial Revival. Most adobe buildings are similar in appearance to cob and rammed earth buildings. Adobe is among the earliest building materials, and is used throughout the world. Adobe architecture has been dated to before 5,100 BP. Description Adobe bricks are rectangular prisms small enough that they can quickly air dry individually without cracking. They can be subsequently assembled, with the application of adobe mud to bond the individual bricks into a structure. There is no standard size, with substantial variations over the years and in different regions; popular sizes and weights also differ from area to area. Above a certain size and weight it becomes difficult to move the pieces, and it is preferred to ram the mud in situ, resulting in a different typology known as rammed earth. Strength In dry climates, adobe structures are extremely durable, and account for some of the oldest existing buildings in the world. Adobe buildings offer significant advantages due to their greater thermal mass, but they are known to be particularly susceptible to earthquake damage if they are not reinforced. Cases where adobe structures were widely damaged during earthquakes include the 1976 Guatemala earthquake, the 2003 Bam earthquake, and the 2010 Chile earthquake. Distribution Buildings made of sun-dried earth are common throughout the world (the Middle East, Western Asia, North Africa, West Africa, South America, Southwestern North America, and Southwestern and Eastern Europe). Adobe had been in use by indigenous peoples of the Americas in the Southwestern United States, Mesoamerica, and the Ande
Adobe buildings and structures;Appropriate technology;Masonry;Soil-based building materials;Sustainable building;Vernacular architecture
https://en.wikipedia.org/wiki/Analog%20signal
An analog signal (American English) or analogue signal (British and Commonwealth English) is any continuous-time signal representing some other quantity, i.e., analogous to another quantity. For example, in an analog audio signal, the instantaneous signal voltage varies continuously with the pressure of the sound waves. In contrast, a digital signal represents the original time-varying quantity as a sampled sequence of quantized values. Digital sampling imposes some bandwidth and dynamic range constraints on the representation and adds quantization noise. The term analog signal usually refers to electrical signals; however, mechanical, pneumatic, hydraulic, and other systems may also convey or be considered analog signals. Representation An analog signal uses some property of the medium to convey the signal's information. For example, an aneroid barometer uses rotary position as the signal to convey pressure information. In an electrical signal, the voltage, current, or frequency of the signal may be varied to represent the information. Any information may be conveyed by an analog signal; such a signal may be a measured response to changes in a physical variable, such as sound, light, temperature, position, or pressure. The physical variable is converted to an analog signal by a transducer. For example, sound striking the diaphragm of a microphone induces corresponding fluctuations in the current produced by a coil in an electromagnetic microphone or the voltage produced by a condenser microphone. The voltage or the current is said to be an analog of the sound. Noise An analog signal is subject to electronic noise and distortion introduced by communication channels, recording and signal processing operations, which can progressively degrade the signal-to-noise ratio (SNR). As the signal is transmitted, copied, or processed, the unavoidable noise introduced in the signal path will accumulate as a generation loss, progressively and irreversibly degrading the SNR
Analog circuits;Electronic design;Television terminology;Video signal
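A short sketch of the contrast the entry above draws between analog and digital representation: a continuous-time signal is sampled and uniformly quantized, introducing quantization error. The sample rate, bit depth, and test tone are arbitrary illustrative choices.

```python
import math

# "Analog" source: instantaneous value varies continuously with time.
def analog_signal(t: float) -> float:
    return math.sin(2 * math.pi * 5.0 * t)  # 5 Hz tone

sample_rate = 50          # samples per second (illustrative)
bits = 4                  # quantizer resolution (illustrative)
levels = 2 ** bits

samples = [analog_signal(n / sample_rate) for n in range(sample_rate)]  # one second of samples

# Uniform quantization to `levels` steps over [-1, 1]; the rounding error
# is the quantization noise mentioned in the entry.
def quantize(x: float) -> float:
    step = 2.0 / (levels - 1)
    return round((x + 1.0) / step) * step - 1.0

quantized = [quantize(x) for x in samples]
max_error = max(abs(a - b) for a, b in zip(samples, quantized))
print(f"{len(samples)} samples, {bits}-bit quantizer, max quantization error = {max_error:.4f}")
```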
https://en.wikipedia.org/wiki/Analysis
Analysis (plural: analyses) is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it. The technique has been applied in the study of mathematics and logic since before Aristotle (384–322 BC), though analysis as a formal concept is a relatively recent development. The word comes from the Ancient Greek analysis ("a breaking-up" or "an untying"), from ana- "up, throughout" and lysis "a loosening". From it also comes the word's plural, analyses. As a formal concept, the method has variously been ascribed to René Descartes (Discourse on the Method), and Galileo Galilei. It has also been ascribed to Isaac Newton, in the form of a practical method of physical discovery (which he did not name). The converse of analysis is synthesis: putting the pieces back together again in a new or different whole. Science and technology Chemistry The field of chemistry uses analysis in three ways: to identify the components of a particular chemical compound (qualitative analysis), to identify the proportions of components in a mixture (quantitative analysis), and to break down chemical processes and examine chemical reactions between elements of matter. For an example of its use, analysis of the concentration of elements is important in managing a nuclear reactor, so nuclear scientists will analyze neutron activation to develop discrete measurements within vast samples. A matrix can have a considerable effect on the way a chemical analysis is conducted and the quality of its results. Analysis can be done manually or with a device. Types of Analysis Qualitative Analysis is concerned with which components are present in a given sample or compound. Example: precipitation reaction. Quantitative Analysis determines the quantity of each individual component present in a given sample or compound. Example: finding a concentration by UV spectrophotometer. Isotopes Chemists can use isotope analysis to assist analysts with issues i
;Abstraction;Critical thinking skills;Emergence;Empiricism;Epistemological theories;Intelligence;Mathematical modeling;Metaphysics of mind;Methodology;Ontology;Philosophy of logic;Rationalism;Reasoning;Research methods;Scientific method;Theory of mind
https://en.wikipedia.org/wiki/Automorphism
In mathematics, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object. Definition In an algebraic structure such as a group, a ring, or vector space, an automorphism is simply a bijective homomorphism of an object into itself. (The definition of a homomorphism depends on the type of algebraic structure; see, for example, group homomorphism, ring homomorphism, and linear operator.) More generally, for an object X in some category, an automorphism is a morphism of the object to itself that has an inverse morphism; that is, a morphism f : X → X is an automorphism if there is a morphism g : X → X such that g ∘ f = f ∘ g = idX, where idX is the identity morphism of X. For algebraic structures, the two definitions are equivalent; in this case, the identity morphism is simply the identity function, and is often called the trivial automorphism. Automorphism group The automorphisms of an object X form a group under composition of morphisms, which is called the automorphism group of X. This results straightforwardly from the definition of a category. The automorphism group of an object X in a category C is often denoted AutC(X), or simply Aut(X) if the category is clear from context. Examples In set theory, an arbitrary permutation of the elements of a set X is an automorphism. The automorphism group of X is also called the symmetric group on X. In elementary arithmetic, the set of integers, Z, considered as a group under addition, has a unique nontrivial automorphism: negation. Considered as a ring, however, it has only the trivial automorphism. Generally speaking, negation is an automorphism of any abelian group, but not of a ring or field. A group automorphism is a group isomorphism from a group to itself. Informally, it is a per
Abstract algebra;Morphisms;Symmetry
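A tiny numeric check of the example in the automorphism entry above: negation preserves addition on the integers (a group automorphism) but not multiplication (so it is not a ring automorphism). The finite test window is an illustration, not a proof.

```python
# Check the automorphism properties of negation on a finite window of Z.
neg = lambda x: -x
window = range(-20, 21)

# Group homomorphism for (Z, +): f(a + b) == f(a) + f(b)
additive = all(neg(a + b) == neg(a) + neg(b) for a in window for b in window)

# A ring homomorphism would also need f(a * b) == f(a) * f(b)
multiplicative = all(neg(a * b) == neg(a) * neg(b) for a in window for b in window)

print("preserves addition:      ", additive)        # True  -> group automorphism
print("preserves multiplication:", multiplicative)  # False -> not a ring automorphism
```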
https://en.wikipedia.org/wiki/Atomic%20physics
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. Atomic physics typically refers to the study of atomic structure and the interaction between atoms. It is primarily concerned with the way in which electrons are arranged around the nucleus and the processes by which these arrangements change. This comprises ions, neutral atoms and, unless otherwise stated, it can be assumed that the term atom includes ions. The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics—which deals with the atom as a system consisting of a nucleus and electrons—and nuclear physics, which studies nuclear reactions and special properties of atomic nuclei. As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified. Isolated atoms Atomic physics primarily considers atoms in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles. While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that are generally considered. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics, even though
;Atomic, molecular, and optical physics
https://en.wikipedia.org/wiki/Acoustic%20theory
Acoustic theory is a scientific field that relates to the description of sound waves. It derives from fluid dynamics. See acoustics for the engineering approach. For sound waves of any magnitude of a disturbance in velocity, pressure, and density we have $\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0$ and $\rho \left( \frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v} \right) + \nabla p = 0$. In the case that the fluctuations in velocity, density, and pressure are small, we can approximate these as $\frac{\partial \rho'}{\partial t} + \rho_0 \nabla \cdot \mathbf{v} = 0$ and $\rho_0 \frac{\partial \mathbf{v}}{\partial t} + \nabla p' = 0$, where $\mathbf{v}$ is the perturbed velocity of the fluid, $p_0$ is the pressure of the fluid at rest, $p'$ is the perturbed pressure of the system as a function of space and time, $\rho_0$ is the density of the fluid at rest, and $\rho'$ is the variance in the density of the fluid over space and time. In the case that the velocity is irrotational ($\nabla \times \mathbf{v} = 0$), we then have the acoustic wave equation that describes the system: $\frac{1}{c^2} \frac{\partial^2 p'}{\partial t^2} - \nabla^2 p' = 0$, where we have $c = \sqrt{\left( \frac{\partial p}{\partial \rho} \right)_s}$, the speed of sound in the fluid. Derivation for a medium at rest Starting with the Continuity Equation and the Euler Equation: $\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0$ and $\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla) \mathbf{v} + \frac{1}{\rho} \nabla p = 0$. If we take small perturbations of a constant pressure and density, $p = p_0 + p'$ and $\rho = \rho_0 + \rho'$, then the equations of the system are $\frac{\partial}{\partial t}(\rho_0 + \rho') + \nabla \cdot ((\rho_0 + \rho') \mathbf{v}) = 0$ and $\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla) \mathbf{v} + \frac{1}{\rho_0 + \rho'} \nabla (p_0 + p') = 0$. Noting that the equilibrium pressures and densities are constant, this simplifies to $\frac{\partial \rho'}{\partial t} + \nabla \cdot ((\rho_0 + \rho') \mathbf{v}) = 0$ and $\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla) \mathbf{v} + \frac{1}{\rho_0 + \rho'} \nabla p' = 0$. A Moving Medium Starting with the same equations, we can have them work for a moving medium by setting $\mathbf{v} = \mathbf{u} + \mathbf{v}'$, where $\mathbf{u}$ is the constant velocity that the whole fluid is moving at before being disturbed (equivalent to a moving observer) and $\mathbf{v}'$ is the fluid velocity perturbation. In this case the equations look very similar, with the operator $\frac{\partial}{\partial t} + \mathbf{u} \cdot \nabla$ replacing the plain time derivative. Note that setting $\mathbf{u} = 0$ returns the equations at rest. Linearized Waves Starting with the above given equations of motion for a medium at rest, let us now take $\mathbf{v}'$, $p'$, and $\rho'$ to all be small quantities. In the case that we keep terms to first order, for the continuity equation, we have the term $\rho' \nabla \cdot \mathbf{v}'$ going to 0. This similarly applies for the density perturbation times the time derivative of the velocity. Moreover, the spatial components of the material derivative go to 0. We thus have, upon rearranging the equilibrium density: $\frac{1}{\rho_0} \frac{\partial \rho'}{\partial t} + \nabla \cdot \mathbf{v}' = 0$ and $\frac{\partial \mathbf{v}'}{\partial t} + \frac{1}{\rho_0} \nabla p' = 0$. Next, given that our sound wave occurs in an ideal fluid, the motion is adiabatic, and then we can relate the small
Acoustics;Fluid dynamics;Sound
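A compact finite-difference sketch of the one-dimensional form of the acoustic wave equation reconstructed above, $\partial^2 p'/\partial t^2 = c^2\, \partial^2 p'/\partial x^2$. The grid, time step, and initial Gaussian pulse are arbitrary illustrative choices, not part of the source text.

```python
import math

# 1-D acoustic wave equation solved with an explicit central-difference scheme.
c = 343.0                      # speed of sound in air, m/s
nx, dx = 200, 0.01             # spatial grid: 200 points, 1 cm spacing
dt = 0.5 * dx / c              # time step satisfying the CFL stability condition

# Initial pressure perturbation: a narrow Gaussian pulse at the centre, at rest.
p_prev = [math.exp(-((i - nx // 2) * dx / 0.03) ** 2) for i in range(nx)]
p_curr = p_prev[:]             # zero initial velocity

r2 = (c * dt / dx) ** 2
for _ in range(150):
    p_next = p_curr[:]
    for i in range(1, nx - 1):
        p_next[i] = (2 * p_curr[i] - p_prev[i]
                     + r2 * (p_curr[i + 1] - 2 * p_curr[i] + p_curr[i - 1]))
    p_prev, p_curr = p_curr, p_next

peak = max(abs(v) for v in p_curr)
# ~0.5: the initial pulse splits into two half-amplitude pulses travelling in opposite directions.
print(f"peak |p'| after 150 steps: {peak:.3f}")
```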
https://en.wikipedia.org/wiki/Alpha%20decay
Alpha decay or α-decay is a type of radioactive decay in which an atomic nucleus emits an alpha particle (helium nucleus). The parent nucleus transforms or "decays" into a daughter product, with a mass number that is reduced by four and an atomic number that is reduced by two. An alpha particle is identical to the nucleus of a helium-4 atom, which consists of two protons and two neutrons. It has a charge of +2e and a mass of about 4 Da. For example, uranium-238 decays to form thorium-234. While alpha particles have a charge of +2e, this is not usually shown because a nuclear equation describes a nuclear reaction without considering the electrons – a convention that does not imply that the nuclei necessarily occur in neutral atoms. Alpha decay typically occurs in the heaviest nuclides. Theoretically, it can occur only in nuclei somewhat heavier than nickel (element 28), where the overall binding energy per nucleon is no longer a maximum and the nuclides are therefore unstable toward spontaneous fission-type processes. In practice, this mode of decay has only been observed in nuclides considerably heavier than nickel, with the lightest known alpha emitter being the second lightest isotope of antimony, 104Sb. Exceptionally, however, beryllium-8 decays to two alpha particles. Alpha decay is by far the most common form of cluster decay, where the parent atom ejects a defined daughter collection of nucleons, leaving another defined product behind. It is the most common form because of the combined extremely high nuclear binding energy and relatively small mass of the alpha particle. Like other cluster decays, alpha decay is fundamentally a quantum tunneling process. Unlike beta decay, it is governed by the interplay between both the strong nuclear force and the electromagnetic force. Alpha particles have a typical kinetic energy of 5 MeV (or ≈ 0.13% of their total energy, 110 TJ/kg) and have a speed of about 15,000,000 m/s, or 5% of the speed of light. There is surprisingly small variat
Helium;Nuclear physics;Radioactivity
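A quick arithmetic check of the figures quoted in the alpha-decay entry above: a 5 MeV kinetic energy gives roughly 1.5 × 10⁷ m/s, about 5% of the speed of light. The alpha-particle rest energy used here is a standard reference value supplied for illustration, and the non-relativistic formula is adequate at this energy.

```python
import math

C = 2.998e8                    # speed of light, m/s
ALPHA_REST_MASS_MEV = 3727.4   # alpha particle rest energy in MeV (reference value, assumption)

def alpha_speed(kinetic_energy_mev: float) -> float:
    """Non-relativistic speed from kinetic energy: KE = (1/2) m v^2."""
    return C * math.sqrt(2.0 * kinetic_energy_mev / ALPHA_REST_MASS_MEV)

v = alpha_speed(5.0)
print(f"v ≈ {v:.3e} m/s ({v / C:.1%} of the speed of light)")
# ≈ 1.55e7 m/s, about 5% of c, matching the figures in the entry above
```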
https://en.wikipedia.org/wiki/Analytical%20engine
The analytical engine was a proposed digital mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage. It was first described in 1837 as the successor to Babbage's difference engine, which was a design for a simpler mechanical calculator. The analytical engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. In other words, the structure of the analytical engine was essentially the same as that which has dominated computer design in the electronic era. The analytical engine is one of the most successful achievements of Charles Babbage. Babbage was never able to complete construction of any of his machines due to conflicts with his chief engineer and inadequate funding. It was not until 1941 that Konrad Zuse built the first general-purpose computer, Z3, more than a century after Babbage had proposed the pioneering analytical engine in 1837. Design Babbage's first attempt at a mechanical computing device, the difference engine, was a special-purpose machine designed to tabulate logarithms and trigonometric functions by evaluating finite differences to create approximating polynomials. Construction of this machine was never completed; Babbage had conflicts with his chief engineer, Joseph Clement, and ultimately the British government withdrew its funding for the project. During this project, Babbage realised that a much more general design, the analytical engine, was possible. The work on the design of the analytical engine started around 1833. The input, consisting of programs ("formulae") and data, was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter, and a bell. The machine wo
Ada Lovelace;Charles Babbage;Computer-related introductions in 1837;English inventions;Mechanical calculators;Mechanical computers;One-of-a-kind computers
https://en.wikipedia.org/wiki/Almost%20all
In mathematics, the term "almost all" means "all but a negligible quantity". More precisely, if X is a set, "almost all elements of X" means "all elements of X but those in a negligible subset of X". The meaning of "negligible" depends on the mathematical context; for instance, it can mean finite, countable, or null. In contrast, "almost no" means "a negligible quantity"; that is, "almost no elements of X" means "a negligible quantity of elements of X". Meanings in different areas of mathematics Prevalent meaning Throughout mathematics, "almost all" is sometimes used to mean "all (elements of an infinite set) except for finitely many". This use occurs in philosophy as well. Similarly, "almost all" can mean "all (elements of an uncountable set) except for countably many". Examples: Almost all positive integers are greater than 10^12. Almost all prime numbers are odd (2 is the only exception). Almost all polyhedra are irregular (as there are only nine exceptions: the five Platonic solids and the four Kepler–Poinsot polyhedra). If P is a nonzero polynomial, then P(x) ≠ 0 for almost all x (if not all x). Meaning in measure theory When speaking about the reals, sometimes "almost all" can mean "all reals except for a null set". Similarly, if S is some set of reals, "almost all numbers in S" can mean "all numbers in S except for those in a null set". The real line can be thought of as a one-dimensional Euclidean space. In the more general case of an n-dimensional space (where n is a positive integer), these definitions can be generalised to "all points except for those in a null set" or "all points in S except for those in a null set" (this time, S is a set of points in the space). Even more generally, "almost all" is sometimes used in the sense of "almost everywhere" in measure theory, or in the closely related sense of "almost surely" in probability theory. Examples: In a measure space, such as the real line, countable sets are null. The set of rational numbers is
Mathematical terminology
https://en.wikipedia.org/wiki/Antiparticle
In particle physics, every type of particle of "ordinary" matter (as opposed to antimatter) is associated with an antiparticle with the same mass but with opposite physical charges (such as electric charge). For example, the antiparticle of the electron is the positron (also known as an antielectron). While the electron has a negative electric charge, the positron has a positive electric charge, and is produced naturally in certain types of radioactive decay. The opposite is also true: the antiparticle of the positron is the electron. Some particles, such as the photon, are their own antiparticle. Otherwise, for each pair of antiparticle partners, one is designated as the normal particle (the one that occurs in matter usually interacted with in daily life). The other (usually given the prefix "anti-") is designated the antiparticle. Particle–antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, total charge is conserved. For example, the positrons produced in natural radioactive decay quickly annihilate themselves with electrons, producing pairs of gamma rays, a process exploited in positron emission tomography. The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which is believed to have the same properties as a hydrogen atom. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than being a half-and-half mixture of matter and antimatter. The discovery of charge parity violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, was only approximate. The question about how the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter remains an unanswered one, and explanations so far are not tru
Antimatter;Particle physics;Quantum field theory;Subatomic particles
https://en.wikipedia.org/wiki/Atanasoff%E2%80%93Berry%20computer
The Atanasoff–Berry computer (ABC) was the first automatic electronic digital computer. The device was limited by the technology of the day. The ABC's priority is debated among historians of computer technology, because it was neither programmable, nor Turing-complete. Conventionally, the ABC would be considered the first electronic ALU (arithmetic logic unit) which is integrated into every modern processor's design. Its unique contribution was to make computing faster by being the first to use vacuum tubes to do arithmetic calculations. Prior to this, slower electro-mechanical methods were used by Konrad Zuse's Z1 computer, and the simultaneously developed Harvard Mark I. The first electronic, programmable, digital machine, the Colossus computer from 1943 to 1945, used similar tube-based technology as the ABC. Overview Conceived in 1937, the machine was built by Iowa State College mathematics and physics professor John Vincent Atanasoff with the help of graduate student Clifford Berry. It was designed only to solve systems of linear equations and was successfully tested in 1942. However, its intermediate result storage mechanism, a paper card writer/reader, was not perfected, and when John Vincent Atanasoff left Iowa State College for World War II assignments, work on the machine was discontinued. The ABC pioneered important elements of modern computing, including binary arithmetic and electronic switching elements, but its special-purpose nature and lack of a changeable, stored program distinguish it from modern computers. The computer was designated an IEEE Milestone in 1990. Atanasoff and Berry's computer work was not widely known until it was rediscovered in the 1960s, amid patent disputes over the first instance of an electronic computer. At that time ENIAC, which had been created by John Mauchly and J. Presper Eckert, was considered to be the first computer in the modern sense, but in 1973 a U.S. District Court invalidated the ENIAC patent and concluded that t
1940s computers;Computer-related introductions in 1942;Early computers;Iowa State University;One-of-a-kind computers;Paper data storage;Serial computers;Vacuum tube computers
https://en.wikipedia.org/wiki/Acre
The acre is a unit of land area used in the British imperial and the United States customary systems. It is traditionally defined as the area of one chain by one furlong (66 by 660 feet), which is exactly equal to 10 square chains, 1/640 of a square mile, 4,840 square yards, or 43,560 square feet, and approximately 4,047 m², or about 40% of a hectare. Based upon the international yard and pound agreement of 1959, an acre may be declared as exactly 4,046.8564224 square metres. The acre is sometimes abbreviated ac, but is usually spelled out as the word "acre". Traditionally, in the Middle Ages, an acre was conceived of as the area of land that could be ploughed by one man using a team of eight oxen in one day. The acre is still a statutory measure in the United States. Both the international acre and the US survey acre are in use, but they differ by only four parts per million. The most common use of the acre is to measure tracts of land. The acre is used in many established and former Commonwealth of Nations countries by custom. In a few, it continues as a statute measure, although not since 2010 in the UK, and not for decades in Australia, New Zealand, and South Africa. In many places where it is not a statute measure, it is still lawful to "use for trade" if given as supplementary information and is not used for land registration. Description One acre equals 1/640 (0.0015625) square mile, 4,840 square yards, 43,560 square feet, or about 0.405 hectares (see below). While all modern variants of the acre contain 4,840 square yards, there are alternative definitions of a yard, so the exact size of an acre depends upon the particular yard on which it is based. Originally, an acre was understood as a strip of land sized at forty perches (660 ft, or 1 furlong) long and four perches (66 ft) wide; this may have also been understood as an approximation of the amount of land a yoke of oxen could plough in one day (a furlong being "a furrow long"). A square enclosing one acre is approximately
Customary units of measurement in the United States;Imperial units;Surveying;Units of area
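A small conversion helper built from the exact definitions quoted in the acre entry above (43,560 square feet; 4,046.8564224 square metres for the international acre; 640 acres to the square mile). The function name and output format are illustrative.

```python
# Exact definitions quoted in the entry above.
SQ_FEET_PER_ACRE = 43_560
SQ_YARDS_PER_ACRE = 4_840
SQ_METRES_PER_ACRE = 4_046.8564224   # international acre, from the 1959 yard/pound agreement

def acres_to_other_units(acres: float) -> dict:
    sq_m = acres * SQ_METRES_PER_ACRE
    return {
        "square_metres": sq_m,
        "hectares": sq_m / 10_000,
        "square_feet": acres * SQ_FEET_PER_ACRE,
        "square_miles": acres / 640,   # 640 acres to a square mile
    }

print(acres_to_other_units(1.0))
# {'square_metres': 4046.8564224, 'hectares': 0.40468564224, 'square_feet': 43560, 'square_miles': 0.0015625}
```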
https://en.wikipedia.org/wiki/Allotropy
Allotropy or allotropism is the property of some chemical elements to exist in two or more different forms, in the same physical state, known as allotropes of the elements. Allotropes are different structural modifications of an element: the atoms of the element are bonded together in different manners. For example, the allotropes of carbon include diamond (the carbon atoms are bonded together to form a cubic lattice of tetrahedra), graphite (the carbon atoms are bonded together in sheets of a hexagonal lattice), graphene (single sheets of graphite), and fullerenes (the carbon atoms are bonded together in spherical, tubular, or ellipsoidal formations). The term allotropy is used for elements only, not for compounds. The more general term, used for any compound, is polymorphism, although its use is usually restricted to solid materials such as crystals. Allotropy refers only to different forms of an element within the same physical phase (the state of matter, such as a solid, liquid or gas). The differences between these states of matter would not alone constitute examples of allotropy. Allotropes of chemical elements are frequently referred to as polymorphs or as phases of the element. For some elements, allotropes have different molecular formulae or different crystalline structures, as well as a difference in physical phase; for example, two allotropes of oxygen (dioxygen, O2, and ozone, O3) can both exist in the solid, liquid and gaseous states. Other elements do not maintain distinct allotropes in different physical phases; for example, phosphorus has numerous solid allotropes, which all revert to the same P4 form when melted to the liquid state. History The concept of allotropy was originally proposed in 1840 by the Swedish scientist Baron Jöns Jakob Berzelius (1779–1848). The term is derived from the Greek allos ('other') and tropos ('manner, form'). After the acceptance of Avogadro's hypothesis in 1860, it was understood that elements could exist as polyatomic molecules, and two allotropes of oxygen were recog
;Chemistry;Inorganic chemistry;Physical chemistry
https://en.wikipedia.org/wiki/Anthemius%20of%20Tralles
Anthemius of Tralles (Medieval Greek: Anthémios o Trallianós; born in the 470s, died between 533 and 558) was a Byzantine Greek from Tralles who worked as a geometer and architect in Constantinople, the capital of the Byzantine Empire. With Isidore of Miletus, he designed the Hagia Sophia for Justinian I. Life Anthemius was one of the five sons of Stephanus of Tralles, a physician. His brothers were Dioscorus, Alexander, Olympius, and Metrodorus. Dioscorus followed his father's profession in Tralles; Alexander did so in Rome and became one of the most celebrated medical men of his time; Olympius became a noted lawyer; and Metrodorus worked as a grammarian in Constantinople. Anthemius was said to have annoyed his neighbor Zeno in two ways: first, by engineering a miniature earthquake by sending steam through leather tubes he had fixed among the joists and flooring of Zeno's parlor while he was entertaining friends and, second, by simulating thunder and lightning and flashing intolerable light into Zeno's eyes from a slightly hollowed mirror. In addition to his familiarity with steam, some dubious authorities credited Anthemius with a knowledge of gunpowder or other explosive compound. Mathematics Anthemius was a capable mathematician. In his treatise On Burning Mirrors, intended to facilitate the construction of surfaces that reflect light to a single point, he described the string construction of the ellipse and assumed a property of ellipses not found in Apollonius of Perga's Conics: the equality of the angles subtended at a focus by two tangents drawn from a point. His work also includes the first practical use of the directrix: having given the focus and a double ordinate, he used the focus and directrix to obtain any number of points on a parabola. This work was later known to Arab mathematicians such as Alhazen. Eutocius of Ascalon's commentary on Apollonius's Conics was dedicated to Anthemius. Architecture As an architect, Anthemius is best known for his work design
470s births;5th-century Byzantine scientists;5th-century Byzantine writers;5th-century mathematicians;6th-century Byzantine scientists;6th-century Byzantine writers;6th-century architects;6th-century deaths;6th-century mathematicians;Byzantine architects;Greek Christians;Hagia Sophia;Justinian I;People from Tralles
https://en.wikipedia.org/wiki/Antigen
In immunology, an antigen (Ag) is a molecule, moiety, foreign particulate matter, or an allergen, such as pollen, that can bind to a specific antibody or T-cell receptor. The presence of antigens in the body may trigger an immune response. Antigens can be proteins, peptides (amino acid chains), polysaccharides (chains of simple sugars), lipids, or nucleic acids. Antigens exist on normal cells, cancer cells, parasites, viruses, fungi, and bacteria. Antigens are recognized by antigen receptors, including antibodies and T-cell receptors. Diverse antigen receptors are made by cells of the immune system so that each cell has a specificity for a single antigen. Upon exposure to an antigen, only the lymphocytes that recognize that antigen are activated and expanded, a process known as clonal selection. In most cases, antibodies are antigen-specific, meaning that an antibody can only react to and bind one specific antigen; in some instances, however, antibodies may cross-react to bind more than one antigen. The reaction between an antigen and an antibody is called the antigen-antibody reaction. Antigen can originate either from within the body ("self-protein" or "self antigens") or from the external environment ("non-self"). The immune system identifies and attacks "non-self" external antigens. Antibodies usually do not react with self-antigens due to negative selection of T cells in the thymus and B cells in the bone marrow. The diseases in which antibodies react with self antigens and damage the body's own cells are called autoimmune diseases. Vaccines are examples of antigens in an immunogenic form, which are intentionally administered to a recipient to induce the memory function of the adaptive immune system towards antigens of the pathogen invading that recipient. The vaccine for seasonal influenza is a common example. Etymology Paul Ehrlich coined the term antibody () in his side-chain theory at the end of the 19th century. In 1899, Ladislas Deutsch (László Detre
;Biomolecules;Immune system
https://en.wikipedia.org/wiki/Apparent%20magnitude
Apparent magnitude (m) is a measure of the brightness of a star, astronomical object or other celestial objects like artificial satellites. Its value depends on its intrinsic luminosity, its distance, and any extinction of the object's light caused by interstellar dust along the line of sight to the observer. Unless stated otherwise, the word magnitude in astronomy usually refers to a celestial object's apparent magnitude. The magnitude scale likely dates to before the ancient Roman astronomer Claudius Ptolemy, whose star catalog popularized the system by listing stars from 1st magnitude (brightest) to 6th magnitude (dimmest). The modern scale was mathematically defined to closely match this historical system by Norman Pogson in 1856. The scale is reverse logarithmic: the brighter an object is, the lower its magnitude number. A difference of 1.0 in magnitude corresponds to a brightness ratio of 100^(1/5), or about 2.512. For example, a magnitude 2.0 star is 2.512 times as bright as a magnitude 3.0 star, 6.31 times as bright as one of magnitude 4.0, and 100 times as bright as one of magnitude 7.0. The brightest astronomical objects have negative apparent magnitudes: for example, Venus at −4.2 or Sirius at −1.46. The faintest stars visible with the naked eye on the darkest night have apparent magnitudes of about +6.5, though this varies depending on a person's eyesight and with altitude and atmospheric conditions. The apparent magnitudes of known objects range from the Sun at −26.832 to objects in deep Hubble Space Telescope images of magnitude +31.5. The measurement of apparent magnitude is called photometry. Photometric measurements are made in the ultraviolet, visible, or infrared wavelength bands using standard passband filters belonging to photometric systems such as the UBV system or the Strömgren uvbyβ system. Measurement in the V-band may be referred to as the apparent visual magnitude. Absolute magnitude is a related quantity which measures the luminosity that a celestial object emits, rather than
Logarithmic scales of measurement;Observational astronomy
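A short check of the reverse-logarithmic scale described in the apparent-magnitude entry above: the brightness ratio for a magnitude difference Δm is 100^(Δm/5) ≈ 2.512^Δm, reproducing the example values from the text.

```python
def brightness_ratio(m_faint: float, m_bright: float) -> float:
    """Ratio of brightness between two objects given their apparent magnitudes."""
    return 100 ** ((m_faint - m_bright) / 5)

# Examples from the entry above: a magnitude 2.0 star versus fainter stars.
for m in (3.0, 4.0, 7.0):
    print(f"mag 2.0 vs mag {m}: {brightness_ratio(m, 2.0):.2f}x brighter")
# -> 2.51x, 6.31x, 100.00x

# Sun (-26.832) versus the faintest deep Hubble Space Telescope objects (+31.5):
print(f"Sun vs faintest HST objects: {brightness_ratio(31.5, -26.832):.3e}")
```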
https://en.wikipedia.org/wiki/Andrew%20Wiles
Sir Andrew John Wiles (born 11 April 1953) is an English mathematician and a Royal Society Research Professor at the University of Oxford, specialising in number theory. He is best known for proving Fermat's Last Theorem, for which he was awarded the 2016 Abel Prize and the 2017 Copley Medal and for which he was appointed a Knight Commander of the Order of the British Empire in 2000. In 2018, Wiles was appointed the first Regius Professor of Mathematics at Oxford. Wiles is also a 1997 MacArthur Fellow. Wiles was born in Cambridge to theologian Maurice Frank Wiles and Patricia Wiles. While spending much of his childhood in Nigeria, Wiles developed an interest in mathematics and in Fermat's Last Theorem in particular. After moving to Oxford and graduating from there in 1974, he worked on unifying Galois representations, elliptic curves and modular forms, starting with Barry Mazur's generalizations of Iwasawa theory. In the early 1980s, Wiles spent a few years at the University of Cambridge before moving to Princeton University, where he worked on expanding out and applying Hilbert modular forms. In 1986, upon reading Ken Ribet's seminal work on Fermat's Last Theorem, Wiles set out to prove the modularity theorem for semistable elliptic curves, which implied Fermat's Last Theorem. By 1993, he had been able to convince a knowledgeable colleague that he had a proof of Fermat's Last Theorem, though a flaw was subsequently discovered. After an insight on 19 September 1994, Wiles and his student Richard Taylor were able to circumvent the flaw, and published the results in 1995, to widespread acclaim. In proving Fermat's Last Theorem, Wiles developed new tools for mathematicians to begin unifying disparate ideas and theorems. His former student Taylor along with three other mathematicians were able to prove the full modularity theorem by 2000, using Wiles' work. Upon receiving the Abel Prize in 2016, Wiles reflected on his legacy, expressing his belief that he did not just
1953 births;20th-century English mathematicians;21st-century English mathematicians;Abel Prize laureates;Alumni of Clare College, Cambridge;Alumni of King's College, Cambridge;Alumni of Merton College, Oxford;British number theorists;Clay Research Award recipients;Fellows of Merton College, Oxford;Fellows of the Royal Society;Fermat's Last Theorem;Foreign associates of the National Academy of Sciences;Institute for Advanced Study visiting scholars;Knights Commander of the Order of the British Empire;Living people;MacArthur Fellows;Members of the American Philosophical Society;Members of the French Academy of Sciences;People educated at The Leys School;People from Cambridge;Princeton University faculty;Recipients of the Copley Medal;Regius Professors of Mathematics (University of Oxford);Rolf Schock Prize laureates;Royal Medal winners;Trustees of the Institute for Advanced Study;Whitehead Prize winners;Wolf Prize in Mathematics laureates
https://en.wikipedia.org/wiki/Chemistry%20of%20ascorbic%20acid
Ascorbic acid is an organic compound with formula C6H8O6, originally called hexuronic acid. It is a white solid, but impure samples can appear yellowish. It dissolves freely in water to give mildly acidic solutions. It is a mild reducing agent. Ascorbic acid exists as two enantiomers (mirror-image isomers), commonly denoted "l" (for "levo") and "d" (for "dextro"). The l isomer is the one most often encountered: it occurs naturally in many foods, and is one form ("vitamer") of vitamin C, an essential nutrient for humans and many animals. Deficiency of vitamin C causes scurvy, formerly a major disease of sailors in long sea voyages. It is used as a food additive and a dietary supplement for its antioxidant properties. The "d" form (erythorbic acid) can be made by chemical synthesis, but has no significant biological role. History The antiscorbutic properties of certain foods were demonstrated in the 18th century by James Lind. In 1907, Axel Holst and Theodor Frølich discovered that the antiscorbutic factor was a water-soluble chemical substance, distinct from the one that prevented beriberi. Between 1928 and 1932, Albert Szent-Györgyi isolated a candidate for this substance, which he called "hexuronic acid", first from plants and later from animal adrenal glands. In 1932 Charles Glen King confirmed that it was indeed the antiscorbutic factor. In 1933, sugar chemist Walter Norman Haworth, working with samples of "hexuronic acid" that Szent-Györgyi had isolated from paprika and sent him in the previous year, deduced the correct structure and optical-isomeric nature of the compound, and in 1934 reported its first synthesis. In reference to the compound's antiscorbutic properties, Haworth and Szent-Györgyi proposed to rename it "a-scorbic acid", and later specifically l-ascorbic acid. Because of their work, in 1937 two Nobel Prizes, in Chemistry and in Physiology or Medicine, were awarded to Haworth and Szent-Györgyi, respectively. Chemical properties Acidit
3-Hydroxypropenals;Antioxidants;Biomolecules;Coenzymes;Corrosion inhibitors;Dietary antioxidants;Furanones;Organic acids;Vitamers;Vitamin C
https://en.wikipedia.org/wiki/Audio%20signal%20processing
Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves—longitudinal waves which travel through air, consisting of compressions and rarefactions. The energy contained in audio signals or sound power level is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation. History The motivation for audio signal processing began at the beginning of the 20th century with inventions like the telephone, phonograph, and radio that allowed for the transmission and storage of audio signals. Audio processing was necessary for early radio broadcasting, as there were many problems with studio-to-transmitter links. The theory of signal processing and its application to audio was largely developed at Bell Labs in the mid 20th century. Claude Shannon and Harry Nyquist's early work on communication theory, sampling theory and pulse-code modulation (PCM) laid the foundations for the field. In 1957, Max Mathews became the first person to synthesize audio from a computer, giving birth to computer music. Major developments in digital audio coding and audio data compression include differential pulse-code modulation (DPCM) by C. Chapin Cutler at Bell Labs in 1950, linear predictive coding (LPC) by Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966, adaptive DPCM (ADPCM) by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973, discrete cosine transform (DCT) coding by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, and modified discrete cosine transform (MDCT) coding by J. P. Princen, A. W. Johnson and A. B. Bradley at the University of Surrey in 1987. LPC is the basis for p
Audio electronics;Signal processing
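A minimal sketch of transform coding in the spirit of the DCT-based coders mentioned in the entry above: a DCT-II computed directly from its definition and applied to a short synthetic frame. Real audio codecs (MDCT, psychoacoustic models, bit allocation) are far more involved; this only shows how tonal content concentrates into a few coefficients.

```python
import math

def dct_ii(frame):
    """Type-II discrete cosine transform, straight from its definition (no scaling)."""
    N = len(frame)
    return [sum(frame[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

# A short synthetic "audio" frame: two low-frequency tones, 32 samples (illustrative).
N = 32
frame = [math.cos(math.pi / N * (n + 0.5) * 3) + 0.25 * math.cos(math.pi / N * (n + 0.5) * 7)
         for n in range(N)]

coeffs = dct_ii(frame)
largest = sorted(range(N), key=lambda k: abs(coeffs[k]), reverse=True)[:2]
print("largest DCT coefficients at indices:", largest)
# The two tones map onto two coefficients (indices 3 and 7); the rest are ~0.
# This energy compaction is what makes DCT-family transforms useful for audio compression.
```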
https://en.wikipedia.org/wiki/Amdahl%27s%20law
In computer architecture, Amdahl's law (or Amdahl's argument) is a formula that shows how much faster a task can be completed when more resources are added to the system. The law can be stated as: "the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used". It is named after computer scientist Gene Amdahl, and was presented at the American Federation of Information Processing Societies (AFIPS) Spring Joint Computer Conference in 1967. Amdahl's law is often used in parallel computing to predict the theoretical speedup when using multiple processors. Definition In the context of Amdahl's law, speedup can be defined as the ratio of the old execution time to the new execution time, or equivalently the ratio of the improved throughput to the old throughput. Amdahl's law can be formulated in the following way: $S_{\text{overall}} = \frac{1}{(1 - p) + \frac{p}{s}}$, where $S_{\text{overall}}$ represents the total speedup of a program, $p$ represents the proportion of time spent on the portion of the code where improvements are made, and $s$ represents the extent of the improvement. The overall speedup is frequently much lower than one might expect. For instance, if a programmer enhances a part of the code that represents 10% of the total execution time (i.e. $p = 0.10$) and achieves a speedup $s$ of 10,000, then $S_{\text{overall}}$ becomes 1.11, which means only 11% improvement in total speedup of the program. So, despite a massive improvement in one section, the overall benefit is quite small. In another example, if the programmer optimizes a section that accounts for 99% of the execution time (i.e. $p = 0.99$) with a speedup factor of 100 (i.e. $s = 100$), $S_{\text{overall}}$ only reaches 50. This indicates that half of the potential performance gain ($S_{\text{overall}}$ would reach 100 if 100% of the execution time were covered) is lost due to the remaining 1% of execution time that was not improved. Implications The following are implications of Amdahl's law: Diminishing Returns: Adding more processors gives diminishing returns. Beyond a certain point, adding more processors doesn't significantly increase speedup. Limited Speedup: Even with many processors, t
Analysis of parallel algorithms;Computer architecture statements
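A direct implementation of the formula reconstructed in the Amdahl's-law entry above, reproducing the two worked examples from the text.

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of execution time is sped up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# 10% of the runtime sped up 10,000x: overall gain is only ~11%.
print(f"p=0.10, s=10000 -> {amdahl_speedup(0.10, 10_000):.2f}x")   # ~1.11x

# 99% of the runtime sped up 100x: overall speedup tops out around 50x.
print(f"p=0.99, s=100   -> {amdahl_speedup(0.99, 100):.2f}x")      # ~50.25x
```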
https://en.wikipedia.org/wiki/Aon%20%28company%29
Aon plc () is a British-American professional services firm that offers a range of risk-mitigation products. Aon has over 66,000 employees across 120 countries. Founded in Chicago by Patrick Ryan, Aon was created in 1982 when the Ryan Insurance Group merged with the Combined Insurance Company of America under W. Clement Stone. In 1987, the holding company was renamed Aon from aon, a Gaelic word meaning "one". The company is globally headquartered in London with its North America operations based in Chicago at the Aon Center. Aon is listed on the New York Stock Exchange under AON with a market cap of $65 billion in April 2023. History W. Clement Stone's mother bought a small Detroit insurance agency, and in 1918 brought her son into the business. Mr. Stone sold low-cost, low-benefit accident insurance, underwriting and issuing policies on-site. The next year he founded his own agency, the Combined Registry Co. As the Great Depression began, Stone reduced his workforce and improved training. Forced by his son's respiratory illness to winter in the South, Stone moved to Arkansas and Texas. In 1939 he bought American Casualty Insurance Co. of Dallas, Texas. It was consolidated with other purchases as the Combined Insurance Co. of America in 1947. The company continued through the 1950s and 1960s, continuing to sell health and accident policies. In the 1970s, Combined expanded overseas despite being hit hard by the recession. In 1982, after 10 years of stagnation under Clement Stone Jr., the elder Stone, then 79, resumed control until the completion of a merger with Ryan Insurance Co. allowed him to transfer control to Patrick Ryan. Ryan, the son of a Ford dealer in Wisconsin and a graduate of Northwestern University, had started his company as an auto credit insurer in 1964. In 1976, the company bought the insurance brokerage units of the Esmark conglomerate. Ryan focused on insurance brokering and added more upscale insurance products. He also trimmed staff and too
1982 establishments in Michigan;Actuarial firms;British brands;Companies based in London;Companies listed on the New York Stock Exchange;Consulting firms established in 1982;Consulting firms of the United States;Dual-listed companies;Financial services companies based in the City of London;Financial services companies established in 1982;Financial services companies of the United States;Human resource management consulting firms;Insurance companies of the United Kingdom;International management consulting firms;Management consulting firms of the United Kingdom;Risk management companies;Tax inversions
https://en.wikipedia.org/wiki/Adenylyl%20cyclase
Adenylate cyclase (EC 4.6.1.1, also commonly known as adenyl cyclase and adenylyl cyclase, abbreviated AC) is an enzyme with systematic name ATP diphosphate-lyase (cyclizing; 3′,5′-cyclic-AMP-forming). It catalyzes the following reaction: ATP = 3′,5′-cyclic AMP + diphosphate. It has key regulatory roles in essentially all cells. It is the most polyphyletic known enzyme: six distinct classes have been described, all catalyzing the same reaction but representing unrelated gene families with no known sequence or structural homology. The best known class of adenylyl cyclases is class III or AC-III (Roman numerals are used for classes). AC-III occurs widely in eukaryotes and has important roles in many human tissues. All classes of adenylyl cyclase catalyse the conversion of adenosine triphosphate (ATP) to 3',5'-cyclic AMP (cAMP) and pyrophosphate. Magnesium ions are generally required and appear to be closely involved in the enzymatic mechanism. The cAMP produced by AC then serves as a regulatory signal via specific cAMP-binding proteins, either transcription factors, enzymes (e.g., cAMP-dependent kinases), or ion transporters. Classes Class I The first class of adenylyl cyclases occurs in many bacteria including E. coli (as CyaA [unrelated to the Class II enzyme]). This was the first class of AC to be characterized. It was observed that E. coli deprived of glucose produce cAMP that serves as an internal signal to activate expression of genes for importing and metabolizing other sugars. cAMP exerts this effect by binding the transcription factor CRP, also known as CAP. Class I AC's are large cytosolic enzymes (~100 kDa) with a large regulatory domain (~50 kDa) that indirectly senses glucose levels. So far, no crystal structure is available for class I AC. Some indirect structural information is available for this class. It is known that the N-terminal half is the catalytic portion, and that it requires two Mg2+ ions. S103, S113, D114, D116 and W118 are the five absolut
Cell signaling;EC 4.6.1;Signal transduction
https://en.wikipedia.org/wiki/Automated%20theorem%20proving
Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major motivating factor for the development of computer science. Logical foundations While the roots of formalized logic go back to Aristotle, the end of the 19th and early 20th centuries saw the development of modern logic and formalized mathematics. Frege's Begriffsschrift (1879) introduced both a complete propositional calculus and what is essentially modern predicate logic. His Foundations of Arithmetic, published in 1884, expressed (parts of) mathematics in formal logic. This approach was continued by Russell and Whitehead in their influential Principia Mathematica, first published 1910–1913, and with a revised second edition in 1927. Russell and Whitehead thought they could derive all mathematical truth using axioms and inference rules of formal logic, in principle opening up the process to automation. In 1920, Thoralf Skolem simplified a previous result by Leopold Löwenheim, leading to the Löwenheim–Skolem theorem and, in 1930, to the notion of a Herbrand universe and a Herbrand interpretation that allowed (un)satisfiability of first-order formulas (and hence the validity of a theorem) to be reduced to (potentially infinitely many) propositional satisfiability problems. In 1929, Mojżesz Presburger showed that the first-order theory of the natural numbers with addition and equality (now called Presburger arithmetic in his honor) is decidable and gave an algorithm that could determine if a given sentence in the language was true or false. However, shortly after this positive result, Kurt Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems (1931), showing that in any sufficiently strong axiomatic system, there are true statements that cannot be proved in the system. This to
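The reduction described above, from first-order validity to (possibly many) propositional satisfiability checks, is what makes mechanical proof search possible at all. As a toy illustration only (not any historical prover), here is a brute-force propositional tautology checker in Python; the function names and the Peirce's-law example are assumptions made for this sketch.

```python
from itertools import product

def is_tautology(formula, variables):
    """Check a propositional formula by exhaustive truth-table enumeration.

    `formula` is a callable taking one boolean argument per variable.
    The check is exponential in the number of variables, which is why
    practical provers refine it with methods such as resolution or DPLL.
    """
    return all(formula(*values)
               for values in product([False, True], repeat=len(variables)))

# Peirce's law ((p -> q) -> p) -> p, with implication x -> y encoded as (not x) or y.
peirce = lambda p, q: (not ((not ((not p) or q)) or p)) or p
print(is_tautology(peirce, ["p", "q"]))  # True: the formula is a tautology
```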
;Formal methods
https://en.wikipedia.org/wiki/Abated
Abated is an ancient technical term applied in masonry and metal work to those portions which are sunk beneath the surface, as in inscriptions where the ground is sunk round the letters so as to leave the letters or ornament in relief. See also: Abatement.
Construction;Masonry
https://en.wikipedia.org/wiki/Aedicula
In ancient Roman religion, an aedicula (plural: aediculae) is a small shrine, and in classical architecture refers to a niche covered by a pediment or entablature supported by a pair of columns and typically framing a statue; early Christian examples sometimes contained funeral urns. Aediculae are also represented in art as a form of ornamentation. The word aedicula is the diminutive of the Latin aedes, a temple building or dwelling place. The Latin word has been anglicised as "aedicule" and as "edicule". Describing post-antique architecture, especially Renaissance architecture, aedicular forms may be described using the word tabernacle, as in tabernacle window. Classical aediculae Many aediculae were household shrines (lararia) that held small altars or statues of the Lares and Di Penates. The Lares were Roman deities protecting the house and the family household gods. The Penates were originally patron gods (really genii) of the storeroom, later becoming household gods guarding the entire house. Other aediculae were small shrines within larger temples, usually set on a base, surmounted by a pediment and surrounded by columns. In ancient Roman architecture the aedicula has this representative function in society. They are installed in public buildings like the triumphal arch, city gate, and thermae. The Library of Celsus in Ephesus ( AD) is a good example. From the 4th century Christianization of the Roman Empire onwards such shrines, or the framework enclosing them, are often called by the Biblical term tabernacle, which becomes extended to any elaborated framework for a niche, window or picture. Gothic aediculae In Gothic architecture, too, an aedicula or tabernacle is a structural framing device that gives importance to its contents, whether an inscribed plaque, a cult object, a bust or the like, by assuming the tectonic vocabulary of a little building that sets it apart from the wall against which it is placed. A tabernacle frame on a wall serves similar
Ancient Roman architectural elements;Ancient Roman temples;Architectural elements
https://en.wikipedia.org/wiki/Arabian%20Sea
The Arabian Sea () is a region of sea in the northern Indian Ocean, bounded on the west by the Arabian Peninsula, Gulf of Aden and Guardafui Channel, on the northwest by Gulf of Oman and Iran, on the north by Pakistan, on the east by India, and on the southeast by the Laccadive Sea and the Maldives, on the southwest by Somalia. Its total area is and its maximum depth is . The Gulf of Aden in the west connects the Arabian Sea to the Red Sea through the strait of Bab-el-Mandeb, and the Gulf of Oman is in the northwest, connecting it to the Persian Gulf. Geography The Arabian Sea's surface area is about . The maximum width of the sea is approximately , and its maximum depth is . The biggest river flowing into the sea is the Indus River. The Arabian Sea has two important branches: the Gulf of Aden in the southwest, connecting with the Red Sea through the strait of Bab-el-Mandeb; and the Gulf of Oman to the northwest, connecting with the Persian Gulf. There are also the gulfs of Khambhat and Kutch on the Indian Coast. The Arabian Sea has been crossed by many important marine trade routes since the 3rd or 2nd millennium BCE. Major seaports include Kandla Port, Mundra Port, Pipavav Port, Dahej Port, Hazira Port, Mumbai Port, Nhava Sheva Port (Navi Mumbai), Mormugão Port (Goa), New Mangalore Port and Kochi Port in India, the Port of Karachi, Port Qasim, and the Gwadar Port in Pakistan, Chabahar Port in Iran and the Port of Salalah in Salalah, Oman. The largest islands in the Arabian Sea include Socotra (Yemen), Masirah Island (Oman), Lakshadweep (India) and Astola Island (Pakistan). The countries with coastlines on the Arabian Sea are Yemen, Oman, Pakistan, Iran, India and the Maldives. Limits The International Hydrographic Organization defines the limits of the Arabian Sea as follows: On the west: the eastern limit of the Gulf of Aden. On the north: a line joining Ràs al Hadd, east point of the Arabian Peninsula (22°32'N) and Ràs Jiyùni (61°43'E) on the coast of Pakis
;Bodies of water of Iran;Bodies of water of Oman;Bodies of water of Pakistan;Bodies of water of Somalia;Bodies of water of the Maldives;India–Pakistan border;Marine ecoregions;Oman–Yemen border;Sea;Seas of Africa;Seas of Asia;Seas of India;Seas of Iran;Seas of Yemen;Seas of the Indian Ocean
https://en.wikipedia.org/wiki/Ames%20test
The Ames test is a widely employed method that uses bacteria to test whether a given chemical can cause mutations in the DNA of the test organism. More formally, it is a biological assay to assess the mutagenic potential of chemical compounds. A positive test indicates that the chemical is mutagenic and therefore may act as a carcinogen, because cancer is often linked to mutation. The test serves as a quick and convenient assay to estimate the carcinogenic potential of a compound because standard carcinogen assays on mice and rats are time-consuming (taking two to three years to complete) and expensive. However, false-positives and false-negatives are known. The procedure was described in a series of papers in the early 1970s by Bruce Ames and his group at the University of California, Berkeley. General procedure The Ames test uses several strains of the bacterium Salmonella typhimurium that carry mutations in genes involved in histidine synthesis. These strains are auxotrophic mutants, i.e. they require histidine for growth, but cannot produce it. The method tests the capability of the tested substance in creating mutations that result in a return to a "prototrophic" state, so that the cells can grow on a histidine-free medium. The tester strains are specially constructed to detect either frameshift (e.g. strains TA-1537 and TA-1538) or point (e.g. strain TA-1531) mutations in the genes required to synthesize histidine, so that mutagens acting via different mechanisms may be identified. Some compounds are quite specific, causing reversions in just one or two strains. The tester strains also carry mutations in the genes responsible for lipopolysaccharide synthesis, making the cell wall of the bacteria more permeable, and in the excision repair system to make the test more sensitive. Larger organisms like mammals have metabolic processes that could potentially turn a chemical considered not mutagenic into one that is or one that is considered mutagenic into one
Applied genetics;Biochemistry detection reactions;Laboratory techniques;Toxicology tests
https://en.wikipedia.org/wiki/Anatomical%20Therapeutic%20Chemical%20Classification%20System
The Anatomical Therapeutic Chemical (ATC) Classification System is a drug classification system that classifies the active ingredients of drugs according to the organ or system on which they act and their therapeutic, pharmacological and chemical properties. Its purpose is an aid to monitor drug use and for research to improve quality medication use. It does not imply drug recommendation or efficacy. It is controlled by the World Health Organization Collaborating Centre for Drug Statistics Methodology (WHOCC), and was first published in 1976. Coding system This pharmaceutical coding system divides drugs into different groups according to the organ or system on which they act, their therapeutic intent or nature, and the drug's chemical characteristics. Different brands share the same code if they have the same active substance and indications. Each bottom-level ATC code stands for a pharmaceutically used substance, or a combination of substances, in a single indication (or use). This means that one drug can have more than one code, for example acetylsalicylic acid (aspirin) has as a drug for local oral treatment, as a platelet inhibitor, and as an analgesic and antipyretic; as well as one code can represent more than one active ingredient, for example is the combination of perindopril with amlodipine, two active ingredients that have their own codes ( and respectively) when prescribed alone. The ATC classification system is a strict hierarchy, meaning that each code necessarily has one and only one parent code, except for the 14 codes at the topmost level which have no parents. The codes are semantic identifiers, meaning they depict information by themselves beyond serving as identifiers (namely, the codes depict themselves the complete lineage of parenthood). As of 7 May 2020, there are 6,331 codes in ATC; the table below gives the count per level. History The ATC system is based on the earlier Anatomical Classification System, which is intended as a tool
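Because ATC codes are semantic identifiers, the entire parent lineage of a code can be read off from its fixed level widths (1, 3, 4, 5 and 7 characters). A minimal sketch in Python; the helper name and the sample code string are illustrative assumptions, not WHOCC tooling.

```python
def atc_lineage(code: str) -> list[str]:
    """Return the five-level parent chain encoded in a full 7-character ATC code.

    Level widths: anatomical main group (1 char), therapeutic subgroup (3),
    pharmacological subgroup (4), chemical subgroup (5), chemical substance (7).
    """
    if len(code) != 7:
        raise ValueError("expected a full 7-character ATC code")
    return [code[:width] for width in (1, 3, 4, 5, 7)]

# A sample full-length code; which substance it denotes is irrelevant to the structure.
print(atc_lineage("C09AA04"))  # ['C', 'C09', 'C09A', 'C09AA', 'C09AA04']
```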
;Drugs;Pharmacological classification systems;World Health Organization
https://en.wikipedia.org/wiki/Antiderivative
In calculus, an antiderivative, inverse derivative, primitive function, primitive integral or indefinite integral of a continuous function is a differentiable function whose derivative is equal to the original function . This can be stated symbolically as . The process of solving for antiderivatives is called antidifferentiation (or indefinite integration), and its opposite operation is called differentiation, which is the process of finding a derivative. Antiderivatives are often denoted by capital Roman letters such as and . Antiderivatives are related to definite integrals through the second fundamental theorem of calculus: the definite integral of a function over a closed interval where the function is Riemann integrable is equal to the difference between the values of an antiderivative evaluated at the endpoints of the interval. In physics, antiderivatives arise in the context of rectilinear motion (e.g., in explaining the relationship between position, velocity and acceleration). The discrete equivalent of the notion of antiderivative is antidifference. Examples The function is an antiderivative of , since the derivative of is . Since the derivative of a constant is zero, will have an infinite number of antiderivatives, such as , etc. Thus, all the antiderivatives of can be obtained by changing the value of in , where is an arbitrary constant known as the constant of integration. The graphs of antiderivatives of a given function are vertical translations of each other, with each graph's vertical location depending upon the value . More generally, the power function has antiderivative if , and if . In physics, the integration of acceleration yields velocity plus a constant. The constant is the initial velocity term that would be lost upon taking the derivative of velocity, because the derivative of a constant term is zero. This same pattern applies to further integrations and derivatives of motion (position, velocity, acceleration, and so on).
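The relationships the passage describes can be written out compactly. A sketch in LaTeX, assuming the usual notation in which F denotes an antiderivative of f and c the constant of integration:

```latex
% Defining property of an antiderivative F of f
F'(x) = f(x)
% The family of all antiderivatives differs only by a constant of integration c
\int f(x)\,dx = F(x) + c
% Power rule mentioned in the text
\int x^{n}\,dx =
\begin{cases}
  \dfrac{x^{n+1}}{n+1} + c, & n \neq -1,\\[1ex]
  \ln\lvert x\rvert + c, & n = -1
\end{cases}
% Second fundamental theorem of calculus, for f Riemann integrable on [a, b]
\int_{a}^{b} f(x)\,dx = F(b) - F(a)
```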
Integral calculus;Linear operators in calculus
https://en.wikipedia.org/wiki/AI-complete
In the field of artificial intelligence (AI), tasks that are hypothesized to require artificial general intelligence to solve are informally known as AI-complete or AI-hard. Calling a problem AI-complete reflects the belief that it cannot be solved by a simple specific algorithm. In the past, problems supposed to be AI-complete included computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. AI-complete tasks were notably considered useful for testing the presence of humans, as CAPTCHAs aim to do, and in computer security to circumvent brute-force attacks. History The term was coined by Fanya Montalvo by analogy with NP-complete and NP-hard in complexity theory, which formally describes the most famous class of difficult problems. Early uses of the term are in Erik Mueller's 1987 PhD dissertation and in Eric Raymond's 1991 Jargon File. Expert systems, that were popular in the 1980s, were able to solve very simple and/or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempted to "scale up" their systems to handle more complicated, real-world situations, the programs tended to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they would fail as unexpected circumstances outside of its original problem context would begin to appear. When human beings are dealing with new situations in the world, they are helped by their awareness of the general context: they know what the things around them are, why they are there, what they are likely to do and so on. They can recognize unusual situations and adjust accordingly. Expert systems lacked this adaptability and were brittle when facing new situations. DeepMind published a work in May 2022 in which they trained a single model to do several things at the same time. The model, named Gato, can "play Atari, caption images, chat, stack blocks w
Artificial intelligence;Computational problems
https://en.wikipedia.org/wiki/Ammeter
An ammeter (abbreviation of ampere meter) is an instrument used to measure the current in a circuit. Electric currents are measured in amperes (A), hence the name. For direct measurement, the ammeter is connected in series with the circuit in which the current is to be measured. An ammeter usually has low resistance so that it does not cause a significant voltage drop in the circuit being measured. Instruments used to measure smaller currents, in the milliampere or microampere range, are designated as milliammeters or microammeters. Early ammeters were laboratory instruments that relied on the Earth's magnetic field for operation. By the late 19th century, improved instruments were designed which could be mounted in any position and allowed accurate measurements in electric power systems. It is generally represented by letter 'A' in a circuit. History The relation between electric current, magnetic fields and physical forces was first noted by Hans Christian Ørsted in 1820, who observed a compass needle was deflected from pointing North when a current flowed in an adjacent wire. The tangent galvanometer was used to measure currents using this effect, where the restoring force returning the pointer to the zero position was provided by the Earth's magnetic field. This made these instruments usable only when aligned with the Earth's field. Sensitivity of the instrument was increased by using additional turns of wire to multiply the effect – the instruments were called "multipliers". The word rheoscope as a detector of electrical currents was coined by Sir Charles Wheatstone about 1840 but is no longer used to describe electrical instruments. The word makeup is similar to that of rheostat (also coined by Wheatstone) which was a device used to adjust the current in a circuit. Rheostat is a historical term for a variable resistance, though unlike rheoscope may still be encountered. Types Some instruments are panel meters, meant to be mounted on some sort of control
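The requirement that an ammeter present a low resistance in series can be checked with a quick Ohm's-law estimate. A sketch in Python; the source voltage, load resistance and meter resistance are made-up illustrative values.

```python
# Assumed example circuit: a 12 V source driving a 120 ohm load.
V_SOURCE = 12.0   # volts
R_LOAD = 120.0    # ohms
R_METER = 0.1     # ohms, an assumed low meter resistance

true_current = V_SOURCE / R_LOAD                  # current without the meter
measured_current = V_SOURCE / (R_LOAD + R_METER)  # meter inserted in series
burden_voltage = measured_current * R_METER       # voltage dropped across the meter

print(f"true current     : {true_current * 1000:.2f} mA")      # 100.00 mA
print(f"measured current : {measured_current * 1000:.2f} mA")  # 99.92 mA
print(f"burden voltage   : {burden_voltage * 1000:.2f} mV")    # ~10 mV
# With R_METER much smaller than R_LOAD the reading error stays below 0.1 %,
# which is why a low internal resistance matters for series measurement.
```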
Electrical meters;Electronic test equipment;Flow meters
https://en.wikipedia.org/wiki/Alkali
In chemistry, an alkali (; from the Arabic word , ) is a basic salt of an alkali metal or an alkaline earth metal. An alkali can also be defined as a base that dissolves in water. A solution of a soluble base has a pH greater than 7.0. The adjective alkaline, and less often, alkalescent, is commonly used in English as a synonym for basic, especially for bases soluble in water. This broad use of the term is likely to have come about because alkalis were the first bases known to obey the Arrhenius definition of a base, and they are still among the most common bases. Etymology The word alkali is derived from Arabic al qalīy (or alkali), meaning (see calcination), referring to the original source of alkaline substances. A water-extract of burned plant ashes, called potash and composed mostly of potassium carbonate, was mildly basic. After heating this substance with calcium hydroxide (slaked lime), a far more strongly basic substance known as caustic potash (potassium hydroxide) was produced. Caustic potash was traditionally used in conjunction with animal fats to produce soft soaps, one of the caustic processes that rendered soaps from fats in the process of saponification, one known since antiquity. Plant potash lent the name to the element potassium, which was first derived from caustic potash, and also gave potassium its chemical symbol K (from the German name ), which ultimately derived from alkali. Common properties of alkalis and bases Alkalis are all Arrhenius bases, ones which form hydroxide ions (OH−) when dissolved in water. Common properties of alkaline aqueous solutions include: Moderately concentrated solutions (over 10−3 M) have a pH of 10 or greater. This means that they will turn phenolphthalein from colorless to pink. Concentrated solutions are caustic (causing chemical burns). Alkaline solutions are slippery or soapy to the touch, due to the saponification of the fatty substances on the surface of the skin. Alkalis are normally water-soluble, a
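The quantitative claim above, that solutions more concentrated than about 10−3 M have a pH of 10 or greater, is easy to check for a fully dissociated monoprotic base at 25 °C. A sketch in Python, assuming the usual ion product of water Kw = 10−14; the function name is my own.

```python
import math

K_W = 1e-14  # ion product of water at 25 degrees C (assumed)

def ph_strong_monoprotic_base(concentration_molar: float) -> float:
    """pH of a fully dissociated base supplying one OH- per formula unit.

    Valid when the base dominates water autoionisation (well above ~1e-6 M).
    """
    poh = -math.log10(concentration_molar)
    return -math.log10(K_W) - poh  # pH = 14 - pOH at 25 degrees C

for c in (1e-3, 1e-2, 1e-1):
    print(f"{c:.0e} M -> pH {ph_strong_monoprotic_base(c):.1f}")
# 1e-03 M -> pH 11.0, 1e-02 M -> pH 12.0, 1e-01 M -> pH 13.0,
# consistent with the "pH of 10 or greater" property listed above.
```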
Inorganic chemistry
https://en.wikipedia.org/wiki/Amyl%20nitrite
Amyl nitrite is a chemical compound with the formula C5H11ONO. A variety of isomers are known, but they all feature an amyl group attached to the nitrite functional group. The alkyl group (the amyl in this case) is unreactive and the chemical and biological properties are mainly due to the nitrite group. Like other alkyl nitrites, amyl nitrite is bioactive in mammals, being a vasodilator, which is the basis of its use as a prescription medicine. As an inhalant, it also has a psychoactive effect, which has led to its recreational use, with its smell being described as that of old socks or dirty feet. It was first documented in 1844 and came into medical use in 1867. Uses Amyl nitrite was historically employed medically to treat heart diseases as well as angina. Amyl nitrite was sometimes used as an antidote for cyanide poisoning. It was thought to act as an oxidant, to induce the formation of methemoglobin. Methemoglobin in turn can sequester cyanide as cyanomethemoglobin. However, it has been replaced by hydroxocobalamin which had better efficacy, and the use of amyl nitrite has been found to be ineffective and unscientific. Trace amounts are added to some perfumes. It is also used recreationally as an inhalant drug that induces a brief euphoric state, and when combined with other intoxicant stimulant drugs such as cocaine or MDMA, the euphoric state intensifies and is prolonged. Once some stimulative drugs wear off, a common side effect is a period of depression or anxiety, colloquially called a "come down"; amyl nitrite is sometimes used to combat these negative after-effects. This effect, combined with its dissociative effects, has led to its use as a recreational drug . Nomenclature The term "amyl nitrite" encompasses several isomers. In older literature, the common non-systematic name amyl was often used for the pentyl group, where the amyl group is a linear or normal (n) alkyl group, and the resulting amyl nitrite would have the structural formula
Alkyl nitrites;Antianginals;Antidotes;Isoamyl esters;Muscle relaxants;Vasodilators
https://en.wikipedia.org/wiki/Arithmetic%E2%80%93geometric%20mean
In mathematics, the arithmetic–geometric mean (AGM or agM) of two positive real numbers and is the mutual limit of a sequence of arithmetic means and a sequence of geometric means. The arithmetic–geometric mean is used in fast algorithms for exponential, trigonometric functions, and other special functions, as well as some mathematical constants, in particular, computing . The AGM is defined as the limit of the interdependent sequences and . Assuming , we write:These two sequences converge to the same number, the arithmetic–geometric mean of and ; it is denoted by , or sometimes by or . The arithmetic–geometric mean can be extended to complex numbers and, when the branches of the square root are allowed to be taken inconsistently, it is a multivalued function. Example To find the arithmetic–geometric mean of and , iterate as follows:The first five iterations give the following values: The number of digits in which and agree (underlined) approximately doubles with each iteration. The arithmetic–geometric mean of 24 and 6 is the common limit of these two sequences, which is approximately . History The first algorithm based on this sequence pair appeared in the works of Lagrange. Its properties were further analyzed by Gauss. Properties Both the geometric mean and arithmetic mean of two positive numbers and are between the two numbers. (They are strictly between when .) The geometric mean of two positive numbers is never greater than the arithmetic mean. So the geometric means are an increasing sequence ; the arithmetic means are a decreasing sequence ; and for any . These are strict inequalities if . is thus a number between and ; it is also between the geometric and arithmetic mean of and . If then . There is an integral-form expression for :where is the complete elliptic integral of the first kind:Since the arithmetic–geometric process converges so quickly, it provides an efficient way to compute elliptic integrals, which are used, for exa
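The coupled iteration is short enough to transcribe directly. A sketch in Python run on the 24 and 6 example from the text; the variable and function names are my own.

```python
from math import sqrt

def agm(a: float, g: float, tol: float = 1e-15) -> float:
    """Arithmetic-geometric mean of two positive reals a and g.

    Each pass replaces the pair by its arithmetic and geometric means;
    the two sequences converge quadratically to a common limit.
    """
    while abs(a - g) > tol * max(a, g):
        a, g = (a + g) / 2.0, sqrt(a * g)
    return (a + g) / 2.0

print(agm(24.0, 6.0))  # approximately 13.4581714817...
```

As the text notes, the number of agreeing digits roughly doubles per pass, so only a handful of iterations are needed to reach machine precision.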
Articles containing proofs;Elliptic functions;Means;Special functions
https://en.wikipedia.org/wiki/Andrew%20S.%20Tanenbaum
Andrew Stuart Tanenbaum (born March 16, 1944), sometimes referred to by the handle AST, is an American-born Dutch computer scientist and retired professor emeritus of computer science at the Vrije Universiteit Amsterdam in the Netherlands. He is the author of MINIX, a free Unix-like operating system for teaching purposes, and has written multiple computer science textbooks regarded as standard texts in the field. He regards his teaching job as his most important work. Since 2004 he has operated Electoral-vote.com, a website dedicated to analysis of polling data in federal elections in the United States. Biography Tanenbaum was born in New York City and grew up in suburban White Plains, New York, where he attended the White Plains High School. His paternal grandfather was born in Khorostkiv in the Austro-Hungarian empire. He received his Bachelor of Science degree in physics from MIT in 1965 and his Doctor of Philosophy degree in astrophysics from the University of California, Berkeley in 1971. As an undergraduate, he had obtained experience at computer programming, which helped him get a summer internship at the National Radio Astronomy Observatory in West Virginia. After receiving his doctorate, he decided that he was more interested in programming. He became an assistant professor in Amsterdam based in part on his expertise in programming the university's new computer. He taught courses on Computer Organization and Operating Systems and supervised the work of PhD candidates at the VU University Amsterdam. On July 9, 2014, he announced his retirement. He is married to a Dutch woman, but retains his American citizenship. Teaching Books Tanenbaum's textbooks on computer science include: (1981, with David J. Wetherall and Nickolas Feamster) Operating Systems: Design and Implementation, co-authored with Albert Woodhull Modern Operating Systems (1992, 2001, 2007, 2014, 2022) (with Maarten van Steen) His book, Operating Systems: Design and Implementati
1944 births;1996 fellows of the Association for Computing Machinery;21st-century American Jews;Academic staff of Vrije Universiteit Amsterdam;American computer scientists;American emigrants to the Netherlands;American male non-fiction writers;American political writers;American technology writers;Computer science educators;Computer systems researchers;European Research Council grantees;Fellows of the IEEE;Free software programmers;Information technology in the Netherlands;Jewish American academics;Jewish American non-fiction writers;Kernel programmers;Living people;Massachusetts Institute of Technology School of Science alumni;Members of the Royal Netherlands Academy of Arts and Sciences;Minix;Scientists from New York City;University of California, Berkeley alumni
https://en.wikipedia.org/wiki/Accumulator%20%28computing%29
In a computer's central processing unit (CPU), the accumulator is a register in which intermediate arithmetic logic unit results are stored. Without a register like an accumulator, it would be necessary to write the result of each calculation (addition, multiplication, shift, etc.) to cache or main memory, perhaps only to be read right back again for use in the next operation. Accessing memory is slower than accessing a register like an accumulator because the technology used for the large main memory is slower (but cheaper) than that used for a register. Early electronic computer systems were often split into two groups, those with accumulators and those without. Modern computer systems often have multiple general-purpose registers that can operate as accumulators, and the term is no longer as common as it once was. However, to simplify their design, a number of special-purpose processors still use a single accumulator. Basic concept Mathematical operations often take place in a stepwise fashion, using the results from one operation as the input to the next. For instance, a manual calculation of a worker's weekly payroll might look something like: look up the number of hours worked from the employee's time card look up the pay rate for that employee from a table multiply the hours by the pay rate to get their basic weekly pay multiply their basic pay by a fixed percentage to account for income tax subtract that number from their basic pay to get their weekly pay after tax multiply that result by another fixed percentage to account for retirement plans subtract that number from their basic pay to get their weekly pay after all deductions A computer program carrying out the same task would follow the same basic sequence of operations, although the values being looked up would all be stored in computer memory. In early computers, the number of hours would likely be held on a punch card and the pay rate in some other form of memory, perhaps a magnetic d
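The payroll walk-through above maps naturally onto a single running value that each step reads and overwrites, just as an accumulator register would be used. A sketch in Python mirroring that style; the hours, rate and percentages are invented for the illustration.

```python
# Assumed inputs (in a real program these would come from the time card and rate tables).
hours_worked = 40
hourly_rate = 25.00
income_tax_rate = 0.20
retirement_rate = 0.05

# Each step uses only the single running value `acc`, standing in for the
# accumulator register, instead of writing every intermediate result to memory.
acc = hours_worked                       # load hours into the accumulator
acc = acc * hourly_rate                  # multiply by the pay rate -> basic weekly pay
basic_pay = acc                          # one stored copy, needed for the tax step
acc = acc - basic_pay * income_tax_rate  # subtract income tax -> pay after tax
acc = acc - acc * retirement_rate        # subtract the retirement contribution
print(f"weekly pay after deductions: {acc:.2f}")  # 760.00
```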
Central processing unit;Digital registers
https://en.wikipedia.org/wiki/Afterglow
An afterglow in meteorology consists of several atmospheric optical phenomena, with a general definition as a broad arch of whitish or pinkish sunlight in the twilight sky, consisting of the bright segment and the purple light. Purple light mainly occurs when the Sun is 2–6° below the horizon, from civil to nautical twilight, while the bright segment lasts until the end of the nautical twilight. Afterglow is often discussed in connection with volcanic eruptions, in which case its purple light is treated as a distinct volcanic purple light. Specifically in volcanic occurrences it is light scattered by fine particulates, like dust, suspended in the atmosphere. In the case of alpenglow, which is similar to the Belt of Venus, afterglow is used in general for the golden-red glowing light from the sunset and sunrise reflected in the sky, and in particular for its last stage, when the purple light is reflected. The opposite of an afterglow is a foreglow, which occurs before sunrise. Around civil twilight, during the golden hour, sunlight reaches Earth most intensely in its low-energy, low-frequency red component. During this part of civil twilight, after sunset and before dawn, the red sunlight remains visible by scattering through particles in the air. Backscattering, possibly after being reflected off clouds or high snowfields in mountain regions, furthermore creates a reddish to pinkish light. The high-energy and high-frequency components of light towards blue are scattered out broadly, producing the broader blue light of nautical twilight before or after the reddish light of civil twilight, while in combination with the reddish light producing the purple light. This period in which blue dominates is referred to as the blue hour and is, like the golden hour, widely treasured by photographers and painters. After the 1883 eruption of the volcano Krakatoa, a remarkable series of red sunsets appeared worldwide. An enormous amount of exceedingly fine dust was blown to a great height
Atmospheric optical phenomena
https://en.wikipedia.org/wiki/Arteriovenous%20malformation
An arteriovenous malformation (AVM) is an abnormal connection between arteries and veins, bypassing the capillary system. Usually congenital, this vascular anomaly is widely known because of its occurrence in the central nervous system (usually as a cerebral AVM), but can appear anywhere in the body. The symptoms of AVMs can range from none at all to intense pain or bleeding, and they can lead to other serious medical problems. Signs and symptoms Symptoms of AVMs vary according to their location. Most neurological AVMs produce few to no symptoms. Often the malformation is discovered as part of an autopsy or during treatment of an unrelated disorder (an "incidental finding"); in rare cases, its expansion or a micro-bleed from an AVM in the brain can cause epilepsy, neurological deficit, or pain. The most general symptoms of a cerebral AVM include headaches and epileptic seizures, with more specific symptoms that normally depend on its location and the individual, including: Difficulties with movement coordination, including muscle weakness and even paralysis; Vertigo (dizziness); Difficulties of speech (dysarthria) and communication, as found with aphasia; Difficulties with everyday activities, as found with apraxia; Abnormal sensations (numbness, tingling, or spontaneous pain); Memory and thought-related problems, such as confusion, dementia, or hallucinations. Cerebral AVMs may present themselves in a number of different ways: Bleeding (45% of cases) "parkinsonism" 4 symptoms in Parkinson's disease. Acute onset of severe headache. May be described as the worst headache of the patient's life. Depending on the location of bleeding, may be associated with new fixed neurologic deficit. In unruptured brain AVMs, the risk of spontaneous bleeding may be as low as 1% per year. After a first rupture, the annual bleeding risk may increase to more than 5%. Seizure or brain seizure (46%). Depending on the place of the AVM, it can contribute to loss of vision. He
Angiogenesis;Congenital vascular defects;Gross pathology;RASopathies;Vascular anomalies
https://en.wikipedia.org/wiki/Ascending%20chain%20condition
In mathematics, the ascending chain condition (ACC) and descending chain condition (DCC) are finiteness properties satisfied by some algebraic structures, most importantly ideals in certain commutative rings. These conditions played an important role in the development of the structure theory of commutative rings in the works of David Hilbert, Emmy Noether, and Emil Artin. The conditions themselves can be stated in an abstract form, so that they make sense for any partially ordered set. This point of view is useful in abstract algebraic dimension theory due to Gabriel and Rentschler. Definition A partially ordered set (poset) P is said to satisfy the ascending chain condition (ACC) if no infinite strictly ascending sequence of elements of P exists. Equivalently, every weakly ascending sequence of elements of P eventually stabilizes, meaning that there exists a positive integer n such that Similarly, P is said to satisfy the descending chain condition (DCC) if there is no infinite strictly descending chain of elements of P. Equivalently, every weakly descending sequence of elements of P eventually stabilizes. Comments Assuming the axiom of dependent choice, the descending chain condition on (possibly infinite) poset P is equivalent to P being well-founded: every nonempty subset of P has a minimal element (also called the minimal condition or minimum condition). A totally ordered set that is well-founded is a well-ordered set. Similarly, the ascending chain condition is equivalent to P being converse well-founded (again, assuming dependent choice): every nonempty subset of P has a maximal element (the maximal condition or maximum condition). Every finite poset satisfies both the ascending and descending chain conditions, and thus is both well-founded and converse well-founded. Example Consider the ring of integers. Each ideal of consists of all multiples of some number . For example, the ideal consists of all multiples of . Let be the ideal c
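The stabilisation form of the conditions, and the integer-ideal example the passage begins, can be written out briefly. A sketch in LaTeX using standard notation that does not appear verbatim in the extract:

```latex
% Ascending chain condition: every weakly ascending chain eventually stabilises
a_1 \le a_2 \le a_3 \le \cdots
\quad\Longrightarrow\quad
\exists\, n \in \mathbb{Z}_{>0} \ \text{such that} \ a_n = a_{n+1} = a_{n+2} = \cdots
% The descending chain condition is the dual statement for weakly descending chains.

% Ideals of the integers, ordered by inclusion: (m) \supseteq (n) exactly when m divides n,
% so an ascending chain of ideals
(n_1) \subseteq (n_2) \subseteq (n_3) \subseteq \cdots
% corresponds to a chain of successive divisors n_2 \mid n_1,\ n_3 \mid n_2, \ldots
% and must stabilise; in this sense \mathbb{Z} satisfies the ACC on ideals (it is Noetherian).
```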
Commutative algebra;Order theory;Wellfoundedness
https://en.wikipedia.org/wiki/Absolute%20infinite
The absolute infinite (symbol: Ω), in context often called "absolute", is an extension of the idea of infinity proposed by mathematician Georg Cantor. Cantor linked the absolute infinite with God, and believed that it had various mathematical properties, including the reflection principle: every property of the absolute infinite is also held by some smaller object. Cantor's view Cantor said: While using the Latin expression in Deo (in God), Cantor identifies absolute infinity with God (GA 175–176, 376, 378, 386, 399). According to Cantor, Absolute Infinity is beyond mathematical comprehension and shall be interpreted in terms of negative theology. Cantor also mentioned the idea in his letters to Richard Dedekind (text in square brackets not present in original): The Burali-Forti paradox The idea that the collection of all ordinal numbers cannot logically exist seems paradoxical to many. This is related to the Burali-Forti's paradox which implies that there can be no greatest ordinal number. All of these problems can be traced back to the idea that, for every property that can be logically defined, there exists a set of all objects that have that property. However, as in Cantor's argument (above), this idea leads to difficulties. More generally, as noted by A. W. Moore, there can be no end to the process of set formation, and thus no such thing as the totality of all sets, or the set hierarchy. Any such totality would itself have to be a set, thus lying somewhere within the hierarchy and thus failing to contain every set. A standard solution to this problem is found in Zermelo set theory, which does not allow the unrestricted formation of sets from arbitrary properties. Rather, we may form the set of all objects that have a given property and lie in some given set (Zermelo's Axiom of Separation). This allows for the formation of sets based on properties, in a limited sense, while (hopefully) preserving the consistency of the theory. While this solves the lo
Conceptions of God;Infinity;Philosophy of mathematics;Superlatives in religion
https://en.wikipedia.org/wiki/Acceptance%20testing
In engineering and its various subdisciplines, acceptance testing is a test conducted to determine if the requirements of a specification or contract are met. It may involve chemical tests, physical tests, or performance tests. In systems engineering, it may involve black-box testing performed on a system (for example: a piece of software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery. In software testing, the ISTQB defines acceptance testing as: The final test in the QA lifecycle, user acceptance testing, is conducted just before the final release to assess whether the product or application can handle real-world scenarios. By replicating user behavior, it checks if the system satisfies business requirements and rejects changes if certain criteria are not met. Some forms of acceptance testing are, user acceptance testing (UAT), end-user testing, operational acceptance testing (OAT), acceptance test-driven development (ATDD) and field (acceptance) testing. Acceptance criteria are the criteria that a system or component must satisfy in order to be accepted by a user, customer, or other authorized entity. Overview Testing is a set of activities conducted to facilitate the discovery and/or evaluation of properties of one or more items under test. Each test, known as a test case, exercises a set of predefined test activities, developed to drive the execution of the test item to meet test objectives; including correct implementation, error identification, quality verification, and other valued details. The test environment is usually designed to be identical, or as close as possible, to the anticipated production environment. It includes all facilities, hardware, software, firmware, procedures, and/or documentation intended for or used to perform the testing of software. UAT and OAT test cases are ideally derived in collaboration with business customers, business analysts, testers, and developers. These tests must in
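As a concrete example of acceptance criteria being checked against user-visible behaviour, here is a minimal automated acceptance test written for pytest. The checkout rule, the function under test and every name are assumptions invented for this sketch; they are not drawn from the ISTQB material referenced above.

```python
# test_checkout_acceptance.py -- run with `pytest`
# Assumed acceptance criterion: "Orders of 50.00 or more ship for free;
# smaller orders pay a 4.99 shipping fee."

def shipping_fee(order_total: float) -> float:
    """Stand-in for the real system under test."""
    return 0.0 if order_total >= 50.0 else 4.99

def test_free_shipping_when_threshold_met():
    # Replicates the user scenario of an order exactly at the threshold.
    assert shipping_fee(50.00) == 0.0

def test_small_order_pays_standard_fee():
    assert shipping_fee(49.99) == 4.99
```

In practice such cases are agreed with business stakeholders and run in an environment mirroring production, as the overview describes; a failing criterion blocks acceptance of the change.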
Agile software development;Facilities engineering;Hardware testing;Procurement;Software testing
https://en.wikipedia.org/wiki/Amygdalin
Amygdalin (from Ancient Greek: 'almond') is a naturally occurring chemical compound found in many plants, most notably in the seeds (kernels, pips or stones) of apricots, bitter almonds, apples, peaches, cherries and plums, and in the roots of manioc. Amygdalin is classified as a cyanogenic glycoside, because each amygdalin molecule includes a nitrile group, which can be released as the toxic cyanide anion by the action of a beta-glucosidase. Eating amygdalin will cause it to release cyanide in the human body, and may lead to cyanide poisoning. Since the early 1950s, both amygdalin and a chemical derivative named laetrile have been promoted as alternative cancer treatments, often under the misnomer vitamin B17 (neither amygdalin nor laetrile is a vitamin). Scientific study has found them to not only be clinically ineffective in treating cancer, but also potentially toxic or lethal when taken by mouth due to cyanide poisoning. The promotion of laetrile to treat cancer has been described in the medical literature as a canonical example of quackery and as "the slickest, most sophisticated, and certainly the most remunerative cancer quack promotion in medical history". It has also been described as traditional Chinese medicine. Sources Amygdalin is contained in Rosaceae plants stone fruit kernels, such as almonds, apricot (14 g/kg), peach (6.8 g/kg), and plum (4–17.5 g/kg depending on variety), and also in the seeds of the apple (3 g/kg). In one study, bitter almond amygdalin concentrations ranged from 33 to 54 g/kg depending on variety; semibitter varieties averaged 1 g/kg and sweet varieties averaged 0.063 g/kg with significant variability based on variety and growing region. Chemistry Amygdalin is a cyanogenic glycoside derived from the aromatic amino acid phenylalanine. Amygdalin and prunasin are common among plants of the family Rosaceae, particularly the genus Prunus, Poaceae (grasses), Fabaceae (legumes), and in other food plants, including flaxseed and
Alternative cancer treatments;B;Cyanogenic glycosides;Health fraud;Plant toxins
https://en.wikipedia.org/wiki/Brackish%20water
Brackish water, sometimes termed brack water, is water occurring in a natural environment that has more salinity than freshwater, but not as much as seawater. It may result from mixing seawater (salt water) and fresh water together, as in estuaries, or it may occur in brackish fossil aquifers. The word comes from the Middle Dutch root brak. Certain human activities can produce brackish water, in particular civil engineering projects such as dikes and the flooding of coastal marshland to produce brackish water pools for freshwater prawn farming. Brackish water is also the primary waste product of the salinity gradient power process. Because brackish water is hostile to the growth of most terrestrial plant species, without appropriate management it can be damaging to the environment (see article on shrimp farms). Technically, brackish water contains between 0.5 and 30 grams of salt per litre—more often expressed as 0.5 to 30 parts per thousand (‰), which is a specific gravity of between 1.0004 and 1.0226. Thus, brackish covers a range of salinity regimes and is not considered a precisely defined condition. It is characteristic of many brackish surface waters that their salinity can vary considerably over space or time. Water with a salt concentration greater than 30‰ is considered saline. Brackish water habitats Estuaries Brackish water condition commonly occurs when fresh water meets seawater. In fact, the most extensive brackish water habitats worldwide are estuaries, where a river meets the sea. The River Thames flowing through London is a classic river estuary. The town of Teddington a few miles west of London marks the boundary between the tidal and non-tidal parts of the Thames, although it is still considered a freshwater river about as far east as Battersea insofar as the average salinity is very low and the fish fauna consists predominantly of freshwater species such as roach, dace, carp, perch, and pike. The Thames Estuary becomes brackish between Batt
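The quoted bounds (0.5 to 30 grams of salt per litre, with anything above that considered saline) translate directly into a one-function classifier. A sketch in Python using exactly those thresholds; the function and label names are my own.

```python
def classify_by_salinity(grams_salt_per_litre: float) -> str:
    """Rough water classification using the 0.5 and 30 g/L bounds quoted above."""
    if grams_salt_per_litre < 0.5:
        return "fresh"
    if grams_salt_per_litre <= 30.0:
        return "brackish"
    return "saline"

# Illustrative sample values only (e.g. a river, an estuary, open ocean).
for sample in (0.2, 5.0, 35.0):
    print(sample, classify_by_salinity(sample))
```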
*;Aquatic ecology;Coastal geography;Liquid water
https://en.wikipedia.org/wiki/Bluetooth%20Special%20Interest%20Group
The Bluetooth Special Interest Group (Bluetooth SIG) is the standards organization that oversees the development of Bluetooth standards and the licensing of the Bluetooth technologies and trademarks to manufacturers. The SIG is a not-for-profit, non-stock corporation founded in September 1998. The SIG is headquartered in Kirkland, Washington, US. The SIG does not make, manufacture or sell Bluetooth-enabled products. Introduction Bluetooth technology provides a way to exchange information between wireless devices such as PDAs, laptops, computers, printers and digital cameras via a secure, low-cost, globally available short-range radio frequency band. Originally developed by Ericsson, Bluetooth technology is now used in many different products by many different manufacturers. These manufacturers must be either Associate or Promoter members of (see below) the Bluetooth SIG before they are granted early access to the Bluetooth specifications, but published Bluetooth specifications are available online via the Bluetooth SIG Website bluetooth.com. The SIG owns the Bluetooth word mark, figure mark and combination mark. These trademarks are licensed out for use to companies that are incorporating Bluetooth wireless technology into their products. To become a licensee, a company must become a member of the Bluetooth SIG. The SIG also manages the Bluetooth SIG Qualification program, a certification process required for any product using Bluetooth wireless technology and a pre-condition of the intellectual property license for Bluetooth technology. The main tasks for the SIG are to publish the Bluetooth specifications, protect the Bluetooth trademarks and evangelize Bluetooth wireless technology. In 2016, the SIG introduced a new visual and creative identity to support Bluetooth technology as the catalyst for the Internet of Things (IoT). This change included an updated logo, a new tagline and deprecation of the Bluetooth Smart and Bluetooth Smart Ready logos. At its i
Bluetooth;Kirkland, Washington;Organizations based in Washington (state);Organizations established in 1998;Standards organizations in the United States
https://en.wikipedia.org/wiki/Bijection
In mathematics, a bijection, bijective function, or one-to-one correspondence is a function between two sets such that each element of the second set (the codomain) is the image of exactly one element of the first set (the domain). Equivalently, a bijection is a relation between two sets such that each element of either set is paired with exactly one element of the other set. A function is bijective if and only if it is invertible; that is, a function is bijective if and only if there is a function the inverse of , such that each of the two ways for composing the two functions produces an identity function: for each in and for each in For example, the multiplication by two defines a bijection from the integers to the even numbers, which has the division by two as its inverse function. A function is bijective if and only if it is both injective (or one-to-one)—meaning that each element in the codomain is mapped from at most one element of the domain—and surjective (or onto)—meaning that each element of the codomain is mapped from at least one element of the domain. The term one-to-one correspondence must not be confused with one-to-one function, which means injective but not necessarily surjective. The elementary operation of counting establishes a bijection from some finite set to the first natural numbers , up to the number of elements in the counted set. It results that two finite sets have the same number of elements if and only if there exists a bijection between them. More generally, two sets are said to have the same cardinal number if there exists a bijection between them. A bijective function from a set to itself is also called a permutation, and the set of all permutations of a set forms its symmetric group. Some bijections with further properties have received specific names, which include automorphisms, isomorphisms, homeomorphisms, diffeomorphisms, permutation groups, and most geometric transformations. Galois correspondences are bijections
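The doubling map mentioned above, from the integers onto the even numbers, makes the definitions concrete: composing it with halving in either order gives the identity, and on any finite sample it is both injective and surjective onto the even numbers in range. A sketch in Python; the helper names are my own.

```python
def double(n: int) -> int:   # bijection from the integers to the even integers
    return 2 * n

def halve(m: int) -> int:    # its inverse on the even integers
    return m // 2

# Invertibility: both compositions are the identity on a sample of inputs.
assert all(halve(double(n)) == n for n in range(-100, 101))
assert all(double(halve(m)) == m for m in range(-100, 101, 2))

# Injective + surjective on a finite restriction of the domain.
domain = range(-5, 6)
image = [double(n) for n in domain]
assert len(set(image)) == len(domain)        # no two inputs collide
assert set(image) == set(range(-10, 11, 2))  # every even number in range is hit
print("doubling acts as a bijection on the sampled sets")
```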
Basic concepts in set theory;Functions and mappings;Mathematical relations;Types of functions
https://en.wikipedia.org/wiki/Bandwidth%20%28signal%20processing%29
Bandwidth is the difference between the upper and lower frequencies in a continuous band of frequencies. It is typically measured in unit of hertz (symbol Hz). It may refer more specifically to two subcategories: Passband bandwidth is the difference between the upper and lower cutoff frequencies of, for example, a band-pass filter, a communication channel, or a signal spectrum. Baseband bandwidth is equal to the upper cutoff frequency of a low-pass filter or baseband signal, which includes a zero frequency. Bandwidth in hertz is a central concept in many fields, including electronics, information theory, digital communications, radio communications, signal processing, and spectroscopy and is one of the determinants of the capacity of a given communication channel. A key characteristic of bandwidth is that any band of a given width can carry the same amount of information, regardless of where that band is located in the frequency spectrum. For example, a 3 kHz band can carry a telephone conversation whether that band is at baseband (as in a POTS telephone line) or modulated to some higher frequency. However, wide bandwidths are easier to obtain and process at higher frequencies because the is smaller. Overview Bandwidth is a key concept in many telecommunications applications. In radio communications, for example, bandwidth is the frequency range occupied by a modulated carrier signal. An FM radio receiver's tuner spans a limited range of frequencies. A government agency (such as the Federal Communications Commission in the United States) may apportion the regionally available bandwidth to broadcast license holders so that their signals do not mutually interfere. In this context, bandwidth is also known as channel spacing. For other applications, there are other definitions. One definition of bandwidth, for a system, could be the range of frequencies over which the system produces a specified level of performance. A less strict and more practically useful
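The comparison of a 3 kHz telephone channel at baseband with the same channel modulated onto a much higher carrier is usually made in terms of fractional bandwidth, the absolute bandwidth divided by the centre frequency. A small illustrative calculation in Python; the carrier frequencies chosen are arbitrary assumptions.

```python
def fractional_bandwidth(bandwidth_hz: float, centre_hz: float) -> float:
    """Absolute bandwidth divided by centre frequency (dimensionless)."""
    return bandwidth_hz / centre_hz

CHANNEL_BW = 3_000.0  # the 3 kHz voice channel from the text

# Same absolute bandwidth, very different fractional bandwidth (assumed carriers).
for centre in (3_000.0, 100e6, 2.4e9):  # near baseband, FM broadcast band, microwave
    fb = fractional_bandwidth(CHANNEL_BW, centre)
    print(f"centre {centre:>13.0f} Hz -> fractional bandwidth {fb:.2e}")
# The channel carries the same information in every case, but occupies a far
# smaller fraction of the spectrum around a high-frequency carrier, which is
# what makes it easier to obtain and process there.
```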
Filter frequency response;Signal processing;Spectrum (physical sciences);Telecommunication theory
https://en.wikipedia.org/wiki/Basel%20Convention
The Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal, usually known as the Basel Convention, is an international treaty that was designed to reduce the movements of hazardous waste between nations, and specifically to restrict the transfer of hazardous waste from developed to less developed countries. It does not address the movement of radioactive waste, controlled by the International Atomic Energy Agency. The Basel Convention is also intended to minimize the rate and toxicity of wastes generated, to ensure their environmentally sound management as closely as possible to the source of generation, and to assist developing countries in environmentally sound management of the hazardous and other wastes they generate. The convention was opened for signature on 21 March 1989, and entered into force on 5 May 1992. As of June 2024, there are 191 parties to the convention. In addition, Haiti and the United States have signed the convention but did not ratify it. Following a petition urging action on the issue signed by more than a million people around the world, most of the world's countries, but not the United States, agreed in May 2019 to an amendment of the Basel Convention to include plastic waste as regulated material. Although the United States is not a party to the treaty, export shipments of plastic waste from the United States are now "criminal traffic as soon as the ships get on the high seas," according to the Basel Action Network (BAN), and carriers of such shipments may face liability, because the transportation of plastic waste is prohibited in just about every other country. History With the tightening of environmental laws (for example, RCRA) in developed nations in the 1970s, disposal costs for hazardous waste rose dramatically. At the same time, the globalization of shipping made cross-border movement of waste easier, and many less developed countries were desperate for foreign currency. Consequently,
1992 in the environment;Chemical safety;Environmental treaties;Hazardous waste;Pollution;Treaties concluded in 1989;Treaties entered into force in 1992;Waste treaties
https://en.wikipedia.org/wiki/Borsuk%E2%80%93Ulam%20theorem
In mathematics, the Borsuk–Ulam theorem states that every continuous function from an n-sphere into Euclidean n-space maps some pair of antipodal points to the same point. Here, two points on a sphere are called antipodal if they are in exactly opposite directions from the sphere's center. Formally: if is continuous then there exists an such that: . The case can be illustrated by saying that there always exist a pair of opposite points on the Earth's equator with the same temperature. The same is true for any circle. This assumes the temperature varies continuously in space, which is, however, not always the case. The case is often illustrated by saying that at any moment, there is always a pair of antipodal points on the Earth's surface with equal temperatures and equal barometric pressures, assuming that both parameters vary continuously in space. The Borsuk–Ulam theorem has several equivalent statements in terms of odd functions. Recall that is the n-sphere and is the n-ball: If is a continuous odd function, then there exists an such that: . If is a continuous function which is odd on (the boundary of ), then there exists an such that: . History According to , the first historical mention of the statement of the Borsuk–Ulam theorem appears in . The first proof was given by , where the formulation of the problem was attributed to Stanisław Ulam. Since then, many alternative proofs have been found by various authors, as collected by . Equivalent statements The following statements are equivalent to the Borsuk–Ulam theorem. With odd functions A function is called odd (aka antipodal or antipode-preserving) if for every , . The Borsuk–Ulam theorem is equivalent to each of the following statements: (1) Each continuous odd function has a zero. (2) There is no continuous odd function . Here is a proof that the Borsuk-Ulam theorem is equivalent to (1): () If the theorem is correct, then it is specifically correct for odd functions, and
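For reference, a compact LaTeX statement of the theorem and of the two odd-function forms discussed above, written in the standard notation (maps from the n-sphere S^n or the n-ball B^n into R^n); this is a sketch of the usual formulation rather than a quotation of the article.

```latex
% Borsuk-Ulam theorem
f : S^{n} \to \mathbb{R}^{n} \ \text{continuous}
\;\Longrightarrow\; \exists\, x \in S^{n} :\ f(x) = f(-x)

% Equivalent odd-function forms (g is odd when g(-x) = -g(x) for every x)
g : S^{n} \to \mathbb{R}^{n} \ \text{continuous and odd}
\;\Longrightarrow\; \exists\, x \in S^{n} :\ g(x) = 0

h : B^{n} \to \mathbb{R}^{n} \ \text{continuous and odd on } S^{n-1} = \partial B^{n}
\;\Longrightarrow\; \exists\, x \in B^{n} :\ h(x) = 0
```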
Combinatorics;Theorems in algebraic topology;Theorems in topology;Theory of continuous functions
https://en.wikipedia.org/wiki/BQP
In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP. A decision problem is a member of BQP if there exists a quantum algorithm (an algorithm that runs on a quantum computer) that solves the decision problem with high probability and is guaranteed to run in polynomial time. A run of the algorithm will correctly solve the decision problem with a probability of at least 2/3. Definition BQP can be viewed as the languages associated with certain bounded-error uniform families of quantum circuits. A language L is in BQP if and only if there exists a polynomial-time uniform family of quantum circuits , such that For all , Qn takes n qubits as input and outputs 1 bit For all x in L, For all x not in L, Alternatively, one can define BQP in terms of quantum Turing machines. A language L is in BQP if and only if there exists a polynomial quantum Turing machine that accepts L with an error probability of at most 1/3 for all instances. Similarly to other "bounded error" probabilistic classes, the choice of 1/3 in the definition is arbitrary. We can run the algorithm a constant number of times and take a majority vote to achieve any desired probability of correctness less than 1, using the Chernoff bound. The complexity class is unchanged by allowing error as high as 1/2 − n−c on the one hand, or requiring error as small as 2−nc on the other hand, where c is any positive constant, and n is the length of input. Relationship to other complexity classes BQP is defined for quantum computers; the corresponding complexity class for classical computers (or more formally for probabilistic Turing machines) is BPP. Just like P and BPP, BQP is low for itself, which means . Informally, this is true because polynomial time algorithms are closed under comp
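The arbitrariness of the 1/3 error bound comes from majority voting: repeating a 2/3-correct procedure an odd number of times and taking the majority answer drives the error down exponentially, which is the Chernoff-bound argument cited above. A small Monte Carlo illustration in Python; the repetition counts and trial count are arbitrary choices for the sketch.

```python
import random

def majority_success_rate(p_correct: float, repetitions: int, trials: int = 20_000) -> float:
    """Estimate how often a majority vote over independent runs of a
    p_correct-accurate procedure returns the right answer."""
    wins = 0
    for _ in range(trials):
        correct_runs = sum(random.random() < p_correct for _ in range(repetitions))
        wins += correct_runs > repetitions / 2
    return wins / trials

random.seed(0)
for k in (1, 5, 15, 31):  # odd values so a majority always exists
    print(f"{k:>2} repetitions -> success rate ~ {majority_success_rate(2/3, k):.3f}")
# The failure probability shrinks roughly exponentially in the number of
# repetitions, so any constant error bound strictly below 1/2 defines the
# same class; 1/3 is merely a convenient choice.
```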
Probabilistic complexity classes;Quantum complexity theory;Quantum computing
https://en.wikipedia.org/wiki/Bioleaching
Bioleaching is the extraction or liberation of metals from their ores through the use of living organisms. Bioleaching is one of several applications within biohydrometallurgy and several methods are used to treat ores or concentrates containing copper, zinc, lead, arsenic, antimony, nickel, molybdenum, gold, silver, and cobalt. Bioleaching falls into two broad categories. The first is the use of microorganisms to oxidize refractory minerals to release valuable metals such as gold and silver. Most commonly the minerals that are the target of oxidization are pyrite and arsenopyrite. The second category is leaching of sulphide minerals to release the associated metal, for example, leaching of pentlandite to release nickel, or the leaching of chalcocite, covellite or chalcopyrite to release copper. Process Bioleaching can involve numerous ferrous iron and sulfur oxidizing bacteria, including Acidithiobacillus ferrooxidans (formerly known as Thiobacillus ferrooxidans) and Acidithiobacillus thiooxidans (formerly known as Thiobacillus thiooxidans). As a general principle, in one proposed method of bacterial leaching known as Indirect Leaching, Fe3+ ions are used to oxidize the ore. This step is entirely independent of microbes. The role of the bacteria is further oxidation of the ore, but also the regeneration of the chemical oxidant Fe3+ from Fe2+. For example, bacteria catalyse the breakdown of the mineral pyrite (FeS2) by oxidising the sulfur and metal (in this case ferrous iron, (Fe2+)) using oxygen. This yields soluble products that can be further purified and refined to yield the desired metal. Pyrite leaching (FeS2): In the first step, disulfide is spontaneously oxidized to thiosulfate by ferric ion (Fe3+), which in turn is reduced to give ferrous ion (Fe2+): (1) FeS2 + 6 Fe3+ + 3 H2O → S2O3^2− + 7 Fe2+ + 6 H+ (spontaneous) The ferrous ion is then oxidized by bacteria using oxygen: (2) 4 Fe2+ + O2 + 4 H+ → 4 Fe3+ + 2 H2O (iron oxidizers) Thiosulfate is also oxidized by bacteria to give sulfate: (3) S2O3^2− + 2 O2 + H2O → 2 SO4^2− + 2 H+ (sulfur oxidizers)
Applied microbiology;Biotechnology;Economic geology;Metallurgical processes
https://en.wikipedia.org/wiki/Boiling%20point
The boiling point of a substance is the temperature at which the vapor pressure of a liquid equals the pressure surrounding the liquid and the liquid changes into a vapor. The boiling point of a liquid varies depending upon the surrounding environmental pressure. A liquid in a partial vacuum, i.e., under a lower pressure, has a lower boiling point than when that liquid is at atmospheric pressure. Because of this, water boils at 100°C (or with scientific precision: ) under standard pressure at sea level, but at at altitude. For a given pressure, different liquids will boil at different temperatures. The normal boiling point (also called the atmospheric boiling point or the atmospheric pressure boiling point) of a liquid is the special case in which the vapor pressure of the liquid equals the defined atmospheric pressure at sea level, one atmosphere. At that temperature, the vapor pressure of the liquid becomes sufficient to overcome atmospheric pressure and allow bubbles of vapor to form inside the bulk of the liquid. The standard boiling point has been defined by IUPAC since 1982 as the temperature at which boiling occurs under a pressure of one bar. The heat of vaporization is the energy required to transform a given quantity (a mol, kg, pound, etc.) of a substance from a liquid into a gas at a given pressure (often atmospheric pressure). Liquids may change to a vapor at temperatures below their boiling points through the process of evaporation. Evaporation is a surface phenomenon in which molecules located near the liquid's edge, not contained by enough liquid pressure on that side, escape into the surroundings as vapor. On the other hand, boiling is a process in which molecules anywhere in the liquid escape, resulting in the formation of vapor bubbles within the liquid. Saturation temperature and pressure A saturated liquid contains as much thermal energy as it can without boiling (or conversely a saturated vapor contains as little thermal energy as it c
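A rough numerical sketch of this pressure dependence, using the Clausius–Clapeyron relation with a constant heat of vaporization for water (about 40.7 kJ/mol is assumed here; real behaviour deviates somewhat from this approximation).

import math

R = 8.314            # J/(mol*K)
dH_vap = 40_700.0    # J/mol, assumed roughly constant for water near 100 degC
T0, P0 = 373.15, 101_325.0   # normal boiling point at one atmosphere

def boiling_point_kelvin(pressure_pa):
    # Clausius-Clapeyron estimate: 1/T = 1/T0 - (R/dH) * ln(P/P0)
    return 1.0 / (1.0 / T0 - (R / dH_vap) * math.log(pressure_pa / P0))

for p_atm in (1.0, 0.7, 0.5):
    t_c = boiling_point_kelvin(p_atm * 101_325.0) - 273.15
    print(f"{p_atm:.1f} atm -> about {t_c:.1f} degC")

Lower ambient pressure gives a lower estimated boiling temperature, in line with the behaviour described above (roughly 90 degC near 0.7 atm under these assumptions).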
Gases;Meteorological quantities;Temperature;Threshold temperatures
https://en.wikipedia.org/wiki/Bohrium
Bohrium is a synthetic chemical element; it has symbol Bh and atomic number 107. It is named after Danish physicist Niels Bohr. As a synthetic element, it can be created in particle accelerators but is not found in nature. All known isotopes of bohrium are highly radioactive; the most stable known isotope is 270Bh with a half-life of approximately 2.4 minutes, though the unconfirmed 278Bh may have a longer half-life of about 11.5 minutes. In the periodic table, it is a d-block transactinide element. It is a member of the 7th period and belongs to the group 7 elements as the fifth member of the 6d series of transition metals. Chemistry experiments have confirmed that bohrium behaves as the heavier homologue to rhenium in group 7. The chemical properties of bohrium are characterized only partly, but they compare well with the chemistry of the other group 7 elements. Introduction History Discovery Two groups claimed discovery of the element. Evidence of bohrium was first reported in 1976 by a Soviet research team led by Yuri Oganessian, in which targets of bismuth-209 and lead-208 were bombarded with accelerated nuclei of chromium-54 and manganese-55, respectively. Two activities, one with a half-life of one to two milliseconds, and the other with an approximately five-second half-life, were seen. Since the ratio of the intensities of these two activities was constant throughout the experiment, it was proposed that the first was from the isotope bohrium-261 and that the second was from its daughter dubnium-257. Later, the dubnium isotope was corrected to dubnium-258, which indeed has a five-second half-life (dubnium-257 has a one-second half-life); however, the half-life observed for its parent is much shorter than the half-lives later observed in the definitive discovery of bohrium at Darmstadt in 1981. The IUPAC/IUPAP Transfermium Working Group (TWG) concluded that while dubnium-258 was probably seen in this experiment, the evidence for the production of its pare
;Chemical elements;Chemical elements with hexagonal close-packed structure;Synthetic elements;Transition metals
https://en.wikipedia.org/wiki/Bayer%20designation
A Bayer designation is a stellar designation in which a specific star is identified by a Greek or Latin letter followed by the genitive form of its parent constellation's Latin name. The original list of Bayer designations contained 1564 stars. The brighter stars were assigned their first systematic names by the German astronomer Johann Bayer in 1603, in his star atlas Uranometria. Bayer catalogued only a few stars too far south to be seen from Germany, but later astronomers (including Nicolas-Louis de Lacaille and Benjamin Apthorp Gould) supplemented Bayer's catalog with entries for southern constellations. Scheme Bayer assigned a lowercase Greek letter (alpha (α), beta (β), gamma (γ), etc.) or a Latin letter (A, b, c, etc.) to each star he catalogued, combined with the Latin name of the star's parent constellation in genitive (possessive) form. The constellation name is frequently abbreviated to a standard three-letter form. For example, Aldebaran in the constellation Taurus (the Bull) is designated α Tauri (abbreviated α Tau, pronounced Alpha Tauri), which means "Alpha of the Bull". Bayer used Greek letters for the brighter stars, but the Greek alphabet has only twenty-four letters, while a single constellation may contain fifty or more stars visible to the naked eye. When the Greek letters ran out, Bayer continued with Latin letters: uppercase A, followed by lowercase b through z (omitting j and v, but o was included), for a total of another 24 letters. Bayer did not label "permanent" stars with uppercase letters (except for A, which he used instead of a to avoid confusion with α). However, a number of stars in southern constellations have uppercase letter designations, like B Centauri and G Scorpii. These letters were assigned by later astronomers, notably Lacaille in his Coelum Australe Stelliferum and Gould in his Uranometria Argentina. Lacaille followed Bayer's use of Greek letters, but this was insufficient for many constellations. He used first the lowe
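A toy sketch of the naming scheme, using a small hand-written lookup of genitive forms and three-letter abbreviations; the tables are illustrative only, not a complete catalogue.

GENITIVE = {"Taurus": "Tauri", "Centaurus": "Centauri", "Scorpius": "Scorpii"}
ABBREV = {"Taurus": "Tau", "Centaurus": "Cen", "Scorpius": "Sco"}

def bayer_designation(letter, constellation):
    # full form and the common three-letter abbreviated form
    return f"{letter} {GENITIVE[constellation]}", f"{letter} {ABBREV[constellation]}"

print(bayer_designation("α", "Taurus"))    # ('α Tauri', 'α Tau'), i.e. Aldebaran
print(bayer_designation("G", "Scorpius"))  # ('G Scorpii', 'G Sco')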
*;Astronomical catalogues
https://en.wikipedia.org/wiki/Broadcast%20domain
A broadcast domain is a logical division of a computer network, in which all nodes can reach each other by broadcast at the data link layer. A broadcast domain can be within the same LAN segment or it can be bridged to other LAN segments. In terms of current popular technologies, any computer connected to the same Ethernet repeater or switch is a member of the same broadcast domain. Further, any computer connected to the same set of interconnected switches or repeaters is a member of the same broadcast domain. Routers and other network-layer devices form boundaries between broadcast domains. The notion of a broadcast domain can be compared with a collision domain, which would be all nodes on the same set of inter-connected repeaters and divided by switches and network bridges. Collision domains are generally smaller than and contained within broadcast domains. While some data-link-layer devices are able to divide the collision domains, broadcast domains are only divided by network-layer devices such as routers or layer-3 switches. Separating VLANs divides broadcast domains as well. Further explanation The distinction between broadcast and collision domains comes about because simple Ethernet and similar systems use a shared medium for communication. In simple Ethernet (without switches or bridges), data frames are transmitted to all other nodes on a network. Each receiving node checks the destination address of each frame and simply ignores any frame not addressed to its own MAC address or the broadcast address. Switches act as buffers, receiving and analyzing the frames from each connected network segment. Frames destined for nodes connected to the originating segment are not forwarded by the switch. Frames destined for a specific node on a different segment are sent only to that segment. Only broadcast frames are forwarded to all other segments. This reduces unnecessary traffic and collisions. In such a switched network, transmitted frames may not be receive
Network architecture
https://en.wikipedia.org/wiki/Base%20pair
A base pair (bp) is a fundamental unit of double-stranded nucleic acids consisting of two nucleobases bound to each other by hydrogen bonds. They form the building blocks of the DNA double helix and contribute to the folded structure of both DNA and RNA. Dictated by specific hydrogen bonding patterns, "Watson–Crick" (or "Watson–Crick–Franklin") base pairs (guanine–cytosine and adenine–thymine) allow the DNA helix to maintain a regular helical structure that is subtly dependent on its nucleotide sequence. The complementary nature of this base-paired structure provides a redundant copy of the genetic information encoded within each strand of DNA. The regular structure and data redundancy provided by the DNA double helix make DNA well suited to the storage of genetic information, while base-pairing between DNA and incoming nucleotides provides the mechanism through which DNA polymerase replicates DNA and RNA polymerase transcribes DNA into RNA. Many DNA-binding proteins can recognize specific base-pairing patterns that identify particular regulatory regions of genes. Intramolecular base pairs can occur within single-stranded nucleic acids. This is particularly important in RNA molecules (e.g., transfer RNA), where Watson–Crick base pairs (guanine–cytosine and adenine–uracil) permit the formation of short double-stranded helices, and a wide variety of non–Watson–Crick interactions (e.g., G–U or A–A) allow RNAs to fold into a vast range of specific three-dimensional structures. In addition, base-pairing between transfer RNA (tRNA) and messenger RNA (mRNA) forms the basis for the molecular recognition events that result in the nucleotide sequence of mRNA becoming translated into the amino acid sequence of proteins via the genetic code. The size of an individual gene or an organism's entire genome is often measured in base pairs because DNA is usually double-stranded. Hence, the number of total base pairs is equal to the number of nucleotides in one of the strands (wit
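A minimal sketch of the complementarity described above: given one strand, the redundant copy carried by the other strand can be reconstructed (standard DNA alphabet only, no modified bases).

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    # read the complementary strand in the conventional 5' to 3' direction
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("ATGCGT"))   # ACGCAT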
Molecular genetics;Nucleic acids;Nucleobases
https://en.wikipedia.org/wiki/Backplane
A backplane or backplane system is a group of electrical connectors in parallel with each other, so that each pin of each connector is linked to the same relative pin of all the other connectors, forming a computer bus. It is used to connect several printed circuit boards together to make up a complete computer system. Backplanes commonly use a printed circuit board, but wire-wrapped backplanes have also been used in minicomputers and high-reliability applications. A backplane is generally differentiated from a motherboard by the lack of on-board processing and storage elements. A backplane uses plug-in cards for storage and processing. Usage Early microcomputer systems like the Altair 8800 used a backplane for the processor and expansion cards. Backplanes are normally used in preference to cables because of their greater reliability. In a cabled system, the cables need to be flexed every time that a card is added or removed from the system; this flexing eventually causes mechanical failures. A backplane does not suffer from this problem, so its service life is limited only by the longevity of its connectors. For example, DIN 41612 connectors (used in the VMEbus system) have three durability grades built to withstand (respectively) 50, 400 and 500 insertions and removals, or "mating cycles". To transmit information, Serial Back-Plane technology uses a low-voltage differential signaling transmission method for sending information. In addition, there are bus expansion cables which will extend a computer bus to an external backplane, usually located in an enclosure, to provide more or different slots than the host computer provides. These cable sets have a transmitter board located in the computer, an expansion board in the remote backplane, and a cable between the two. Active vis-à-vis passive backplanes Backplanes have grown in complexity from the simple Industry Standard Architecture (ISA) (used in the original IBM PC) or S-100 style where all the connectors w
Computer buses
https://en.wikipedia.org/wiki/Backward%20compatibility
In telecommunications and computing, backward compatibility (or backwards compatibility) is a property of an operating system, software, real-world product, or technology that allows for interoperability with an older legacy system, or with input designed for such a system. Modifying a system in a way that does not allow backward compatibility is sometimes called "breaking" backward compatibility. Such breaking usually incurs various types of costs, such as switching cost. A complementary concept is forward compatibility; a design that is forward-compatible usually has a roadmap for compatibility with future standards and products. Usage In hardware A simple example of both backward and forward compatibility is the introduction of FM radio in stereo. FM radio was initially mono, with only one audio channel represented by one signal. With the introduction of two-channel stereo FM radio, many listeners had only mono FM receivers. Forward compatibility for mono receivers with stereo signals was achieved by sending the sum of both left and right audio channels in one signal and the difference in another signal. That allows mono FM receivers to receive and decode the sum signal while ignoring the difference signal, which is necessary only for separating the audio channels. Stereo FM receivers can receive a mono signal and decode it without the need for a second signal, and they can separate a sum signal to left and right channels if both sum and difference signals are received. Without the requirement for backward compatibility, a simpler method could have been chosen. Full backward compatibility is particularly important in computer instruction set architectures, two of the most successful being the IBM 360/370/390/Zseries families of mainframes, and the Intel x86 family of microprocessors. IBM announced the first 360 models in 1964 and has continued to update the series ever since, with migration over the decades from 32-bit register/24-bit addresses to 64-bit re
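A small numerical sketch of the sum/difference scheme used for stereo FM: the sample values are arbitrary and serve only to show that a mono receiver uses the sum signal while a stereo receiver recovers both channels.

def encode(left, right):
    # broadcast a sum signal (usable alone by mono sets) and a difference signal
    return left + right, left - right

def decode_mono(sum_signal, diff_signal=None):
    return sum_signal              # the difference signal is simply ignored

def decode_stereo(sum_signal, diff_signal):
    return (sum_signal + diff_signal) / 2, (sum_signal - diff_signal) / 2

s, d = encode(0.8, 0.2)
print(decode_mono(s))         # 1.0  (combined left + right audio)
print(decode_stereo(s, d))    # (0.8, 0.2)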
;Interoperability
https://en.wikipedia.org/wiki/Bacterial%20conjugation
Bacterial conjugation is the transfer of genetic material between bacterial cells by direct cell-to-cell contact or by a bridge-like connection between two cells. This takes place through a pilus. It is a parasexual mode of reproduction in bacteria. It is a mechanism of horizontal gene transfer as are transformation and transduction although these two other mechanisms do not involve cell-to-cell contact. Classical E. coli bacterial conjugation is often regarded as the bacterial equivalent of sexual reproduction or mating, since it involves the exchange of genetic material. However, it is not sexual reproduction, since no exchange of gamete occurs, and indeed no generation of a new organism: instead, an existing organism is transformed. During classical E. coli conjugation, the donor cell provides a conjugative or mobilizable genetic element that is most often a plasmid or transposon. Most conjugative plasmids have systems ensuring that the recipient cell does not already contain a similar element. The genetic information transferred is often beneficial to the recipient. Benefits may include antibiotic resistance, xenobiotic tolerance or the ability to use new metabolites. Other elements can be detrimental, and may be viewed as bacterial parasites. Conjugation in Escherichia coli by spontaneous zygogenesis and in Mycobacterium smegmatis by distributive conjugal transfer differ from the better studied classical E. coli conjugation in that these cases involve substantial blending of the parental genomes. History The process was discovered by Joshua Lederberg and Edward Tatum in 1946. Mechanism Conjugation diagram Donor cell produces pilus. Pilus attaches to recipient cell and brings the two cells together. The mobile plasmid is nicked and a single strand of DNA is then transferred to the recipient cell. Both cells synthesize a complementary strand to produce a double stranded circular plasmid and also reproduce pili; both cells are now viable donor for the
Antimicrobial resistance;Bacteriology;Biotechnology;Modification of genetic information;Molecular biology
https://en.wikipedia.org/wiki/B%20%28programming%20language%29
B is a programming language developed at Bell Labs circa 1969 by Ken Thompson and Dennis Ritchie. B was derived from BCPL, and its name may possibly be a contraction of BCPL. Thompson's coworker Dennis Ritchie speculated that the name might be based on Bon, an earlier, but unrelated, programming language that Thompson designed for use on Multics. B was designed for recursive, non-numeric, machine-independent applications, such as system and language software. It was a typeless language, with the only data type being the underlying machine's natural memory word format, whatever that might be. Depending on the context, the word was treated either as an integer or a memory address. As machines with ASCII processing became common, notably the DEC PDP-11 that arrived at Bell Labs, support for character data stuffed in memory words became important. The typeless nature of the language was seen as a disadvantage, which led Thompson and Ritchie to develop an expanded version of the language supporting new internal and user-defined types, which became the C programming language. History Circa 1969, Ken Thompson and later Dennis Ritchie developed B basing it mainly on the BCPL language Thompson used in the Multics project. B was essentially the BCPL system stripped of any component Thompson felt he could do without in order to make it fit within the memory capacity of the minicomputers of the time. The BCPL to B transition also included changes made to suit Thompson's preferences (mostly along the lines of reducing the number of non-whitespace characters in a typical program). Much of the typical ALGOL-like syntax of BCPL was rather heavily changed in this process. The assignment operator := reverted to the = of Rutishauser's Superplan, and the equality operator = was replaced by ==. Thompson added "two-address assignment operators" using x =+ y syntax to add y to x (in C the operator is written +=). This syntax came from Douglas McIlroy's implementation of TMG, in wh
Procedural programming languages;Programming languages;Programming languages created in 1969
https://en.wikipedia.org/wiki/Wireless%20broadband
Wireless broadband is a telecommunications technology that provides high-speed wireless Internet access or computer networking access over a wide area. The term encompasses both fixed and mobile broadband. The term broadband Originally the word "broadband" had a technical meaning, but became a marketing term for any kind of relatively high-speed computer network or Internet access technology. According to the 802.16-2004 standard, broadband means "having instantaneous bandwidths greater than 1 MHz and supporting data rates greater than about 1.5 Mbit/s." The Federal Communications Commission (FCC) recently re-defined the word to mean download speeds of at least 25 Mbit/s and upload speeds of at least 3 Mbit/s. Technology and speeds A wireless broadband network is an outdoor fixed and/or mobile wireless network providing point-to-multipoint or point-to-point terrestrial wireless links for broadband services. Wireless networks can feature data rates exceeding 1 Gbit/s. Many fixed wireless networks are exclusively half-duplex (HDX); however, some licensed and unlicensed systems can also operate at full-duplex (FDX), allowing communication in both directions simultaneously. Outdoor fixed wireless broadband networks commonly use a priority TDMA based protocol in order to divide communication into timeslots. This timeslot technique eliminates many of the issues common to 802.11 Wi-Fi protocol in outdoor networks such as the hidden node problem. Few wireless Internet service providers (WISPs) provide download speeds of over 100 Mbit/s; most broadband wireless access (BWA) services are estimated to have a range of from a tower. Technologies used include Local Multipoint Distribution Service (LMDS) and Multichannel Multipoint Distribution Service (MMDS), as well as heavy use of the industrial, scientific and medical (ISM) radio bands. One particular access technology was standardized by IEEE 802.16, with products known as WiMAX. WiMAX is highly popular in Europe bu
Broadband;Wireless networking
https://en.wikipedia.org/wiki/Bilinear%20transform
The bilinear transform (also known as Tustin's method, after Arnold Tustin) is used in digital signal processing and discrete-time control theory to transform continuous-time system representations to discrete-time and vice versa. The bilinear transform is a special case of a conformal mapping (namely, a Möbius transformation), often used for converting a transfer function of a linear, time-invariant (LTI) filter in the continuous-time domain (often named an analog filter) to a transfer function of a linear, shift-invariant filter in the discrete-time domain (often named a digital filter although there are analog filters constructed with switched capacitors that are discrete-time filters). It maps positions on the axis, , in the s-plane to the unit circle, , in the z-plane. Other bilinear transforms can be used for warping the frequency response of any discrete-time linear system (for example to approximate the non-linear frequency resolution of the human auditory system) and are implementable in the discrete domain by replacing a system's unit delays with first order all-pass filters. The transform preserves stability and maps every point of the frequency response of the continuous-time filter, to a corresponding point in the frequency response of the discrete-time filter, although to a somewhat different frequency, as shown in the Frequency warping section below. This means that for every feature that one sees in the frequency response of the analog filter, there is a corresponding feature, with identical gain and phase shift, in the frequency response of the digital filter but, perhaps, at a somewhat different frequency. The change in frequency is barely noticeable at low frequencies but is quite evident at frequencies close to the Nyquist frequency. Discrete-time approximation The bilinear transform is a first-order Padé approximant of the natural logarithm function that is an exact mapping of the z-plane to the s-plane. When the Laplace transform
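A short SciPy sketch of the substitution s → (2/T)(z − 1)/(z + 1), discretizing a first-order analog low-pass H(s) = ωc/(s + ωc); the sampling rate and cutoff are assumed illustrative values, not taken from the text.

import numpy as np
from scipy.signal import bilinear

fs = 1000.0                 # assumed sampling rate, Hz
wc = 2 * np.pi * 100.0      # analog cutoff of 100 Hz
b_analog, a_analog = [wc], [1.0, wc]          # H(s) = wc / (s + wc)

b_digital, a_digital = bilinear(b_analog, a_analog, fs)
print(b_digital, a_digital)
# The mapping sends s = 0 to z = 1, so the DC gain of the analog prototype is preserved:
print(np.polyval(b_digital, 1.0) / np.polyval(a_digital, 1.0))   # ~1.0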
Control theory;Digital signal processing;Transforms
https://en.wikipedia.org/wiki/Bogie
A bogie ( ) (or truck in North American English) comprises two or more wheelsets (two wheels on an axle), in a frame, attached under a vehicle by a pivot. Bogies take various forms in various modes of transport. A bogie may remain normally attached (as on many railroad cars and semi-trailers) or be quickly detachable (as for a dolly in a road train or in railway bogie exchange). It may include suspension components within it (as most rail and trucking bogies do), or be solid and in turn be suspended (as are most bogies of tracked vehicles). It may be mounted on a swivel, as traditionally on a railway carriage or locomotive, additionally jointed and sprung (as in the landing gear of an airliner), or held in place by other means (centreless bogies). Although bogie is the preferred spelling and first-listed variant in various dictionaries, bogey and bogy are also used. Railway A bogie in the UK, or a railroad truck, wheel truck, or simply truck in North America, is a structure underneath a railway vehicle (wagon, coach or locomotive) to which axles (hence, wheels) are attached through bearings. In Indian English, bogie may also refer to an entire railway carriage. In South Africa, the term bogie is often alternatively used to refer to a freight or goods wagon (shortened from bogie wagon). A locomotive with a bogie was built by engineer William Chapman in 1812. It hauled itself along by chains and was not successful, but Chapman built a more successful locomotive with two gear-driven bogies in 1814. The bogie was first used in America for wagons on the Quincy Granite Railroad in 1829. The first successful locomotive with a bogie to guide the locomotive into curves while also supporting the smokebox was built by John B. Jervis in 1831. The concept took decades before it was widely accepted but eventually became a component of the vast majority of mainline locomotive designs. The first use of bogie coaches in Britain was in 1872 by the Festiniog Railway. The first sta
;Locomotive parts;Rail technologies;Vehicle technology
https://en.wikipedia.org/wiki/Binomial%20theorem
In elementary algebra, the binomial theorem (or binomial expansion) describes the algebraic expansion of powers of a binomial. According to the theorem, the power expands into a polynomial with terms of the form , where the exponents and are nonnegative integers satisfying and the coefficient of each term is a specific positive integer depending on and . For example, for , The coefficient in each term is known as the binomial coefficient or (the two have the same value). These coefficients for varying and can be arranged to form Pascal's triangle. These numbers also occur in combinatorics, where gives the number of different combinations (i.e. subsets) of elements that can be chosen from an -element set. Therefore is usually pronounced as " choose ". Statement According to the theorem, the expansion of any nonnegative integer power of the binomial is a sum of the form where each is a positive integer known as a binomial coefficient, defined as This formula is also referred to as the binomial formula or the binomial identity. Using summation notation, it can be written more concisely as The final expression follows from the previous one by the symmetry of and in the first expression, and by comparison it follows that the sequence of binomial coefficients in the formula is symmetrical, A simple variant of the binomial formula is obtained by substituting for , so that it involves only a single variable. In this form, the formula reads Examples The first few cases of the binomial theorem are: In general, for the expansion of on the right side in the th row (numbered so that the top row is the 0th row): the exponents of in the terms are (the last term implicitly contains ); the exponents of in the terms are (the first term implicitly contains ); the coefficients form the th row of Pascal's triangle; before combining like terms, there are terms in the expansion (not shown); after combining like terms, there are terms, and t
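A quick numerical check of the expansion using Python's built-in binomial coefficient; the standard form (x + y)^n = Σ C(n, k) x^(n−k) y^k is assumed here, since the symbolic statement above was lost in extraction.

from math import comb

def binomial_expand(x, y, n):
    # sum over k of C(n, k) * x**(n - k) * y**k
    return sum(comb(n, k) * x ** (n - k) * y ** k for k in range(n + 1))

x, y, n = 3.0, 2.0, 5
print(binomial_expand(x, y, n), (x + y) ** n)     # both 3125.0
print([comb(4, k) for k in range(5)])             # row 4 of Pascal's triangle: [1, 4, 6, 4, 1]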
Articles containing proofs;Factorial and binomial topics;Theorems about polynomials
https://en.wikipedia.org/wiki/Bidirectional%20text
A bidirectional text contains two text directionalities, right-to-left (RTL) and left-to-right (LTR). It generally involves text containing different types of alphabets, but may also refer to boustrophedon, which is changing text direction in each row. An example is the RTL Hebrew name Sarah: , spelled sin (ש) on the right, resh (ר) in the middle, and heh (ה) on the left. Many computer programs failed to display this correctly, because they were designed to display text in one direction only. Some so-called right-to-left scripts such as the Persian script and Arabic are mostly, but not exclusively, right-to-left—mathematical expressions, numeric dates and numbers bearing units are embedded from left to right. That also happens if text from a left-to-right language such as English is embedded in them; or vice versa, if Arabic is embedded in a left-to-right script such as English. Bidirectional script support Bidirectional script support is the capability of a computer system to correctly display bidirectional text. The term is often shortened to "BiDi" or "bidi". Early computer installations were designed only to support a single writing system, typically for left-to-right scripts based on the Latin alphabet only. Adding new character sets and character encodings enabled a number of other left-to-right scripts to be supported, but did not easily support right-to-left scripts such as Arabic or Hebrew, and mixing the two was not practical. Right-to-left scripts were introduced through encodings like ISO/IEC 8859-6 and ISO/IEC 8859-8, storing the letters (usually) in writing and reading order. It is possible to simply flip the left-to-right display order to a right-to-left display order, but doing this sacrifices the ability to correctly display left-to-right scripts. With bidirectional script support, it is possible to mix characters from different scripts on the same page, regardless of writing direction. In particular, the Unicode standard provides foundations
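A small sketch of the per-character directionality classes that the Unicode bidi algorithm works from, using Python's unicodedata module.

import unicodedata

for ch in ("A", "ש", "س", "1", " "):
    print(repr(ch), unicodedata.bidirectional(ch))
# 'A' -> 'L' (left-to-right), 'ש' -> 'R' (right-to-left Hebrew),
# 'س' -> 'AL' (Arabic letter), '1' -> 'EN' (European number), ' ' -> 'WS' (whitespace)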
Character encoding;Internationalization and localization;Unicode algorithms;Writing direction
https://en.wikipedia.org/wiki/Bernoulli%27s%20inequality
In mathematics, Bernoulli's inequality (named after Jacob Bernoulli) is an inequality that approximates exponentiations of . It is often employed in real analysis. It has several useful variants: Integer exponent Case 1: for every integer and real number . The inequality is strict if and . Case 2: for every integer and every real number . Case 3: for every even integer and every real number . Real exponent for every real number and . The inequality is strict if and . for every real number and . History Jacob Bernoulli first published the inequality in his treatise "Positiones Arithmeticae de Seriebus Infinitis" (Basel, 1689), where he used the inequality often. According to Joseph E. Hofmann, Über die Exercitatio Geometrica des M. A. Ricci (1963), p. 177, the inequality is actually due to Sluse in his Mesolabum (1668 edition), Chapter IV "De maximis & minimis". Proof for integer exponent The first case has a simple inductive proof: Suppose the statement is true for : Then it follows that Bernoulli's inequality can be proved for case 2, in which is a non-negative integer and , using mathematical induction in the following form: we prove the inequality for , from validity for some r we deduce validity for . For , is equivalent to which is true. Similarly, for we have Now suppose the statement is true for : Then it follows that since as well as . By the modified induction we conclude the statement is true for every non-negative integer . By noting that if , then is negative gives case 3. Generalizations Generalization of exponent The exponent can be generalized to an arbitrary real number as follows: if , then for or , and for . This generalization can be proved by convexity (see below) or by comparing derivatives. The strict versions of these inequalities require and . The case can also be derived from the case by noting that (using the main case result) and by using the fact that is monotonic. We can conclude t
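A numerical spot-check of the inequality in its standard form (1 + x)^r ≥ 1 + r·x for real x ≥ −1 and integer r ≥ 1, which is the statement the stripped formulas above refer to.

import random

random.seed(0)
for _ in range(100_000):
    x = random.uniform(-1.0, 5.0)
    r = random.randint(1, 20)
    # small tolerance guards against floating-point rounding near equality
    assert (1 + x) ** r >= 1 + r * x - 1e-9
print("no counterexamples found")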
Inequalities (mathematics)
https://en.wikipedia.org/wiki/Bestiary
A bestiary () is a compendium of beasts. Originating in the ancient world, bestiaries were made popular in the Middle Ages in illustrated volumes that described various animals and even rocks. The natural history and illustration of each beast was usually accompanied by a moral lesson. This reflected the belief that the world itself was the Word of God and that every living thing had its own special meaning. For example, the pelican, which was believed to tear open its breast to bring its young to life with its own blood, was a living representation of Jesus. Thus the bestiary is also a reference to the symbolic language of animals in Western Christian art and literature. History The bestiary — the medieval book of beasts — was among the most popular illuminated texts in northern Europe during the Middle Ages (about 500–1500). Medieval Christians understood every element of the world as a manifestation of God, and bestiaries largely focused on each animal's religious meaning. Much of what is in the bestiary came from the ancient Greeks and their philosophers. The earliest bestiary in the form in which it was later popularized was an anonymous 2nd-century Greek volume called the Physiologus, which itself summarized ancient knowledge and wisdom about animals in the writings of classical authors such as Aristotle's Historia Animalium and various works by Herodotus, Pliny the Elder, Solinus, Aelian and other naturalists. Following the Physiologus, Saint Isidore of Seville (Book XII of the Etymologiae) and Saint Ambrose expanded the religious message with reference to passages from the Bible and the Septuagint. They and other authors freely expanded or modified pre-existing models, constantly refining the moral content without interest in or access to much more detail regarding the factual content. Nevertheless, the often fanciful accounts of these beasts were widely read and generally believed to be true. A few observations found in bestiaries, such as the migration o
*;Medieval European legendary creatures;Medieval literature;Types of illuminated manuscript;Zoology
https://en.wikipedia.org/wiki/Behavior
Behavior (American English) or behaviour (British English) is the range of actions and mannerisms made by individuals, organisms, systems or artificial entities in some environment. These systems can include other systems or organisms as well as the inanimate physical environment. It is the computed response of the system or organism to various stimuli or inputs, whether internal or external, conscious or subconscious, overt or covert, and voluntary or involuntary. While some behavior is produced in response to an organism's environment (extrinsic motivation), behavior can also be the product of intrinsic motivation, also referred to as "agency" or "free will". Taking a behavior informatics perspective, a behavior consists of actor, operation, interactions, and their properties. This can be represented as a behavior vector. Models Biology Definition Behavior may be defined as "the internally coordinated responses (actions or inactions) of whole living organisms (individuals or groups) to internal or external stimuli". A broader definition of behavior, applicable to plants and other organisms, is similar to the concept of phenotypic plasticity. It describes behavior as a response to an event or environment change during the course of the lifetime of an individual, differing from other physiological or biochemical changes that occur more rapidly, and excluding changes that are a result of development (ontogeny). Behaviour can be regarded as any action of an organism that changes its relationship to its environment. Behavior provides outputs from the organism to the environment. Determination by genetics or the environment Behaviors can be either innate or learned from the environment, or both, dependent on the organism. The more complex nervous systems (or brains) are, the more influence learning has on behavior. However, even in mammals, a large fraction of behavior is genetically determined. For instance, prairie voles tend to be monogamous, while mea
Behavior
https://en.wikipedia.org/wiki/Biosphere
The biosphere (), also called the ecosphere (), is the worldwide sum of all ecosystems. It can also be termed the zone of life on the Earth. The biosphere (which is technically a spherical shell) is virtually a closed system with regard to matter, with minimal inputs and outputs. Regarding energy, it is an open system, with photosynthesis capturing solar energy at a rate of around 100 terawatts. By the most general biophysiological definition, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere, cryosphere, hydrosphere, and atmosphere. The biosphere is postulated to have evolved, beginning with a process of biopoiesis (life created naturally from matter, such as simple organic compounds) or biogenesis (life created from living matter), at least some 3.5 billion years ago. In a general sense, biospheres are any closed, self-regulating systems containing ecosystems. This includes artificial biospheres such as and , and potentially ones on other planets or moons. Origin and use of the term The term "biosphere" was coined in 1875 by geologist Eduard Suess, who defined it as the place on Earth's surface where life dwells. While the concept has a geological origin, it is an indication of the effect of both Charles Darwin and Matthew F. Maury on the Earth sciences. The biosphere's ecological context comes from the 1920s (see Vladimir I. Vernadsky), preceding the 1935 introduction of the term "ecosystem" by Sir Arthur Tansley (see ecology history). Vernadsky defined ecology as the science of the biosphere. It is an interdisciplinary concept for integrating astronomy, geophysics, meteorology, biogeography, evolution, geology, geochemistry, hydrology and, generally speaking, all life and Earth sciences. Narrow definition Geochemists define the biosphere as being the total sum of living organisms (the "biomass" or "biota" as referred to by biologists and e
;Biological systems;Oceanography;Superorganisms
https://en.wikipedia.org/wiki/Blood%20alcohol%20content
Blood alcohol content (BAC), also called blood alcohol concentration or blood alcohol level, is a measurement of alcohol intoxication used for legal or medical purposes. BAC is expressed as mass of alcohol per volume of blood. In the US and many international publications, BAC levels are written as a percentage such as 0.08%, i.e. there are 0.8 grams of alcohol per liter of blood. In different countries, the maximum permitted BAC when driving ranges from the limit of detection (zero tolerance) to 0.08% (0.8 ). BAC levels above 0.40% (4 g/L) can be potentially fatal. Units of measurement BAC is generally defined as a fraction of weight of alcohol per volume of blood, with an SI coherent derived unit of kg/m3 or equivalently grams per liter (g/L). Countries differ in how this quantity is normally expressed. Common formats are listed in the table below. For example, the US and many international publications present BAC as a percentage, such as 0.05%. This would be interpreted as 0.05 grams per deciliter of blood. This same concentration could be expressed as 0.5‰ or 50 mg% in other countries. It is also possible to use other units. For example, in the 1930s Widmark measured alcohol and blood by mass, and thus reported his concentrations in units of g/kg or mg/g, weight alcohol per weight blood. Blood is denser than water and 1 mL of blood has a mass of approximately 1.055 grams, thus a mass-volume BAC of 1 g/L corresponds to a mass-mass BAC of 0.948 mg/g. Sweden, Denmark, Norway, Finland, Germany, and Switzerland use mass-mass concentrations in their laws, but this distinction is often skipped over in public materials, implicitly assuming that 1 L of blood weighs 1 kg. In pharmacokinetics, it is common to use the amount of substance, in moles, to quantify the dose. As the molar mass of ethanol is 46.07 g/mol, a BAC of 1 g/L is 21.706 mmol/L (21.706 mM). Effects by alcohol level The magnitude of sensory impairment may vary in people of differing weights. The NIAAA
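A small conversion sketch tying together the units discussed above, assuming the blood density of 1.055 g/mL and ethanol molar mass of 46.07 g/mol given in the text.

BLOOD_DENSITY = 1.055        # g/mL, as quoted above
ETHANOL_MOLAR_MASS = 46.07   # g/mol

def convert(bac_g_per_l):
    percent_mass_per_volume = bac_g_per_l / 10.0            # g per 100 mL of blood
    mass_per_mass = bac_g_per_l / BLOOD_DENSITY             # mg of alcohol per g of blood
    millimolar = bac_g_per_l / ETHANOL_MOLAR_MASS * 1000.0  # mmol/L
    return percent_mass_per_volume, mass_per_mass, millimolar

print(convert(0.8))   # about (0.08 %, 0.76 mg/g, 17.4 mM)
print(convert(1.0))   # about (0.10 %, 0.948 mg/g, 21.7 mM)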
Alcohol law;Alcohol policy;Concentration indicators;Driving under the influence;Metabolism
https://en.wikipedia.org/wiki/Bucket%20argument
Isaac Newton's rotating bucket argument (also known as Newton's bucket) is a thought experiment that was designed to demonstrate that true rotational motion cannot be defined as the relative rotation of the body with respect to the immediately surrounding bodies. It is one of five arguments from the "properties, causes, and effects" of "true motion and rest" that support his contention that, in general, true motion and rest cannot be defined as special instances of motion or rest relative to other bodies, but instead can be defined only by reference to absolute space. Alternatively, these experiments provide an operational definition of what is meant by "absolute rotation", and do not pretend to address the question of "rotation relative to what?" General relativity dispenses with absolute space and with physics whose cause is external to the system, with the concept of geodesics of spacetime. Background These arguments, and a discussion of the distinctions between absolute and relative time, space, place and motion, appear in a scholium at the end of Definitions sections in Book I of Newton's work, The Mathematical Principles of Natural Philosophy (1687) (not to be confused with General Scholium at the end of Book III), which established the foundations of classical mechanics and introduced his law of universal gravitation, which yielded the first quantitatively adequate dynamical explanation of planetary motion. Despite their embrace of the principle of rectilinear inertia and the recognition of the kinematical relativity of apparent motion (which underlies whether the Ptolemaic or the Copernican system is correct), natural philosophers of the seventeenth century continued to consider true motion and rest as physically separate descriptors of an individual body. The dominant view Newton opposed was devised by René Descartes, and was supported (in part) by Gottfried Leibniz. It held that empty space is a metaphysical impossibility because space is nothing othe
Classical mechanics;Isaac Newton;Rotation;Thought experiments in physics
https://en.wikipedia.org/wiki/Bayesian%20probability
Bayesian probability ( or ) is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief. The Bayesian interpretation of probability can be seen as an extension of propositional logic that enables reasoning with hypotheses; that is, with propositions whose truth or falsity is unknown. In the Bayesian view, a probability is assigned to a hypothesis, whereas under frequentist inference, a hypothesis is typically tested without being assigned a probability. Bayesian probability belongs to the category of evidential probabilities; to evaluate the probability of a hypothesis, the Bayesian probabilist specifies a prior probability. This, in turn, is then updated to a posterior probability in the light of new, relevant data (evidence). The Bayesian interpretation provides a standard set of procedures and formulae to perform this calculation. The term Bayesian derives from the 18th-century mathematician and theologian Thomas Bayes, who provided the first mathematical treatment of a non-trivial problem of statistical data analysis using what is now known as Bayesian inference. Mathematician Pierre-Simon Laplace pioneered and popularized what is now called Bayesian probability. Bayesian methodology Bayesian methods are characterized by concepts and procedures as follows: The use of random variables, or more generally unknown quantities, to model all sources of uncertainty in statistical models including uncertainty resulting from lack of information (see also aleatoric and epistemic uncertainty). The need to determine the prior probability distribution taking into account the available (prior) information. The sequential use of Bayes' theorem: as more data become available, calculate the posterior distribution using Bayes' theorem; subsequently, the posterior distribution becomes the n
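A minimal sketch of the prior-to-posterior update, with made-up numbers chosen only to show the mechanics and the sequential reuse of the posterior as the next prior.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' theorem: P(H|E) = P(H) P(E|H) / P(E)
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

prior = 0.01                                  # made-up prior for the hypothesis
p_pos_if_true, p_pos_if_false = 0.95, 0.05    # made-up likelihoods of the evidence

p1 = posterior(prior, p_pos_if_true, p_pos_if_false)
p2 = posterior(p1, p_pos_if_true, p_pos_if_false)   # the posterior becomes the next prior
print(round(p1, 3), round(p2, 3))                   # 0.161 0.785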
Justification (epistemology);Philosophy of mathematics;Philosophy of science;Probability;Probability interpretations
https://en.wikipedia.org/wiki/Bunsen%20burner
A Bunsen burner, named after Robert Bunsen, is a kind of ambient air gas burner used as laboratory equipment; it produces a single open gas flame, and is used for heating, sterilization, and combustion. The gas can be natural gas (which is mainly methane) or a liquefied petroleum gas, such as propane, butane, a mixture or, as Bunsen himself used, coal gas. Combustion temperature achieved depends in part on the adiabatic flame temperature of the chosen fuel mixture. History In 1852, the University of Heidelberg hired Bunsen and promised him a new laboratory building. The city of Heidelberg had begun to install coal-gas street lighting, and the university laid gas lines to the new laboratory. The designers of the building intended to use the gas not just for lighting, but also as fuel for burners for laboratory operations. For any burner lamp, it was desirable to maximize the temperature of its flame, and minimize its luminosity (which represented lost heating energy). Bunsen sought to improve existing laboratory burner lamps as regards economy, simplicity, and flame temperature, and adapt them to coal-gas fuel. While the building was under construction in late 1854, Bunsen suggested certain design principles to the university's mechanic, Peter Desaga, and asked him to construct a prototype. Similar principles had been used in an earlier burner design by Michael Faraday, and in a device patented in 1856 by gas engineer R. W. Elsner. The Bunsen/Desaga design generated a hot, sootless, non-luminous flame by mixing the gas with air in a controlled fashion before combustion. Desaga created adjustable slits for air at the bottom of the cylindrical burner, with the flame issuing at the top. When the building opened early in 1855, Desaga had made 50 burners for Bunsen's students. Two years later Bunsen published a description, and many of his colleagues soon adopted the design. Bunsen burners are now used in laboratories around the world. Operation The device in use
Burners;Combustion engineering;German inventions;Laboratory equipment
https://en.wikipedia.org/wiki/B%C3%A9zout%27s%20identity
In mathematics, Bézout's identity (also called Bézout's lemma), named after Étienne Bézout who proved it for polynomials, is the following theorem: Here the greatest common divisor of and is taken to be . The integers and are called Bézout coefficients for ; they are not unique. A pair of Bézout coefficients can be computed by the extended Euclidean algorithm, and this pair is, in the case of integers one of the two pairs such that and ; equality occurs only if one of and is a multiple of the other. As an example, the greatest common divisor of 15 and 69 is 3, and 3 can be written as a combination of 15 and 69 as , with Bézout coefficients −9 and 2. Many other theorems in elementary number theory, such as Euclid's lemma or the Chinese remainder theorem, result from Bézout's identity. A Bézout domain is an integral domain in which Bézout's identity holds. In particular, Bézout's identity holds in principal ideal domains. Every theorem that results from Bézout's identity is thus true in all principal ideal domains. Structure of solutions If and are not both zero and one pair of Bézout coefficients has been computed (for example, using the extended Euclidean algorithm), all pairs can be represented in the form where is an arbitrary integer, is the greatest common divisor of and , and the fractions simplify to integers. If and are both nonzero and none of them divides the other, then exactly two of the pairs of Bézout coefficients satisfy If and are both positive, one has and for one of these pairs, and and for the other. If is a divisor of (including the case ), then one pair of Bézout coefficients is . This relies on a property of Euclidean division: given two non-zero integers and , if does not divide , there is exactly one pair such that and , and another one such that and . The two pairs of small Bézout's coefficients are obtained from the given one by choosing for in the above formula either of the two integers next to .
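A sketch of the extended Euclidean algorithm mentioned above, which returns the gcd together with one pair of Bézout coefficients; applied to 15 and 69 it reproduces the coefficients −9 and 2 from the example.

def extended_gcd(a, b):
    # maintains the invariants old_r = a*old_x + b*old_y and r = a*x + b*y
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, x, y = extended_gcd(15, 69)
print(g, x, y, 15 * x + 69 * y)   # 3 -9 2 3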
Articles containing proofs;Diophantine equations;Lemmas in number theory
https://en.wikipedia.org/wiki/Bistability
In a dynamical system, bistability means the system has two stable equilibrium states. A bistable structure can be resting in either of two states. An example of a mechanical device which is bistable is a light switch. The switch lever is designed to rest in the "on" or "off" position, but not between the two. Bistable behavior can occur in mechanical linkages, electronic circuits, nonlinear optical systems, chemical reactions, and physiological and biological systems. In a conservative force field, bistability stems from the fact that the potential energy has two local minima, which are the stable equilibrium points. These rest states need not have equal potential energy. By mathematical arguments, a local maximum, an unstable equilibrium point, must lie between the two minima. At rest, a particle will be in one of the minimum equilibrium positions, because that corresponds to the state of lowest energy. The maximum can be visualized as a barrier between them. A system can transition from one state of minimal energy to the other if it is given enough activation energy to penetrate the barrier (compare activation energy and Arrhenius equation for the chemical case). After the barrier has been reached, assuming the system has damping, it will relax into the other minimum state in a time called the relaxation time. Bistability is widely used in digital electronics devices to store binary data. It is the essential characteristic of the flip-flop, a circuit which is a fundamental building block of computers and some types of semiconductor memory. A bistable device can store one bit of binary data, with one state representing a "0" and the other state a "1". It is also used in relaxation oscillators, multivibrators, and the Schmitt trigger. Optical bistability is an attribute of certain optical devices where two resonant transmission states are possible and stable, dependent on the input. Bistability can also arise in biochemical systems, where it creates digi
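A tiny numerical sketch of a double-well potential with two local minima separated by a barrier; the quartic V(x) = x^4 − 2x^2 is an assumed toy example, not tied to any particular system.

import numpy as np

xs = np.linspace(-2.0, 2.0, 4001)
V = xs ** 4 - 2 * xs ** 2                     # two wells near x = -1 and x = +1, barrier at x = 0
is_local_min = (V[1:-1] < V[:-2]) & (V[1:-1] < V[2:])
print(np.round(xs[1:-1][is_local_min], 3))    # approximately [-1.  1.]: the two stable rest states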
2 (number);Digital electronics
https://en.wikipedia.org/wiki/Berry%20paradox
The Berry paradox is a self-referential paradox arising from an expression like "The smallest positive integer not definable in under sixty letters" (a phrase with fifty-seven letters). Bertrand Russell, the first to discuss the paradox in print, attributed it to G. G. Berry (1867–1928), a junior librarian at Oxford's Bodleian Library. Russell called Berry "the only person in Oxford who understood mathematical logic". The paradox was called "Richard's paradox" by Jean-Yves Girard. Overview Consider the expression: Since there are only twenty-six letters in the English alphabet, there are finitely many phrases of under sixty letters, and hence finitely many positive integers that are defined by phrases of under sixty letters. Since there are infinitely many positive integers, this means that there are positive integers that cannot be defined by phrases of under sixty letters. If there are positive integers that satisfy a given property, then there is a smallest positive integer that satisfies that property; therefore, there is a smallest positive integer satisfying the property "not definable in under sixty letters". This is the integer to which the above expression refers. But the above expression is only fifty-seven letters long, therefore it is definable in under sixty letters, and is not the smallest positive integer not definable in under sixty letters, and is not defined by this expression. This is a paradox: there must be an integer defined by this expression, but since the expression is self-contradictory (any integer it defines is definable in under sixty letters), there cannot be any integer defined by it. Mathematician and computer scientist Gregory Chaitin in The Unknowable (1999) adds this comment: "Well, the Mexican mathematical historian Alejandro Garcidiego has taken the trouble to find that letter [of Berry's from which Russell penned his remarks], and it is rather a different paradox. Berry’s letter actually talks about the first ordinal that c
Algorithmic information theory;Eponymous paradoxes;Logical paradoxes;Mathematical paradoxes;Self-referential paradoxes
https://en.wikipedia.org/wiki/Country
A country is a distinct part of the world, such as a state, nation, or other political entity. When referring to a specific polity, the term "country" may refer to a sovereign state, state with limited recognition, constituent country, or dependent territory. Most sovereign states, but not all countries, are members of the United Nations. There is no universal agreement on the number of "countries" in the world, since several states have disputed sovereignty status or limited recognition, and a number of non-sovereign entities are commonly considered countries. The definition and usage of the word "country" are flexible and have changed over time. The Economist wrote in 2010 that "any attempt to find a clear definition of a country soon runs into a thicket of exceptions and anomalies." Areas much smaller than a political entity may be referred to as a "country", such as the West Country in England, "big sky country" (used in various contexts of the American West), "coal country" (used to describe coal-mining regions), or simply "the country" (used to describe a rural area). The term "country" is also used as a qualifier descriptively, such as country music or country living. Etymology The word country comes from Old French , which derives from Vulgar Latin () ("(land) lying opposite"; "(land) spread before"), derived from ("against, opposite"). It most likely entered the English language after the Franco-Norman invasion during the 11th century. Definition of a country In English, the word has increasingly become associated with political divisions, so that one sense, associated with the indefinite article – "a country" – is now frequently applied as a synonym for a state or a former sovereign state. It may also be used as a synonym for "nation". Taking as examples Canada, Sri Lanka, and Yugoslavia, cultural anthropologist Clifford Geertz wrote in 1997 that "it is clear that the relationships between 'country' and 'nation' are so different from one [place] t
;Human geography
https://en.wikipedia.org/wiki/Cytoplasm
The cytoplasm describes all the material within a eukaryotic or prokaryotic cell, enclosed by the cell membrane, including the organelles and excluding the nucleus in eukaryotic cells. The material inside the nucleus of a eukaryotic cell and contained within the nuclear membrane is termed the nucleoplasm. The main components of the cytoplasm are the cytosol (a gel-like substance), the cell's internal sub-structures, and various cytoplasmic inclusions. In eukaryotes the cytoplasm also includes membrane-bound organelles. The cytoplasm is about 80% water and is usually colorless. The submicroscopic ground cell substance, or cytoplasmic matrix, that remains after the exclusion of the cell organelles and particles is groundplasm. It is the hyaloplasm of light microscopy, a highly complex, polyphasic system in which all resolvable cytoplasmic elements are suspended, including the larger organelles such as the ribosomes, mitochondria, plant plastids, lipid droplets, and vacuoles. Many cellular activities take place within the cytoplasm, such as many metabolic pathways, including glycolysis, photosynthesis, and processes such as cell division. The concentrated inner area is called the endoplasm and the outer layer is called the cell cortex, or ectoplasm. Movement of calcium ions in and out of the cytoplasm is a signaling activity for metabolic processes. In plants, movement of the cytoplasm around vacuoles is known as cytoplasmic streaming. History The term was introduced by Rudolf von Kölliker in 1863, originally as a synonym for protoplasm, but later it has come to mean the cell substance and organelles outside the nucleus. There has been certain disagreement on the definition of cytoplasm, as some authors prefer to exclude from it some organelles, especially the vacuoles and sometimes the plastids. Physical nature It remains uncertain how the various components of the cytoplasm interact to allow movement of organelles while maintaining the
Cell anatomy
https://en.wikipedia.org/wiki/Code
In communications and information processing, code is a system of rules to convert information—such as a letter, word, sound, image, or gesture—into another form, sometimes shortened or secret, for communication through a communication channel or storage in a storage medium. An early example is an invention of language, which enabled a person, through speech, to communicate what they thought, saw, heard, or felt to others. But speech limits the range of communication to the distance a voice can carry and limits the audience to those present when the speech is uttered. The invention of writing, which converted spoken language into visual symbols, extended the range of communication across space and time. The process of encoding converts information from a source into symbols for communication or storage. Decoding is the reverse process, converting code symbols back into a form that the recipient understands, such as English, Spanish, etc. One reason for coding is to enable communication in places where ordinary plain language, spoken or written, is difficult or impossible. For example, semaphore, where the configuration of flags held by a signaler or the arms of a semaphore tower encodes parts of the message, typically individual letters, and numbers. Another person standing a great distance away can interpret the flags and reproduce the words sent. Theory In information theory and computer science, a code is usually considered as an algorithm that uniquely represents symbols from some source alphabet, by encoded strings, which may be in some other target alphabet. An extension of the code for representing sequences of symbols over the source alphabet is obtained by concatenating the encoded strings. Before giving a mathematically precise definition, this is a brief example. The mapping is a code, whose source alphabet is the set and whose target alphabet is the set . Using the extension of the code, the encoded string 0011001 can be grouped into codewords as
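An illustrative prefix code over a three-symbol source alphabet (a made-up mapping, not the elided example from the text), showing encoding by concatenation and unambiguous decoding.

CODE = {"a": "0", "b": "10", "c": "11"}    # no codeword is a prefix of another

def encode(message):
    return "".join(CODE[symbol] for symbol in message)

def decode(bits):
    inverse = {v: k for k, v in CODE.items()}
    decoded, buffer = [], ""
    for bit in bits:
        buffer += bit
        if buffer in inverse:              # the prefix property makes this unambiguous
            decoded.append(inverse[buffer])
            buffer = ""
    return "".join(decoded)

print(encode("abca"))            # 010110
print(decode(encode("abca")))    # abca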
Signal processing
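To make the encoding and extension idea in the excerpt above concrete, here is a minimal Python sketch assuming the variable-length code reconstructed there (a ↦ 0, b ↦ 01, c ↦ 011): the extension simply concatenates codewords, and a small backtracking decoder recovers the source string. The function names and structure are illustrative only, not part of any particular library.

```python
from typing import Optional

# Example code from the excerpt: each source symbol maps to a codeword over {0, 1}.
CODE = {"a": "0", "b": "01", "c": "011"}


def encode(message: str) -> str:
    """Extension of the code: concatenate the codeword of each source symbol."""
    return "".join(CODE[symbol] for symbol in message)


def decode(encoded: str) -> Optional[str]:
    """Try each codeword at the front and recurse; backtracking handles codes
    that are not prefix codes. Returns None if no decoding exists."""
    if not encoded:
        return ""
    for symbol, codeword in CODE.items():
        if encoded.startswith(codeword):
            rest = decode(encoded[len(codeword):])
            if rest is not None:
                return symbol + rest
    return None


if __name__ == "__main__":
    print(encode("acab"))     # 0011001  (0 | 011 | 0 | 01)
    print(decode("0011001"))  # acab
```

Running the sketch reproduces the encoded string 0011001 mentioned in the excerpt and decodes it back to the source string.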
https://en.wikipedia.org/wiki/Cipher
In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography. Codes generally substitute different-length strings of characters in the output, while ciphers generally substitute the same number of characters as are input. A code maps one meaning to another: words and phrases can be coded as letters or numbers, and the translation from input to coded output is a direct lookup in a codebook rather than a computation, so codes primarily function to save time. Ciphers, by contrast, are algorithmic: the input must be run through the cipher's procedure to produce or recover the message. Ciphers are commonly used to encrypt written information. Codes operated by substitution according to a large codebook, which linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates." When using a cipher, the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it. The operation of a cipher usually depends on a piece of auxiliary information, called a key (or, in traditional NSA parlance, a cryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message, with some exceptions such as ROT13 and Atbash. Most modern ciphers can be categorized in several ways: by whether they work on blocks of symbols, usually of a fixed size (block ciphers), or on
Cryptography
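As a toy illustration of how a cipher's operation is varied by a key, the sketch below implements the classical Caesar shift cipher in Python; ROT13, mentioned in the excerpt, is simply the special case where the key is fixed at 13. This is a hedged teaching sketch only and is far too weak for real cryptographic use.

```python
import string

ALPHABET = string.ascii_uppercase


def caesar_encrypt(plaintext: str, key: int) -> str:
    """Shift each letter forward by `key` positions; non-letters pass through."""
    shifted = ALPHABET[key % 26:] + ALPHABET[:key % 26]
    table = str.maketrans(ALPHABET, shifted)
    return plaintext.upper().translate(table)


def caesar_decrypt(ciphertext: str, key: int) -> str:
    """Decryption runs the same algorithm with the inverse key."""
    return caesar_encrypt(ciphertext, -key)


if __name__ == "__main__":
    ct = caesar_encrypt("PROCEED TO THE FOLLOWING COORDINATES", key=3)
    print(ct)
    print(caesar_decrypt(ct, key=3))
    # ROT13 is the special case key = 13, which is its own inverse:
    print(caesar_encrypt(caesar_encrypt("HELLO", 13), 13))  # HELLO
```

Note how the same algorithm produces different ciphertext as the key changes, which is exactly the key-dependence described in the excerpt.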
https://en.wikipedia.org/wiki/Common%20descent
Common descent is a concept in evolutionary biology applicable when one species is the ancestor of two or more species later in time. According to modern evolutionary biology, all living beings could be descendants of a unique ancestor commonly referred to as the last universal common ancestor (LUCA) of all life on Earth. Common descent is an effect of speciation, in which multiple species derive from a single ancestral population. The more recent the ancestral population two species have in common, the more closely they are related. The most recent common ancestor of all currently living organisms is the last universal ancestor, which lived about 3.9 billion years ago. The two earliest pieces of evidence for life on Earth are graphite found to be biogenic in 3.7 billion-year-old metasedimentary rocks discovered in western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. All currently living organisms on Earth share a common genetic heritage, though the suggestion of substantial horizontal gene transfer during early evolution has led to questions about the monophyly (single ancestry) of life. A set of 6,331 groups of genes common to all living animals has been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian. Universal common descent through an evolutionary process was first proposed by the British naturalist Charles Darwin in the concluding sentence of his 1859 book On the Origin of Species. History The idea that all living things (including things considered non-living by science) are related is a recurring theme in many indigenous worldviews across the world. Later, in the 1740s, the French mathematician Pierre Louis Maupertuis arrived at the idea that all organisms had a common ancestor and had diverged through random variation and natural selection. In 1790, the philosopher Immanuel Kant wrote in Kritik der Urteilskraft (Critique of
Descent;Evolutionary biology;Most recent common ancestors
https://en.wikipedia.org/wiki/STS-51-F
STS-51-F (also known as Spacelab 2) was the 19th flight of NASA's Space Shuttle program and the eighth flight of Space Shuttle Challenger. It launched from Kennedy Space Center, Florida, on July 29, 1985, and landed eight days later on August 6, 1985. While STS-51-F's primary payload was the Spacelab 2 laboratory module, the payload that received the most publicity was the Carbonated Beverage Dispenser Evaluation, an experiment in which both Coca-Cola and Pepsi tried to make their carbonated drinks available to astronauts. A helium-cooled infrared telescope (IRT) was also flown on this mission, and while it did have some problems, it observed 60% of the galactic plane in infrared light. During launch, Challenger experienced multiple sensor failures in its center SSME (Engine 1), which led to the engine shutting down and forced the shuttle to perform an "Abort to Orbit" (ATO) emergency procedure. It is the only Shuttle mission to have carried out an abort after launching. As a result of the ATO, the mission was carried out at a slightly lower orbital altitude. Crew Crew seat assignments Launch STS-51-F's first launch attempt on July 12, 1985, was halted with the countdown at T−3 seconds after main engine ignition, when a malfunction of the number two RS-25 coolant valve caused an automatic launch abort. Challenger launched successfully on its second attempt on July 29, 1985, at 17:00 EDT, after a delay of 1 hour 37 minutes due to a problem with the table maintenance block update uplink. At 3 minutes 31 seconds into the ascent, one of the center engine's two high-pressure fuel turbopump turbine discharge temperature sensors failed. Two minutes and twelve seconds later, the second sensor failed, causing the shutdown of the center engine. This was the only in-flight RS-25 failure of the Space Shuttle program. Approximately 8 minutes into the flight, one of the same temperature sensors in the right engine failed, and the remaining right-engine temperatur
1985 in spaceflight;1985 in the United States;Crewed space observatories;Edwards Air Force Base;Space Shuttle missions;Spacecraft launched in 1985;Spacecraft which reentered in 1985
https://en.wikipedia.org/wiki/Combination
In mathematics, a combination is a selection of items from a set that has distinct members, such that the order of selection does not matter (unlike permutations). For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally, a k-combination of a set S is a subset of k distinct elements of S. So, two combinations are identical if and only if each combination has the same members. (The arrangement of the members in each set does not matter.) If the set has n elements, the number of k-combinations, denoted by C(n, k) or C_k^n, is equal to the binomial coefficient n(n − 1)⋯(n − k + 1) / (k(k − 1)⋯1), which can be written using factorials as n! / (k!(n − k)!) whenever k ≤ n, and which is zero when k > n. This formula can be derived from the fact that each k-combination of a set S of n members has k! permutations, so P(n, k) = C(n, k) · k!, or C(n, k) = P(n, k) / k!. The set of all k-combinations of a set S is often denoted by the binomial-coefficient-style symbol \binom{S}{k}. A combination is a selection of n things taken k at a time without repetition. To refer to combinations in which repetition is allowed, the terms k-combination with repetition, k-multiset, or k-selection, are often used. If, in the above example, it were possible to have two of any one kind of fruit there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears. Although the set of three fruits was small enough to write a complete list of combinations, this becomes impractical as the size of the set increases. For example, a poker hand can be described as a 5-combination (k = 5) of cards from a 52-card deck (n = 52). The 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1 / 2,598,960. Number of k-combinations The number of k-combinations from a given set S of n elements is often denoted in elementary combinatorics texts by C(n, k), or by a vari
Combinatorics
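The counting formula above is easy to check in code. The sketch below uses only Python's standard library (math.factorial, and math.comb as a cross-check, the latter available in Python 3.8+) and reproduces the fruit example and the 2,598,960 poker hands mentioned in the excerpt.

```python
from math import comb, factorial


def n_choose_k(n: int, k: int) -> int:
    """Binomial coefficient n! / (k! (n - k)!), defined as zero when k > n or k < 0."""
    if k < 0 or k > n:
        return 0
    return factorial(n) // (factorial(k) * factorial(n - k))


if __name__ == "__main__":
    # Three fruits, chosen two at a time: 3 combinations.
    print(n_choose_k(3, 2))   # 3
    # Poker hands: 5-card combinations from a 52-card deck.
    print(n_choose_k(52, 5))  # 2598960
    # math.comb gives the same result without the explicit factorial formula.
    print(comb(52, 5))        # 2598960
```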
https://en.wikipedia.org/wiki/Country%20code
A country code is a short alphanumeric identification code for countries and dependent areas. Its primary use is in data processing and communications. Several identification systems have been developed. The term country code frequently refers to ISO 3166-1 alpha-2, as well as the telephone country code, which is embodied in the E.164 recommendation by the International Telecommunication Union (ITU). ISO 3166-1 The standard ISO 3166-1 defines short identification codes for most countries and dependent areas: ISO 3166-1 alpha-2: two-letter code ISO 3166-1 alpha-3: three-letter code ISO 3166-1 numeric: three-digit code The two-letter codes are used as the basis for other codes and applications, for example, for ISO 4217 currency codes (with some deviations) and for country code top-level domain names (ccTLDs) on the Internet (see the list of Internet TLDs). Other applications are defined in ISO 3166-1 alpha-2. ITU Telephone country codes Telephone country codes are telephone number prefixes for international direct dialing (IDD), a system for reaching subscribers in foreign areas via international telecommunication networks. Country codes are defined by the International Telecommunication Union (ITU) in ITU-T standards E.123 and E.164. Country codes constitute the international telephone numbering plan. They are dialed before the national telephone number of a destination in a foreign country or area, but typically require at least one additional prefix, the international call prefix, which is an exit code from the national numbering plan to the international one. ITU standards recommend the digit sequence 00 for the prefix, and most countries comply. Other ITU codes The ITU also maintains the following other country codes: The E.212 mobile country codes (MCC), for mobile/wireless phone addresses. The ITU prefix, the first few characters of call signs of radio and television stations (maritime, aeronautical, broadcasting, and other types) to identify their country of origin
Geocodes
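A small, illustrative Python sketch of how the different code systems described above line up for a given country: the handful of entries below (alpha-2, alpha-3, numeric, and telephone prefix) are hard-coded samples chosen for demonstration; a real application would load the full ISO 3166-1 and ITU E.164 tables from an authoritative source rather than embed them.

```python
# Sample entries keyed by ISO 3166-1 alpha-2 code (illustrative subset only).
COUNTRY_CODES = {
    "DE": {"alpha3": "DEU", "numeric": "276", "calling_code": "+49"},
    "JP": {"alpha3": "JPN", "numeric": "392", "calling_code": "+81"},
    "US": {"alpha3": "USA", "numeric": "840", "calling_code": "+1"},
}


def lookup(alpha2: str) -> dict:
    """Return the other identification codes for a given alpha-2 code."""
    return COUNTRY_CODES[alpha2.upper()]


if __name__ == "__main__":
    print(lookup("de"))  # {'alpha3': 'DEU', 'numeric': '276', 'calling_code': '+49'}
```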
https://en.wikipedia.org/wiki/Californium
Californium is a synthetic chemical element; it has symbol Cf and atomic number 98. It was first synthesized in 1950 at Lawrence Berkeley National Laboratory (then the University of California Radiation Laboratory) by bombarding curium with alpha particles (helium-4 ions). It is an actinide element, the sixth transuranium element to be synthesized, and has the second-highest atomic mass of all elements that have been produced in amounts large enough to see with the naked eye (after einsteinium). It was named after the university and the U.S. state of California. Two crystalline forms exist at normal pressure: one above and one below about 900 °C. A third form exists at high pressure. Californium slowly tarnishes in air at room temperature. Californium compounds are dominated by the +3 oxidation state. The most stable of californium's twenty known isotopes is californium-251, with a half-life of 898 years. This short half-life means the element is not found in significant quantities in the Earth's crust. Californium-252, with a half-life of about 2.645 years, is the most common isotope used and is produced at Oak Ridge National Laboratory (ORNL) in the United States and the Research Institute of Atomic Reactors in Russia. Californium is one of the few transuranium elements with practical uses. Most of these applications exploit the fact that certain isotopes of californium emit neutrons. For example, californium can be used to help start up nuclear reactors, and it is used as a source of neutrons when studying materials using neutron diffraction and neutron spectroscopy. It can also be used in the nuclear synthesis of higher-mass elements; oganesson (element 118) was synthesized by bombarding californium-249 atoms with calcium-48 ions. Users of californium must take into account radiological concerns and the element's ability to disrupt the formation of red blood cells by bioaccumulating in skeletal tissue. Characteristics Physical properties Californium is a silvery-white actinide metal with a
Actinides;Chemical elements;Chemical elements with double hexagonal close-packed structure;Ferromagnetic materials;Neutron sources;Synthetic elements
https://en.wikipedia.org/wiki/Continuum%20hypothesis
In mathematics, specifically set theory, the continuum hypothesis (abbreviated CH) is a hypothesis about the possible sizes of infinite sets. It states: "There is no set whose cardinality is strictly between that of the integers and the real numbers." Or equivalently: "Any subset of the real numbers is either finite, or countably infinite, or has the same cardinality as the real numbers." In Zermelo–Fraenkel set theory with the axiom of choice (ZFC), this is equivalent to the following equation in aleph numbers: 2^ℵ₀ = ℵ₁, or even shorter with beth numbers: ℶ₁ = ℵ₁. The continuum hypothesis was advanced by Georg Cantor in 1878, and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent. This independence was proved in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940. The name of the hypothesis comes from the term the continuum for the real numbers. History Cantor believed the continuum hypothesis to be true and for many years tried in vain to prove it. It became the first problem on David Hilbert's list of important open questions, presented at the International Congress of Mathematicians in Paris in 1900. Axiomatic set theory was at that point not yet formulated. Kurt Gödel proved in 1940 that the negation of the continuum hypothesis, i.e., the existence of a set with intermediate cardinality, could not be proved in standard set theory. The second half of the independence of the continuum hypothesis – i.e., unprovability of the nonexistence of an intermediate-sized set – was proved in 1963 by Paul Cohen. Cardinality of infinite sets Two sets are said to have the same cardinality or cardinal number if there exists a bijection (a one-to-one correspondence) between them. Intuitively, for two sets A and B to have the same cardinality means that it is possible to "pair off" elements of A with elements of B in such a fashion that every element of A is paired off with exactly one element of B and vice versa. Hence,
Basic concepts in infinite set theory;Cardinal numbers;Forcing (mathematics);Hilbert's problems;Hypotheses;Independence results;Infinity
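For reference, a minimal LaTeX sketch of the statement in the notation used in the excerpt above, assuming ZFC as the background theory:

```latex
% There is no set whose cardinality lies strictly between the integers and the reals:
\neg \exists S \,\bigl( \aleph_0 < |S| < 2^{\aleph_0} \bigr)
% Equivalently, as an equation in aleph numbers, and in beth numbers:
2^{\aleph_0} = \aleph_1, \qquad \beth_1 = \aleph_1
```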

This dataset is for MNLP M3
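A minimal sketch of loading this dataset with the Hugging Face datasets library; the repository id below is a placeholder and should be replaced with this dataset's actual "user/name" path on the Hub. The column names (source, text, categories) are the ones shown in the preview above.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual "user/dataset-name" of this repository.
ds = load_dataset("your-username/mnlp-m3-wiki-corpus", split="train")

print(ds.column_names)       # expected: ['source', 'text', 'categories']
print(ds[0]["source"])       # a Wikipedia URL
print(ds[0]["text"][:200])   # the first 200 characters of the article excerpt
```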
