Dataset Viewer: columns "title" (string) and "text" (string)
Bilinear form | In mathematics, a bilinear form is a bilinear map V × V → K on a vector space V (the elements of which are called vectors) over a field K (the elements of which are called scalars). In other words, a bilinear form is a function B : V × V → K that is linear in each argument separately:
B(u + v, w) = B(u, w) + B(v, w) and B(λu, v) = λB(u, v)
B(u, v + w) = B(u, v) + B(u, w) and B(u, λv) = λB(u, v)
The dot product on R^n is an example of a bilinear form which is also an inner product. An example of a bilinear form that is not an inner product would be the four-vector product.
The definition of a bilinear form can be extended to include modules over a ring, with linear maps replaced by module homomorphisms.
When K is the field of complex numbers C, one is often more interested in sesquilinear forms, which are similar to bilinear forms but are conjugate linear in one argument.
== Coordinate representation ==
Let V be an n-dimensional vector space with basis {e1, …, en}.
The n × n matrix A, defined by Aij = B(ei, ej) is called the matrix of the bilinear form on the basis {e1, …, en}.
If the n × 1 matrix x represents a vector x with respect to this basis, and similarly, the n × 1 matrix y represents another vector y, then:
{\displaystyle B(\mathbf {x} ,\mathbf {y} )=\mathbf {x} ^{\textsf {T}}A\mathbf {y} =\sum _{i,j=1}^{n}x_{i}A_{ij}y_{j}.}
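As a numerical illustration, the following minimal Python/NumPy sketch (the matrix A and the vectors are arbitrary choices, not taken from the text) evaluates a bilinear form from its coordinate matrix:

import numpy as np

# Matrix of a bilinear form B on R^3 with respect to the standard basis.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])

def bilinear(A, x, y):
    # B(x, y) = x^T A y = sum over i, j of x_i A_ij y_j
    return x @ A @ y

x = np.array([1.0, 2.0, 0.0])
y = np.array([0.0, 1.0, 1.0])
print(bilinear(A, x, y))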
A bilinear form has different matrices on different bases. However, the matrices of a bilinear form on different bases are all congruent. More precisely, if {f1, …, fn} is another basis of V, then
{\displaystyle \mathbf {f} _{j}=\sum _{i=1}^{n}S_{i,j}\mathbf {e} _{i},}
where the S_{i,j} form an invertible matrix S. Then, the matrix of the bilinear form on the new basis is STAS.
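A minimal sketch of this congruence rule, assuming an illustrative invertible matrix S whose columns express the new basis vectors in the old basis:

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])       # matrix of B on the basis {e_1, e_2, e_3}
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])       # columns: new basis vectors f_j = sum_i S_ij e_i

A_new = S.T @ A @ S                   # matrix of the same bilinear form on {f_1, f_2, f_3}

# Sanity check: B(f_1, f_2) computed in the old coordinates equals (A_new)_{12}.
f1, f2 = S[:, 0], S[:, 1]
assert np.isclose(f1 @ A @ f2, A_new[0, 1])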
== Properties ==
=== Non-degenerate bilinear forms ===
Every bilinear form B on V defines a pair of linear maps from V to its dual space V∗. Define B1, B2: V → V∗ by
B1(v)(w) = B(v, w) and B2(v)(w) = B(w, v).
This is often denoted as
B1(v) = B(v, ⋅) and B2(v) = B(⋅, v),
where the dot ( ⋅ ) indicates the slot into which the argument for the resulting linear functional is to be placed (see Currying).
For a finite-dimensional vector space V, if either of B1 or B2 is an isomorphism, then both are, and the bilinear form B is said to be nondegenerate. More concretely, for a finite-dimensional vector space, non-degenerate means that every non-zero element pairs non-trivially with some other element:
B(x, y) = 0 for all y ∈ V implies that x = 0, and B(x, y) = 0 for all x ∈ V implies that y = 0.
The corresponding notion for a module over a commutative ring is that a bilinear form is unimodular if V → V∗ is an isomorphism. Given a finitely generated module over a commutative ring, the pairing may be injective (hence "nondegenerate" in the above sense) but not unimodular. For example, over the integers, the pairing B(x, y) = 2xy is nondegenerate but not unimodular, as the induced map from V = Z to V∗ = Z is multiplication by 2.
If V is finite-dimensional then one can identify V with its double dual V∗∗. One can then show that B2 is the transpose of the linear map B1 (if V is infinite-dimensional then B2 is the transpose of B1 restricted to the image of V in V∗∗). Given B one can define the transpose of B to be the bilinear form given by tB(v, w) = B(w, v).
The left radical and right radical of the form B are the kernels of B1 and B2 respectively; they are the vectors orthogonal to the whole space on the left and on the right.
If V is finite-dimensional then the rank of B1 is equal to the rank of B2. If this number is equal to dim(V) then B1 and B2 are linear isomorphisms from V to V∗. In this case B is nondegenerate. By the rank–nullity theorem, this is equivalent to the condition that the left and equivalently right radicals be trivial. For finite-dimensional spaces, this is often taken as the definition of nondegeneracy:
Given any linear map A : V → V∗ one can obtain a bilinear form B on V via B(v, w) = A(v)(w).
This form will be nondegenerate if and only if A is an isomorphism.
If V is finite-dimensional then, relative to some basis for V, a bilinear form is degenerate if and only if the determinant of the associated matrix is zero. Likewise, a nondegenerate form is one for which the determinant of the associated matrix is non-zero (the matrix is non-singular). These statements are independent of the chosen basis. For a module over a commutative ring, a unimodular form is one for which the determinant of the associated matrix is a unit (for example 1), hence the term; note that a form whose matrix determinant is non-zero but not a unit will be nondegenerate but not unimodular, for example B(x, y) = 2xy over the integers.
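A small numerical illustration of the determinant test (the matrix is an arbitrary example):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])            # coordinate matrix of a bilinear form
print(np.linalg.det(A))               # non-zero, so the form is nondegenerate

# Over the integers, B(x, y) = 2xy has the 1x1 matrix (2): its determinant 2 is
# non-zero (nondegenerate) but not a unit of Z, so the form is not unimodular.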
=== Symmetric, skew-symmetric, and alternating forms ===
We define a bilinear form to be
symmetric if B(v, w) = B(w, v) for all v, w in V;
alternating if B(v, v) = 0 for all v in V;
skew-symmetric or antisymmetric if B(v, w) = −B(w, v) for all v, w in V;
Proposition
Every alternating form is skew-symmetric.
Proof
This can be seen by expanding B(v + w, v + w) = B(v, v) + B(v, w) + B(w, v) + B(w, w); since an alternating form vanishes on B(v + w, v + w), B(v, v) and B(w, w), it follows that B(v, w) + B(w, v) = 0.
If the characteristic of K is not 2 then the converse is also true: every skew-symmetric form is alternating. However, if char(K) = 2 then a skew-symmetric form is the same as a symmetric form and there exist symmetric/skew-symmetric forms that are not alternating.
A bilinear form is symmetric (respectively skew-symmetric) if and only if its coordinate matrix (relative to any basis) is symmetric (respectively skew-symmetric). A bilinear form is alternating if and only if its coordinate matrix is skew-symmetric and the diagonal entries are all zero (which follows from skew-symmetry when char(K) ≠ 2).
A bilinear form is symmetric if and only if the maps B1, B2: V → V∗ are equal, and skew-symmetric if and only if they are negatives of one another. If char(K) ≠ 2 then one can decompose a bilinear form into a symmetric and a skew-symmetric part as follows
{\displaystyle B^{+}={\tfrac {1}{2}}(B+{}^{\text{t}}B)\qquad B^{-}={\tfrac {1}{2}}(B-{}^{\text{t}}B),}
where tB is the transpose of B (defined above).
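In coordinates this decomposition is simply the symmetric/skew-symmetric splitting of the matrix of B; a minimal NumPy sketch with an arbitrary matrix:

import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])            # matrix of B on some basis
A_plus  = 0.5 * (A + A.T)             # matrix of the symmetric part B^+
A_minus = 0.5 * (A - A.T)             # matrix of the skew-symmetric part B^-
assert np.allclose(A_plus + A_minus, A)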
=== Reflexive bilinear forms and orthogonal vectors ===
A bilinear form B is reflexive if and only if it is either symmetric or alternating. In the absence of reflexivity we have to distinguish left and right orthogonality. In a reflexive space the left and right radicals agree and are termed the kernel or the radical of the bilinear form: the subspace of all vectors orthogonal with every other vector. A vector v, with matrix representation x, is in the radical of a bilinear form with matrix representation A, if and only if Ax = 0 ⇔ xTA = 0. The radical is always a subspace of V. It is trivial if and only if the matrix A is nonsingular, and thus if and only if the bilinear form is nondegenerate.
Suppose W is a subspace. Define the orthogonal complement
{\displaystyle W^{\perp }=\left\{\mathbf {v} \mid B(\mathbf {v} ,\mathbf {w} )=0{\text{ for all }}\mathbf {w} \in W\right\}.}
For a non-degenerate form on a finite-dimensional space, the map V/W → W⊥ is bijective, and the dimension of W⊥ is dim(V) − dim(W).
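A minimal sketch of computing W⊥ in coordinates (Python with NumPy/SciPy; the form and the subspace are arbitrary illustrations): a vector v lies in W⊥ exactly when (W^T A^T) v = 0, where the columns of the matrix W form a basis of the subspace.

import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, -1.0]])      # nondegenerate symmetric form on R^3
W = np.array([[1.0],
              [0.0],
              [1.0]])                 # columns span the subspace W

# v is in W-perp iff B(v, w) = v^T A w = 0 for every basis vector w of W,
# i.e. iff (W^T A^T) v = 0.
W_perp = null_space(W.T @ A.T)
print(W_perp.shape[1])                # dim(W-perp) = dim(V) - dim(W) = 2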
=== Bounded and elliptic bilinear forms ===
Definition: A bilinear form on a normed vector space (V, ‖⋅‖) is bounded, if there is a constant C such that for all u, v ∈ V,
{\displaystyle B(\mathbf {u} ,\mathbf {v} )\leq C\left\|\mathbf {u} \right\|\left\|\mathbf {v} \right\|.}
Definition: A bilinear form on a normed vector space (V, ‖⋅‖) is elliptic, or coercive, if there is a constant c > 0 such that for all u ∈ V,
{\displaystyle B(\mathbf {u} ,\mathbf {u} )\geq c\left\|\mathbf {u} \right\|^{2}.}
== Associated quadratic form ==
For any bilinear form B : V × V → K, there exists an associated quadratic form Q : V → K defined by Q : V → K : v ↦ B(v, v).
When char(K) ≠ 2, the quadratic form Q is determined by the symmetric part of the bilinear form B and is independent of the antisymmetric part. In this case there is a one-to-one correspondence between the symmetric part of the bilinear form and the quadratic form, and it makes sense to speak of the symmetric bilinear form associated with a quadratic form.
When char(K) = 2 and dim V > 1, this correspondence between quadratic forms and symmetric bilinear forms breaks down.
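When char(K) ≠ 2, for example over the reals, the symmetric part of B can be recovered from Q by the polarization identity B+(u, v) = (1/2)(Q(u + v) − Q(u) − Q(v)); a minimal NumPy sketch with an arbitrary matrix:

import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])            # matrix of a (not necessarily symmetric) form B

def Q(v):                             # associated quadratic form Q(v) = B(v, v)
    return v @ A @ v

def B_sym(u, v):                      # polarization recovers the symmetric part of B
    return 0.5 * (Q(u + v) - Q(u) - Q(v))

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.isclose(B_sym(u, v), (0.5 * (A + A.T))[0, 1])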
== Relation to tensor products ==
By the universal property of the tensor product, there is a canonical correspondence between bilinear forms on V and linear maps V ⊗ V → K. If B is a bilinear form on V the corresponding linear map is given by v ⊗ w ↦ B(v, w).
In the other direction, if F : V ⊗ V → K is a linear map the corresponding bilinear form is given by composing F with the bilinear map V × V → V ⊗ V that sends (v, w) to v⊗w.
The set of all linear maps V ⊗ V → K is the dual space of V ⊗ V, so bilinear forms may be thought of as elements of (V ⊗ V)∗ which (when V is finite-dimensional) is canonically isomorphic to V∗ ⊗ V∗.
Likewise, symmetric bilinear forms may be thought of as elements of (Sym2V)* (dual of the second symmetric power of V) and alternating bilinear forms as elements of (Λ2V)∗ ≃ Λ2V∗ (the second exterior power of V∗). If char(K) ≠ 2, (Sym2V)* ≃ Sym2(V∗).
== Generalizations ==
=== Pairs of distinct vector spaces ===
Much of the theory is available for a bilinear mapping from two vector spaces over the same base field to that field, B : V × W → K.
Here we still have induced linear mappings from V to W∗, and from W to V∗. It may happen that these mappings are isomorphisms; assuming finite dimensions, if one is an isomorphism, the other must be. When this occurs, B is said to be a perfect pairing.
In finite dimensions, this is equivalent to the pairing being nondegenerate (the spaces necessarily having the same dimensions). For modules (instead of vector spaces), just as how a nondegenerate form is weaker than a unimodular form, a nondegenerate pairing is a weaker notion than a perfect pairing. A pairing can be nondegenerate without being a perfect pairing, for instance Z × Z → Z via (x, y) ↦ 2xy is nondegenerate, but induces multiplication by 2 on the map Z → Z∗.
Terminology varies in coverage of bilinear forms. For example, F. Reese Harvey discusses "eight types of inner product". To define them he uses diagonal matrices Aij having only +1 or −1 for non-zero elements. Some of the "inner products" are symplectic forms and some are sesquilinear forms or Hermitian forms. Rather than a general field K, the instances with real numbers R, complex numbers C, and quaternions H are spelled out. The bilinear form
{\displaystyle \sum _{k=1}^{p}x_{k}y_{k}-\sum _{k=p+1}^{n}x_{k}y_{k}}
is called the real symmetric case and labeled R(p, q), where p + q = n. Then he articulates the connection to traditional terminology:
Some of the real symmetric cases are very important. The positive definite case R(n, 0) is called Euclidean space, while the case of a single minus, R(n−1, 1) is called Lorentzian space. If n = 4, then Lorentzian space is also called Minkowski space or Minkowski spacetime. The special case R(p, p) will be referred to as the split-case.
=== General modules ===
Given a ring R and a right R-module M and its dual module M∗, a mapping B : M∗ × M → R is called a bilinear form if
B(u + v, x) = B(u, x) + B(v, x)
B(u, x + y) = B(u, x) + B(u, y)
B(αu, xβ) = αB(u, x)β
for all u, v ∈ M∗, all x, y ∈ M and all α, β ∈ R.
The mapping ⟨⋅,⋅⟩ : M∗ × M → R : (u, x) ↦ u(x) is known as the natural pairing, also called the canonical bilinear form on M∗ × M.
A linear map S : M∗ → M∗ : u ↦ S(u) induces the bilinear form B : M∗ × M → R : (u, x) ↦ ⟨S(u), x⟩, and a linear map T : M → M : x ↦ T(x) induces the bilinear form B : M∗ × M → R : (u, x) ↦ ⟨u, T(x)⟩.
Conversely, a bilinear form B : M∗ × M → R induces the R-linear maps S : M∗ → M∗ : u ↦ (x ↦ B(u, x)) and T′ : M → M∗∗ : x ↦ (u ↦ B(u, x)). Here, M∗∗ denotes the double dual of M.
== See also ==
== Citations ==
== References ==
== External links ==
"Bilinear form", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"Bilinear form". PlanetMath.
This article incorporates material from Unimodular on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. |
Surveillance capitalism | Surveillance capitalism is a concept in political economics which denotes the widespread collection and commodification of personal data by corporations. This phenomenon is distinct from government surveillance, although the two can be mutually reinforcing. The concept of surveillance capitalism, as described by Shoshana Zuboff, is driven by a profit-making incentive, and arose as advertising companies, led by Google's AdWords, saw the possibilities of using personal data to target consumers more precisely.
Increased data collection may have various benefits for individuals and society, such as self-optimization (the quantified self), societal optimizations (e.g., by smart cities) and optimized services (including various web applications). However, as capitalism focuses on expanding the proportion of social life that is open to data collection and data processing, this can have significant implications for vulnerability and control of society, as well as for privacy.
The economic pressures of capitalism are driving the intensification of online connection and monitoring, with spaces of social life opening up to saturation by corporate actors directed at making profits and/or regulating behavior. Personal data points therefore increased in value once the possibilities of targeted advertising became known. As a result, the increasing price of data has limited the purchase of personal data points to the richest in society.
== Background ==
Shoshana Zuboff writes that "analysing massive data sets began as a way to reduce uncertainty by discovering the probabilities of future patterns in the behavior of people and systems". In 2014, Vincent Mosco referred to the marketing of information about customers and subscribers to advertisers as surveillance capitalism and made note of the surveillance state alongside it. Christian Fuchs found that the surveillance state fuses with surveillance capitalism.
Similarly, Zuboff informs that the issue is further complicated by highly invisible collaborative arrangements with state security apparatuses. According to Trebor Scholz, companies recruit people as informants for this type of capitalism. Zuboff contrasts the mass production of industrial capitalism with surveillance capitalism, where the former was interdependent with its populations, who were its consumers and employees, and the latter preys on dependent populations, who are neither its consumers nor its employees and largely ignorant of its procedures.
Their research shows that the capitalist addition to the analysis of massive amounts of data has taken its original purpose in an unexpected direction. Surveillance has been changing power structures in the information economy, potentially shifting the balance of power further from nation-states and towards large corporations employing the surveillance capitalist logic.
Zuboff notes that surveillance capitalism extends beyond the conventional institutional terrain of the private firm, accumulating not only surveillance assets and capital but also rights, and operating without meaningful mechanisms of consent. In other words, analysing massive data sets was at some point not only executed by the state apparatuses but also companies. Zuboff claims that both Google and Facebook have invented surveillance capitalism and translated it into "a new logic of accumulation".
This mutation resulted in both companies collecting very large numbers of data points about their users, with the core purpose of making a profit. By selling these data points to external users (particularly advertisers), it has become an economic mechanism. The combination of the analysis of massive data sets and the use of these data sets as a market mechanism has shaped the concept of surveillance capitalism. Surveillance capitalism has been heralded as the successor to neoliberalism.
Oliver Stone, creator of the film Snowden, pointed to the location-based game Pokémon Go as the "latest sign of the emerging phenomenon and demonstration of surveillance capitalism". Stone criticized that the location of its users was used not only for game purposes, but also to retrieve more information about its players. By tracking users' locations, the game collected far more information than just users' names and locations: "it can access the contents of your USB storage, your accounts, photographs, network connections, and phone activities, and can even activate your phone, when it is in standby mode". This data can then be analysed and commodified by companies such as Google (which significantly invested in the game's development) to improve the effectiveness of targeted advertisement.
Another aspect of surveillance capitalism is its influence on political campaigning. Personal data retrieved by data miners can enable various companies (most notoriously Cambridge Analytica) to improve the targeting of political advertising, a step beyond the commercial aims of previous surveillance capitalist operations. In this way, it is possible that political parties will be able to produce far more targeted political advertising to maximise its impact on voters. However, Cory Doctorow writes that the misuse of these data sets "will lead us towards totalitarianism". This may resemble a corporatocracy, and Joseph Turow writes that "the centrality of corporate power is a direct reality at the very heart of the digital age".: 17
== Theory ==
=== Shoshana Zuboff ===
The terminology "surveillance capitalism" was popularized by Harvard Professor Shoshana Zuboff.: 107 In Zuboff's theory, surveillance capitalism is a novel market form and a specific logic of capitalist accumulation. In her 2014 essay A Digital Declaration: Big Data as Surveillance Capitalism, she characterized it as a "radically disembedded and extractive variant of information capitalism" based on the commodification of "reality" and its transformation into behavioral data for analysis and sales.
In a subsequent article in 2015, Zuboff analyzed the societal implications of this mutation of capitalism. She distinguished between "surveillance assets", "surveillance capital", and "surveillance capitalism" and their dependence on a global architecture of computer mediation that she calls "Big Other", a distributed and largely uncontested new expression of power that constitutes hidden mechanisms of extraction, commodification, and control that threatens core values such as freedom, democracy, and privacy.
According to Zuboff, surveillance capitalism was pioneered by Google and later Facebook, just as mass-production and managerial capitalism were pioneered by Ford and General Motors a century earlier, and has now become the dominant form of information capitalism. Zuboff emphasizes that behavioral changes enabled by artificial intelligence have become aligned with the financial goals of American internet companies such as Google, Facebook, and Amazon.: 107
In her Oxford University lecture published in 2016, Zuboff identified the mechanisms and practices of surveillance capitalism, including the production of "prediction products" for sale in new "behavioral futures markets." She introduced the concept "dispossession by surveillance", arguing that it challenges the psychological and political bases of self-determination by concentrating rights in the surveillance regime. This is described as a "coup from above."
==== Key features ====
Zuboff's book The Age of Surveillance Capitalism is a detailed examination of the unprecedented power of surveillance capitalism and the quest by powerful corporations to predict and control human behavior. Zuboff identifies four key features in the logic of surveillance capitalism and explicitly follows the four key features identified by Google's chief economist, Hal Varian:
The drive toward more and more data extraction and analysis.
The development of new contractual forms using computer-monitoring and automation.
The desire to personalize and customize the services offered to users of digital platforms.
The use of the technological infrastructure to carry out continual experiments on its users and consumers.
==== Analysis ====
Zuboff compares demanding privacy from surveillance capitalists or lobbying for an end to commercial surveillance on the Internet to asking Henry Ford to make each Model T by hand and states that such demands are existential threats that violate the basic mechanisms of the entity's survival.
Zuboff warns that principles of self-determination might be forfeited due to "ignorance, learned helplessness, inattention, inconvenience, habituation, or drift" and states that "we tend to rely on mental models, vocabularies, and tools distilled from past catastrophes," referring to the twentieth century's totalitarian nightmares or the monopolistic predations of Gilded Age capitalism, with countermeasures that have been developed to fight those earlier threats not being sufficient or even appropriate to meet the novel challenges.
She also poses the question: "will we be the masters of information, or will we be its slaves?" and states that "if the digital future is to be our home, then it is we who must make it so".
In her book, Zuboff discusses the differences between industrial capitalism and surveillance capitalism. Zuboff writes that as industrial capitalism exploited nature, surveillance capitalism exploits human nature.
=== John Bellamy Foster and Robert W. McChesney ===
The term "surveillance capitalism" has also been used by political economists John Bellamy Foster and Robert W. McChesney, although with a different meaning. In an article published in Monthly Review in 2014, they apply it to describe the manifestation of the "insatiable need for data" of financialization, which they explain is "the long-term growth speculation on financial assets relative to GDP" introduced in the United States by industry and government in the 1980s that evolved out of the military-industrial complex and the advertising industry.
== Response ==
Numerous organizations have been struggling for free speech and privacy rights under the new surveillance capitalism, and various national governments have enacted privacy laws. It is also conceivable that new capabilities and uses for mass surveillance require structural changes towards a new system to create accountability and prevent misuse. Government attention to the dangers of surveillance capitalism increased especially after the exposure of the Facebook-Cambridge Analytica data scandal in early 2018. In response to the misuse of mass surveillance, multiple states have taken preventive measures. The European Union, for example, reacted to these events and tightened its rules and regulations on the misuse of big data. Surveillance capitalism has become considerably harder under these rules, known as the General Data Protection Regulation. However, implementing preventive measures against the misuse of mass surveillance is hard for many countries, as it requires structural change of the system.
Bruce Sterling's 2014 lecture at the Strelka Institute, "The epic struggle of the internet of things", explained how consumer products could become surveillance objects that track people's everyday life. In his talk, Sterling highlights the alliances between multinational corporations that develop Internet of Things-based surveillance systems, which feed surveillance capitalism.
In 2015, Tega Brain and Surya Mattu's satirical artwork Unfit Bits encouraged users to subvert fitness data collected by Fitbits. They suggested ways to fake datasets by attaching the device, for example, to a metronome or to a bicycle wheel. In 2018, Brain created a project with Sam Lavigne called New Organs, which collects people's stories of being monitored online and offline.
The 2019 documentary film The Great Hack tells the story of how the company Cambridge Analytica used Facebook to manipulate the 2016 U.S. presidential election. Extensive profiling of users and news feeds ordered by black-box algorithms were presented as the main source of the problem, which is also mentioned in Zuboff's book. The use of personal data to categorize individuals and potentially influence them politically highlights how people can become voiceless in the face of data misuse. This underlines the crucial role surveillance capitalism can play in social injustice, as it can affect all aspects of life.
== See also ==
Adware – Software with, often unwanted, adverts
Commercialization of the Internet – Running online services principally for financial gain
Criticism of capitalism – Arguments against the economic system of capitalism
Data capitalism
Data mining – Process of extracting and discovering patterns in large data sets
Decomputing
Digital integrity – law to protect people's digital lives
Five Eyes – Anglosphere intelligence alliance
Free and open-source software – Software whose source code is available and which is permissively licensed
Googlization – Neologism
Mass surveillance industry
Microtargeting – Use of online data to target advertising to individuals
Surveillance § Corporate
Targeted advertising – Form of advertising
Personalized marketing – Marketing strategy using data analysis to deliver individualized messages and products
Platform capitalism – Business model of technological platforms
Privacy concerns with social networking services
Social profiling – Process of constructing a social media user's profile using his or her social data
== References ==
== Further reading ==
Couldry, Nick; Mejias, Ulises Ali (2019). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford, California: Stanford University Press. ISBN 9781503609754.
Crain, Matthew (2021). Profit over Privacy: How Surveillance Advertising Conquered the Internet. Minneapolis: University of Minnesota Press. ISBN 9781517905057.
Zuboff, Shoshana (2018). Das Zeitalter des Überwachungskapitalismus. Berlin: Campus Verlag. ISBN 9783593509303.
== External links ==
Shoshana Zuboff Keynote: Reality is the Next Big Thing, YouTube, Elevate Festival, 2014
Big Other: Surveillance Capitalism and the Prospects of an Information Civilization, Shoshana Zuboff
Capitalism's New Clothes, Evgeny Morozov, The Baffler (4 February 2019) |
Loewner order | In mathematics, the Loewner order is the partial order defined by the convex cone of positive semi-definite matrices. This order is usually employed to generalize the definitions of monotone and concave/convex scalar functions to monotone and concave/convex Hermitian-valued functions. These functions arise naturally in matrix and operator theory and have applications in many areas of physics and engineering.
== Definition ==
Let A and B be two Hermitian matrices of order n. We say that A ≥ B if A − B is positive semi-definite. Similarly, we say that A > B if A − B is positive definite.
Although it is commonly discussed on matrices (as a finite-dimensional case), the Loewner order is also well-defined on operators (an infinite-dimensional case) in the analogous way.
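A minimal numerical check of the definition (Python with NumPy; the matrices are arbitrary illustrations): A ≥ B holds exactly when every eigenvalue of the Hermitian matrix A − B is non-negative.

import numpy as np

def loewner_geq(A, B, tol=1e-12):
    # A >= B in the Loewner order iff A - B is positive semi-definite,
    # i.e. all eigenvalues of the Hermitian matrix A - B are non-negative.
    return bool(np.all(np.linalg.eigvalsh(A - B) >= -tol))

A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0], [0.0, 2.0]])

print(loewner_geq(A, B))                     # True: A - B is positive semi-definite
print(loewner_geq(A, C), loewner_geq(C, A))  # False, False: A and C are incomparable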
== Properties ==
When A and B are real scalars (i.e. n = 1), the Loewner order reduces to the usual ordering of R. Although some familiar properties of the usual order of R are also valid when n ≥ 2, several properties are no longer valid. For instance, the comparability of two matrices may no longer be valid. In fact, if
{\displaystyle A={\begin{bmatrix}1&0\\0&0\end{bmatrix}}\ } and {\displaystyle B={\begin{bmatrix}0&0\\0&1\end{bmatrix}}\ }
then neither A ≥ B nor B ≥ A holds true. In other words, the Loewner order is a partial order, but not a total order.
Moreover, since A and B are Hermitian matrices, their eigenvalues are all real numbers.
If λ1(B) is the maximum eigenvalue of B and λn(A) the minimum eigenvalue of A, a sufficient criterion to have A ≥ B is that λn(A) ≥ λ1(B). If A or B is a multiple of the identity matrix, then this criterion is also necessary.
The Loewner order does not have the least-upper-bound property, and therefore does not form a lattice. It is bounded: for any finite set S of matrices, one can find an "upper-bound" matrix A that is greater than all of S. However, there will be multiple upper bounds. In a lattice, there would exist a unique maximum max(S) such that any upper bound U on S obeys max(S) ≤ U. But in the Loewner order, one can have two upper bounds A and B that are both minimal (there is no element C < A that is also an upper bound) but that are incomparable (A − B is neither positive semidefinite nor negative semidefinite).
== See also ==
Trace inequalities
== References ==
Pukelsheim, Friedrich (2006). Optimal design of experiments. Society for Industrial and Applied Mathematics. pp. 11–12. ISBN 9780898716047.
Bhatia, Rajendra (1997). Matrix Analysis. New York, NY: Springer. ISBN 9781461206538.
Zhan, Xingzhi (2002). Matrix inequalities. Berlin: Springer. pp. 1–15. ISBN 9783540437987. |
Statistical interference | When two probability distributions overlap, statistical interference exists. Knowledge of the distributions can be used to determine the likelihood that one parameter exceeds another, and by how much.
This technique can be used for geometric dimensioning of mechanical parts, determining when an applied load exceeds the strength of a structure, and in many other situations. This type of analysis can also be used to estimate the probability of failure or the failure rate.
== Dimensional interference ==
Mechanical parts are usually designed to fit precisely together. For example, if a shaft is designed to have a "sliding fit" in a hole, the shaft must be a little smaller than the hole. (Traditional tolerances may suggest that all dimensions fall within those intended tolerances. A process capability study of actual production, however, may reveal normal distributions with long tails.) Both the shaft and hole sizes will usually form normal distributions with some average (arithmetic mean) and standard deviation.
With two such normal distributions, a distribution of interference can be calculated. The derived distribution will also be normal, and its average will be equal to the difference between the means of the two base distributions. The variance of the derived distribution will be the sum of the variances of the two base distributions.
This derived distribution can be used to determine how often the difference in dimensions will be less than zero (i.e., the shaft cannot fit in the hole), how often the difference will be less than the required sliding gap (the shaft fits, but too tightly), and how often the difference will be greater than the maximum acceptable gap (the shaft fits, but not tightly enough).
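A minimal sketch of this calculation for two normal distributions (Python with SciPy; the shaft and hole dimensions and the gap limits are illustrative assumptions):

import numpy as np
from scipy.stats import norm

# Illustrative shaft and hole dimensions (mm): mean and standard deviation.
shaft_mu, shaft_sigma = 9.98, 0.01
hole_mu,  hole_sigma  = 10.00, 0.01

# Clearance = hole - shaft is also normally distributed.
gap_mu    = hole_mu - shaft_mu
gap_sigma = np.sqrt(shaft_sigma**2 + hole_sigma**2)

p_no_fit    = norm.cdf(0.0, gap_mu, gap_sigma)                 # shaft does not fit
p_too_tight = norm.cdf(0.005, gap_mu, gap_sigma) - p_no_fit    # fits, but gap < 0.005 mm
p_too_loose = norm.sf(0.05, gap_mu, gap_sigma)                 # gap > 0.05 mm
print(p_no_fit, p_too_tight, p_too_loose)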
== Physical property interference ==
Physical properties and the conditions of use are also inherently variable. For example, the applied load (stress) on a mechanical part may vary. The measured strength of that part (tensile strength, etc.) may also be variable. The part will break when the stress exceeds the strength.
With two normal distributions, the statistical interference may be calculated as above. (This problem is also workable for transformed units such as the log-normal distribution). With other distributions, or combinations of different distributions, a Monte Carlo method or simulation is often the most practical way to quantify the effects of statistical interference.
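When the distributions are not normal, a Monte Carlo sketch such as the following (with illustrative log-normal parameters) estimates the probability of failure directly:

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative assumption: log-normal applied stress and strength (arbitrary units).
stress   = rng.lognormal(mean=3.0, sigma=0.25, size=n)
strength = rng.lognormal(mean=3.5, sigma=0.20, size=n)

p_failure = np.mean(stress > strength)   # failure when stress exceeds strength
print(p_failure)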
== See also ==
Interference fit
Interval estimation
Joint probability distribution
Probabilistic design
Process capability
Reliability engineering
Specification
Tolerance (engineering)
== References ==
Paul H. Garthwaite, Byron Jones, Ian T. Jolliffe (2002) Statistical Inference. ISBN 0-19-857226-3
Haugen, (1980) Probabilistic mechanical design, Wiley. ISBN 0-471-05847-5 |
Explanation-based learning | Explanation-based learning (EBL) is a form of machine learning that exploits a very strong, or even perfect, domain theory (i.e. a formal theory of an application domain akin to a domain model in ontology engineering, not to be confused with Scott's domain theory) in order to make generalizations or form concepts from training examples. It is also linked with encoding (memory) to help with learning.
== Details ==
An example of EBL using a perfect domain theory is a program that learns to play chess through example. A specific chess position that contains an important feature such as "Forced loss of black queen in two moves" includes many irrelevant features, such as the specific scattering of pawns on the board. EBL can take a single training example and determine what are the relevant features in order to form a generalization.
A domain theory is perfect or complete if it contains, in principle, all information needed to decide any question about the domain. For example, the domain theory for chess is simply the rules of chess. Knowing the rules, in principle, it is possible to deduce the best move in any situation. However, actually making such a deduction is impossible in practice due to combinatoric explosion. EBL uses training examples to make searching for deductive consequences of a domain theory efficient in practice.
In essence, an EBL system works by finding a way to deduce each training example from the system's existing database of domain theory. Having a short proof of the training example extends the domain-theory database, enabling the EBL system to find and classify future examples that are similar to the training example very quickly.
The main drawback of the method—the cost of applying the learned proof macros, as these become numerous—was analyzed by Minton.
=== Basic formulation ===
EBL software takes four inputs:
a hypothesis space (the set of all possible conclusions)
a domain theory (axioms about a domain of interest)
training examples (specific facts that rule out some possible hypothesis)
operationality criteria (criteria for determining which features in the domain are efficiently recognizable, e.g. which features are directly detectable using sensors)
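The following minimal Python sketch illustrates how these four inputs interact on the classic "cup" toy example; the domain theory, training example and operationality set are illustrative assumptions, not a specific EBL system:

# Domain theory: each rule says head(X) holds if all body predicates hold for X.
DOMAIN_THEORY = {
    "cup":          ["liftable", "holds_liquid"],
    "liftable":     ["light", "has_handle"],
    "holds_liquid": ["has_concavity", "concavity_points_up"],
}

# Operationality criterion: predicates that are directly observable.
OPERATIONAL = {"light", "has_handle", "has_concavity", "concavity_points_up",
               "red", "made_of_ceramic"}

# Training example: observed facts about one object; the last two are irrelevant.
EXAMPLE = {"light", "has_handle", "has_concavity", "concavity_points_up",
           "red", "made_of_ceramic"}

def explain(goal):
    """Expand a goal down to operational predicates using the domain theory."""
    if goal in OPERATIONAL:
        return [goal]
    leaves = []
    for sub in DOMAIN_THEORY[goal]:
        leaves.extend(explain(sub))
    return leaves

leaves = explain("cup")
assert all(leaf in EXAMPLE for leaf in leaves)   # the example really is a cup

# Generalized operational rule learned from the single example; irrelevant facts
# (red, made_of_ceramic) are dropped because they do not appear in the explanation.
print("cup(X) :- " + ", ".join(f"{leaf}(X)" for leaf in leaves))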
== Application ==
An especially good application domain for EBL is natural language processing (NLP). Here a rich domain theory, i.e. a natural language grammar (although neither perfect nor complete), is tuned to a particular application or particular language usage, using a treebank (the training examples). Rayner pioneered this work. The first successful industrial application was to a commercial NL interface to relational databases. The method has been successfully applied to several large-scale natural language parsing systems, where the utility problem was solved by omitting the original grammar (domain theory) and using specialized LR-parsing techniques, resulting in huge speed-ups, at a cost in coverage, but with a gain in disambiguation.
EBL-like techniques have also been applied to surface generation, the converse of parsing.
When applying EBL to NLP, the operationality criteria can be hand-crafted, or can be inferred from the treebank using either the entropy of its or-nodes or a target coverage/disambiguation trade-off (= recall/precision trade-off = f-score).
EBL can also be used to compile grammar-based language models for speech recognition, from general unification grammars.
Note how the utility problem, first exposed by Minton, was solved by discarding the original grammar/domain theory, and that the quoted articles tend to contain the phrase grammar specialization—quite the opposite of the original term explanation-based generalization. Perhaps the best name for this technique would be data-driven search space reduction.
Other people who worked on EBL for NLP include Guenther Neumann, Aravind Joshi, Srinivas Bangalore, and Khalil Sima'an.
== See also ==
One-shot learning in computer vision
Zero-shot learning
== References == |
Chernoff's distribution | In probability theory, Chernoff's distribution, named after Herman Chernoff, is the probability distribution of the random variable
{\displaystyle Z={\underset {s\in \mathbf {R} }{\operatorname {argmax} }}\ (W(s)-s^{2}),}
where W is a "two-sided" Wiener process (or two-sided "Brownian motion") satisfying W(0) = 0.
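A crude Monte Carlo sketch (Python with NumPy; the grid spacing, truncation range and seed are arbitrary choices) that draws approximate samples of Z by discretizing the two-sided Wiener process:

import numpy as np

rng = np.random.default_rng(0)

def sample_chernoff(n_paths=5_000, s_max=3.0, ds=0.001):
    """Approximate draws of Z = argmax_s (W(s) - s^2) on a truncated grid."""
    n_half = int(s_max / ds)
    s = ds * np.arange(-n_half, n_half + 1)          # grid containing s = 0 exactly
    draws = np.empty(n_paths)
    for k in range(n_paths):
        inc = rng.normal(scale=np.sqrt(ds), size=s.size - 1)
        w = np.concatenate(([0.0], np.cumsum(inc)))  # Brownian path along the grid
        w -= w[n_half]                               # shift so that W(0) = 0 (two-sided Wiener process)
        draws[k] = s[np.argmax(w - s**2)]
    return draws

z = sample_chernoff()
print(z.mean(), z.std())   # the empirical distribution is roughly symmetric about 0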
If
{\displaystyle V(a,c)={\underset {s\in \mathbf {R} }{\operatorname {argmax} }}\ (W(s)-c(s-a)^{2}),}
then V(0, c) has density
{\displaystyle f_{c}(t)={\frac {1}{2}}g_{c}(t)g_{c}(-t)}
where gc has Fourier transform given by
{\displaystyle {\hat {g}}_{c}(s)={\frac {(2/c)^{1/3}}{\operatorname {Ai} (i(2c^{2})^{-1/3}s)}},\ \ \ s\in \mathbf {R} }
and where Ai is the Airy function. Thus fc is symmetric about 0 and the density ƒZ = ƒ1. Groeneboom (1989) shows that
{\displaystyle f_{Z}(z)\sim {\frac {1}{2}}{\frac {4^{4/3}|z|}{\operatorname {Ai} '({\tilde {a}}_{1})}}\exp \left(-{\frac {2}{3}}|z|^{3}+2^{1/3}{\tilde {a}}_{1}|z|\right){\text{ as }}z\rightarrow \infty }
where ã_1 ≈ −2.3381 is the largest zero of the Airy function Ai and where Ai′(ã_1) ≈ 0.7022. In the same paper, Groeneboom also gives an analysis of the process {V(a, 1) : a ∈ R}. The connection with the statistical problem of estimating a monotone density is discussed in Groeneboom (1985). Chernoff's distribution is now known to appear in a wide range of monotone problems including isotonic regression.
The Chernoff distribution should not be confused with the Chernoff geometric distribution (called the Chernoff point in information geometry) induced by the Chernoff information.
== History ==
Groeneboom, Lalley and Temme state that the first investigation of this distribution was probably by Chernoff in 1964, who studied the behavior of a certain estimator of a mode. In his paper, Chernoff characterized the distribution via an analytic representation based on the heat equation with suitable boundary conditions. Initial attempts at approximating Chernoff's distribution via solving the heat equation, however, did not achieve satisfactory precision due to the nature of the boundary conditions. The computation of the distribution is addressed, for example, in Groeneboom and Wellner (2001).
The connection of Chernoff's distribution with Airy functions was also found independently by Daniels and Skyrme and Temme, as cited in Groeneboom, Lalley and Temme. These two papers, along with Groeneboom (1989), were all written in 1984.
== References == |
Single-particle trajectory | Single-particle trajectories (SPTs) consist of a collection of successive discrete points ordered in time. These trajectories are acquired from images in experimental data. In the context of cell biology, the trajectories are obtained by the transient activation by a laser of small dyes attached to a moving molecule.
Molecules can now be visualized with recent super-resolution microscopy, which allows routine collection of thousands of short and long trajectories. These trajectories explore part of a cell, either on the membrane or in three dimensions, and their paths are critically influenced by the local crowded organization and molecular interactions inside the cell, as emphasized in various cell types such as neuronal cells, astrocytes, immune cells and many others.
== SPTs allow observing moving molecules inside cells to collect statistics ==
SPT allows observing moving particles. These trajectories are used to investigate cytoplasm or membrane organization, but also cell nucleus dynamics, remodeler dynamics or mRNA production. Due to the constant improvement of the instrumentation, the spatial resolution is continuously decreasing, now reaching values of approximately 20 nm, while the acquisition time step is usually in the range of 10 to 50 ms to capture short events occurring in live tissues. A variant of super-resolution microscopy called sptPALM is used to detect the local and dynamically changing organization of molecules in cells, or events of DNA binding by transcription factors in the mammalian nucleus. Super-resolution image acquisition and particle tracking are crucial to guarantee high-quality data.
== Assembling points into a trajectory based on tracking algorithms ==
Once points are acquired, the next step is to reconstruct a trajectory. This step is done using known tracking algorithms to connect the acquired points. Tracking algorithms are based on a physical model of trajectories perturbed by an additive random noise.
== Extract physical parameters from redundant SPTs ==
The redundancy of many short SPTs is a key feature for extracting biophysical parameters from empirical data at a molecular level. In contrast, long isolated trajectories have been used to extract information along trajectories, destroying the natural spatial heterogeneity associated with the various positions. The main statistical tool is to compute the mean-square displacement (MSD) or second-order statistical moment:
{\displaystyle \langle |X(t+\Delta t)-X(t)|^{2}\rangle \sim (\Delta t)^{\alpha }}
(average over realizations), where α is called the anomalous exponent.
For a Brownian motion, {\displaystyle \langle |X(t+\Delta t)-X(t)|^{2}\rangle =2nD\,\Delta t}, where D is the diffusion coefficient and n is the dimension of the space. Some other properties can also be recovered from long trajectories, such as the radius of confinement for a confined motion. The MSD has been widely used in early applications of long but not necessarily redundant single-particle trajectories in a biological context. However, the MSD applied to long trajectories suffers from several issues. First, it is not precise, in part because the measured points could be correlated. Second, it cannot be used to compute any physical diffusion coefficient when trajectories consist of switching episodes, for example alternating between free and confined diffusion. At low spatiotemporal resolution of the observed trajectories, the MSD behaves sublinearly with time, a process known as anomalous diffusion, which is due in part to the averaging of the different phases of the particle motion. In the context of cellular transport (amoeboid), high-resolution motion analysis of long SPTs in micro-fluidic chambers containing obstacles revealed different types of cell motion depending on the obstacle density: crawling was found at low obstacle density, and directed motion and random phases could even be differentiated.
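A minimal sketch (Python with NumPy; the simulated Brownian trajectory and the fitting range are illustrative assumptions) of computing the time-averaged MSD of one trajectory and estimating the anomalous exponent by a log-log fit:

import numpy as np

rng = np.random.default_rng(1)
dt, n, D = 0.02, 2000, 0.1                      # time step (s), number of points, um^2/s

# Simulated 2D Brownian trajectory (illustrative stand-in for a measured SPT).
steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(n, 2))
traj = np.cumsum(steps, axis=0)

def msd(traj, max_lag):
    """Time-averaged mean-square displacement for lags 1..max_lag (in frames)."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = traj[lag:] - traj[:-lag]
        out[lag - 1] = np.mean(np.sum(disp**2, axis=1))
    return out

lags = np.arange(1, 51) * dt
m = msd(traj, 50)

# MSD ~ (lag)^alpha: alpha is the slope of log(MSD) versus log(lag); ~1 for free diffusion.
alpha, intercept = np.polyfit(np.log(lags), np.log(m), 1)
D_est = np.exp(intercept) / 4                   # for 2D Brownian motion, MSD = 4 D (lag)
print(alpha, D_est)                             # alpha close to 1, D_est roughly D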
== Physical model to recover spatial properties from redundant SPTs ==
=== Langevin and Smoluchowski equations as a model of motion ===
Statistical methods to extract information from SPTs are based on stochastic models, such as the Langevin equation or its Smoluchowski's limit and associated models that account for additional localization point identification noise or memory kernel. The Langevin equation describes a stochastic particle driven by a Brownian force Ξ and a field of force (e.g., electrostatic, mechanical, etc.) with an expression F(x, t):
{\displaystyle m{\ddot {x}}+\Gamma {\dot {x}}-F(x,t)=\Xi ,}
where m is the mass of the particle and Γ = 6πaρ is the friction coefficient of a diffusing particle, with ρ the viscosity. Here Ξ is a δ-correlated Gaussian white noise. The force can be derived from a potential well U so that F(x, t) = −U′(x), and in that case the equation takes the form
{\displaystyle m{\frac {d^{2}x}{dt^{2}}}+\Gamma {\frac {dx}{dt}}+\nabla U(x)={\sqrt {2\varepsilon \gamma }}\,{\frac {d\eta }{dt}},}
where ε = k_B T is the thermal energy, k_B the Boltzmann constant and T the temperature. Langevin's equation is used to describe trajectories where inertia or acceleration matters. For example, at very short timescales, when a molecule unbinds from a binding site or escapes from a potential well, the inertia term allows the particle to move away from the attractor and thus prevents the immediate rebinding that could plague numerical simulations.
In the large friction limit γ → ∞, the trajectories x(t) of the Langevin equation converge in probability to those of the Smoluchowski equation
{\displaystyle \gamma {\dot {x}}+U^{\prime }(x)={\sqrt {2\varepsilon \gamma }}\,{\dot {w}},}
where ẇ(t) is δ-correlated. This equation is obtained when the diffusion coefficient is constant in space. When this is not the case, coarse-grained equations (at a coarse spatial resolution) should be derived from molecular considerations. The interpretation of the physical forces is not resolved by the Itô versus Stratonovich integral representations or any others.
=== General model equations ===
For a timescale much longer than the elementary molecular collision, the position of a tracked particle is described by a more general overdamped limit of the Langevin stochastic model. Indeed, if the acquisition timescale of empirical recorded trajectories is much lower compared to the thermal fluctuations, rapid events are not resolved in the data. Thus at this coarser spatiotemporal scale, the motion description is replaced by an effective stochastic equation
{\displaystyle {\dot {X}}(t)={b}(X(t))+{\sqrt {2}}{B}_{e}(X(t)){\dot {w}}(t),\qquad \qquad (1)}
where b(X) is the drift field and B_e the diffusion matrix. The effective diffusion tensor can vary in space, D(X) = (1/2) B(X)B^T(X), where ^T denotes the transpose. This equation is not derived but assumed. However, the diffusion coefficient should be smooth enough, as any discontinuity in D should be resolved by a spatial scaling to analyse the source of the discontinuity (usually inert obstacles or transitions between two media). The observed effective diffusion tensor is not necessarily isotropic and can be state-dependent, whereas the friction coefficient γ remains constant as long as the medium stays the same, and the microscopic diffusion coefficient (or tensor) could remain isotropic.
== Statistical analysis of these trajectories ==
The development of statistical methods is based on stochastic models and on a possible deconvolution procedure applied to the trajectories. Numerical simulations can also be used to identify specific features that can be extracted from single-particle trajectory data. The goal of building a statistical ensemble from SPT data is to observe local physical properties of the particles, such as velocity, diffusion, confinement or attracting forces reflecting the interactions of the particles with their local nanometer-scale environments. It is possible to use stochastic modeling to construct, from the diffusion coefficient (or tensor), the confinement or local density of obstacles reflecting the presence of biological objects of different sizes.
=== Empirical estimators for the drift and diffusion tensor of a stochastic process ===
Several empirical estimators have been proposed to recover the local diffusion coefficient, the vector field and even organized patterns in the drift, such as potential wells. Empirical estimators that serve to recover physical properties are constructed from parametric and non-parametric statistics. Retrieving the statistical parameters of a diffusion process from one-dimensional time series uses the first-moment estimator or Bayesian inference.
The models and the analysis assume that the processes are stationary, so that the statistical properties of trajectories do not change over time. In practice, this assumption is satisfied when trajectories are acquired for less than a minute, during which only a few slow changes may occur on the surface of a neuron, for example. Non-stationary behavior is observed using a time-lapse analysis, with a delay of tens of minutes between successive acquisitions.
The coarse-grained model Eq. 1 is recovered from the conditional moments of the trajectory by computing the increments ΔX = X(t + Δt) − X(t):
{\displaystyle a(x)=\lim _{\Delta t\rightarrow 0}{\frac {E[\Delta X(t)\mid X(t)=x]}{\Delta t}},}
{\displaystyle D(x)=\lim _{\Delta t\rightarrow 0}{\frac {E[\Delta X(t)^{T}\,\Delta X(t)\mid X(t)=x]}{2\,\Delta t}}.}
Here the notation E[· | X(t) = x] means averaging over all trajectories that are at point x at time t. The coefficients of the Smoluchowski equation can be statistically estimated at each point x from an infinitely large sample of its trajectories in the neighborhood of the point x at time t.
=== Empirical estimation ===
In practice, the expectations for a and D are estimated by finite sample averages, and Δt is the time resolution of the recorded trajectories. The formulas for a and D are approximated at the time step Δt, with tens to hundreds of points falling in any bin; this is usually enough for the estimation.
To estimate the local drift and diffusion coefficients, trajectories are first grouped within a small neighbourhood. The field of observation is partitioned into square bins S(x_k, r) of side r and centre x_k, and the local drift and diffusion are estimated for each square. Considering a sample with N_t trajectories {x^i(t_1), …, x^i(t_{N_s})}, where the t_j are the sampling times, the discretization of the equation for the drift a(x_k) = (a_x(x_k), a_y(x_k)) at position x_k is given for each spatial projection on the x and y axes by
{\displaystyle a_{x}(x_{k})\approx {\frac {1}{N_{k}}}\sum _{j=1}^{N_{t}}\sum _{i=0,{\tilde {x}}_{i}^{j}\in S(x_{k},r)}^{N_{s}-1}\left({\frac {x_{i+1}^{j}-x_{i}^{j}}{\Delta t}}\right)}
{\displaystyle a_{y}(x_{k})\approx {\frac {1}{N_{k}}}\sum _{j=1}^{N_{t}}\sum _{i=0,{\tilde {x}}_{i}^{j}\in S(x_{k},r)}^{N_{s}-1}\left({\frac {y_{i+1}^{j}-y_{i}^{j}}{\Delta t}}\right),}
where N_k is the number of points of trajectories that fall in the square S(x_k, r). Similarly, the components of the effective diffusion tensor D(x_k) are approximated by the empirical sums
{\displaystyle D_{xx}(x_{k})\approx {\frac {1}{N_{k}}}\sum _{j=1}^{N_{t}}\sum _{i=0,x_{i}\in S(x_{k},r)}^{N_{s}-1}{\frac {(x_{i+1}^{j}-x_{i}^{j})^{2}}{2\,\Delta t}},}
{\displaystyle D_{yy}(x_{k})\approx {\frac {1}{N_{k}}}\sum _{j=1}^{N_{t}}\sum _{i=0,x_{i}\in S(x_{k},r)}^{N_{s}-1}{\frac {(y_{i+1}^{j}-y_{i}^{j})^{2}}{2\,\Delta t}},}
{\displaystyle D_{xy}(x_{k})\approx {\frac {1}{N_{k}}}\sum _{j=1}^{N_{t}}\sum _{i=0,x_{i}\in S(x_{k},r)}^{N_{s}-1}{\frac {(x_{i+1}^{j}-x_{i}^{j})(y_{i+1}^{j}-y_{i}^{j})}{2\,\Delta t}}.}
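A minimal sketch of these binned estimators on simulated trajectories (Python with NumPy; the drift field, diffusion coefficient, bin size and centre are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(3)
D_true, dt = 0.05, 0.01
n_traj, n_steps = 400, 200

# Simulate many short 2D trajectories with drift a(x) = -x (toward the origin).
trajs = np.empty((n_traj, n_steps, 2))
trajs[:, 0] = rng.uniform(-1, 1, size=(n_traj, 2))
for i in range(1, n_steps):
    drift = -trajs[:, i - 1]
    noise = np.sqrt(2 * D_true * dt) * rng.normal(size=(n_traj, 2))
    trajs[:, i] = trajs[:, i - 1] + drift * dt + noise

# Binned estimators: average the increments of all points falling in one square bin.
r = 0.25                                   # bin side
center = np.array([0.5, 0.0])              # centre x_k of the chosen bin
pts = trajs[:, :-1].reshape(-1, 2)         # positions X(t)
incs = (trajs[:, 1:] - trajs[:, :-1]).reshape(-1, 2)   # increments Delta X
in_bin = np.all(np.abs(pts - center) <= r / 2, axis=1)

a_hat = incs[in_bin].mean(axis=0) / dt                  # drift estimate a(x_k)
D_hat = (incs[in_bin] ** 2).mean(axis=0) / (2 * dt)     # diagonal of D(x_k)
print(a_hat, D_hat)    # a_hat is roughly -center, D_hat roughly D_true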
The moment estimation requires a large number of trajectories passing through each point, which agrees precisely with the massive data generated by certain types of super-resolution acquisition, such as sptPALM, on biological samples. The exact inversion of Langevin's equation demands in theory an infinite number of trajectories passing through any point x of interest. In practice, the recovery of the drift and diffusion tensor is obtained after a region is subdivided by a square grid of radius r or by moving sliding windows (of the order of 50 to 100 nm).
=== Automated recovery of the boundary of a nanodomain ===
Algorithms based on mapping the density of points extracted from trajectories reveal local binding and trafficking interactions and the organization of dynamic subcellular sites. The algorithms can be applied to study regions of high density revealed by SPTs. Examples are organelles such as the endoplasmic reticulum or cell membranes. The method is based on spatiotemporal segmentation to detect the local architecture and boundaries of high-density regions for domains measuring hundreds of nanometers.
== References == |
Item tree analysis | Item tree analysis (ITA) is a data analytical method which allows constructing a
hierarchical structure on the items of a questionnaire or test from observed response
patterns. Assume that we have a questionnaire with m items and that subjects can
answer positive (1) or negative (0) to each of these items, i.e. the items are
dichotomous. If n subjects answer the items this results in a binary data matrix D
with m columns and n rows.
Typical examples of this data format are test items which can be solved (1) or failed
(0) by subjects. Other typical examples are questionnaires where the items are
statements to which subjects can agree (1) or disagree (0).
Depending on the content of the items it is possible that the response of a subject to an
item j determines her or his responses to other items. It is, for example, possible that
each subject who agrees to item j will also agree to item i. In this case we say that
item j implies item i (short i → j). The goal of an ITA is to uncover such
deterministic implications from the data set D.
== Algorithms for ITA ==
ITA was originally developed by Van Leeuwe in 1974. The result of his algorithm,
which we refer to in the following as Classical ITA, is a logically consistent set of implications i → j. Logically consistent means that if i implies j and j implies k then i implies k for each triple i, j, k of items. Thus the outcome of an ITA is a reflexive and transitive relation on the item set, i.e. a quasi-order on the items.
A different algorithm to perform an ITA was suggested in Schrepp (1999). This algorithm is called Inductive ITA.
Classical ITA and inductive ITA both construct a quasi-order on the item set by explorative data analysis. But both methods use a different algorithm to construct this quasi-order. For a given data set the resulting quasi-orders from classical and inductive ITA will usually differ.
A detailed description of the algorithms used in classical and inductive ITA can be found in Schrepp (2003) or Schrepp (2006)[1]. In a recent paper (Sargin & Ünlü, 2009) some modifications to the algorithm of inductive ITA are proposed, which improve the ability of this method to detect the correct implications from data (especially in the case of higher random response error rates).
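The following minimal Python sketch shows the counting step that underlies both variants: for each ordered pair of items, count the response patterns that violate the implication "a positive answer to i entails a positive answer to j" and accept implications whose violation count does not exceed a tolerance. It illustrates the idea only; classical and inductive ITA differ in how the tolerated error level is chosen and how logical consistency is enforced, and the data matrix below is an arbitrary illustration.

import numpy as np
from itertools import permutations

# Binary data matrix D: n subjects (rows) x m items (columns); illustrative data.
D = np.array([
    [1, 1, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 1],
])

def implications(D, tolerance=0):
    """Accept 'positive answer to i entails positive answer to j' whenever the
    number of violating subjects (i answered 1, j answered 0) is <= tolerance."""
    n, m = D.shape
    accepted = set()
    for i, j in permutations(range(m), 2):
        violations = np.sum((D[:, i] == 1) & (D[:, j] == 0))
        if violations <= tolerance:
            accepted.add((i, j))
    return accepted

print(sorted(implications(D)))
# With this data: item 0 entails items 1, 2 and 3; item 1 entails 3; item 2 entails 3.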
== Relation to other methods ==
ITA belongs to a group of data analysis methods called Boolean analysis of questionnaires.
Boolean analysis was introduced by Flament in 1976. The goal of a Boolean analysis is to
detect deterministic dependencies (formulas from Boolean logic connecting the items, like for example
i → j, i ∧ j → k, and i ∨ j → k) between the items of a questionnaire or test.
Since the basic work of Flament (1976) a number of different methods for boolean analysis
have been developed. See, for example, Van Buggenhaut and Degreef (1987), Duquenne (1987) or Theuns (1994).
These methods share the goal to derive deterministic dependencies between the items of a
questionnaire from data, but differ in the algorithms to reach this goal. A comparison of ITA
to other methods of boolean data analysis can be found in Schrepp (2003).
== Applications ==
There are several research papers available, which describe concrete applications of item tree analysis.
Held and Korossy (1998) analyze implications on a set of algebra problems with classical ITA. Item tree analysis is also used in a number of social science studies to get insight into the structure of dichotomous data. In Bart and Krus (1973), for example, a predecessor of ITA is used to establish a hierarchical order on items that describe socially unaccepted behavior. In Janssens (1999) a method of Boolean analysis is used to investigate the
integration process of minorities into the value system of the dominant culture. Schrepp describes several applications of inductive ITA in the analysis of dependencies between items of social science questionnaires.
== Example of an application ==
To show the possibilities of an analysis of a data set by ITA we analyse the statements of question 4 of the International Social Science Survey Programme (ISSSP) for the year 1995 by inductive and classical ITA.
The ISSSP is a continuing annual program of cross-national collaboration on surveys covering important topics for social science research. The program conducts one survey each year with comparable questions in each of the participating nations. The theme of the 1995 survey was national identity. We analyze the results for question 4 for the data set of Western Germany.
The statement for question 4 was:
Some people say the following things are important for being truly German. Others say they are not important. How important do you think each of the following is:
1. to have been born in Germany
2. to have German citizenship
3. to have lived in Germany for most of one’s life
4. to be able to speak German
5. to be a Christian
6. to respect Germany’s political institutions
7. to feel German
The subjects had the response possibilities Very important, Important, Not very important, Not important at all, and Can’t choose to answer the statements.
To apply ITA to this data set we changed the answer categories. Very important and Important are coded as 1. Not very important and Not important at all are coded as 0. Can’t choose was handled as missing data.
The following figure shows the resulting quasi-orders ≤_IITA from inductive ITA and ≤_CITA from classical ITA.
== Available software ==
The program ITA 2.0 implements both classical and inductive ITA. A short documentation of the program is available in Schrepp (2006).
== See also ==
Item response theory
== Notes ==
== References ==
Bart, W. M., & Krus, D. J. (1973). An ordering-theoretic method to determine hierarchies among items. Educational and Psychological Measurement, 33, 291–300.
Duquenne, V. (1987). Conceptual implications between attributes and some representation properties for finite lattices. In B. Ganter, R. Wille, & K. Wolfe (Eds.), Beiträge zur Begriffsanalyse: Vorträge der Arbeitstagung Begriffsanalyse, Darmstadt 1986 (pp. 313–339). Mannheim: Wissenschafts-Verlag.
Flament, C. (1976). L'Analyse Booléenne de Questionnaire. Paris: Mouton.
Held, T., & Korossy, K. (1998). Data-analysis as heuristic for establishing theoretically founded item structures. Zeitschrift für Psychologie, 206, 169–188.
Janssens, R. (1999). A Boolean approach to the measurement of group processes and attitudes. The concept of integration as an example. Mathematical Social Sciences, 38, 275–293.
Schrepp, M. (1999). On the empirical construction of implications on bi-valued test items. Mathematical Social Sciences, 38(3), 361–375.
Schrepp, M. (2002). Explorative analysis of empirical data by Boolean analysis of questionnaires. Zeitschrift für Psychologie, 210(2), 99–109.
Schrepp, M. (2003). A method for the analysis of hierarchical dependencies between items of a questionnaire. Methods of Psychological Research, 19, 43–79.
Schrepp, M. (2006). ITA 2.0: A program for Classical and Inductive Item Tree Analysis. Journal of Statistical Software, 16(10).
Schrepp, M. (2006). Properties of the correlational agreement coefficient: A comment to Ünlü & Albert (2004). Mathematical Social Sciences, 51(1), 117–123.
Schrepp, M. (2007). On the evaluation of fit measures for quasi-orders. Mathematical Social Sciences, 53(2), 196–208.
Theuns, P. (1994). A dichotomization method for Boolean analysis of quantifiable co-occurrence data. In G. Fischer & D. Laming (Eds.), Contributions to Mathematical Psychology, Psychometrics and Methodology, Scientific Psychology Series (pp. 173–194). New York: Springer-Verlag.
Ünlü, A., & Albert, D. (2004). The Correlational Agreement Coefficient CA - a mathematical analysis of a descriptive goodness-of-fit measure. Mathematical Social Sciences, 48, 281–314.
Van Buggenhaut, J., & Degreef, E. (1987). On dichotomization methods in Boolean analysis of questionnaires. In E. Roskam & R. Suck (Eds.), Mathematical Psychology in Progress. North Holland: Elsevier Science Publishers B.V.
Van Leeuwe, J. F. J. (1974). Item tree analysis. Nederlands Tijdschrift voor de Psychologie, 29, 475–484.
Sargin, A., & Ünlü, A. (2009). Inductive item tree analysis: Corrections, improvements, and comparisons. Mathematical Social Sciences, 58, 376–392. |
NSynth | NSynth (a portmanteau of "Neural Synthesis") is a WaveNet-based autoencoder for synthesizing audio, outlined in a paper in April 2017.
== Overview ==
The model generates sounds through a neural network based synthesis, employing a WaveNet-style autoencoder to learn its own temporal embeddings from four different sounds. Google then released an open source hardware interface for the algorithm called NSynth Super, used by notable musicians such as Grimes and YACHT to generate experimental music using artificial intelligence. The research and development of the algorithm was part of a collaboration between Google Brain, Magenta and DeepMind.
== Technology ==
=== Dataset ===
The NSynth dataset is composed of 305,979 one-shot instrumental notes featuring a unique pitch, timbre, and envelope, sampled from 1,006 instruments from commercial sample libraries. For each instrument the dataset contains four-second 16 kHz audio snippets, generated by ranging over every pitch of a standard MIDI piano as well as five different velocities. The dataset is made available under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.
=== Machine learning model ===
A spectral autoencoder model and a WaveNet autoencoder model are publicly available on GitHub. The baseline model uses a spectrogram with fft_size 1024 and hop_size 256, MSE loss on the magnitudes, and the Griffin-Lim algorithm for reconstruction. The WaveNet model trains on mu-law encoded waveform chunks of size 6144. It learns embeddings with 16 dimensions that are downsampled by 512 in time.
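A rough sketch of the baseline's analysis/resynthesis path is given below, assuming librosa is available; the file name, loading settings and iteration count are placeholders, and the actual NSynth training code differs from this simplified illustration.

import librosa
import numpy as np

# Illustrative parameters matching the description above (fft_size 1024, hop_size 256).
N_FFT, HOP = 1024, 256

# "note.wav" is a placeholder for one four-second, 16 kHz NSynth note.
y, sr = librosa.load("note.wav", sr=16000)

# Analysis: magnitude spectrogram (the baseline trains an autoencoder on magnitudes
# with an MSE loss; here only the analysis/resynthesis path is reproduced).
S = np.abs(librosa.stft(y, n_fft=N_FFT, hop_length=HOP))

# Resynthesis: Griffin-Lim recovers a phase consistent with the (possibly reconstructed)
# magnitudes, since the spectral model itself does not predict phase.
y_hat = librosa.griffinlim(S, n_iter=60, hop_length=HOP, win_length=N_FFT)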
== NSynth Super ==
In 2018 Google released a hardware interface for the NSynth algorithm, called NSynth Super, designed to provide an accessible physical interface to the algorithm for musicians to use in their artistic production.
Design files, source code and internal components are released under an open source Apache License 2.0, enabling hobbyists and musicians to freely build and use the instrument. At the core of the NSynth Super there is a Raspberry Pi, extended with a custom printed circuit board to accommodate the interface elements.
== Influence ==
Despite not being publicly available as a commercial product, NSynth Super has been used by notable artists, including Grimes and YACHT.
Grimes reported using the instrument in her 2020 studio album Miss Anthropocene.
YACHT announced an extensive use of NSynth Super in their album Chain Tripping.
Claire L. Evans compared the potential influence of the instrument to the Roland TR-808.
The NSynth Super design was honored with a D&AD Yellow Pencil award in 2018.
== References ==
== Further reading ==
Engel, Jesse; Resnick, Cinjon; Roberts, Adam; Dieleman, Sander; Eck, Douglas; Simonyan, Karen; Norouzi, Mohammad (2017). "Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders". arXiv:1704.01279 [cs.LG].
== External links ==
Official Nsynth Super site
Official Magenta site
In-browser emulation of the Nsynth algorithm |
Energy-based model | An energy-based model (EBM) (also called Canonical Ensemble Learning or Learning via Canonical Ensemble – CEL and LCE, respectively) is an application of canonical ensemble formulation from statistical physics for learning from data. The approach prominently appears in generative artificial intelligence.
EBMs provide a unified framework for many probabilistic and non-probabilistic approaches to such learning, particularly for training graphical and other structured models.
An EBM learns the characteristics of a target dataset and generates a similar but larger dataset. EBMs detect the latent variables of a dataset and generate new datasets with a similar distribution.
Energy-based generative neural networks are a class of generative models, which aim to learn explicit probability distributions of data in the form of energy-based models, the energy functions of which are parameterized by modern deep neural networks.
Boltzmann machines are a special form of energy-based models with a specific parametrization of the energy.
== Description ==
For a given input x, the model describes an energy E_θ(x) such that the Boltzmann distribution
{\displaystyle P_{\theta }(x)=\exp(-\beta E_{\theta }(x))/Z(\theta )}
is a probability (density), and typically β = 1.
Since the normalization constant
{\displaystyle Z(\theta ):=\int _{x\in X}\exp(-\beta E_{\theta }(x))dx}
(also known as the partition function) depends on all the Boltzmann factors of all possible inputs x, it cannot be easily computed or reliably estimated during training simply using standard maximum likelihood estimation.
However, for maximizing the likelihood during training, the gradient of the log-likelihood of a single training example x is given by using the chain rule:
{\displaystyle \partial _{\theta }\log \left(P_{\theta }(x)\right)=\mathbb {E} _{x'\sim P_{\theta }}[\partial _{\theta }E_{\theta }(x')]-\partial _{\theta }E_{\theta }(x)\,(*)}
The expectation in the above formula for the gradient can be approximately estimated by drawing samples x′ from the distribution P_θ using Markov chain Monte Carlo (MCMC).
Early energy-based models, such as the 2003 Boltzmann machine by Hinton, estimated this expectation via blocked Gibbs sampling. Newer approaches make use of more efficient Stochastic Gradient Langevin Dynamics (LD), drawing samples using:
{\displaystyle x_{0}'\sim P_{0},\quad x_{i+1}'=x_{i}'-{\frac {\alpha }{2}}{\frac {\partial E_{\theta }(x_{i}')}{\partial x_{i}'}}+\epsilon ,}
where ϵ ∼ N(0, α). A replay buffer of past values x_i′ is used with LD to initialize the optimization module.
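A minimal sketch of this Langevin update on a toy quadratic energy is given below; a real EBM would obtain the energy gradient by automatic differentiation through a neural network, and the step size, noise scale, initialization and number of steps here are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

def energy(x):            # toy energy: a quadratic bowl centred at 2
    return 0.5 * np.sum((x - 2.0) ** 2)

def energy_grad(x):       # dE/dx for the toy energy (a network would use autodiff)
    return x - 2.0

def langevin_sample(n_steps=200, alpha=0.1, dim=2):
    x = rng.normal(size=dim)                                 # x'_0 ~ P_0 (a standard normal here)
    for _ in range(n_steps):
        noise = rng.normal(scale=np.sqrt(alpha), size=dim)   # eps ~ N(0, alpha)
        x = x - 0.5 * alpha * energy_grad(x) + noise
    return x

print(langevin_sample())   # wanders around the low-energy region near (2, 2)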
The parameters θ of the neural network are therefore trained in a generative manner via MCMC-based maximum likelihood estimation: the learning process follows an "analysis by synthesis" scheme, where within each learning iteration, the algorithm samples the synthesized examples from the current model by a gradient-based MCMC method (e.g., Langevin dynamics or Hybrid Monte Carlo), and then updates the parameters θ based on the difference between the training examples and the synthesized ones – see equation (∗). This process can be interpreted as an alternating mode seeking and mode shifting process, and also has an adversarial interpretation.
Essentially, the model learns a function E_θ that associates low energies to correct values, and higher energies to incorrect values.
After training, given a converged energy model E_θ, the Metropolis–Hastings algorithm can be used to draw new samples. The acceptance probability is given by:
{\displaystyle P_{acc}(x_{i}\to x^{*})=\min \left(1,{\frac {P_{\theta }(x^{*})}{P_{\theta }(x_{i})}}\right).}
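Because the ratio P_θ(x*)/P_θ(x_i) equals exp(−β(E_θ(x*) − E_θ(x_i))), the unknown partition function cancels. The sketch below uses this with a simple symmetric Gaussian proposal, which is an illustrative choice rather than part of the method's definition.

import numpy as np

rng = np.random.default_rng(1)

def metropolis_hastings(energy, x0, n_steps=1000, proposal_std=0.5, beta=1.0):
    # Draws samples from p(x) proportional to exp(-beta * energy(x)).
    # The acceptance ratio only needs energy differences, so Z(theta) never appears.
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        x_star = x + rng.normal(scale=proposal_std, size=x.shape)  # symmetric proposal
        accept_prob = min(1.0, np.exp(-beta * (energy(x_star) - energy(x))))
        if rng.random() < accept_prob:
            x = x_star
        samples.append(x.copy())
    return np.array(samples)

# Usage with the toy quadratic energy from the Langevin sketch above:
samples = metropolis_hastings(lambda x: 0.5 * np.sum((x - 2.0) ** 2), x0=[0.0, 0.0])
print(samples[-5:])   # late samples concentrate near the minimum at (2, 2)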
== History ==
The term "energy-based models" was first coined in a 2003 JMLR paper where the authors defined a generalisation of independent components analysis to the overcomplete setting using EBMs.
Other early work on EBMs proposed models that represented energy as a composition of latent and observable variables.
== Characteristics ==
EBMs demonstrate useful properties:
Simplicity and stability–The EBM is the only object that needs to be designed and trained. Separate networks need not be trained to ensure balance.
Adaptive computation time–An EBM can generate sharp, diverse samples or (more quickly) coarse, less diverse samples. Given infinite time, this procedure produces true samples.
Flexibility–In Variational Autoencoders (VAE) and flow-based models, the generator learns a map from a continuous space to a (possibly) discontinuous space containing different data modes. EBMs can learn to assign low energies to disjoint regions (multiple modes).
Adaptive generation–EBM generators are implicitly defined by the probability distribution, and automatically adapt as the distribution changes (without training), allowing EBMs to address domains where generator training is impractical, as well as minimizing mode collapse and avoiding spurious modes from out-of-distribution samples.
Compositionality–Individual models are unnormalized probability distributions, allowing models to be combined through product of experts or other hierarchical techniques.
== Experimental results ==
On image datasets such as CIFAR-10 and ImageNet 32x32, an EBM model generated high-quality images relatively quickly. It supported combining features learned from one type of image for generating other types of images. It was able to generalize using out-of-distribution datasets, outperforming flow-based and autoregressive models. EBM was relatively resistant to adversarial perturbations, behaving better than models explicitly trained against them with training for classification.
== Applications ==
Target applications include natural language processing, robotics and computer vision.
The first energy-based generative neural network is the generative ConvNet proposed in 2016 for image patterns, where the neural network is a convolutional neural network. The model has been generalized to various domains to learn distributions of videos and 3D voxels, and has been made more effective in its variants. These models have proven useful for data generation (e.g., image synthesis, video synthesis, 3D shape synthesis), data recovery (e.g., recovering videos with missing pixels or image frames, 3D super-resolution), and data reconstruction (e.g., image reconstruction and linear interpolation).
== Alternatives ==
EBMs compete with techniques such as variational autoencoders (VAEs), generative adversarial networks (GANs) or normalizing flows.
== Extensions ==
=== Joint energy-based models ===
Joint energy-based models (JEM), proposed in 2020 by Grathwohl et al., allow any classifier with softmax output to be interpreted as an energy-based model. The key observation is that such a classifier is trained to predict the conditional probability
{\displaystyle p_{\theta }(y|x)={\frac {e^{{\vec {f}}_{\theta }(x)[y]}}{\sum _{j=1}^{K}e^{{\vec {f}}_{\theta }(x)[j]}}}\ \ {\text{ for }}y=1,\dotsc ,K{\text{ and }}{\vec {f}}_{\theta }=(f_{1},\dotsc ,f_{K})\in \mathbb {R} ^{K},}
where f_θ(x)[y] is the y-th index of the logits f_θ corresponding to class y.
Without any change to the logits it was proposed to reinterpret the logits to describe a joint probability density:
{\displaystyle p_{\theta }(y,x)={\frac {e^{{\vec {f}}_{\theta }(x)[y]}}{Z(\theta )}},}
with unknown partition function Z(θ) and energy E_θ(x, y) = −f_θ(x)[y].
By marginalization, we obtain the unnormalized density
{\displaystyle p_{\theta }(x)=\sum _{y}p_{\theta }(y,x)=\sum _{y}{\frac {e^{{\vec {f}}_{\theta }(x)[y]}}{Z(\theta )}}=:\exp(-E_{\theta }(x)),}
therefore
{\displaystyle E_{\theta }(x)=-\log \left(\sum _{y}{\frac {e^{{\vec {f}}_{\theta }(x)[y]}}{Z(\theta )}}\right),}
so that any classifier can be used to define an energy function E_θ(x).
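A small numerical illustration of the last identity: up to the additive constant log Z(θ), which does not change the induced distribution, the energy of x is the negative log-sum-exp of the classifier's logits. The logits below are made up for the example.

import numpy as np
from scipy.special import logsumexp

logits = np.array([2.0, -1.0, 0.5])              # hypothetical f_theta(x) for K = 3 classes

# Classifier view: p_theta(y | x) is the usual softmax over the logits.
p_y_given_x = np.exp(logits - logsumexp(logits))

# Energy view: E_theta(x) = -log sum_y exp(f[y])  (dropping the constant log Z(theta)).
energy_x = -logsumexp(logits)

print(p_y_given_x, energy_x)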
== See also ==
Empirical likelihood
Posterior predictive distribution
Contrastive learning
== Literature ==
Implicit Generation and Generalization in Energy-Based Models Yilun Du, Igor Mordatch https://arxiv.org/abs/1903.08689
Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One, Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, Kevin Swersky https://arxiv.org/abs/1912.03263
== References ==
== External links ==
"CIAR NCAP Summer School". www.cs.toronto.edu. Retrieved 2019-12-27.
Dayan, Peter; Hinton, Geoffrey; Neal, Radford; Zemel, Richard S. (1999), "Helmholtz Machine", Unsupervised Learning, The MIT Press, doi:10.7551/mitpress/7011.003.0017, ISBN 978-0-262-28803-3
Hinton, Geoffrey E. (August 2002). "Training Products of Experts by Minimizing Contrastive Divergence". Neural Computation. 14 (8): 1771–1800. doi:10.1162/089976602760128018. ISSN 0899-7667. PMID 12180402. S2CID 207596505.
Salakhutdinov, Ruslan; Hinton, Geoffrey (2009-04-15). "Deep Boltzmann Machines". Artificial Intelligence and Statistics: 448–455. |
NETtalk (artificial neural network) | NETtalk is an artificial neural network that learns to pronounce written English text by supervised learning. It takes English text as input, and produces a matching phonetic transcription as output.
It is the result of research carried out in the mid-1980s by Terrence Sejnowski and Charles Rosenberg. The intent behind NETtalk was to construct simplified models that might shed light on the complexity of learning human level cognitive tasks, and their implementation as a connectionist model that could also learn to perform a comparable task. The authors trained it by backpropagation.
The network was trained on a large amount of English words and their corresponding pronunciations, and is able to generate pronunciations for unseen words with a high level of accuracy. The success of the NETtalk network inspired further research in the field of pronunciation generation and speech synthesis and demonstrated the potential of neural networks for solving complex natural language processing problems. The output of the network was a stream of phonemes, which fed into DECtalk to produce audible speech. It achieved popular success, appearing on the Today show.
From the point of view of modeling human cognition, NETtalk does not specifically model the image processing stages and letter recognition of the visual cortex. Rather, it assumes that the letters have been pre-classified and recognized. It is NETtalk's task to learn proper associations between the correct pronunciation with a given sequence of letters based on the context in which the letters appear.
A similar architecture had been subsequently used for the opposite task, that of converting continuous speech signal to a phoneme sequence.
== Training ==
The training dataset was a 20,008-word subset of the Brown Corpus, with manually annotated phoneme and stress for each letter. The development process was described in a 1993 interview. It took three months (250 person-hours) to create the training dataset, but only a few days to train the network.
After it was run successfully on this, the authors tried it on a phonological transcription of an interview with a young Latino boy from a barrio in Los Angeles. This resulted in a network that reproduced his Spanish accent.
The original NETtalk was implemented on a Ridge 32, which took 0.275 seconds per learning step (one forward and one backward pass). Training NETtalk became a benchmark to test for the efficiency of backpropagation programs. For example, an implementation on Connection Machine-1 (with 16384 processors) ran at 52x speedup. An implementation on a 10-cell Warp ran at 340x speedup.
The following table compiles the benchmark scores as of 1988. Speed is measured in "millions of connections per second" (MCPS). For example, the original NETtalk on Ridge 32 took 0.275 seconds per forward-backward pass, giving
{\displaystyle {\frac {18629/10^{6}}{0.275}}=0.068} MCPS. Relative times are normalized to the MicroVax.
== Architecture ==
The network had three layers and 18,629 adjustable weights, large by the standards of 1986. There were worries that it would overfit the dataset, but it was trained successfully.
The input of the network has 203 units, divided into 7 groups of 29 units each. Each group is a one-hot encoding of one character. There are 29 possible characters: 26 letters, comma, period, and word boundary (whitespace). To produce the pronunciation of a single character, the network takes the character itself, as well as 3 characters before and 3 characters after it.
The hidden layer has 80 units.
The output has 26 units. 21 units encode for articulatory features (point of articulation, voicing, vowel height, etc.) of phonemes, and 5 units encode for stress and syllable boundaries.
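The layout can be made concrete with the following sketch, which encodes a 7-character window as a 203-dimensional one-hot vector and runs it through a randomly initialized 203-80-26 network; the ordering of the character set, the activation functions and the omission of bias terms are assumptions made only for illustration, and the trained weights are not reproduced.

import numpy as np

rng = np.random.default_rng(0)

# 29 symbols: 26 letters plus comma, period and word boundary (written "_" here).
ALPHABET = "abcdefghijklmnopqrstuvwxyz,._"
assert len(ALPHABET) == 29

def encode_window(window):
    # One-hot encode a 7-character window into a 7 * 29 = 203 dimensional vector.
    assert len(window) == 7
    x = np.zeros((7, 29))
    for pos, ch in enumerate(window):
        x[pos, ALPHABET.index(ch)] = 1.0
    return x.reshape(-1)

W1 = rng.normal(scale=0.1, size=(80, 203))   # input -> 80 hidden units
W2 = rng.normal(scale=0.1, size=(26, 80))    # hidden -> 26 outputs

def forward(window):
    h = np.tanh(W1 @ encode_window(window))
    return 1.0 / (1.0 + np.exp(-(W2 @ h)))   # 21 articulatory features + 5 stress/boundary units

print(forward("_hello_").shape)   # (26,): the code for the middle letter "l"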
Sejnowski studied the learned representation in the network, and found that phonemes that sound similar are clustered together in representation space. The output of the network degrades, but remains understandable, when some hidden neurons are removed.
== References ==
== External links ==
Original NETtalk training set
New York Times article about NETtalk |
PVLV | The primary value learned value (PVLV) model is a possible explanation for the reward-predictive firing properties of dopamine (DA) neurons. It simulates behavioral and neural data on Pavlovian conditioning and the midbrain dopaminergic neurons that fire in proportion to unexpected rewards. It is an alternative to the temporal-differences (TD) algorithm.
It is used as part of Leabra.
== References == |
Graded structure | In mathematics, the term "graded" has a number of meanings, mostly related:
In abstract algebra, it refers to a family of concepts:
An algebraic structure X is said to be I-graded for an index set I if it has a gradation or grading, i.e. a decomposition into a direct sum {\textstyle X=\bigoplus _{i\in I}X_{i}} of structures; the elements of X_i are said to be "homogeneous of degree i". The index set I is most commonly ℕ or ℤ, and may be required to have extra structure depending on the type of X. Grading by ℤ₂ (i.e. ℤ/2ℤ) is also important; see e.g. signed set (the ℤ₂-graded sets).
The trivial (ℤ- or ℕ-) gradation has X₀ = X, X_i = 0 for i ≠ 0 and a suitable trivial structure 0.
An algebraic structure is said to be doubly graded if the index set is a direct product of sets; the pairs may be called "bidegrees" (e.g. see Spectral sequence).
An I-graded vector space or graded linear space is thus a vector space with a decomposition into a direct sum {\textstyle V=\bigoplus _{i\in I}V_{i}} of spaces.
A graded linear map is a map between graded vector spaces respecting their gradations.
A graded ring is a ring that is a direct sum of additive abelian groups R_i such that R_iR_j ⊆ R_{i+j}, with i taken from some monoid, usually ℕ or ℤ, or semigroup (for a ring without identity).
The associated graded ring of a commutative ring R with respect to a proper ideal I is {\textstyle \operatorname {gr} _{I}R=\bigoplus _{n\in \mathbb {N} }I^{n}/I^{n+1}}.
A graded module is a left module M over a graded ring that is a direct sum {\textstyle \bigoplus _{i\in I}M_{i}} of modules satisfying R_iM_j ⊆ M_{i+j}.
The associated graded module of an R-module M with respect to a proper ideal I is {\textstyle \operatorname {gr} _{I}M=\bigoplus _{n\in \mathbb {N} }I^{n}M/I^{n+1}M}.
A differential graded module, differential graded ℤ-module or DG-module is a graded module M with a differential {\displaystyle d\colon M\to M\colon M_{i}\to M_{i+1}} making M a chain complex, i.e. d ∘ d = 0.
A graded algebra is an algebra A over a ring R that is graded as a ring; if R is graded we also require A_iR_j ⊆ A_{i+j} ⊇ R_iA_j.
The graded Leibniz rule for a map d: A → A on a graded algebra A specifies that {\displaystyle d(a\cdot b)=(da)\cdot b+(-1)^{|a|}a\cdot (db)}.
A differential graded algebra, DG-algebra or DGAlgebra is a graded algebra that is a differential graded module whose differential obeys the graded Leibniz rule.
A homogeneous derivation on a graded algebra A is a homogeneous linear map of grade d = |D| on A such that {\displaystyle D(ab)=D(a)b+\varepsilon ^{|a||D|}aD(b),\ \varepsilon =\pm 1} acting on homogeneous elements of A. A graded derivation is a sum of homogeneous derivations with the same ε.
A DGA is an augmented DG-algebra, or differential graded augmented algebra, (see Differential graded algebra).
A superalgebra is a ℤ₂-graded algebra. A graded-commutative superalgebra satisfies the "supercommutative" law {\displaystyle yx=(-1)^{|x||y|}xy} for homogeneous x, y, where |a| represents the "parity" of a, i.e. 0 or 1 depending on the component in which it lies.
CDGA may refer to the category of augmented differential graded commutative algebras.
A graded Lie algebra is a Lie algebra that is graded as a vector space by a gradation compatible with its Lie bracket.
A graded Lie superalgebra is a graded Lie algebra with the requirement for anticommutativity of its Lie bracket relaxed.
A supergraded Lie superalgebra is a graded Lie superalgebra with an additional super ℤ₂-gradation.
A differential graded Lie algebra is a graded vector space over a field of characteristic zero together with a bilinear map {\displaystyle [\ ,\ ]\colon L_{i}\otimes L_{j}\to L_{i+j}} and a differential {\displaystyle d\colon L_{i}\to L_{i-1}} satisfying {\displaystyle [x,y]=(-1)^{|x||y|+1}[y,x]} for any homogeneous elements x, y in L, the "graded Jacobi identity" and the graded Leibniz rule.
The Graded Brauer group is a synonym for the Brauer–Wall group BW(F) classifying finite-dimensional graded central division algebras over the field F.
An 𝒜-graded category for a category 𝒜 is a category 𝒞 together with a functor F: 𝒞 → 𝒜.
A differential graded category or DG category is a category whose morphism sets form differential graded ℤ-modules.
Graded manifold – extension of the manifold concept based on ideas coming from supersymmetry and supercommutative algebra, including sections on
Graded function
Graded vector fields
Graded exterior forms
Graded differential geometry
Graded differential calculus
In other areas of mathematics:
Functionally graded elements are used in finite element analysis.
A graded poset is a poset P with a rank function ρ: P → ℕ compatible with the ordering (i.e. ρ(x) < ρ(y) ⟹ x < y) such that y covers x ⟹ ρ(y) = ρ(x) + 1. |
Link-centric preferential attachment | In mathematical modeling of social networks, link-centric preferential attachment
is a node's propensity to re-establish links to nodes it has previously been in contact with in time-varying networks. This preferential attachment model relies on nodes keeping memory of previous neighbors up to the current time.
== Background ==
In real social networks individuals exhibit a tendency to re-connect with past contacts (ex. family, friends, co-workers, etc.) rather than strangers. In 1970, Mark Granovetter examined this behaviour in the social networks of a group of workers and identified tie strength, a characteristic of social ties describing the frequency of contact between two individuals. From this comes the idea of strong and weak ties, where an individual's strong ties are those she has come into frequent contact with. Link-centric preferential attachment aims to explain the mechanism behind strong and weak ties as a stochastic reinforcement process for old ties in agent-based modeling where nodes have long-term memory.
== Examples ==
In a simple model for this mechanism, a node's propensity to establish a new link can be characterized solely by n, the number of contacts it has had in the past. The probability for a node with n social ties to establish a new social tie could then be simply given by
{\displaystyle P(n)={c \over n+c}}
where c is an offset constant. The probability for a node to re-connect with old ties is then
{\displaystyle 1-P(n)={n \over n+c}.}
Figure 1. shows an example of this process: in the first step nodes A and C connect to node B, giving B a total of two social ties. With c = 1, in the next step B has a probability P(2) = 1/(2 + 1) = 1/3 to create a new tie with D, whereas the probability to reconnect with A or C is twice that at 2/3.
More complex models may take into account other variables, such as frequency of contact, contact and intercontact duration, as well as short term memory effects.
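A toy agent-based sketch of the simple reinforcement rule above, for a single focal node, is given below; the population size, the offset c, the number of steps and the uniform choice among old ties are arbitrary modeling assumptions made only for illustration.

import random

random.seed(42)

def simulate_focal_node(c=1.0, n_steps=200, population=1000):
    # Each step the focal node either meets a stranger with probability c/(n+c),
    # or re-contacts one of its previous neighbours chosen uniformly at random.
    neighbours = []     # distinct past contacts of the focal node
    contacts = []       # full contact sequence
    for _ in range(n_steps):
        n = len(neighbours)
        if random.random() < c / (n + c):        # establish a new tie
            new = random.randrange(population)
            neighbours.append(new)
            contacts.append(new)
        else:                                     # re-connect with an old tie
            contacts.append(random.choice(neighbours))
    return neighbours, contacts

neighbours, contacts = simulate_focal_node()
print(len(neighbours), "distinct ties out of", len(contacts), "contacts")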
== Effects on the spreading of contagions / weakness of strong ties ==
Understanding the evolution of a network's structure and how it can influence dynamical processes has become an important part of modeling the spreading of contagions. In models of social and biological contagion spreading on time-varying networks, link-centric preferential attachment can alter the spread of the contagion to the entire population. Compared to the classic rumour spreading process where nodes are memory-less, link-centric preferential attachment can cause not only a slower spread of the contagion but also one less diffuse. In these models an infected node's chances of connecting to new contacts diminish as the size of its social circle n grows, leading to a limiting effect on the growth of n. The result is strong ties with a node's early contacts and consequently the weakening of the diffusion of the contagion.
== See also ==
BA model
Network science
Interpersonal tie
== References == |
One-way analysis of variance | In statistics, one-way analysis of variance (or one-way ANOVA) is a technique to compare whether two or more samples' means are significantly different (using the F distribution). This analysis of variance technique requires a numeric response variable "Y" and a single explanatory variable "X", hence "one-way".
The ANOVA tests the null hypothesis, which states that samples in all groups are drawn from populations with the same mean values. To do this, two estimates are made of the population variance. These estimates rely on various assumptions (see below). The ANOVA produces an F-statistic, the ratio of the variance calculated among the means to the variance within the samples. If the group means are drawn from populations with the same mean values, the variance between the group means should be lower than the variance of the samples, following the central limit theorem. A higher ratio therefore implies that the samples were drawn from populations with different mean values.
Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test (Gosset, 1908). When there are only two means to compare, the t-test and the F-test are equivalent; the relation between ANOVA and t is given by F = t2. An extension of one-way ANOVA is two-way analysis of variance that examines the influence of two different categorical independent variables on one dependent variable.
== Assumptions ==
The results of a one-way ANOVA can be considered reliable as long as the following assumptions are met:
Response variable residuals are normally distributed (or approximately normally distributed).
Variances of populations are equal.
Responses for a given group are independent and identically distributed normal random variables (not a simple random sample (SRS)).
If data are ordinal, a non-parametric alternative to this test should be used such as Kruskal–Wallis one-way analysis of variance. If the variances are not known to be equal, a generalization of 2-sample Welch's t-test can be used.
=== Departures from population normality ===
ANOVA is a relatively robust procedure with respect to violations of the normality assumption.
The one-way ANOVA can be generalized to the factorial and multivariate layouts, as well as to the analysis of covariance.
It is often stated in popular literature that none of these F-tests are robust when there are severe violations of the assumption that each population follows the normal distribution, particularly for small alpha levels and unbalanced layouts. Furthermore, it is also claimed that if the underlying assumption of homoscedasticity is violated, the Type I error properties degenerate much more severely.
However, this is a misconception, based on work done in the 1950s and earlier. The first comprehensive investigation of the issue by Monte Carlo simulation was Donaldson (1966). He showed that under the usual departures (positive skew, unequal variances) "the F-test is conservative", and so it is less likely than it should be to find that a variable is significant. However, as either the sample size or the number of cells increases, "the power curves seem to converge to that based on the normal distribution". Tiku (1971) found that "the non-normal theory power of F is found to differ from the normal theory power by a correction term which decreases sharply with increasing sample size." The problem of non-normality, especially in large samples, is far less serious than popular articles would suggest.
The current view is that "Monte-Carlo studies were used extensively with normal distribution-based tests to determine how sensitive they are to violations of the assumption of normal distribution of the analyzed variables in the population. The general conclusion from these studies is that the consequences of such violations are less severe than previously thought. Although these conclusions should not entirely discourage anyone from being concerned about the normality assumption, they have increased the overall popularity of the distribution-dependent statistical tests in all areas of research."
For nonparametric alternatives in the factorial layout, see Sawilowsky. For more discussion see ANOVA on ranks.
== The case of fixed effects, fully randomized experiment, unbalanced data ==
=== The model ===
The normal linear model describes treatment groups with probability
distributions which are identically bell-shaped (normal) curves with
different means. Thus fitting the models requires only the means of
each treatment group and a variance calculation (an average variance
within the treatment groups is used). Calculations of the means and
the variance are performed as part of the hypothesis test.
The commonly used normal linear models for a completely
randomized experiment are:
{\displaystyle y_{i,j}=\mu _{j}+\varepsilon _{i,j}} (the means model)
or
{\displaystyle y_{i,j}=\mu +\tau _{j}+\varepsilon _{i,j}} (the effects model)
where
i = 1, …, I is an index over experimental units,
j = 1, …, J is an index over treatment groups,
I_j is the number of experimental units in the jth treatment group,
I = Σ_j I_j is the total number of experimental units,
y_{i,j} are observations,
μ_j is the mean of the observations for the jth treatment group,
μ is the grand mean of the observations,
τ_j is the jth treatment effect, a deviation from the grand mean, with Σ τ_j = 0 and μ_j = μ + τ_j,
ε ∼ N(0, σ²), i.e. the ε_{i,j} are normally distributed zero-mean random errors.
The index i over the experimental units can be interpreted several ways. In some experiments, the same experimental unit is subject to a range of treatments; i may point to a particular unit. In others, each treatment group has a distinct set of experimental units; i may simply be an index into the j-th list.
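A small simulation sketch of the effects model for an unbalanced design is given below; the group sizes, treatment effects and error variance are invented for illustration. It also illustrates the point made in the next subsection that the grand mean is computed from the grand sum (weighted by group sizes) rather than by averaging the group means.

import numpy as np

rng = np.random.default_rng(0)

mu = 10.0                                   # grand mean
tau = np.array([-2.0, 0.5, 1.5])            # treatment effects, sum(tau) = 0
sizes = [4, 7, 5]                           # I_j: an unbalanced design
sigma = 1.0                                 # error standard deviation

# Draw y_ij = mu + tau_j + eps_ij for each group j.
groups = [mu + t + rng.normal(0.0, sigma, size=n) for t, n in zip(tau, sizes)]

group_means = [g.mean() for g in groups]          # estimate mu_j = mu + tau_j
grand_mean = np.concatenate(groups).mean()        # computed from the grand sum
print(group_means, grand_mean)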
=== The data and statistical summaries of the data ===
One form of organizing experimental observations y_{ij} is with groups in columns:
Comparing model to summaries: μ = m and μ_j = m_j. The grand mean and grand variance are computed from the grand sums, not from group means and variances.
=== The hypothesis test ===
Given the summary statistics, the calculations of the hypothesis test
are shown in tabular form. While two columns of SS are shown for their
explanatory value, only one column is required to display results.
MS_{Error} is the estimate of variance corresponding to σ² of the model.
=== Analysis summary ===
The core ANOVA analysis consists of a series of calculations. The
data is collected in tabular form. Then
Each treatment group is summarized by the number of experimental units, two sums, a mean and a variance. The treatment group summaries are combined to provide totals for the number of units and the sums. The grand mean and grand variance are computed from the grand sums. The treatment and grand means are used in the model.
The three DFs and SSs are calculated from the summaries. Then the MSs are calculated and a ratio determines F.
A computer typically determines a p-value from F which determines whether treatments produce significantly different results. If the result is significant, then the model provisionally has validity.
If the experiment is balanced, all of the I_j terms are equal so the SS equations simplify.
In a more complex experiment, where the experimental units (or environmental effects) are not homogeneous, row statistics are also used in the analysis. The model includes terms dependent on i. Determining the extra terms reduces the number of degrees of freedom available.
== Example ==
Consider an experiment to study the effect of three different levels of a factor on a response (e.g. three levels of a fertilizer on plant growth). If we had 6 observations for each level, we could write the outcome of the experiment in a table like this, where a1, a2, and a3 are the three levels of the factor being studied.
a1: 6, 8, 4, 5, 3, 4
a2: 8, 12, 9, 11, 6, 8
a3: 13, 9, 11, 8, 7, 12
The null hypothesis, denoted H0, for the overall F-test for this experiment would be that all three levels of the factor produce the same response, on average. To calculate the F-ratio:
Step 1: Calculate the mean within each group:
{\displaystyle {\begin{aligned}{\overline {Y}}_{1}&={\frac {1}{6}}\sum Y_{1i}={\frac {6+8+4+5+3+4}{6}}=5\\{\overline {Y}}_{2}&={\frac {1}{6}}\sum Y_{2i}={\frac {8+12+9+11+6+8}{6}}=9\\{\overline {Y}}_{3}&={\frac {1}{6}}\sum Y_{3i}={\frac {13+9+11+8+7+12}{6}}=10\end{aligned}}}
Step 2: Calculate the overall mean:
{\displaystyle {\overline {Y}}={\frac {\sum _{i}{\overline {Y}}_{i}}{a}}={\frac {{\overline {Y}}_{1}+{\overline {Y}}_{2}+{\overline {Y}}_{3}}{a}}={\frac {5+9+10}{3}}=8}
where a is the number of groups.
Step 3: Calculate the "between-group" sum of squared differences:
{\displaystyle {\begin{aligned}S_{B}&=n({\overline {Y}}_{1}-{\overline {Y}})^{2}+n({\overline {Y}}_{2}-{\overline {Y}})^{2}+n({\overline {Y}}_{3}-{\overline {Y}})^{2}\\[8pt]&=6(5-8)^{2}+6(9-8)^{2}+6(10-8)^{2}=84\end{aligned}}}
where n is the number of data values per group.
The between-group degrees of freedom is one less than the number of groups, f_b = 3 − 1 = 2, so the between-group mean square value is MS_B = 84/2 = 42.
Step 4: Calculate the "within-group" sum of squares. Begin by centering the data in each group:
a1: 1, 3, −1, 0, −2, −1
a2: −1, 3, 0, 2, −3, −1
a3: 3, −1, 1, −2, −3, 2
The within-group sum of squares is the sum of squares of all 18 values in this table:
{\displaystyle {\begin{aligned}S_{W}=&(1)^{2}+(3)^{2}+(-1)^{2}+(0)^{2}+(-2)^{2}+(-1)^{2}+\\&(-1)^{2}+(3)^{2}+(0)^{2}+(2)^{2}+(-3)^{2}+(-1)^{2}+\\&(3)^{2}+(-1)^{2}+(1)^{2}+(-2)^{2}+(-3)^{2}+(2)^{2}\\=&\ 1+9+1+0+4+1+1+9+0+4+9+1+9+1+1+4+9+4\\=&\ 68\\\end{aligned}}}
The within-group degrees of freedom is
{\displaystyle f_{W}=a(n-1)=3(6-1)=15}
Thus the within-group mean square value is
{\displaystyle MS_{W}=S_{W}/f_{W}=68/15\approx 4.5}
Step 5: The F-ratio is
{\displaystyle F={\frac {MS_{B}}{MS_{W}}}\approx 42/4.5\approx 9.3}
The critical value is the number that the test statistic must exceed to reject the test. In this case, Fcrit(2,15) = 3.68 at α = 0.05. Since F=9.3 > 3.68, the results are significant at the 5% significance level. One would not accept the null hypothesis, concluding that there is strong evidence that the expected values in the three groups differ. The p-value for this test is 0.002.
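The same F-statistic and p-value can be reproduced with scipy's one-way ANOVA routine; the three lists below are the example data used above.

from scipy import stats

a1 = [6, 8, 4, 5, 3, 4]
a2 = [8, 12, 9, 11, 6, 8]
a3 = [13, 9, 11, 8, 7, 12]

f_stat, p_value = stats.f_oneway(a1, a2, a3)
print(f_stat, p_value)   # F is approximately 9.3 and p is approximately 0.002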
After performing the F-test, it is common to carry out some "post-hoc" analysis of the group means. In this case, the first two group means differ by 4 units, the first and third group means differ by 5 units, and the second and third group means differ by only 1 unit. The standard error of each of these differences is
{\displaystyle {\sqrt {4.5/6+4.5/6}}=1.2}. Thus the first group is strongly different from the other groups, as the mean difference is more than 3 times the standard error, so we can be highly confident that the population mean of the first group differs from the population means of the other groups. However, there is no evidence that the second and third groups have different population means from each other, as their mean difference of one unit is comparable to the standard error.
Note F(x, y) denotes an F-distribution cumulative distribution function with x degrees of freedom in the numerator and y degrees of freedom in the denominator.
== See also ==
Analysis of variance
F test (Includes a one-way ANOVA example)
Mixed model
Multivariate analysis of variance (MANOVA)
Repeated measures ANOVA
Two-way ANOVA
Welch's t-test
== Notes ==
== Further reading ==
George Casella (18 April 2008). Statistical design. Springer. ISBN 978-0-387-75965-4. |
Wikipedia Machine Learning Corpus (wiki-ml-corpus)
A curated dataset of over 100 Wikipedia articles related to Machine Learning, Statistics, Probability, Data Science, and Deep Learning.
This dataset is designed for use in:
- NLP tasks like summarization, QA, and topic modeling
- ML interview prep and curriculum design
- Ontology-driven QA systems and SPARQL-based pipelines
- Building structured knowledge graphs from unstructured text
Dataset Structure
Each example in the dataset is a JSON object with the following fields:
{
"title": "Linear regression",
"text": "Linear regression is a linear approach to modeling the relationship between a scalar response and one or more explanatory variables..."
}