url | text | metadata
---|---|---
http://blekko.com/wiki/Orthocenter?source=672620ff | # Altitude (triangle)
From Wikipedia, the free encyclopedia
(Redirected from Orthocenter)
Three altitudes intersecting at the orthocenter
In geometry, an altitude of a triangle is a straight line through a vertex and perpendicular to (i.e. forming a right angle with) a line containing the base (the opposite side of the triangle). This line containing the opposite side is called the extended base of the altitude. The intersection between the extended base and the altitude is called the foot of the altitude. The length of the altitude, often simply called the altitude, is the distance between the base and the vertex. The process of drawing the altitude from the vertex to the foot is known as dropping the altitude of that vertex. It is a special case of orthogonal projection.
Altitudes can be used to compute the area of a triangle: one half of the product of an altitude's length and its base's length equals the triangle's area. Thus the longest altitude is perpendicular to the shortest side of the triangle. The altitudes are also related to the sides of the triangle through the trigonometric functions.
In an isosceles triangle (a triangle with two congruent sides), the altitude having the incongruent side as its base has the midpoint of that side as its foot. This altitude also bisects the angle at the vertex from which it is drawn.
It is common to mark the altitude with the letter h (as in height).
In a right triangle, the altitude with the hypotenuse as base divides the hypotenuse into two lengths p and q. If we denote the length of the altitude by h, we then have the relation
$h=\sqrt{pq}$.
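For a quick numerical illustration (added here, not part of the original article), the 3-4-5 right triangle has an altitude of 2.4 onto its hypotenuse, with hypotenuse segments p = 1.8 and q = 3.2, consistent with the relation above:

```python
import math

# Example right triangle with legs 3 and 4 (illustrative values, not from the article).
a, b = 3.0, 4.0
c = math.hypot(a, b)          # hypotenuse = 5
h = a * b / c                 # altitude onto the hypotenuse = 2.4
p, q = a**2 / c, b**2 / c     # the two segments the foot cuts the hypotenuse into
assert math.isclose(h, math.sqrt(p * q))   # geometric-mean relation h = sqrt(p*q)
```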
## The orthocenter
The three altitudes intersect in a single point, called the orthocenter of the triangle. The orthocenter lies inside the triangle if and only if the triangle is acute (i.e. does not have an angle greater than or equal to a right angle). See also orthocentric system. If one angle is a right angle, the orthocenter coincides with the vertex of the right angle. Thus for acute and right triangles the feet of the altitudes all fall on the triangle.
The orthocenter, along with the centroid, circumcenter and center of the nine-point circle all lie on a single line, known as the Euler line. The center of the nine-point circle lies at the midpoint between the orthocenter and the circumcenter, and the distance between the centroid and the circumcenter is half that between the centroid and the orthocenter.
The isogonal conjugate and also the complement of the orthocenter is the circumcenter.
Four points in the plane such that one of them is the orthocenter of the triangle formed by the other three are called an orthocentric system or orthocentric quadrangle.
Let A, B, C denote the angles of the reference triangle, and let a = |BC|, b = |CA|, c = |AB| be the sidelengths. The orthocenter has trilinear coordinates sec A : sec B : sec C and barycentric coordinates
$\displaystyle ((a^2+b^2-c^2)(a^2-b^2+c^2) : (a^2+b^2-c^2)(-a^2+b^2+c^2) : (a^2-b^2+c^2)(-a^2+b^2+c^2)).$
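These coordinates can be checked numerically. The sketch below (an editorial illustration with an arbitrarily chosen triangle, not material from the article) converts the barycentric weights to Cartesian coordinates and verifies that each segment from a vertex to the resulting point is perpendicular to the opposite side:

```python
import numpy as np

# Arbitrary example triangle (vertices chosen only for illustration).
A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)

# Barycentric weights of the orthocenter, as given above.
wA = (a**2 + b**2 - c**2) * (a**2 - b**2 + c**2)
wB = (a**2 + b**2 - c**2) * (-a**2 + b**2 + c**2)
wC = (a**2 - b**2 + c**2) * (-a**2 + b**2 + c**2)
H = (wA * A + wB * B + wC * C) / (wA + wB + wC)   # here H = (1, 1)

# Each vertex-to-H segment is perpendicular to the opposite side.
assert abs(np.dot(H - A, B - C)) < 1e-9
assert abs(np.dot(H - B, C - A)) < 1e-9
assert abs(np.dot(H - C, A - B)) < 1e-9
```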
Denote the vertices of a triangle as A, B, and C and the orthocenter as H, and let D, E, and F denote the feet of the altitudes from A, B, and C respectively. Then:
• The ratio of the orthocenter's distance from the base to the length of the altitude, summed over the three altitudes, equals 1:[1]
$\frac{HD}{AD} + \frac{HE}{BE} + \frac{HF}{CF} = 1.$
• The ratio of the orthocenter's distance from the vertex to the length of the altitude, summed over the three altitudes, equals 2:[1]
$\frac{AH}{AD} + \frac{BH}{BE} + \frac{CH}{CF} = 2.$
• The product of the lengths of the segments that the orthocenter divides an altitude into is the same for all three altitudes:[2]
$AH \cdot HD = BH \cdot HE = CH \cdot HF.$
• If any altitude, say AD, is extended to intersect the circumcircle at P, so that AP is a chord of the circumcircle, then the foot D bisects segment HP:[2]
$HD = DP.$
Denote the orthocenter of triangle ABC as H, denote the sidelengths as a, b, and c, and denote the circumradius of the triangle as R. Then[3]
$a^2+b^2+c^2+AH^2+BH^2+CH^2 = 12R^2.$
In addition, denoting r as the radius of the triangle's incircle, ra, rb, and rc as the radii of its excircles, and R again as the radius of its circumcircle, the following relations hold regarding the distances of the orthocenter from the vertices:[4]
$r_a+r_b+r_c+r=AH+BH+CH+2R,$
$r_a^2+r_b^2+r_c^2+r^2=AH^2+BH^2+CH^2+(2R)^2.$
## Orthic triangle
Triangle abc is the orthic triangle of triangle ABC
If the triangle ABC is oblique (not right-angled), the points of intersection of the altitudes with the sides of the triangle form another triangle, A'B'C', called the orthic triangle or altitude triangle. It is the pedal triangle of the orthocenter of the original triangle. Also, the incenter (that is, the center for the inscribed circle) of the orthic triangle is the orthocenter of the original triangle.[5]
The orthic triangle is closely related to the tangential triangle, constructed as follows: let LA be the line tangent to the circumcircle of triangle ABC at vertex A, and define LB and LC analogously. Let A" = LB ∩ LC, B" = LC ∩ LA, C" = LA ∩ LB. The tangential triangle, A"B"C", is homothetic to the orthic triangle.
The orthic triangle provides the solution to Fagnano's problem, posed in 1775, of finding the minimum-perimeter triangle inscribed in a given acute-angled triangle.
The orthic triangle of an acute triangle gives a triangular light route.[6]
Trilinear coordinates for the vertices of the orthic triangle are given by
• A' = 0 : sec B : sec C
• B' = sec A : 0 : sec C
• C' = sec A : sec B : 0
Trilinear coordinates for the vertices of the tangential triangle are given by
• A" = −a : b : c
• B" = a : −b : c
• C" = a : b : −c
## Some additional altitude theorems
### Altitude in terms of the sides
For any triangle with sides a, b, c and semiperimeter s = (a+b+c) / 2, the altitude from side a is given by
$h_a=\frac{2\sqrt{s(s-a)(s-b)(s-c)}}{a}.$
This follows from combining Heron's formula for the area of a triangle in terms of the sides with the area formula (1/2)×base×height, where the base is taken as side a and the height is the altitude from a.
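As a concrete check (an added illustration; the 13-14-15 triangle below is a standard textbook example, not one used in the article):

```python
import math

def altitude_from_sides(a: float, b: float, c: float) -> float:
    """Altitude onto side a, computed via Heron's formula for the area."""
    s = (a + b + c) / 2                              # semiperimeter
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    return 2 * area / a                              # because area = (1/2) * a * h_a

# The 13-14-15 triangle has area 84, so the altitude onto the side of length 14 is 12.
print(altitude_from_sides(14, 13, 15))               # -> 12.0
```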
### Equilateral triangle theorem
For any point P within an equilateral triangle, the sum of the perpendiculars to the three sides is equal to the altitude of the triangle. This is Viviani's theorem.
### Inradius theorems
Consider an arbitrary triangle with sides a, b, c and with corresponding altitudes α, β, η. The altitudes and incircle radius r are related by
$\displaystyle \frac{1}{r}=\frac{1}{\alpha}+\frac{1}{\beta}+\frac{1}{\eta}.$
Let c, h, s be the sides of three squares associated with the right triangle: the square on the hypotenuse, and the triangle's two inscribed squares respectively. The sides of these squares (c>h>s) and the incircle radius r are related by a similar formula:
$\displaystyle \frac{1}{r}=-{\frac{1}{c}}+\frac{1}{h}+\frac{1}{s}.$
### Relation among altitudes of right triangle
In a right triangle the three altitudes α, β, η (the first two of which coincide with the legs b and a respectively) are related according to[7][8]
$\frac{1}{\alpha ^2}+\frac{1}{\beta ^2}=\frac{1}{\eta ^2}.$
### Relation among sides of squares on and in a right triangle
Also in the case of the right triangle, the sides c, h, s of the three above-mentioned squares are related to each other by the symphonic theorem, which states that[9]
$\frac{1}{c^2}+\frac{1}{h^2}=\frac{1}{s^2}.$
### Area theorem
Denoting the altitudes of any triangle from sides a, b, and c respectively as $h_a$, $h_b$, and $h_c$,and denoting the semi-sum of the reciprocals of the altitudes as $H = (h_a^{-1} + h_b^{-1} + h_c^{-1})/2$ we have[10]
$\mathrm{Area}^{-1} = 4 \sqrt{H(H-h_a^{-1})(H-h_b^{-1})(H-h_c^{-1})}.$
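A short numerical sanity check of this reciprocal Heron-type formula (an added illustration using the 3-4-5 right triangle, whose area is 6 and whose altitudes are 4, 3, and 2.4):

```python
import math

h = [4.0, 3.0, 2.4]                        # altitudes of the 3-4-5 triangle
inv = [1.0 / x for x in h]                 # reciprocals of the altitudes
H = sum(inv) / 2                           # semi-sum of the reciprocals
inv_area = 4 * math.sqrt(H * (H - inv[0]) * (H - inv[1]) * (H - inv[2]))
print(1 / inv_area)                        # -> 6.0, the triangle's area
```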
## In-line references
1. ^ a b
2. ^ a b "Orthocenter of a triangle"
3. ^ Weisstein, Eric W. "Orthocenter." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/Orthocenter.html
4. ^ Bell, Amy, "Hansen’s right triangle theorem, its converse and a generalization", Forum Geometricorum 6, 2006, 335–342.
5. ^ William H. Barker, Roger Howe (2007). "§ VI.2: The classical coincidences". Continuous symmetry: from Euclid to Klein. American Mathematical Society Bookstore. p. 292. ISBN 0-8218-3900-4. See also: Corollary 5.5, p. 318.
6. ^ Bryant, V., and Bradley, H., "Triangular Light Routes," Mathematical Gazette 82, July 1998, 298-299.
7. ^ Voles, Roger, "Integer solutions of $a^{-2}+b^{-2}=d^{-2}$," Mathematical Gazette 83, July 1999, 269–271.
8. ^ Richinick, Jennifer, "The upside-down Pythagorean Theorem," Mathematical Gazette 92, July 2008, 313–317.
9. ^ Price, H. Lee and Bernhart, Frank R., "Pythagorean Triples and a New Pythagorean Theorem" (2007), arXiv 0701554.
10. ^ Mitchell, Douglas W., "A Heron-type formula for the reciprocal area of a triangle," Mathematical Gazette 89, November 2005, 494. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 20, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9166755676269531, "perplexity": 725.4670677343113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00227-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://quant.stackexchange.com/questions/9051/examples-of-spectral-risk-measures/45316 | # Examples of Spectral Risk Measures
Let's take the usual definition of a spectral risk measure.
If we look at the integral we see that spectral risk measures have the property that the risk measure of a random variable $X$ can be represented by a combination of the quantiles of $X$.
Since the quantile function is rather friendly one gets that every spectral risk measure is also a coherent risk measure.
Examples are the expected value and the expected shortfall (CVaR). In those cases, the spectral representation yields a very convenient way to approximate the measure by simply weighing the quantiles of our dataset. That yields the following questions:
Are there any other known measures that have a spectral representation? If we relax the assumptions on the spectrum $\phi$, can we obtain (approximative sequences of) other (possibly non-coherent) risk measures?
EDIT: In reaction to the comment by @Joshua Ulrich I want to provide an example of what I want to achieve and some more details.
• Example: The Conditional Value at Risk. We have the following formula: $\text{CVaR}_\alpha(X) = -\frac{1}{\alpha}\int_0^{\alpha}F^{-1}_X(p)dp$. From a sample $X_i$, $i=1,\ldots,N$, we can approximate the CVaR by averaging the order statistics that fall in the $\alpha$-tail of the sample (with the sign convention of the formula above). We can see that this measure has a spectral representation with $\phi(p) = \frac{1}{\alpha}$ for $p \in [0,\alpha]$ and $\phi(p) = 0$ for $p \in (\alpha, 1]$. So it's easy to check: the CVaR is a spectral measure.
Obviously, the "order statistics + weighted average" procedure does not only work for the CVaR, it works for all spectral measures: From the definition of spectral measure we see that, after discretizing the integral, we have an approximation of the measure that is a linear combination of quantiles which is very easy to compute.
In fact it's so easy that I would like to compute as many risk measures as possible this way (very easy if you do Monte Carlo or scenarios, for example). For the computation only, I don't need all the assumptions about $\phi$, so let's forget about them for a moment and see what else we can calculate this way.
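To make the "order statistics + weighted average" procedure concrete, here is a minimal sketch (added for illustration, not part of the original question) that discretizes the spectrum $\phi$ over the empirical quantiles and recovers CVaR as a special case, using the sign convention $\rho(X) = -\int_0^1 \phi(p)F_X^{-1}(p)\,dp$ consistent with the CVaR formula above:

```python
import numpy as np

def spectral_risk(sample, phi):
    """Estimate rho(X) = -integral of phi(p) * F^{-1}(p) dp by discretizing
    the integral over the order statistics at p_i = (i - 0.5) / N."""
    x = np.sort(np.asarray(sample))            # order statistics ~ empirical quantiles
    n = len(x)
    p = (np.arange(n) + 0.5) / n
    return -np.sum(phi(p) * x) / n             # discretized spectrum as weights

# CVaR at level alpha: phi(p) = 1/alpha on [0, alpha] and 0 elsewhere.
alpha = 0.05
cvar_phi = lambda p: np.where(p <= alpha, 1.0 / alpha, 0.0)

rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1.0, size=100_000)       # toy P&L sample
print(spectral_risk(pnl, cvar_phi))            # roughly 2.06 for a standard normal at 5%
```

Dropping the usual requirements on $\phi$ (nonnegative, integrating to one, non-increasing) in this estimator still gives a law-invariant, comonotonically additive quantile-weighting functional, but it is generally no longer coherent, which is the trade-off the question is probing.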
• This is a bit broad, and could lead to list-like answers. Could you provide more details on what you're actually trying to accomplish? Oct 20, 2013 at 12:32
• Well, I am not sure I understand what you want, but what is clear is that $CVAR_{\alpha}$ works as $\Phi(p)=\frac{1}{\alpha} \cdot 1_{p \in [0,\alpha]}$, which is a probability density. So you can build any other risk measure by choosing for $\Phi$ any density on the compact set $[0,1]$, and by scaling on a compact set. I would try a "hat function" first, then sinusoids... Oct 21, 2013 at 12:20
• Quite a few risk measures seem unlikely to have a representation like this, since their units differ. For example, annualized volatility. Some risk measures, such as scenario playbacks of the '97 Asian crisis, will trivially have a spectral representation but not in any computationally useful way. Oct 21, 2013 at 13:38
• @statquant Well, the aim is not to try a different spectrum $\phi$ but rather to identify the spectra of other well known risk measures. Brian B: I don't understand your comment about units. The units are irrelevant for a risk measure as far as the spectral property is concerned, right? Oct 23, 2013 at 6:47
I believe that Prospect Theory (as defined by Kahneman and Tversky) implicitly makes use of spectral risk measures. Though I am not able to find any literature linking the two, I think there is a clear link between the intuitions regarding loss aversion. The key difference is that spectral risk measures are normative; we assume that the utility function is known. Prospect Theory, on the other hand, is inherently descriptive (i.e., reflects observed behaviors). Also, I am aware that spectral risk measures are extended to portfolio risk, while Prospect Theoretic measures deal with generic utility.
source: Wikipedia. Prospect Theory
Again, while I haven't seen any literature on the topic, it would be interesting if someone were to show that Prospect Theoretic risk measures (which are typified by Exhibit A) meet the coherence standards for a spectral risk measure given by:
$\rho : \mathcal{L} \to \mathbb{R}$ satisfies:
• Positive Homogeneity: for every portfolio X and every $\lambda > 0$, $\rho(\lambda X) = \lambda \rho(X)$;
• Translation-Invariance: for every portfolio X and every $a \in \mathbb{R}$, $\rho(X + a) = \rho(X) - a$;
• Monotonicity: for all portfolios X and Y such that $X \geq Y$, $\rho(X) \leq \rho(Y)$;
• Sub-additivity: for all portfolios X and Y, $\rho(X + Y) \leq \rho(X) + \rho(Y)$;
• Law-Invariance: for all portfolios X and Y with cumulative distribution functions $F_X$ and $F_Y$ respectively, if $F_X = F_Y$ then $\rho(X) = \rho(Y)$;
• Comonotonic Additivity: for every pair of comonotonic random variables X and Y, $\rho(X + Y) = \rho(X) + \rho(Y)$. Note that X and Y are comonotonic if for every $\omega_1, \omega_2 \in \Omega$: $(X(\omega_2) - X(\omega_1))(Y(\omega_2) - Y(\omega_1)) \geq 0$.
• Interesting thought but that's kind of the sad story about prospect theory. It does not fulfill the necessary requirements that we have in our models. Clearly, the function above is not subadditive. Take X and -X as an example. Mar 2, 2018 at 12:26
• @vanguard2k I think the function in the illustration is for a single asset, so it really doesn't say anything about how things behave in a portfolio. Regardless, my point was that if you merge the intuitions of behavioral finance into a coherent risk measure, you get something that looks a lot like a spectral risk measure. Mar 3, 2018 at 18:16
As far as I know, a spectral risk measure is a kind of measure developed from CVaR (a weighted average of VaR) within the framework of coherent risk measures.
If you can prove that a risk measure is coherent, then you can apply any admissible weighting function $\phi$ to make it a spectral risk measure. The underlying idea is that the sum of any number of coherent risk measures is also a coherent risk measure.
• Utility functions just make them reflect attitude to risk of investors and sound more like behavioural finance. Apr 26, 2019 at 9:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9420799612998962, "perplexity": 458.6905733390614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00306.warc.gz"} |
https://www.homebuiltairplanes.com/forums/members/bifft.9807/ |
• Thanks for letting me know about that. Since I'm using an RV component, I have been keeping up with all relevant issues/SBs etc. for the tailplane I am using so I was aware of it and will be incorporating the appropriate mod as required. I appreciate you taking the time to let me know. That's very kind of you.
Cheers,
Dave | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8582687377929688, "perplexity": 1175.978627534187}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738653.47/warc/CC-MAIN-20200810072511-20200810102511-00336.warc.gz"} |
https://www.physicsforums.com/threads/need-some-help-answering-this-problem-thanks.303608/ | # Need some help answering this problem thanks
1. Mar 29, 2009
### nsingh947
hi just wondering how you would solve this problem
A 3 kg toy car with a speed of 6 m/s collides head on with a 2kg car traveling in the opposite direction with a speed of 4 m/s. If the cars are locked together after the collision with a speed of 2 m/s, how much kinetic energy is lost?
2. Mar 29, 2009
### rock.freak667
Find the kinetic energies before impact. Then find the kinetic energy when the two stick together after impact. Find the difference.
3. Mar 29, 2009
### danb
Momentum is conserved, and kinetic energy is easy to calculate for each object. Incidentally, the problem is overspecified. They could have asked you to calculate the final speed from the masses and initial speeds.
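A quick numeric check of the suggested approach (added for illustration; the thread itself stops at the hints):

```python
# Perfectly inelastic collision from the problem statement.
m1, v1 = 3.0, 6.0        # 3 kg car moving at 6 m/s
m2, v2 = 2.0, -4.0       # 2 kg car at 4 m/s in the opposite direction
v_final = 2.0            # given speed of the locked-together cars

ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2   # 54 J + 16 J = 70 J
ke_after = 0.5 * (m1 + m2) * v_final**2           # 10 J
print(ke_before - ke_after)                       # 60 J of kinetic energy lost

# danb's point: momentum alone already gives v_final = (m1*v1 + m2*v2) / (m1 + m2) = 2 m/s.
```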
Similar Discussions: Need some help answering this problem thanks | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8855653405189514, "perplexity": 617.7674201258618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102967.65/warc/CC-MAIN-20170817053725-20170817073725-00588.warc.gz"} |
https://www.varsitytutors.com/ap_physics_1-help/electric-force-between-point-charges | # AP Physics 1 : Electric Force Between Point Charges
## Example Questions
### Example Question #1 : Electricity
Two point charges, each having a charge of +1C, are 2 meters apart. If the distance between them is doubled, by what factor does the force between them change?
The force between the charges remains constant
Explanation:
This is a question where knowing how to effectively sift through a problem statement and choose only the information you need will really help. We are given a bunch of values, but only need to know one thing, which is that the distance between the two charges is doubled.
Coulomb's law is as follows:
We can rewrite this for the initial and final scenarios:
We can divide one equation by the other to set up a ratio:
We know that the final radius is double the initial, which is written as:
Substituting this in we get:
Rearranging for the final force, we get:
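Since the problem statement supplies concrete numbers, the scaling can also be verified directly (an added sketch; the value of Coulomb's constant cancels out of the ratio):

```python
k = 8.99e9                 # Coulomb's constant, N*m^2/C^2
q1 = q2 = 1.0              # charges in coulombs
r1, r2 = 2.0, 4.0          # original and doubled separation, in meters

F1 = k * q1 * q2 / r1**2
F2 = k * q1 * q2 / r2**2
print(F2 / F1)             # -> 0.25: doubling the distance quarters the force
```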
### Example Question #1 : Electric Force Between Point Charges
What is the force exerted on a point charge of by a point charge of that is located away?
Explanation:
Use Coulomb's law.
Plug in known values and solve.
Note that a positive value for electric force corresponds to a repulsive force. This should make sense since the charge on both particles are the same sign (positive).
### Example Question #2 : Electric Force Between Point Charges
If we have 2 charges, and , that are apart, what is the force exerted on by if we know that has a charge of and has a charge of ?
Explanation:
Use Coulomb's law.
Note that the electric force between two charges of the same sign (both positive or both negative) is a positive value. This indicates a repulsive force.
### Example Question #3 : Electric Force Between Point Charges
Determine the magnitude of the electric force between 2 protons that are 3nm apart. Also determine if this force is attractive or repulsive.
; attractive
; repulsive
; repulsive
; repulsive
; repulsive
Explanation:
Recall that Coulomb's law tells us that the magnitude of force between two point charges is given as:
Here, is force between two particles, are the charges of each of the two particles, and is the distance between the two charges. In our case, and are identical since each is the charge of a proton which is given as:, and
Thus, plug in known values and solve.
To determine if the force is attractive or repulsive, we only need to examine the sign of the charges. Since both protons have the same sign for their charge (positive charges) they will repel.
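The arithmetic can be reproduced as follows (an added sketch; the elementary charge 1.602 × 10⁻¹⁹ C and k ≈ 8.99 × 10⁹ N·m²/C² are standard values, written out here because the original numbers were rendered as images):

```python
k = 8.99e9        # Coulomb's constant, N*m^2/C^2
e = 1.602e-19     # charge of a proton, C
r = 3e-9          # 3 nm separation, in meters

F = k * e * e / r**2
print(F)          # roughly 2.6e-11 N; repulsive, since both charges are positive
```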
### Example Question #4 : Electric Force Between Point Charges
A point charge of magnitude is located 0.01m away from a point charge of magnitude . What is the electric force between the point charges?
Explanation:
Use Coulomb's law to find the electric force between the charges:
### Example Question #5 : Electric Force Between Point Charges
A point charge of magnitude is 2nm away from a point charge of identical charge. What is the electric force between the point charges?
Explanation:
The electric force between two point charges is given by Coulomb's law:
Now, plug in the given charges (both the same magnitude), the given constant, and the distance between the charges (in meters) to get our answer:
### Example Question #1231 : Ap Physics 1
What is the magnitude of the electric force between two charged metal objects that are 3 m apart and whose charges have absolute values of 1 C and 3 C?
Explanation:
We are given all the necessary information to find the magnitude of the electric force by using Coulomb's law:
Where is Coulomb's constant given by and are the respective charges, and is the distance between the charges. In our case:
### Example Question #7 : Electric Force Between Point Charges
Three charges are shown in the given figure. Find the net force on the "top" charge due to the other two (both magnitude and direction). Let and assume all charges are away from each other.
Let be the bottom left particle, be the top particle and be the bottom right particle. Note the axis.
Explanation:
The method for solving Coulomb's law problems with electrostatic configurations is to find the magnitude of the force and then assign a direction based on what is known about the charges. Coulomb's law is given as:
Where and are the two particles we are finding the force between and is the electric constant and is:
Notice that the distances between and is the same as the and . Since the magnitudes of all charges are the same, that means that the magnitudes of the forces (not directions) are the same. So the force exerted on from is the same magnitude as the force exerted on from .
A sketch of the forces is shown below:
Remember that there are always equal and opposite force pairs. We only care about the forces acting on and the last picture shows the two forces that act on it from and . Notice that the vector arrows are of equal length (force magnitudes are equal) and in different directions. Coulomb forces obey the law of superposition and we can add them. Before we do that let's calculate the magnitude of the two forces pictured.
Remember to convert distances to meters and charge magnitudes to Coulombs so the units work out and you are not off by any factors of .
Both the red vector arrow and the blue vector arrow have magnitudes of . Notice in the diagram below that if the charges are spaced equidistantly, they will form an equilateral triangle.
The angle is the same angle that each vector on the right has relative to the line drawn. In order to add the vectors together, we need to separate the vectors into their x- and y-components and add the respective components. This is where symmetry can be handy to make the problem easier. Since the particles are equidistant and the charge magnitudes are all equal, the force magnitudes are equal as well. By inspection it can be shown that the y-components must be equal and opposite and therefore cancel.
This means that total magnitude of the force acting on is just the sum of the x-component forces. To get the x-components we can use the cosine of the angle. Since the angles are equal and the magnitudes are equal, the final answer will be:
The final answer is in the positive x-direction, denoted by the positive answer and the to indicate in the x-direction. The answer must have a magnitude and direction to describe the net force acting on the particle.
### Example Question #8 : Electric Force Between Point Charges
A mole of electrons has a charge of , which is called Faraday's constant. Given that Faraday's constant is , determine the electric force per mole exerted by individual moles of electrons on one another separated by . Assume charges are static. Use Coulomb's law, and assume that moles of electrons behave like a point charge.
Explanation:
From Coulomb's Law:
Where is the distance between point charges, , and and are charges of the electrons. In our case,
### Example Question #9 : Electric Force Between Point Charges
If , and , then what is the magnitude of the net force on charge 2?
Explanation:
First let's set up two axes. Have be to the right of charges 3 and 2 in the diagram and be above charges 1 and 2 in the diagram, with charge 2 at the origin.
Coulomb's law tells us the force between point charges is
The net force on charge 2 can be determined by combining the force on charge 2 due to charge 1 and the force on charge 2 due to charge 3.
Since charge 1 and charge 2 are of opposite polarities, they have an attractive force; therefore, charge 2 experiences a force towards charge 1 (in the direction). By using Coulomb's law, we can determine this force to be
in the direction
Since charges 2 and 3 have the same polarities, they have a repulsive force; therefore, charge 2 experiences a force away from charge 3 (in the direction). By using Coulomb's law, we can determine this force to be:
in the -direction
If we draw out these two forces tip to tail, we can construct the net force:
From this, we can see that and create a right triangle with the net force on charge 2 as the hypotenuse. By using the Pythagorean theorem, we can calculate the magnitude of the net force:
← Previous 1 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9846778512001038, "perplexity": 520.9742315986465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742978.60/warc/CC-MAIN-20181116045735-20181116071735-00521.warc.gz"} |
https://link.springer.com/article/10.1007%2Fs00706-018-2243-6 | Monatshefte für Chemie - Chemical Monthly
, Volume 149, Issue 9, pp 1693–1699
# Monitoring of n-butanol vapors biofiltration process using an electronic nose combined with calibration models
• Bartosz Szulczyński
• Piotr Rybarczyk
• Jacek Gębicki
Open Access
Original Paper
### Abstract
Malodorous odors are, by definition, unpleasant, irritating smells: mixtures of volatile chemical compounds that can be sensed at low concentrations. Due to the growing problem of odor nuisance and the resulting need to remove odorants from the air, deodorization techniques are commonly used. Biofiltration is one of the methods of reducing odorants in an air stream. In this paper, the possibility of using an electronic nose as an alternative to gas chromatography for the online monitoring and efficiency evaluation of the n-butanol vapors biofiltration process in a transient state was investigated. Three calibration models were used in the research, i.e., multiple linear regression, principal component regression, and partial least-squares regression. The obtained results were compared with the theoretical values.
## Keywords
Odorous substances Sensors Mathematical modeling Alcohols
## Introduction
Among various methods of controlling volatile organic compounds’ (VOCs) emissions, biological techniques seem to be very promising due to low operating costs, possibility to treat gases with low concentration of odorants and very low secondary pollution generation. The use of biological methods for air purification has been known for over 60 years. Devices called biofilters are most commonly used for this purpose [1]. A typical biofilter is a column-type bioreactor packed with a bed made from either natural or synthetic materials. The process of biofiltration consists in the decomposition of contaminants by bacteria or other microorganisms located in the biofilm formed on the filter bed elements. The contaminated gas is humidified prior to entering the biofilter. The mechanism of biofiltration involves the diffusion of pollutants from the gas phase to the biofilm, covering the surface of the packing element. The compounds absorbed in the biofilm undergo biodegradation and the clean gas leaves the biofilter [2, 3].
Elements of a biofilter bed are moisturized with an aqueous phase. Therefore, the biofilm formed on the surface of such elements of bed allows for the absorption of hydrophilic compounds, as it is in the case of n-butanol. n-Butanol belongs to the group of VOCs, concentrations of which need to be controlled in the environment. Emissions of n-butanol are identified, i.e., from processes of thermal regeneration of adsorbents, from wastewater treatment plants, and waste disposal plants, coal mines, and other industrial activities [4, 5]. n-Butanol is characterized by harsh and alcoholic-like odor type [6], and it is irritating and hazardous to human health when inhaled. The effectiveness of the removal of such kind of compounds in the biofilters depends mainly on the rate of their biodegradation by microorganisms inhabiting the biofilm. This is so because the process limitation with respect to mass transfer of the hydrophilic compounds from the gas to liquid phase is negligibly small [7]. The most crucial parameters affecting the biofiltration efficiency include the type and porosity of the bed, the pressure drop through the bed, pH value, gas flow rate, temperature, and concentration of pollutants in the gas phase [8].
Mathematical modeling is a useful tool for the prediction of a biofilter performance. Biofiltration models usually follow a transfer-reaction scheme [9]. Such an approach requires the knowledge of physical (i.e., gas–liquid equilibrium and transfer coefficients) as well as biological (kinetics and efficiency of biodegradation) parameters. One of the first and the most popular models describing biofiltration was proposed by Ottengraf [10, 11]. According to [10], the Ottengraf model is based on the following assumptions: (1) the mass transfer resistance in the gas phase is negligible comparing to the mass transfer resistance in the liquid phase; (2) the biofilm thickness is much smaller than the diameter of a bed element; (3) the biofilm may be treated as a planar surface; (4) substrate transport through the biofilm occurs by diffusion; (5) the interface between the gas and the liquid phases is in the equilibrium; (6) biodegradation may be described by the Michaelis–Menten kinetics or by the Monod equation; (7) constant kinetic constants may be used due to the net growth of biomass in the biofilm which is controlled to be zero; (8) the biomass is uniformly distributed in the biofilter; (9) the biofilter is treated as a plug flow reactor. The Ottengraf model, proposed in 1983, is a rather simple model predicting the performance of the biofilter; however, it is still treated as a reference for other models and their modifications [12]. When modifying or developing a new model for the description of biofiltration, following problems must be taken into consideration: (1) an influence of unstable conditions during the biofiltration, (2) the possibility of interactions between the gas-mixture components, and (3) the identification of biodegradation inhibitors.
Until the steady-state process conditions are established, the system is subjected to both first- and zero-order reactions. In addition, for a given moment of the process duration at two different horizontal cross-sections of the biofilter, the occurring reactions proceed with different rates and may follow the kinetics of different order. Therefore, it is impossible to determine the order of reactions in the initial period of the biofiltration process, because the process occurs in unsteady-state conditions. For this reason, the model proposed by Świsłowski [13] was used in this paper to predict the theoretical values of the time-dependent distribution pattern of n-butanol concentrations in the biofilter outlet gas stream. The calculation algorithm is shown schematically in Fig. 1.
The determination of concentration of a compound at the inlet and outlet of a biofilter allows to calculate the efficiency of the process:
$$\eta = 1 - \frac{{C_{\text{outlet}} }}{{C_{\text{inlet}} }},$$
(1)
where η is the biofiltration efficiency; Cinlet/outlet is the concentration of the odorous compound in the inlet and outlet streams.
The most popular methods used for the evaluation of the biofiltration efficiency are chromatographic methods, i.e., gas chromatography coupled with mass spectrometer or flame ionization detector. However, due to the high operating costs, the need to provide high purity gas or vacuum, these techniques are mainly exploited for laboratory tests and are rarely used for the monitoring of industrial biofiltration processes in the online mode. In recent years, there has been a significant increase in the interest in using electronic noses for quantitative and qualitative analyses for environmental monitoring [14, 15]. Due to the short time and low cost of a single analysis, electronic noses have become an alternative to gas chromatography. Electronic noses (e-noses) are devices that are supposed to mimic the human sense of smell and are used in many areas of human activity [16, 17, 18, 19, 20]. Such devices consist of four main components, as given in Fig. 2.
The applied sampling system eliminates possible undesirable factors affecting the sensor response, and provides stable and reproducible measurement conditions (temperature, humidity, and gas flow velocity). Detection system is a set of sensors located in the measurement chamber. The most commonly used sensor types are commercially available sensors for the detection of volatile organic compounds, e.g., metal oxide sensors (MOS) [21]. Such sensors exhibit different selectivity and sensitivity, but, when coupled in a sensor matrix, produce a characteristic chemical image of the gas mixture (“fingerprint”). This image is transferred to the data collection system, which is responsible for the digital signal processing. The last but the most important element of the electronic nose system is a pattern recognition system, which assigns the received set of signals to one of the pattern classes or predicts the concentrations using the appropriate mathematical calibration model. The most commonly used models include multiple linear regression (MLR), principal components regression (PCR), and partial least-squares regression (PLSR). Construction of the above models utilizes a set of explaining variables (signals from the sensors comprising an electronic nose) and a set of dependent variables (values of substance concentrations in a gas mixture). A task for the calibration method is to develop a model allowing for the quantitative evaluation of a particular property or the properties based on the set of explaining variables. These methods find successful applications for the monitoring of the changes of concentrations of odorous compounds during biofiltration or sewage treatment [22, 23, 24, 25].
Multiple linear regression generates a linear relationship between a single dependent variable (y) and a set of several explanatory variables (x):
$$y = a_{0} + a_{1} \cdot x_{1} + a_{2} \cdot x_{2} + \cdots + a_{n} \cdot x_{n} .$$
(2)
The model can be used when the number of variables is smaller than the number of samples and when these variables are poorly correlated. In other cases, it is impossible to determine the regression coefficients (a), which are obtained by the least-squares method [26].
Principal component regression reduces the number of explanatory variables by using the first few principal components (PCs) in place of the primary variables. The guiding idea of this method is to formulate a relationship between the PCs and the expected property of the sample. The method is applied in two stages: the first stage determines the principal components using the principal component analysis (PCA) method, which yields an uncorrelated matrix of variables. The second stage develops an MLR model with the principal components as variables:
$$y = a_{0} + a_{1} \cdot {\text{PC}}_{1} + a_{2} \cdot {\text{PC}}_{2} + \cdots + a_{n} \cdot {\text{PC}}_{n} .$$
(3)
The number of principal components (n) most often needs to be selected experimentally. The proper choice is of great importance for the predictive capabilities of the model. When too few principal components are used, the calibration model contains insufficient information for correct forecasting. However, if the number of principal components is excessive, unwanted information such as noise or experimental errors is introduced into the model.
Partial least-squares regression is the most commonly used method for the development of multidimensional calibration models. The task of the PLS method, like that of PCR, is to design the model on the basis of mutually orthogonal factors. In the PLS method, such factors are created in a different way than in the PCR method. In the PCR method, the factors are principal components created on the basis of the variables matrix only, while in the PLS method the relationship between the variables and dependent-variables matrices is taken into account. Each factor in the PLS method explains the maximum covariance between the factors for the variables and the factors for the dependent variables. Covariance combines a high variance of the variables matrix with a high correlation with the property of interest [27, 28, 29].
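For readers who want to reproduce this kind of calibration, the sketch below builds all three model types on simulated sensor data (the authors report using RStudio; Python's scikit-learn is used here purely as an illustration, and none of the numbers are the study's own):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
# Simulated training set: 15 gas samples x 8 sensor signals (a stand-in for the TGS array),
# with the n-butanol concentration in ppm as the dependent variable.
conc = np.repeat([15.0, 30.0, 60.0, 120.0, 240.0], 3)
X = np.outer(conc, rng.uniform(0.5, 1.5, 8)) ** 0.7 + rng.normal(0, 1, (15, 8))

models = {
    "MLR": LinearRegression(),
    "PCR": make_pipeline(PCA(n_components=3), LinearRegression()),
    "PLSR": PLSRegression(n_components=3),
}
for name, model in models.items():
    model.fit(X, conc)
    pred = model.predict(X).ravel()
    # Root-mean-square error, the metric of Table 1 (computed on the fit here for brevity;
    # the paper's RMSEP is evaluated on prediction).
    print(name, round(float(np.sqrt(mean_squared_error(conc, pred))), 2))
```

In practice the number of PCR or PLSR components would be chosen by cross-validation rather than fixed in advance.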
In the present paper, an electronic nose prototype combined with three types of calibration models (MLR, PCR, and PLSR) is used for the monitoring of n-butanol vapors biofiltration process. Research results are compared with theoretical values.
## Results and discussion
For the data calibration, three models were developed, i.e., MLR, PCR, and PLSR. As a visual method of assessing the fit of the models to the experimental data, the correlation plots were prepared (Figs. 3, 4, 5).
The root-mean-square error of prediction (RMSEP) was chosen as the numerical tool for the selection of the optimal model. The values of RMSEP for the developed models are given in Table 1.
Table 1. Root-mean-square error of prediction (RMSEP) values for the developed models

Model | RMSEP/ppm
---|---
Multiple linear regression (MLR) | 7.5
Principal component regression (PCR) | 12.8
Partial least-square regression (PLSR) | 5.4
The results of the conducted research indicate that the most suitable calibration model for the determination of the n-butanol concentration is the PLS model (i.e., characterized by the lowest value of RMSEP = 5.4), and the least suitable model is PCR with the RMSEP value of 12.8. The obtained results show that, with a relatively small number of training data, PLSR models perform better than in the case of PCR methods. The multiple linear regression method, in terms of the value of mean square error, is placed between PLSR and PCR. It can be concluded that its estimation of the explained variable is relatively good (RMSEP = 7.5), taking into account a small number of degrees of freedom of the model.
The good predictive capabilities of the developed models made it possible to investigate the use of the e-nose as a tool for monitoring and assessing the effectiveness of the n-butanol biofiltration process under unsteady-state conditions. In this part of the work, the three models (MLR, PCR, and PLSR) were again applied. Using the electronic nose, the concentrations of n-butanol vapors at the inlet and outlet of the biofilter were determined from the start-up until stable process conditions were reached. The obtained results (the ratio of the outlet to inlet concentrations) as a function of time are shown in Figs. 6, 7, and 8. These figures also contain the theoretical values of process efficiency calculated on the basis of the biofiltration model described in Fig. 1.
Very good correlation between the experimental and theoretical results was obtained. The PLSR model presents the best fit to the theoretical curve. For process duration times longer than about 5–6 h, the MLR and PCR models predicted lower values for the outlet concentration of n-butanol (CG) than the theoretical values. The largest discrepancies were observed for the PCR model, which may be due to the presence of water vapor released from the filter bed, which was not present in the calibration samples. This problem was not observed for the PLSR model, which is the least susceptible to changes in the sample matrix composition among the investigated models.
## Conclusions
It was found that the constructed electronic nose prototype together with the developed mathematical calibration models (MLR, PCR, and PLSR) can be successfully applied for the monitoring and efficiency assessment of the n-butanol vapors biofiltration process. The proposed models were characterized by a high compliance with the theoretical values. The best fit quality was obtained for the PLSR model.
The obtained results confirm that the use of electronic noses as an alternative method to gas chromatography for the online monitoring of the biofiltration process is possible. Due to the low cost of the prototype and short time as well as low cost of a single analysis, the use of e-noses seems to be justified and purposeful for such applications. The possibility of obtaining the results in the online mode highlights the perspective of using e-noses as measuring elements in the automation systems for control and management of air biofiltration processes.
## Experimental
Two research systems were used in the investigations: the first was used to develop calibration models for the electronic nose (Fig. 9) and the second system was used for the evaluation of a biofilter performance (Fig. 10).
Mixtures of air and n-butanol vapors were generated using the bubbling phenomenon. Purified and dried air was passed through a vial containing n-butanol (Sigma-Aldrich). The formed mixture was diluted with a zero air stream to achieve the desired concentration of n-butanol in the mixture. The concentration was determined by measuring the weight loss of n-butanol in a vial according to the relationship:
$$C = \frac{\Delta m}{V \cdot t},$$
(4)
where C is the concentration, Δm is the vial mass change, V is the air flow rate, and t is the time.
The air flow rates through the vial and the diluting air were controlled by mass flow controllers. Switching the three-way valve, the generated mixture was directed to the electronic nose working in the stop-flow mode (flow time: 30 s; stop time: 15 s). Then, the valve was switched again and zero air was introduced to the e-nose for sensors regeneration (for the time interval equal to 10 min). Measurements were made for five concentrations of n-butanol in the mixture: 15, 30, 60, 120, and 240 ppm. Each measurement was repeated three times.
The biofiltration of n-butanol vapors was carried out using the system shown in Fig. 10.
A mixture of air and n-butanol with a concentration of 240 ppm, generated as described above, was fed to the biofilter. The volumetric flow rate of the mixture through the packed bed was 1.2 dm3 min−1. A column biofilter was used during the investigations (outside diameter/inside diameter: 0.05/0.04 m, h = 1 m). Pine bark elements with an average size of 4–7 mm were used as the biofilter packing material. Before starting the process, the bed was sprinkled with an aqueous solution of mineral salts. After the process start-up (process duration time: 12 h), every 15 min the gas samples were collected and analyzed using the electronic nose prototype, constructed in the Department of Chemical and Process Engineering, Faculty of Chemistry, Gdańsk University of Technology. The device worked with eight TGS metal oxide sensors manufactured by Figaro Inc: TGS 2104, TGS 2106, TGS 2180, TGS 2201, TGS 2600, TGS 2602, TGS 2603, and TGS 2611. The collected sensor signals were saved on the computer using the Simex SIAi-8 analog-to-digital converter. The electronic nose operated in a stop-flow mode. The sample flow time through the chamber was 30 s, while the stop time was 15 s. For the calculations of a sensor signal (S), the quotient form was chosen:
$$S = \frac{\Delta S}{{S_{0} }} = \frac{{S_{ \hbox{max} } - S_{0} }}{{S_{0} }},$$
(5)
where Smax is the maximal value of the signal and S0 is the sensor baseline value. Data analysis and other calculations were performed using RStudio Desktop (v. 1.0.143) software.
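A minimal sketch tying together the sensor-signal normalization of Eq. (5) and the efficiency definition of Eq. (1) (illustrative values only, not measurements from this study):

```python
def sensor_response(s_max: float, s_0: float) -> float:
    """Relative sensor signal S = (S_max - S_0) / S_0, as in Eq. (5)."""
    return (s_max - s_0) / s_0

def biofiltration_efficiency(c_inlet: float, c_outlet: float) -> float:
    """Removal efficiency eta = 1 - C_outlet / C_inlet, as in Eq. (1)."""
    return 1.0 - c_outlet / c_inlet

# Hypothetical numbers: a sensor rising from 2.0 to 3.4 (arbitrary units), and a
# model-predicted outlet concentration of 36 ppm against a 240 ppm inlet.
print(sensor_response(s_max=3.4, s_0=2.0))       # 0.7 (dimensionless)
print(biofiltration_efficiency(240.0, 36.0))     # 0.85, i.e. 85 % removal
```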
## Notes
### Acknowledgements
The investigations were financially supported by the Grant No. UMO-2015/19/B/ST4/02722 from the National Science Centre, Poland.
## References
1. Sonil N, Prakash KS, Jayanthi A (2012) J Soil Sci Environ Manag 3:28
2. Mudliar S, Giri B, Padoley K, Satpute D, Dixit R, Bhatt P, Pandey R, Juwarkar A, Vaidya A (2010) J Environ Manag 91:1039
3. Ferdowski M, Avalos Ramirez A, Jones PJ, Heitz M (2017) Int Biodeterior Biodegrad 119:336
4. Schmidt T, Anderson WA (2017) Environments 4:57
5. Lee SH, Li C, Heber AJ, Ni J, Huang H (2013) Bioresour Technol 127:366
6. Lewkowska P, Cieślik B, Dymerski T, Konieczka P, Namieśnik J (2016) Environ Res 151:573
7. Cheng Y, He H, Yang C, Zeng G, Li X, Chen H, Yu G (2016) Biotechnol Adv 34:1091
8. McNevin D, Barford J (2000) Biochem Eng J 5:231
9. Pineda J, Auria R, Perez-Guevara F, Revah S (2000) Bioprocess Biosyst Eng 23:479
10. Lim KH, Lee EJ (2003) Korean J Chem Eng 20:315
11. Zarook SM, Shaikh AA (1997) Chem Eng J 65:55
12. Qasim M, Shareefdeen Z (2013) Adv Chem Eng Sci 3:26479
13. Świsłowski M (2002) Removal of organic vapors from the air in a biofilter with natural packing materials, Ph.D. Dissertation. Faculty of Chemistry, Gdańsk University of Technology, Gdańsk
14. Capelli L, Sironi S, Del Rosso R (2014) Sensors 14:19979
15. Szulczyński B, Wasilewski T, Wojnowski W, Majchrzak T, Dymerski T, Namieśnik J, Gębicki J (2017) Sensors 17:2671
16. Gancarz M, Wawrzyniak J, Gawrysiak-Witulska M, Wiącek D, Nawrocka A, Tadla M, Rusinek R (2017) Measurement 103:227
17. Wojnowski W, Majchrzak T, Dymerski T, Gębicki J, Namieśnik J (2017) Meat Sci 131:119
18. Schnabel RM, Boumans MLL, Smolinska A, Stobberingh EE, Kaufmann R, Roekaerts PMHJ, Bergmans DCJJ (2015) Respir Med 109:1454
19. Peris M, Escuder-Gilabert L (2016) Trends Food Sci Technol 58:40
20. Romero-Flores A, McConnell LL, Hapeman CJ, Ramirez M, Torrents A (2017) Chemosphere 186:151
21. Szulczyński B, Gębicki J (2017) Environments 4:21
22. López R, Cabeza IO, Giráldez I, Díaz MJ (2011) Bioresour Technol 102:7984
23. Romain AC, Nicolas J, Cobut P, Delva J, Nicks B, Philippe F-X (2013) Atmos Environ 77:935
24. Littarru P (2007) Waste Manag 27:302
25. Szulczyński B, Gębicki J, Namieśnik J (2018) Chem Papers 72:527
26. Wise B, Gallagher N (1996) J Process Control 6:329
27. Arnold MA, Burmeister JJ, Chung H (1998) Photochem Photobiol 67:50
28. Malin F, Ruchti TL, Blank TB, Thennadil SN, Monfre SL (1999) Clin Chem 45:1651
29.
Blanco M, Romero MA (2001) Analyst 126:2212 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8198298215866089, "perplexity": 2663.6354549815787}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516892.84/warc/CC-MAIN-20181023174507-20181023200007-00294.warc.gz"} |
https://online.stat.psu.edu/stat501/book/export/html/965 | # 10.1 - What if the Regression Equation Contains "Wrong" Predictors?
Before we can go off and learn about the two variable selection methods, we first need to understand the consequences of a regression equation containing the "wrong" or "inappropriate" variables. Let's do that now!
There are four possible outcomes when formulating a regression model for a set of data:
• The regression model is "correctly specified."
• The regression model is "underspecified."
• The regression model contains one or more "extraneous variables."
• The regression model is "overspecified."
Let's consider the consequence of each of these outcomes on the regression model. Before we do, we need to take a brief aside to learn what it means for an estimate to have the good characteristic of being unbiased.
### Unbiased estimates
An estimate is unbiased if the average of the values of the statistics determined from all possible random samples equals the parameter you're trying to estimate. That is, if you take a random sample from a population and calculate the mean of the sample, then take another random sample and calculate its mean, and take another random sample and calculate its mean, and so on — the average of the means from all of the samples that you have taken should equal the true population mean. If that happens, the sample mean is considered an unbiased estimate of the population mean $$\mu$$.
An estimated regression coefficient $$b_i$$ is an unbiased estimate of the population slope $$\beta_i$$ if the mean of all of the possible estimates $$b_i$$ equals $$\beta_i$$. And, the predicted response $$\hat{y}_i$$ is an unbiased estimate of $$\mu_Y$$ if the mean of all of the possible predicted responses $$\hat{y}_i$$ equals $$\mu_Y$$.
So far, this has probably sounded pretty technical. Here's an easy way to think about it. If you hop on a scale every morning, you can't expect that the scale will be perfectly accurate every day —some days it might run a little high, and some days a little low. That you can probably live with. You certainly don't want the scale, however, to consistently report that you weigh five pounds more than you actually do — your scale would be biased upward. Nor do you want it to consistently report that you weigh five pounds less than you actually do — errr..., scratch that, maybe you do — in this case, your scale would be biased downward. What you do want is for the scale to be correct on average — in this case, your scale would be unbiased. And, that's what we want!
### The four possible outcomes
##### Outcome 1
A regression model is correctly specified if the regression equation contains all of the relevant predictors, including any necessary transformations and interaction terms. That is, there are no missing, redundant or extraneous predictors in the model. Of course, this is the best possible outcome and the one we hope to achieve!
The good thing is that a correctly specified regression model yields unbiased regression coefficients and unbiased predictions of the response. And, the mean squared error (MSE) — which appears in some form in every hypothesis test we conduct or confidence interval we calculate — is an unbiased estimate of the error variance $$\sigma^{2}$$.
##### Outcome 2
A regression model is underspecified if the regression equation is missing one or more important predictor variables. This situation is perhaps the worst-case scenario, because an underspecified model yields biased regression coefficients and biased predictions of the response. That is, in using the model, we would consistently underestimate or overestimate the population slopes and the population means. To make already bad matters even worse, the mean square error MSE tends to overestimate $$\sigma^{2}$$, thereby yielding wider confidence intervals than it should.
Let's take a look at an example of a model that is likely underspecified. It involves an analysis of the height and weight of martians. The Martian dataset — which was obviously contrived just for the sake of this example — contains the weights (in g), heights (in cm), and amount of daily water consumption (0, 10 or 20 cups per day) of 12 martians.
If we regress $$y = \text{ weight}$$ on the predictors $$x_1 = \text{ height}$$ and $$x_2 = \text{ water}$$, we obtain the following estimated regression equation:
##### Regression Equation
weight = -1.220 + 0.28344 height + 0.11121 water
and the following estimate of the error variance $$\sigma^{2}$$:
MSE = 0.017
If we regress $$y = \text{ weight}$$ on only the one predictor $$x_1 = \text{ height}$$, we obtain the following estimated regression equation:
##### Regression Equation
weight = -4.14 + 0.3889 height
and the following estimate of the error variance $$\sigma^{2}$$:
MSE = 0.653
Plotting the two estimated regression equations, we obtain:
The three black lines represent the estimated regression equation when the amount of water consumption is taken into account — the first line for 0 cups per day, the second line for 10 cups per day, and the third line for 20 cups per day. The blue dashed line represents the estimated regression equation when we leave the amount of water consumed out of the regression model.
The second model — in which water is left out of the model — is likely an underspecified model. Now, what is the effect of leaving water consumption out of the regression model?
• The slope of the line (0.3889) obtained when height is the only predictor variable is much steeper than the slopes of the three parallel lines (0.28344) obtained by including the effect of water consumption, as well as height, on martian weight. That is, the slope likely overestimates the actual slope.
• The intercept of the line (-4.14) obtained when height is the only predictor variable is smaller than the intercepts of the three parallel lines (-1.220, -1.220 + 0.11121(10) = -0.108, and -1.220 + 0.11121(20) = 1.004) obtained by including the effect of water consumption, as well as height, on martian weight. That is, the intercept likely underestimates the actual intercepts.
• The estimate of the error variance $$\sigma^{2}$$ (MSE = 0.653) obtained when height is the only predictor variable is about 38 times larger than the estimate obtained (MSE = 0.017) by including the effect of water consumption, as well as height, on martian weight. That is, MSE likely overestimates the actual error variance $$\sigma^{2}$$.
This contrived example is nice in that it allows us to visualize how an underspecified model can yield biased estimates of important regression parameters. Unfortunately, in reality, we don't know the correct model. After all, if we did we wouldn't have a need to conduct the regression analysis! Because we don't know the correct form of the regression model, we have no way of knowing the exact nature of the biases.
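To see the same pattern numerically, here is a hedged sketch. The original 12-point Martian dataset is not reproduced in this lesson, so the data below are simulated with a similar structure (water consumption drives both height and weight); the fitted numbers will not match the output above, but the qualitative effect (a steeper slope and an inflated MSE for the height-only fit) should show up.

```python
import numpy as np

rng = np.random.default_rng(1)

# simulated stand-in for the Martian data: 12 martians, water = 0, 10 or 20 cups/day
water  = np.repeat([0.0, 10.0, 20.0], 4)
height = 35 + 0.5 * water + rng.normal(0, 1.5, 12)            # water and height are correlated
weight = -1.2 + 0.28 * height + 0.11 * water + rng.normal(0, 0.1, 12)

def ols(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])                 # prepend an intercept column
    b, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ b
    mse = resid @ resid / (len(y) - X1.shape[1])               # SSE / (n - number of parameters)
    return b, mse

b_full, mse_full = ols(np.column_stack([height, water]), weight)   # "correct" model
b_red,  mse_red  = ols(height[:, None], weight)                     # water omitted (underspecified)

print("height + water:", b_full.round(3), " MSE =", round(mse_full, 4))
print("height only   :", b_red.round(3),  " MSE =", round(mse_red, 4))
```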
##### Outcome 3
Another possible outcome is that the regression model contains one or more extraneous variables. That is, the regression equation contains extraneous variables that are neither related to the response nor to any of the other predictors. It is as if we went overboard and included extra predictors in the model that we didn't need!
The good news is that such a model does yield unbiased regression coefficients, unbiased predictions of the response, and an unbiased MSE. The bad news is that — because we have more parameters in our model — the MSE has fewer degrees of freedom associated with it. When this happens, our confidence intervals tend to be wider and our hypothesis tests tend to have lower power. It's not the worst thing that can happen, but it's not great either. By including extraneous variables, we've also made our model more complicated and harder to understand than necessary.
##### Outcome 4
If the regression model is overspecified, then the regression equation contains one or more redundant predictor variables. That is, part of the model is correct, but we have gone overboard by adding predictors that are redundant. Redundant predictors lead to problems such as inflated standard errors for the regression coefficients. (Such problems are also associated with multicollinearity, which we'll cover in Lesson 12).
Regression models that are overspecified yield unbiased regression coefficients, unbiased predictions of the response, and an unbiased MSE. Such a regression model can be used, with caution, for prediction of the response, but should not be used to describe the effect of a predictor on the response. Also, as with including extraneous variables, we've made our model more complicated and harder to understand than necessary.
### A goal and a strategy
Okay, so now we know the consequences of having the "wrong" variables in our regression model. The challenge, of course, is that we can never really be sure which variables are "wrong" and which variables are "right." All we can do is use the statistical methods at our fingertips and our knowledge of the situation to help build our regression model.
Here's my recommended approach to building a good and useful model:
1. Know your goal, know your research question. Knowing how you plan to use your regression model can assist greatly in the model building stage. Do you have a few particular predictors of interest? If so, you should make sure your final model includes them. Are you just interested in predicting the response? If so, then multicollinearity should worry you less. Are you interested in the effects that specific predictors have on the response? If so, multicollinearity should be a serious concern. Are you just interested in summary description? What is it that you are trying to accomplish?
2. Identify all of the possible candidate predictors. This may sound easier than it actually is to accomplish. Don't worry about interactions or the appropriate functional form — such as $$x^{2}$$ and log x — just yet. Just make sure you identify all the possible important predictors. If you don't consider them, there is no chance for them to appear in your final model.
3. Use variable selection procedures to find the middle ground between an underspecified model and a model with extraneous or redundant variables. Two possible variable selection procedures are stepwise regression and best subsets regression. We'll learn about both methods here in this lesson.
4. Fine-tune the model to get a correctly specified model. If necessary, change the functional form of the predictors and/or add interactions. Check the behavior of the residuals. If the residuals suggest problems with the model, try a different functional form of the predictors or remove some of the interaction terms. Iterate back and forth between formulating different regression models and checking the behavior of the residuals until you are satisfied with the model.
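As a preview of step 3, here is a toy forward-selection sketch (my illustration only; it is not the stepwise or best subsets procedure covered later in the lesson, which are more careful about entry and removal criteria, and the data here are simulated):

```python
import numpy as np

def forward_select(X, y, names):
    """Greedily add, one at a time, the predictor that most reduces the
    residual sum of squares, and report the path of models."""
    n = len(y)
    chosen, remaining, path = [], list(range(X.shape[1])), []
    while remaining:
        sse = {}
        for j in remaining:
            X1 = np.column_stack([np.ones(n), X[:, chosen + [j]]])
            b, *_ = np.linalg.lstsq(X1, y, rcond=None)
            r = y - X1 @ b
            sse[j] = r @ r
        best = min(sse, key=sse.get)
        chosen.append(best)
        remaining.remove(best)
        path.append(([names[k] for k in chosen], sse[best]))
    return path

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0, 0.3, 40)   # only x1 and x3 matter
for model, sse in forward_select(X, y, ["x1", "x2", "x3", "x4"]):
    print(model, round(float(sse), 2))   # SSE drops sharply for x1 and x3, barely afterwards
```

In practice you would stop adding terms once the drop in SSE (or the change in a criterion such as AIC, or the t-test for the newly added term) is no longer meaningful.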
http://math.stackexchange.com/questions/272821/quadratic-equation-with-matricial-coefficients | # Quadratic equation with matricial coefficients
If I have an equation of the form
$${\lambda ^2}{I_N} + \lambda {M_1} + {M_2} = {0_N}$$
where ${I_N}$ is the identity matrix of order $N$, $M_1$ and $M_2$ are $N\times N$ matrices, and $\lambda \in \mathbb C$ is a complex scalar.
What are the mathematical tools or the mathematical framework to solve this kind of equations?
Looking at the equation for one coefficient gives you two possible values for $\lambda$. You can test both of them to see if they solve the equation. – Quimey Jan 8 '13 at 14:18
For the equation to have a solution, your matrices $M_1$ and $M_2$ must necessarily commute. This would only be one requirement. To see this, consider the case for diagonalizable $M_2$ so that $PM_2P^{-1} = D$ for some invertible $P$ and diagonal $D$.
\begin{align} P\left(\lambda^2 I_N + \lambda M_1 + M_2 = 0_N\right)P^{-1} \\ \lambda^2 I_N + \lambda PM_1P^{-1} + D = 0_N \end{align}
Here we can see that $M_1$ must not only have the same spectrum as $M_2$ (since otherwise the non-zero elements in the off-diagonal would not be canceled in the sum), but it must have appropriate eigenvalues such that the single $\lambda$ simultaneously solves for each diagonal term.
tl:dr: solve for $\lambda_1$ and $\lambda_2$ at any desired coordinate. Check if either one works globally. If not, then there is no solution.
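A quick numerical illustration of that recipe (my own sketch, not part of the answer): solve the scalar quadratic at one entry, then test each root against the full matrix equation.

```python
import numpy as np

def candidate_lambdas(M1, M2, i=0, j=0, tol=1e-9):
    """Solve lambda^2*delta_ij + lambda*M1[i,j] + M2[i,j] = 0 at entry (i, j),
    then keep only the roots that satisfy the whole matrix equation."""
    n = M1.shape[0]
    I = np.eye(n)
    delta = 1.0 if i == j else 0.0
    roots = np.roots([delta, M1[i, j], M2[i, j]])        # at most two candidates
    return [lam for lam in roots
            if np.linalg.norm(lam**2 * I + lam * M1 + M2) < tol]

# every entry gives lambda^2 - 5*lambda + 6 = 0, so both roots work globally
print(candidate_lambdas(-5 * np.eye(3), 6 * np.eye(3)))               # [3.0, 2.0]

# here no single lambda satisfies both the diagonal and the off-diagonal entries
print(candidate_lambdas(np.array([[0., 1.], [0., 0.]]), np.eye(2)))   # []
```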
I've verified that this equation does not have a solution, because M1 and M2 do not commute and they do not share the same spectrum. Thanks a lot. – Daniel.B. Jan 10 '13 at 10:17
+1 and thanks for pointing out that gap to me. – Babak S. Jan 16 '13 at 15:56
@Babak Thanks, and let me know when to delete my other comments on your question, I will look up and refresh my memory on the array syntax in the meantime – adam W Jan 16 '13 at 16:04
You pointed out something very important to me, and I want you to accept my thanks. Let them be there. Your comments contain some useful points I didn't know. ;-) – Babak S. Jan 16 '13 at 16:08
If the equality is to be true, we need to check that for $\{M_1\}_{ij}$ and $\{M_2\}_{ij}$ we have $$\lambda^2 \delta_{ij} + \lambda \{M_1\}_{ij} + \{M_2\}_{ij} = 0$$
where $\delta_{ij}$ is the Kronecker delta and $1 \leq i,j \leq n$.
There's probably a better way of investigating it - I'm having a think about that now. Certainly, from the $i \neq j$ case, we are left with a linear equation in $\lambda$, so at most we have one $\lambda$ as a solution.
The determinant does not distribute over the sum. – Quimey Jan 8 '13 at 15:11
Indeed - I guess I thought that the determinant was a ring homomorphism. – Andrew D Jan 8 '13 at 15:15
Indeed, by adam W's answer, $M_1$ is without loss of generality in Jordan canonical form, so that $M_1 = \bigoplus_{k=1}^M (\mu_k I_{n_k} + N_{n_k})$ for $\mu_k$ the eigenvalues (counted with the relevant notion of multiplicity) and $N_{n_k}$ the appropriate nilpotent matrices. Then $$\lambda^2 I_N + \lambda M_1 = \bigoplus_{k=1}^M ((\lambda^2 + \lambda\mu_k) I_{n_k} + \lambda N_{n_k}),$$ so that $M_2$ must necessarily have the analogous block diagonal form $$M_2 = \bigoplus_{k=1}^M (\alpha_k I_{n_k} + \beta N_{n_k})$$ for some constants $\alpha_k$ and $\beta$. Hence, when the dust settles, you're left with the system of quadratic equations $$\lambda^2 + \mu_k \lambda - \alpha_k =0$$ together with the additional equation $\lambda = \beta$ whenever $M_1$ (and hence also $M_2$) is not diagonal. I think this should be correct?
http://www.algebra.com/cgi-bin/show-question-source.mpl?solution=142642 | Question 190084
You can go through the process of finding all of the possible factors of -15 and 10, blah blah blah, but there is a much simpler and quicker method. Use the idea that if *[tex \LARGE x = a] is a root of the equation formed when you set the polynomial equal to zero, then *[tex \LARGE x - a] must be a factor of the polynomial. Since this is a quadratic trinomial, you can set it equal to zero and use the quadratic formula to find the roots -- then the factors become obvious.
Step 1: Set the trinomial equal to zero.
*[tex \LARGE \ \ \ \ \ \ \ \ \ \ 10w^2-19w-15=0]
Step 2: Solve with the quadratic formula:
*[tex \LARGE \ \ \ \ \ \ \ \ \ \ w = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{-(-19) \pm \sqrt{(-19)^2 - 4(10)(-15)}}{2(10)} = \frac{19 \pm \sqrt{961}}{20} = \frac{19 \pm 31}{20} ]
So
*[tex \LARGE \ \ \ \ \ \ \ \ \ \ w = \frac{19 + 31}{20} = \frac{50}{20} =\frac{5}{2} \ \ \Rightarrow\ \ 2w - 5 = 0]
Or
*[tex \LARGE \ \ \ \ \ \ \ \ \ \ w = \frac{19 - 31}{20} = \frac{-12}{20} =\frac{-3}{5} \ \ \Rightarrow\ \ 5w + 3 = 0]
Verifying that
*[tex \LARGE \ \ \ \ \ \ \ \ \ \ (2w - 5)(5w + 3) = 10w^2-19w-15]
is left as an exercise for the student.
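(If you would rather let a computer do that exercise, here is a one-line SymPy check; this is my addition, not part of the solution above.)

```python
from sympy import symbols, expand

w = symbols('w')
print(expand((2*w - 5)*(5*w + 3)))   # prints 10*w**2 - 19*w - 15
```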
By the way, if you end up with a pair of irrational roots, then the trinomial is not factorable over the rational numbers.
John
*[tex \LARGE e^{i\pi} + 1 = 0]
http://math.stackexchange.com/questions/292837/rectify-image-from-congruent-planar-shape-objects/295409 | Rectify image from congruent planar shape objects
I am implementing an algorithm to remove projective distortions on the following image.
I understand this is possible by applying the following transformation:
$$\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ l_1 & l_2 & l_3 \\ \end{matrix}$$
Where $$l_\infty=(\begin{matrix}l_1 & l_2 & l_3 \end{matrix})^T$$ is the line at the infinity. In a perspective image of a plane, the line at infinity on the world plane is imaged as the vanishing line of the plane.
The vanishing line can be computed by intersecting two vanishing points which can be computed in the following ways:
1. From the intersection of two sets of imaged parallel lines. But it seems there are no two sets of imaged parallel lines in the image.
2. Given the images $$\lt0,a^\prime,a^\prime+b^\prime\gt$$ on an imaged line of three collinear world points $$\lt0,a,a+b\gt$$ with a known length ratio $$d(0,a):d(a,a+b)=a:b$$ between the two world intervals. Here I need to solve the system (up to scale) $$\left(\begin{matrix}0 \\ 1\end{matrix}\right)=\left(\begin{matrix}h11 & h12 \\ h21 & h22\end{matrix}\right) \left(\begin{matrix}0 \\ 1\end{matrix}\right)$$ $$\left(\begin{matrix}a \\ 1\end{matrix}\right)=\left(\begin{matrix}h11 & h12 \\ h21 & h22\end{matrix}\right) \left(\begin{matrix}a^\prime \\ 1\end{matrix}\right)$$ $$\left(\begin{matrix}a+b \\ 1\end{matrix}\right)=\left(\begin{matrix}h11 & h12 \\ h21 & h22\end{matrix}\right) \left(\begin{matrix}a^\prime+b^\prime \\ 1\end{matrix}\right)$$ And compute the vanishing point as $$x^\prime=\left(\begin{matrix}h11 & h12 \\ h21 & h22\end{matrix}\right) \left(\begin{matrix}0 \\ 1\end{matrix}\right)$$ But I don't understand this approach, as I don't have the world points $$\lt0,a,a+b\gt$$, and I don't see what knowing the length ratio buys me.
3. Using the cross ratio. Which I totally don't understand how could be possible to use in this case.
Edit: it isn't necessary to follow any particular approach; just bear in mind that the planar objects are irregular (no orthogonal angles in the real world) congruent shapes (they have the same shape in the real world).
I have written an answer on SO about how to compute a projective transform given four points and their images. You can use vanishing points for my approach, but don't have to. – MvG Feb 4 '13 at 15:36
Thanks for your answer. Actually, directly using a projective approach based on four points is an overspecification of the geometry (8 degrees of freedom instead of 4). Furthermore, I don't have 4 such pairs of points whose shape I know in the world frame; the piece of paper under the rightmost pen might not be rectangular, or may not even exist. – gantzer89 Feb 5 '13 at 12:05
I see I haven't read your question carefully enough, but now I have a proper solution for you, posted as an answer. If that answer is what you are looking for, then perhaps you should modify the title of this question, since you don't necessarily need to use cross ratios for this. Nor some real vanishing points, come to think of it. So “Reconstruct perspective from congruent shapes” or something like this would better describe the task you have at hand. Except you really want a solution based on cross ratios, that is… – MvG Feb 5 '13 at 14:18
I will assume that the two depicted bright polygons are congruent in the world plane. If they are not, then I guess you won't have enough information to reconstruct anything.
The first step is finding a projective transformation which maps one of the polygons onto the other. That transformation is uniquely defined using four points and their images, and I've described this computation in detail in another post. So choosing any four pairs of matching corners will give you that matrix. Originally I had assumed that this is the transformation you were interested in, but I had not read the question carefully enough. The transformation you just found describes the rotation of one polygon onto the other in the image plane.
Next, you look for fixed points of the transformation. You find these as eigenvectors of the transformation matrix. In $\mathbb C\mathrm P^2$ you should get three fixed points, one of them real and two complex and conjugate to one another. The real one is your center of rotation. The complex fixed points correspond to the ideal circle points $I=(1,i,0)^T$ and $J=(1,-i,0)^T$, which remain fixed under every similarity transformation.
By the way: The join of the two complex points corresponds to the vanishing line. Which in your image is way outside the picture area. But knowing the vanishing line is less useful than knowing the locations of $I$ and $J$ as we do, since these points can be used to define angles, so they will help you avoid skewing.
Now you can again choose four points and their images to define a projective transformation. This time, you map the complex fixed points to $I$ and $J$, and you map the center of rotation to wherever you want it to lie in your reconstructed image, e.g. the origin. You still have one point which you can choose arbitrarily. Its image will fix the scale and orientation of the reconstructed image. Strictly speaking, you don't have to take the center of rotation as one preimage point either: as long as you correctly map $I$ and $J$ from image coordinates to world coordinates, you can choose any two pairs of real points to uniquely define the projective transformation.
Counting degrees of freedom, the two arbitrarily chosen points above amount to four real degrees of freedom, matching the four degrees of freedom of a similarity transformation. Everything else is fixed. In particular, not only parallel lines but also angles are reconstructed.
I've just implemented the above description as a proof of concept using Cinderella. The center of rotation, drawn in green, appears to coincide with the upper boundary of your picture. The points P1, P2 and R2 were chosen arbitrarily.
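Below is a rough NumPy sketch of this construction (my own illustration, not the Cinderella implementation mentioned above; the polygon, the world rotation R and the camera distortion P are made up just to generate test data). It builds a homography from four point pairs via a projective change of basis, takes the complex fixed points of the polygon-to-polygon map, and sends them back to the circular points $I$, $J$.

```python
import numpy as np

def basis_map(p1, p2, p3, p4):
    # 3x3 matrix sending the projective basis e1, e2, e3, e1+e2+e3
    # to four given homogeneous points (no three of them collinear)
    A = np.column_stack([p1, p2, p3]).astype(complex)
    c = np.linalg.solve(A, np.asarray(p4, dtype=complex))
    return A * c                                      # rescale the columns

def homography(src, dst):
    # projective transformation taking four src points to four dst points
    return basis_map(*dst) @ np.linalg.inv(basis_map(*src))

# synthetic test data: a world-plane congruence R seen through a projective distortion P
rng = np.random.default_rng(3)
th = 0.7
R = np.array([[np.cos(th), -np.sin(th), 0.2],
              [np.sin(th),  np.cos(th), 0.5],
              [0.0, 0.0, 1.0]])                       # rotation + shift in the world plane
P = np.array([[1.0, 0.1, 0.0],
              [0.05, 1.1, 0.0],
              [0.002, 0.001, 1.0]])                   # unknown camera homography
poly = np.vstack([rng.uniform(0, 1, (2, 4)), np.ones((1, 4))])   # world quad, homogeneous columns
img1, img2 = P @ poly, P @ R @ poly                   # the two congruent imaged polygons

# step 1: homography mapping polygon 1 onto polygon 2 inside the image
H = homography(list(img1.T), list(img2.T)).real       # real up to rounding (all points are real)

# step 2: fixed points = eigenvectors; the complex pair are the imaged circular points
vals, vecs = np.linalg.eig(H)
k = int(np.argmax(np.abs(vals.imag)))
I_img, J_img = vecs[:, k], np.conj(vecs[:, k])

# step 3: send the imaged circular points to I=(1,i,0), J=(1,-i,0); pin two real points
I, J = np.array([1, 1j, 0]), np.array([1, -1j, 0])
H_rect = homography([I_img, J_img, img1[:, 0], img1[:, 1]],
                    [I, J, np.array([0, 0, 1]), np.array([1, 0, 1])])
H_rect = (H_rect / H_rect[2, 2]).real                 # real up to scale once I, J are matched

def dehom(X):                                          # homogeneous columns -> 2xN points
    return X[:2] / X[2]

r1, r2 = dehom(H_rect @ img1), dehom(H_rect @ img2)
print(np.linalg.norm(np.diff(r1, axis=1), axis=0))     # edge lengths of rectified polygon 1
print(np.linalg.norm(np.diff(r2, axis=1), axis=0))     # same lengths: congruence is restored
```

As in the answer, the reconstruction is only determined up to a similarity, which here is fixed by the two arbitrarily chosen real points.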
Thanks a lot for your answer, it's been very helpful. I am writing because I have some issues: 1) When mapping the complex fixed points to the ideal points I obtain a complex transformation matrix; is that correct? 2) How should I choose the fourth point? Despite it being complex, I am applying the projective transformation to the planar objects' vertices, but it reflects horizontally and vertically. 3) I tried to apply the transformation to the complex fixed points but they didn't transform onto the ideal points; is that correct? – gantzer89 Feb 6 '13 at 2:48
@gantzer89: 1) The transformation matrix should be real, as you map conjugate points onto conjugate points. 2) Choose the fourth pair of preimage and image both as real points. Apart from that, tweak it to control the size and rotation of the reconstruction. If you get "two reflections", then you get a rotation, so rotate P2 around R2 (in my example) to align the image any way you want. 3) If the pairs of points defining the transformation don't end up as preimage and image under said transformation, you made a mistake computing its matrix. The complex fixpoints should map to $I$ and $J$. – MvG Feb 6 '13 at 3:00
Thank you for your patience answering my questions, I continue having problems with the projective transformation, in this other question I am showing the matrices I am using for the computation of the transformation. – gantzer89 Feb 6 '13 at 15:43
http://www.journaltocs.ac.uk/index.php?action=browse&subAction=subjects&publisherID=8&journalID=9523&pageb=3&userQueryID=&sort=&local_page=&sorType=&sorCol=
manuscripta mathematica. Journal Prestige (SJR): 1.053; Citation Impact (CiteScore): 1; Hybrid journal (it can contain Open Access articles); ISSN (Print) 1432-1785; ISSN (Online) 0025-2611; Published by Springer-Verlag
• A note on fibrations of $$G_2$$-manifolds
• Abstract: In this note, we first give some constructions of torsion-free $$G_2$$-structures on some topological product manifolds. Then we provide a sufficient condition of 3-Calabi–Yau fibrations for $$G_2$$-manifolds. Next we study the Gukov–Yau–Zaslow horizontal lifting for hyperKähler fibrations of $$G_2$$-manifolds, and discuss when the Gukov–Yau–Zaslow metric on this fibration is a $$G_2$$-metric.
PubDate: 2019-03-22
• Evolution of the Steklov eigenvalue under geodesic curvature flow
• Abstract: On a two-dimensional compact Riemannian manifold with boundary, we prove that the first nonzero Steklov eigenvalue is nondecreasing along the unnormalized geodesic curvature flow if the initial metric has positive geodesic curvature and vanishing Gaussian curvature. Using the normalized geodesic curvature flow, we also obtain some estimate for the first nonzero Steklov eigenvalue. On the other hand, we prove that the compact soliton of the geodesic curvature flow must be the trivial one.
PubDate: 2019-03-21
• The generalized algebraic conjecture on spherical classes
• Abstract: Let X be a pointed CW-complex. The generalized conjecture on spherical classes states that, the Hurewicz homomorphism $$H: \pi _{*}(Q_0 X) \rightarrow H_{*}(Q_0 X)$$ vanishes on classes of $$\pi _* (Q_0 X)$$ of Adams filtration greater than 2. Let $$\varphi _s: Ext _{\mathcal {A}}^{s}(\widetilde{H}^*(X), \mathbb {F}_2) \rightarrow {(\mathbb {F}_2 \otimes _{{\mathcal {A}}}R_s\widetilde{H}^*(X))}^*$$ denote the sth Lannes–Zarati homomorphism for the unstable $${\mathcal {A}}$$ -module $$\widetilde{H}^*(X)$$ . This homomorphism corresponds to an associated graded of the Hurewicz map. An algebraic version of the conjecture states that the sth Lannes–Zarati homomorphism vanishes in any positive stem for $$s>2$$ and any CW-complex X. We construct a chain level representation for the Lannes–Zarati homomorphism by means of modular invariant theory. We show the commutativity of the Lannes–Zarati homomorphism and the squaring operation. The second Lannes–Zarati homomorphism for $$\mathbb {R}\mathbb {P}^{\infty }$$ vanishes in positive stems, while the first Lannes-Zatati homomorphism for any space is basically non-zero. We prove the algebraic conjecture for $$\mathbb {R}\mathbb {P}^{\infty }$$ and $$\mathbb {R}\mathbb {P}^{n}$$ with $$s=3$$ , 4. We discuss the relation between the Lannes–Zarati homomorphisms for $$\mathbb {R}\mathbb {P}^{\infty }$$ and $$S^0$$ . Consequently, the algebraic conjecture for $$X=S^0$$ is re-proved with $$s=3$$ , 4, 5.
PubDate: 2019-03-19
• The concavity of Rényi entropy power for the parabolic p-Laplace equations and applications
• Abstract: In this paper, we prove the concavity of the Rényi entropy power of positive solutions to the parabolic p-Laplace equations on a compact Riemannian manifold with nonnegative Ricci curvature. As applications, we derive improved $$L^p$$-Gagliardo–Nirenberg inequalities.
PubDate: 2019-03-16
• A Note on Generic Transversality of Euclidean Submanifolds
• PubDate: 2019-03-15
• Bernstein theorem for translating solitons of hypersurfaces
• Abstract: In this paper, we prove a monotonicity formula and some Bernstein type results for translating solitons of hypersurfaces in $$\mathbb {R}^{n+1}$$ , giving some conditions under which a translating soliton is a hyperplane. We also show a gap theorem for the translating soliton of hypersurfaces in $$R^{n+k}$$ , namely, if the $$L^n$$ norm of the second fundamental form of the soliton is small enough, then it is a hyperplane.
PubDate: 2019-03-15
• A square root of Hurwitz numbers
• Abstract: We exhibit a generating function of spin Hurwitz numbers analogous to (disconnected) double Hurwitz numbers that is a tau function of the two-component BKP (2-BKP) hierarchy and is a square root of a tau function of the two-component KP (2-KP) hierarchy defined by related Hurwitz numbers.
PubDate: 2019-03-15
• On the density theorem related to the space of non-split tri-Hermitian forms II
• Abstract: Let $${\widetilde{k}}$$ be a fixed cubic field, F a quadratic field and $$L=\widetilde{k}\cdot F$$ . In this paper and its companion paper, we determine the density of more or less the ratio of the residues of the Dedekind zeta functions of L, F where F runs through quadratic fields.
PubDate: 2019-03-14
• Erratum to: A note on Galois embeddings of abelian varieties
• Abstract: The original Theorem in the article is revised in this erratum based on a referee’s request.
PubDate: 2019-03-01
• Motivic multiplicative McKay correspondence for surfaces
• Abstract: We revisit the classical two-dimensional McKay correspondence in two respects: The first one, which is the main point of this work, is that we take into account of the multiplicative structure given by the orbifold product; second, instead of using cohomology, we deal with the Chow motives. More precisely, we prove that for any smooth proper two-dimensional orbifold with projective coarse moduli space, there is an isomorphism of algebra objects, in the category of complex Chow motives, between the motive of the minimal resolution and the orbifold motive. In particular, the complex Chow ring (resp. Grothendieck ring, cohomology ring, topological K-theory) of the minimal resolution is isomorphic to the complex orbifold Chow ring (resp. Grothendieck ring, cohomology ring, topological K-theory) of the orbifold surface. This confirms the two-dimensional Motivic Crepant Resolution Conjecture.
PubDate: 2019-03-01
• On path-components of the mapping spaces $$M(\mathbb {S}^m,\mathbb {F}P^n)$$
• Abstract: We estimate the number of homotopy types of path-components of the mapping spaces $$M(\mathbb {S}^m,\mathbb {F}P^n)$$ from the m-sphere $$\mathbb {S}^m$$ to the projective space $$\mathbb {F}P^n$$ for $$\mathbb {F}$$ being the real numbers $$\mathbb {R}$$ , the complex numbers $$\mathbb {C}$$ , or the skew algebra $$\mathbb {H}$$ of quaternions. Then, the homotopy types of path-components of the mapping spaces $$M(E\Sigma ^m,\mathbb {F}P^n)$$ for the suspension $$E\Sigma ^m$$ of a homology m-sphere $$\Sigma ^m$$ are studied as well.
PubDate: 2019-03-01
• Local energy inequalities for mean curvature flow into evolving ambient spaces
• Abstract: We establish a local monotonicity formula for mean curvature flow into a curved space whose metric is also permitted to evolve simultaneously with the flow, extending the work of Ecker (Ann Math (2) 154(2):503–525, 2001), Huisken (J Differ Geom 31(1):285–299, 1990), Lott (Commun Math Phys 313(2):517–533, 2012), Magni, Mantegazza and Tsatis (J Evol Equ 13(3):561–576, 2013) and Ecker et al. (J Reine Angew Math 616:89–130, 2008). This formula gives rise to a monotonicity inequality in the case where the target manifold’s geometry is suitably controlled, as well as in the case of a gradient shrinking Ricci soliton. Along the way, we establish suitable local energy inequalities to deduce the finiteness of the local monotone quantity.
PubDate: 2019-03-01
• On the isometry group of $$RCD^*(K,N)$$-spaces
• Abstract: We prove that the group of isometries of a metric measure space that satisfies the Riemannian curvature condition, $$RCD^*(K,N),$$ is a Lie group. We obtain an optimal upper bound on the dimension of this group, and classify the spaces where this maximal dimension is attained.
PubDate: 2019-03-01
• A fractional elliptic problem in $$\mathbb {R}^n$$ with critical growth and convex nonlinearities
• Abstract: In this paper we prove the existence of a positive solution of the nonlinear and nonlocal elliptic equation in $$\mathbb {R}^n$$ \begin{aligned} (-\Delta )^s u =\varepsilon h u^q+u^{2_s^*-1} \end{aligned} in the convex case $$1\le q<2_s^*-1$$ , where $$2_s^*={2n}/({n-2s})$$ is the critical fractional Sobolev exponent, $$(-\Delta )^s$$ is the fractional Laplace operator, $$\varepsilon$$ is a small parameter and h is a given bounded, integrable function. The problem has a variational structure and we prove the existence of a solution by using the classical Mountain-Pass Theorem. We work here with the harmonic extension of the fractional Laplacian, which allows us to deal with a weighted (but possibly degenerate) local operator, rather than with a nonlocal energy. In order to overcome the loss of compactness induced by the critical power we use a Concentration-Compactness principle. Moreover, a finer analysis of the geometry of the energy functional is needed in this convex case with respect to the concave–convex case studied in Dipierro et al. (Fractional elliptic problems with critical growth in the whole of $$\mathbb {R}^n$$ . Lecture Notes Scuola Normale Superiore di Pisa, vol 15. Springer, Berlin, 2017).
PubDate: 2019-03-01
• Pairs of solutions for Robin problems with an indefinite and unbounded potential, resonant at zero and infinity
• Abstract: We consider a semilinear Robin problem driven by the Laplacian plus an indefinite and unbounded potential and a Caratheodory reaction term which is resonant both at zero and $$\pm \infty$$ . Using the Lyapunov–Schmidt reduction method and critical groups (Morse theory), we show that the problem has at least two nontrivial smooth solutions.
PubDate: 2019-03-01
• On invariant Riemannian metrics on Ledger–Obata spaces
• Abstract: We study invariant metrics on Ledger–Obata spaces $$F^m/{\text {diag}}(F)$$ . We give the classification and an explicit construction of all naturally reductive metrics, and also show that in the case $$m=3$$ , any invariant metric is naturally reductive. We prove that a Ledger–Obata space is a geodesic orbit space if and only if the metric is naturally reductive. We then show that a Ledger–Obata space is reducible if and only if it is isometric to the product of Ledger–Obata spaces (and give an effective method of recognising reducible metrics), and that the full connected isometry group of an irreducible Ledger–Obata space $$F^m/{\text {diag}}(F)$$ is $$F^m$$ . We deduce that a Ledger–Obata space is a geodesic orbit manifold if and only if it is the product of naturally reductive Ledger–Obata spaces.
PubDate: 2019-03-01
• Algebraic surfaces with $$p_g = q = 1$$, $$K^2 = 4$$ and genus 3 Albanese fibration
• Abstract: In this paper, we study the Gieseker moduli space $$\mathcal {M}_{1,1}^{4,3}$$ of minimal surfaces with $$p_g=q=1, K^2=4$$ and genus 3 Albanese fibration. Under the assumption that direct image of the canonical sheaf under the Albanese map is decomposable, we find two irreducible components of $$\mathcal {M}_{1,1}^{4,3}$$ , one of dimension 5 and the other of dimension 4.
PubDate: 2019-03-01
• Tautological ring of strata of differentials
• Abstract: Strata of k-differentials on smooth curves parameterize sections of the k-th power of the canonical bundle with prescribed orders of zeros and poles. Define the tautological ring of the projectivized strata using the $$\kappa$$ and $$\psi$$ classes of moduli spaces of pointed smooth curves along with the class $$\eta = \mathcal O(-1)$$ of the Hodge bundle. We show that if there is no pole of order k, then the tautological ring is generated by $$\eta$$ only, and otherwise it is generated by the $$\psi$$ classes corresponding to the poles of order k.
PubDate: 2019-03-01
• Classifying Fano complexity-one T-varieties via divisorial polytopes
• Authors: Nathan Ilten; Marni Mishna; Charlotte Trainor
Abstract: The correspondence between Gorenstein Fano toric varieties and reflexive polytopes has been generalized by Ilten and Süß to a correspondence between Gorenstein Fano complexity-one T-varieties and Fano divisorial polytopes. Motivated by the finiteness of reflexive polytopes in fixed dimension, we show that over a fixed base polytope, there are only finitely many Fano divisorial polytopes, up to equivalence. We classify two-dimensional Fano divisorial polytopes, recovering Huggenberger’s classification of Gorenstein del Pezzo $$\mathbb {K}^*$$ -surfaces. Furthermore, we show that any three-dimensional Fano divisorial polytope is equivalent to one involving only eight functions.
PubDate: 2018-05-14
DOI: 10.1007/s00229-018-1036-x
• Hölder regularity for bounded solutions to a class of anisotropic operators
• Authors: Stella Piro-Vernier; Francesco Ragnedda; Vincenzo Vespri
Abstract: In this note we show the Hölder regularity for bounded solutions to a class of anisotropic elliptic operators. This result is the dual of the one proved by Liskevich and Skrypnik (Nonlinear Anal 71:1699–1708, 2009).
PubDate: 2018-05-02
DOI: 10.1007/s00229-018-1034-z
https://mathoverflow.net/questions/366768/how-to-estimate-the-order-of-this-integral-with-parameter | # How to estimate the order of this integral with parameter
Some introduction: Given a homogeneous structure called "dilation" in $$R^n$$: For $$t\geq 0$$ $$D_t: R^n\rightarrow R^n$$ $$D_t(x)=(t^{a_1}x_1,...,t^{a_n}x_n)$$ where $$1=a_1\leq...\leq a_n$$, and $$a_i$$ are all integers. And we call $$Q=a_1+...+a_n$$ the homogeneous dimension. In our problem, we only consider when $$Q>n\geq 2$$.
Now consider the integral: $$J(r)=\int_{[0,1]^n}\frac{dx}{P(x,r)}=\int_{[0,1]^n}\frac{dx}{f_n(x)r^n+f_{n+1}(x)r^{n+1}+...+f_Q(x)r^Q}$$ where $$f_k(x)$$ satisfies:
(1) $$f_k(D_t(x))=t^{Q-k}f_k(x)$$ for all $$x\in R^n$$ and $$t\geq0$$
(2) $$f_k(x)$$ is the combination of some positive monomials. (Examples will be shown below)
(3) $$f_Q(x)=Constant>0$$. (This property follows from other theorems and propositions, but they are too many so I don't describe them here.)
Four examples are the followings:
(ex1) In $$R^2$$, $$D_t(x)=(tx_1,t^2x_2)$$, so $$Q=3$$. And Let $$P(x,r)=x_1r^2+r^3$$.
(ex2) In $$R^3$$, $$D_t(x)=(tx_1,tx_2,t^2x_3)$$, so $$Q=4$$. Let $$P(x,r)=(x_1+x_2)r^3+r^4$$
(ex3) In $$R^3$$, $$D_t(x)=(t^{1}x_1,t^2x_2,t^{3}x_3)$$, so $$Q=6$$. Let $$P(x,r)= x_1^3r^3+(x_2+3x_1^2)r^4+5x_1r^5+3r^6$$
(ex4) In $$R^3$$, $$D_t(x)=(t^{1}x_1,t^2x_2,t^{3}x_3)$$, so $$Q=6$$. Let $$P(x,r)= x_1x_2r^3+(x_2+2x_1^2)r^4+3x_1r^5+r^6$$
(You will find that $$x_n$$ plays no role. In my work $$x_n$$ really does not appear in the integral, but this follows from other theorems and it doesn't matter here.)
Problem: Find the order of $$J(r)$$ as $$r$$ goes to $$0^+$$, in the sense described below.
Attempt and information: I guess $$J(r)=\frac{1}{r^\alpha}I(r)$$, where the $$\alpha$$ is the "critical value", that is:
(i) $$\liminf\limits_{r\rightarrow0^+}I(r)>0$$.
(ii) for any $$\epsilon>0$$, $$\lim\limits_{r\rightarrow0^+}r^\epsilon I(r)=0$$.
I will give the reason why I guess so below. Set $$g_p(r)=r^p J(r)$$. I can show that there exists $$p_0$$ s.t. when $$a<p_0$$, $$\lim\limits_{r\rightarrow0^+}g_a(r)>0$$, and when $$a>p_0$$, $$\lim\limits_{r\rightarrow0^+}g_a(r)=0$$. But I can't show $$\lim\limits_{r\rightarrow0^+}g_{p_0}(r)>0$$, that is, I can't show (i) above (see https://math.stackexchange.com/questions/3769564/how-to-find-the-critical-index-a-of-xafx). Someone gave a counterexample to the proposition in that link, but that counterexample cannot occur in this problem, because here we integrate a rational function. The $$I(r)$$ I expect will be some combination of $$\log$$ and $$\arctan$$.
The four example have the order estimates:
(ex1) We can calculate directly: $$J(r)=\frac{1}{r^2}\ln(1+\frac{1}{r})=\frac{1}{r^2}I(r)$$ where $$I(r)$$ satisfies (i)(ii) above.
(ex2) $$J(r)=\frac{1}{r^3}I(r)$$ where $$I(r)$$ can be calculate or one can use Dominate convergence theorem to estimate that $$I(r)$$ satisfies (i)(ii)
(ex3) $$J(r)=\frac{1}{r^{3+2/3}}I(r)$$ see https://math.stackexchange.com/questions/3718932/estimate-a-integral-with-parameter
(ex4) $$J(r)=\frac{1}{r^{3}}I(r)$$ First $$J(r)=\frac{1}{r^3}\int_{[0,1]^2}\frac{dxdy}{xy+(y+2x^2)r+3xr^2+r^3}=\frac{1}{r^3}I(r)$$ we can show $$I(r)$$ satisfies (i)(ii):
(i) change variables: $$I(r)=\int_{0}^{1/r^2}\int_{0}^{1/r}\frac{dxdy}{xy+(y+2x^2)+3x+1}$$ and then obviously.
(ii) for $$3>\epsilon>0$$ (the part $$\epsilon\geq 3$$ follows from the part $$3>\epsilon>0$$), $$r^\epsilon I(r)=\int_{[0,1]^2}\frac{r^\epsilon}{xy+(y+2x^2)r+3xr^2+r^3}dxdy=\int_{[0,1]^2}h_r(x,y)dxdy=\int_{(0,1)^2}h_r(x,y)dxdy$$ Pointwise, $$\lim\limits_{r\rightarrow0^+}h_r(x,y)=0$$ in $$(0,1)^2$$. Now look for a dominating function in $$(0,1)^2$$: $$\frac{1}{h_r(x,y)}\geq \frac{xy}{r^\epsilon}+r^{3-\epsilon}\geq C(xy)^{1-\frac{\epsilon}{3}}$$ So $$h_r(x,y)\leq \frac{C'}{(xy)^{1-\frac{\epsilon}{3}}}$$ in $$(0,1)^2$$, which is integrable. By DCT, $$I(r)$$ satisfies (ii). But this method doesn't work in other examples like (ex3).
Based on the four examples, I guess $$J(r)=\frac{1}{r^\alpha}I(r).$$ But I can't show how to find the critical value $$\alpha$$, and it's even difficult to show that the critical value exists.
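(Added for illustration, not part of the original question: a quick numerical sanity check of (ex1) with SciPy. The last printed value is $$r^2 J(r)/\log(1/r)$$, which should settle near a constant, consistent with the $$\frac{1}{r^2}\log\frac{1}{r}$$ order.)

```python
import numpy as np
from scipy.integrate import quad

for r in [1e-1, 1e-2, 1e-3]:
    J_num, _ = quad(lambda x: 1.0 / (x * r**2 + r**3), 0.0, 1.0)   # (ex1): P(x,r) = x*r^2 + r^3
    J_exact = np.log(1.0 + 1.0 / r) / r**2                          # closed form from above
    print(r, J_num, J_exact, J_num * r**2 / np.log(1.0 / r))
```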
It looks like you care only about the order of magnitude (i.e., an answer up to a constant factor), in which case it is fairly easy.
First, ignore all coefficients. Setting them to $$1$$ just changes the answer at most constant number of times. Now, suppose we have the denominator of the form $$\sum_{(\alpha,\beta)} x^\alpha r^\beta$$ where $$\alpha$$ is a multi-index with real entries and $$\beta$$ is a real number. The sum is assumed to be finite. Make the change of variable $$x_j=e^{-y_j}$$. Now, at each point, only the maximal term matters (up to a factor that is the total number of terms). In terms of $$y$$'s, the condition of maximality of $$x^\alpha r^\beta$$ is $$y_j\ge 0$$, $$\langle y,\alpha-\alpha'\rangle\le (\beta'-\beta)\log(1/r)$$ for all $$(\alpha',\beta')\ne(\alpha,\beta)$$. This domain is just a fixed polyhedron $$P_{\alpha,\beta}$$ stretched $$\log(1/r)$$ times (we keep only those with non-empty interiors in what follows; also I call it a "polyhedron" though, technically, it can be unbounded). Thus, $$J(r)\asymp\sum_{(\alpha,\beta)}r^{-\beta}\int_{(log\frac 1r)P_{\alpha,\beta}}e^{\psi_{\alpha,\beta}(y)}\,dy$$ where $$\psi_{\alpha,\beta}(y)=\langle \alpha-e,y\rangle$$, $$e=(1,\dots,1)$$.
Now the life becomes straightforward. All you need is to find the order of magnitude of each integral. I'll drop the indices $$\alpha,\beta$$ for brevity. Let $$F$$ be the face of $$P$$ on which $$\psi$$ attains its maximum $$p$$ and let $$d$$ be the dimension of $$F$$. If $$\psi\equiv 0$$ (i.e., $$\alpha=e$$), we just have $$F=P$$ and $$\int_{(\log\frac 1r)P}e^{\psi}=V(P)\log^d(1/r)$$. Consider now the non-trivial situation when $$\psi$$ is not $$0$$. Then we can rotate and shrink the coordinate system so that $$-\psi(y)$$ becomes a new variable $$t$$. Also we can shift $$P$$ along this coordinate so that the face $$F$$ lies on the corresponding coordinate hyperplane $$\{t=0\}$$. Then the integral in question is just $$e^{p\log(1/r)}(\log^{D-1}\frac 1r)\int_{0}^\infty e^{-t}S_P(\frac t{\log{1/r}})\,dt$$ where $$S_P(\tau)$$ is the $$D-1$$-dimensional volume of the cross-section of $$P$$ by the hyperplane $$\{t=\tau\}$$. By the general convex geometry nonsense, for small $$\tau$$, $$S_P(\tau)=v_d\tau^{D-1-d}+v_{d-1}\tau^{D-d}+\dots+v_0\tau^{D-1}$$ where $$v_d>0$$ and then it becomes smaller (look up "mixed volumes" on Google if you are interested in the details), whence the leading term in the integral becomes $$\log^d\frac 1r$$ with some coefficient depending on $$P$$. Thus, the final answer for the integral we are interested in with the factor $$r^{-\beta}$$ is $$\asymp r^{-p_{\alpha,\beta}-\beta}\log^{d_{\alpha,\beta}}\frac 1r$$
We have several competing terms like that, so the winning one is the one with largest $$p+\beta$$ and among those the one with the largest $$d$$.
In your last example $$x_1x_2+x_1^2r+x_2r+x_1r^2+r^3$$ (I ignore $$r^3$$ that can be carried out and all the coefficients), we have $$5$$ polyhedra and functionals (I drop the trivial restrictions $$y_1,y_2\ge 0$$): $$P_{1,1,0}=\{-y_1+y_2\le 1, y_1\le 1, y_2\le 2, y_1+y_2\le 3\}, \\ \psi_{1,1,0}(y)=0 \\ P_{2,0,1}=\{y_1-y_2\le -1, 2y_1-y_2\le 0, y_1\le 1,2y_1\le 2\}, \\ \psi_{2,0,1}(y)=y_1-y_2 \\ et\ cetera.$$ Here $$P_{1,1,0}$$ dominates and yields $$\log^2\frac 1r$$ but it may be instructive to find the contribution of $$P_{2,0,1}$$. In this case (just draw the picture) $$p=-1$$, $$\beta=1$$, $$d=1$$, so we get $$\log\frac 1r$$.
• @Houa Oops, I forgot about $r^\beta$ in my final answer. I edited. So the contribution of $P_2$ is actually $\log(1/r)$. However, it is still $P_1$ that dominates (contributing $\log^2(1/r)$, so $J(r)\asymp r^{-3}\log^2(1/r)$ in full agreement with what you wrote). As to question 1, by my (corrected) formula the integral is $\asymp r^{-1}\int_{\log(1/r)P_{0,1}}e^{-y}dy+\int_{\log(1/r)P_{1,0}}1dy$. From the definition with linear inequalities (just drop $\log(1/r)$ on the RHS), $P_{1,0}=\{0\le y\le 1\}=[0,1]$ and $P_{0,1}=\{y>0,-y\le-1\}=[1,+\infty]$ so you get your $\log(1/r)+1\asymp\log(1/r)$ – fedja Jul 29 at 12:02
• @Houa Is it clearer now? If not, what should I clarify? (I addressed two explicit questions you asked but I suspect more clarification may be still needed, so don't hesitate to ask for it but try to explain what exactly you are confused about) – fedja Jul 29 at 12:25
• (3) Yes, of course. As I said "ignore everything with empty interior". (4) It follows from the physical meaning of your particular problem: since you have a constant term, the integral should converge, and if $\psi$ is unbounded (or the face $F$ on which the maximum is attained is unbounded), you'll certainly just get $+\infty$ immediately. This can happen in the general setting I considered but your particular case is tame. (5) Yes, you can think of it as first rotating the rectangle and then shifting it or you can think of first shifting the origin and then rotating the coordinate frame. – fedja Jul 30 at 14:17
• @Houa (6) Yes, it is related to it (read the previous comment for (3-5); I forgot to address it to you). As to the integral, just integrate term by term. The claim is that if $f(t)$ is any function that equals $p(t)=\sum_k c_kt^k$ ($c_k>0$) near the origin and is always between $0$ and $p(t)$, then $\int_0^\infty f(\delta t)e^{-t}\,dt\approx \sum_k k!c_k\delta^k$ as $\delta\to 0$. – fedja Jul 30 at 14:24
• @Houa That's all right, but note that I also first shifted the origin to the point where the maximum is attained (so I should rather say "making $t=p-\psi(y)$ a new coordinate" if I had used the original $y$, not $y$ with respect to the shifted origin), in which case the shift in the argument of $S_P$ disappears and our formulae agree. I apologize for being somewhat confusing here – fedja Jul 31 at 14:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 133, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999200105667114, "perplexity": 2066.5531622907533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107884322.44/warc/CC-MAIN-20201024164841-20201024194841-00590.warc.gz"} |
http://mathhelpforum.com/calculus/18992-second-fundamental-theorem-calculus-print.html | # Second Fundamental Theorem of Calculus
• September 15th 2007, 10:37 AM
Fourier
Second Fundamental Theorem of Calculus
Is it possible to use the Second Fundamental Theorem of Calculus on $C(x)$ to find $C'(x)$, if
$C(x)=\sum_{j=1}^{n}{\bigg|N_{j}\int_{x}^{x_j}{k(s) ds}\bigg|}$,
where $N_j$ is some constant, and $k(s)$ is integrable over every interval?
• September 15th 2007, 12:31 PM
CaptainBlack
Quote:
Originally Posted by Fourier
Is it possible to use the Second Fundamental Theorem of Calculus on $C(x)$ to find $C'(x)$, if
$C(x)=\sum_{j=1}^{n}{\bigg|N_{j}\int_{x}^{x_j}{k(s) ds}\bigg|}$,
where $N_j$ is some constant, and $k(s)$ is integrable over every interval?
Yes
RonL
• September 15th 2007, 01:38 PM
Fourier
Would $C'(x)$ be
$C'(x)=\sum_{j=1}^{n}{\bigg|N_j k(x_j)\bigg|} \textrm{ ?}$
If this is correct, then it implies that the derivative is always nonnegative and increasing as the sum increases from 1 to n.
• September 17th 2007, 08:16 AM
Fourier
Can anyone check my work?
Thanks | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9826605916023254, "perplexity": 501.4534305301673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644063881.16/warc/CC-MAIN-20150827025423-00107-ip-10-171-96-226.ec2.internal.warc.gz"} |
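A quick numerical check with a concrete choice of $k$, $N_j$ and $x_j$ (made up here purely for illustration) can settle the question: since $x$ is the lower limit of integration, differentiating $\int_{x}^{x_j}k(s)\,ds$ gives $-k(x)$, so the integrand ends up evaluated at $x$, not at $x_j$.

import numpy as np
from scipy.integrate import quad

# Illustrative choices (not from the thread): k(s) = s^2, N = [2, -3], xj = [1.0, 2.0]
k = lambda s: s**2
N, xj = [2.0, -3.0], [1.0, 2.0]

def C(x):
    return sum(abs(Nj * quad(k, x, xjj)[0]) for Nj, xjj in zip(N, xj))

x, h = 0.3, 1e-6
numeric = (C(x + h) - C(x - h)) / (2 * h)                        # central finite difference
proposed = sum(abs(Nj) * k(xjj) for Nj, xjj in zip(N, xj))       # the formula proposed above
via_ftc = -k(x) * sum(np.sign(Nj * quad(k, x, xjj)[0]) * Nj for Nj, xjj in zip(N, xj))

print(numeric, proposed, via_ftc)   # the finite difference agrees with via_ftc, not with proposed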
http://mathematica.stackexchange.com/questions/36843/using-a-mathematica-function-to-define-a-new-function | # using a Mathematica function to define a new function
I'd like to define a function of three variables which produces a new, named function of a single variable, where this final variable is not a member of the first three. So I'd like something where I have
f0[x_ , y_ , z_] := << complicated function >>

which gives

f1[p_]

Furthermore, I'd like to have the functions be defined in such a way that when I write

IN:= f0[x0,y0,z0]

I get

OUT:= fx0y0z0[p_]

so that I can easily identify which inputs produced the function. So, basically, I'd like to wind up with a function produced via Interpolation and I'd like that function to have a specific name defined by my inputs.
For instance, let's say I'm creating a numerical function by writing down a table in two variables (say, p and q) that requires x, y, and z as inputs. Then I'd like to interpolate the table and integrate over p, so that my final function is a function of q only.
So, let's say that my first function is
g[x_,y_,z_,p_,q_] := (x + y + z) p / q

Now I'd like to tabulate this over p and q for given values of x, y, and z. Then I'd like to integrate over p and have a function of q only. I can for instance do the following

f1[x_,y_,z_] := NIntegrate[ Interpolation[ Flatten[Table[{q, p, g[x,y,z,p,q]}, {p,p0,p1}, {q,q0,q1}],1]][#, p], {p, p0, p1}] &

and then this is a well-behaved function of q which I can make tables of, which I can then plot, integrate, etc. just by writing f1[x0,y0,z0][q]. But this is not very convenient for me, since it requires me to write out a new function name every time I want to examine the behavior for different values of x, y, and z, and I will ultimately need many values of x, y, and z. Is there any way to write a meta-function that is capable of producing a brand new interpolating function of q only, with the name including the input values of x, y, and z?
Thanks for your help,
Sam
-
A solution like the one to this question might be useful to you. – episanty Nov 12 '13 at 19:00
I typically use pure functions for this type of meta programming. For example:
generator[p_,q_] := Function[{x,y,z},
Evaluate[
Integrate[ (x+y+z)/(p+q), {p,p0,p1},{q,q0,q1}]
]
]
Then one can use it as
fpXqY = generator[X,Y]
And then fpXqY will be a pure function you can use. This only works for function that can be treated as pure functions. However, one can also do something similar by just calling SetDelayed[] (which is the full form of :=) within your generator to create a new function. So something like:
generator2[p_,q_] := SetDelayed[
ToExpression[ StringJoin[ "fp", ToString[p], "q", ToString[q]]][x_, y_, z_]
, Integrate[ (x+y+z)/(p+q), {p,p0,p1},{q,q0,q1}]
]
And then call it as
generator2[X,Y]
and you should then find you can use fpXqY as your evaluator. Note that you can either use Set or SetDelayed as you need.
-
Thanks -- this seems to be on the right track. However, my goal is to produce a function with an entirely new name. Say I have g[x_,y_,z_,p_,q_]:=(x+y+z)p/q f1[x_,y_,z_] := NIntegrate[Interpolation[Flatten[Table[{q,p,g[x,y,z,p,q]},{p,10}, {q,10}],1]][#,p],{p,1,10}]& as before. What is wrong with the following? generator[x_,y_,z_]:=SetDelayed[ToExpression[ToString[StringForm["f1",x,y,z]]][#]&,Interpolation[Table[f1[x,y,z][p],{p,10}]]] I'd like to produce, e.g., f1323 = Interpolation[Table[f1[3,2,3][x],{x,10}]]; which is easily identifiable and very fast. Thank you for your help! – user1451632 Nov 12 '13 at 16:43
@user1451632 right sorry, let me fix that -- see the updated generator 2. In general, you should avoid using StringForm for producing function names -- the *Form sometimes introduces odd quirks – tkott Nov 12 '13 at 17:31
Terrific! This really helps. Thank you for your time. – user1451632 Nov 12 '13 at 21:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8292410373687744, "perplexity": 760.2934362867456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00023-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://socratic.org/precalculus/functions-defined-and-notation/symmetry | Precalculus
Topics
# Symmetry
## Key Questions
It is a line about which the shape/curve is repeated, as if the shape had been rotated about that line by ${180}^{o}$ (i.e. reflected in it).
See diagram/graph
#### Explanation:
$\textcolor{green}{\text{The line of symmetry is the y axis}}$
$f \left(x , y\right) = {x}^{2} + x y + {y}^{2}$
$g \left(x , y , z\right) = x y + y z + z x + \frac{1}{{x}^{2} + {y}^{2} + {z}^{2}}$
#### Explanation:
A symmetric function is a function in several variable which remains unchanged for any permutation of the variables.
For example, if $f \left(x , y\right) = {x}^{2} + x y + {y}^{2}$, then $f \left(y , x\right) = f \left(x , y\right)$ for all $x$ and $y$.
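A quick way to check this for concrete expressions (a small Python/SymPy sketch, not part of the original answer) is to permute the variables and verify that nothing changes:

from sympy import symbols, simplify

x, y, z = symbols('x y z')
f = x**2 + x*y + y**2
g = x*y + y*z + z*x + 1/(x**2 + y**2 + z**2)

# swap x and y in f, and cyclically permute (x, y, z) in g; both differences are 0
print(simplify(f - f.subs([(x, y), (y, x)], simultaneous=True)))
print(simplify(g - g.subs([(x, y), (y, z), (z, x)], simultaneous=True)))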
How many times is the same shape seen if a figure is turned through 360°
#### Explanation:
Symmetry means that there is a 'sameness' about two figures. There are two types of symmetry - line symmetry and rotational symmetry.
Line symmetry means that if you draw a line through the middle of a figure, one side is a mirror image of the other.
Rotational symmetry is the symmetry of turning.
If you turn a shape through 360°, sometimes the identical shape is seen again during the turn. This is called rotational symmetry.
For example, a square has 4 sides, but the square will look exactly the same no matter which of its sides is at the top.
Rotational symmetry is described by the number of times the same shape is seen during the 360° rotation.
A square has rotational symmetry of order 4,
An equilateral triangle has rotational symmetry of order 3.
A rectangle and a rhombus have rotational symmetry of order 2.
A regular pentagon has rotational symmetry of order 5. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8255053758621216, "perplexity": 495.48876464141495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249490870.89/warc/CC-MAIN-20190223061816-20190223083816-00170.warc.gz"} |
https://www.lessonplanet.com/teachers/match-uppercase-to-lowercase-letters-f-g-h-i-and-j | # Match Uppercase to Lowercase-- Letters F, G, H, I and J
In this matching letters worksheet, students draw lines to match upper and lower case examples of letters F, G, H, I and J. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9171276092529297, "perplexity": 2362.07757823056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690035.53/warc/CC-MAIN-20170924152911-20170924172911-00588.warc.gz"} |
https://www.healthline.com/nutrition/maple-syrup | # Maple Syrup: Healthy or Unhealthy?
Written by Kris Gunnars, BSc on June 4, 2017
One of the more popular sweeteners today is maple syrup.
It is a 100% natural sweetener that is claimed to be more nutritious and healthier than sugar.
There are many claims about maple syrup online and I'd like to separate the facts from the fiction.
## What Is Maple Syrup and How Is It Made?
Maple syrup is made from the sugary circulating fluid (sap) of maple trees.
It has been consumed for many centuries in North America... since the times of Native Americans.
Over 80% of the world's supply is now produced in Canada.
Maple syrup is made in a natural 2-step process:
1. A hole is drilled in the maple tree. Then the sugary circulating fluid leaks out and is collected into a container.
2. The sugary fluid is boiled until most of the water evaporates, leaving a thick sugary syrup, which is then filtered to remove impurities.
If you want to see how incredibly simple it is to make maple syrup, then check out this cool video of a guy making his own from wild maple trees.
Bottom Line: Maple syrup is made by evaporating the sugary circulating fluid (sap) from maple trees, leaving a thick syrup. It has been consumed for many centuries in North America.
## Different Grades of Maple Syrup
There are several different "grades" of maple syrup, depending on the color.
The exact way they are classified can vary between countries.
In the United States, maple syrup is either classified as grade A or grade B (1).
• Grade A is further categorized into 3 groups: Light Amber, Medium Amber and Dark Amber.
• Grade B is the darkest of them all.
The main difference between them, is that the darker syrups are made from sap that is extracted later in the harvesting season.
The dark syrups have a stronger maple flavor and are usually used for baking or in recipes, while the lighter ones are rather used directly as syrups... for example on pancakes.
If you're going to buy maple syrup, then make sure to get actual maple syrup, not just maple-flavored syrup... which can be loaded with refined sugar or high fructose corn syrup.
As with any other food, make sure to read the label.
Bottom Line: There are several different grades of maple syrup, depending on the color. Grade B is the darkest, with the strongest maple flavor.
## It Contains Some Vitamins and Minerals, But is Also High in Sugar
The main thing that sets maple syrup apart from refined sugar, is the fact that it also contains some minerals and antioxidants.
100 grams of maple syrup contain (2):
• Calcium: 7% of the RDA.
• Potassium: 6% of the RDA.
• Iron: 7% of the RDA.
• Zinc: 28% of the RDA.
• Manganese: 165% of the RDA.
True, maple syrup does contain a decent amount of some minerals, especially manganese and zinc, but keep in mind that it also contains a whole bunch of sugar.
Maple syrup is about two-thirds sucrose (the same sugar as table sugar), so 100 grams of it supply around 67 grams of sugar.
Really... sugar can be seriously harmful. Consumed in excess, it is believed to be among the leading causes of some of the world's biggest health problems, including obesity, type 2 diabetes and heart disease (3, 4, 5).
The fact that maple syrup contains some minerals is a very poor reason to eat it, given the high sugar content. Most people are already eating way too much sugar.
The best way to get these minerals is to eat real foods. If you eat a balanced diet of plants and animals, then your chances of lacking any of these minerals are very low.
But if you're going to eat a sugar-based sweetener anyway, then replacing refined sugar in recipes with an identical amount of maple syrup will cut the total sugar content by a third.
The glycemic index of maple syrup seems to be around 54, compared to table sugar which has a glycemic index of around 65 (6).
This is a good thing and implies that maple syrup raises blood sugar slower than regular sugar.
Bottom Line: Maple syrup contains a small amount of minerals, especially manganese and zinc. However, it is also very high in sugar (about 67%).
## Maple Syrup Contains at Least 24 Different Antioxidants
Oxidative damage is believed to be among the mechanisms behind ageing and many diseases.
It consists of undesirable chemical reactions that involve free radicals... that is, molecules with unstable electrons.
Antioxidants are substances that can neutralize free radicals and reduce oxidative damage, potentially lowering the risk of some diseases.
Several studies have found that maple syrup is a decent source of antioxidants. One study found 24 different antioxidant substances in maple syrup (7).
The darker syrups (like Grade B) contain more of these beneficial antioxidants than the lighter syrups (8).
However, same as with the minerals, the total amount of antioxidants is still low compared to the large amounts of sugar.
One study estimates that replacing all the refined sugar in the average diet with "alternative" sweeteners like maple syrup will increase the total antioxidant load of the diet similar to eating a single serving of nuts or berries (9).
If you need to lose weight or improve your metabolic health, then you would be better off skipping caloric sweeteners altogether instead of going for a "less bad" version of sugar.
Bottom Line: There are a number of antioxidant substances found in maple syrup, but the amount is still low compared to the large amount of sugar.
## Maple Syrup Has Been Studied in Test Tubes, But no Human Studies are Available
Numerous potentially beneficial substances have been found in maple syrup.
Some of these compounds are not present in the maple tree, but they form when the sugary fluid is boiled to form the syrup.
One of these is a compound called quebecol, named after Quebec, a province in Canada that produces large amounts of maple syrup.
The active compounds in maple syrup have been shown to help reduce the growth of cancer cells and may slow down the breakdown of carbohydrates in the digestive tract (10, 11, 12, 13, 14).
But really... these test tube studies are almost meaningless when it comes to human health. They tell us absolutely nothing about what happens in a living, breathing person.
## The Bottom Line: It's Slightly "Less Bad" Than Sugar
Even though maple syrup does contain some nutrients and antioxidants, it is also very high in sugar.
Calorie for calorie (and sugar gram for sugar gram), maple syrup is a very poor source of nutrients compared to "real" foods like vegetables, fruits and unprocessed animal foods.
Replacing refined sugar with pure, quality maple syrup is likely to yield a net health benefit, but adding it to your diet will just make things worse.
Maple syrup is a "less bad" version of sugar... kind of like honey and coconut sugar. That does NOT make it healthy.
As with all sugar-based sweeteners, if you're going to eat it, make sure to do so in moderation only.
An evidence-based nutrition article from our experts at Authority Nutrition.
http://stats.stackexchange.com/questions/46588/why-are-gaussian-process-models-called-non-parametric | # Why are Gaussian process models called non-parametric?
I am a bit confused. Why are Gaussian processes called non parametric models?
They do assume that the functional values, or a subset of them, have a Gaussian prior with mean 0 and covariance function given as the kernel function. These kernel functions themselves have some parameters (i.e., hyperparameters).
So why are they called non parametric models?
-
I know of several definitions of "Gaussian processes," so it's not apparent what your question is really asking about. But as you consider how to clarify it, ask yourself this: exactly how would you parametrize the Gaussian process you have in mind? If you cannot do it in a natural way with a finite number of real parameters, then it should be considered nonparametric. – whuber Dec 27 '12 at 16:15
@whuber. AFAIK, the main parameters of gaussian processes are the mean and the covariance functions. But as we keep on adding data points, they keep on increasing. So it keeps on increasing. Is that why gaussian processes are termed as non parametric? – user34790 Dec 27 '12 at 17:04
@whuber If I have millions of training data points, then my GP f ~ N(m,k) will be a million dimensional multivariate gaussian distribution. Isn't that too big? I mean as new training data comes it gets bigger and bigger. Doesn't it give rise to computational issue? – user34790 Dec 27 '12 at 17:06
"Parametric" versus "non-parametric" are terms that do not apply to particular processes: they apply to the entire family of processes that could be fit to data. Although I still do not know what family you have in mind, it sounds like although the number of parameters may be finite in any circumstance, there is no limit to the number of parameters that may appear among members of the family: ergo, the problem is non-parametric. – whuber Dec 27 '12 at 17:35
I'll preface this by saying that it isn't always clear what one means by "nonparametric" or "semiparametric" etc. In the comments, it seems likely that whuber has some formal definition in mind (maybe something like choosing a model $M_\theta$ from some family $\{M_\theta: \theta \in \Theta\}$ where $\Theta$ is infinite dimensional), but I'm going to be pretty informal. Some might argue that a nonparametric method is one where the effective number of parameters you use increases with the data. I think there is a video on videolectures.net where (I think) Peter Orbanz gives four or five different takes on how we can define "nonparametric."
Since I think I know what sorts of things you have in mind, for simplicity I'll assume that you are talking about using Gaussian processes for regression, in a typical way: we have training data $(Y_i, X_i), i = 1, ..., n$ and we are interested in modeling the conditional mean $E(Y|X = x) := f(x)$. We write $$Y_i = f(X_i) + \epsilon_i$$ and perhaps we are so bold as to assume that the $\epsilon_i$ are iid and normally distributed, $\epsilon_i \sim N(0, \sigma^2)$. $X_i$ will be one dimensional, but everything carries over to higher dimensions.
If our $X_i$ can take values in a continuum then $f(\cdot)$ can be thought of as a parameter of (uncountably) infinite dimension. So, in the sense that we are estimating a parameter of infinite dimension, our problem is a nonparametric one. It is true that the Bayesian approach has some parameters floating about here and there. But really, it is called nonparametric because we are estimating something of infinite dimension. The GP priors we use assign mass to every neighborhood of every continuous function, so they can estimate any continuous function arbitrarily well.
The things in the covariance function are playing a role similar to the smoothing parameters in the usual frequentist estimators - in order for the problem to not be absolutely hopeless we have to assume that there is some structure that we expect to see $f$ exhibit. Bayesians accomplish this by using a prior on the space of continuous functions in the form of a Gaussian process. From a Bayesian perspective, we are encoding beliefs about $f$ by assuming $f$ is drawn from a GP with such-and-such covariance function. The prior effectively penalizes estimates of $f$ for being too complicated.
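To make this concrete, here is a minimal numerical sketch of GP regression (my own illustration, with made-up data and a squared-exponential covariance); the length-scale, signal variance and noise variance play the role of the smoothing parameters mentioned above:

import numpy as np

def rbf(a, b, ell=0.5, sf=1.0):
    # squared-exponential covariance between two sets of 1-d inputs
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell)**2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, 30)                       # training inputs
y = np.sin(X) + 0.1 * rng.standard_normal(30)   # noisy observations of f(x) = sin(x)
Xs = np.linspace(0, 5, 200)                     # test inputs

sigma2 = 0.1**2
K = rbf(X, X) + sigma2 * np.eye(len(X))         # K + sigma^2 I
Ks = rbf(X, Xs)                                 # cross-covariances between train and test

v = np.linalg.solve(K, y)                       # solves (K + sigma^2 I) v = y
post_mean = Ks.T @ v                            # posterior mean of f at the test inputs
post_var = rbf(Xs, Xs).diagonal() - np.einsum('ij,ij->j', Ks, np.linalg.solve(K, Ks))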
Edit for computational issues
Most (all?) of this stuff is in the Gaussian Process book by Rasmussen and Williams.
Computational issues are tricky for GPs. If we proceed niavely we will need $O(N^2)$ size memory just to hold the covariance matrix and (it turns out) $O(N^3)$ operations to invert it. There are a few things we can do to make things more feasible. One option is to note that guy that we really need is $v$, the solution to $(K + \sigma^2 I)v = Y$ where $K$ is the covariance matrix. The method of conjugate gradients solves this exactly in $O(N^3)$ computations, but if we satisfy ourselves with an approximate solution we could terminate the conjugate gradient algorithm after $k$ steps and do it in $O(kN^2)$ computations. We also don't necessarily need to store the whole matrix $K$ at once.
So we've moved from $O(N^3)$ to $O(kN^2)$, but this still scales quadratically in $N$, so we might not be happy. The next best thing is to work instead with a subset of the data, say of size $m$ where inverting and storing an $m \times m$ matrix isn't so bad. Of course, we don't want to just throw away the remaining data. The subset of regressors approach notes that we can derive the posterior mean of our GP as a regression of our data $Y$ on $N$ data-dependent basis functions determined by our covariance function; so we throw all but $m$ of these away and we are down to $O(m^2 N)$ computations.
A couple of other potential options exist. We could construct a low-rank approximation to $K$, and set $K = QQ^T$ where $Q$ is $n \times q$ and of rank $q$; it turns out that inverting $K + \sigma^2 I$ in this case can be done by instead inverting $Q^TQ + \sigma^2 I$. Another option is to choose the covariance function to be sparse and use conjugate gradient methods - if the covariance matrix is very sparse then this can speed up computations substantially.
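Here is a small numerical sketch (my own illustration, not from the book) of those two ideas: the low-rank identity that replaces an $n \times n$ inverse by a $q \times q$ one, and an approximate solve with a truncated conjugate gradient:

import numpy as np
from scipy.sparse.linalg import cg

# Check the identity (Q Q^T + s2 I)^{-1} = (1/s2) [ I - Q (Q^T Q + s2 I)^{-1} Q^T ]
rng = np.random.default_rng(1)
n, q, s2 = 500, 20, 0.25
Q = rng.standard_normal((n, q))
A = Q @ Q.T + s2 * np.eye(n)                  # rank-q covariance plus noise

direct = np.linalg.inv(A)                     # O(n^3), what we want to avoid
small = np.linalg.inv(Q.T @ Q + s2 * np.eye(q))   # only a q x q inverse
woodbury = (np.eye(n) - Q @ small @ Q.T) / s2
print(np.max(np.abs(direct - woodbury)))      # ~1e-12

# Approximate solve of A v = y with a limited number of conjugate-gradient steps
y = rng.standard_normal(n)
v_approx, info = cg(A, y, maxiter=50)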
-
Generally speaking, the "nonparametric" in Bayesian nonparametrics refers to models with an infinite number of (potential) parameters. There are a lot of really nice tutorials and lectures on the subject on videolectures.net (like this one) which give nice overviews of this class of models.
Specifically, the Gaussian Process (GP) is considered nonparametric because a GP represents a function (i.e. an infinite dimensional vector). As the number of data points increases ((x, f(x)) pairs), so do the number of model 'parameters' (restricting the shape of the function). Unlike a parametric model, where the number of parameters stay fixed with respect to the size of the data, in nonparametric models, the number of parameters grows with the number of data points.
-
This is exactly what I was assuming. So my assumption is right I guess. But my question is if I have million points(observed data). Then my f will also be of million dimension. So wouldn't I have computational issues. Further my covariance matrix will also be of size 1millionx1million. So what should I do in this case? – user34790 Dec 27 '12 at 22:07
@user34790 yes, you would have computational issues. Computational challenges are quite big deal for GPs. Rasmussen and Williams have a book on GPs with an entire chapter dedicated to this, and if you google hard enough you can find it online for free. See my updated post for some minimal details. – guy Dec 28 '12 at 4:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8189270496368408, "perplexity": 294.0272394727482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461988.0/warc/CC-MAIN-20150226074101-00014-ip-10-28-5-156.ec2.internal.warc.gz"} |
http://physics.stackexchange.com/questions/57523/do-you-use-the-magnitude-equation-to-get-speed-from-an-accelerometer | # Do you use the magnitude equation to get speed from an accelerometer?
A guy suggested to me that getting speed from an accelerometer required the use of this equation:
$\text{speed} = \sqrt{x^2 + y^2 + z^2}$
This does not make any sense to me, all that you would get from this equation would be the magnitude in $m/s^2$ of the acceleration in the $x, y$ and $z$ axes. Am I correct? Or is my reasoning flawed?
-
A link to the original claim would be nice. – Chris White Mar 21 '13 at 3:57
I added the link into the question for you, mathisnotmyforte, but just remember for the future that when you have additional information to add to a question, edit it in, don't leave it in a comment. – David Z Mar 21 '13 at 6:09
Basically yes.
1) Acceleration is a change over time in velocity. Since velocity has units of distance per unit time (like meters per second), acceleration has units of distance per unit time per unit time (like $\mathrm{m}/\mathrm{s}^2$). So the accepted answer is mistaken on that point.
2) Just to make sure everyone is on the same page with notation: From reading the question, and better yet the accepted answer to the original question, one can see that all throughout, $x$, $y$, and $z$ are referring to components of acceleration, not speed (or position, as a physicist would assume given those names). I imagine that mistaken answerer just used "speed" and "acceleration" interchangeably - unfortunately.
3) But the answer has the right idea. The quantity $$\sqrt{x^2 + y^2 + z^2}$$ is indeed the magnitude of the vector with components $x$, $y$, and $z$, assuming those components are in orthogonal positions (which they are). If all three are accelerations in $\mathrm{m}/\mathrm{s}^2$, then the result will be too. If the device is otherwise not moving (or even if it is moving, but at a constant speed and in an unchanging direction), then the only acceleration it will feel will be due to the Earth's gravity, which (assuming we are not inside the mantle or as far away as the Moon or something) is pretty constant at $9.8~\mathrm{m}/\mathrm{s}^2$. Thus one would hope the square-rooted quantity we calculated comes out to $9.8$. If it doesn't, you need to tweak the calibrations more.
-
You're correct. That equation is for the magnitude of the acceleration vector, not the speed.
Measuring speed using just an accelerometer can be tricky. You essentially have to integrate the measured acceleration: multiply each acceleration sample (with gravity subtracted) by the time between measurements and sum, doing this for each of the three components of the velocity, then find the magnitude with that equation. I don't have experience with doing this, but if you're serious about measuring speed, you could combine that measurement with GPS coordinates using something like a Kalman filter.
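As a rough illustration of what that integration looks like (toy numbers, gravity already subtracted, no handling of sensor bias or orientation, which is exactly why the estimate drifts and a Kalman filter with GPS helps):

import numpy as np

dt = 0.01                                   # assume 100 Hz sampling
acc = np.array([[0.0, 0.1, 0.0],            # m/s^2, one row per sample (x, y, z)
                [0.0, 0.2, 0.0],
                [0.1, 0.2, 0.0]])

velocity = np.cumsum(acc, axis=0) * dt      # crude Euler integration of each component
speed = np.linalg.norm(velocity, axis=1)    # speed = sqrt(vx^2 + vy^2 + vz^2) at each step
print(speed)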
-
Thanks for answering my question. I'm going to use this paper as a guide – mathisnotmyforte Mar 21 '13 at 4:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8679831624031067, "perplexity": 293.23903630336264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049278385.58/warc/CC-MAIN-20160524002118-00034-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/varying-the-action-with-respect-to-a_-mu.784867/ | # Varying the action with respect to A_\mu
1. Nov 30, 2014
### PhyAmateur
Was reading a paper and trying to work out the author's calculations: I am trying to vary the action,
$$S_A= \int d^3x \left[ \, -\frac{1}{4}F_{\mu\nu}(A)F^{\mu\nu}(A) +\frac{1}{2}m\, \epsilon^{\mu\nu\rho}A_\mu F_{\nu\rho}(A) \right]$$
with respect to $$A_\mu$$. I am finding difficulty deriving this because $$A_\mu$$ is embedded in $$F_{\mu \nu}$$. So my attempt was writing this all in terms of $$A_\mu$$:
$$S_A= \int d^3x \, \left[ -\frac{1}{4}(\partial_\mu A_\nu - \partial_\nu A_\mu) (\partial^\mu A^\nu - \partial^\nu A^\mu) +\frac{1}{2}m\, \epsilon^{\mu\nu\rho}A_\mu (\partial_\nu A_\rho - \partial_\rho A_\nu) \right]$$ and then I got stuck. If you could please lead me from here.
Reference, section 2: http://arxiv.org/pdf/hep-th/9705122.pdf
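Not a reply from the thread, but a sketch of the standard manipulation (boundary terms dropped; overall signs depend on the metric convention, so they should be checked against the paper). Varying the action written above,

$$\delta S_A=\int d^3x\left[-F^{\mu\nu}\,\partial_\mu \delta A_\nu+\frac{1}{2}m\,\epsilon^{\mu\nu\rho}\left(\delta A_\mu F_{\nu\rho}+2A_\mu\,\partial_\nu \delta A_\rho\right)\right]$$

Integrating both derivative terms by parts and relabelling dummy indices so that every term multiplies $$\delta A_\mu$$ gives

$$\delta S_A=\int d^3x\left[\partial_\nu F^{\nu\mu}+m\,\epsilon^{\mu\nu\rho}\partial_\nu A_\rho+\frac{1}{2}m\,\epsilon^{\mu\nu\rho}F_{\nu\rho}\right]\delta A_\mu=\int d^3x\left[\partial_\nu F^{\nu\mu}+m\,\epsilon^{\mu\nu\rho}F_{\nu\rho}\right]\delta A_\mu$$

so, with these conventions, the stationarity condition reads $$\partial_\nu F^{\nu\mu}+m\,\epsilon^{\mu\nu\rho}F_{\nu\rho}=0.$$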
2. Dec 5, 2014 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9106631278991699, "perplexity": 435.8062119348272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863684.0/warc/CC-MAIN-20180520190018-20180520210018-00544.warc.gz"} |
http://www.thespectrumofriemannium.com/tag/polylogarithmic-functions/ | LOG#162. Polylogia flashes(IV).
In this final post (for the moment) in the polylogia series we will write some additional formulae for polylogs and associated series. Firstly, we have (1) and now, if (2) (3) The next identity also holds … Continue reading
LOG#161. Polylogia flashes(III).
In the third post of this series I will write more fantastic identities related to our friends, the polylogs! (1) and by analytic continuation that equation can be extended to all . In fact (2) such as , … Continue reading | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9623979330062866, "perplexity": 4980.289567510609}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104692018.96/warc/CC-MAIN-20220707124050-20220707154050-00527.warc.gz"} |
https://brilliant.org/discussions/thread/help-me-5/ | # Help me!!
Consider the situation as shown in the figure. When the point mass (1 kg) is released, it will start accelerating downwards due to gravity and by the time it moves through 30 m it will rebound upwards due to the string. Find the maximum height it reaches. Consider ideal conditions.
Note by Shantanu Lautre
6 years, 6 months ago
EDITED
The horizontal distance was 30cm
Now let us consider the system. Let us make the following observations--
1) Initially there was no tension in the string, as it wasn't taut.
2) So the mass falls vertically, freely under gravity, till the string becomes taut.
3) And since the mass was not initially at the same horizontal position as the hinge, it will certainly not become taut directly below the hinge point.
4) When it becomes taut, the ball's component of velocity parallel to the string becomes 0, and the other component, perpendicular to it, remains (this can be established through angular momentum conservation, or qualitatively).
5) So the remaining energy of the ball will bring it up again, and from there we can decide the maximum height it reaches.
So here's what I think is going to happen; please check if I am correct.
When the ball falls vertically until the string becomes taut, applying energy conservation (note that $\theta$ is the angle between the string and the vertical at the moment it becomes taut):

$mgl\cos\theta = \frac{mv^2}{2}$

It falls vertically until then, so its velocity is directed downward, and its component perpendicular to the string is

$v\sin\theta$

which is

$\sqrt{2gl\cos\theta}\,\sin\theta$

Now with this velocity it continues further motion as a circle about the hinge point; thus, again applying energy conservation, we get

$mgl\cos\theta\,\sin^2\theta = mgh$, so $h = l\cos\theta\,\sin^2\theta$
Now, as the lateral distance is 30 cm, the angle $\theta$ is approximately 37 degrees (the exact value is not needed -- just take $\theta = \arcsin(0.6)$); either way it means the answer is 72/5 cm.
@Ronak Agarwal am i correct, if answer does not matter, just tell if there is anything wrong with the method,, thankyou
NOTE - I assumed that, as the string is inextensible, the whole component of velocity parallel to it is cancelled (that is, no rebound occurs, i.e. the string does not behave as a spring even for a second)
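A quick numeric check of the above in Python (my own sketch, using $l$ = 50 cm, lateral offset 30 cm and $g$ = 9.8 m/s$^2$):

from math import asin, sin, cos, sqrt, degrees

l, offset, g = 0.50, 0.30, 9.8
theta = asin(offset / l)              # angle of the string with the vertical when it goes taut

v = sqrt(2 * g * l * cos(theta))      # speed after free fall through a depth l*cos(theta)
v_perp = v * sin(theta)               # component surviving the jerk (perpendicular to the string)
h = v_perp**2 / (2 * g)               # subsequent rise

print(degrees(theta))                 # ~36.9 degrees
print(h * 100)                        # ~14.4 cm, i.e. 72/5 cm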
- 6 years, 6 months ago
According to my calculation the answer should be :
$h_{\text{ground}} = 50 - 30\left(\dfrac{\sqrt{30^2 - d^2}}{30}\right)^3$.
where d = initial distance of the Point mass from the point of suspension.
And I think 'd' should be mentioned in this question.
here is diagram of this question: ( it is rough figure don't take it seriously :P )
Diagram
- 6 years, 6 months ago
I got the same answer.
- 6 years, 6 months ago
@Mvs Saketh
Actually the horizontal distance is 30 cm and the length of the string is 50 cm. It is given wrongly in the diagram. Now can you please post your solution again, so that it can be checked whether it is correct or not. Refer to the values given by me. I know this question because, as I have said, I have seen it in the fiitjee grandmasters package.
- 6 years, 6 months ago
I have edited it accordingly, check now
- 6 years, 6 months ago
Also the question was asking the speed of the ball at the lowest point.
- 6 years, 6 months ago
Your method is absolutely correct @Mvs Saketh this is the correct method needed to solve the question.
But can you solve for the velocity at the lowest point of the trajectory.
- 6 years, 6 months ago
Yes, I will use conservation of energy again after the "taut" instant, because the tension in the string acts only perpendicular to the direction of motion after it has become taut and thus does no work; so I will use (change in mgh) = (gain in KE) and find it out.
- 6 years, 6 months ago
See, if we consider just vertical motion, then when the string becomes taut it will provide an impulse to the particle in the direction opposite to its motion (i.e., the upward direction); then what do you think the value of the impulse will be?
- 6 years, 6 months ago
No, please observe my answer carefully: the impulse is not provided in the vertical direction but along the string (tension acts along the string), so the component of velocity parallel to the string gets cancelled completely; in other words, the impulse provided equals the momentum of the particle along the string, which is mv cos(x). See my solution carefully.
- 6 years, 6 months ago
Gotcha.......Thank you a million.!
- 6 years, 6 months ago
I got that too......
- 6 years, 6 months ago
This question I saw in grand masters package of fiitjee.
- 6 years, 6 months ago | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 15, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9823077917098999, "perplexity": 1532.921684356295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.40/warc/CC-MAIN-20210514060536-20210514090536-00018.warc.gz"} |
http://math.stackexchange.com/questions/17343/what-is-the-speed-of-a-divergent-series?answertab=oldest | what is the speed of a divergent series?
How does one characterize the speed of a divergent series? I have a divergent series with a parameter $x$ in it. How can I characterize the speed of divergence for different $x$?
-
While there are various rates of convergence, based on big-O, little-o, etc. notations, it will not make equal sense to talk about "speed of divergence". A divergent series need not grow uniformly. It might oscillate. Or there might be subsequences that grow and subsequences that oscillate in bounded fashion. One way to impose some consistency on the behavior of nonconvergent sequences is to look at its lim sup and lim inf. Perhaps you should explain the purpose of the characterization you seek. – hardmath Jan 13 '11 at 4:57
@hardmath : the series is non-decreasing – Rajesh D Jan 13 '11 at 5:22
This is a very vague question. Why don't you show us what series you have... – Aryabhata Jan 13 '11 at 5:30
@Moron : I don't really have any series to work with, the only restriction is that it is non decreasing. – Rajesh D Jan 13 '11 at 5:41
I guess ‘big O notation’ and its relatives are what you’re after, or something like that?
If $(a_n)$ is a sequence with $a_n \rightarrow \infty$ as $n \rightarrow \infty$, then this notation defines statements like $(a_n) = O(n^2)$, formalising the idea that “in the long run, the sequence $(a_n)$ grows no faster than the sequence $(n^2)$”.
(The use of ‘=’ in this notation is slightly confusing: the precise statement doesn’t assert that any two things are equal.)
-
Under the given circumstances (the sequence $\{a_n\}$ tends monotonically to plus infinity), faster rate of divergence (growth) would be equivalent to faster convergence of $\{1/a_n\}$ to zero. – hardmath Jan 13 '11 at 6:12
This is related to Hausdorff's "Pantachie" problem. Suppose $x_i$ is a monotone decreasing sequence whose sum is divergent. We say that a similar sequence $y_i$ diverges slower if $x_i/y_i \rightarrow \infty$. Example: $\sum n^{-1}$ diverges slower than $\sum n^{-0.5}$.
Similarly, if $x_i$ is a monotone decreasing sequence whose sum is convergent, a similar sequence $y_i$ converges more slowly if $y_i/x_i \rightarrow \infty$. Example: $\sum n^{-2}$ converges more slowly than $\sum n^{-3}$.
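A quick numerical illustration of the first definition (my own sketch, not part of the answer), with $x_n = n^{-1/2}$ and $y_n = n^{-1}$:

import numpy as np

n = np.arange(1, 10**6 + 1)
x = n**-0.5          # terms of the faster-diverging series
y = 1.0 / n          # terms of the slower-diverging series

for k in (10, 10**3, 10**6):
    print(k, x[k-1] / y[k-1], x[:k].sum(), y[:k].sum())
# the term ratio x_n/y_n = sqrt(n) grows without bound, while the partial sums
# grow roughly like 2*sqrt(n) and log(n) respectively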
Hausdorff proved the following theorem: For any sequence of divergent (convergent) series, there's a series diverging (converging) slower than any of them. That means that there is no "expressible by finite strings" characterization of the speed of divergence (convergence), since any such characterization would be outrun by a series that diverges (converges) more slowly still.
For more on the subject, look up Hausdorff gaps.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9407497048377991, "perplexity": 525.3099430562444}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988310.3/warc/CC-MAIN-20150728002308-00067-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://mathhelpforum.com/advanced-statistics/79440-distribution-probabilty-parent-sample.html | # Math Help - Distribution and probabilty - parent and sample
1. ## Distribution and probabilty - parent and sample
Ok I am completely lost, can anyone help me out? I have done the first question, and the second one is based on it.
The first question said:
The parent distribution of x is normal with mean 80 and standard deviation 9.
Based on samples of size n=25, find the mean and standard error of the sampling distribution of $\bar{x}$.
I said: $\mu_{\bar{x}}$ = 80
$\sigma_{\bar{x}}$ = 9/5
The second question is where I am lost.
It says find:
A. P(62 < x < 80)
B. P(71 < $\bar{x}$ < 77)
C. x' such that P( $\bar{x}$ > x') = .05
2. Is n=25 or 225? Because $\sigma_{\bar{x}}={\sigma \over\sqrt{n}}$.
Or, did you mean $\sigma_{\bar{x}}={9\over 5}$
3. n=25
it is the number in the sample
$\sigma\bar{x}$ = 9/5 (thats what I got) sorry
$P(62<X<80)=P\left({62-80\over 9}<Z<{80-80\over 9}\right)$.
$P(71<\bar X<77)=P\biggl({71-80\over 9/5}<Z<{77-80\over 9/5}\biggr)$.
Can you finish these? I'll check on you later.
$.05=P(\bar X<a)=P\left(Z<{a-80\over 9/5}\right)$ so ${a-80\over 9/5}=-1.645$.
5. P[(-5) < z < (-5/3)]
It was supposed to be P( $\bar{x}$ > x') SORRY
P( $\bar{x}$ > $x'-80\over 9/5$) ?
Can you tell me what formula you used to find that last one?
Did you use these for the first two?
Z= $X-\mu\over \sigma$ and Z= $\bar{x}-\mu\over \sigma / \sqrt n$
6. Let X be a random variable with mean $\mu$ and standard deviation $\sigma$.
Then $Z={X-\mu\over \sigma}$ has mean zero and standard deviation 1.
And if X was a normal random variable, then Z is a standard normal random variable and you can use your Z table.
We did that for both X and $\bar X$.
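For checking the arithmetic, here is a short Python/SciPy sketch (not part of the original thread) that evaluates all three parts with $\mu$ = 80, $\sigma$ = 9 and n = 25:

from scipy.stats import norm

mu, sigma, n = 80, 9, 25
se = sigma / n**0.5                                        # 9/5 = 1.8

print(norm.cdf(80, mu, sigma) - norm.cdf(62, mu, sigma))   # A: P(62 < X < 80)  ~ 0.477
print(norm.cdf(77, mu, se) - norm.cdf(71, mu, se))         # B: P(71 < xbar < 77) ~ 0.048
print(norm.ppf(0.95, mu, se))                              # C: x' with P(xbar > x') = 0.05, ~ 82.96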
7. Thank you SO much for your help. It helped me a lot on my test. Thanks a hundred times. You are a really nice person | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9497736692428589, "perplexity": 1320.6045859857793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510261958.8/warc/CC-MAIN-20140728011741-00024-ip-10-146-231-18.ec2.internal.warc.gz"} |
http://www.perimeterinstitute.ca/seminar/emergent-gravity-dark-energy-and-dark-matter | # From Emergent Gravity to Dark Energy and Dark Matter
The observed deviations from the laws of gravity of Newton and Einstein in galaxies and clusters can, logically speaking, be due either to the presence of unseen dark matter particles or to a change in the way gravity works in these situations. Until recently there was little reason to doubt that general relativity correctly describes gravity in all circumstances. In the past few years insights from black hole physics and string theory have led to a new theoretical framework in which the gravitational laws are derived from the quantum entanglement of the microscopic information that is underlying space-time. An essential ingredient in the derivation of the Einstein equations is that the vacuum entanglement obeys an area law, a condition that is known to hold in Anti-de Sitter space due to the work of Ryu and Takayanagi. We will argue that in de Sitter space, due to the positive dark energy, the microscopic entanglement entropy also contains a volume law contribution in addition to the area law. This volume law contribution is related to the thermal properties of de Sitter space and leads to a total entropy that precisely matches the Bekenstein-Hawking formula for the cosmological horizon. We study the effect of this extra contribution on the emergent laws of gravity, and argue that it leads to a modification compared to Einstein gravity. We provide evidence for the fact that this modification explains the observed phenomena in galaxies and clusters currently attributed to dark matter.
Collection/Series:
Event Type:
Seminar
Scientific Area(s):
Speaker(s):
Event Date:
Wednesday, October 4, 2017 - 14:00 to 15:30
Location:
Time Room
Room #:
294 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9713998436927795, "perplexity": 243.06640815171482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647327.52/warc/CC-MAIN-20180320091830-20180320111830-00322.warc.gz"} |
http://bib-pubdb1.desy.de/collection/VDB?ln=en | # Publications database
2019-05-2314:17 [PUBDB-2019-02359] Book/Report/Master Thesis Steder, L. Systematic studies of a cavity quench localization system [DESY-THESIS-2019-010] Hamburg : Verlag Deutsches Elektronen-Synchrotron, DESY-THESIS 120 pp. (2019) [10.3204/PUBDB-2019-02359] = Masterarbeit, Universität Hamburg, 2019 Several tools for quench localization at Superconducting Radio Frequency cavities exist. One of these techniques uses the excitation of temperature waves in liquid Helium below the lambda point. [...] OpenAccess: MasterThesis_B_Bein - PDF PDF (PDFA); desy-thesis-19-010.title - PDF PDF (PDFA); 2019-05-2114:21 [PUBDB-2019-02325] Report/Journal Article et al Search for vector-like quarks in events with two oppositely charged leptons and jets in proton-proton collisions at $\sqrt{s} =$ 13 TeV [arXiv:1812.09768; CMS-B2G-17-012; CERN-EP-2018-290] A search for the pair production of heavy vector-like partners $\mathrm {T}$ and $\mathrm {B}$ of the top and bottom quarks has been performed by the CMS experiment at the CERN LHC using proton–proton collisions at $\sqrt{s} = 13\,\text {Te}\text {V}$ . The data sample was collected in 2016 and corresponds to an integrated luminosity of 35.9 $\,\text {fb}^{-1}$ . [...] OpenAccess: PDF PDF (PDFA); 2019-05-2114:17 [PUBDB-2019-02324] Report/Journal Article et al Measurement of the $\mathrm{t}\overline{\mathrm{t}}$ production cross section, the top quark mass, and the strong coupling constant using dilepton events in pp collisions at $\sqrt{s} =$ 13 TeV [arXiv:1812.10505; CMS-TOP-17-001; CERN-EP-2018-317] A measurement of the top quark–antiquark pair production cross section $\sigma _{\mathrm {t}\overline{\mathrm {t}}}$ in proton–proton collisions at a centre-of-mass energy of 13 $\,\text {Te}\text {V}$ is presented. The data correspond to an integrated luminosity of $35.9{\,\text {fb}^{-1}}$ , recorded by the CMS experiment at the CERN LHC in 2016. [...] OpenAccess: PDF PDF (PDFA); 2019-05-2114:14 [PUBDB-2019-02323] Report/Journal Article et al Measurement of inclusive very forward jet cross sections in proton-lead collisions at $\sqrt{s_{\mathrm{NN}}}$ = 5.02 TeV [arXiv:1812.01691; CMS-FSQ-17-001; CERN-EP-2018-325] Journal of high energy physics 1905(05), 043 (2019) [10.1007/JHEP05(2019)043] Measurements of differential cross sections for inclusive very forward jet production in proton-lead collisions as a function of jet energy are presented. The data were collected with the CMS experiment at the LHC in the laboratory pseudorapidity range $-$6.6$<\eta<-$5.2. [...] OpenAccess: PDF PDF (PDFA); 2019-05-2114:10 [PUBDB-2019-02322] Report/Journal Article et al Measurement of the energy density as a function of pseudorapidity in proton-proton collisions at $\sqrt{s} =$ 13 TeV [arXiv:1812.04095; CMS-FSQ-15-006; CERN-EP-2018-308] A measurement of the energy density in proton–proton collisions at a centre-of-mass energy of s√=13 TeV is presented. The data have been recorded with the CMS experiment at the LHC during low luminosity operations in 2015. [...] OpenAccess: PDF PDF (PDFA); External link: Fulltext 2019-05-2114:00 [PUBDB-2019-02321] Report/Journal Article et al Search for an $L_{\mu}-L_{\tau}$ gauge boson using Z$\to4\mu$ events in proton-proton collisions at $\sqrt{s} =$ 13 TeV [arXiv:1808.03684; CMS-EXO-18-008; CERN-EP-2018-208] Physics letters / B 792, 345 - 368 (2019) [10.1016/j.physletb.2019.01.072] A search for a narrow Z$'$ gauge boson with a mass between 5 and 70 GeV resulting from an $L_{\mu}-L_{\tau}$ $U(1)$ local gauge symmetry is reported. 
Events containing four muons with an invariant mass near the standard model Z boson mass are analyzed, and the selection is further optimized to be sensitive to the events that may contain Z$\to$Z$'\mu\mu\to4\mu$ decays. [...] OpenAccess: PDF PDF (PDFA); External link: Fulltext 2019-05-2020:52 [PUBDB-2019-02319] Journal Article/Contribution to a conference proceedings HERMES Collaboration Medium-indused modification of kaons spectra measured in SIDIS at HERMES 24th International Baldin Seminar on High Energy Physics Problems, ISHEPP 2018, DubnaDubna, Russia, 17 Sep 2018 - 22 Sep 2018 The predicted high sensitivity of the nuclear modification factor for K$^−$ in SIDIS due to the QCD based effect of medium-induced flavour conversion in the fragmentation function is studied at HERMES experiment. Unlike π+, π− and K$^+$ nuclear modification factor for K$^−$ is assumed to increase at high value of Bjorken variable $x_B$ and a hadron fraction energy $z$. [...] OpenAccess: PDF PDF (PDFA); External link: Fulltext 2019-05-2016:18 [PUBDB-2019-02318] Report/Journal Article et al Constraining the Magnetic Field in the TeV Halo of Geminga with X-Ray Observations [arXiv:1904.11438] The astrophysical journal / 1 Part 1 875(2), 149 (2019) [10.3847/1538-4357/ab125c] Recently, the High Altitude Water Cherenkov (HAWC) collaboration reported the discovery of a TeV halo around the Geminga pulsar. The TeV emission is believed to originate from the inverse Compton scattering of pulsar-injected electrons/positrons off cosmic microwave background photons [...] Restricted: PDF PDF (PDFA); 2019-05-2016:09 [PUBDB-2019-02317] Report/Journal Article et al Secondary neutrino and gamma-ray fluxes from SimProp and CRPropa [arXiv:1901.01244] The interactions of ultra-high energy cosmic rays (UHECRs) with background photons in extragalactic space generate high-energy neutrinos and photons. Simulating UHECR propagation requires assumptions about physical quantities such as the spectrum of the extragalactic background light (EBL) and photodisintegration cross sections. [...] Restricted: PDF PDF (PDFA); 2019-05-2013:29 [PUBDB-2019-02314] Poster Bakhshiansohi, H. Exotic decay of the Higgs boson to a pair of light pseudoscalars in the CMS experiment 7th Conference of Large Hadron Collider Physics, LHCP2019, PueblaPuebla, Mexico, 20 May 2019 - 25 May 2019 The Standard Model is one of the most successful theories at describing the strong, weak, and electromagnetic forces and the interactions between the elementary particles.The scalar boson discovered in 2012 at the Large Hadron Collider (LHC) might be consistent with the Higgs boson predicted by the Standard Model, thus validating the Higgs mechanism and therefore representing a further confirmation of this theoretical framework. However, the experimental data still leave plenty of room to determine whether or not an extension of the scalar sector is allowed.The LHC combination of the SM Higgs boson measurements at 7 and 8 TeV allows Higgs boson decays to BSM states with a rate of up to 34% at 95% confidence level.Presence of new physics within the context of the Two-Higgs-Double-Model and NMSSM theories allows the exotic decay of the Higgs boson into a pair of light pseudoscalars. [...] 
OpenAccess: PDF PDF (PDFA); External link: Fulltext
http://knotphysics.net/faq.html | # FAQ
The following are some important advantages of this theory:
1. There is a small set of fundamental assumptions with very few independent parameters. This implies that almost all of the parameters can be derived from first principles. This has already been done for the fine structure constant.
2. In this theory, quantum properties are a consequence of the dynamics of the branched manifold. We show that the discrete behavior of the branched manifold can be modeled using a continuous approximation, which is a path integral. Because the underlying system is discrete and finite, the pathological infinities of the path integral never appear.
3. From the fundamental assumptions, we show that the manifold can spontaneously produce pairs of knots of the form $$\mathbb{R}^3 \# (S^1 \times P^2)$$. We show that these knots have the properties of the elementary fermions. Different embedding types of these knots explain the different generations of fermions. We therefore find that the theory spontaneously produces the elementary particles without additional assumptions. Furthermore, the theory tightly constrains the types of knots that can be created, and we therefore can mathematically constrain the types of possible particles.
4. The spacetime manifold is constrained by our assumptions, but it is under-constrained. We therefore expect the manifold to maximize entropy. Using this principle of entropic maximization, we can derive a description of the dynamics of the spacetime manifold. We find that the manifold behavior is best described by a Lagrangian. In that Lagrangian, we find terms for gravity (scalar curvature $$R$$) and electromagnetism (the term $$F^{\mu\nu} F_{\mu\nu}$$). Including the effects of particle geometry and topology produces electroweak and strong forces.
In this theory, we assume a few constraints on the spacetime manifold. One important result of those constraints is that the ways that the manifold can change topology are tightly constrained. If particles are knots on the manifold, then every production of particle/anti-particle pair is a change of topology of the manifold. Therefore a particular topology is only a viable particle topology if it can be created on the spacetime manifold subject to the constraining assumptions.
We prove that it is possible to produce pairs of knots of the form $$\mathbb{R}^3 \# (S^1 \times P^2)$$ subject to our constraining assumptions. The type $$\mathbb{R}^3 \# (S^1 \times P^2)$$ is relatively simple, analogous to a twist in the manifold. We then go on to show that those knots have the same properties as the elementary fermions. It remains to be shown that no other topology type is possible. It may be that there are other possible particle topologies and the only reason we see just the type $$\mathbb{R}^3 \# (S^1 \times P^2)$$ is that it has the lowest energy. While it is possible to mathematically eliminate many classes of particle topology, a complete list of possible particle topologies has not yet been determined.
The mathematical field of knot theory gives the word "knot" a very specific meaning. A knot is an embedding of $$S^n$$ in $$S^{n+2}$$. The constraint on the dimensions (an $$n$$-manifold in an $$n+2$$-manifold) is a tight constraint. For any other dimensionality, the embedding would not be knotted. For example, a knot of the form $$S^n$$ in $$S^{n+3}$$ would spontaneously untie.
Every elementary fermion in this theory has topology $$\mathbb{R}^3 \# (S^1 \times P^2)$$. This is not a knot in the sense of knot theory. Also, this 3-dimensional topology can be embedded in $$\mathbb{R}^n$$ for $$n \geq 5$$. A hadron, for example a proton, contains linked copies of $$\mathbb{R}^3 \# (S^1 \times P^2)$$, each of which is a quark. Those quarks will remain linked to each other only if the 3 spatial dimensions of the spacetime manifold are embedded in a 5-dimensional space. For that reason, we see that the dimension of the embedding space is perfectly constrained by the particle topology of hadrons.
In this theory, we need a word to describe particle topology. There are two reasons that we use the word "knot". The first reason is that the word is convenient and approximately describes the particle topology. The second reason is that, for hadrons, the 3-dimensional particle topology must be embedded in a 5-dimensional space, which is the same constraint ($$n=3$$ and $$n+2=5$$) that applies in the mathematical field of knot theory.
In this theory, all elementary fermions are knots in the spacetime manifold that have topology $$\mathbb{R}^3 \# (S^1 \times P^2)$$. Leptons are elementary fermions and therefore they also have this topology. In this theory, we assume that the spacetime manifold is constrained by Ricci flatness, and we find that Ricci flatness strongly constrains the geometry of the particles. In particular, as charged leptons approach other particles, Ricci flatness forces the lepton radius to shrink. As the distance of approach goes to zero, the lepton radius also goes to zero. For this reason, leptons appear to be pointlike in collisions.
This description allows us to describe the spin angular momentum of leptons in familiar terms. For the case of pointlike leptons, spin angular momentum must be considered to be an inherent quantity that is not obtained in the normal sense of $$p \times r$$, because the calculation would be meaningless if the lepton radius is $$r=0$$. In this description, radius is non-zero and the spin angular momentum can be calculated as a function of field strength from first principles.
The description of quantum mechanics here is very different from the usual description. We assume that the spacetime manifold is branched and we get all of our quantumness from that property. The viewpoint we adopt is that the path integral of quantum mechanics is useful because it is a convenient simplification of the complex dynamical system, which is the branched manifold. This is similar to saying that the heat equation is a convenient simplification of the transfer of heat energy through the vibrating molecules of an atomic lattice. The heat equation has certain pathological properties, for example energy is transferred faster than light speed. The path integral has certain pathological properties, for example various infinities that are difficult to tame. We hypothesize that the pathology results from modeling a discrete system as a continuous one. With the branched manifold, a single branch of the manifold is analogous to a single atom of the atomic lattice, and we do not try to know its random contribution. We just try to model the ensemble statistically. If we model the ensemble using a continuous approximation, the result reproduces the path integral of quantum mechanics.
So far all the key properties of physics (all four forces, quantum mechanics, and the known particles) have been shown to follow without requiring additional assumptions. There is a very short list of assumptions, even in comparison to the Standard Model, and the consequences of those assumptions have been adequate to describe every physical phenomenon that has been attempted. Interestingly, over the course of the development of the theory, the set of assumptions has become ever smaller as previously distinct descriptions were shown to be unified by ever simpler models.
You can reach us at:
We welcome any questions, comments, or interest in research collaboration.
http://mathoverflow.net/questions/2128/youngs-lattice-and-the-weyl-algebra/58676 | # Young's lattice and the Weyl algebra
Let $L$ be the lattice of Young diagrams ordered by inclusion and let $L_n$ denote the $n$th rank, i.e. the Young diagrams of size $n$. Say that $\lambda > \mu$ if $\lambda$ covers $\mu$, i.e. $\mu$ can be obtained from $\lambda$ by removing one box, and let $\mathbb{C}[L]$ be the free vector space on $L$. The operators

$$U \lambda = \sum_{\mu > \lambda} \mu, \qquad D \lambda = \sum_{\lambda > \mu} \mu$$

are a decategorification of the induction and restriction operators on the symmetric groups, and (as observed by Stanley and generalized in the theory of differential posets) they have the property that $DU - UD = I$; in other words, Young's lattice along with $U, D$ form a representation of the Weyl algebra.
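For concreteness, the relation $DU - UD = I$ can be checked directly on small diagrams; the following is a rough computational sketch (partitions are represented as weakly decreasing tuples, and all names here are ad hoc):

```python
from collections import Counter

def partitions(n, maxpart=None):
    """All partitions of n, as weakly decreasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        return [()]
    return [(k,) + rest
            for k in range(min(maxpart, n), 0, -1)
            for rest in partitions(n - k, k)]

def covers(lam):
    """Partitions obtained from lam by adding one box (those covering lam)."""
    out = set()
    for i in range(len(lam) + 1):
        mu = list(lam) + [0]
        mu[i] += 1
        mu = tuple(p for p in mu if p)
        if all(mu[j] >= mu[j + 1] for j in range(len(mu) - 1)):
            out.add(mu)
    return out

def covered_by(lam):
    """Partitions obtained from lam by removing one box (those covered by lam)."""
    out = set()
    for i in range(len(lam)):
        mu = list(lam)
        mu[i] -= 1
        mu = tuple(p for p in mu if p)
        if all(mu[j] >= mu[j + 1] for j in range(len(mu) - 1)):
            out.add(mu)
    return out

def apply(op, vec):
    """Apply U (op=covers) or D (op=covered_by) linearly to a vector in C[L]."""
    out = Counter()
    for lam, c in vec.items():
        for mu in op(lam):
            out[mu] += c
    return out

# Check DU - UD = I on every partition of size n <= 5.
for n in range(6):
    for lam in partitions(n):
        v = Counter({lam: 1})
        diff = apply(covered_by, apply(covers, v))
        diff.subtract(apply(covers, apply(covered_by, v)))
        assert {mu: c for mu, c in diff.items() if c} == {lam: 1}
```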
Is this a manifestation of a more general phenomenon? What's the relationship between differential operators and the representation theory of the symmetric group?
Edit: Maybe I should ask a more precise question, based on the comment below. As I understand it, in the language of Coxeter groups the symmetric groups are "type A," so the Weyl algebra can be thought of as being associated to type A phenomena. What's the analogue of the Weyl algebra for types B, C, etc.?
-
In type B, the poset of ordered pairs of Young diagrams describes the branching rules for the hyperoctahedral group. Here, the corresponding relation is DU - UD = 2I. Is there a natural way to rescale one of the variables so that these are "differential operators" in a sensible way? – Sammy Black Oct 23 '09 at 18:31
If I'm not mistaken you can take U = x - dx, D = x + dx, but I don't know if this is the "natural" interpretation. – Qiaochu Yuan Oct 23 '09 at 18:38
EDIT (3/16/11): When I first read this question, I thought "hmmm, Weyl algebra? Really? I feel like I never hear people say they're going to categorify the Weyl algebra, but it looks like that's what the question is about..." Now I understand what's going on. Not to knock the OP, but there's a much bigger structure here he left out. If you have any $S_n$ representation $M$, you get a functor $$\operatorname{Ind}_{S_m\times S_n}^{S_{m+n}}-\otimes M: S_m-\operatorname{rep}\to S_{m+n}-\operatorname{rep}$$ and these functors all have adjoints which I won't bother writing down. All of these together categorify a Heisenberg algebra, which is what Khovanov proves in the paper linked below (though cruder versions of these results on the level the OP was talking about are much older, at least as far back as Leclerc and Thibon).
There is a much more general story here, though one my brain is not very up to explaining it this afternoon, and unfortunately, I don't know of anywhere it's summarized well for beginners.
So, how do you prove the restriction rule you mentioned above? You note that the restriction of an $S_n$ rep to an $S_{n-1}$ rep has an action of the Jucys-Murphy element $X_n$ which commutes with $S_{n-1}$. The different $S_{n-1}$ representations are the different eigenspaces of the J-M element.
So, one can think of "restrict and take the m-eigenspace" as a functor $E_m$; this defines a direct sum decomposition of the functor of restriction.
Of course, this functor has an adjoint: I think the best way to think about this is as $F_m=(k[S_n]/(X_n-m)) \otimes_{k[S_{n-1}]} V$.
These functors $E_m, F_m$ satisfy the Serre relations for $\mathfrak{sl}(\infty)$. Over characteristic 0, these are all different, and you can think of this as an $\mathfrak{sl}(\infty)$-action. If instead you take representations over characteristic $p$, then $E_m=E_{m+p}$, so you can think of them as being arranged in a circle, an affine Dynkin diagram, and one gets an action of $\widehat{\mathfrak{sl}}(p)$.
Similar categorifications of other representations can be constructed in general by looking at representations of complex reflection groups given by the wreath product of the symmetric group with a cyclic group. So, Sammy, you shouldn't rescale, you should celebrate that you found a representation with a different highest weight (also, if you really care, you should go talk to Jon Brundan or Sasha Kleshchev; they are some of the world's experts on this stuff).
EDIT: Khovanov has actually just posted a paper which I think might be relevant to your question.
-
In the OP's defense, nothing is really lost by only considering the Weyl algebra since the basic representation of affine sl_p is irreducible on restriction to the Heisenberg subalgebra. – David Hill Mar 16 '11 at 23:12
Sasha Kleshchev's book "Linear and Projective Representations of Symmetric Groups" is the reference I'd suggest. Chapter 1 contains the connection with Young's lattice, and the subsequent chapters develop the functors that Ben described above. The second half of the book develops the theory for spin representations of symmetric groups which is an honest type B analogue (The functors $E_m$ and $F_m$ in Ben's answer satisfy the Serre relations for the Kac-Moody algebra of type $B_\infty$).
To add a little more detail to Ben's answer, the right level of generality to think about these questions is the affine Hecke algebra (either the degenerate or nondegenerate varieties). I'll describe the degenerate case:
Let $F$ be an algebraically closed field of characteristic $p$. As a vector space the (degenerate) affine Hecke algebra is a tensor product of a polynomial algebra with the group algebra of the symmetric group: $H_d=F[x_1,\ldots,x_d]\otimes FS_d$. Multiplication is defined so that each tensor summand is a subalgebra, and $H_d$ satisfies the mixed relations $s_ix_j=x_js_i$ for $j\neq i,i+1$ (here $s_i=(i,i+1)$), and $s_ix_i=x_{i+1}s_i-1$.
Note that in addition to being a subalgebra, $FS_d$ is also a quotient of $H_d$ obtained by mapping $s_i\mapsto s_i$ and $x_1\mapsto 0$ (so that the $x_i$ map to Jucys-Murphy elements).
The polynomial subalgebra forms a maximal commutative subalgebra, so given a finite dimensional $H_d$-module $M$, we may decompose $$M=\bigoplus_{(a_1,\ldots,a_d)\in F^d}M_{(a_1,\ldots,a_d)},$$ where $$M_{(a_1,\ldots a_d)}=\lbrace m\in M|(x_i-a_i)^Nm=0,\mbox{ for }N\gg0\mbox{ and }i=1,\ldots,d \rbrace$$ is the generalized $(a_1,\ldots,a_d)$-eigenspace for the action of $x_1,\ldots,x_d$. Let $I=\mathbb{Z}1_F\subset F$ and $Rep_IH_d$ be the category of finite dimensional $H_d$-modules which are integral in the sense that if $M\in Rep_IH_d$, and $M_{(a_1,\ldots,a_d)}\neq 0$, then $(a_1,\ldots,a_d)\in I^d$.
Now, let $K_d=K_0(Rep_IH_d)$, and $K=\bigoplus_d K_d$. Then, the categorification statement is that parabolic induction and restriction give $K$ the structure of a bialgebra, and as such $$K\cong U_{\mathbb{Z}}(\mathfrak{n}).$$ In the above statement, $\mathfrak{n}\subset \mathfrak{g}$ is the maximal nilpotent subalgebra of the Kac-Moody algebra $\mathfrak{g}$ generated by the Chevalley generators $e_i$, where, if $char F=p$, then $\mathfrak{g}=\hat{sl}(p)$, and if $char F=0$, $\mathfrak{g}=\mathfrak{gl}(\infty)$. In both cases $U_{\mathbb{Z}}(\mathfrak{g})$ denotes the Kostant-Tits $\mathbb{Z}$-subalgebra of the universal enveloping algebra. Note here that the Chevalley generators are indexed by $I$.
Now, for each dominant integral weight $\Lambda=\sum_{i\in I}\lambda_i\Lambda_i$ ($\Lambda_i$ the fundamental dominant weights) for $\mathfrak{g}$, define the polynomial $$f_\Lambda=\prod_{i\in I}(x_1-i)^{\lambda_i}.$$ Then, the algebra $H_d^\Lambda=H_d/(H_d f_\Lambda H_d)$ is finite dimensional. In the case $\Lambda=\Lambda_0$, $H_d^\Lambda\cong FS_d$.
One can form $K_d(\Lambda)$ and $K(\Lambda)$ as above corresponding to the category $H_d^\Lambda-mod$. Then, the categorification statement is $$K(\Lambda)\cong V_{\mathbb{Z}}(\Lambda)$$ as $\mathfrak{g}$-modules, where $V(\Lambda)$ is the irreducible $\mathfrak{g}$-module of highest weight $\Lambda$ generated by a highest weight vector $v_+$, and $V_{\mathbb{Z}}(\Lambda)=U_\mathbb{Z}(\mathfrak{g})v_+$ is an admissible lattice. The actions of the Chevalley generators on $K$ are analogues of the functors in Ben's answer. The action of the Weyl algebra corresponds to the action of $D=\sum_{i\in I}e_i$ and $U=\sum_{i\in I}f_i$ (in characteristic 0, this is defined in the completion $\mathfrak{a}(\infty)$ of $\mathfrak{gl}(\infty)$).
One can generalize this story to $\hat{\mathfrak{sl}}_\ell$ by working with the (nondegenerate) affine Hecke algebra $H_d(t)$, where $t$ is a primitive $\ell$-th root of unity. In this case, the finite dimensional quotients are Hecke algebras of complex reflection groups. The hyperoctahedral group corresponds to the highest weight $\Lambda=2\Lambda_0$. Then $V(\Lambda)$ is a level 2 representation, hence the central element acts by $2\cdot Id$ (as in Sammy's comment).
In the second half of Kleshchev's book, the Hecke algebra is replaced by the so-called Hecke-Clifford (or Sergeev) algebra, and $\mathfrak{g}$ is of type $B_\infty$ or $A_{2\ell}^{(2)}$ depending on the ground field (or one can work in the non-degenerate case so that $\ell$ needn't be prime).
The algebras introduced by Khovanov-Lauda and Rouquier generalize this story to arbitrary symmetrizable Kac-Moody algebra. These algebras are graded, so one gets a categorification of the quantum group $U_q$, where $q$ keeps track of the grading . . .
-
Passing from towers of algebras to graded dual Hopf algebras is done by using induction and restriction on the Grothendieck groups, and then using structure constants one gets edge multiplicities for a graded graph. Dual graded graphs are generalizations of differential posets (the defining rule is $D_{\Gamma} U_{\Gamma'}-U_{\Gamma}D_{\Gamma'}=r Id$).
This generalizes the way one can start with the tower of symmetric group algebras $\bigoplus_{n\geq 0}\mathbb{C} \mathfrak{S}_n$ and get the ring of symmetric functions which is a self dual graded Hopf algebra (construction of the Hopf structure in terms of the Grothendieck groups by Zelevinsky) and finally obtain the Young lattice through Pieri rules on the ring of symmetric functions (equivalently the branching rules for the symmetric group).
https://hal-centralesupelec.archives-ouvertes.fr/hal-02419167 | State observers for reaction systems with improved convergence rate
Abstract : The problem of designing state observers for reaction systems whose convergence rate is faster than the standard asymptotic observers [7] is addressed in this paper. It is assumed that the reaction functions are known and that there are more measurements than “independent” reactions. If the unmeasurable state enters linearly in the reaction functions we propose an observer that converges in finite-time, under very weak excitation assumptions. If this dependence is nonlinear, we additionally assume that there is an element of the reaction functions vector that depends only on one unmeasurable state and that these functions are strictly monotonic. Under these conditions, a state observer that ensures exponential convergence of the states that appear in the reaction functions is designed. For the unmeasurable states that do not appear in these functions, the convergence is similar to the one of the asymptotic observers.
Document type :
Journal articles
Domain :
https://hal-centralesupelec.archives-ouvertes.fr/hal-02419167
Contributor : Myriam Baverel <>
Submitted on : Thursday, December 19, 2019 - 12:59:59 PM
Last modification on : Wednesday, September 16, 2020 - 4:50:33 PM
Citation
Romeo Ortega, Alexey Bobtsov, Denis Dochain, Nikolay Nikolayev. State observers for reaction systems with improved convergence rate. Journal of Process Control, Elsevier, 2019, 83, pp.53-62. ⟨10.1016/j.jprocont.2019.08.003⟩. ⟨hal-02419167⟩
https://www.physicsforums.com/threads/electrolytic-reduction-of-iron-ore-stoichiometry.346185/ | # Homework Help: Electrolytic reduction of iron ore - Stoichiometry
1. Oct 16, 2009
### ThetaPi
1. The problem statement, all variables and given/known data
Calculate the maximum mass of aluminum that can be made from 408 tonnes of alumina, assuming that aluminum is produced by electrolysis.
2. Relevant equations
At the cathode: $$Al^{3+} (l) + 3e^{-} \Rightarrow Al (l)$$.
At the anode: $$O^{2-} (l) \Rightarrow O_2 (g) + 4e^{-}$$. Is the oxide ion from aluminum oxide in liquid or gaseous state?
3. The attempt at a solution
Is there any limiting reagent of sorts? I have never done any stoichiometric calculations on metallurgy.
Postscript. How do we input chemistry using LaTeX? (Especially the chemical equations and symbols, e.g. how to make Al not italicized.)
Last edited: Oct 16, 2009
2. Oct 16, 2009
### cepheid
Staff Emeritus
Regarding formatting, LaTeX posts on PF are automatically in math mode, which means that all letters are italicized (which is the conventional way to typeset mathematical variables). However, the following LaTeX output (without the spaces in the tex tags) will generate the result given below it:
[ tex ] \textrm{Al}^{3+}_{(\textrm{l})} +3e^{-} \longrightarrow \textrm{Al}_{(\textrm{l})} [ /tex ]
$$\textrm{Al}^{3+}_{(\textrm{l})} +3e^{-} \longrightarrow \textrm{Al}_{(\textrm{l})}$$
where the \textrm{} enviroment takes us out of math mode and into normal text mode using the roman font.
3. Oct 16, 2009
### cepheid
Staff Emeritus
Regarding the problem, it's been a while since I've done chemistry, but isn't it merely a matter of:
1. Figuring out how many moles of aluminum oxide correspond to 408 tonnes of it.
2. Figuring out how many moles of Al are produced per mole of aluminum oxide
3. Converting that many moles of Al into a mass, assuming all of it is converted?
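Assuming molar masses of roughly 102 g/mol for alumina and 27 g/mol for aluminum (approximate values, not quoted in the problem), the three steps give:

$$n(\textrm{Al}_2\textrm{O}_3) \approx \frac{408\times 10^{6}\ \textrm{g}}{102\ \textrm{g/mol}} = 4.0\times 10^{6}\ \textrm{mol}$$

$$n(\textrm{Al}) = 2\times 4.0\times 10^{6}\ \textrm{mol} = 8.0\times 10^{6}\ \textrm{mol}, \qquad m(\textrm{Al}) \approx 8.0\times 10^{6}\ \textrm{mol}\times 27\ \textrm{g/mol} \approx 216\ \textrm{tonnes}$$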
Edit, and here's a non LaTeX alternative which might or might not be easier for chemistry (your choice). Use the sup and sub tags for superscripts and subscripts respectively
This input, without spaces in the tags, produces the output below it:
Al[ sub ](l)[ /sub ][ sup ]3+[ /sup ] + 3e[ sup ]-[ /sup ] → Al [ sub ] (l) [ /sub ]
Al(l)3+ + 3e- → Al(l)
http://mathhelpforum.com/advanced-algebra/29941-groups.html | # Math Help - Groups
1. ## Groups
Hi. Hope someone can help with my problem.
Let G be a group with identity e. If A and B are subgroups of G then, from the Direct Product of Groups Theory, AxB is a subgroup of GxG. Use this result througout.
a) Show that, if A and B are two normal subgroups of G, then AxB is a normal subgroup of GxG (Not sure if I am to prove that A and B are two normal subgroups as well as proving AxB is a normal subgroup of GxG).
b) Let H be the subset
H={(g,g):g E G}
Show that H is a subgroup of GxG
Normally I can do these types of questions; it's just the GxG and AxB bit that has confused me a bit.
Thanx
2. Originally Posted by bex23
a) Show that, if A and B are two normal subgroups of G, then AxB is normal subgroup of GxG (Not sure if I am to prove that A and B are two normal subgroups as well as proving AxB is a normal subgroup of GxG).
You need to prove $(g_1,g_2)(a,b)(g_1^{-1},g_2^{-1})\in A\times B$ thus $(g_1ag_1^{-1},g_2bg_2^{-1})\in A\times B$ because $g_1ag_1^{-1}\in A$ and $g_2bg_2^{-1} \in B$.
3. Originally Posted by ThePerfectHacker
You need to prove $(g_1,g_2)(a,b)(g_1^{-1},g_2^{-1})\in A\times B$ thus $(g_1ag_1^{-1},g_2bg_2^{-1})\in A\times B$ because $g_1ag_1^{-1}\in A$ and $g_2bg_2^{-1} \in B$.
I seem to be having a brain dead day, as I can't even remember how to do that
4. ## Groups
Right, I think I have figured out part a); it's just part b) that is giving me problems. I know that to prove that H is a subgroup of GxG, I need to prove the three subgroup axioms: Closure, Identity and Inverse. So far I have only got as far as Closure, and even then I am not sure that this is correct; any help will be greatly appreciated.
Closure:
Let (g_1, g_2) and (g_3, g_4) be any two elements of GxG. By the definition of GxG, we know that g_1, g_2, g_3 and g_4 are elements of G.
Since G is a group, it is closed under the operation o and we have
(g_1,g_2)o(g_3,g_4)=(g_1 o g_3, g_2 o g_4)
which belongs to H
So the closure axiom holds.
Is this correct and if it is (which I don't think it is) can you help me get started with the other two axioms.
Thanx
Bex
5. It is correct.
For identity element consider $(e,e)$, show it is identity element.
For inverse, note $(g_1,g_2)^{-1} = (g_1^{-1},g_2^{-1})$.
6. ## Groups
Identity:
To prove the identity axiom we use the identity element (e,e) which belongs to GxG and the arbitrary element
(g_1,g_2) which also belongs to GxG, Thus
(e,e) o (g_1,g_2)= (e o g_1, e o g_2)
=(g_1,g_2) which belongs to H
Inverse:
For this I have two methods (neither which I am very confidant on).
Method1:
To prove the inverse axiom, let (g_1,g_2) be an element of GxG, where g_1^-1 and g_2^-1 are the inverses of g_1 and g_2 respectively. Proving that (g_1^-1, g_2^-1) is the inverse of (g_1,g_2) in GxG will prove the inverse axiom (I hope!!).
We have
(g_1,g_2)o(g_1^-1, g_2^-1)=(g_1og_1^-1, g_2og_2^-1)
=(e,e)
Also
(g_1^-1, g_2^-1)o(g_1,g_2)=(e,e)
Hence, (g_1^-1,g_2^-1) is the inverse of (g_1,g_2)
Method 2:
Let (g_1,g_2) be an element of GxG then the inverse is
(g_1^-1,g_2^-1), which belongs to H as it is of the form (g,g) for g E G.
I think method 2 is the more wrong of the two, but I put it in here just in case.
Have I gone completely off the mark here? I was surprised when you said that the first part was right, and so now the rest will probably be all wrong. Lol!! Can you just tell me if I have gone in the right direction, and if not, point me in the right one.
Thanx
Bex
http://mathoverflow.net/questions/179408/when-does-convolution-preserve-the-size-of-a-function | # When does convolution preserve the `size' of a function?
For a positive function $f$ and positive measures $\mu, \nu$, does $$\mu\ast f\leq \nu\ast f \Rightarrow \|\mu\|\leq \|\nu\|?$$
More details: Let $G$ be a locally compact group, $C(G)$ be the space of continuous functions on $G$, $M(G)$ be the finite, Borel measures on $G$. The convolution of a measure and function is defined by $$\mu\ast f(x) = \int f(y^{-1}x)d\mu (y)$$.
If $0\neq f\in C(G)$ is positive and compactly supported, and $\mu, \nu\in M(G)^+$ then the above implication holds. Simply take the Haar integral and evaluate the first inequality to find the second.
A similar approach works if $G$ is amenable and $f$ has nonzero mean value.
My questions: Does the implication above hold for all (edit: bounded) $f\in C(G)^+$ if $G$ is amenable? If $G$ is not amenable, what are the functions for which it is true?
-
It seems that Justin Moore has some results on single functions witnessing non-amenability in his answer to his question mathoverflow.net/questions/60247/… – Ben Willson Aug 27 '14 at 1:38
In the non-topological setting, I think this question is very closely related to supramenability. – Ben Willson Nov 12 '14 at 2:36
Do you want to require $f$ to be bounded? Otherwise there are easy counterexamples: take $G = \mathbb{Z}$, $\mu(1) = 2$, $\nu(0) = 1$, and $\mu$ and $\nu$ zero at all other points, and $f(n) = 2^n$. Then $\mu * f = \nu*f$ but $\|\mu\| = 2 > 1 = \|\nu\|$.
Thanks. I am mostly interested in bounded $f$, but it is good to be reminded of unbounded examples. – Ben Willson Aug 26 '14 at 7:41
https://economics.stackexchange.com/questions/10099/what-is-the-difference-between-herfindahl-index-and-the-concentration-ratio/10103 | # What is the difference between Herfindahl Index and the Concentration Ratio
I was recently reading about the Herfindahl Index, and what I've learned so far is that the HHI is preferred over the Concentration Ratio. However, I didn't quite understand the reasoning that led to this conclusion. What impact does squaring have on the final result? Any hints or suggestions for further reading will be appreciated.
From the Industrial Organization by Belleflame and Petiz (Page 34/35, Chapter 2):
While the m-firm concentration ratio adds market shares of a small number of firms in the market, the so-called Herfindahl index (also known as Herfindahl–Hirschman index) considers the full distribution of market shares.
We can conclude that the mathematical approach of the HHI is better suited to describing the full market, rather than observing only the combined share of the few largest firms. Also from the same book:
the Herfindahl index provides a better measure of concentration as it captures both the number of firms and the dispersion of the market shares.
Hence the squared market shares.
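A small numerical sketch (hypothetical market shares) makes the effect of the squaring visible: two markets can have the same four-firm concentration ratio while their Herfindahl indices differ considerably.

```python
# Two hypothetical markets with the same four-firm concentration ratio (CR4)
# but very different Herfindahl indices.
market_a = [0.25, 0.25, 0.25, 0.25]   # four equally sized firms
market_b = [0.70, 0.10, 0.10, 0.10]   # one dominant firm

def cr(shares, m=4):
    """m-firm concentration ratio: sum of the m largest market shares."""
    return sum(sorted(shares, reverse=True)[:m])

def hhi(shares):
    """Herfindahl-Hirschman index: sum of squared market shares."""
    return sum(s ** 2 for s in shares)

print(cr(market_a), hhi(market_a))   # 1.0 0.25
print(cr(market_b), hhi(market_b))   # 1.0 0.52
```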
For your information, there is another measure called the Lerner Index, although it is a snapshot of the intensity of competition rather than of concentration. You can calculate L as the difference between price and marginal cost as a percentage of the price. More formally: $L=\frac{p-C'}{p}$
But it is easily observed that L ignores some dynamics of the markets. Lower prices do not necessarily mean high levels of competition.
https://en.formulasearchengine.com/wiki/Distributed_source_coding | # Distributed source coding
Distributed source coding (DSC) is an important problem in information theory and communication. DSC problems concern the compression of multiple correlated information sources that do not communicate with each other.[1] By modeling the correlation between multiple sources at the decoder side together with channel codes, DSC is able to shift the computational complexity from the encoder side to the decoder side, and therefore provides appropriate frameworks for applications with a complexity-constrained sender, such as sensor networks and video/multimedia compression (see distributed video coding[2]). One of the main properties of distributed source coding is that the computational burden in the encoders is shifted to the joint decoder.
## History
In 1973, David Slepian and Jack Keil Wolf proposed the information theoretical lossless compression bound on distributed compression of two correlated i.i.d. sources X and Y.[3] After that, this bound was extended to cases with more than two sources by Thomas M. Cover in 1975,[4] while the theoretical results in the lossy compression case were presented by Aaron D. Wyner and Jacob Ziv in 1976.[5]
Although the theorems on DSC were proposed in the 1970s, it took about 30 years before practical techniques were attempted, based on the idea, proposed in 1974 by Aaron D. Wyner, that DSC is closely related to channel coding.[6] The asymmetric DSC problem was addressed by S. S. Pradhan and K. Ramchandran in 1999; their work focused on statistically dependent binary and Gaussian sources and used scalar and trellis coset constructions to solve the problem.[7] They further extended the work to the symmetric DSC case.[8]
Syndrome decoding technology was first used in distributed source coding by the DISCUS system of SS Pradhan and K Ramachandran (Distributed Source Coding Using Syndromes).[7] They compress binary block data from one source into syndromes and transmit data from the other source uncompressed as side information. This kind of DSC scheme achieves asymmetric compression rates per source and results in asymmetric DSC. This asymmetric DSC scheme can be easily extended to the case of more than two correlated information sources. There are also some DSC schemes that use parity bits rather than syndrome bits.
The correlation between two sources in DSC has been modeled as a virtual channel which is usually referred to as a binary symmetric channel.[9][10]
Starting from DISCUS, DSC has attracted significant research activity and more sophisticated channel coding techniques have been adopted into DSC frameworks, such as Turbo Code, LDPC Code, and so on.
Similar to the previous lossless coding framework based on the Slepian–Wolf theorem, efforts have been made on lossy cases based on the Wyner–Ziv theorem. Theoretical results on quantizer designs were provided by R. Zamir and S. Shamai,[11] and different frameworks have been proposed based on this result, including a nested lattice quantizer and a trellis-coded quantizer.
Moreover, DSC has been used in video compression for applications which require low complexity video encoding, such as sensor networks, multiview video camcorders, and so on.[12]
Using deterministic and probabilistic models of the correlation between two information sources, DSC schemes with more general compression rates have been developed.[13][14][15] In these non-asymmetric schemes, both of the two correlated sources are compressed.
Under a certain deterministic assumption of correlation between information sources, a DSC framework in which any number of information sources can be compressed in a distributed way has been demonstrated by X. Cao and M. Kuijper.[16] This method performs non-asymmetric compression with flexible rates for each source, achieving the same overall compression rate as repeatedly applying asymmetric DSC for more than two sources. Then, by investigating the unique connection between syndromes and complementary codewords of linear codes, they have translated the major steps of DSC joint decoding into syndrome decoding followed by channel encoding via a linear block code and also via its complement code,[17] which theoretically illustrated a method of assembling a DSC joint decoder from linear code encoders and decoders.
## Theoretical bounds
The information theoretical lossless compression bound on DSC (the Slepian–Wolf bound) was first proposed by David Slepian and Jack Keil Wolf in terms of entropies of correlated information sources in 1973.[3] They also showed that two isolated sources can compress data as efficiently as if they were communicating with each other. This bound was extended to the case of more than two correlated sources by Thomas M. Cover in 1975.[4]
Similar results were obtained in 1976 by Aaron D. Wyner and Jacob Ziv with regard to lossy coding of joint Gaussian sources.[5]
### Slepian–Wolf bound
Distributed coding is the coding of two or more dependent sources with separate encoders and a joint decoder. Given two statistically dependent i.i.d. finite-alphabet random sequences X and Y, the Slepian–Wolf theorem gives the following theoretical bounds on the lossless coding rates for distributed coding of the two sources:[3]
${\displaystyle R_{X}\geq H(X|Y),\,}$
${\displaystyle R_{Y}\geq H(Y|X),\,}$
${\displaystyle R_{X}+R_{Y}\geq H(X,Y).\,}$
If both the encoder and decoder of the two sources are independent, the lowest rate we can achieve for lossless compression is ${\displaystyle H(X)}$ and ${\displaystyle H(Y)}$ for ${\displaystyle X}$ and ${\displaystyle Y}$ respectively, where ${\displaystyle H(X)}$ and ${\displaystyle H(Y)}$ are the entropies of ${\displaystyle X}$ and ${\displaystyle Y}$. However, with joint decoding, if vanishing error probability for long sequences is accepted, the Slepian–Wolf theorem shows that much better compression rate can be achieved. As long as the total rate of ${\displaystyle X}$ and ${\displaystyle Y}$ is larger than their joint entropy ${\displaystyle H(X,Y)}$ and none of the sources is encoded with a rate larger than its entropy, distributed coding can achieve arbitrarily small error probability for long sequences.
A special case of distributed coding is compression with decoder side information, where source ${\displaystyle Y}$ is available at the decoder side but not accessible at the encoder side. This can be treated as the condition that ${\displaystyle R_{Y}=H(Y)}$ has already been used to encode ${\displaystyle Y}$, while we intend to use ${\displaystyle H(X|Y)}$ to encode ${\displaystyle X}$. The whole system is operating in an asymmetric way (the compression rates for the two sources are asymmetric).
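As a rough numerical illustration of the region described by these bounds (the joint distribution below is chosen arbitrarily), the corner points and the sum-rate bound can be computed directly from a joint pmf:

```python
import numpy as np

# Joint pmf p(x, y) of two correlated binary sources (values chosen only for illustration).
p = np.array([[0.45, 0.05],
              [0.05, 0.45]])

def entropy(dist):
    """Entropy in bits of a probability vector or matrix."""
    d = dist[dist > 0]
    return float(-np.sum(d * np.log2(d)))

H_xy = entropy(p)                                 # H(X,Y)
H_x, H_y = entropy(p.sum(1)), entropy(p.sum(0))   # marginal entropies
H_x_given_y = H_xy - H_y                          # H(X|Y)
H_y_given_x = H_xy - H_x                          # H(Y|X)

# Slepian-Wolf region: R_X >= H(X|Y), R_Y >= H(Y|X), R_X + R_Y >= H(X,Y).
# Separate encoding and decoding would need H(X) + H(Y) = 2 bits in total,
# whereas joint decoding only needs H(X,Y), about 1.47 bits.
print(H_x_given_y, H_y_given_x, H_xy)
```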
### Wyner–Ziv bound
Shortly after the Slepian–Wolf theorem on lossless distributed compression was published, the extension to lossy compression with decoder side information was proposed as the Wyner–Ziv theorem.[5] As in the lossless case, two statistically dependent i.i.d. sources ${\displaystyle X}$ and ${\displaystyle Y}$ are given, where ${\displaystyle Y}$ is available at the decoder side but not accessible at the encoder side. Instead of the lossless compression of the Slepian–Wolf theorem, the Wyner–Ziv theorem considers the lossy compression case.
The Wyner–Ziv theorem presents the achievable lower bound for the bit rate of ${\displaystyle X}$ at given distortion ${\displaystyle D}$. It was found that for Gaussian memoryless sources and mean-squared error distortion, the lower bound for the bit rate of ${\displaystyle X}$ remains the same no matter whether side information is available at the encoder or not.
## Virtual channel
- Deterministic model
- Probabilistic model
## Asymmetric DSC vs. symmetric DSC
Asymmetric DSC means that different bitrates are used in coding the input sources, while the same bitrate is used in symmetric DSC. Taking a DSC design with two sources as an example: ${\displaystyle X}$ and ${\displaystyle Y}$ are two discrete, memoryless, uniformly distributed sources which generate sets of variables ${\displaystyle \mathbf {x} }$ and ${\displaystyle \mathbf {y} }$ of length 7 bits, and the Hamming distance between ${\displaystyle \mathbf {x} }$ and ${\displaystyle \mathbf {y} }$ is at most one. The Slepian–Wolf bound for them is:
${\displaystyle R_{X}+R_{Y}\geq 10}$
${\displaystyle R_{X}\geq 3}$
${\displaystyle R_{Y}\geq 3}$
This means that the theoretical bound on the sum rate is ${\displaystyle R_{X}+R_{Y}=10}$, and symmetric DSC means 5 bits for each source. Other pairs with ${\displaystyle R_{X}+R_{Y}=10}$ are asymmetric cases with different bit rate distributions between ${\displaystyle X}$ and ${\displaystyle Y}$, where ${\displaystyle R_{X}=3}$, ${\displaystyle R_{Y}=7}$ and ${\displaystyle R_{Y}=3}$, ${\displaystyle R_{X}=7}$ represent the two extreme cases, called decoding with side information.
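These numbers can be counted directly: given ${\displaystyle \mathbf {x} }$, the sequence ${\displaystyle \mathbf {y} }$ can be any of the ${\displaystyle 1+7=8}$ strings within Hamming distance one of ${\displaystyle \mathbf {x} }$; assuming all eight possibilities are equally likely, ${\displaystyle H(Y|X)=\log _{2}8=3}$ bits, by symmetry ${\displaystyle H(X|Y)=3}$ bits, and hence ${\displaystyle H(X,Y)=H(X)+H(Y|X)=7+3=10}$ bits.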
## Practical distributed source coding
### Slepian–Wolf coding – lossless distributed coding
It was understood in 1974 that Slepian–Wolf coding is closely related to channel coding,[6] and after about 30 years, practical DSC started to be implemented using different channel codes. The motivation behind the use of channel codes comes from the two-source case: the correlation between the input sources can be modeled as a virtual channel which has source ${\displaystyle X}$ as input and source ${\displaystyle Y}$ as output. The DISCUS system proposed by S. S. Pradhan and K. Ramchandran in 1999 implemented DSC with syndrome decoding, which worked for the asymmetric case and was further extended to the symmetric case.[7][8]
The basic framework of syndrome based DSC is that, for each source, its input space is partitioned into several cosets according to the particular channel coding method used. Every input of each source gets an output indicating which coset the input belongs to, and the joint decoder can decode all inputs by received coset indices and dependence between sources. The design of channel codes should consider the correlation between input sources.
A group of codes can be used to generate coset partitions,[18] such as trellis codes and lattice codes. Pradhan and Ramchandran designed rules for the construction of sub-codes for each source, and presented results of trellis-based coset constructions in DSC, based on convolutional codes and set-partitioning rules as in Trellis modulation, as well as lattice-code-based DSC.[7][8] After this, an embedded trellis code was proposed for asymmetric coding as an improvement over their results.[19]
After DISCUS system was proposed, more sophisticated channel codes have been adapted to the DSC system, such as Turbo Code, LDPC Code and Iterative Channel Code. The encoders of these codes are usually simple and easy to implement, while the decoders have much higher computational complexity and are able to get good performance by utilizing source statistics. With sophisticated channel codes which have performance approaching the capacity of the correlation channel, corresponding DSC system can approach the Slepian–Wolf bound.
Although most research focused on DSC with two dependent sources, Slepian–Wolf coding has been extended to the case of more than two input sources, and sub-code generation methods from one channel code were proposed by V. Stankovic, A. D. Liveris, etc., given particular correlation models.[20]
#### General theorem of Slepian–Wolf coding with syndromes for two sources
Theorem: Any pair of correlated uniformly distributed sources, ${\displaystyle X,Y\in \left\{0,1\right\}^{n}}$, with ${\displaystyle \mathbf {d_{H}} (X,Y)\leq t}$, can be compressed separately at a rate pair ${\displaystyle (R_{1},R_{2})}$ such that ${\displaystyle R_{1},R_{2}\geq n-k,R_{1}+R_{2}\geq 2n-k}$, where ${\displaystyle R_{1}}$ and ${\displaystyle R_{2}}$ are integers, and ${\displaystyle k\leq n-\log(\sum _{i=0}^{t}{n \choose i})}$. This can be achieved using an ${\displaystyle (n,k,2t+1)}$ binary linear code.
Proof: The Hamming bound for an ${\displaystyle (n,k,2t+1)}$ binary linear code is ${\displaystyle k\leq n-\log(\sum _{i=0}^{t}{n \choose i})}$, and we have Hamming code achieving this bound, therefore we have such a binary linear code ${\displaystyle \mathbf {C} }$ with ${\displaystyle k\times n}$ generator matrix ${\displaystyle \mathbf {G} }$. Next we will show how to construct syndrome encoding based on this linear code.
Suppose there are two different input pairs with the same syndromes, that means there are two different strings ${\displaystyle \mathbf {u^{1},u^{2}} \in \left\{0,1\right\}^{k}}$, such that ${\displaystyle \mathbf {u^{1}G+c_{s}=e} }$ and ${\displaystyle \mathbf {u^{2}G+c_{s}=e} }$. Thus we will have ${\displaystyle \mathbf {(u^{1}-u^{2})G=0} }$. Because minimum Hamming weight of the code ${\displaystyle \mathbf {C} }$ is ${\displaystyle 2t+1}$, the distance between ${\displaystyle \mathbf {u_{1}G} }$ and ${\displaystyle \mathbf {u_{2}G} }$ is ${\displaystyle \geq 2t+1}$. On the other hand, according to ${\displaystyle w(\mathbf {e} )\leq t}$ together with ${\displaystyle \mathbf {u^{1}G+c_{s}=e} }$ and ${\displaystyle \mathbf {u^{2}G+c_{s}=e} }$, we will have ${\displaystyle d_{H}(\mathbf {u^{1}G,c_{s}} )\leq t}$ and ${\displaystyle d_{H}(\mathbf {u^{2}G,c_{s}} )\leq t}$, which contradict with ${\displaystyle d_{H}(\mathbf {u^{1}G,u^{2}G} )\geq 2t+1}$. Therefore, we cannot have more than one input pairs with the same syndromes.
Therefore, we can successfully compress the two dependent sources with constructed subcodes from an ${\displaystyle (n,k,2t+1)}$ binary linear code, with rate pair ${\displaystyle (R_{1},R_{2})}$ such that ${\displaystyle R_{1},R_{2}\geq n-k,R_{1}+R_{2}\geq 2n-k}$, where ${\displaystyle R_{1}}$ and ${\displaystyle R_{2}}$ are integers, and ${\displaystyle k\leq n-\log(\sum _{i=0}^{t}{n \choose i})}$. Log indicates Log2.
#### Slepian–Wolf coding example
Take the same example as in the previous Asymmetric DSC vs. symmetric DSC section. This part presents the corresponding DSC schemes with coset codes and syndromes, including the asymmetric case and the symmetric case. The Slepian–Wolf bound for the DSC design is shown in the previous section.
##### Asymmetric case (${\displaystyle R_{X}=3}$, ${\displaystyle R_{Y}=7}$)
In this case, the length of an input variable ${\displaystyle \mathbf {y} }$ from source ${\displaystyle Y}$ is 7 bits, therefore it can be sent lossless with 7 bits independent of any other bits. Based on the knowledge that ${\displaystyle \mathbf {x} }$ and ${\displaystyle \mathbf {y} }$ have Hamming distance at most one, for input ${\displaystyle \mathbf {x} }$ from source ${\displaystyle X}$, since the receiver already has ${\displaystyle \mathbf {y} }$, the only possible ${\displaystyle \mathbf {x} }$ are those with at most 1 distance from ${\displaystyle \mathbf {y} }$. If we model the correlation between two sources as a virtual channel, which has input ${\displaystyle \mathbf {x} }$ and output ${\displaystyle \mathbf {y} }$, as long as we get ${\displaystyle \mathbf {y} }$, all we need to successfully "decode" ${\displaystyle \mathbf {x} }$ is "parity bits" with particular error correction ability, taking the difference between ${\displaystyle \mathbf {x} }$ and ${\displaystyle \mathbf {y} }$ as channel error. We can also model the problem with cosets partition. That is, we want to find a channel code, which is able to partition the space of input ${\displaystyle X}$ into several cosets, where each coset has a unique syndrome associated with it. With a given coset and ${\displaystyle \mathbf {y} }$, there is only one ${\displaystyle \mathbf {x} }$ that is possible to be the input given the correlation between two sources.
In this example, we can use the ${\displaystyle (7,4,3)}$ binary Hamming Code ${\displaystyle \mathbf {C} }$, with parity check matrix ${\displaystyle \mathbf {H} }$. For an input ${\displaystyle \mathbf {x} }$ from source ${\displaystyle X}$, only the syndrome given by ${\displaystyle \mathbf {s} =\mathbf {H} \mathbf {x} }$ is transmitted, which is 3 bits. With received ${\displaystyle \mathbf {y} }$ and ${\displaystyle \mathbf {s} }$, suppose there are two inputs ${\displaystyle \mathbf {x_{1}} }$ and ${\displaystyle \mathbf {x_{2}} }$ with same syndrome ${\displaystyle \mathbf {s} }$. That means ${\displaystyle \mathbf {H} \mathbf {x_{1}} =\mathbf {H} \mathbf {x_{2}} }$, which is ${\displaystyle \mathbf {H} (\mathbf {x_{1}} -\mathbf {x_{2}} )=0}$. Since the minimum Hamming weight of ${\displaystyle (7,4,3)}$ Hamming Code is 3, ${\displaystyle d_{H}(\mathbf {x_{1}} ,\mathbf {x_{2}} )\geq 3}$. Therefore the input ${\displaystyle \mathbf {x} }$ can be recovered since ${\displaystyle d_{H}(\mathbf {x} ,\mathbf {y} )\leq 1}$.
Similarly, the bit distribution with ${\displaystyle R_{X}=7}$, ${\displaystyle R_{Y}=3}$ can be achieved by reversing the roles of ${\displaystyle X}$ and ${\displaystyle Y}$.
##### Symmetric case
In the symmetric case, what we want is an equal bitrate for the two sources: 5 bits each, with separate encoders and a joint decoder. We still use linear codes for this system, as we did for the asymmetric case. The basic idea is similar, but in this case we need to perform the coset partition for both sources, and for a pair of received syndromes (each corresponding to one coset), only one pair of input variables is possible given the correlation between the two sources.
Suppose we have a pair of linear codes ${\displaystyle \mathbf {C_{1}} }$ and ${\displaystyle \mathbf {C_{2}} }$ and an encoder–decoder pair based on these linear codes which can achieve symmetric coding. The encoder output is given by ${\displaystyle \mathbf {s_{1}} =\mathbf {H_{1}} \mathbf {x} }$ and ${\displaystyle \mathbf {s_{2}} =\mathbf {H_{2}} \mathbf {y} }$. Suppose there exist two pairs of valid inputs ${\displaystyle \mathbf {x_{1}} ,\mathbf {y_{1}} }$ and ${\displaystyle \mathbf {x_{2}} ,\mathbf {y_{2}} }$ generating the same syndromes, i.e. ${\displaystyle \mathbf {H_{1}} \mathbf {x_{1}} =\mathbf {H_{1}} \mathbf {x_{2}} }$ and ${\displaystyle \mathbf {H_{2}} \mathbf {y_{1}} =\mathbf {H_{2}} \mathbf {y_{2}} }$. Writing ${\displaystyle w(\cdot )}$ for Hamming weight, the correlation between the sources gives ${\displaystyle \mathbf {x_{1}} +\mathbf {y_{1}} =\mathbf {e_{1}} }$ and ${\displaystyle \mathbf {x_{2}} +\mathbf {y_{2}} =\mathbf {e_{2}} }$ with ${\displaystyle w(\mathbf {e_{1}} ),w(\mathbf {e_{2}} )\leq 1}$, while the equal syndromes give ${\displaystyle \mathbf {x_{1}} +\mathbf {x_{2}} \in \mathbf {C_{1}} }$ and ${\displaystyle \mathbf {y_{1}} +\mathbf {y_{2}} \in \mathbf {C_{2}} }$. Adding these relations, we get
${\displaystyle (\mathbf {x_{1}} +\mathbf {x_{2}} )+(\mathbf {y_{1}} +\mathbf {y_{2}} )=\mathbf {e_{1}} +\mathbf {e_{2}} =\mathbf {e_{3}} ,}$
where ${\displaystyle \mathbf {e_{3}} =\mathbf {e_{2}} +\mathbf {e_{1}} }$ and ${\displaystyle w(\mathbf {e_{3}} )\leq 2}$, so distinct input pairs would have to produce a codeword of ${\displaystyle \mathbf {C_{1}} }$ and a codeword of ${\displaystyle \mathbf {C_{2}} }$ within distance ${\displaystyle 2}$ of each other. That means, as long as the minimum distance between the two codes is at least ${\displaystyle 3}$, we can achieve error-free decoding.
The two codes ${\displaystyle \mathbf {C_{1}} }$ and ${\displaystyle \mathbf {C_{2}} }$ can be constructed as subcodes of the ${\displaystyle (7,4,3)}$ Hamming code and thus each has minimum distance at least ${\displaystyle 3}$. Given the generator matrix ${\displaystyle \mathbf {G} }$ of the original Hamming code, the generator matrix ${\displaystyle \mathbf {G_{1}} }$ for ${\displaystyle \mathbf {C_{1}} }$ is constructed by taking any two rows from ${\displaystyle \mathbf {G} }$, and ${\displaystyle \mathbf {G_{2}} }$ is constructed from the remaining two rows of ${\displaystyle \mathbf {G} }$. The corresponding ${\displaystyle (5\times 7)}$ parity-check matrix for each subcode can be derived from its generator matrix and used to generate the syndrome bits.
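For concreteness, here is a small Python sketch of this subcode construction. The systematic generator matrix below is one standard choice for the ${\displaystyle (7,4,3)}$ Hamming code; the specific matrix and the particular split into two rows each are assumptions made for illustration.

```python
import numpy as np
from itertools import product

# One systematic generator matrix of the (7,4,3) Hamming code (an assumed choice).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

# Split the four rows into two (7,2) subcodes C1 and C2.
G1, G2 = G[:2], G[2:]

def codewords(gen):
    """All codewords of the binary code generated by the rows of gen."""
    k = gen.shape[0]
    return [tuple(np.array(m) @ gen % 2) for m in product([0, 1], repeat=k)]

# Every nonzero codeword of each subcode is also a codeword of the Hamming code,
# so each subcode inherits minimum distance (= minimum nonzero weight) at least 3.
for sub in (G1, G2):
    assert min(sum(c) for c in codewords(sub) if any(c)) >= 3
```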
### Wyner–Ziv coding – lossy distributed coding
In general, a Wyner–Ziv coding scheme is obtained by adding a quantizer and a de-quantizer to the Slepian–Wolf coding scheme. Therefore, a Wyner–Ziv coder design can focus on the quantizer and the corresponding reconstruction method. Several quantizer designs have been proposed, such as a nested lattice quantizer,[21] a trellis code quantizer,[22] and the Lloyd quantization method.[23]
### Large scale distributed quantization
Unfortunately, the above approaches do not scale (in design or operational complexity requirements) to sensor networks of large size, the scenario where distributed compression is most helpful. If there are N sources transmitting at R bits each (with some distributed coding scheme), the number of possible reconstructions scales as ${\displaystyle 2^{NR}}$. Even for moderate values of N and R (say N = 10, R = 2), prior design schemes become impractical. Recently, an approach[24] using ideas borrowed from Fusion Coding of Correlated Sources has been proposed, in which design and operational complexity are traded against decoder performance. This has allowed distributed quantizer design for network sizes reaching 60 sources, with substantial gains over traditional approaches.
The central idea is the presence of a bit-subset selector which maintains, for each source, a certain subset of the received bits (NR bits in total, in the above example). Let ${\displaystyle {\mathcal {B}}}$ be the set of all subsets of the NR bits, i.e.
${\displaystyle {\mathcal {B}}=2^{\{1,...,NR\}}}$
Then, we define the bit-subset selector mapping to be
${\displaystyle {\mathcal {S}}:\{1,...,N\}\rightarrow {\mathcal {B}}}$
Note that each choice of the bit-subset selector imposes a storage requirement (C) that is exponential in the cardinality of the set of chosen bits.
${\displaystyle C=\sum _{n=1}^{N}2^{|{\mathcal {S}}(n)|}}$
This allows a judicious choice of bits that minimize the distortion, given the constraints on decoder storage. Additional limitations on the set of allowable subsets are still needed. The effective cost function that needs to be minimized is a weighted sum of distortion and decoder storage
${\displaystyle J=D+\lambda C}$
The system design is performed by iteratively (and incrementally) optimizing the encoders, decoder and bit-subset selector till convergence.
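As a toy illustration of the quantities above, the following sketch evaluates the storage cost ${\displaystyle C}$ and the weighted cost ${\displaystyle J}$ for one hypothetical bit-subset selector; all of the specific numbers (the selector itself, ${\displaystyle \lambda }$ and the distortion ${\displaystyle D}$) are made up purely for the example.

```python
# Hypothetical setup: N sources at R bits each, so N*R received bits in total.
N, R = 10, 2

# A made-up bit-subset selector S: source n keeps its own two bit positions,
# and source 0 additionally looks at the two bits of source 2.
S = {n: {R * n, R * n + 1} for n in range(N)}
S[0] |= {4, 5}

# Decoder storage C = sum_n 2^{|S(n)|}, and the weighted cost J = D + lambda * C.
C = sum(2 ** len(S[n]) for n in range(N))
lam, D = 0.01, 1.7            # made-up Lagrange weight and (hypothetical) distortion
J = D + lam * C
print(C, J)                   # C = 9*4 + 16 = 52
```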
## Non-asymmetric DSC for more than two sources
The syndrome approach can still be used for more than two sources. Consider ${\displaystyle a}$ binary sources of length ${\displaystyle n}$, ${\displaystyle \mathbf {x} _{1},\mathbf {x} _{2},\cdots ,\mathbf {x} _{a}\in \{0,1\}^{n}}$. Let ${\displaystyle \mathbf {H} _{1},\mathbf {H} _{2},\cdots ,\mathbf {H} _{a}}$ be the corresponding coding matrices of sizes ${\displaystyle m_{1}\times n,m_{2}\times n,\cdots ,m_{a}\times n}$. Then the input binary sources are compressed into ${\displaystyle \mathbf {s} _{1}=\mathbf {H} _{1}\mathbf {x} _{1},\mathbf {s} _{2}=\mathbf {H} _{2}\mathbf {x} _{2},\cdots ,\mathbf {s} _{a}=\mathbf {H} _{a}\mathbf {x} _{a}}$, a total of ${\displaystyle m=m_{1}+m_{2}+\cdots +m_{a}}$ bits. Clearly, two different source tuples cannot both be recovered if they share the same syndrome; in other words, if all source tuples of interest have different syndromes, then one can recover them losslessly.
A general theoretical result does not seem to exist. However, for a restricted kind of source, the so-called Hamming source,[25] in which at most one source differs from the rest and in at most one bit location, practical lossless DSC is shown to exist in some cases. For the case when there are more than two sources, the number of source tuples in a Hamming source is ${\displaystyle 2^{n}(an+1)}$. Therefore, the packing bound ${\displaystyle 2^{m}\geq 2^{n}(an+1)}$ obviously has to be satisfied. When the packing bound is satisfied with equality, we may call such a code perfect (an analogue of a perfect code in error-correcting codes).[25]
The simplest set of ${\displaystyle a,n,m}$ satisfying the packing bound with equality is ${\displaystyle a=3,n=5,m=9}$. However, it turns out that such a syndrome code does not exist.[26] The simplest (perfect) syndrome code with more than two sources has ${\displaystyle n=21}$ and ${\displaystyle m=27}$.
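Both parameter sets above meet the packing bound with equality (taking ${\displaystyle a=3}$ in both cases), which is easy to verify:

```python
# Packing bound 2^m >= 2^n * (a*n + 1); both parameter sets meet it with equality.
def packing_slack(a, n, m):
    return 2**m - 2**n * (a * n + 1)

assert packing_slack(3, 5, 9) == 0      # 512 == 32 * 16
assert packing_slack(3, 21, 27) == 0    # 2**27 == 2**21 * 64
```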
# Finding minimum weight $k$ cliques in a complete graph
Consider an undirected weighted complete graph $G = (V, E)$. The edge weights indicate the similarity between nodes: the smaller $w_{ij}$ is, the more similar $i$ and $j$ are to each other.
I'm trying to find a subset of $k$ nodes that do not look too "similar" to one another, $\textrm{i.e.}$ an induced subgraph $G_k = (V_k, E_k)$ on $k$ nodes s.t. $\sum_{(i,j) \in E_k} w_{ij}$ is maximized.
I'm not looking for an exact solution, but I do want to know whether there is an approximation algorithm that is easy to implement and still has some kind of performance guarantee.
• Do you really want an approximation algorithm that is NOT easy to implement? – Yoshio Okamoto Mar 2 '13 at 4:07
• Negative of the function you are trying to maximize can be shown to be a submodular function on the set of nodes. So your problem can be stated as a submodular minimization problem, which has been well studied. There must be some approximation algorithms for minimizing over subsets of fixed size $k$. – polkjh Mar 2 '13 at 5:57
• @YoshioOkamoto: Oops...should be "easy to implement". :) – derekhh Mar 2 '13 at 6:11
• The submodular function we get in your case is even more special, it is an increasing function. It might be possible to use that too. But these are some papers on minimizing general submodular functions with constraints on size of subsets. algorithmofsaintqdd.googlecode.com/svn/trunk/Papers/ML/ICML2011/… and arxiv.org/pdf/0805.1071v3.pdf – polkjh Mar 2 '13 at 7:13
• In fact, it seems your problem is a standard one, called densest $k$-subgraph problem. citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.25.9443 – polkjh Mar 2 '13 at 7:21
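For what it's worth, here is a minimal Python sketch of the simple greedy "peeling" heuristic one often tries for such densest-$k$-subgraph-type objectives: repeatedly drop the vertex whose total weight to the remaining vertices is smallest. This is only an illustration; no approximation guarantee is claimed for it here.

```python
def greedy_k_subset(w, k):
    """w: symmetric dict-of-dicts of edge weights on a complete graph; returns a set of k nodes."""
    remaining = set(w)
    while len(remaining) > k:
        # Total weight from each vertex to the rest of the current set.
        score = {u: sum(w[u][v] for v in remaining if v != u) for u in remaining}
        remaining.remove(min(score, key=score.get))   # peel off the "most similar" vertex
    return remaining

# Toy usage on 5 nodes with weight |a - b|.
w = {a: {b: abs(a - b) for b in range(5) if b != a} for a in range(5)}
print(greedy_k_subset(w, 3))
```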
You are currently browsing the category archive for the ‘math.LO’ category.
Rachel Greenfeld and I have just uploaded to the arXiv our preprint “Undecidable translational tilings with only two tiles, or one nonabelian tile“. This paper studies the following question: given a finitely generated group ${G}$, a (periodic) subset ${E}$ of ${G}$, and finite sets ${F_1,\dots,F_J}$ in ${G}$, is it possible to tile ${E}$ by translations ${a_j+F_j}$ of the tiles ${F_1,\dots,F_J}$? That is to say, is there a solution ${\mathrm{X}_1 = A_1, \dots, \mathrm{X}_J = A_J}$ to the (translational) tiling equation
$\displaystyle (\mathrm{X}_1 \oplus F_1) \uplus \dots \uplus (\mathrm{X}_J \oplus F_J) = E \ \ \ \ \ (1)$
for some subsets ${A_1,\dots,A_J}$ of ${G}$, where ${A \oplus F}$ denotes the set of sums ${\{a+f: a \in A, f \in F \}}$ if the sums ${a+f}$ are all disjoint (and is undefined otherwise), and ${\uplus}$ denotes disjoint union. (One can also write the tiling equation in the language of convolutions as ${1_{\mathrm{X}_1} * 1_{F_1} + \dots + 1_{\mathrm{X}_J} * 1_{F_J} = 1_E}$.)
A bit more specifically, the paper studies the decidability of the above question. There are two slightly different types of decidability one could consider here:
• Logical decidability. For a given ${G, E, J, F_1,\dots,F_J}$, one can ask whether the solvability of the tiling equation (1) is provable or disprovable in ZFC (where we encode all the data ${G, E, F_1,\dots,F_J}$ by appropriate constructions in ZFC). If this is the case we say that the tiling equation (1) (or more precisely, the solvability of this equation) is logically decidable, otherwise it is logically undecidable.
• Algorithmic decidability. For data ${G,E,J, F_1,\dots,F_J}$ in some specified class (and encoded somehow as binary strings), one can ask whether the solvability of the tiling equation (1) can be correctly determined for all choices of data in this class by the output of some Turing machine that takes the data as input (encoded as a binary string) and halts in finite time, returning either YES if the equation can be solved or NO otherwise. If this is the case, we say the tiling problem of solving (1) for data in the given class is algorithmically decidable, otherwise it is algorithmically undecidable.
Note that the notion of logical decidability is “pointwise” in the sense that it pertains to a single choice of data ${G,E,J,F_1,\dots,F_J}$, whereas the notion of algorithmic decidability pertains instead to classes of data, and is only interesting when this class is infinite. Indeed, any tiling problem with a finite class of data is trivially decidable because one could simply code a Turing machine that is basically a lookup table that returns the correct answer for each choice of data in the class. (This is akin to how a student with a good memory could pass any exam if the questions are drawn from a finite list, merely by memorising an answer key for that list of questions.)
The two notions are related as follows: if a tiling problem (1) is algorithmically undecidable for some class of data, then the tiling equation must be logically undecidable for at least one choice of data for this class. For if this is not the case, one could algorithmically decide the tiling problem by searching for proofs or disproofs that the equation (1) is solvable for a given choice of data; the logical decidability of all such solvability questions will ensure that this algorithm always terminates in finite time.
One can use the Gödel completeness theorem to interpret logical decidability in terms of universes (also known as structures or models) of ZFC. In addition to the “standard” universe ${{\mathfrak U}}$ of sets that we believe satisfies the axioms of ZFC, there are also other “nonstandard” universes ${{\mathfrak U}^*}$ that also obey the axioms of ZFC. If the solvability of a tiling equation (1) is logically undecidable, this means that such a tiling exists in some universes of ZFC, but not in others.
(To continue the exam analogy, we thus see that a yes-no exam question is logically undecidable if the answer to the question is yes in some parallel universes, but not in others. A course syllabus is algorithmically undecidable if there is no way to prepare for the final exam for the course in a way that guarantees a perfect score (in the standard universe).)
Questions of decidability are also related to the notion of aperiodicity. For a given ${G, E, J, F_1,\dots,F_J}$, a tiling equation (1) is said to be aperiodic if the equation (1) is solvable (in the standard universe ${{\mathfrak U}}$ of ZFC), but none of the solutions (in that universe) are completely periodic (i.e., there are no solutions ${\mathrm{X}_1 = A_1,\dots, \mathrm{X}_J = A_J}$ where all of the ${A_1,\dots,A_J}$ are periodic). Perhaps the most well-known example of an aperiodic tiling (in the context of ${{\bf R}^2}$, and using rotations as well as translations) come from the Penrose tilings, but there are many others besides.
It was (essentially) observed by Hao Wang in the 1960s that if a tiling equation is logically undecidable, then it must necessarily be aperiodic. Indeed, if a tiling equation fails to be aperiodic, then (in the standard universe) either there is a periodic tiling, or there are no tilings whatsoever. In the former case, the periodic tiling can be used to give a finite proof that the tiling equation is solvable; in the latter case, the compactness theorem implies that there is some finite fragment of ${E}$ that is not compatible with being tiled by ${F_1,\dots,F_J}$, and this provides a finite proof that the tiling equation is unsolvable. Thus in either case the tiling equation is logically decidable.
This observation of Wang clarifies somewhat how logically undecidable tiling equations behave in the various universes of ZFC. In the standard universe, tilings exist, but none of them will be periodic. In nonstandard universes, tilings may or may not exist, and the tilings that do exist may be periodic (albeit with a nonstandard period); but there must be at least one universe in which no tiling exists at all.
In one dimension when ${G={\bf Z}}$ (or more generally ${G = {\bf Z} \times G_0}$ with ${G_0}$ a finite group), a simple pigeonholing argument shows that no tiling equations are aperiodic, and hence all tiling equations are decidable. However the situation changes in two dimensions. In 1966, Berger (a student of Wang) famously showed that there exist tiling equations (1) in the discrete plane ${E = G = {\bf Z}^2}$ that are aperiodic, or even logically undecidable; in fact he showed that the tiling problem in this case (with arbitrary choices of data ${J, F_1,\dots,F_J}$) was algorithmically undecidable. (Strictly speaking, Berger established this for a variant of the tiling problem known as the domino problem, but later work of Golomb showed that the domino problem could be easily encoded within the tiling problem.) This was accomplished by encoding the halting problem for Turing machines into the tiling problem (or domino problem); the latter is well known to be algorithmically undecidable (and thus have logically undecidable instances), and so the latter does also. However, the number of tiles ${J}$ required for Berger’s construction was quite large: his construction of an aperiodic tiling required ${J = 20426}$ tiles, and his construction of a logically undecidable tiling required an even larger (and not explicitly specified) collection of tiles. Subsequent work by many authors did reduce the number of tiles required; in the ${E=G={\bf Z}^2}$ setting, the current world record for the fewest number of tiles in an aperiodic tiling is ${J=8}$ (due to Amman, Grunbaum, and Shephard) and for a logically undecidable tiling is ${J=11}$ (due to Ollinger). On the other hand, it is conjectured (see Grunbaum-Shephard and Lagarias-Wang) that one cannot lower ${J}$ all the way to ${1}$:
Conjecture 1 (Periodic tiling conjecture) If ${E}$ is a periodic subset of a finitely generated abelian group ${G}$, and ${F}$ is a finite subset of ${G}$, then the tiling equation ${\mathrm{X} \oplus F = E}$ is not aperiodic.
This conjecture is known to be true in two dimensions (by work of Bhattacharya when ${G=E={\bf Z}^2}$, and more recently by us when ${E \subset G = {\bf Z}^2}$), but remains open in higher dimensions. By the preceding discussion, the conjecture implies that every tiling equation with a single tile is logically decidable, and the problem of whether a given periodic set can be tiled by a single tile is algorithmically decidable.
In this paper we show on the other hand that aperiodic and undecidable tilings exist when ${J=2}$, at least if one is permitted to enlarge the group ${G}$ a bit:
Theorem 2 (Logically undecidable tilings)
• (i) There exists a group ${G}$ of the form ${G = {\bf Z}^2 \times G_0}$ for some finite abelian ${G_0}$, a subset ${E_0}$ of ${G_0}$, and finite sets ${F_1, F_2 \subset G}$ such that the tiling equation ${(\mathbf{X}_1 \oplus F_1) \uplus (\mathbf{X}_2 \oplus F_2) = {\bf Z}^2 \times E_0}$ is logically undecidable (and hence also aperiodic).
• (ii) There exists a dimension ${d}$, a periodic subset ${E}$ of ${{\bf Z}^d}$, and finite sets ${F_1, F_2 \subset G}$ such that tiling equation ${(\mathbf{X}_1 \oplus F_1) \uplus (\mathbf{X}_2 \oplus F_2) = E}$ is logically undecidable (and hence also aperiodic).
• (iii) There exists a non-abelian finite group ${G_0}$ (with the group law still written additively), a subset ${E_0}$ of ${G_0}$, and a finite set ${F \subset {\bf Z}^2 \times G_0}$ such that the nonabelian tiling equation ${\mathbf{X} \oplus F = {\bf Z}^2 \times E_0}$ is logically undecidable (and hence also aperiodic).
We also have algorithmic versions of this theorem. For instance, the algorithmic version of (i) is that the problem of determining solvability of the tiling equation ${(\mathbf{X}_1 \oplus F_1) \uplus (\mathbf{X}_2 \oplus F_2) = {\bf Z}^2 \times E_0}$ for a given choice of finite abelian group ${G_0}$, subset ${E_0}$ of ${G_0}$, and finite sets ${F_1, F_2 \subset {\bf Z}^2 \times G_0}$ is algorithmically undecidable. Similarly for (ii), (iii).
This result (together with a negative result discussed below) suggest to us that there is a significant qualitative difference in the ${J=1}$ theory of tiling by a single (abelian) tile, and the ${J \geq 2}$ theory of tiling with multiple tiles (or one non-abelian tile). (The positive results on the periodic tiling conjecture certainly rely heavily on the fact that there is only one tile, in particular there is a “dilation lemma” that is only available in this setting that is of key importance in the two dimensional theory.) It would be nice to eliminate the group ${G_0}$ from (i) (or to set ${d=2}$ in (ii)), but I think this would require a fairly significant modification of our methods.
Like many other undecidability results, the proof of Theorem 2 proceeds by a sequence of reductions, in which the undecidability of one problem is shown to follow from the undecidability of another, more “expressive” problem that can be encoded inside the original problem, until one reaches a problem that is so expressive that it encodes a problem already known to be undecidable. Indeed, all three undecidability results are ultimately obtained from Berger’s undecidability result on the domino problem.
The first step in increasing expressiveness is to observe that the undecidability of a single tiling equation follows from the undecidability of a system of tiling equations. More precisely, suppose we have non-empty finite subsets ${F_j^{(m)}}$ of a finitely generated group ${G}$ for ${j=1,\dots,J}$ and ${m=1,\dots,M}$, as well as periodic sets ${E^{(m)}}$ of ${G}$ for ${m=1,\dots,M}$, such that it is logically undecidable whether the system of tiling equations
$\displaystyle (\mathrm{X}_1 \oplus F_1^{(m)}) \uplus \dots \uplus (\mathrm{X}_J \oplus F_J^{(m)}) = E^{(m)} \ \ \ \ \ (2)$
for ${m=1,\dots,M}$ has no solution ${\mathrm{X}_1 = A_1,\dots, \mathrm{X}_J = A_J}$ in ${G}$. Then, for any ${N>M}$, we can “stack” these equations into a single tiling equation in the larger group ${G \times {\bf Z}/N{\bf Z}}$, and specifically to the equation
$\displaystyle (\mathrm{X}_1 \oplus F_1) \uplus \dots \uplus (\mathrm{X}_J \oplus F_J) = E \ \ \ \ \ (3)$
where
$\displaystyle F_j := \biguplus_{m=1}^M F_j^{(m)} \times \{m\}$
and
$\displaystyle E := \biguplus_{m=1}^M E^{(m)} \times \{m\}.$
It is a routine exercise to check that the system of equations (2) admits a solution in ${G}$ if and only if the single equation (3) admits a solution in ${G \times {\bf Z}/N{\bf Z}}$. Thus, to prove the undecidability of a single equation of the form (3) it suffices to establish undecidability of a system of the form (2); note how the freedom to select the auxiliary group ${G_0}$ is important here.
We view systems of the form (2) as belonging to a kind of “language” in which each equation in the system is a “sentence” in the language imposing additional constraints on a tiling. One can now pick and choose various sentences in this language to try to encode various interesting problems. For instance, one can encode the concept of a function ${f: {\bf Z}^2 \rightarrow G_0}$ taking values in a finite group ${G_0}$ as a single tiling equation
$\displaystyle \mathrm{X} \oplus (\{0\} \times G_0) = {\bf Z}^2 \times G_0 \ \ \ \ \ (4)$
since the solutions to this equation are precisely the graphs
$\displaystyle \mathrm{X} = \{ (n, f(n)): n \in {\bf Z}^2 \}$
of a function ${f: {\bf Z}^2 \rightarrow G_0}$. By adding more tiling equations to this equation to form a larger system, we can start imposing additional constraints on this function ${f}$. For instance, if ${x+H}$ is a coset of some subgroup ${H}$ of ${G_0}$, we can impose the additional equation
$\displaystyle \mathrm{X} \oplus (\{0\} \times H) = {\bf Z}^2 \times (x+H) \ \ \ \ \ (5)$
to impose the additional constraint that ${f(n) \in x+H}$ for all ${n \in {\bf Z}^2}$, if we desire. If ${G_0}$ happens to contain two distinct elements ${1, -1}$, and ${h \in {\bf Z}^2}$, then the additional equation
$\displaystyle \mathrm{X} \oplus (\{0,h\} \times \{0\}) = {\bf Z}^2 \times \{-1,1\} \ \ \ \ \ (6)$
imposes the additional constraints that ${f(n) \in \{-1,1\}}$ for all ${n \in {\bf Z}^2}$, and additionally that
$\displaystyle f(n+h) = -f(n)$
for all ${n \in {\bf Z}^2}$.
This begins to resemble the equations that come up in the domino problem. Here one has a finite set of Wang tiles – unit squares ${T}$ where each of the four sides is colored with a color ${c_N(T), c_S(T), c_E(T), c_W(T)}$ (corresponding to the four cardinal directions North, South, East, and West) from some finite set ${{\mathcal C}}$ of colors. The domino problem is then to tile the plane with copies of these tiles in such a way that adjacent sides match. In terms of equations, one is seeking to find functions ${c_N, c_S, c_E, c_W: {\bf Z}^2 \rightarrow {\mathcal C}}$ obeying the pointwise constraint
$\displaystyle (c_N(n), c_S(n), c_E(n), c_W(n)) \in {\mathcal W} \ \ \ \ \ (7)$
for all ${n \in {\bf Z}^2}$ where ${{\mathcal W}}$ is the set of colors associated to the set of Wang tiles being used, and the matching constraints
$\displaystyle c_S(n+(0,1)) = c_N(n); \quad c_W(n+(1,0)) = c_E(n) \ \ \ \ \ (8)$
for all ${n \in {\bf Z}^2}$. As it turns out, the pointwise constraint (7) can be encoded by tiling equations that are fancier versions of (4), (5), (6) that involve only one unknown tiling set ${{\mathrm X}}$, but in order to encode the matching constraints (8) we were forced to introduce a second tile (or work with nonabelian tiling equations). This appears to be an inherent feature of the method, since we found a partial rigidity result for tilings of one tile in one dimension that obstructs this encoding strategy from working when one only has one tile available. The result is as follows:
Proposition 3 (Swapping property) Consider the solutions to a tiling equation
$\displaystyle \mathrm{X} \oplus F = E \ \ \ \ \ (9)$
in a one-dimensional group ${G = {\bf Z} \times G_0}$ (with ${G_0}$ a finite abelian group, ${F}$ finite, and ${E}$ periodic). Suppose there are two solutions ${\mathrm{X} = A_0, \mathrm{X} = A_1}$ to this equation that agree on the left in the sense that
$\displaystyle A_0 \cap (\{0, -1, -2, \dots\} \times G_0) = A_1 \cap (\{0, -1, -2, \dots\} \times G_0).$
For any function ${\omega: {\bf Z} \rightarrow \{0,1\}}$, define the “swap” ${A_\omega}$ of ${A_0}$ and ${A_1}$ to be the set
$\displaystyle A_\omega := \{ (n, g): n \in {\bf Z}, (n,g) \in A_{\omega(n)} \}$
Then ${A_\omega}$ also solves the equation (9).
One can think of ${A_0}$ and ${A_1}$ as “genes” with “nucleotides” ${\{ g \in G_0: (n,g) \in A_0\}}$, ${\{ g \in G_0: (n,g) \in A_1\}}$ at each position ${n \in {\bf Z}}$, and ${A_\omega}$ is a new gene formed by choosing one of the nucleotides from the “parent” genes ${A_0}$, ${A_1}$ at each position. The above proposition then says that the solutions to the equation (9) must be closed under “genetic transfer” among any pair of genes that agree on the left. This seems to present an obstruction to trying to encode equation such as
$\displaystyle c(n+1) = c'(n)$
for two functions ${c, c': {\bf Z} \rightarrow \{-1,1\}}$ (say), which is a toy version of the matching constraint (8), since the class of solutions to this equation turns out not to obey this swapping property. On the other hand, it is easy to encode such equations using two tiles instead of one, and an elaboration of this construction is used to prove our main theorem.
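As an aside, the pointwise constraint (7) and matching constraints (8) are easy to state in code. Here is a tiny brute-force search (in Python, with a made-up tile set) for a doubly periodic Wang tiling of an ${n \times n}$ torus; of course, by Berger's theorem no search of this kind can decide the general domino problem, since an aperiodic tile set tiles the plane but admits no periodic tiling for the search to find.

```python
from itertools import product

def periodic_wang_tiling(tiles, n):
    """Brute-force search for an n-by-n torus tiling by Wang tiles (N, S, E, W colors)."""
    cells = [(i, j) for i in range(n) for j in range(n)]
    for assignment in product(range(len(tiles)), repeat=n * n):
        T = {c: tiles[t] for c, t in zip(cells, assignment)}
        ok = all(
            # South color of the tile above matches this tile's North color,
            # and West color of the tile to the right matches this tile's East color.
            T[((i + 1) % n, j)][1] == T[(i, j)][0] and
            T[(i, (j + 1) % n)][3] == T[(i, j)][2]
            for (i, j) in cells
        )
        if ok:
            return T
    return None

# Made-up tile set with colors 0/1; this particular set already tiles a 1x1 torus.
tiles = [(0, 0, 1, 1), (1, 1, 0, 0)]
print(periodic_wang_tiling(tiles, 1))
```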
Abdul Basit, Artem Chernikov, Sergei Starchenko, Chiu-Minh Tran and I have uploaded to the arXiv our paper Zarankiewicz’s problem for semilinear hypergraphs. This paper is in the spirit of a number of results in extremal graph theory in which the bounds for various graph-theoretic problems or results can be greatly improved if one makes some additional hypotheses regarding the structure of the graph, for instance by requiring that the graph be “definable” with respect to some theory with good model-theoretic properties.
A basic motivating example is the question of counting the number of incidences between points and lines (or between points and other geometric objects). Suppose one has ${n}$ points and ${n}$ lines in a space. How many incidences can there be between these points and lines? The utterly trivial bound is ${n^2}$, but by using the basic fact that two points determine a line (or two lines intersect in at most one point), a simple application of Cauchy-Schwarz improves this bound to ${n^{3/2}}$. In graph theoretic terms, the point is that the bipartite incidence graph between points and lines does not contain a copy of ${K_{2,2}}$ (there does not exist two points and two lines that are all incident to each other). Without any other further hypotheses, this bound is basically sharp: consider for instance the collection of ${p^2}$ points and ${p^2+p}$ lines in a finite plane ${{\bf F}_p^2}$, that has ${p^3+p^2}$ incidences (one can make the situation more symmetric by working with a projective plane rather than an affine plane). If however one considers lines in the real plane ${{\bf R}^2}$, the famous Szemerédi-Trotter theorem improves the incidence bound further from ${n^{3/2}}$ to ${O(n^{4/3})}$. Thus the incidence graph between real points and lines contains more structure than merely the absence of ${K_{2,2}}$.
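(For the record, the Cauchy-Schwarz computation goes as follows. If ${k_\ell}$ denotes the number of the ${n}$ points lying on a line ${\ell}$, then the fact that two points determine at most one line gives

$\displaystyle \sum_\ell k_\ell (k_\ell - 1) \leq n(n-1) \leq n^2,$

while Cauchy-Schwarz gives ${(\sum_\ell k_\ell)^2 \leq n \sum_\ell k_\ell^2}$. Writing ${I := \sum_\ell k_\ell}$ for the number of incidences, we conclude that ${I^2 \leq n(n^2+I)}$, and hence ${I = O(n^{3/2})}$.)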
More generally, bounding on the size of bipartite graphs (or multipartite hypergraphs) not containing a copy of some complete bipartite subgraph ${K_{k,k}}$ (or ${K_{k,\dots,k}}$ in the hypergraph case) is known as Zarankiewicz’s problem. We have results for all ${k}$ and all orders of hypergraph, but for sake of this post I will focus on the bipartite ${k=2}$ case.
In our paper we improve the ${n^{3/2}}$ bound to a near-linear bound in the case that the incidence graph is “semilinear”. A model case occurs when one considers incidences between points and axis-parallel rectangles in the plane. Now the ${K_{2,2}}$ condition is not automatic (it is of course possible for two distinct points to both lie in two distinct rectangles), so we impose this condition by fiat:
Theorem 1 Suppose one has ${n}$ points and ${n}$ axis-parallel rectangles in the plane, whose incidence graph contains no ${K_{2,2}}$‘s, for some large ${n}$.
• (i) The total number of incidences is ${O(n \log^4 n)}$.
• (ii) If all the rectangles are dyadic, the bound can be improved to ${O( n \frac{\log n}{\log\log n} )}$.
• (iii) The bound in (ii) is best possible (up to the choice of implied constant).
We don’t know whether the bound in (i) is similarly tight for non-dyadic boxes; the usual tricks for reducing the non-dyadic case to the dyadic case strangely fail to apply here. One can generalise to higher dimensions, replacing rectangles by polytopes with faces in some fixed finite set of orientations, at the cost of adding several more logarithmic factors; also, one can replace the reals by other ordered division rings, and replace polytopes by other sets of bounded “semilinear descriptive complexity”, e.g., unions of boundedly many polytopes, or which are cut out by boundedly many functions that enjoy coordinatewise monotonicity properties. For certain specific graphs we can remove the logarithmic factors entirely. We refer to the preprint for precise details.
The proof techniques are combinatorial. The proof of (i) relies primarily on the order structure of ${{\bf R}}$ to implement a “divide and conquer” strategy in which one can efficiently control incidences between ${n}$ points and rectangles by incidences between approximately ${n/2}$ points and boxes. For (ii) there is additional order-theoretic structure one can work with: first there is an easy pruning device to reduce to the case when no rectangle is completely contained inside another, and then one can impose the “tile partial order” in which one dyadic rectangle ${I \times J}$ is less than another ${I' \times J'}$ if ${I \subset I'}$ and ${J' \subset J}$. The point is that this order is “locally linear” in the sense that for any two dyadic rectangles ${R_-, R_+}$, the set ${[R_-,R_+] := \{ R: R_- \leq R \leq R_+\}}$ is linearly ordered, and this can be exploited by elementary double counting arguments to obtain a bound which eventually becomes ${O( n \frac{\log n}{\log\log n})}$ after optimising certain parameters in the argument. The proof also suggests how to construct the counterexample in (iii), which is achieved by an elementary iterative construction.
Asgar Jamneshan and I have just uploaded to the arXiv our paper “An uncountable Moore-Schmidt theorem“. This paper revisits a classical theorem of Moore and Schmidt in measurable cohomology of measure-preserving systems. To state the theorem, let ${X = (X,{\mathcal X},\mu)}$ be a probability space, and ${\mathrm{Aut}(X, {\mathcal X}, \mu)}$ be the group of measure-preserving automorphisms of this space, that is to say the invertible bimeasurable maps ${T: X \rightarrow X}$ that preserve the measure ${\mu}$: ${T_* \mu = \mu}$. To avoid some ambiguity later in this post when we introduce abstract analogues of measure theory, we will refer to measurable maps as concrete measurable maps, and measurable spaces as concrete measurable spaces. (One could also call ${X = (X,{\mathcal X}, \mu)}$ a concrete probability space, but we will not need to do so here as we will not be working explicitly with abstract probability spaces.)
Let ${\Gamma = (\Gamma,\cdot)}$ be a discrete group. A (concrete) measure-preserving action of ${\Gamma}$ on ${X}$ is a group homomorphism ${\gamma \mapsto T^\gamma}$ from ${\Gamma}$ to ${\mathrm{Aut}(X, {\mathcal X}, \mu)}$, thus ${T^1}$ is the identity map and ${T^{\gamma_1} \circ T^{\gamma_2} = T^{\gamma_1 \gamma_2}}$ for all ${\gamma_1,\gamma_2 \in \Gamma}$. A large portion of ergodic theory is concerned with the study of such measure-preserving actions, especially in the classical case when ${\Gamma}$ is the integers (with the additive group law).
Let ${K = (K,+)}$ be a compact Hausdorff abelian group, which we can endow with the Borel ${\sigma}$-algebra ${{\mathcal B}(K)}$. A (concrete measurable) ${K}$-cocycle is a collection ${\rho = (\rho_\gamma)_{\gamma \in \Gamma}}$ of concrete measurable maps ${\rho_\gamma: X \rightarrow K}$ obeying the cocycle equation
$\displaystyle \rho_{\gamma_1 \gamma_2}(x) = \rho_{\gamma_1} \circ T^{\gamma_2}(x) + \rho_{\gamma_2}(x) \ \ \ \ \ (1)$
for ${\mu}$-almost every ${x \in X}$. (Here we are glossing over a measure-theoretic subtlety that we will return to later in this post – see if you can spot it before then!) Cocycles arise naturally in the theory of group extensions of dynamical systems; in particular (and ignoring the aforementioned subtlety), each cocycle induces a measure-preserving action ${\gamma \mapsto S^\gamma}$ on ${X \times K}$ (which we endow with the product of ${\mu}$ with Haar probability measure on ${K}$), defined by
$\displaystyle S^\gamma( x, k ) := (T^\gamma x, k + \rho_\gamma(x) ).$
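(Indeed, the cocycle equation (1) is precisely what makes ${\gamma \mapsto S^\gamma}$ an action: for any ${\gamma_1,\gamma_2 \in \Gamma}$ one has

$\displaystyle S^{\gamma_1}( S^{\gamma_2}(x,k) ) = (T^{\gamma_1} T^{\gamma_2} x, k + \rho_{\gamma_2}(x) + \rho_{\gamma_1} \circ T^{\gamma_2}(x) ) = (T^{\gamma_1 \gamma_2} x, k + \rho_{\gamma_1 \gamma_2}(x)) = S^{\gamma_1 \gamma_2}(x,k)$

for ${\mu}$-almost every ${x}$.)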
This connection with group extensions was the original motivation for our study of measurable cohomology, but is not the focus of the current paper.
A special case of a ${K}$-valued cocycle is a (concrete measurable) ${K}$-valued coboundary, in which ${\rho_\gamma}$ for each ${\gamma \in \Gamma}$ takes the special form
$\displaystyle \rho_\gamma(x) = F \circ T^\gamma(x) - F(x)$
for ${\mu}$-almost every ${x \in X}$, where ${F: X \rightarrow K}$ is some measurable function; note that (ignoring the aforementioned subtlety), every function of this form is automatically a concrete measurable ${K}$-valued cocycle. One of the first basic questions in measurable cohomology is to try to characterize which ${K}$-valued cocycles are in fact ${K}$-valued coboundaries. This is a difficult question in general. However, there is a general result of Moore and Schmidt that at least allows one to reduce to the model case when ${K}$ is the unit circle ${\mathbf{T} = {\bf R}/{\bf Z}}$, by taking advantage of the Pontryagin dual group ${\hat K}$ of characters ${\hat k: K \rightarrow \mathbf{T}}$, that is to say the collection of continuous homomorphisms ${\hat k: k \mapsto \langle \hat k, k \rangle}$ to the unit circle. More precisely, we have
Theorem 1 (Countable Moore-Schmidt theorem) Let ${\Gamma}$ be a discrete group acting in a concrete measure-preserving fashion on a probability space ${X}$. Let ${K}$ be a compact Hausdorff abelian group. Assume the following additional hypotheses:
• (i) ${\Gamma}$ is at most countable.
• (ii) ${X}$ is a standard Borel space.
• (iii) ${K}$ is metrisable.
Then a ${K}$-valued concrete measurable cocycle ${\rho = (\rho_\gamma)_{\gamma \in \Gamma}}$ is a concrete coboundary if and only if for each character ${\hat k \in \hat K}$, the ${\mathbf{T}}$-valued cocycles ${\langle \hat k, \rho \rangle = ( \langle \hat k, \rho_\gamma \rangle )_{\gamma \in \Gamma}}$ are concrete coboundaries.
The hypotheses (i), (ii), (iii) are saying in some sense that the data ${\Gamma, X, K}$ are not too “large”; in all three cases they are saying in some sense that the data are only “countably complicated”. For instance, (iii) is equivalent to ${K}$ being second countable, and (ii) is equivalent to ${X}$ being modeled by a complete separable metric space. It is because of this restriction that we refer to this result as a “countable” Moore-Schmidt theorem. This theorem is a useful tool in several other applications, such as the Host-Kra structure theorem for ergodic systems; I hope to return to these subsequent applications in a future post.
Let us very briefly sketch the main ideas of the proof of Theorem 1. Ignore for now issues of measurability, and pretend that something that holds almost everywhere in fact holds everywhere. The hard direction is to show that if each ${\langle \hat k, \rho \rangle}$ is a coboundary, then so is ${\rho}$. By hypothesis, we then have an equation of the form
$\displaystyle \langle \hat k, \rho_\gamma(x) \rangle = \alpha_{\hat k} \circ T^\gamma(x) - \alpha_{\hat k}(x) \ \ \ \ \ (2)$
for all ${\hat k, \gamma, x}$ and some functions ${\alpha_{\hat k}: X \rightarrow {\mathbf T}}$, and our task is then to produce a function ${F: X \rightarrow K}$ for which
$\displaystyle \rho_\gamma(x) = F \circ T^\gamma(x) - F(x)$
for all ${\gamma,x}$.
Comparing the two equations, the task would be easy if we could find an ${F: X \rightarrow K}$ for which
$\displaystyle \langle \hat k, F(x) \rangle = \alpha_{\hat k}(x) \ \ \ \ \ (3)$
for all ${\hat k, x}$. However there is an obstruction to this: the left-hand side of (3) is additive in ${\hat k}$, so the right-hand side would have to be also in order to obtain such a representation. In other words, for this strategy to work, one would have to first establish the identity
$\displaystyle \alpha_{\hat k_1 + \hat k_2}(x) - \alpha_{\hat k_1}(x) - \alpha_{\hat k_2}(x) = 0 \ \ \ \ \ (4)$
for all ${\hat k_1, \hat k_2, x}$. On the other hand, the good news is that if we somehow manage to obtain the equation, then we can obtain a function ${F}$ obeying (3), thanks to Pontryagin duality, which gives a one-to-one correspondence between ${K}$ and the homomorphisms of the (discrete) group ${\hat K}$ to ${\mathbf{T}}$.
Now, it turns out that one cannot derive the equation (4) directly from the given information (2). However, the left-hand side of (2) is additive in ${\hat k}$, so the right-hand side must be also. Manipulating this fact, we eventually arrive at
$\displaystyle (\alpha_{\hat k_1 + \hat k_2} - \alpha_{\hat k_1} - \alpha_{\hat k_2}) \circ T^\gamma(x) = (\alpha_{\hat k_1 + \hat k_2} - \alpha_{\hat k_1} - \alpha_{\hat k_2})(x).$
In other words, we don’t get to show that the left-hand side of (4) vanishes, but we do at least get to show that it is ${\Gamma}$-invariant. Now let us assume for sake of argument that the action of ${\Gamma}$ is ergodic, which (ignoring issues about sets of measure zero) basically asserts that the only ${\Gamma}$-invariant functions are constant. So now we get a weaker version of (4), namely
$\displaystyle \alpha_{\hat k_1 + \hat k_2}(x) - \alpha_{\hat k_1}(x) - \alpha_{\hat k_2}(x) = c_{\hat k_1, \hat k_2} \ \ \ \ \ (5)$
for some constants ${c_{\hat k_1, \hat k_2} \in \mathbf{T}}$.
Now we need to eliminate the constants. This can be done by the following group-theoretic projection. Let ${L^0({\bf X} \rightarrow {\bf T})}$ denote the space of concrete measurable maps ${\alpha}$ from ${{\bf X}}$ to ${{\bf T}}$, up to almost everywhere equivalence; this is an abelian group where the various terms in (5) naturally live. Inside this group we have the subgroup ${{\bf T}}$ of constant functions (up to almost everywhere equivalence); this is where the right-hand side of (5) lives. Because ${{\bf T}}$ is a divisible group, there is an application of Zorn’s lemma (a good exercise for those who are not acquainted with these things) to show that there exists a retraction ${w: L^0({\bf X} \rightarrow {\bf T}) \rightarrow {\bf T}}$, that is to say a group homomorphism that is the identity on the subgroup ${{\bf T}}$. We can use this retraction, or more precisely the complement ${\alpha \mapsto \alpha - w(\alpha)}$, to eliminate the constant in (5). Indeed, if we set
$\displaystyle \tilde \alpha_{\hat k}(x) := \alpha_{\hat k}(x) - w(\alpha_{\hat k})$
then from (5) we see that
$\displaystyle \tilde \alpha_{\hat k_1 + \hat k_2}(x) - \tilde \alpha_{\hat k_1}(x) - \tilde \alpha_{\hat k_2}(x) = 0$
while from (2) one has
$\displaystyle \langle \hat k, \rho_\gamma(x) \rangle = \tilde \alpha_{\hat k} \circ T^\gamma(x) - \tilde \alpha_{\hat k}(x)$
and now the previous strategy works with ${\alpha_{\hat k}}$ replaced by ${\tilde \alpha_{\hat k}}$. This concludes the sketch of proof of Theorem 1.
In making the above argument rigorous, the hypotheses (i)-(iii) are used in several places. For instance, to reduce to the ergodic case one relies on the ergodic decomposition, which requires the hypothesis (ii). Also, most of the above equations only hold outside of a set of measure zero, and the hypothesis (i) and the hypothesis (iii) (which is equivalent to ${\hat K}$ being at most countable) are needed to avoid the problem that an uncountable union of sets of measure zero could have positive measure (or fail to be measurable at all).
My co-author Asgar Jamneshan and I are working on a long-term project to extend many results in ergodic theory (such as the aforementioned Host-Kra structure theorem) to “uncountable” settings in which hypotheses analogous to (i)-(iii) are omitted; thus we wish to consider actions on uncountable groups, on spaces that are not standard Borel, and cocycles taking values in groups that are not metrisable. Such uncountable contexts naturally arise when trying to apply ergodic theory techniques to combinatorial problems (such as the inverse conjecture for the Gowers norms), as one often relies on the ultraproduct construction (or something similar) to generate an ergodic theory translation of these problems, and these constructions usually give “uncountable” objects rather than “countable” ones. (For instance, the ultraproduct of finite groups is a hyperfinite group, which is usually uncountable.). This paper marks the first step in this project by extending the Moore-Schmidt theorem to the uncountable setting.
If one simply drops the hypotheses (i)-(iii) and tries to prove the Moore-Schmidt theorem, several serious difficulties arise. We have already mentioned the loss of the ergodic decomposition and the possibility that one has to control an uncountable union of null sets. But there is in fact a more basic problem when one deletes (iii): the addition operation ${+: K \times K \rightarrow K}$, while still continuous, can fail to be measurable as a map from ${(K \times K, {\mathcal B}(K) \otimes {\mathcal B}(K))}$ to ${(K, {\mathcal B}(K))}$! Thus for instance the sum of two measurable functions ${F: X \rightarrow K}$ need not remain measurable, which makes even the very definition of a measurable cocycle or measurable coboundary problematic (or at least unnatural). This phenomenon is known as the Nedoma pathology. A standard example arises when ${K}$ is the uncountable torus ${{\mathbf T}^{{\bf R}}}$, endowed with the product topology. Crucially, the Borel ${\sigma}$-algebra ${{\mathcal B}(K)}$ generated by this uncountable product is not the product ${{\mathcal B}(\mathbf{T})^{\otimes {\bf R}}}$ of the factor Borel ${\sigma}$-algebras (the discrepancy ultimately arises from the fact that topologies permit uncountable unions, but ${\sigma}$-algebras do not); relating to this, the product ${\sigma}$-algebra ${{\mathcal B}(K) \otimes {\mathcal B}(K)}$ is not the same as the Borel ${\sigma}$-algebra ${{\mathcal B}(K \times K)}$, but is instead a strict sub-algebra. If the group operations on ${K}$ were measurable, then the diagonal set
$\displaystyle K^\Delta := \{ (k,k') \in K \times K: k = k' \} = \{ (k,k') \in K \times K: k - k' = 0 \}$
would be measurable in ${{\mathcal B}(K) \otimes {\mathcal B}(K)}$. But it is an easy exercise in manipulation of ${\sigma}$-algebras to show that if ${(X, {\mathcal X}), (Y, {\mathcal Y})}$ are any two measurable spaces and ${E \subset X \times Y}$ is measurable in ${{\mathcal X} \otimes {\mathcal Y}}$, then the fibres ${E_x := \{ y \in Y: (x,y) \in E \}}$ of ${E}$ are contained in some countably generated subalgebra of ${{\mathcal Y}}$. Thus if ${K^\Delta}$ were ${{\mathcal B}(K) \otimes {\mathcal B}(K)}$-measurable, then all the points of ${K}$ would lie in a single countably generated ${\sigma}$-algebra. But the cardinality of such an algebra is at most ${2^{\aleph_0}}$ while the cardinality of ${K}$ is ${2^{2^{\aleph_0}}}$, and Cantor’s theorem then gives a contradiction.
To resolve this problem, we give ${K}$ a coarser ${\sigma}$-algebra than the Borel ${\sigma}$-algebra, namely the Baire ${\sigma}$-algebra ${{\mathcal B}^\otimes(K)}$, thus coarsening the measurable space structure on ${K = (K,{\mathcal B}(K))}$ to a new measurable space ${K_\otimes := (K, {\mathcal B}^\otimes(K))}$. In the case of compact Hausdorff abelian groups, ${{\mathcal B}^{\otimes}(K)}$ can be defined as the ${\sigma}$-algebra generated by the characters ${\hat k: K \rightarrow {\mathbf T}}$; for more general compact abelian groups, one can define ${{\mathcal B}^{\otimes}(K)}$ as the ${\sigma}$-algebra generated by all continuous maps into metric spaces. This ${\sigma}$-algebra is equal to ${{\mathcal B}(K)}$ when ${K}$ is metrisable but can be smaller for other ${K}$. With this measurable structure, ${K_\otimes}$ becomes a measurable group; it seems that once one leaves the metrisable world that ${K_\otimes}$ is a superior (or at least equally good) space to work with than ${K}$ for analysis, as it avoids the Nedoma pathology. (For instance, from Plancherel’s theorem, we see that if ${m_K}$ is the Haar probability measure on ${K}$, then ${L^2(K,m_K) = L^2(K_\otimes,m_K)}$ (thus, every ${K}$-measurable set is equivalent modulo ${m_K}$-null sets to a ${K_\otimes}$-measurable set), so there is no damage to Plancherel caused by passing to the Baire ${\sigma}$-algebra.
Passing to the Baire ${\sigma}$-algebra ${K_\otimes}$ fixes the most severe problems with an uncountable Moore-Schmidt theorem, but one is still faced with an issue of having to potentially take an uncountable union of null sets. To avoid this sort of problem, we pass to the framework of abstract measure theory, in which we remove explicit mention of “points” and can easily delete all null sets at a very early stage of the formalism. In this setup, the category of concrete measurable spaces is replaced with the larger category of abstract measurable spaces, which we formally define as the opposite category of the category of ${\sigma}$-algebras (with Boolean algebra homomorphisms). Thus, we define an abstract measurable space to be an object of the form ${{\mathcal X}^{\mathrm{op}}}$, where ${{\mathcal X}}$ is an (abstract) ${\sigma}$-algebra and ${\mathrm{op}}$ is a formal placeholder symbol that signifies use of the opposite category, and an abstract measurable map ${T: {\mathcal X}^{\mathrm{op}} \rightarrow {\mathcal Y}^{\mathrm{op}}}$ is an object of the form ${(T^*)^{\mathrm{op}}}$, where ${T^*: {\mathcal Y} \rightarrow {\mathcal X}}$ is a Boolean algebra homomorphism and ${\mathrm{op}}$ is again used as a formal placeholder; we call ${T^*}$ the pullback map associated to ${T}$. [UPDATE: It turns out that this definition of a measurable map led to technical issues. In a forthcoming revision of the paper we also impose the requirement that the abstract measurable map be $\sigma$-complete (i.e., it respects countable joins).] The composition ${S \circ T: {\mathcal X}^{\mathrm{op}} \rightarrow {\mathcal Z}^{\mathrm{op}}}$ of two abstract measurable maps ${T: {\mathcal X}^{\mathrm{op}} \rightarrow {\mathcal Y}^{\mathrm{op}}}$, ${S: {\mathcal Y}^{\mathrm{op}} \rightarrow {\mathcal Z}^{\mathrm{op}}}$ is defined by the formula ${S \circ T := (T^* \circ S^*)^{\mathrm{op}}}$, or equivalently ${(S \circ T)^* = T^* \circ S^*}$.
Every concrete measurable space ${X = (X,{\mathcal X})}$ can be identified with an abstract counterpart ${{\mathcal X}^{op}}$, and similarly every concrete measurable map ${T: X \rightarrow Y}$ can be identified with an abstract counterpart ${(T^*)^{op}}$, where ${T^*: {\mathcal Y} \rightarrow {\mathcal X}}$ is the pullback map ${T^* E := T^{-1}(E)}$. Thus the category of concrete measurable spaces can be viewed as a subcategory of the category of abstract measurable spaces. The advantage of working in the abstract setting is that it gives us access to more spaces that could not be directly defined in the concrete setting. Most importantly for us, we have a new abstract space, the opposite measure algebra ${X_\mu}$ of ${X}$, defined as ${({\bf X}/{\bf N})^*}$ where ${{\bf N}}$ is the ideal of null sets in ${{\bf X}}$. Informally, ${X_\mu}$ is the space ${X}$ with all the null sets removed; there is a canonical abstract embedding map ${\iota: X_\mu \rightarrow X}$, which allows one to convert any concrete measurable map ${f: X \rightarrow Y}$ into an abstract one ${[f]: X_\mu \rightarrow Y}$. One can then define the notion of an abstract action, abstract cocycle, and abstract coboundary by replacing every occurrence of the category of concrete measurable spaces with their abstract counterparts, and replacing ${X}$ with the opposite measure algebra ${X_\mu}$; see the paper for details. Our main theorem is then
Theorem 2 (Uncountable Moore-Schmidt theorem) Let ${\Gamma}$ be a discrete group acting abstractly on a ${\sigma}$-finite measure space ${X}$. Let ${K}$ be a compact Hausdorff abelian group. Then a ${K_\otimes}$-valued abstract measurable cocycle ${\rho = (\rho_\gamma)_{\gamma \in \Gamma}}$ is an abstract coboundary if and only if for each character ${\hat k \in \hat K}$, the ${\mathbf{T}}$-valued cocycles ${\langle \hat k, \rho \rangle = ( \langle \hat k, \rho_\gamma \rangle )_{\gamma \in \Gamma}}$ are abstract coboundaries.
With the abstract formalism, the proof of the uncountable Moore-Schmidt theorem is almost identical to the countable one (in fact we were able to make some simplifications, such as avoiding the use of the ergodic decomposition). A key tool is what we call a “conditional Pontryagin duality” theorem, which asserts that if one has an abstract measurable map ${\alpha_{\hat k}: X_\mu \rightarrow {\bf T}}$ for each ${\hat k \in K}$ obeying the identity ${ \alpha_{\hat k_1 + \hat k_2} - \alpha_{\hat k_1} - \alpha_{\hat k_2} = 0}$ for all ${\hat k_1,\hat k_2 \in \hat K}$, then there is an abstract measurable map ${F: X_\mu \rightarrow K_\otimes}$ such that ${\alpha_{\hat k} = \langle \hat k, F \rangle}$ for all ${\hat k \in \hat K}$. This is derived from the usual Pontryagin duality and some other tools, most notably the completeness of the ${\sigma}$-algebra of ${X_\mu}$, and the Sikorski extension theorem.
We feel that it is natural to stay within the abstract measure theory formalism whenever dealing with uncountable situations. However, it is still an interesting question as to when one can guarantee that the abstract objects constructed in this formalism are representable by concrete analogues. The basic questions in this regard are:
• (i) Suppose one has an abstract measurable map ${f: X_\mu \rightarrow Y}$ into a concrete measurable space. Does there exist a representation of ${f}$ by a concrete measurable map ${\tilde f: X \rightarrow Y}$? Is it unique up to almost everywhere equivalence?
• (ii) Suppose one has a concrete cocycle that is an abstract coboundary. When can it be represented by a concrete coboundary?
For (i) the answer is somewhat interesting (as I learned after posing this MathOverflow question):
• If ${Y}$ does not separate points, or is not compact metrisable or Polish, there can be counterexamples to uniqueness. If ${Y}$ is not compact or Polish, there can be counterexamples to existence.
• If ${Y}$ is a compact metric space or a Polish space, then one always has existence and uniqueness.
• If ${Y}$ is a compact Hausdorff abelian group, one always has existence.
• If ${X}$ is a complete measure space, then one always has existence (from a theorem of Maharam).
• If ${X}$ is the unit interval with the Borel ${\sigma}$-algebra and Lebesgue measure, then one has existence for all compact Hausdorff ${Y}$ assuming the continuum hypothesis (from a theorem of von Neumann) but existence can fail under other extensions of ZFC (from a theorem of Shelah, using the method of forcing).
• For more general ${X}$, existence for all compact Hausdorff ${Y}$ is equivalent to the existence of a lifting from the ${\sigma}$-algebra ${\mathcal{X}/\mathcal{N}}$ to ${\mathcal{X}}$ (or, in the language of abstract measurable spaces, the existence of an abstract retraction from ${X}$ to ${X_\mu}$).
• It is a long-standing open question (posed for instance by Fremlin) whether it is relatively consistent with ZFC that existence holds whenever ${Y}$ is compact Hausdorff.
Our understanding of (ii) is much less complete:
• If ${K}$ is metrisable, the answer is “always” (which among other things establishes the countable Moore-Schmidt theorem as a corollary of the uncountable one).
• If ${\Gamma}$ is at most countable and ${X}$ is a complete measure space, then the answer is again “always”.
In view of the answers to (i), I would not be surprised if the full answer to (ii) was also sensitive to axioms of set theory. However, such set theoretic issues seem to be almost completely avoided if one sticks with the abstract formalism throughout; they only arise when trying to pass back and forth between the abstract and concrete categories.
As readers who have followed my previous post will know, I have been spending the last few weeks extending my previous interactive text on propositional logic (entitled “QED”) to also cover first-order logic. The text has now reached what seems to be a stable form, with a complete set of deductive rules for first-order logic with equality, and no major bugs as far as I can tell (apart from one weird visual bug I can’t eradicate, in that some graphics elements can occasionally temporarily disappear when one clicks on an item). So it will likely not change much going forward.
I feel though that there could be more that could be done with this sort of framework (e.g., improved GUI, modification to other logics, developing the ability to write one’s own texts and libraries, exploring mathematical theories such as Peano arithmetic, etc.). But writing this text (particularly the first-order logic sections) has brought me close to the limit of my programming ability, as the number of bugs introduced with each new feature implemented has begun to grow at an alarming rate. I would like to repackage the code so that it can be re-used by more adept programmers for further possible applications, though I have never done something like this before and would appreciate advice on how to do so. The code is already available under a Creative Commons licence, but I am not sure how readable and modifiable it will be to others currently. [Update: it is now on GitHub.]
[One thing I noticed is that I would probably have to make more of a decoupling between the GUI elements, the underlying logical elements, and the interactive text. For instance, at some point I made the decision (convenient at the time) to use some GUI elements to store some of the state variables of the text, e.g. the exercise buttons are currently storing the status of what exercises are unlocked or not. This is presumably not an example of good programming practice, though it would be relatively easy to fix. More seriously, due to my inability to come up with a good general-purpose matching algorithm (or even specification of such an algorithm) for the laws of first-order logic, many of the laws have to be hard-coded into the matching routine, so one cannot currently remove them from the text. It may well be that the best thing to do in fact is to rework the entire codebase from scratch using more professional software design methods.]
[Update, Aug 23: links moved to GitHub version.]
About six years ago on this blog, I started thinking about trying to make a web-based game based around high-school algebra, and ended up using Scratch to write a short but playable puzzle game in which one solves linear equations for an unknown ${x}$ using a restricted set of moves. (At almost the same time, there were a number of more professionally made games released along similar lines, most notably Dragonbox.)
Since then, I have thought a couple of times about whether there were other parts of mathematics which could be gamified in a similar fashion. Shortly after my first blog posts on this topic, I experimented with a similar gamification of Lewis Carroll’s classic list of logic puzzles, but the result was quite clunky, and I was never satisfied with it.
Over the last few weeks I returned to this topic though, thinking in particular about how to gamify the rules of inference of propositional logic, in a manner that at least vaguely resembles how mathematicians actually go about making logical arguments (e.g., splitting into cases, arguing by contradiction, using previous results as lemmas to help with subsequent ones, and so forth). The rules of inference are a list of a dozen or so deductive rules concerning propositional sentences (things like “(${A}$ AND ${B}$) OR (NOT ${C}$)”, where ${A,B,C}$ are some formulas). A typical such rule is Modus Ponens: if the sentence ${A}$ is known to be true, and the implication “${A}$ IMPLIES ${B}$” is also known to be true, then one can deduce that ${B}$ is also true. Furthermore, in this deductive calculus it is possible to temporarily introduce some unproven statements as an assumption, only to discharge them later. In particular, we have the deduction theorem: if, after making an assumption ${A}$, one is able to derive the statement ${B}$, then one can conclude that the implication “${A}$ IMPLIES ${B}$” is true without any further assumption.
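As a quick illustration of how these rules combine (this particular example is just for illustration, and is not taken verbatim from the text): suppose one is given the hypotheses “${A}$ IMPLIES ${B}$” and “${B}$ IMPLIES ${C}$”, and wishes to deduce “${A}$ IMPLIES ${C}$”. One first introduces ${A}$ as a temporary assumption; Modus Ponens applied to ${A}$ and “${A}$ IMPLIES ${B}$” yields ${B}$, and a second application of Modus Ponens to ${B}$ and “${B}$ IMPLIES ${C}$” yields ${C}$. Discharging the assumption using the deduction theorem then gives “${A}$ IMPLIES ${C}$” without any remaining assumptions.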
It took a while for me to come up with a workable game-like graphical interface for all of this, but I finally managed to set one up, now using Javascript instead of Scratch (which would be hopelessly inadequate for this task); indeed, part of the motivation of this project was to finally learn how to program in Javascript, which turned out to be not as formidable as I had feared (certainly having experience with other C-like languages like C++, Java, or lua, as well as some prior knowledge of HTML, was very helpful). The main code for this project is available here. Using this code, I have created an interactive textbook in the style of a computer game, which I have titled “QED”. This text contains thirty-odd exercises arranged in twelve sections that function as game “levels”, in which one has to use a given set of rules of inference, together with a given set of hypotheses, to reach a desired conclusion. The set of available rules increases as one advances through the text; in particular, each new section gives one or more rules, and additionally each exercise one solves automatically becomes a new deduction rule one can exploit in later levels, much as lemmas and propositions are used in actual mathematics to prove more difficult theorems. The text automatically tries to match available deduction rules to the sentences one clicks on or drags, to try to minimise the amount of manual input one needs to actually make a deduction.
Most of one’s proof activity takes place in a “root environment” of statements that are known to be true (under the given hypothesis), but for more advanced exercises one has to also work in sub-environments in which additional assumptions are made. I found the graphical metaphor of nested boxes to be useful to depict this tree of sub-environments, and it seems to combine well with the drag-and-drop interface.
The text also logs one’s moves in a more traditional proof format, which shows how the mechanics of the game correspond to a traditional mathematical argument. My hope is that this will give students a way to understand the underlying concept of forming a proof in a manner that is more difficult to achieve using traditional, non-interactive textbooks.
I have tried to organise the exercises in a game-like progression in which one first works with easy levels that train the player on a small number of moves, and then introduce more advanced moves one at a time. As such, the order in which the rules of inference are introduced is a little idiosyncratic. The most powerful rule (the law of the excluded middle, which is what separates classical logic from intuitionistic logic) is saved for the final section of the text.
Anyway, I am now satisfied enough with the state of the code and the interactive text that I am willing to make both available (and open source; I selected a CC-BY licence for both), and would be happy to receive feedback on any aspect of either. In principle one could extend the game mechanics to other mathematical topics than the propositional calculus – the rules of inference for first-order logic being an obvious next candidate – but it seems to make sense to focus just on propositional logic for now.
In graph theory, the recently developed theory of graph limits has proven to be a useful tool for analysing large dense graphs, being a convenient reformulation of the Szemerédi regularity lemma. Roughly speaking, the theory asserts that given any sequence ${G_n = (V_n, E_n)}$ of finite graphs, one can extract a subsequence ${G_{n_j} = (V_{n_j}, E_{n_j})}$ which converges (in a specific sense) to a continuous object known as a “graphon” – a symmetric measurable function ${p\colon [0,1] \times [0,1] \rightarrow [0,1]}$. What “converges” means in this context is that subgraph densities converge to the associated integrals of the graphon ${p}$. For instance, the edge density
$\displaystyle \frac{1}{|V_{n_j}|^2} |E_{n_j}|$
converges to the integral
$\displaystyle \int_0^1 \int_0^1 p(x,y)\ dx dy,$
the triangle density
$\displaystyle \frac{1}{|V_{n_j}|^3} \lvert \{ (v_1,v_2,v_3) \in V_{n_j}^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_{n_j} \} \rvert$
converges to the integral
$\displaystyle \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ dx_1 dx_2 dx_3,$
the four-cycle density
$\displaystyle \frac{1}{|V_{n_j}|^4} \lvert \{ (v_1,v_2,v_3,v_4) \in V_{n_j}^4: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_4\}, \{v_4,v_1\} \in E_{n_j} \} \rvert$
converges to the integral
$\displaystyle \int_0^1 \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_4) p(x_4,x_1)\ dx_1 dx_2 dx_3 dx_4,$
and so forth. One can use graph limits to prove many results in graph theory that were traditionally proven using the regularity lemma, such as the triangle removal lemma, and can also reduce many asymptotic graph theory problems to continuous problems involving multilinear integrals (although the latter problems are not necessarily easy to solve!). See this text of Lovasz for a detailed study of graph limits and their applications.
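As a simple illustrative example (not drawn from the above references): if ${G_n}$ is a sequence of Erdős-Rényi random graphs ${G(n,1/2)}$, in which each edge is present independently with probability ${1/2}$, then almost surely the ${G_n}$ converge to the constant graphon ${p \equiv 1/2}$; the edge density then converges to ${1/2}$, the triangle density to ${1/8}$, the four-cycle density to ${1/16}$, and so forth.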
One can also express graph limits (and more generally hypergraph limits) in the language of nonstandard analysis (or of ultraproducts); see for instance this paper of Elek and Szegedy, Section 6 of this previous blog post, or this paper of Towsner. (In this post we assume some familiarity with nonstandard analysis, as reviewed for instance in the previous blog post.) Here, one starts as before with a sequence ${G_n = (V_n,E_n)}$ of finite graphs, and then takes an ultraproduct (with respect to some arbitrarily chosen non-principal ultrafilter ${\alpha \in\beta {\bf N} \backslash {\bf N}}$) to obtain a nonstandard graph ${G_\alpha = (V_\alpha,E_\alpha)}$, where ${V_\alpha = \prod_{n\rightarrow \alpha} V_n}$ is the ultraproduct of the ${V_n}$, and similarly for the ${E_\alpha}$. The set ${E_\alpha}$ can then be viewed as a symmetric subset of ${V_\alpha \times V_\alpha}$ which is measurable with respect to the Loeb ${\sigma}$-algebra ${{\mathcal L}_{V_\alpha \times V_\alpha}}$ of the product ${V_\alpha \times V_\alpha}$ (see this previous blog post for the construction of Loeb measure). A crucial point is that this ${\sigma}$-algebra is larger than the product ${{\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha}}$ of the Loeb ${\sigma}$-algebra of the individual vertex set ${V_\alpha}$. This leads to a decomposition
$\displaystyle 1_{E_\alpha} = p + e$
where the “graphon” ${p}$ is the orthogonal projection of ${1_{E_\alpha}}$ onto ${L^2( {\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha} )}$, and the “regular error” ${e}$ is orthogonal to all product sets ${A \times B}$ for ${A, B \in {\mathcal L}_{V_\alpha}}$. The graphon ${p\colon V_\alpha \times V_\alpha \rightarrow [0,1]}$ then captures the statistics of the nonstandard graph ${G_\alpha}$, in exact analogy with the more traditional graph limits: for instance, the edge density
$\displaystyle \hbox{st} \frac{1}{|V_\alpha|^2} |E_\alpha|$
(or equivalently, the limit of the ${\frac{1}{|V_n|^2} |E_n|}$ along the ultrafilter ${\alpha}$) is equal to the integral
$\displaystyle \int_{V_\alpha} \int_{V_\alpha} p(x,y)\ d\mu_{V_\alpha}(x) d\mu_{V_\alpha}(y)$
where ${d\mu_V}$ denotes Loeb measure on a nonstandard finite set ${V}$; the triangle density
$\displaystyle \hbox{st} \frac{1}{|V_\alpha|^3} \lvert \{ (v_1,v_2,v_3) \in V_\alpha^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_\alpha \} \rvert$
(or equivalently, the limit along ${\alpha}$ of the triangle densities of ${E_n}$) is equal to the integral
$\displaystyle \int_{V_\alpha} \int_{V_\alpha} \int_{V_\alpha} p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ d\mu_{V_\alpha}(x_1) d\mu_{V_\alpha}(x_2) d\mu_{V_\alpha}(x_3),$
and so forth. Note that with this construction, the graphon ${p}$ is living on the Cartesian square of an abstract probability space ${V_\alpha}$, which is likely to be inseparable; but it is possible to cut down the Loeb ${\sigma}$-algebra on ${V_\alpha}$ to a countably generated ${\sigma}$-algebra for which ${p}$ remains measurable (up to null sets), and then one can identify ${V_\alpha}$ with ${[0,1]}$, bringing this construction of a graphon in line with the traditional notion of a graphon. (See Remark 5 of this previous blog post for more discussion of this point.)
Additive combinatorics, which studies things like the additive structure of finite subsets ${A}$ of an abelian group ${G = (G,+)}$, has many analogies and connections with asymptotic graph theory; in particular, there is the arithmetic regularity lemma of Green which is analogous to the graph regularity lemma of Szemerédi. (There is also a higher order arithmetic regularity lemma analogous to hypergraph regularity lemmas, but this is not the focus of the discussion here.) Given this, it is natural to suspect that there is a theory of “additive limits” for large additive sets of bounded doubling, analogous to the theory of graph limits for large dense graphs. The purpose of this post is to record a candidate for such an additive limit. This limit can be used as a substitute for the arithmetic regularity lemma in certain results in additive combinatorics, at least if one is willing to settle for qualitative results rather than quantitative ones; I give a few examples of this below the fold.
It seems that to allow for the most flexible and powerful manifestation of this theory, it is convenient to use the nonstandard formulation (among other things, it allows for full use of the transfer principle, whereas a more traditional limit formulation would only allow for a transfer of those quantities continuous with respect to the notion of convergence). Here, the analogue of a nonstandard graph is an ultra approximate group ${A_\alpha}$ in a nonstandard group ${G_\alpha = \prod_{n \rightarrow \alpha} G_n}$, defined as the ultraproduct of finite ${K}$-approximate groups ${A_n \subset G_n}$ for some standard ${K}$. (A ${K}$-approximate group ${A_n}$ is a symmetric set containing the origin such that ${A_n+A_n}$ can be covered by ${K}$ or fewer translates of ${A_n}$.) We then let ${O(A_\alpha)}$ be the external subgroup of ${G_\alpha}$ generated by ${A_\alpha}$; equivalently, ${O(A_\alpha)}$ is the union of ${A_\alpha^m}$ over all standard ${m}$. This space has a Loeb measure ${\mu_{O(A_\alpha)}}$, defined by setting
$\displaystyle \mu_{O(A_\alpha)}(E_\alpha) := \hbox{st} \frac{|E_\alpha|}{|A_\alpha|}$
whenever ${E_\alpha}$ is an internal subset of ${A_\alpha^m}$ for any standard ${m}$, and extended to a countably additive measure; the arguments in Section 6 of this previous blog post can be easily modified to give a construction of this measure.
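For instance (to illustrate the normalisation), if ${A_n}$ is the interval ${\{-N_n,\dots,N_n\}}$ in ${{\bf Z}}$ for some ${N_n \rightarrow \infty}$, then ${A_n + A_n = \{-2N_n,\dots,2N_n\}}$ has cardinality ${2|A_n|-1}$, and so ${\mu_{O(A_\alpha)}( A_\alpha^2 ) = 2}$, whereas ${\mu_{O(A_\alpha)}( A_\alpha ) = 1}$ by construction.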
The Loeb measure ${\mu_{O(A_\alpha)}}$ is a translation invariant measure on ${O(A_{\alpha})}$, normalised so that ${A_\alpha}$ has Loeb measure one. As such, one should think of ${O(A_\alpha)}$ as being analogous to a locally compact abelian group equipped with a Haar measure. It should be noted though that ${O(A_\alpha)}$ is not actually a locally compact group with Haar measure, for two reasons:
• There is not an obvious topology on ${O(A_\alpha)}$ that makes it simultaneously locally compact, Hausdorff, and ${\sigma}$-compact. (One can get one or two out of three without difficulty, though.)
• The addition operation ${+\colon O(A_\alpha) \times O(A_\alpha) \rightarrow O(A_\alpha)}$ is not measurable from the product Loeb algebra ${{\mathcal L}_{O(A_\alpha)} \times {\mathcal L}_{O(A_\alpha)}}$ to ${{\mathcal L}_{O(A_\alpha)}}$. Instead, it is measurable from the coarser Loeb algebra ${{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}}$ to ${{\mathcal L}_{O(A_\alpha)}}$ (compare with the analogous situation for nonstandard graphs).
Nevertheless, the analogy is a useful guide for the arguments that follow.
Let ${L(O(A_\alpha))}$ denote the space of bounded Loeb measurable functions ${f\colon O(A_\alpha) \rightarrow {\bf C}}$ (modulo almost everywhere equivalence) that are supported on ${A_\alpha^m}$ for some standard ${m}$; this is a complex algebra with respect to pointwise multiplication, which we abbreviate as ${L}$ below. There is also a convolution operation ${\star\colon L(O(A_\alpha)) \times L(O(A_\alpha)) \rightarrow L(O(A_\alpha))}$, defined by setting
$\displaystyle \hbox{st} f \star \hbox{st} g(x) := \hbox{st} \frac{1}{|A_\alpha|} \sum_{y \in A_\alpha^m} f(y) g(x-y)$
whenever ${f\colon A_\alpha^m \rightarrow {}^* {\bf C}}$, ${g\colon A_\alpha^l \rightarrow {}^* {\bf C}}$ are bounded nonstandard functions (extended by zero to all of ${O(A_\alpha)}$), and then extending to arbitrary elements of ${L(O(A_\alpha))}$ by density. Equivalently, ${f \star g}$ is the pushforward of the ${{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}}$-measurable function ${(x,y) \mapsto f(x) g(y)}$ under the map ${(x,y) \mapsto x+y}$.
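As a quick illustration of this convolution (continuing the example of the intervals ${A_n = \{-N_n,\dots,N_n\}}$ from before), one can compute that ${1_{A_\alpha} \star 1_{A_\alpha}(x) = 1 - \hbox{st} \frac{|x|}{2N_\alpha}}$ for ${x \in A_\alpha^2}$ (and vanishes outside ${A_\alpha^2}$), where ${N_\alpha}$ is the ultraproduct of the ${N_n}$; thus convolution converts the indicator of an interval into a tent function, just as in the classical theory of convolution on ${{\bf R}}$.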
The basic structural theorem is then as follows.
Theorem 1 (Kronecker factor) Let ${A_\alpha}$ be an ultra approximate group. Then there exists a (standard) locally compact abelian group ${G}$ of the form
$\displaystyle G = {\bf R}^d \times {\bf Z}^m \times T$
for some standard ${d,m}$ and some compact abelian group ${T}$, equipped with a Haar measure ${\mu_G}$ and a measurable homomorphism ${\pi\colon O(A_\alpha) \rightarrow G}$ (using the Loeb ${\sigma}$-algebra on ${O(A_\alpha)}$ and the Baire ${\sigma}$-algebra on ${G}$), with the following properties:
• (i) ${\pi}$ has dense image, and ${\mu_G}$ is the pushforward of Loeb measure ${\mu_{O(A_\alpha)}}$ by ${\pi}$.
• (ii) There exist sets ${\{0\} \subset U_0 \subset K_0 \subset G}$ with ${U_0}$ open and ${K_0}$ compact, such that
$\displaystyle \pi^{-1}(U_0) \subset 4A_\alpha \subset \pi^{-1}(K_0). \ \ \ \ \ (1)$
• (iii) Whenever ${K \subset U \subset G}$ with ${K}$ compact and ${U}$ open, there exists a nonstandard finite set ${B}$ such that
$\displaystyle \pi^{-1}(K) \subset B \subset \pi^{-1}(U). \ \ \ \ \ (2)$
• (iv) If ${f, g \in L}$, then we have the convolution formula
$\displaystyle f \star g = \pi^*( (\pi_* f) \star (\pi_* g) ) \ \ \ \ \ (3)$
where ${\pi_* f,\pi_* g}$ are the pushforwards of ${f,g}$ to ${L^2(G, \mu_G)}$, the convolution ${\star}$ on the right-hand side is convolution using ${\mu_G}$, and ${\pi^*}$ is the pullback map from ${L^2(G,\mu_G)}$ to ${L^2(O(A_\alpha), \mu_{O(A_\alpha)})}$. In particular, if ${\pi_* f = 0}$, then ${f \star g = 0}$ for all ${g \in L}$.
One can view the locally compact abelian group ${G}$ as a “model” or “Kronecker factor” for the ultra approximate group ${A_\alpha}$ (in close analogy with the Kronecker factor from ergodic theory). In the case that ${A_\alpha}$ is a genuine nonstandard finite group rather than an ultra approximate group, the non-compact components ${{\bf R}^d \times {\bf Z}^m}$ of the Kronecker group ${G}$ are trivial, and this theorem was implicitly established by Szegedy. The compact group ${T}$ is quite large, and in particular is likely to be inseparable; but as with the case of graphons, when one is only studying at most countably many functions ${f}$, one can cut down the size of this group to be separable (or equivalently, second countable or metrisable) if desired, so one often works with a “reduced Kronecker factor” which is a quotient of the full Kronecker factor ${G}$. Once one is in the separable case, the Baire sigma algebra is identical with the more familiar Borel sigma algebra.
Given any sequence of uniformly bounded functions ${f_n\colon A_n^m \rightarrow {\bf C}}$ for some fixed ${m}$, we can view the function ${f \in L}$ defined by
$\displaystyle f := \pi_* \hbox{st} \lim_{n \rightarrow \alpha} f_n \ \ \ \ \ (4)$
as an “additive limit” of the ${f_n}$, in much the same way that graphons ${p\colon V_\alpha \times V_\alpha \rightarrow [0,1]}$ are limits of the indicator functions ${1_{E_n}\colon V_n \times V_n \rightarrow \{0,1\}}$. The additive limits capture some of the statistics of the ${f_n}$, for instance the normalised means
$\displaystyle \frac{1}{|A_n|} \sum_{x \in A_n^m} f_n(x)$
converge (along the ultrafilter ${\alpha}$) to the mean
$\displaystyle \int_G f(x)\ d\mu_G(x),$
and for three sequences ${f_n,g_n,h_n\colon A_n^m \rightarrow {\bf C}}$ of functions, the normalised correlation
$\displaystyle \frac{1}{|A_n|^2} \sum_{x,y \in A_n^m} f_n(x) g_n(y) h_n(x+y)$
converges along ${\alpha}$ to the correlation
$\displaystyle \int_G \int_G f(x) g(y) h(x+y)\ d\mu_G(x) d\mu_G(y),$
the normalised ${U^2}$ Gowers norm
$\displaystyle ( \frac{1}{|A_n|^3} \sum_{x,y,z,w \in A_n^m: x+w=y+z} f_n(x) \overline{f_n(y)} \overline{f_n(z)} f_n(w))^{1/4}$
converges along ${\alpha}$ to the ${U^2}$ Gowers norm
$\displaystyle ( \int_{G \times G \times G} f(x) \overline{f(y)} \overline{f(z)} f(x+y-z)\ d\mu_G(x) d\mu_G(y) d\mu_G(z))^{1/4}$
and so forth. We caution however that some correlations that involve evaluating more than one function at the same point will not necessarily be preserved in the additive limit; for instance the normalised ${\ell^2}$ norm
$\displaystyle (\frac{1}{|A_n|} \sum_{x \in A_n^m} |f_n(x)|^2)^{1/2}$
does not necessarily converge to the ${L^2}$ norm
$\displaystyle (\int_G |f(x)|^2\ d\mu_G(x))^{1/2},$
but can converge instead to a larger quantity, due to the presence of the orthogonal projection ${\pi_*}$ in the definition (4) of ${f}$.
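For an extreme illustration of this phenomenon: if ${A_n = G_n}$ is a sequence of finite additive groups of order going to infinity, and ${f_n\colon G_n \rightarrow \{-1,+1\}}$ is a sequence of random sign functions (each sign chosen uniformly and independently), then almost surely the additive limit ${f}$ vanishes identically (compare with Example 5 below), even though the normalised ${\ell^2}$ norms of the ${f_n}$ are all equal to ${1}$.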
An important special case of an additive limit occurs when the functions ${f_n\colon A_n^m \rightarrow {\bf C}}$ involved are indicator functions ${f_n = 1_{E_n}}$ of some subsets ${E_n}$ of ${A_n^m}$. The additive limit ${f \in L}$ does not necessarily remain an indicator function, but instead takes values in ${[0,1]}$ (much as a graphon ${p}$ takes values in ${[0,1]}$ even though the original indicators ${1_{E_n}}$ take values in ${\{0,1\}}$). The convolution ${f \star f\colon G \rightarrow [0,1]}$ is then the ultralimit of the normalised convolutions ${\frac{1}{|A_n|} 1_{E_n} \star 1_{E_n}}$; in particular, the measure of the support of ${f \star f}$ provides a lower bound on the limiting normalised cardinality ${\frac{1}{|A_n|} |E_n + E_n|}$ of a sumset. In many situations this lower bound is an equality, but this is not necessarily the case, because the sumset ${2E_n = E_n + E_n}$ could contain a large number of elements which have very few (${o(|A_n|)}$) representations as the sum of two elements of ${E_n}$, and in the limit these portions of the sumset fall outside of the support of ${f \star f}$. (One can think of the support of ${f \star f}$ as describing the “essential” sumset of ${2E_n = E_n + E_n}$, discarding those elements that have only very few representations.) Similarly for higher convolutions of ${f}$. Thus one can use additive limits to partially control the growth ${k E_n}$ of iterated sumsets of subsets ${E_n}$ of approximate groups ${A_n}$, in the regime where ${k}$ stays bounded and ${n}$ goes to infinity.
Theorem 1 can be proven by Fourier-analytic means (combined with Freiman’s theorem from additive combinatorics), and we will do so below the fold. For now, we give some illustrative examples of additive limits.
Example 2 (Bohr sets) We take ${A_n}$ to be the intervals ${A_n := \{ x \in {\bf Z}: |x| \leq N_n \}}$, where ${N_n}$ is a sequence going to infinity; these are ${2}$-approximate groups for all ${n}$. Let ${\theta}$ be an irrational real number, let ${I}$ be an interval in ${{\bf R}/{\bf Z}}$, and for each natural number ${n}$ let ${B_n}$ be the Bohr set
$\displaystyle B_n := \{ x \in A_n: \theta x \hbox{ mod } 1 \in I \}.$
In this case, the (reduced) Kronecker factor ${G}$ can be taken to be the infinite cylinder ${{\bf R} \times {\bf R}/{\bf Z}}$ with the usual Lebesgue measure ${\mu_G}$. The additive limits of ${1_{A_n}}$ and ${1_{B_n}}$ end up being ${1_A}$ and ${1_B}$, where ${A}$ is the finite cylinder
$\displaystyle A := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]\}$
and ${B}$ is the rectangle
$\displaystyle B := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]; t \in I \}.$
Geometrically, one should think of ${A_n}$ and ${B_n}$ as being wrapped around the cylinder ${{\bf R} \times {\bf R}/{\bf Z}}$ via the homomorphism ${x \mapsto (\frac{x}{N_n}, \theta x \hbox{ mod } 1)}$, and then one sees that ${B_n}$ is converging in some normalised weak sense to ${B}$, and similarly for ${A_n}$ and ${A}$. In particular, the additive limit predicts the growth rate of the iterated sumsets ${kB_n}$ to be quadratic in ${k}$ until ${k|I|}$ becomes comparable to ${1}$, at which point the growth transitions to linear growth, in the regime where ${k}$ is bounded and ${n}$ is large.
If ${\theta = \frac{p}{q}}$ were rational instead of irrational, then one would need to replace ${{\bf R}/{\bf Z}}$ by the finite subgroup ${\frac{1}{q}{\bf Z}/{\bf Z}}$ here.
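To see where the growth-rate prediction in Example 2 comes from, note that the ${k}$-fold sumset of the limit rectangle ${B}$ is ${kB = \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-k,k]; t \in kI \}}$, and the ${k}$-fold sumset ${kI}$ of the arc ${I}$ has length ${\min( k|I|, 1 )}$ in ${{\bf R}/{\bf Z}}$; thus ${\mu_G(kB) = 2k \min(k|I|,1)}$, which grows quadratically in ${k}$ until ${k|I|}$ becomes comparable to ${1}$, and linearly thereafter.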
Example 3 (Structured subsets of progressions) We take ${A_n}$ to be the rank two progression
$\displaystyle A_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|, |b| \leq N_n \},$
where ${N_n}$ is a sequence going to infinity; these are ${4}$-approximate groups for all ${n}$. Let ${B_n}$ be the subset
$\displaystyle B_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|^2 + |b|^2 \leq N_n^2 \}.$
Then the (reduced) Kronecker factor can be taken to be ${G = {\bf R}^2}$ with Lebesgue measure ${\mu_G}$, and the additive limits of the ${1_{A_n}}$ and ${1_{B_n}}$ are then ${1_A}$ and ${1_B}$, where ${A}$ is the square
$\displaystyle A := \{ (a,b) \in {\bf R}^2: |a|, |b| \leq 1 \}$
and ${B}$ is the disk
$\displaystyle B := \{ (a,b) \in {\bf R}^2: a^2+b^2 \leq 1 \}.$
Geometrically, the picture is similar to the Bohr set one, except now one uses a Freiman homomorphism ${a + b N_n^2 \mapsto (\frac{a}{N_n}, \frac{b}{N_n})}$ for ${a,b = O( N_n )}$ to embed the original sets ${A_n, B_n}$ into the plane ${{\bf R}^2}$. In particular, one now expects the growth rate of the iterated sumsets ${k A_n}$ and ${k B_n}$ to be quadratic in ${k}$, in the regime where ${k}$ is bounded and ${n}$ is large.
Example 4 (Dissociated sets) Let ${d}$ be a fixed natural number, and take
$\displaystyle A_n = \{0, v_1,\dots,v_d,-v_1,\dots,-v_d \}$
where ${v_1,\dots,v_d}$ are randomly chosen elements of a large cyclic group ${{\bf Z}/p_n{\bf Z}}$, where ${p_n}$ is a sequence of primes going to infinity. These are ${O(d)}$-approximate groups. The (reduced) Kronecker factor ${G}$ can (almost surely) then be taken to be ${{\bf Z}^d}$ with counting measure, and the additive limit of ${1_{A_n}}$ is ${1_A}$, where ${A = \{ 0, e_1,\dots,e_d,-e_1,\dots,-e_d\}}$ and ${e_1,\dots,e_d}$ is the standard basis of ${{\bf Z}^d}$. In particular, the growth rates of ${k A_n}$ should grow approximately like ${k^d}$ for ${k}$ bounded and ${n}$ large.
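Indeed, in this example ${kA}$ is precisely the set of lattice points ${\{ x \in {\bf Z}^d: |x_1| + \dots + |x_d| \leq k \}}$, which for fixed ${d}$ has cardinality comparable to ${k^d}$ as ${k}$ gets large; this is the source of the ${k^d}$ prediction.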
Example 5 (Random subsets of groups) Let ${A_n = G_n}$ be a sequence of finite additive groups whose order is going to infinity. Let ${B_n}$ be a random subset of ${G_n}$ of some fixed density ${0 \leq \lambda \leq 1}$. Then (almost surely) the Kronecker factor here can be reduced all the way to the trivial group ${\{0\}}$, and the additive limit of the ${1_{B_n}}$ is the constant function ${\lambda}$. The convolutions ${\frac{1}{|G_n|} 1_{B_n} * 1_{B_n}}$ then converge in the ultralimit (modulo almost everywhere equivalence) to the pullback of ${\lambda^2}$; this reflects the fact that ${(1-o(1))|G_n|}$ of the elements of ${G_n}$ can be represented as the sum of two elements of ${B_n}$ in ${(\lambda^2 + o(1)) |G_n|}$ ways. In particular, ${B_n+B_n}$ occupies a proportion ${1-o(1)}$ of ${G_n}$.
Example 6 (Trigonometric series) Take ${A_n = G_n = {\bf Z}/p_n {\bf Z}}$ for a sequence ${p_n}$ of primes going to infinity, and for each ${n}$ let ${\xi_{n,1},\xi_{n,2},\dots}$ be an infinite sequence of frequencies chosen uniformly and independently from ${{\bf Z}/p_n{\bf Z}}$. Let ${f_n\colon {\bf Z}/p_n{\bf Z} \rightarrow {\bf C}}$ denote the random trigonometric series
$\displaystyle f_n(x) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i \xi_{n,j} x / p_n }.$
Then (almost surely) we can take the reduced Kronecker factor ${G}$ to be the infinite torus ${({\bf R}/{\bf Z})^{\bf N}}$ (with the Haar probability measure ${\mu_G}$), and the additive limit of the ${f_n}$ then becomes the function ${f\colon ({\bf R}/{\bf Z})^{\bf N} \rightarrow {\bf R}}$ defined by the formula
$\displaystyle f( (x_j)_{j=1}^\infty ) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i x_j}.$
In fact, the pullback ${\pi^* f}$ is the ultralimit of the ${f_n}$. As such, for any standard exponent ${1 \leq q < \infty}$, the normalised ${l^q}$ norm
$\displaystyle (\frac{1}{p_n} \sum_{x \in {\bf Z}/p_n{\bf Z}} |f_n(x)|^q)^{1/q}$
can be seen to converge to the limit
$\displaystyle (\int_{({\bf R}/{\bf Z})^{\bf N}} |f(x)|^q\ d\mu_G(x))^{1/q}.$
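For instance, in the case ${q=2}$ this limit can be evaluated exactly: the characters ${(x_j)_{j=1}^\infty \mapsto e^{2\pi i x_j}}$ are orthonormal in ${L^2( ({\bf R}/{\bf Z})^{\bf N}, \mu_G )}$, so the limit equals ${(\sum_{j=1}^\infty 4^{-j})^{1/2} = 1/\sqrt{3}}$.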
The reader is invited to consider combinations of the above examples, e.g. random subsets of Bohr sets, to get a sense of the general case of Theorem 1.
It is likely that this theorem can be extended to the noncommutative setting, using the noncommutative Freiman theorem of Emmanuel Breuillard, Ben Green, and myself, but I have not attempted to do so here (see though this recent preprint of Anush Tserunyan for some related explorations); in a separate direction, there should be extensions that can control higher Gowers norms, in the spirit of the work of Szegedy.
Note: the arguments below will presume some familiarity with additive combinatorics and with nonstandard analysis, and will be a little sketchy in places.
Let ${\bar{{\bf Q}}}$ be the algebraic closure of ${{\bf Q}}$, that is to say the field of algebraic numbers. We fix an embedding of ${\bar{{\bf Q}}}$ into ${{\bf C}}$, giving rise to a complex absolute value ${z \mapsto |z|}$ for algebraic numbers ${z \in \bar{{\bf Q}}}$.
Let ${\alpha \in \bar{{\bf Q}}}$ be of degree ${D > 1}$, so that ${\alpha}$ is irrational. A classical theorem of Liouville gives the quantitative bound
$\displaystyle |\alpha - \frac{p}{q}| \geq c \frac{1}{|q|^D} \ \ \ \ \ (1)$
on the extent to which ${\alpha}$ fails to be approximated by rational numbers ${p/q}$, where ${c>0}$ depends on ${\alpha,D}$ but not on ${p,q}$. Indeed, if one lets ${\alpha = \alpha_1, \alpha_2, \dots, \alpha_D}$ be the Galois conjugates of ${\alpha}$, then the quantity ${\prod_{i=1}^D |q \alpha_i - p|}$ is a non-zero natural number divided by a constant (the leading coefficient of the minimal polynomial of ${\alpha}$), and so we have the trivial lower bound
$\displaystyle \prod_{i=1}^D |q \alpha_i - p| \geq c$
from which the bound (1) easily follows. A well known corollary of the bound (1) is that Liouville numbers are automatically transcendental.
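To illustrate the bound (1) in the simplest nontrivial case ${\alpha = \sqrt{2}}$ (so ${D=2}$): if ${|\sqrt{2} - \frac{p}{q}| \leq 1}$ then ${|\sqrt{2} + \frac{p}{q}| \leq 4}$, and hence

$\displaystyle |\sqrt{2} - \frac{p}{q}| \geq \frac{|2q^2 - p^2|}{4 q^2} \geq \frac{1}{4 q^2}$

since ${2q^2 - p^2}$ is a non-zero integer; the case ${|\sqrt{2} - \frac{p}{q}| > 1}$ is trivial. Thus one may take ${c = 1/4}$ in (1) for this choice of ${\alpha}$.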
The famous theorem of Thue, Siegel and Roth improves the bound (1) to
$\displaystyle |\alpha - \frac{p}{q}| \geq c \frac{1}{|q|^{2+\epsilon}} \ \ \ \ \ (2)$
for any ${\epsilon>0}$ and rationals ${\frac{p}{q}}$, where ${c>0}$ depends on ${\alpha,\epsilon}$ but not on ${p,q}$. Apart from the ${\epsilon}$ in the exponent and the constant ${c}$, this bound is optimal, as can be seen from Dirichlet’s theorem. This theorem is a good example of the ineffectivity phenomenon that affects a large portion of modern number theory: the constant ${c}$ is known to be positive, but there is no explicit lower bound for it in terms of the coefficients of the polynomial defining ${\alpha}$ (in contrast to (1), for which an effective bound may be easily established). This is ultimately due to the reliance on the “dueling conspiracy” (or “repulsion phenomenon”) strategy. We do not as yet have a good way to rule out one counterexample to (2), in which ${\frac{p}{q}}$ is far closer to ${\alpha}$ than ${\frac{1}{|q|^{2+\epsilon}}}$; however we can rule out two such counterexamples, by playing them off of each other.
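(Regarding the optimality mentioned above: recall that the Dirichlet approximation theorem provides, for any irrational ${\alpha}$, infinitely many rationals ${\frac{p}{q}}$ with ${|\alpha - \frac{p}{q}| \leq \frac{1}{q^2}}$; this is why the exponent ${2}$ in (2) cannot be improved.)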
A powerful strengthening of the Thue-Siegel-Roth theorem is given by the subspace theorem, first proven by Schmidt and then generalised further by several authors. To motivate the theorem, first observe that the Thue-Siegel-Roth theorem may be rephrased as a bound of the form
$\displaystyle | \alpha p - \beta q | \times | \alpha' p - \beta' q | \geq c (1 + |p| + |q|)^{-\epsilon} \ \ \ \ \ (3)$
for any algebraic numbers ${\alpha,\beta,\alpha',\beta'}$ with ${(\alpha,\beta)}$ and ${(\alpha',\beta')}$ linearly independent (over the algebraic numbers), and any ${(p,q) \in {\bf Z}^2}$ and ${\epsilon>0}$, with the exception when ${\alpha,\beta}$ or ${\alpha',\beta'}$ are rationally dependent (i.e. one is a rational multiple of the other), in which case one has to remove some lines (i.e. subspaces in ${{\bf Q}^2}$) of rational slope from the space ${{\bf Z}^2}$ of pairs ${(p,q)}$ to which the bound (3) does not apply (namely, those lines for which the left-hand side vanishes). Here ${c>0}$ can depend on ${\alpha,\beta,\alpha',\beta',\epsilon}$ but not on ${p,q}$. More generally, we have
Theorem 1 (Schmidt subspace theorem) Let ${d}$ be a natural number. Let ${L_1,\dots,L_d: \bar{{\bf Q}}^d \rightarrow \bar{{\bf Q}}}$ be linearly independent linear forms. Then for any ${\epsilon>0}$, one has the bound
$\displaystyle \prod_{i=1}^d |L_i(x)| \geq c (1 + \|x\| )^{-\epsilon}$
for all ${x \in {\bf Z}^d}$, outside of a finite number of proper subspaces of ${{\bf Q}^d}$, where
$\displaystyle \| (x_1,\dots,x_d) \| := \max( |x_1|, \dots, |x_d| )$
and ${c>0}$ depends on ${\epsilon, d}$ and the coefficients of the linear forms ${L_1,\dots,L_d}$, but is independent of ${x}$.
Being a generalisation of the Thue-Siegel-Roth theorem, it is unsurprising that the known proofs of the subspace theorem are also ineffective with regards to the constant ${c}$. (However, the number of exceptional subspaces may be bounded effectively; cf. the situation with the Skolem-Mahler-Lech theorem, discussed in this previous blog post.) Once again, the lower bound here is basically sharp except for the ${\epsilon}$ factor and the implied constant: given any ${\delta_1,\dots,\delta_d > 0}$ with ${\delta_1 \dots \delta_d = 1}$, a simple volume packing argument (the same one used to prove the Dirichlet approximation theorem) shows that for any sufficiently large ${N \geq 1}$, one can find integers ${x_1,\dots,x_d \in [-N,N]}$, not all zero, such that
$\displaystyle |L_i(x)| \ll \delta_i$
for all ${i=1,\dots,d}$. Thus one can get ${\prod_{i=1}^d |L_i(x)|}$ comparable to ${1}$ in many different ways.
There are important generalisations of the subspace theorem to other number fields than the rationals (and to other valuations than the Archimedean valuation ${z \mapsto |z|}$); we will develop one such generalisation below.
The subspace theorem is one of many finiteness theorems in Diophantine geometry; in this case, it is the number of exceptional subspaces which is finite. It turns out that finiteness theorems are very compatible with the language of nonstandard analysis. (See this previous blog post for a review of the basics of nonstandard analysis, and in particular for the nonstandard interpretation of asymptotic notation such as ${\ll}$ and ${o()}$.) The reason for this is that a standard set ${X}$ is finite if and only if it contains no strictly nonstandard elements (that is to say, elements of ${{}^* X \backslash X}$). This makes for a clean formulation of finiteness theorems in the nonstandard setting. For instance, the standard form of Bezout’s theorem asserts that if ${P(x,y), Q(x,y)}$ are coprime polynomials over some field, then the curves ${\{ (x,y): P(x,y) = 0\}}$ and ${\{ (x,y): Q(x,y)=0\}}$ intersect in only finitely many points. The nonstandard version of this is then
Theorem 2 (Bezout’s theorem, nonstandard form) Let ${P(x,y), Q(x,y)}$ be standard coprime polynomials. Then there are no strictly nonstandard solutions to ${P(x,y)=Q(x,y)=0}$.
Now we reformulate Theorem 1 in nonstandard language. We need a definition:
Definition 3 (General position) Let ${K \subset L}$ be nested fields. A point ${x = (x_1,\dots,x_d)}$ in ${L^d}$ is said to be in ${K}$-general position if it is not contained in any hyperplane of ${L^d}$ definable over ${K}$, or equivalently if one has
$\displaystyle a_1 x_1 + \dots + a_d x_d = 0 \iff a_1=\dots = a_d = 0$
for any ${a_1,\dots,a_d \in K}$.
Theorem 4 (Schmidt subspace theorem, nonstandard version) Let ${d}$ be a standard natural number. Let ${L_1,\dots,L_d: \bar{{\bf Q}}^d \rightarrow \bar{{\bf Q}}}$ be linearly independent standard linear forms. Let ${x \in {}^* {\bf Z}^d}$ be a tuple of nonstandard integers which is in ${{\bf Q}}$-general position (in particular, this forces ${x}$ to be strictly nonstandard). Then one has
$\displaystyle \prod_{i=1}^d |L_i(x)| \gg \|x\|^{-o(1)},$
where we extend ${L_i}$ from ${\bar{{\bf Q}}}$ to ${{}^* \bar{{\bf Q}}}$ (and also similarly extend ${\| \|}$ from ${{\bf Z}^d}$ to ${{}^* {\bf Z}^d}$) in the usual fashion.
Observe that (as is usual when translating to nonstandard analysis) some of the epsilons and quantifiers that are present in the standard version become hidden in the nonstandard framework, being moved inside concepts such as “strictly nonstandard” or “general position”. We remark that as ${x}$ is in ${{\bf Q}}$-general position, it is also in ${\bar{{\bf Q}}}$-general position (as an easy Galois-theoretic argument shows), and the requirement that the ${L_1,\dots,L_d}$ are linearly independent is thus equivalent to ${L_1(x),\dots,L_d(x)}$ being ${\bar{{\bf Q}}}$-linearly independent.
Exercise 1 Verify that Theorem 1 and Theorem 4 are equivalent. (Hint: there are only countably many proper subspaces of ${{\bf Q}^d}$.)
We will not prove the subspace theorem here, but instead focus on a particular application of the subspace theorem, namely to counting integer points on curves. In this paper of Corvaja and Zannier, the subspace theorem was used to give a new proof of the following basic result of Siegel:
Theorem 5 (Siegel’s theorem on integer points) Let ${P \in {\bf Q}[x,y]}$ be an irreducible polynomial of two variables, such that the affine plane curve ${C := \{ (x,y): P(x,y)=0\}}$ either has genus at least one, or has at least three points on the line at infinity, or both. Then ${C}$ has only finitely many integer points ${(x,y) \in {\bf Z}^2}$.
This is a finiteness theorem, and as such may be easily converted to a nonstandard form:
Theorem 6 (Siegel’s theorem, nonstandard form) Let ${P \in {\bf Q}[x,y]}$ be a standard irreducible polynomial of two variables, such that the affine plane curve ${C := \{ (x,y): P(x,y)=0\}}$ either has genus at least one, or has at least three points on the line at infinity, or both. Then ${C}$ does not contain any strictly nonstandard integer points ${(x_*,y_*) \in {}^* {\bf Z}^2 \backslash {\bf Z}^2}$.
Note that Siegel’s theorem can fail for genus zero curves that only meet the line at infinity at just one or two points; the key examples here are the graphs ${\{ (x,y): y - f(x) = 0\}}$ for a polynomial ${f \in {\bf Z}[x]}$, and the Pell equation curves ${\{ (x,y): x^2 - dy^2 = 1 \}}$. Siegel’s theorem can be compared with the more difficult theorem of Faltings, which establishes finiteness of rational points (not just integer points), but now needs the stricter requirement that the curve ${C}$ has genus at least two (to avoid the additional counterexample of elliptic curves of positive rank, which have infinitely many rational points).
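(For instance, the Pell curve ${\{ (x,y): x^2 - 2y^2 = 1 \}}$ contains the infinite family of integer points ${(x_k, y_k)}$ given by ${x_k + y_k \sqrt{2} = (3 + 2\sqrt{2})^k}$, beginning with ${(3,2), (17,12), (99,70), \dots}$.)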
The standard proofs of Siegel’s theorem rely on a combination of the Thue-Siegel-Roth theorem and a number of results on abelian varieties (notably the Mordell-Weil theorem). The Corvaja-Zannier argument rebalances the difficulty of the argument by replacing the Thue-Siegel-Roth theorem by the more powerful subspace theorem (in fact, they need one of the stronger versions of this theorem alluded to earlier), while greatly reducing the reliance on results on abelian varieties. Indeed, for curves with three or more points at infinity, no theory from abelian varieties is needed at all, while for the remaining cases, one mainly needs the existence of the Abel-Jacobi embedding, together with a relatively elementary theorem of Chevalley-Weil which is used in the proof of the Mordell-Weil theorem, but is significantly easier to prove.
The Corvaja-Zannier argument (together with several further applications of the subspace theorem) is presented nicely in this Bourbaki expose of Bilu. To establish the theorem in full generality requires a certain amount of algebraic number theory machinery, such as the theory of valuations on number fields, or of relative discriminants between such number fields. However, the basic ideas can be presented without much of this machinery by focusing on simple special cases of Siegel’s theorem. For instance, we can handle irreducible cubics that meet the line at infinity at exactly three points ${[1,\alpha_1,0], [1,\alpha_2,0], [1,\alpha_3,0]}$:
Theorem 7 (Siegel’s theorem with three points at infinity) Siegel’s theorem holds when the irreducible polynomial ${P(x,y)}$ takes the form
$\displaystyle P(x,y) = (y - \alpha_1 x) (y - \alpha_2 x) (y - \alpha_3 x) + Q(x,y)$
for some quadratic polynomial ${Q \in {\bf Q}[x,y]}$ and some distinct algebraic numbers ${\alpha_1,\alpha_2,\alpha_3}$.
Proof: We use the nonstandard formalism. Suppose for sake of contradiction that we can find a strictly nonstandard integer point ${(x_*,y_*) \in {}^* {\bf Z}^2 \backslash {\bf Z}^2}$ on a curve ${C := \{ (x,y): P(x,y)=0\}}$ of the indicated form. As this point is infinitesimally close to the line at infinity, ${y_*/x_*}$ must be infinitesimally close to one of ${\alpha_1,\alpha_2,\alpha_3}$; without loss of generality we may assume that ${y_*/x_*}$ is infinitesimally close to ${\alpha_1}$.
We now use a version of the polynomial method, to find some polynomials of controlled degree that vanish to high order on the “arm” of the cubic curve ${C}$ that asymptotes to ${[1,\alpha_1,0]}$. More precisely, let ${D \geq 3}$ be a large integer (actually ${D=3}$ will already suffice here), and consider the ${\bar{{\bf Q}}}$-vector space ${V}$ of polynomials ${R(x,y) \in \bar{{\bf Q}}[x,y]}$ of degree at most ${D}$, and of degree at most ${2}$ in the ${y}$ variable; this space has dimension ${3D}$. Also, as one traverses the arm ${y/x \rightarrow \alpha_1}$ of ${C}$, any polynomial ${R}$ in ${V}$ grows at a rate of at most ${D}$, that is to say ${R}$ has a pole of order at most ${D}$ at the point at infinity ${[1,\alpha_1,0]}$. By performing Laurent expansions around this point (which is a non-singular point of ${C}$, as the ${\alpha_i}$ are assumed to be distinct), we may thus find a basis ${R_1, \dots, R_{3D}}$ of ${V}$, with the property that ${R_j}$ has a pole of order at most ${D+1-j}$ at ${[1,\alpha_1,0]}$ for each ${j=1,\dots,3D}$.
From the control of the pole at ${[1,\alpha_1,0]}$, we have
$\displaystyle |R_j(x_*,y_*)| \ll (|x_*|+|y_*|)^{D+1-j}$
for all ${j=1,\dots,3D}$. The exponents here become negative for ${j > D+1}$, and on multiplying them all together we see that
$\displaystyle \prod_{j=1}^{3D} |R_j(x_*,y_*)| \ll (|x_*|+|y_*|)^{3D(D+1) - \frac{3D(3D+1)}{2}}.$
This exponent is negative for ${D}$ large enough; for instance, for ${D=3}$ it equals ${36 - 45 = -9}$. If we expand
$\displaystyle R_j(x_*,y_*) = \sum_{a+b \leq D; b \leq 2} \alpha_{j,a,b} x_*^a y_*^b$
for some algebraic numbers ${\alpha_{j,a,b}}$, then we thus have
$\displaystyle \prod_{j=1}^{3D} |\sum_{a+b \leq D; b \leq 2} \alpha_{j,a,b} x_*^a y_*^b| \ll (|x_*|+|y_*|)^{-\epsilon}$
for some standard ${\epsilon>0}$. Note that the ${3D}$-dimensional vectors ${(\alpha_{j,a,b})_{a+b \leq D; b \leq 2}}$ are linearly independent in ${{\bf C}^{3D}}$, because the ${R_j}$ are linearly independent in ${V}$. Applying the Schmidt subspace theorem in the contrapositive, we conclude that the ${3D}$-tuple ${( x_*^a y_*^b )_{a+b \leq D; b \leq 2} \in {}^* {\bf Z}^{3D}}$ is not in ${{\bf Q}}$-general position. That is to say, one has a non-trivial constraint of the form
$\displaystyle \sum_{a+b \leq D; b \leq 2} c_{a,b} x_*^a y_*^b = 0 \ \ \ \ \ (4)$
for some standard rational coefficients ${c_{a,b}}$, not all zero. But, as ${P}$ is irreducible and cubic in ${y}$, it has no common factor with the standard polynomial ${\sum_{a+b \leq D; b \leq 2} c_{a,b} x^a y^b}$, so by Bezout’s theorem (Theorem 2) the constraint (4) only has standard solutions, contradicting the strictly nonstandard nature of ${(x_*,y_*)}$. $\Box$
Exercise 2 Rewrite the above argument so that it makes no reference to nonstandard analysis. (In this case, the rewriting is quite straightforward; however, there will be a subsequent argument in which the standard version is significantly messier than the nonstandard counterpart, which is the reason why I am working with the nonstandard formalism in this blog post.)
A similar argument works for higher degree curves that meet the line at infinity in three or more points, though if the curve has singularities at infinity then it becomes convenient to rely on the Riemann-Roch theorem to control the dimension of the analogue of the space ${V}$. Note that when there are only two or fewer points at infinity, though, one cannot get the negative exponent of ${-\epsilon}$ needed to usefully apply the subspace theorem. To deal with this case we require some additional tricks. For simplicity we focus on the case of Mordell curves, although it will be convenient to work with more general number fields ${{\bf Q} \subset K \subset \bar{{\bf Q}}}$ than the rationals:
Theorem 8 (Siegel’s theorem for Mordell curves) Let ${k}$ be a non-zero integer. Then there are only finitely many integer solutions ${(x,y) \in {\bf Z}^2}$ to ${y^2 - x^3 = k}$. More generally, for any number field ${K}$, and any nonzero ${k \in K}$, there are only finitely many algebraic integer solutions ${(x,y) \in {\mathcal O}_K^2}$ to ${y^2-x^3=k}$, where ${{\mathcal O}_K}$ is the ring of algebraic integers in ${K}$.
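(A classical instance of this finiteness: the only integer solutions to ${y^2 - x^3 = -2}$ are ${(x,y) = (3, \pm 5)}$, a fact essentially going back to Fermat.)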
Again, we will establish the nonstandard version. We need some additional notation:
Definition 9
• We define an almost rational integer to be a nonstandard ${x \in {}^* {\bf Q}}$ such that ${Mx \in {}^* {\bf Z}}$ for some standard positive integer ${M}$, and write ${{\bf Q} {}^* {\bf Z}}$ for the ${{\bf Q}}$-algebra of almost rational integers.
• If ${K}$ is a standard number field, we define an almost ${K}$-integer to be a nonstandard ${x \in {}^* K}$ such that ${Mx \in {}^* {\mathcal O}_K}$ for some standard positive integer ${M}$, and write ${K {}^* {\bf Z} = K \, {}^* {\mathcal O}_K}$ for the ${K}$-algebra of almost ${K}$-integers.
• We define an almost algebraic integer to be a nonstandard ${x \in {}^* \bar{{\bf Q}}}$ such that ${Mx}$ is a nonstandard algebraic integer for some standard positive integer ${M}$, and write ${\bar{{\bf Q}} {}^* {\bf Z}}$ for the ${\bar{{\bf Q}}}$-algebra of almost algebraic integers.
Theorem 10 (Siegel for Mordell, nonstandard version) Let ${k}$ be a non-zero standard algebraic number. Then the curve ${\{ (x,y): y^2 - x^3 = k \}}$ does not contain any strictly nonstandard almost algebraic integer point.
Another way of phrasing this theorem is that if ${x,y}$ are strictly nonstandard almost algebraic integers, then ${y^2-x^3}$ is either strictly nonstandard or zero.
Exercise 3 Verify that Theorem 8 and Theorem 10 are equivalent.
Due to all the ineffectivity, our proof does not supply any bound on the solutions ${x,y}$ in terms of ${k}$, even if one removes all references to nonstandard analysis. It is a conjecture of Hall (a special case of the notorious ABC conjecture) that one has the bound ${|x| \ll_\epsilon |k|^{2+\epsilon}}$ for all ${\epsilon>0}$ (or equivalently ${|y| \ll_\epsilon |k|^{3+\epsilon}}$), but even the weaker conjecture that ${x,y}$ are of polynomial size in ${k}$ is open. (The best known bounds are of exponential nature, and are proven using a version of Baker’s method: see for instance this text of Sprindzuk.)
A direct repetition of the arguments used to prove Theorem 7 will not work here, because the Mordell curve ${\{ (x,y): y^2 - x^3 = k \}}$ only hits the line at infinity at one point, ${[0,1,0]}$. To get around this we will exploit the fact that the Mordell curve is an elliptic curve and thus has a group law on it. We will then divide all the integer points on this curve by two; as elliptic curves have four 2-torsion points, this will end up placing us in a situation like Theorem 7, with four points at infinity. However, there is an obstruction: it is not obvious that dividing an integer point on the Mordell curve by two will produce another integer point. However, this is essentially true (after enlarging the ring of integers slightly) thanks to a general principle of Chevalley and Weil, which can be worked out explicitly in the case of division by two on Mordell curves by relatively elementary means (relying mostly on unique factorisation of ideals of algebraic integers). We give the details below the fold.
There are a number of ways to construct the real numbers ${{\bf R}}$, for instance
• as the metric completion of ${{\bf Q}}$ (thus, ${{\bf R}}$ is defined as the set of Cauchy sequences of rationals, modulo Cauchy equivalence);
• as the space of Dedekind cuts on the rationals ${{\bf Q}}$;
• as the space of quasimorphisms ${\phi: {\bf Z} \rightarrow {\bf Z}}$ on the integers, quotiented by bounded functions. (I believe this construction first appears in this paper of Street, who credits the idea to Schanuel, though the germ of this construction arguably goes all the way back to Eudoxus.)
There is also a fourth family of constructions that proceeds via nonstandard analysis, as a special case of what is known as the nonstandard hull construction. (Here I will assume some basic familiarity with nonstandard analysis and ultraproducts, as covered for instance in this previous blog post.) Given an unbounded nonstandard natural number ${N \in {}^* {\bf N} \backslash {\bf N}}$, one can define two external additive subgroups of the nonstandard integers ${{}^* {\bf Z}}$:
• The group ${O(N) := \{ n \in {}^* {\bf Z}: |n| \leq CN \hbox{ for some } C \in {\bf N} \}}$ of all nonstandard integers of magnitude less than or comparable to ${N}$; and
• The group ${o(N) := \{ n \in {}^* {\bf Z}: |n| \leq C^{-1} N \hbox{ for all } C \in {\bf N} \}}$ of nonstandard integers of magnitude infinitesimally smaller than ${N}$.
The group ${o(N)}$ is a subgroup of ${O(N)}$, so we may form the quotient group ${O(N)/o(N)}$. This space is isomorphic to the reals ${{\bf R}}$, and can in fact be used to construct the reals:
Proposition 1 For any coset ${n + o(N)}$ of ${O(N)/o(N)}$, there is a unique real number ${\hbox{st} \frac{n}{N}}$ with the property that ${\frac{n}{N} = \hbox{st} \frac{n}{N} + o(1)}$. The map ${n + o(N) \mapsto \hbox{st} \frac{n}{N}}$ is then an isomorphism between the additive groups ${O(N)/o(N)}$ and ${{\bf R}}$.
Proof: Uniqueness is clear. For existence, observe that the set ${\{ x \in {\bf R}: Nx \leq n + o(N) \}}$ is a Dedekind cut, and its supremum can be verified to have the required properties for ${\hbox{st} \frac{n}{N}}$. $\Box$
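For example, the nonstandard integers ${\lfloor N/2 \rfloor}$ and ${\lfloor \sqrt{N} \rfloor}$ both lie in ${O(N)}$, but only the latter lies in ${o(N)}$; under the isomorphism of Proposition 1, their cosets correspond to the real numbers ${1/2}$ and ${0}$ respectively.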
In a similar vein, we can view the unit interval ${[0,1]}$ in the reals as the quotient
$\displaystyle [0,1] \equiv [N] / o(N) \ \ \ \ \ (1)$
where ${[N]}$ is the nonstandard (i.e. internal) set ${\{ n \in {\bf N}: n \leq N \}}$; of course, ${[N]}$ is not a group, so one should interpret ${[N]/o(N)}$ as the image of ${[N]}$ under the quotient map ${{}^* {\bf Z} \rightarrow {}^* {\bf Z} / o(N)}$ (or ${O(N) \rightarrow O(N)/o(N)}$, if one prefers). Or to put it another way, (1) asserts that ${[0,1]}$ is the image of ${[N]}$ with respect to the map ${\pi: n \mapsto \hbox{st} \frac{n}{N}}$.
In this post I would like to record a nice measure-theoretic version of the equivalence (1), which essentially appears already in standard texts on Loeb measure (see e.g. this text of Cutland). To describe the results, we must first quickly recall the construction of Loeb measure on ${[N]}$. Given an internal subset ${A}$ of ${[N]}$, we may define the elementary measure ${\mu_0(A)}$ of ${A}$ by the formula
$\displaystyle \mu_0(A) := \hbox{st} \frac{|A|}{N}.$
This is a finitely additive probability measure on the Boolean algebra of internal subsets of ${[N]}$. We can then construct the Loeb outer measure ${\mu^*(A)}$ of any subset ${A \subset [N]}$ in complete analogy with Lebesgue outer measure by the formula
$\displaystyle \mu^*(A) := \inf \sum_{n=1}^\infty \mu_0(A_n)$
where ${(A_n)_{n=1}^\infty}$ ranges over all sequences of internal subsets of ${[N]}$ that cover ${A}$. We say that a subset ${A}$ of ${[N]}$ is Loeb measurable if, for any (standard) ${\epsilon>0}$, one can find an internal subset ${B}$ of ${[N]}$ which differs from ${A}$ by a set of Loeb outer measure at most ${\epsilon}$, and in that case we define the Loeb measure ${\mu(A)}$ of ${A}$ to be ${\mu^*(A)}$. It is a routine matter to show (e.g. using the Carathéodory extension theorem) that the space ${{\mathcal L}}$ of Loeb measurable sets is a ${\sigma}$-algebra, and that ${\mu}$ is a countably additive probability measure on this space that extends the elementary measure ${\mu_0}$. Thus ${[N]}$ now has the structure of a probability space ${([N], {\mathcal L}, \mu)}$.
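As a quick example of a Loeb measurable set that is not internal: the external set ${o(N) \cap [N]}$ is contained in the internal set ${\{ n \in [N]: n \leq N/k \}}$, which has elementary measure roughly ${1/k}$, for every standard ${k}$; thus ${o(N) \cap [N]}$ has Loeb measure zero.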
Now, the group ${o(N)}$ acts (Loeb-almost everywhere) on the probability space ${[N]}$ by the addition map, thus ${T^h n := n+h}$ for ${n \in [N]}$ and ${h \in o(N)}$ (excluding a set of Loeb measure zero where ${n+h}$ exits ${[N]}$). This action is clearly seen to be measure-preserving. As such, we can form the invariant factor ${Z^0_{o(N)}([N]) = ([N], {\mathcal L}^{o(N)}, \mu\downharpoonright_{{\mathcal L}^{o(N)}})}$, defined by restricting attention to those Loeb measurable sets ${A \subset [N]}$ with the property that ${T^h A}$ is equal ${\mu}$-almost everywhere to ${A}$ for each ${h \in o(N)}$.
The claim is then that this invariant factor is equivalent (up to almost everywhere equivalence) to the unit interval ${[0,1]}$ with Lebesgue measure ${m}$ (and the trivial action of ${o(N)}$), by the same factor map ${\pi: n \mapsto \hbox{st} \frac{n}{N}}$ used in (1). More precisely:
Theorem 2 Given a set ${A \in {\mathcal L}^{o(N)}}$, there exists a Lebesgue measurable set ${B \subset [0,1]}$, unique up to ${m}$-a.e. equivalence, such that ${A}$ is ${\mu}$-a.e. equivalent to the set ${\pi^{-1}(B) := \{ n \in [N]: \hbox{st} \frac{n}{N} \in B \}}$. Conversely, if ${B \in [0,1]}$ is Lebesgue measurable, then ${\pi^{-1}(B)}$ is in ${{\mathcal L}^{o(N)}}$, and ${\mu( \pi^{-1}(B) ) = m( B )}$.
In particular, we obtain a measure-theoretic analogue
$\displaystyle [0,1] \equiv Z^0_{o(N)}( [N] )$
of (1).
Proof: We first prove the converse. It is clear that ${\pi^{-1}(B)}$ is ${o(N)}$-invariant, so it suffices to show that ${\pi^{-1}(B)}$ is Loeb measurable with Loeb measure ${m(B)}$. This is easily verified when ${B}$ is an elementary set (a finite union of intervals). By countable subadditivity of outer measure, this implies that the Loeb outer measure of ${\pi^{-1}(E)}$ is bounded by the Lebesgue outer measure of ${E}$ for any set ${E \subset [0,1]}$; since every Lebesgue measurable set differs from an elementary set by a set of arbitrarily small Lebesgue outer measure, the claim follows.
Now we establish the forward claim. Uniqueness is clear from the converse claim, so it suffices to show existence. Let ${A \in {\mathcal L}^{o(N)}}$. Let ${\epsilon>0}$ be an arbitrary standard real number; then we can find an internal set ${A_\epsilon \subset [N]}$ which differs from ${A}$ by a set of Loeb measure at most ${\epsilon}$. As ${A}$ is ${o(N)}$-invariant, we conclude that for every ${h \in o(N)}$, ${A_\epsilon}$ and ${T^h A_\epsilon}$ differ by a set of Loeb measure (and hence elementary measure) at most ${2\epsilon}$. By the (contrapositive of the) underspill principle, there must exist a standard ${\delta>0}$ such that ${A_\epsilon}$ and ${T^h A_\epsilon}$ differ by a set of elementary measure at most ${2\epsilon}$ for all ${|h| \leq \delta N}$. If we then define the nonstandard function ${f: [N] \rightarrow {}^* {\bf R}}$ (depending on ${\epsilon}$) by the formula
$\displaystyle f(n) := \frac{1}{\delta N} \sum_{m \in [N]: m \leq n \leq m+\delta N} 1_{A_\epsilon}(m),$
then from the (nonstandard) triangle inequality we have
$\displaystyle \frac{1}{N} \sum_{n \in [N]} |f(n) - 1_{A_\epsilon}(n)| \leq 3\epsilon$
(say). On the other hand, ${f}$ has the Lipschitz continuity property
$\displaystyle |f(n)-f(m)| \leq \frac{2|n-m|}{\delta N}$
and so in particular we see that
$\displaystyle \hbox{st} f(n) = \tilde f( \hbox{st} \frac{n}{N} )$
for some Lipschitz continuous function ${\tilde f: [0,1] \rightarrow [0,1]}$. If we then let ${E_\epsilon}$ be the set where ${\tilde f \geq 1 - \sqrt{\epsilon}}$, one can check that ${A_\epsilon}$ differs from ${\pi^{-1}(E_\epsilon)}$ by a set of Loeb outer measure ${O(\sqrt{\epsilon})}$, and hence ${A}$ does so also. Sending ${\epsilon}$ to zero, we see (from the converse claim) that ${1_{E_\epsilon}}$ is a Cauchy sequence in ${L^1}$ and thus converges in ${L^1}$ to ${1_E}$ for some Lebesgue measurable set ${E}$. The sets ${A_\epsilon}$ then converge in Loeb outer measure to ${\pi^{-1}(E)}$, giving the claim. $\Box$
Thanks to the Lebesgue differentiation theorem, the conditional expectation ${{\bf E}( f | Z^0_{o(N)}([N]))}$ of a bounded Loeb-measurable function ${f: [N] \rightarrow {\bf R}}$ can be expressed (as a function on ${[0,1]}$, defined ${m}$-a.e.) as
$\displaystyle {\bf E}( f | Z^0_{o(N)}([N]))(x) := \lim_{\epsilon \rightarrow 0} \frac{1}{2\epsilon} \int_{[(x-\epsilon) N,(x+\epsilon) N]} f\ d\mu.$
By the abstract ergodic theorem from the previous post, one can also view this conditional expectation as the element in the closed convex hull of the shifts ${T^h f}$, ${h = o(N)}$ of minimal ${L^2}$ norm. In particular, we obtain a form of the von Neumann ergodic theorem in this context: the averages ${\frac{1}{H} \sum_{h=1}^H T^h f}$ for ${H=o(N)}$ converge (as a net, rather than a sequence) in ${L^2}$ to ${{\bf E}( f | Z^0_{o(N)}([N]))}$.
If ${f: [N] \rightarrow [-1,1]}$ is (the standard part of) an internal function, that is to say the ultralimit of a sequence ${f_n: [N_n] \rightarrow [-1,1]}$ of finitary bounded functions, one can view the measurable function ${F := {\bf E}( f | Z^0_{o(N)}([N]))}$ as a limit of the ${f_n}$ that is analogous to the “graphons” that emerge as limits of graphs (see e.g. the recent text of Lovasz on graph limits). Indeed, the measurable function ${F: [0,1] \rightarrow [-1,1]}$ is related to the discrete functions ${f_n: [N_n] \rightarrow [-1,1]}$ by the formula
$\displaystyle \int_a^b F(x)\ dx = \hbox{st} \lim_{n \rightarrow p} \frac{1}{N_n} \sum_{a N_n \leq m \leq b N_n} f_n(m)$
for all ${0 \leq a < b \leq 1}$, where ${p}$ is the nonprincipal ultrafilter used to define the nonstandard universe. In particular, from the Arzela-Ascoli diagonalisation argument there is a subsequence ${n_j}$ such that
$\displaystyle \int_a^b F(x)\ dx = \lim_{j \rightarrow \infty} \frac{1}{N_{n_j}} \sum_{a N_{n_j} \leq m \leq b N_{n_j}} f_{n_j}(m),$
thus ${F}$ is the asymptotic density function of the ${f_n}$. For instance, if ${f_n}$ is the indicator function of a randomly chosen subset of ${[N_n]}$, then the asymptotic density function would equal ${1/2}$ (almost everywhere, at least).
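This graphon-like limit is easy to simulate in the finitary setting. The short Python sketch below (the sizes ${N}$ and the interval ${[a,b]}$ are arbitrary illustrative choices) draws a uniformly random subset of ${[N]}$ and checks that its density on ${[aN, bN]}$ concentrates around the value ${1/2}$ predicted for the asymptotic density function.

```python
import random

# The density of a uniformly random subset of [N] on a fixed interval [aN, bN]
# concentrates around 1/2 as N grows; N, a, b below are illustrative choices.
random.seed(1)
a, b = 0.25, 0.75
for N in (10**3, 10**5, 10**6):
    subset = [random.random() < 0.5 for _ in range(N)]
    window = subset[int(a * N):int(b * N)]
    density = sum(window) / len(window)
    print(f"N = {N:>7}: density on [{a}N, {b}N] = {density:.4f}")
```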
I’m continuing to look into understanding the ergodic theory of ${o(N)}$ actions, as I believe this may allow one to apply ergodic theory methods to the “single-scale” or “non-asymptotic” setting (in which one averages only over scales comparable to a large parameter ${N}$, rather than the traditional asymptotic approach of letting the scale go to infinity). I’m planning some further posts in this direction, though this is still a work in progress.
(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical Methods in Discrete Analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)
Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.
The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:
| Discrete | Continuous | Limit method |
| --- | --- | --- |
| Ramsey theory | Topological dynamics | Compactness |
| Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle |
| Graph/hypergraph regularity | Measure theory | Graph limits |
| Polynomial regularity | Linear algebra | Ultralimits |
| Structural decompositions | Hilbert space geometry | Ultralimits |
| Fourier analysis | Spectral theory | Direct and inverse limits |
| Quantitative algebraic geometry | Algebraic geometry | Schemes |
| Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits |
| Approximate group theory | Topological group theory | Model theory |
As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:
• Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects ${x_n}$ in a common space ${X}$, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object ${\lim_{n \rightarrow \infty} x_n}$, which remains in the same space, and is “close” to many of the original objects ${x_n}$ with respect to the given metric or topology.
• Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects ${x_n}$ in a category ${X}$, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit ${\varinjlim x_n}$ or the inverse limit ${\varprojlim x_n}$ of these objects, which is another object in the same category ${X}$, and is connected to the original objects ${x_n}$ by various morphisms.
• Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects ${x_{\bf n}}$ or of spaces ${X_{\bf n}}$, each of which is (a component of) a model for given (first-order) mathematical language (e.g. if one is working in the language of groups, ${X_{\bf n}}$ might be groups and ${x_{\bf n}}$ might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$ or a new space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$, which is still a model of the same language (e.g. if the spaces ${X_{\bf n}}$ were all groups, then the limiting space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ is an abelian group, then the ${X_{\bf n}}$ will also be abelian groups for many ${{\bf n}}$.)
The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects ${x_{\bf n}}$ to all lie in a common space ${X}$ in order to form an ultralimit ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; they are permitted to lie in different spaces ${X_{\bf n}}$; this is more natural in many discrete contexts, e.g. when considering graphs on ${{\bf n}}$ vertices in the limit when ${{\bf n}}$ goes to infinity. Also, no convergence properties on the ${x_{\bf n}}$ are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces ${X_{\bf n}}$ involved are required in order to construct the ultraproduct.
With so few requirements on the objects ${x_{\bf n}}$ or spaces ${X_{\bf n}}$, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly useful for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the ${x_{\bf n}}$, will be exactly obeyed by the limit object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking the contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is a property closely analogous to that of compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”). To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.
Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.
Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.
The classical foundations of probability theory (discussed for instance in this previous blog post) rest on the notion of a probability space ${(\Omega, {\cal E}, {\bf P})}$ – a space ${\Omega}$ (the sample space) equipped with a ${\sigma}$-algebra ${{\cal E}}$ (the event space), together with a countably additive probability measure ${{\bf P}: {\cal E} \rightarrow [0,1]}$ that assigns a real number in the interval ${[0,1]}$ to each event.
One can generalise the concept of a probability space to a finitely additive probability space, in which the event space ${{\cal E}}$ is now only a Boolean algebra rather than a ${\sigma}$-algebra, and the measure ${{\bf P}}$ is now only finitely additive instead of countably additive, thus ${{\bf P}( E \vee F ) = {\bf P}(E) + {\bf P}(F)}$ when ${E,F}$ are disjoint events. By giving up countable additivity, one loses a fair amount of measure and integration theory, and in particular the notion of the expectation of a random variable becomes problematic (unless the random variable takes only finitely many values). Nevertheless, one can still perform a fair amount of probability theory in this weaker setting.
In this post I would like to describe a further weakening of probability theory, which I will call qualitative probability theory, in which one does not assign a precise numerical probability value ${{\bf P}(E)}$ to each event, but instead merely records whether this probability is zero, one, or something in between. Thus ${{\bf P}}$ is now a function from ${{\cal E}}$ to the set ${\{0, I, 1\}}$, where ${I}$ is a new symbol that replaces all the elements of the open interval ${(0,1)}$. In this setting, one can no longer compute quantitative expressions, such as the mean or variance of a random variable; but one can still talk about whether an event holds almost surely, with positive probability, or with zero probability, and there are still usable notions of independence. (I will refer to classical probability theory as quantitative probability theory, to distinguish it from its qualitative counterpart.)
The main reason I want to introduce this weak notion of probability theory is that it becomes suited to talk about random variables living inside algebraic varieties, even if these varieties are defined over fields other than ${{\bf R}}$ or ${{\bf C}}$. In algebraic geometry one often talks about a “generic” element of a variety ${V}$ defined over a field ${k}$, which does not lie in any specified variety of lower dimension defined over ${k}$. Once ${V}$ has positive dimension, such generic elements do not exist as classical, deterministic ${k}$-points ${x}$ in ${V}$, since of course any such point lies in the ${0}$-dimensional subvariety ${\{x\}}$ of ${V}$. There are of course several established ways to deal with this problem. One way (which one might call the “Weil” approach to generic points) is to extend the field ${k}$ to a sufficiently transcendental extension ${\tilde k}$, in order to locate a sufficient number of generic points in ${V(\tilde k)}$. Another approach (which one might dub the “Zariski” approach to generic points) is to work scheme-theoretically, and interpret a generic point in ${V}$ as being associated to the zero ideal in the function ring of ${V}$. However I want to discuss a third perspective, in which one interprets a generic point not as a deterministic object, but rather as a random variable ${{\bf x}}$ taking values in ${V}$, but which lies in any given lower-dimensional subvariety of ${V}$ with probability zero. This interpretation is intuitive, but difficult to implement in classical probability theory (except perhaps when considering varieties over ${{\bf R}}$ or ${{\bf C}}$) due to the lack of a natural probability measure to place on algebraic varieties; however it works just fine in qualitative probability theory. In particular, the algebraic geometry notion of being “generically true” can now be interpreted probabilistically as an assertion that something is “almost surely true”.
It turns out that just as qualitative random variables may be used to interpret the concept of a generic point, they can also be used to interpret the concept of a type in model theory; the type of a random variable ${x}$ is the set of all predicates ${\phi(x)}$ that are almost surely obeyed by ${x}$. In contrast, model theorists often adopt a Weil-type approach to types, in which one works with deterministic representatives of a type, which often do not occur in the original structure of interest, but only in a sufficiently saturated extension of that structure (this is the analogue of working in a sufficiently transcendental extension of the base field). However, it seems that (in some cases at least) one can equivalently view types in terms of (qualitative) random variables on the original structure, avoiding the need to extend that structure. (Instead, one reserves the right to extend the sample space of one’s probability theory whenever necessary, as part of the “probabilistic way of thinking” discussed in this previous blog post.) We illustrate this below the fold with two related theorems that I will interpret through the probabilistic lens: the “group chunk theorem” of Weil (and later developed by Hrushovski), and the “group configuration theorem” of Zilber (and again later developed by Hrushovski). For sake of concreteness we will only consider these theorems in the theory of algebraically closed fields, although the results are quite general and can be applied to many other theories studied in model theory.
http://www.thespectrumofriemannium.com/2019/11/17/log239-higgspecial/?shared=email&msg=fail | # LOG#239. Higgspecial.
The Higgs particle, found in 2012, is special. The next generations of physicists and scientists will likely build larger machines or colliders to study it precisely. The question is, of course, where the new physics is, i.e., where is the new physics energy scale? Is it the Planck scale? Is it lower?
## What is a particle?
Particles are field excitations. Fields satisfy wave equations. Thus particles, as representations of fields, also verify field or wave equations. Fields and particles have also symmetries. They are invariant under certain group transformations. There are several types of symmetry transformations:
1. Spacetime symmetries or spacetime invariance. They include: translations, rotations, boosts (pure Lorentz transformations) and the full three type combination. The homogeneous Lorentz group does not include translations. The inhomogeneous Lorentz group includes translations and it is called Poincaré group. Generally speaking, spacetime symmetries are local spacetime transformations only.
2. Internal (gauge) symmetries. These transformations are transformations of the fields up to a phase factor at the same spacetime-point. They can be global and local.
3. Supersymmetry. Transformations relating different statistics particles, i.e., relating bosons and fermions. It can be extended to higher spin under the names of hypersymmetry and hypersupersymmetry. It can also be extended to N-graded supermanifolds.
We say a transformation is global when the group parameter does not depend on the base space (generally spacetime). We say a transformation is local when it depends of functions defined on the base space.
Quantum mechanics is just a theory relating “numbers” to each other. Particles or fields are defined as functions of the spacetime momentum (a continuum in general) and a certain discrete set of numbers (all of them quantum numbers), and thus quantum particles/waves are represented as certain unitary representations of the Poincaré group (spacetime)! Superfields generalize this: different particles or fields are certain unitary representations of the super-Poincaré group (superspacetime)! Equivalently, particles are invariant states under group or supergroup transformations. Particle physics is the study of the fundamental laws of Nature governed by the (yet hidden and mysterious) fusion of quantum mechanics rules and spacetime rules.
From the 17th century to the 20th century we had a march of reductionism and symmetries. Whatever the ultimate theory is, relativity plus QM (Quantum Mechanics) are recovered as approximations at low or relatively low (100 GeV) energies. Reductionism works: massless particles interact via Greek-Y (upsilon) vertices.
Massless particles can be easily described by twistors, certain bispinors (couples of spinors):
Indeed, interactions are believed to be effectively described by parallel twistor-like variables and . The Poincaré group completely fixes the way in which particles interact with each other. For instance, the 4-particle scattering constraints
where is the spin of the particle. Be careful not to confuse the spin with the Mandelstam variable . Locality implies the factorization of the 4-particle amplitude into two pieces, such as
Two special cases are (the Higgs!) and (the graviton!):
where the latter represents the 2×2 graviton scattering. For spin S=1 you have
Interactions between both massive and massless spin-one particles must contain spin-zero particles in order to unitarize the scattering amplitudes! Scalar bosons are Higgs bosons. Of course, at very high energies, the Higgs and the chiral components of the massive gauge bosons (spin one) are all unified into a single electroweak interaction. A belief in these principles has paid off: particles have only spin 0, 1/2, 1, 3/2, 2, … The 21st century revelations must include some additional pieces of information about this stuff:
• The doom or end of spacetime. Is the end of reductionism in sight?
• Why is the Universe big?
• New ideas required beyond spacetime and internal symmetries. The missing link is usually called supersymmetry (SUSY), certain odd symmetry transformations relating bosons and fermions. New dogma.
• UV/IR entanglement/link/connection. At energies bigger than the Planck energy, it seems physics classicalizes. We have black holes with large sizes, and thus (rest) energies larger than the Planck energy. High energy is short-distance UV physics. Low energy is large-distance IR physics.
• Reductionism plus the Wilsonian effective field theory approach as the paradigmatic model is false. The fundamental theories or laws of Nature are nothing like condensed matter physics (even when condensed matter systems are useful analogues!). Far deeper and more radical ideas are necessary. Only at the Planck scale?
Photons must stay massless for consistent Quantum Electrodynamics, so they are Higgs-transparent; that is the Nima statement on this thing. Massless helicities are not the same as massive helicities. This fact is essential to gauge fields and chiral fermions, so they can be easily engineered in condensed matter physics. However, Higgs fields are strange to condensed matter systems. The Higgs is special because it does NOT naturally arise in superconductor physics and other condensed matter fields. Why is the Higgs mass low compared to the Planck mass? That is the riddle. The enigma. Higgs particles naturally receive quantum corrections to their mass from boson and fermion particles. The cosmological constant problem is beyond a Higgs-like explanation because the Higgs field energy is too large to handle it. Of course, there are some ideas about how to fix it, but they are cumbersome and very complicated. We need to go beyond standard symmetries. And even so, puzzles appear. Flat spacetimes, de Sitter (dS) spacetimes or Anti de Sitter (AdS) spacetimes? They have some amount of extra symmetries: . The cases of (dS), (flat spacetime), (AdS). Recently, we found (not easily) dS vacua in string and superstring theories. But AdS/CFT correspondences are better understood as yet. We are missing something huge about the QM of the relativistic vacuum in order to understand the macroscopic Universe we observe/live in.
Why is the Higgs discovery so important? Our relativistic vacuum is qualitatively different from anything we have seen (dark matter, dark energy, …) in ordinary physics. Not just at the Planck scale! Already at the GeV and TeV scales we face problems! The Higgs plus nothing else at low energies means that something is wrong or at least not completely correct. The Higgs is the most important character in this dramatic story of dark stuff. We can put it under the most incisive and precise experimental testing! So, we need either better colliders, or better dark matter/dark energy observations. The Higgs is new physics from this viewpoint:
1. We have never seen scalar (fundamental and structureless?) fields before.
2. Harbinger of deep and hidden new principles/sectors at work in the quantum realm.
3. We must study it closely.
It could turn out that Higgs particles are composite particles. How pointlike are Higgs particles? Higgs particles could really be composites of some superstrong pion-like stuff. But they could also be truly fundamental. A Higgs factory working at the pole mass of the Higgs should serve to see whether the Higgs is point-like (fundamental). Furthermore, we have never seen self-interacting scalar fields before. A 100 TeV collider or larger could measure the Higgs self-coupling to within 5%. The Higgs is similar to gravity there: the Higgs self-interacts much like gravitons!
Yang-Mills fields (YM) plus gravity change the helicity of particles AND their color. 100 TeV colliders blast interactions and push High Energy Physics forward. New particle masses up to 10 times the masses accessible to the LHC would be available. They would probe vacuum fluctuations with 100 times the LHC power. The challenge is hard for experimentalists. Meanwhile, the theorists must go far beyond known theories and into the theory of the mysterious cosmological constant and the Higgs scalar field. The macrouniverse versus the microuniverse is at hand. Do on-shell Lorentzian couplings rival off-shell Euclidean couplings of the Higgs? Standard local QFT in Euclidean spacetimes is related to Lorentzian fields. UV/IR changes this view! Something must be changed or challenged!
## Toy example
Suppose . By analytic continuation,
Is Effective Field Theory implying 0 Higgs and that unnatural value? Wrong! Take for instance
Then
This mechanism for removing bulk signs works in AdS/CFT correspondences and alike theories. For we need something similar to remove singularities and test it! For instance, UV-IR tuning could provide sensitivities to loop processes arising in the EFT of the Higgs potential
However, why should the tree-level piece cancel the 1-loop correction? It contains UV states and terms. Tree amplitudes are rational amplitudes. Loop amplitudes are transcendental amplitudes! But long-known funny things in QFT computations do happen. For instance,
Well, this does not happen. There is a hidden mechanism here, known from Feynman’s books. Rational approximations to transcendental numbers are also known from old mathematics! A high school student knows
This is transcendental because of the single pole at . If you take instead
you get an apparent tuning of rational to transcendental numbers
and thus, e.g., if N=5, you get a tiny difference to by a factor of (the difference up to this precision is ). The same idea works if you take
You get for N=1 and if N=2. Thus, we could conjecture a fantasy: there is a dual formulation of standard physics that represents physical amplitudes and observables in terms of integrals over abstract geometries (motives? schemes? a generalized amplituhedron as seen in super YM?). In this formulation, the discrepancy between the cosmological constant scale and the Higgs mass is solved and it is obviously small. But it cannot be obviously local physics. Another formulation separates the different number-theoretical parts and looks like local physics, though! However, it will be fine-tuned like the integrals above! In the end, something could look like
Fine-tuning could, thus, be only an apparent artifact of local field theory!
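The textbook instance of this rational-versus-transcendental tension (my guess at the kind of integral intended above, since the original formulas were lost in extraction, so treat it as an illustration rather than the author's exact expressions) is the Dalzell integral: the integral of x^4 (1-x)^4 / (1+x^2) from 0 to 1 equals 22/7 - pi, a manifestly positive, tiny quantity equal to a rational number minus pi. A short numerical check in Python:

```python
import math

# Numerical check of the classical identity
#     integral_0^1 x^4 (1 - x)^4 / (1 + x^2) dx = 22/7 - pi,
# a tiny positive number, so 22/7 approximates pi from above.
# (This integrand is an assumed illustration; the post's own formulas are missing.)
def integrand(x: float) -> float:
    return x**4 * (1 - x)**4 / (1 + x**2)

def simpson(f, a: float, b: float, steps: int = 10_000) -> float:
    h = (b - a) / steps
    total = f(a) + f(b)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

I = simpson(integrand, 0.0, 1.0)
print("integral        =", I)
print("22/7 - pi       =", 22 / 7 - math.pi)   # matches the integral
print("22/7 - integral =", 22 / 7 - I)         # recovers pi
print("math.pi         =", math.pi)
```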
A final concrete example:
Take . Then,
And it is guaranteed to be fine-tuned! This should have critical tests at a Higgs factory, a very large LHC, and/or the 100 TeV colliders mentioned above. In the example above, if
with no sixth-power or eighth-power terms. Precision circular electron-positron colliders could handle this physics. Signals from tuning mechanisms could be searched for. It is not just these terms only. Higher-dimensional operators and corrections to the Higgs potential (the vacuum structure itself!) could be studied. But also, we could search for new fields or tiny effects we cannot reach within the LHC.
Summary: scientific issues today are deeper than those of the 1930s or even the 1900s. Questions raised by the accelerated universe and the Higgs discovery go to the heart of the nature of spacetime and the vacuum structure of our Universe!
What about symmetries? In the lagrangian (action) approach, you get a symmetry variation (quasiinvariance) of the lagrangian as follows
Then, by the first Noether theorem, imposing that the action is extremal (generally minimal), and the Euler-Lagrange equations (equations of motion), you would get
a conserved current (and a charge after integration over the suitable measure):
such as
where .
This theorem can be generalized to higher-order lagrangians, in any number of dimensions, and even to fractional and differintegral operators. Furthermore, a second Noether theorem handles ambiguities in this theorem, stating that local gauge transformations imply certain relations or identities between the field equations (sometimes referred to as Bianchi identities, although they go beyond these classical identities). You can go further with differential forms, exterior calculus or even with Clifford geometric calculus. A p-form
defines p-dimensional objects that can be naturally integrated out. For a p-tube in D-dimensions
On p-forms, the Hodge star operator for a p-form in D dimensions turns it into a (D-p)-form
As , then we have , where if the metric is Lorentzian, for euclidean metrics and , the number of time-like dimensions, if the metric is ultrahyperbolic. Moreover,
For maps, you can also write
where the latter is generally valid up to a sign. The Hodge laplacian reads
and you also get
If is not zero (the boundary is not null), then it implies essentially Dirichlet or Neumann boundary conditions for . When you apply the adjoint operator on p-forms you get
in general but you pick up an extra sign in euclidean signatures.
To end this eclectic post, first a new twist on Weak Gravity Conjectures (WGC). Why is the electron charge so small in certain units? That is, . Take the Coulomb and Newton laws
Then,
Planck mass is
and then
Planckian entities satisfy instead ! Then, the enigma is why
In other words, in relativistic natural units with . The WGC states that the lightest charged particle with in ANY U(1) (abelian gauge) theory admits a UV embedding into a consistent quantum gravity theory if and only if
where is the gauge coupling, and QED satisfies the WGC since . The WGC ensures that extremal black holes are unstable and decay rapidly into non-extremal black holes (if they even form!) via processes that are QG-consistent (and so avoid staying extremal). Furthermore, the WGC could imply the 3rd law of thermodynamics easily. For Reissner-Nordström black holes
and a grey body correction to black hole arises from this too
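To put numbers on the question above of why the electron charge looks small, or equivalently why gravity is so weak for elementary particles, one can simply compare the Coulomb and Newton forces between two electrons; the distance dependence cancels in the ratio. A rough Python estimate with standard (rounded) constants:

```python
import math

# Electromagnetism versus gravity for two electrons: the distance cancels,
# so the ratio of the Coulomb force to the Newtonian force is a pure number.
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
G    = 6.67430e-11        # Newton constant, m^3 kg^-1 s^-2
m_e  = 9.1093837015e-31   # electron mass, kg
hbar = 1.054571817e-34    # reduced Planck constant, J s
c    = 2.99792458e8       # speed of light, m/s

k_e = 1 / (4 * math.pi * eps0)                  # Coulomb constant
ratio = (k_e * e**2) / (G * m_e**2)
print(f"F_Coulomb / F_Newton for two electrons ~ {ratio:.3e}")   # ~ 4e42

M_planck = math.sqrt(hbar * c / G)              # Planck mass
print(f"Planck mass ~ {M_planck:.3e} kg, m_e / M_Planck ~ {m_e / M_planck:.3e}")
```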
Generalised Uncertainty principles (GUP) plus Chandrasekhar limits enjoy similarities with the WGC:
The S-matrix
and by time reversal, the principle of detailed balance holds, so
Quantum determinism, via unitarity, implies . However, Nature could surprise us, and that would affect Chandrasekhar masses or TOV limits. Stellar evolution implies luminosities , where is the effective temperature for black bodies (Planck law), and is the Stefan-Boltzmann constant
Maximal energy for a set of baryons under gravity is
as the baryon number for a star is
Using the Wien law
the stars locally have an equation for baryon density about
Stars are sustained by gas and radiation against gravitational collapse. The star pressure
Thus, the maximal mass for a white dwarf star made of baryons is about . The ideal gas law implies the HR diagram! Luminosity scales as the cube of mass. The Eddington limit, the maximal luminosity for any star, reads off as
and a Buchdahl limit arises from this and the TOV limit as follows
and then
implies a BH as an inevitable consequence iff approximately!
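For orientation, here are the rough numbers behind the Eddington and Buchdahl limits invoked above, computed from the standard expressions L_Edd = 4 pi G M m_p c / sigma_T and R >= 9 G M / (4 c^2) (these formulas are assumed here, since the original formula images are missing):

```python
import math

# Order-of-magnitude numbers for the Eddington luminosity and the Buchdahl bound,
# using the standard expressions (assumed here; the post's own formulas are missing).
G       = 6.67430e-11      # m^3 kg^-1 s^-2
c       = 2.99792458e8     # m/s
m_p     = 1.67262192e-27   # proton mass, kg
sigma_T = 6.6524587e-29    # Thomson cross-section, m^2
M_sun   = 1.989e30         # kg
L_sun   = 3.828e26         # W

M = M_sun
L_edd = 4 * math.pi * G * M * m_p * c / sigma_T
print(f"Eddington luminosity (1 solar mass): {L_edd:.3e} W  (~{L_edd / L_sun:.1e} L_sun)")

R_s = 2 * G * M / c**2                # Schwarzschild radius
R_buchdahl = 9 * G * M / (4 * c**2)   # minimum radius of a static star
print(f"Schwarzschild radius: {R_s / 1e3:.2f} km, Buchdahl bound: {R_buchdahl / 1e3:.2f} km")
```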
Epilogue: heterodynes or superheterodynes? Jansky? dB scales? Photoelectric effect is compatible with multiphoton processes and special relativity too. SR has formulae for Compton effect, inverse Compton effect, pair creations, pair annihilations, and strong field effects!
https://www.physicsforums.com/threads/momentum-word-problem.126316/ | # Momentum Word Problem.
1. Jul 16, 2006
### JassC
Two carts with masses of 4.3 kg and 3.2 kg move toward each other on a frictionless track with speeds of 5.8 m/s and 4.5 m/s, respectively. The carts stick together after colliding. Find their final speed. Answer in units of m/s.
I plugged in the numbers into this equation
(4.3)(5.8) + (3.2)(4.5) = (4.3+3.2)Vf
39.34 = 7.5Vf
5.2453 = Vf
That apparently isn't the correct answer =/
Last edited by a moderator: Apr 22, 2017
2. Jul 16, 2006
### d_leet
Remember that momentum is a vector: it has magnitude and direction, so you need to take this into account when you work the problem. Notice that the problem says the two carts are heading towards each other, so they have to be moving in different directions, so take velocity in one direction as positive and the other as negative and you should be able to get the correct answer.
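For concreteness, here is that sign bookkeeping in a few lines of Python (taking rightward as positive and the 3.2 kg cart's velocity as negative, which is the only thing being added to the numbers in the problem):

```python
# Perfectly inelastic collision: total momentum is conserved.
# The carts move toward each other, so give their velocities opposite signs.
m1, v1 = 4.3, +5.8   # kg, m/s
m2, v2 = 3.2, -4.5   # kg, m/s

v_final = (m1 * v1 + m2 * v2) / (m1 + m2)
print(f"final velocity = {v_final:+.3f} m/s")   # about +1.41 m/s, so the speed is ~1.41 m/s
```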
3. Jul 16, 2006
### JassC
Okay cool, I got it . Thanks!
4. Jul 16, 2006
### d_leet
No problem, glad I could help.
5. Jul 16, 2006
### JassC
I don't want to make a new thread so I'll ask this one.
A 37.9 kg girl is standing on a 98 kg plank. The plank, originally at rest, is free to slide on a frozen lake, which is a flat, frictionless supporting surface. The girl begins to walk along the plank at a constant speed of 1.54 m/s to the right relative to the plank.
What is her velocity relative to the ice surface? Answer in units of m/s.
Am I using conservation of momentum or force here? I don't know where to start.
6. Jul 16, 2006
### d_leet
Conservation of momentum, I'm pretty sure there is no such thing as conservation of force. Since she and the plank are initially at rest there is no initial momentum, so when she starts moving the plank must move in the other direction in order for momentum to be conserved.
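A minimal sketch of that momentum bookkeeping in Python, assuming (as the problem states) that the 1.54 m/s is the girl's speed relative to the plank:

```python
# Girl walking on a free plank on frictionless ice: total momentum stays zero.
#   v_girl - v_plank = v_rel          (walking speed relative to the plank)
#   m_girl*v_girl + m_plank*v_plank = 0
m_girl, m_plank, v_rel = 37.9, 98.0, 1.54   # kg, kg, m/s

v_plank = -m_girl * v_rel / (m_girl + m_plank)
v_girl = v_rel + v_plank            # = v_rel * m_plank / (m_girl + m_plank)
print(f"plank: {v_plank:+.3f} m/s, girl (relative to the ice): {v_girl:+.3f} m/s")
```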
7. Jul 16, 2006
### JassC
I meant using conservation of momentum or just "FORCE"...
8. Jul 17, 2006
### arunbg
Yes, use conservation of momentum.
Why did the doubt arise?
9. Nov 27, 2006
### br0wneyes786
Momentum
Hey, I need help. I have a web assignment due soon and I could really use some guidance.
A 10 metric ton train moves toward the south at 70 m/s. At what speed must it travel to have four times its original momentum? Answer in units of m/s.
10. Nov 27, 2006
### EthanB
What do you think? What have you tried?
You know the equation that describes its current momentum. If you want it to be four times greater, what do you do to it?
https://mathinsight.org/double_integral_change_variables_introduction | # Math Insight
### Introduction to changing variables in double integrals
Imagine that you had to compute the double integral \begin{align} \iint_{\dlr} g(x,y) dA \label{integralrect} \end{align} where $g(x,y)=x^2+y^2$ and $\dlr$ is the disk of radius 6 centered at the origin.
In terms of the standard rectangular (or Cartesian) coordinates $x$ and $y$, the disk is given by \begin{gather*} -6 \le x \le 6\\ -\sqrt{36-x^2} \le y \le \sqrt{36-x^2}. \end{gather*} We could start to calculate the integral in terms of $x$ and $y$ as \begin{align*} \iint_{\dlr} g(x,y) dA = \int_{-6}^6 \int_{-\sqrt{36-x^2}}^{\sqrt{36-x^2}} (x^2+y^2) \, dy \, dx\ = \text{a mess}. \end{align*}
It turns out that this integral would be a lot easier if we could change variables to polar coordinates. In polar coordinates, the disk is the region we'll call $\dlr^*$ defined by $0 \le r \le 6$ and $0 \le \theta \le 2\pi$. Hence the region of integration is simpler to describe using polar coordinates.
Moreover, the integrand $x^2+y^2$ is simple in polar coordinates because $x^2+y^2 = r^2$. Using polar coordinates, our lives will be a lot easier because it seems that all we need to do is integrate $r^2$ over the region $\dlr^*$ defined by $0 \le r \le 6$ and $0 \le \theta \le 2\pi$.
Unfortunately, it's not quite that easy. We need to account for one more consequence of changing variables, which is how changing variables changes area. You may recall that $dA$ stands for the area of a little bit of the region $\dlr$. In rectangular coordinates, we replaced $dA$ by $dx\,dy$ (or $dy\,dx$). We need to determine what $dA$ becomes when we change variables. As you will see, in polar coordinates, $dA$ does not becomes $dr \, d\theta$.
The relationship between rectangular $(x,y)$ and polar $(r,\theta)$ coordinates is given by $x=r\cos \theta$, $y=r\sin\theta$. To see how area gets changed, let's write the change of variables as the function \begin{align} (x,y)=\cvarf(r,\theta) = (r \cos\theta, r \sin \theta). \label{polartrans} \end{align} The function $\cvarf(r,\theta)$ gives rectangular coordinates in terms of polar coordinates. The transformation $\cvarf$ gives the perspective of polar coordinates as a mapping from the polar plane to the Cartesian plane. The below applet shows how $\cvarf$ maps a rectangle $\dlr^*$ in the polar plane into the region $\dlr$ in the Cartesian plane. To make $\dlr^*$ and $\dlr$ correspond to the rectangle and disk of our example, you can expand the rectangle $\dlr^*$ to its maximum size.
Polar coordinates map of rectangle. The transformation from polar coordinates to Cartesian coordinates $(x,y)=\cvarf(r,\theta) = (r \cos\theta, r \sin \theta)$ can be viewed as a map from the polar coordinate $(r,\theta)$ plane (left panel) to the Cartesian coordinate $(x,y)$ plane (right panel). This transformation maps a rectangle $\dlr^*$ in the $(r,\theta)$ plane into a region $\dlr$ in the $(x,y)$ plane that is the part of an angular sector inside an annulus. You can change the regions $\dlr^*$ and $\dlr$ by dragging the purple or cyan points in either panel. To further visualize the action of the map $(x,y)=\cvarf(r,\theta)$, you can drag the labeled red and blue points anywhere inside the large rectangle $0 \le r \le 6$, $0 \le \theta <2\pi$ and corresponding disk $x^2+y^2 \le 6^2$.
Do you understand why the disk looks like a rectangle in polar coordinates (i.e., why the region $\dlr^*$ on the left that maps onto the disk is a rectangle)? Remember, the disk is described by $0 \le r \le 6$ and $0 \le \theta \le 2\pi$, which is a rectangle when plotted in the $r\theta$-plane.
We can say that $\cvarf(r,\theta)$ parametrizes $\dlr$ for $(r,\theta)$ in $\dlr^*$. This uses the same language that we used when parametrizing a curve. We'll use it again when we talk about parametrizing surfaces.
To look at how $\cvarf(r,\theta)$ changes area, we can chop up the region $\dlr^*$ into small rectangles of width $\Delta r$ and height $\Delta \theta$. The function $\cvarf(r,\theta)$ maps each of these small rectangles into a “curvy rectangle” in $\dlr$, as shown below. (As above, you need to expand the rectangle $\dlr^*$ in the left panel to its maximum size to make the $\dlr^*$ and $\dlr$ of the applet correspond to the rectangle and disk of our example.)
Area transformation of polar coordinates map. The transformation from polar coordinates to Cartesian coordinates $(x,y)=\cvarf(r,\theta) = (r \cos\theta, r \sin \theta)$ maps a rectangle $\dlr^*$ in the $(r,\theta)$ plane (left panel) to the region $\dlr$ in the $(x,y)$ plane (right panel). It also maps each small rectangle in $\dlr^*$ to a “curvy rectangle” in $\dlr$. Although the small rectangles in $\dlr^*$ are the same size, the corresponding “curvy rectangles” vary greatly in size. Depending on the coordinates $(r,\theta)$, the map $\cvarf(r,\theta)$ shrinks or expands the area by different amounts. You can visualize the mapping of the small rectangles by dragging the yellow point in either panel; the corresponding small rectangle in $\dlr^*$ and its image in $\dlr$ are highlighted. You can also change the regions $\dlr^*$ and $\dlr$ by dragging the purple and cyan points in either panel.
The area of each small rectangle in $\dlr^*$ is $\Delta r \Delta \theta$. But we don't care about area in $\dlr^*$. The $dA$ in the integral of equation \eqref{integralrect} is based on area in $\dlr$ not area in $\dlr^*$. So we need to estimate the area of each “curvy rectangle” in $\dlr$, which we'll denote by $\Delta A$.
We can calculate the area of the “curvy rectangle” by approximating it as a parallelogram with sides $\pdiff{\cvarf}{r} \Delta r$ and $\pdiff{\cvarf}{\theta}\Delta\theta$. The area of a parallelogram is the magnitude of the cross product $\left\| \pdiff{\cvarf}{r} \times \pdiff{\cvarf}{\theta}\right\| \Delta r\Delta\theta$ of the two vectors spanning the parallelogram. Plus, since we are in two-dimensions, we write the area more simply by a $2\times 2$ determinant. After some simplification, the area of the “curvy rectangle” reduces to the expression \begin{align*} \Delta A \approx | \det \jacm{\cvarf}(r,\theta)|\Delta r\Delta\theta, \end{align*} where $\jacm{\cvarf}$ is derivative matrix of the map $\cvarf$. Just as the derivative matrix $\jacm{\cvarf}$ is sometimes called the “Jacobian matrix,” its determinant $\det \jacm{\cvarf}$ is sometimes called the “Jacobian determinant.”
Note that the $D$ in $\jacm{\cvarf}(r,\theta)$ is not the same $D$ as the region $\dlr$ of integration. Because we need to take the absolute value of the determinant, we typically use the notation “det” to denote determinant to avoid confusion (see discussion at end of the page on matrices and determinants).
For $\cvarf$ given by equation \eqref{polartrans}, you can calculate that $| \det \jacm{\cvarf}(r,\theta)| = r$ so that the area of each “curvy rectangle” is $r \Delta r \Delta \theta$. This agrees with the above picture, since the “curvy rectangles” were larger when $r$ was larger.
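You can reproduce this small computation symbolically; here is a minimal sketch using sympy (just one convenient choice of tool):

```python
import sympy as sp

# Derivative matrix and Jacobian determinant of the polar-coordinates map
# T(r, theta) = (r*cos(theta), r*sin(theta)).
r, theta = sp.symbols('r theta', positive=True)
T = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])

J = T.jacobian([r, theta])      # 2x2 derivative matrix
print(J)                        # [[cos(theta), -r*sin(theta)], [sin(theta), r*cos(theta)]]
print(sp.simplify(J.det()))     # r, the area expansion factor
```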
The important point is that $\cvarf$ stretches or shrinks $\dlr^*$ when it maps $\dlr^*$ onto $\dlr$. Consequently, when we convert from area in $\dlr^*$ to area in $\dlr$, we need to multiply by the area expansion factor $| \det \jacm{\cvarf}(r,\theta)|$. The area expansion factor captures how the “curvy rectangles” change size as you move the point around in the above applet.
We now put everything back together. We started off trying to integrate the function $g(x,y)=x^2+y^2$ over the region $\dlr$. If we use $(x,y) = \cvarf(r,\theta)$ to change variables, we can instead integrate the function $g(\cvarf(r,\theta))=r^2$ over the region $\dlr^*$. However, we need to include the area expansion factor $| \det \jacm{\cvarf}(r,\theta)| = r$ in $dA$ to account for the stretching by $\cvarf$. We can replace $dA$ with $r\,dr\,d\theta$. We end up with the formula \begin{align*} \iint_\dlr g(x,y) dA = \iint_{\dlr^*} g(\cvarf(r,\theta))| \det \jacm{\cvarf}(r,\theta)| dr\, d\theta, \end{align*} which for our example is \begin{align*} \iint_\dlr (x^2+y^2) dA = \int_0^{2\pi}\int_0^6 r^2 r\,dr\,d\theta = \int_0^{2\pi}\int_0^6 r^3 \,dr\,d\theta. \end{align*} You can compute that this integral is $6^4\pi/2$ much easier using this form than you could using the original integral of equation \eqref{integralrect}.
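As a quick numerical sanity check (using scipy's dblquad here; any quadrature routine would do), the Cartesian and polar forms of the integral both give $6^4\pi/2 = 648\pi \approx 2035.75$:

```python
import numpy as np
from scipy.integrate import dblquad

# Cartesian form: integrate x^2 + y^2 over the disk of radius 6.
cartesian, _ = dblquad(lambda y, x: x**2 + y**2, -6, 6,
                       lambda x: -np.sqrt(36 - x**2),
                       lambda x: np.sqrt(36 - x**2))

# Polar form: the integrand r^2 times the area factor r, over the (r, theta) rectangle.
polar, _ = dblquad(lambda r, theta: r**3, 0, 2 * np.pi,
                   lambda theta: 0, lambda theta: 6)

print(cartesian, polar, 6**4 * np.pi / 2)   # all three are ~ 2035.7517...
```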
For a general change of variables, we tend to use the variables $\cvarfv$ and $\cvarsv$ (rather than $r$ and $\theta$). In this case, if we change variables by $(x,y) = \cvarf(\cvarfv,\cvarsv)$, our integral is \begin{align} \iint_\dlr g(x,y) dA = \iint_{\dlr^{\textstyle *}} g(\cvarf(\cvarfv,\cvarsv)) | \det \jacm{\cvarf}(\cvarfv,\cvarsv)| d\cvarfv\,d\cvarsv, \label{cvarformula}\tag{3} \end{align} where $\dlr$ is a region in the $xy$-plane that is parametrized by $(x,y)= \cvarf(\cvarfv,\cvarsv)$ for $(\cvarfv,\cvarsv)$ in the region $\dlr^*$.
Sometimes, we may write the determinant of the derivative matrix as $$\det \jacm{\cvarf}(\cvarfv,\cvarsv)=\pdiff{(x,y)}{(\cvarfv,\cvarsv)}$$ so that the area expansion factor is $$| \det \jacm{\cvarf}(\cvarfv,\cvarsv)|= \left|\pdiff{(x,y)}{(\cvarfv,\cvarsv)}\right|.$$ It's just different notation for the same object, but represents that we are taking the derivative of the $(x,y)$ variables with respect to the $(\cvarfv,\cvarsv)$ variables. With this notation, the change of variable formula looks like \begin{align*} \iint_\dlr g(x,y) dA = \iint_{\dlr^{\textstyle *}} g(\cvarf(\cvarfv,\cvarsv)) \left|\pdiff{(x,y)}{(\cvarfv,\cvarsv)}\right| d\cvarfv\,d\cvarsv. \end{align*}
We've obviously skipped quite a few details on this introductory page in an effort to just give the big picture. In particular, we've glossed over how we obtained the expression for the area expansion factor. You can read how we obtain that formula.
You can study some examples of changing variables, including more details on the disk example. To gain more intuition on how changing variables transform regions, you can read an illustrated example of a particular change of variable function.
#### Stretching by maps
The area expansion factor for changing variables in double integrals is an example of accounting for the stretching of a map, in this case, the function $\cvarf$. We encounter such factors frequently in multivariable and vector calculus.
The simplest example of such an expansion factor is the $\|\dllp'(t)\|$ we obtain when calculating the arc length or a line integral over a curve parametrized by $\dllp: \R \to \R^2$ (confused?). We could call this a length expansion factor. Just like for the $\cvarf: \R^2 \to \R^2$ of changing variables, the expansion factor for parametrized curves involves the magnitude of some expression involving the derivative matrix of the map.
For the map $\cvarf: \R^3 \to \R^3$ used to change variables in triple integrals, the volume expansion factor $|\det \jacm{\cvarf}(\cvarfv,\cvarsv,\cvartv)|$ is essentially the same as for double integrals.
Lastly, imagine you took the blue disk $\dlr$ in the above applet and lifted it out of the plane so that it was a surface floating in three-dimensional space. Then, our mapping $\cvarf$ becomes the function $\dlsp: \R^2 \to \R^3$ that parametrizes a surface. We can repeat the calculation for the area expansion factor virtually without change to obtain the area expansion factor for surface area or surface integrals. The only difference that results from being in three dimensions is that you cannot change the cross product for the parallelogram area to a $2 \times 2$ determinant. Hence, the area expansion factor for parametrized surfaces is the cross-product $\left\| \pdiff{\dlsp}{\spfv} \times \pdiff{\dlsp}{\spsv} \right\|$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9781708717346191, "perplexity": 236.99877746726864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517245.1/warc/CC-MAIN-20220517095022-20220517125022-00307.warc.gz"} |
http://mathoverflow.net/questions/68888/is-a-compact-connected-orientable-3-manifold-with-mathbbzk-fundemental-g?sort=newest | # Is a compact, connected, orientable 3-manifold with $\mathbb{Z}^K$ fundemental group uniquely determined?
According to the Kneser-Milnor prime decomposition theorem for 3-manifolds, any compact, connected, orientable 3-manifold $M$ is diffeomorphic to $S^3 / \Gamma_1$ # $\cdots$ # $S^3/ \Gamma_n$ # $(S^2 \times S^1)_1$ # $\cdots$ # $(S^2 \times S^1)_r$ # $K( \pi_1,1)$ # $\cdots$ # $K( \pi_m,1)$, where # is the connected sum, and $\Gamma_i$ is a non-trivial finite subgroup of $SO(4)$ acting orthogonally on $S^3$ (so that the result is a spherical space form).
I suppose my question boils down to if $\pi_1(M)=\mathbb{Z} \times \cdots \times \mathbb{Z}$, k times, does that uniquely identify a manifold as a $(S^2 \times S^1)_1$ # $\cdots$ # $(S^2 \times S^1)_k$, or can the various quotient manifolds or aspherical factors create a more complicated topology without changing the fundamental group?
"non-finite trivial"?? Did you mean "non-trivial finite"? – André Henriques Jun 26 '11 at 23:43
When you say $\pi_1(M) = \mathbb{Z}^k$, do you mean the free product of $k$ copies of $\mathbb{Z}$? – Marco Golla Jun 26 '11 at 23:45
I'm guessing he doesn't, and is simply making a mistake: the fundamental group of $(S^2 \times S^1)_1$ # $\cdots$ # $(S^2 \times S^1)_k$ is a free group on $k$ generators (van Kampen's theorem). – André Henriques Jun 26 '11 at 23:48
@André Henriques, Thanks, I mean non-trivial finite. – Benjamin Horowitz Jun 27 '11 at 0:48
## 3 Answers
NO, since the three-torus $T^3$ does not have this form.
EDIT: if the OP really means a free product of $\mathbb{Z}$s, so the free group $F_k,$ then the answer is YES. It is a fact (see Hempel's book, chapter 7) that every splitting of the fundamental group of $M^3$ as a free product comes from a connected sum decomposition. On the other hand, a prime three-manifold is either a $K(\pi, 1)$ or $S^2 \times S^1.$ In the former case, its fundamental group cannot be $\mathbb{Z}$.
Everything you say here is correct, but of course its correctness relies on the Poincare conjecture! – Dave Futer Jun 27 '11 at 13:26
@Dave: among many other theorems... – Igor Rivin Jun 27 '11 at 14:00
... and for orientable 3-manifolds (otherwise you have $S^1\tilde{\times}S^2$). But of course, orientability is assumed in the question. – Ian Agol Jun 27 '11 at 15:25
Let me address a more general question:
To what extent is a closed, connected $M^3$ determined by its fundamental group?
Following the Geometrization Theorem, we have a complete answer. There are only two ways in which a closed $3$--manifold $M$ can fail to be determined by its fundamental group:
1. $M$ is a lens space, or a connected sum of something with a lens space. It is well-known that lens spaces are not determined up to homeomorphism by their fundamental groups. Note that in this case, you would see a ${Z}/p$ free factor in your group $G$, so it can't arise in the context of your question.
2. $M = N_1 \sharp N_2$, where each $N_i$ is orientable, and each is chiral (fails to have an orientation-reversing symmetry). In this case, reversing the orientation on one factor would produce $M' = N_1 \sharp \overline{N_2}$, which is not homeomorphic to $M$ but has the same fundamental group. This also cannot arise in your context, for exactly the reasons that Igor outlined, and because $S^1 \times S^2$ does have an orientation-reversing symmetry.
Finally, let me point out that although Geometrization may seem like an overly big hammer for this question, in fact one needs the Poincare conjecture. For, if there existed a fake $3$--sphere, one could take the connect sum with that manifold without altering the group.
If you did in fact mean the free abelian group, you can still do it (up to punctures). All references in brackets are to Hempel.
If $\pi_1(M)\cong \mathbb{Z}$, then $M$ is one of $S^2\times S^1$, $S^2\tilde{\times} S^1$, the solid torus, or the solid Klein bottle. [5.3]
If $\pi_1(M)\cong \mathbb{Z}^2$, then $M$ is an I-bundle over the torus. [10.6]
If $\pi_1(M)\cong \mathbb{Z}^3$, then $M$ is the 3-torus. [11.11]
The case $k>3$ doesn't happen. [9.13]
http://mathhelpforum.com/advanced-algebra/172631-determining-set-vector-space.html | # Math Help - Determining a set is a vector space
1. ## Determining a set is a vector space
Hey everyone,
I'm a little confused. I am confused on how to determine if a set is a vector space. I've look at some examples and I am still a bit confused. How do you go about determining if a set is vector space?
Here is a problem I am confused about:
The set of all pairs of real numbers of the form (x,0) with the standard operations on R^2
Any help is appreciated.
Thanks
2. The set, together with the scalar multiplication and vector addition operations, has to satisfy the axioms of a vector space. Now, in your case, you know already (I assume?) that R^2 is a vector space with "the standard operations". When, therefore, you're trying to determine if a set is a vector subspace, your job is a little easier. All you have to do is determine three things:
1. The set is nonempty.
2. The set is closed under scalar multiplication.
3. The set is closed under vector addition.
If those three conditions are satisfied, you've got yourself a subspace. Do those three conditions hold for your set?
3. What is meant when it is said to be "closed" under addition and multiplication?
Thanks
4. Ok, let's get into the notation. Let $V$ be a vector space over the scalar field $F.$ (You could think of $\mathbb{R}^{2}$ as the vector space, over the field of real numbers $\mathbb{R}$.) The word "closed", in this context, means the following:
Scalar multiplication: for any vector $\mathbf{v}\in V$ and scalar $f\in F,$ it follows that $f\mathbf{v}\in V.$ That is, multiplying a vector in $V$ by a scalar in $F$ keeps you inside the vector space $V.$
Vector addition: for any two vectors $\mathbf{u},\mathbf{v}\in V,$ it follows that $\mathbf{u}+\mathbf{v}\in V.$ Again, the addition of two vectors in $V$ is still inside $V.$
That's what closure means. Does that make sense?
Here's an example. Let
$V=\{\mathbf{v}\in\mathbb{R}^{2}|v_{1}+v_{2}=0\}.$
Take the reals as the field. We check if it's closed under the two operations.
Let $r\in\mathbb{R},$ and let $\mathbf{v}\in V$ be given by $\mathbf{v}=\langle v_{1},v_{2}\rangle.$ By assumption, we have that $v_{1}+v_{2}=0.$ Then $r\mathbf{v}=\langle r v_{1},r v_{2}\rangle.$ Does the property still hold? We check to see if $rv_{1}+rv_{2}=0,$ or $r(v_{1}+v_{2})=0,$ which we can see is true, since $v_{1}+v_{2}=0.$ Thus, we can see that we're closed under scalar multiplication, since the scalar and vector I chose was completely arbitrary.
The vector addition case will be very similar. Finally, you note that the zero vector is, indeed, in our space, and hence the set is nonempty.
Make sense?
5. I think I understand what you are explaining. If you add two vectors or multiply by a scalar the new vector is still in the space?
6. Right. If that condition holds for all vectors and scalars, you've got yourself a vector subspace.
Now, I should point out that if you're trying to see if a given set is a vector space, and it's not a subset of a known vector space, then you've got to verify every single one of the axioms to which I linked in post # 2. Only then, a full vector space, will you have.
7. So with the problem I posted:
The set of all pairs of real numbers of the form (x,0) with the standard operations on R^2
So (x,0) considered "u"
8. I understand your example, but how do I see if (x,0) is a vector space of R^2 if there is no condition like in your example such as v1+v2=0.
Thanks
9. If there's no "condition", your job is actually easier.
1. Is the zero vector in your set? (Really just need nonempty, but the zero vector is often the easiest to check).
2. Closed under scalar multiplication?
10. So would this set not be a vector space because of the axiom (c+k)u = cu + ku
(cx + kx, 0) is not equal to (cx + kx)?
11. No, no, no. Just check what I mentioned in post # 9. Is the zero vector in your set?
12. So 0 + (x,0) = (x,0)? I think I am making this harder than it really is.
Thanks
13. Is the vector $\mathbf{0}=\langle 0,0\rangle$ in your set? That is, does $\langle 0,0\rangle$ look like a vector of the form $\langle x,0\rangle,$ for some $x?$
14. If x is 0 then if would be (0,0)
15. Right. So that tells you that the zero vector is in the set, and hence the set is nonempty. Next question:
If you take an arbitrary scalar $r\in\mathbb{R},$ and multiply it times an arbitrary vector $\mathbf{x}=\langle x,0\rangle$ in your set, is the result still in your set?
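For anyone who likes to double-check this numerically before writing the proof, here is a small sketch (assuming Python with NumPy; the helper name `in_W` and the tolerance are my own choices) that spot-checks the nonempty and closure conditions for the set W = {(x, 0)}. A random test is of course not a proof, just a quick sanity check:

```python
import numpy as np

rng = np.random.default_rng(0)

def in_W(v, tol=1e-12):
    # membership test for W = {(x, 0)}: the second component must vanish
    return abs(v[1]) < tol

assert in_W(np.array([0.0, 0.0]))            # zero vector is in W (take x = 0)

for _ in range(1000):
    u = np.array([rng.normal(), 0.0])        # arbitrary vectors of the form (x, 0)
    v = np.array([rng.normal(), 0.0])
    r = rng.normal()                         # arbitrary real scalar
    assert in_W(u + v) and in_W(r * u)       # closure under addition and scaling

print("all closure spot-checks passed")
```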
https://www.physicsforums.com/threads/cylinder-rolling-down-an-inclined-plane.426962/ | # Cylinder rolling down an inclined plane
1. Sep 6, 2010
### PhMichael
1. The problem statement, all variables and given/known data
So I have this standard problem of a cylinder rolling down an inclined plane, however, this time the plane itself is free to slide on the ground. I need to find the acceleration of that cylinder relative to the plane.
2. The attempt at a solution
V / A - the velocity / acceleration of the plane relative to the ground.
v / a - the velocity / acceleration of the cylinder relative to the plane.
The velocity of the cylinder relative to the ground is:
$$\vec{u}=\vec{v}+\vec{V}=(vcos \beta -V) \hat{x} - (vsin \beta) \hat{y}$$
Momentum is conserved conserved in the $$\hat{x}$$ direction so that:
$$0=-MV+m(v cos \beta -V) \to V=\frac{m}{M+m} v cos \beta$$
so the acceleration of the plane relative to the ground is:
$$A=\frac{dV}{dt}=\frac{m}{m+M}a cos \beta$$
$$\vec{\tau}_{P}=mR(g sin \beta + A cos \beta) \hat{z}$$
$$I_{P}\vec{\alpha}=(0.5mR^{2}+mR^{2})\vec{\alpha}=1.5mR^{2} \frac{a}{R}\hat{z}$$
Equating the last two equations while using the equation of A yields:
$$a=\frac{g\sin\beta}{\frac{3}{2}-\left ( \frac{m}{m+M} \right )\cos^{2}\beta }$$
This answer is correct, however, I'm not sure whether what I did is kosher; to be more specific, is linear momentum conserved along the $$\hat{x}$$ direction? Are fictitious forces, like the one we have here on the cylinder as a result of the plane's acceleration, not treated as real forces, so that even though they exist, we may still use the conservation of momentum principle?
2. Sep 7, 2010
### hikaru1221
It depends on the reference frame and the system you consider. Whenever there is no external force, the linear momentum is conserved. Remember how to derive the law from F=dp/dt?
In the frame of the ground, if you consider the system of the wedge & the cylinder, the momentum of the system is conserved. With the same system, but in the frame of the wedge, there is fictitious force, i.e. F = dp/dt is not zero, the momentum is not conserved.
3. Sep 7, 2010
### Mindscrape
Looks good to me! Now if you truly wanted to make this problem difficult you could add in friction, inertia effects, and some good old lagrange multipliers. :p
Also, if you know of Lagrangian dynamics then it could be fun to solve the problem in an alternate way, which I might do just for fun.
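Following up on the Lagrangian remark, here is a minimal SymPy sketch (the generalized coordinates X for the wedge and s for the cylinder measured along the incline, and all symbol names, are my own choices) that reproduces the relative acceleration derived above:

```python
import sympy as sp

t = sp.symbols('t')
m, M, R, g, beta = sp.symbols('m M R g beta', positive=True)
X = sp.Function('X')(t)   # wedge position along the ground
s = sp.Function('s')(t)   # cylinder position along the incline, relative to the wedge

Xd, sd = X.diff(t), s.diff(t)

# kinetic energy: wedge + cylinder translation + rolling, with I = m R^2 / 2 and spin rate sd / R
T = (sp.Rational(1, 2) * M * Xd**2
     + sp.Rational(1, 2) * m * ((Xd + sd * sp.cos(beta))**2 + (sd * sp.sin(beta))**2)
     + sp.Rational(1, 2) * (m * R**2 / 2) * (sd / R)**2)
V = -m * g * s * sp.sin(beta)          # potential energy; s increases downhill
L = T - V

# Euler-Lagrange equations for X and s, solved for the two accelerations
eqs = [sp.diff(L.diff(q.diff(t)), t) - L.diff(q) for q in (X, s)]
sol = sp.solve(eqs, [X.diff(t, 2), s.diff(t, 2)], dict=True)[0]
print(sp.simplify(sol[s.diff(t, 2)]))
# equivalent to g*sin(beta) / (3/2 - m*cos(beta)**2 / (m + M))
```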
https://www.maxmarks.in/questions/8402467980/the-following-state-of-strain-exists-at-a-point-p | # The following state of strain exists at a point P
Determine the principal strains and the directions of the maximum and minimum principal strains.
http://www.ams.org/joursearch/servlet/PubSearch?f1=msc&onejrnl=tran&pubname=one&v1=53A04&startRec=1 | # American Mathematical Society
AMS eContent Search Results
Matches for: msc=(53A04) AND publication=(tran) Sort order: Date Format: Standard display
[1] Jochen Denzler. Existence and regularity for a curvature dependent variational problem. Trans. Amer. Math. Soc. 367 (2015) 3829-3845. Abstract, references, and article information View Article: PDF [2] Mohammad N. Ivaki. Centro--affine curvature flows on centrally symmetric convex curves. Trans. Amer. Math. Soc. 366 (2014) 5671-5692. Abstract, references, and article information View Article: PDF [3] David Groisser. Certain optimal correspondences between plane curves, I: Manifolds of shapes and bimorphisms. Trans. Amer. Math. Soc. 361 (2009) 2959-3000. MR 2485414. Abstract, references, and article information View Article: PDF This article is available free of charge [4] David Groisser. Certain optimal correspondences between plane curves, II: Existence, local uniqueness, regularity, and other properties. Trans. Amer. Math. Soc. 361 (2009) 3001-3030. MR 2485415. Abstract, references, and article information View Article: PDF This article is available free of charge [5] John Pardon. On the unfolding of simple closed curves. Trans. Amer. Math. Soc. 361 (2009) 1749-1764. MR 2465815. Abstract, references, and article information View Article: PDF This article is available free of charge [6] James J. Hebda and Chichen M. Tsau. Framings of knots satisfying differential relations. Trans. Amer. Math. Soc. 356 (2004) 267-281. MR 2020032. Abstract, references, and article information View Article: PDF This article is available free of charge [7] Ioannis A. Polyrakis. Minimal lattice-subspaces. Trans. Amer. Math. Soc. 351 (1999) 4183-4203. MR 1621706. Abstract, references, and article information View Article: PDF This article is available free of charge [8] Ioannis A. Polyrakis. Finite-dimensional lattice-subspaces of $C(\Omega)$ and curves of $\mathbb{R}^n$ . Trans. Amer. Math. Soc. 348 (1996) 2793-2810. MR 1355300. Abstract, references, and article information View Article: PDF This article is available free of charge [9] Anders Linnér. Some properties of the curve straightening flow in the plane . Trans. Amer. Math. Soc. 314 (1989) 605-618. MR 989580. Abstract, references, and article information View Article: PDF This article is available free of charge [10] J. B. Wilker. Space curves that point almost everywhere . Trans. Amer. Math. Soc. 250 (1979) 263-274. MR 530055. Abstract, references, and article information View Article: PDF This article is available free of charge [11] E. P. Lane. The moving trihedron . Trans. Amer. Math. Soc. 36 (1934) 696-710. MR 1501761. Abstract, references, and article information View Article: PDF This article is available free of charge [12] W. C. Graustein. Parallelism and equidistance in classical differential geometry . Trans. Amer. Math. Soc. 34 (1932) 557-593. MR 1501651. Abstract, references, and article information View Article: PDF This article is available free of charge [13] E. H. Cutler. On the curvatures of a curve in Riemann space . Trans. Amer. Math. Soc. 33 (1931) 832-838. MR 1501619. Abstract, references, and article information View Article: PDF This article is available free of charge [14] Arthur Ranum. Errata: The singular points of analytic space-curves'' [Trans.\ Amer.\ Math.\ Soc. {\bf 31} (1929), no. 1, 145--163; 1501473] . Trans. Amer. Math. Soc. 31 (1929) 931. MR 1500506. Abstract, references, and article information View Article: PDF This article is available free of charge [15] Arthur Ranum. The singular points of analytic space-curves . Trans. Amer. Math. Soc. 31 (1929) 145-163. MR 1501473. 
Abstract, references, and article information View Article: PDF This article is available free of charge [16] C. E. Weatherburn. On curvilinear congruences . Trans. Amer. Math. Soc. 31 (1929) 117-132. MR 1501471. Abstract, references, and article information View Article: PDF This article is available free of charge [17] R. M. Mathews. Cubic curves and desmic surfaces. II . Trans. Amer. Math. Soc. 30 (1928) 19-23. MR 1501418. Abstract, references, and article information View Article: PDF This article is available free of charge [18] Philip Franklin. Osculating curves and surfaces . Trans. Amer. Math. Soc. 28 (1926) 400-416. MR 1501353. Abstract, references, and article information View Article: PDF This article is available free of charge [19] Jesse Douglas. Normal congruences and quadruply infinite systems of curves in space . Trans. Amer. Math. Soc. 26 (1924) 68-100. MR 1501265. Abstract, references, and article information View Article: PDF This article is available free of charge [20] O. D. Kellogg. Some properties of spherical curves, with applications to the gyroscope . Trans. Amer. Math. Soc. 25 (1923) 501-524. MR 1501257. Abstract, references, and article information View Article: PDF This article is available free of charge [21] Mary F. Curtis. Curves invariant under point-transformations of special type . Trans. Amer. Math. Soc. 23 (1922) 151-172. MR 1501195. Abstract, references, and article information View Article: PDF This article is available free of charge [22] Jesse Douglas. On certain two-point properties of general families of curves . Trans. Amer. Math. Soc. 22 (1921) 289-310. MR 1501175. Abstract, references, and article information View Article: PDF This article is available free of charge [23] James Byrnie Shaw. On triply orthogonal congruences . Trans. Amer. Math. Soc. 21 (1920) 391-408. MR 1501152. Abstract, references, and article information View Article: PDF This article is available free of charge [24] G. M. Green. Nets of space curves . Trans. Amer. Math. Soc. 21 (1920) 207-236. MR 1501141. Abstract, references, and article information View Article: PDF This article is available free of charge [25] E. J. Wilczynski. A set of properties characteristic of a class of congruences connected with the theory of functions . Trans. Amer. Math. Soc. 21 (1920) 409-445. MR 1501153. Abstract, references, and article information View Article: PDF This article is available free of charge [26] Luther Pfahler Eisenhart. Transformations of applicable conjugate nets of curves on surfaces . Trans. Amer. Math. Soc. 19 (1918) 167-185. MR 1501096. Abstract, references, and article information View Article: PDF This article is available free of charge [27] Luther Pfahler Eisenhart. Transformations $T$ of conjugate systems of curves on a surface . Trans. Amer. Math. Soc. 18 (1917) 97-124. MR 1501064. Abstract, references, and article information View Article: PDF This article is available free of charge [28] Percey F. Smith. A theorem for space analogous to Ces\`aro's theorem for plane isogonal systems . Trans. Amer. Math. Soc. 18 (1917) 522-540. MR 1501083. Abstract, references, and article information View Article: PDF This article is available free of charge [29] Chas. T. Sullivan. Scroll directrix curves . Trans. Amer. Math. Soc. 16 (1915) 199-214. MR 1501009. Abstract, references, and article information View Article: PDF This article is available free of charge [30] David F. Barrow. Oriented circles in space . Trans. Amer. Math. Soc. 16 (1915) 235-258. MR 1501011. 
Abstract, references, and article information View Article: PDF This article is available free of charge
http://mathhelpforum.com/pre-calculus/173995-questions-about-dot-product-vectors-print.html | # Questions about Dot Product (Vectors)
• March 9th 2011, 10:14 AM
StuckOnVectors
This is my first post here so I'm sorry if this isn't the right section (didn't see a vectors subforum). I am also not sure how to put arrows on top of letters, so assume that every letter in the following question is a vector. I will also use | | for magnitude.
1. |a| = 3 , |b| = 2 , the angle in between these vectors is 60
Determine the numerical value of (3a + 2b) dot (4a - 3b). - In order to do this I tried distributing.
I know that (3a dot 4a) = 12|a|² and (2b dot -3b) = -6|b|²
I however don't know what to do in the case of (3a dot -3b) and (2b dot 4a)
The formula given is ( |a||b|cosθ = a dot b ) and the final answer is 81 as listed in the back of the book.
2. A regular hexagon has sides of 3cm, as shown below. Determine a dot b.
http://i180.photobucket.com/albums/x...onquestion.png
This is my drawing - I slid b over resulting in an angle of 120 between a and b.
I used the above formula to yield: |3||3|cos120 = -4.5
The answer in the back of the book is however 4.5 (positive, not negative). This has also occurred in other questions where I am getting negative answers where they should be positive - is there something I'm not doing right?
All help would be greatly appreciated.
• March 9th 2011, 10:23 AM
Plato
Quote:
Originally Posted by StuckOnVectors
1. |a| = 3 , |b| = 2 , the angle in between these vectors is 60
Determine the numerical value of (3a + 2b) dot (4a - 3b). - In order to do this I tried distributing.
I know that (3a dot 4a) = 12|a|² and (2b dot -3b) = -6|b|²
I however don't know what to do in the case of (3a dot -3b) and (2b dot 4a)
The formula given is ( |a||b|cosθ = a dot b ) and the final answer is 81 as listed in the back of the book.
$\frac{a\cdot b}{\|a\|\|b\|}=\cos(60^o)=\frac{1}{2}$
$\alpha a\cdot \beta b=(\alpha\beta)a\cdot b$
$a\cdot a=\|a\|^2$
• March 9th 2011, 10:32 AM
TheChaz
So, roughly speaking (!), we have $12a^2 - ab - 6b^2$
after distributing.
"a" = 3, "b" = 2, and "ab" = 3 by following Plato's computation.
• March 9th 2011, 10:57 AM
StuckOnVectors
I'm still just not understanding.. How did you come up with -ab = -3? I know that you have to use the formula, I just don't get how.
• March 9th 2011, 12:46 PM
earboth
1 Attachment(s)
Quote:
Originally Posted by StuckOnVectors
2. A regular hexagon has sides of 3cm, as shown below. Determine a dot b.
This is my drawing - I slid b over resulting in an angle of 120 between a and b.
I used the above formula to yield: |3||3|cos120 = -4.5
The answer in the back of the book is however 4.5 (positive, not negative). This has also occurred in other questions where I am getting negative answers where they should be positive - is there something I'm not doing right?
All help would be greatly appreciated.
The vectors including an angle had to be placed tail to tail and not - as you did - head to tail. Or in other words: The arrows representing vectors had to start at the vertex of the angle.
• March 9th 2011, 05:08 PM
StuckOnVectors
earboth thank you for the help, I see where I went wrong with the hexagon
thechaz the problem was that I didn't know how to get to $12a^2 - a \cdot b - 6b^2$, but i figured it out
anyone reading this: $3a \cdot -3b$ is the same thing as $-9a \cdot b$ - not knowing this was my problem
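A quick numerical sanity check of the original problem (a sketch assuming Python with NumPy; the concrete coordinates below are just one convenient choice realizing |a| = 3, |b| = 2 and a 60° angle between them):

```python
import numpy as np

a = np.array([3.0, 0.0])                                    # |a| = 3
b = 2.0 * np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])  # |b| = 2, 60 degrees from a

print(np.dot(3 * a + 2 * b, 4 * a - 3 * b))  # ~81.0, matching the book's answer
```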
http://math.stackexchange.com/questions/731274/let-abc-be-a-right-triangle-at-a-such-that-bc-2ab-find-angle-acb | # let ABC be a right triangle at A such that $BC=2AB$. Find $\angle ACB$
So let $\triangle ABC$ be a right triangle at vertex $A$ such that $BC=2AB$. Find the $\angle ACB$
How can I find that angle without using cosine, sine and other things?
Since I've already figure out how to find it using cos: here's my approach:
We denote $\angle ABC$ as $\alpha$ so $\cos\alpha=\frac{AB}{BC}=\frac{AB}{2AB}=\frac{1}{2}$
We do $\cos^{-1}$ to find $\angle ABC$ then we do $90^\circ-\angle ABC$ to find $\angle ACB$.
So I'm looking for an alternative way.
Thanks!
https://scholarship.rice.edu/handle/1911/10563/browse?value=Feldmann%2C+Anja&type=author | Now showing items 1-2 of 2
• #### On the impact of variability on the buffer dynamics in IP networks
(1999-09-20)
The main objective of this paper is to demonstrate in the context of a simple TCP/IP-based network that depending on the underlying assumptions about the inherent nature of the variability of network traffic, very different ...
• #### TCP/IP traffic dynamics and network performance: A lesson in workload modeling, flow control and trace-driven simulations
(2001-04-20)
The main objective of this paper is to demonstrate in the context of a simple TCP/IP-based network that depending on the underlying assumptions about the inherent nature of the dynamics of network traffic, very different ...
https://en.wikipedia.org/wiki/Rutherford_scattering | # Rutherford scattering
Rutherford scattering is the elastic scattering of charged particles by the Coulomb interaction. It is a physical phenomenon explained by Ernest Rutherford in 1911[1] that led to the development of the planetary Rutherford model of the atom and eventually the Bohr model. Rutherford scattering was first referred to as Coulomb scattering because it relies only upon the static electric (Coulomb) potential, and the minimum distance between particles is set entirely by this potential. The classical Rutherford scattering process of alpha particles against gold nuclei is an example of "elastic scattering" because neither the alpha particles nor the gold nuclei are internally excited. The Rutherford formula (see below) further neglects the recoil kinetic energy of the massive target nucleus.
The initial discovery was made by Hans Geiger and Ernest Marsden in 1909 when they performed the gold foil experiment in collaboration with Rutherford, in which they fired a beam of alpha particles (helium nuclei) at foils of gold leaf only a few atoms thick. At the time of the experiment, the atom was thought to be analogous to a plum pudding (as proposed by J. J. Thomson), with the negatively-charged electrons (the plums) studded throughout a positive spherical matrix (the pudding). If the plum-pudding model were correct, the positive "pudding", being more spread out than in the correct model of a concentrated nucleus, would not be able to exert such large coulombic forces, and the alpha particles should only be deflected by small angles as they pass through.
Figure 1. In a cloud chamber, a 5.3 MeV alpha particle track from a lead-210 pin source near point 1 undergoes Rutherford scattering near point 2, deflecting by an angle of about 30°. It scatters once again near point 3, and finally comes to rest in the gas. The target nucleus in the chamber gas could have been a nitrogen, oxygen, carbon, or hydrogen nucleus. It received enough kinetic energy in the elastic collision to cause a short visible recoiling track near point 2. (The scale is in centimeters.)
However, the intriguing results showed that around 1 in 8000 alpha particles were deflected by very large angles (over 90°), while the rest passed through with little deflection. From this, Rutherford concluded that the majority of the mass was concentrated in a minute, positively-charged region (the nucleus) surrounded by electrons. When a (positive) alpha particle approached sufficiently close to the nucleus, it was repelled strongly enough to rebound at high angles. The small size of the nucleus explained the small number of alpha particles that were repelled in this way. Rutherford showed, using the method outlined below, that the size of the nucleus was less than about 10−14 m (how much less than this size, Rutherford could not tell from this experiment alone; see more below on this problem of lowest possible size). As a visual example, Figure 1 shows the deflection of an alpha particle by a nucleus in the gas of a cloud chamber.
Rutherford scattering is now exploited by the materials science community in an analytical technique called Rutherford backscattering.
## Derivation
The differential cross section can be derived from the equations of motion for a particle interacting with a central potential. In general, the equations of motion describing two particles interacting under a central force can be decoupled into the center of mass and the motion of the particles relative to one another. For the case of light alpha particles scattering off heavy nuclei, as in the experiment performed by Rutherford, the reduced mass is essentially the mass of the alpha particle and the nucleus off of which it scatters is essentially stationary in the lab frame.
Substituting into the Binet equation, with the origin of coordinate system $(r,\theta)$ on the target (scatterer), yields the equation of trajectory as
$\frac{d^2u}{d\theta^2} + u = -\frac{Z_1 Z_2 e^2}{4\pi\epsilon_0 m v_0^2 b^2} = -\kappa,$
where u = 1/r, v0 is the speed at infinity, and b is the impact parameter.
The general solution of the above differential equation is
$u = u_0\cos\left(\theta - \theta_0\right) - \kappa,$
and the boundary condition is
$u \to 0 \quad\text{and}\quad r\sin\theta \to b \quad (\theta \to \pi).$
Solving the equation u → 0 and its derivative du/dθ → −1/b using those boundary conditions, we can obtain
$\theta_0 = \frac{\pi}{2} + \arctan b\kappa.$
Then the deflection angle Θ is
$\Theta = 2\theta_0 - \pi = 2\arctan b\kappa = 2\arctan\frac{Z_1 Z_2 e^2}{4\pi\epsilon_0 m v_0^2 b}.$
b can be solved to give
$b = \frac{Z_1 Z_2 e^2}{4\pi\epsilon_0 m v_0^2}\cot\frac{\Theta}{2}.$
To find the scattering cross section from this result consider its definition
$\frac{d\sigma}{d\Omega}(\Omega)\,d\Omega = \frac{\text{number of particles scattered into solid angle } d\Omega \text{ per unit time}}{\text{incident intensity}}$
Since the scattering angle is uniquely determined for a given E and b, the number of particles scattered into an angle between Θ and Θ + dΘ must be the same as the number of particles with associated impact parameters between b and b + db. For an incident intensity I, this implies the following equality
$2\pi I b\left|db\right| = I\,\frac{d\sigma}{d\Omega}\,d\Omega$
For a radially symmetric scattering potential, as in the case of the Coulomb potential, dΩ = 2π sin Θ dΘ, yielding the expression for the scattering cross section
$\frac{d\sigma}{d\Omega} = \frac{b}{\sin\Theta}\left|\frac{db}{d\Theta}\right|$
Plugging in the previously derived expression for the impact parameter b(Θ) we find the Rutherford differential scattering cross section
$\frac{d\sigma}{d\Omega} = \left(\frac{Z_1 Z_2 e^2}{8\pi\epsilon_0 m v_0^2}\right)^2 \csc^4\frac{\Theta}{2}.$
This same result can be expressed alternatively as
$\frac{d\sigma}{d\Omega} = \left(\frac{Z_1 Z_2 \alpha(\hbar c)}{4E_\mathrm{K}\sin^2\frac{\Theta}{2}}\right)^2,$
where α ≈ 1/137 is the dimensionless fine structure constant, EK is the non-relativistic kinetic energy of the particle in MeV, and ħc ≈ 197 MeV·fm.
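As a rough numerical illustration of the last form of the formula, here is a short sketch (the 5.3 MeV alpha energy and the 90° scattering angle are example inputs only, chosen to echo the cloud-chamber figure above):

```python
import numpy as np

alpha_fs = 1 / 137.036   # fine structure constant
hbar_c   = 197.327       # MeV fm
Z1, Z2   = 2, 79         # alpha particle on a gold nucleus
E_K      = 5.3           # MeV, non-relativistic kinetic energy
Theta    = np.radians(90.0)

dsigma_dOmega = (Z1 * Z2 * alpha_fs * hbar_c / (4 * E_K * np.sin(Theta / 2) ** 2)) ** 2
print(dsigma_dOmega)     # roughly 4.6e2 fm^2/sr, i.e. a few barns per steradian
```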
## Details of calculating maximal nuclear size
For head-on collisions between alpha particles and the nucleus (with zero impact parameter), all the kinetic energy of the alpha particle is turned into potential energy and the particle is at rest. The distance from the center of the alpha particle to the center of the nucleus (rmin) at this point is an upper limit for the nuclear radius, if it is evident from the experiment that the scattering process obeys the cross section formula given above.
Applying the inverse-square law between the charges on the alpha particle and nucleus, and assuming that (1) no external forces act on the system, so the total energy (K.E. + P.E.) is conserved, and (2) the alpha particle starts from a very large distance from the nucleus, one can write:
$\frac{1}{2}mv^2 = \frac{1}{4\pi\epsilon_0}\cdot\frac{q_1 q_2}{r_\text{min}}$
Rearranging:
$r_\text{min} = \frac{1}{4\pi\epsilon_0}\cdot\frac{2 q_1 q_2}{m v^2}$
For an alpha particle:
• m (mass) = 6.64424×10−27 kg = 3.7273×109 eV/c2
• q1 (for helium) = 2 × 1.6×10−19 C = 3.2×10−19 C
• q2 (for gold) = 79 × 1.6×10−19 C = 1.27×10−17 C
• v (initial velocity) = 2×107 m/s (for this example)
Substituting these in gives the value of about 2.7×10−14 m, or 27 fm. (The true radius is about 7.3 fm.) The true radius of the nucleus is not recovered in these experiments because the alphas do not have enough energy to penetrate to more than 27 fm of the nuclear center, as noted, when the actual radius of gold is 7.3 fm. Rutherford realized this, and also realized that actual impact of the alphas on gold causing any force-deviation from that of the 1/r coulomb potential would change the form of his scattering curve at high scattering angles (the smallest impact parameters) from a hyperbola to something else. This was not seen, indicating that the surface of the gold nucleus had not been "touched" so that Rutherford also knew the gold nucleus (or the sum of the gold and alpha radii) was smaller than 27 fm.
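A minimal numerical check of that estimate (a sketch with rounded constants; the 2×10⁷ m/s speed is the example value used above):

```python
k  = 8.988e9         # Coulomb constant 1/(4*pi*eps0), N m^2 / C^2
m  = 6.644e-27       # alpha particle mass, kg
q1 = 2  * 1.602e-19  # alpha particle charge, C
q2 = 79 * 1.602e-19  # gold nucleus charge, C
v  = 2e7             # initial speed, m/s

r_min = k * 2 * q1 * q2 / (m * v**2)
print(f"r_min = {r_min:.2e} m")   # about 2.7e-14 m, i.e. 27 fm
```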
## Extension to situations with relativistic particles and target recoil
The extension of low-energy Rutherford-type scattering to relativistic energies and particles that have intrinsic spin is beyond the scope of this article. For example, electron scattering from the proton is described as Mott scattering,[2] with a cross section that reduces to the Rutherford formula for non-relativistic electrons. If no internal energy excitation of the beam or target particle occurs, the process is called "elastic scattering", since energy and momentum have to be conserved in any case. If the collision causes one or the other of the constituents to become excited, or if new particles are created in the interaction, then the process is said to be "inelastic scattering".
## References
1. ^ Rutherford, E. (1911). "The Scattering of α and β rays by Matter and the Structure of the Atom". Philosophical Magazine. 6: 21.
http://math.stackexchange.com/questions/291393/determining-whether-a-subset-is-a-manifold-given-two-different-graphs-one-of-wh | # Determining whether a subset is a manifold given two different graphs, one of which is not everywhere differentiable
A smooth manifold in $\mathbb{R}^2$ is locally the graph of a $C^1$ function. Consider the graph of $f(x)=x^{1/3}$. Since $f$ is not differentiable at zero, we are in trouble. However, this subset is the graph of $x=y^3$, which IS differentiable at zero.
So is this a manifold? I think so, but I remain confused on the general situation. Is the point that "being a manifold" is a topological property, so it doesn't matter which variable, x or y, is used to locally parametrize the graph?
Any generally illuminating comments will be appreciated!
-
“Being a manifold” is indeed a topological property. However, “being a smooth manifold” is not. Anyhow, in this case it does matter which variable you use.
There are several equivalent definitions of smooth submanifolds of a Euclidean space. One of them requires that the set can be parametrized by some set of coordinates locally: In your example, using the $y$ coordinate works for this purpose. That the $x$ coordinate doesn't is then immaterial.
Another definition requires the submanifold to be locally the zero set of a smooth (possibly vector valued) function, where $0$ is not a critical value of this function. In your example, $x-y^3$ satisfies this requirement.
http://math.stackexchange.com/questions/222902/prove-that-a-problem-is-np-complete-with-a-reduction-from-3-sat | # Prove that a problem is NP-Complete with a reduction from 3-SAT
Here is an instance of a problem:
Instance: {U, S1, ..., Sn, k | U is a set of elements, the Si are different subsets of U, and k is a nonnegative integer}.
A YES instance is defined as follows: There exists C ⊆ U with |C| = k such that ∀i ≠ j, (Si − C) ≠ (Sj − C). That is, the sets Si − C remain different.
My question is to prove that this is NP-Complete. It is trivial to prove that this is in NP, but as yet I have not been able to find a reduction from an NP-Complete problem to prove NP-hardness.
My approach so far has been to reduce from 3-SAT as follows: consider an instance of 3-SAT, with clauses (a,b,c) and (a,~b,~c). Then, U = {a,~a,b,~b,c,~c}, S = { {a,~a}, {b,~b}, {c,~c}, {}, {a,b,c}, {a,~b,~c} }, and k = 3 (number of variable-complement pairs). If I have a NO instance of 3-sat, this correctly gives me a NO instance. However, a YES instance does not always return a YES instance, so I must modify my answer somehow.
I could not find any other online resource, so any help would be much appreciated!
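For concreteness, here is a small brute-force checker for the problem itself (a sketch in Python; the toy instance at the bottom is made up). Verifying one candidate C is clearly polynomial, which is the easy NP-membership part; the loop over all k-subsets is the exponential work that a hardness reduction says we should not expect to avoid in general:

```python
from itertools import combinations

def has_distinguishing_deletion(U, sets, k):
    """Is there C ⊆ U with |C| = k such that the sets S_i - C stay pairwise distinct?"""
    for C in combinations(U, k):
        C = frozenset(C)
        reduced = [frozenset(S) - C for S in sets]
        if len(set(reduced)) == len(reduced):   # all reduced sets still different
            return True
    return False

# toy instance: C = {'a'} works, so this is a YES instance
U = ['a', 'b', 'c']
S = [{'a'}, {'a', 'b'}, {'b', 'c'}]
print(has_distinguishing_deletion(U, S, 1))     # True
```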
Something has gone wrong with the notation in your third paragraph. – Gareth Rees Nov 21 '12 at 9:26
https://www.physicsforums.com/threads/functional-dependence-of-the-size-of-the-visible-universe.817224/ | # Functional dependence of the size of the visible universe
1. Jun 3, 2015
### A. Neumaier
I am looking for reliable information about the functional dependence of the diameter $d(t)$ of the visible universe on the time $t$ since the big bang singularity, based on the different hypotheses currently deemed competitive.
Last edited by a moderator: Jun 3, 2015
2. Jun 3, 2015
### marcus
Google "LambdaCDM bounce" for the article by Yi-Fu Cai and Edward Wilson-Ewing
http://arxiv.org/abs/1412.2914
A ΛCDM bounce scenario
Yi-Fu Cai, Edward Wilson-Ewing
(Submitted on 9 Dec 2014 (v1), last revised 28 Jan 2015 (this version, v2))
We study a contracting universe composed of cold dark matter and radiation, and with a positive cosmological constant. As is well known from standard cosmological perturbation theory, under the assumption of initial quantum vacuum fluctuations the Fourier modes of the comoving curvature perturbation that exit the (sound) Hubble radius in such a contracting universe at a time of matter-domination will be nearly scale-invariant. Furthermore, the modes that exit the (sound) Hubble radius when the effective equation of state is slightly negative due to the cosmological constant will have a slight red tilt, in agreement with observations. We assume that loop quantum cosmology captures the correct high-curvature dynamics of the space-time, and this ensures that the big-bang singularity is resolved and is replaced by a bounce. We calculate the evolution of the perturbations through the bounce and find that they remain nearly scale-invariant. We also show that the amplitude of the scalar perturbations in this cosmology depends on a combination of the sound speed of cold dark matter, the Hubble rate in the contracting branch at the time of equality of the energy densities of cold dark matter and radiation, and the curvature scale that the loop quantum cosmology bounce occurs at. Importantly, as this scenario predicts a positive running of the scalar index, observations can potentially differentiate between it and inflationary models. Finally, for a small sound speed of cold dark matter, this scenario predicts a small tensor-to-scalar ratio. 14 pages, 8 figures. Published in JCAP (2015).
http://inspirehep.net/record/1333367?ln=en
Last edited: Jun 3, 2015
3. Jun 3, 2015
### Chalnoth
One caveat: few cosmologists use the diameter of the universe. Sometimes the radius of the observable universe is used. There's no measure for the size of the universe beyond the observable patch.
That said, there is no closed-form expression. It has to be estimated numerically.
The equation of interest for determining this is the first Friedmann equation:
$H^2 = {8\pi G \over 3}\rho - {kc^2 \over a^2}$
Where:
$H = {1 \over a}{da \over dt}$
Here $a(t)$ is the scale factor, usually defined so that $a = 1$ at the current time. If the diameter of the universe at the current time is $d_0$, then the diameter of the universe at any other time will be $d_0 a(t)$. $\rho$ is the energy density of the universe divided by $c^2$. $k$ is the spatial curvature.
What is usually done is to make use of stress-energy conservation to show how the energy density of each component of the universe changes over time. For example, the energy density of matter scales as $1/a^3$, the energy density of radiation scales as $1/a^4$, and the energy density from dark energy is constant (in the simplest case). We then parameterize the equation in terms of the current density fraction, like so:
$H^2 = H_0^2\left({\Omega_r \over a^4} + {\Omega_m \over a^3} + {\Omega_k \over a^2} + \Omega_\Lambda \right)$
Here each $\Omega$ is a dimensionless number, and $H_0$ is the current expansion rate. Since we have defined $a = 1$ at the current time, when $a = 1$, $H^2 = H_0^2$. Thus for the above equation to be valid, $\Omega_r + \Omega_m + \Omega_k + \Omega_\Lambda = 1$. This is why the $\Omega$ terms are called the density fractions for each component. These four density fractions and the current Hubble expansion rate $H_0$ must be measured experimentally. Once you have values for the five parameters, it's possible to use a differential equation solver to get $a(t)$, which you can use to get the diameter as a function of time.
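As a concrete sketch of that last step (assuming Python with SciPy; the density fractions below are illustrative round numbers, not fitted values), one can get the age t(a) by integrating dt = da / (a H(a)), and invert numerically for a(t) if desired:

```python
import numpy as np
from scipy.integrate import quad

H0 = 67.9 / 978.0                       # km/s/Mpc converted to 1/Gyr (1 km/s/Mpc ~ 1/978 Gyr^-1)
Om_r, Om_m, Om_k = 8.6e-5, 0.307, 0.0   # illustrative density fractions
Om_L = 1.0 - Om_r - Om_m - Om_k         # enforce the sum rule for a flat model

def H(a):
    return H0 * np.sqrt(Om_r / a**4 + Om_m / a**3 + Om_k / a**2 + Om_L)

# age of the universe: t(a=1) = integral of da / (a H(a)) from ~0 to 1
age, _ = quad(lambda a: 1.0 / (a * H(a)), 1e-12, 1.0)   # tiny lower cutoff avoids a = 0
print(f"t(a=1) ~ {age:.1f} Gyr")        # ~13.8 Gyr for these inputs
```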
4. Jun 3, 2015
### marcus
Just in case newcomers to the topic could be reading thread, I want to mention the simple answer to this question based on the usual conventional model. This would probably be familiar to both A.N. and Chalnoth.
One can assume that the cosmological curvature constant Lambda is in fact simply that---a small constant curvature that is intrinsic to spacetime (as in Einstein's original treatment) not associated with any imagined "dark energy".
One can assume that the U is spatially flat.
Almost the entire history has radiation energy density negligible compared to matter. So we neglect the small contribution of the very early radiation-dominated era.
Then the radius of the observable region is straightforward to calculate as an integral over s = 1/a = the reciprocal of the normalized scale factor.
$$r_{obs} = R_\infty \int_1^\infty \frac{ds}{\sqrt{0.4433s^3 + 1}}$$
where $R_\infty$ is the long-term Hubble radius $c/H_\infty$,
and 0.4433 is $(H_\mathrm{now}/H_\infty)^2 - 1$
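For readers who want to check the number, a minimal numerical evaluation of that integral (a sketch assuming Python with SciPy; the two constants are just the values quoted in this post):

```python
import numpy as np
from scipy.integrate import quad

R_inf = 17.3     # long-term Hubble radius c/H_inf, in Gly
coef  = 0.4433   # (H_now / H_inf)^2 - 1

integral, _ = quad(lambda s: 1.0 / np.sqrt(coef * s**3 + 1.0), 1.0, np.inf)
print(R_inf * integral)   # ~47 Gly, in the same ballpark as the ~46 Gly particle horizon
                          # in the table below, which also folds in the early radiation era
```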
5. Jun 3, 2015
### marcus
What that integral gets you is essentially the 46 billion LY which is the particle horizon distance in the a = S = 1 row of this table:
$${\scriptsize\begin{array}{|c|c|c|c|c|c|}\hline R_{0} (Gly) & R_{\infty} (Gly) & S_{eq} & H_{0} & \Omega_\Lambda & \Omega_m\\ \hline 14.4&17.3&3400&67.9&0.693&0.307\\ \hline \end{array}}$$ $${\scriptsize\begin{array}{|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|} \hline a=1/S&S&z&T (Gy)&R (Gly)&D_{par}(Gly) \\ \hline 0.001&1090.000&1089.000&0.0004&0.0006&0.001\\ \hline 0.003&339.773&338.773&0.0025&0.0040&0.006\\ \hline 0.009&105.913&104.913&0.0153&0.0235&0.040\\ \hline 0.030&33.015&32.015&0.0902&0.1363&0.249\\ \hline 0.097&10.291&9.291&0.5223&0.7851&1.491\\ \hline 0.312&3.208&2.208&2.9777&4.3736&8.733\\ \hline 1.000&1.000&0.000&13.7872&14.3999&46.279\\ \hline 3.208&0.312&-0.688&32.8849&17.1849&184.083\\ \hline 7.580&0.132&-0.868&47.7251&17.2911&458.476\\ \hline 17.911&0.056&-0.944&62.5981&17.2993&1106.893\\ \hline 42.321&0.024&-0.976&77.4737&17.2998&2639.026\\ \hline 100.000&0.010&-0.990&92.3494&17.2999&6259.262\\ \hline \end{array}}$$
The particle horizon is conventionally taken to be the current radius of the observable region. It can be defined as follows: Imagine that at the beginning of expansion our matter sent out a particle at speed c which by some miracle was never absorbed or scattered but just continued traveling in a straight line. How far from us would that particle be now? According to standard Friedmann model cosmology the answer is 46 billion LY.
This is also the farthest away some other matter could be now, if we are receiving some kind of signal from it today. You just turn the picture around. So it is the radius of the observable--the farthest any matter could be today, if we could, in principle, be receiving signal from it.
As time goes on this observable region includes more and more distant matter.
6. Jun 4, 2015
### A. Neumaier
Thanks for the answers. Did anyone in the literature draw a plot of the estimated radius $r(t)$ of the observable universe extrapolated into the far past $t$, based on the information given? For the moment, I am mostly interested in such a plot (or table), though I'll ultimately want to understand the explanation and the numerical uncertainty the estimates entail.
7. Jun 4, 2015
### marcus
Based on standard cosmology, yes, for example in Charles Lineweaver's 2003 article "Inflation and the CMB"
==abstract==
http://arxiv.org/abs/astro-ph/0305179
Inflation and the Cosmic Microwave Background
Charles H. Lineweaver (School of Physics, University of New South Wales, Sydney, Australia)
(Submitted on 12 May 2003)
I present a pedagogical review of inflation and the cosmic microwave background. ...
34 pages, 13 figures.
==endquote==
Lineweaver's tutorial is something of a classic. It is also online at the Caltech "Level 5" website.
http://ned.ipac.caltech.edu/level5/March03/Lineweaver/Lineweaver_contents.html
It's interesting to notice that in the bottom diagram which uses a "conformal" timescale (fake time making speed of light constant in comoving distance) the particle horizon, in comoving distance, is the "mirror image" of the event horizon
Last edited: Jun 4, 2015
https://www.arxiv-vanity.com/papers/0806.0894/ | # Convergence of the chiral expansion in two-flavor lattice QCD
J. Noaki High Energy Accelerator Research Organization (KEK), Tsukuba 305-0801, Japan S. Aoki Graduate School of Pure and Applied Sciences, University of Tsukuba, Tsukuba 305-8571, Japan Riken BNL Research Center, Upton, NY 11973, USA T.W. Chiu Physics Department, Center for Theoretical Sciences, and National Center for Theoretical Sciences, National Taiwan University, Taipei 10617, Taiwan H. Fukaya The Niels Bohr Institute, The Niels Bohr International Academy, Blegdamsvej 17 DK-2100 Copenhagen Ø Denmark High Energy Accelerator Research Organization (KEK), Tsukuba 305-0801, Japan S. Hashimoto High Energy Accelerator Research Organization (KEK), Tsukuba 305-0801, Japan School of High Energy Accelerator Science, The Graduate University for Advanced Studies (Sokendai), Tsukuba 305-0801, Japan T.H. Hsieh Research Center for Applied Sciences, Academia Sinica, Taipei 115, Taiwan T. Kaneko High Energy Accelerator Research Organization (KEK), Tsukuba 305-0801, Japan School of High Energy Accelerator Science, The Graduate University for Advanced Studies (Sokendai), Tsukuba 305-0801, Japan H. Matsufuru High Energy Accelerator Research Organization (KEK), Tsukuba 305-0801, Japan T. Onogi Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502, Japan E. Shintani High Energy Accelerator Research Organization (KEK), Tsukuba 305-0801, Japan N. Yamada High Energy Accelerator Research Organization (KEK), Tsukuba 305-0801, Japan School of High Energy Accelerator Science, The Graduate University for Advanced Studies (Sokendai), Tsukuba 305-0801, Japan
###### Abstract
We test the convergence property of the chiral perturbation theory (ChPT) using a lattice QCD calculation of pion mass and decay constant with two dynamical quark flavors. The lattice calculation is performed using the overlap fermion formulation, which realizes exact chiral symmetry at finite lattice spacing. By comparing various expansion prescriptions, we find that the chiral expansion is well saturated at the next-to-leading order (NLO) for pions lighter than 450 MeV. Better convergence behavior is found in particular for a resummed expansion parameter $\xi$, with which the lattice data in the pion mass region 290–750 MeV can be fitted well with the next-to-next-to-leading order (NNLO) formulae. We obtain results in two-flavor QCD for the low energy constants $\bar{l}_3$ and $\bar{l}_4$ as well as the pion decay constant, the chiral condensate, and the average up and down quark mass.
###### pacs:
11.15.Ha, 12.38.Gc
preprint: KEK-CP-212,NTUTH-08-505A,UT-559,YITP-08-38
JLQCD and TWQCD Collaborations
Chiral perturbation theory (ChPT) is a powerful tool to analyze the dynamics of low energy pions Gasser and Leutwyler (1984). The expansion parameter in ChPT is the pion mass (or momentum) divided by the typical scale of the underlying theory such as Quantum Chromodynamics (QCD). Good convergence of the chiral expansion is observed for physical pions in the analysis including the next-to-next-to-leading order (NNLO) for the pion-pion scattering Colangelo et al. (2001), for instance. In the kaon mass region, on the other hand, the validity of ChPT is not obvious and in fact an important issue in many phenomenological applications.
Lattice QCD calculation can, in principle, be used for a detailed test of the convergence property of ChPT, as one can freely vary the quark mass, typically in the range $m_s/6$–$m_s$ with $m_s$ the physical strange quark mass. However, such a direct test has been difficult, since the lattice regularization of the quark action explicitly violates flavor and/or chiral symmetry in the conventional formulations, such as the Wilson and staggered fermions. One then has to introduce additional terms with unknown parameters in order to describe those violations in ChPT, hence the test requires precise continuum extrapolation.
The aim of this article is to provide a direct comparison between the ChPT predictions and lattice QCD calculations, using the overlap fermion formulation on the lattice, that preserves exact chiral symmetry at finite lattice spacing Neuberger (1998a, b). With the exact chiral symmetry, the use of the continuum ChPT is valid to describe the lattice data at a finite lattice spacing up to Lorentz violating corrections; the discretization error of $O(a^2)$ affects the value of the Low Energy Constants (LECs) and unknown Lorentz violating corrections. In order to make a cleaner analysis, we consider two-flavor QCD in this work, leaving the similar study in 2+1-flavor QCD, which introduces many more complications, for a future work. We calculate the pion mass and decay constant, for which the NNLO calculations are available in ChPT Bijnens et al. (1998). A preliminary report of this work is found in Noaki et al. (2007).
Lattice simulations are performed on a $16^3\times 32$ lattice at a lattice spacing $a$ = 0.1184(03)(21) fm determined with an input $r_0$ = 0.49 fm, the Sommer scale defined for the heavy quark potential. At six different sea quark masses $m_q$, covering the pion mass region 290–750 MeV, we generate 10,000 trajectories, among which the calculation of the pion correlator is carried out at every 20 trajectories. For further details of the simulation we refer to Aoki et al. (2008a).
In the calculation of the pion correlator, we computed in advance the lowest 50 conjugate pairs of eigenmodes of the overlap-Dirac operator on each gauge configuration and stored them on disk. Then, by using the eigenmodes to construct a preconditioner, the inversion of the overlap-Dirac operator can be done with only 15% of the CPU time of the full calculation. The low-modes are also used to improve the statistical accuracy by averaging their contribution to the correlators over 32 source points distributed in each time slice. The correlators are calculated with a point source and a smeared source; the pion mass and decay constant are obtained from a simultaneous fit of them.
The pion decay constant is defined through , where is the (continuum) iso-triplet axial-vector current. Instead of , we calculate the matrix element of pseudo-scalar density on the lattice using the PCAC relation with the bare quark mass. Since the combination is not renormalized, no renormalization factor is needed in the calculation of . This is possible only when the chiral symmetry is exact. The renormalization factor for the quark mass is calculated non-perturbatively through the RI/MOM scheme, with which the renormalization condition is applied at some off-shell momentum for propagators and vertex functions. Such a non-perturbative calculation suffers from the non-trivial quark mass dependence of the chiral condensate. By using the calculated low-modes explicitly, we are able to control the mass dependence to determine more reliably. In the chiral limit, we obtain , where the second error arises from a subtraction of power divergence from the chiral condensate. The details of this calculation will be given elsewhere.
Since our numerical simulation is done on a finite volume lattice with for the lightest sea quark, the finite volume effect could be significant. We make a correction for the finite volume effect using the estimate within ChPT calculated up to Colangelo et al. (2005). The size of the corrections for and is about 5% for the lightest pion mass and exponentially suppressed for heavier data points. In addition, there is a correction due to fixing the global topological charge in our simulation Aoki et al. (2008a); Fukaya et al. (2006). This leads to a finite volume effect of with the physical space-time volume. The correction is calculable within ChPT Brower et al. (2003); Aoki et al. (2007) depending on the value of topological susceptibility , which we calculated in Aoki et al. (2008b). At NLO, the correction for is similar in magnitude but opposite in sign to the ordinary finite volume effect at the lightest pion mass, and thus almost cancels. For the finite volume effect due to the fixed topology starts at NLO and therefore is a subdominant effect. Note that the LECs appear in the calculation of these correction factors. We use their phenomenological values at the scale of physical (charged) pion mass MeV: , , , determined at the NNLO Colangelo et al. (2001) and . The errors in these values are reflected in the following analysis assuming a gaussian distribution.
After applying the finite volume corrections, we first analyze the numerical data for and using the ChPT formulae at NLO,
$m_\pi^2/m_q \;=\; 2B\left(1+\tfrac{1}{2}\,x\ln x\right)+c_3\,x$,   (1)

$f_\pi \;=\; f\left(1-x\ln x\right)+c_4\,x$,   (2)
where f is the pion decay constant in the chiral limit and B is related to the chiral condensate. Here the expansion is made in terms of x. The parameters c_3 and c_4 are related to the LECs l̄_3 and l̄_4, respectively. At NLO, i.e. at O(x), these expressions are unchanged when one replaces the expansion parameter x by x̂ or ξ, where the pion mass and decay constant at a finite quark mass are used instead of their chiral-limit counterparts. Therefore, in a small enough pion mass region the three expansion parameters should describe the lattice data equally well.
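In practice, the NLO analysis amounts to a least-squares fit of m_π²/m_q and f_π with the shared parameters B, f, c_3 and c_4. The following is a minimal sketch of such a fit; the data arrays are synthetic placeholders (not the JLQCD data), the definition of x is only one common convention, and the correlations between observables used in the actual analysis are ignored here.

```python
import numpy as np
from scipy.optimize import least_squares

# synthetic stand-ins for the lattice points (all in lattice units, illustrative only)
m_q = np.array([0.015, 0.025, 0.035, 0.050, 0.070, 0.100])
y_m = np.array([4.60, 4.75, 4.86, 4.99, 5.13, 5.30])        # m_pi^2 / m_q
y_f = np.array([0.085, 0.089, 0.092, 0.096, 0.101, 0.107])  # f_pi

def model(p, m_q):
    B, f, c3, c4 = p
    x = 2.0 * B * m_q / (4.0 * np.pi * f) ** 2                    # assumed convention for x
    m2_over_mq = 2.0 * B * (1.0 + 0.5 * x * np.log(x)) + c3 * x   # Eq. (1)
    f_pi = f * (1.0 - x * np.log(x)) + c4 * x                     # Eq. (2)
    return m2_over_mq, f_pi

def residuals(p):
    m2_over_mq, f_pi = model(p, m_q)
    return np.concatenate([m2_over_mq - y_m, f_pi - y_f])

fit = least_squares(residuals, x0=[2.4, 0.05, 1.0, 1.0])
print(fit.x)   # B, f, c3, c4 from the uncorrelated NLO fit
```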
Three fit curves (x-fit, x̂-fit, and ξ-fit) for the three lightest pion mass points (m_π < 450 MeV) are shown in Figure 1 as a function of m_π². From the plot we observe that the different expansion parameters seem to describe the three lightest points equally well; the values of χ²/dof are 0.30, 0.33 and 0.66 for the x-, x̂- and ξ-fits. In each fit, the correlation between m_π²/m_q and f_π for a common sea quark mass is taken into account. Between the x- and x̂-fits, all of the resulting fit parameters are consistent. Among them, B and f, the LECs at leading order in ChPT, are also consistent with the ξ-fit. This indicates that the NLO formulae successfully describe the data.
The agreement among the different expansion prescriptions is lost (with the deviation greater than 3σ) when we extend the fit range to include the next lightest data point at about 520 MeV. We therefore conclude that for these quantities the NLO ChPT may be safely applied only below 450 MeV.
Another important observation from Figure 1 is that only the ξ-fit reasonably describes the data beyond the fitted region. With the x- and x̂-fits the curvature due to the chiral logarithm is too strong to accommodate the heavier data points. In fact, values of the LECs with the x- and x̂-fits are more sensitive to the fit range than with the ξ-fit. This is because the decay constant in the chiral limit, which is significantly smaller than f_π at our simulated quark masses, enters in the definition of the expansion parameter. Qualitatively, by replacing the leading-order quantities 2Bm_q and f by the measured m_π² and f_π, the higher loop effects in ChPT are effectively resummed and the convergence of the chiral expansion is improved.
We then extend the analysis to include the NNLO terms. Since we found that only the ξ-fit reasonably describes the data beyond 450 MeV, we perform the NNLO analysis using the ξ-expansion in the following. With other expansion parameters, the NNLO fits including heavier mass points are unstable. At the NNLO, the formulae in the ξ-expansion are Colangelo et al. (2001)
$m_\pi^2/m_q \;=\; 2B\left[1+\tfrac{1}{2}\,\xi\ln\xi+\tfrac{7}{8}\,(\xi\ln\xi)^2+\left(\tfrac{c_4}{f}-\tfrac{1}{3}\bigl(\tilde{l}^{\,\rm phys}+16\bigr)\right)\xi^2\ln\xi\right]+c_3\,\xi\left(1-\tfrac{9}{2}\,\xi\ln\xi\right)+\alpha\,\xi^2$,   (3)

$f_\pi \;=\; f\left[1-\xi\ln\xi+\tfrac{5}{4}\,(\xi\ln\xi)^2+\tfrac{1}{6}\bigl(\tilde{l}^{\,\rm phys}+\tfrac{53}{2}\bigr)\xi^2\ln\xi\right]+c_4\,\xi\left(1-5\,\xi\ln\xi\right)+\beta\,\xi^2$.   (4)
In the ξ² ln ξ terms, the LECs at NLO appear through l̃^phys, evaluated at the scale of the physical charged pion mass. We input the phenomenological estimate to the fit. Since the data are not precise enough to discriminate between ξ² ln ξ and ξ² in the given region of ξ (0.06-0.19), the fit parameters α and β partially absorb the uncertainty in l̃^phys. In fact, our final results for the LECs are insensitive to it.
In Figure 2, we show the NNLO fits using all the data points (solid curves). In these plots and are normalized by their values in the chiral limit. As expected from the good convergence of the -fit even at NLO, the NNLO formulae nicely describe the lattice data in the whole data region. We also draw a truncation at the NLO level (dashed curves) but using the same fit parameters. The difference between the NLO truncated curves and the NLO fit curves to the three lightest data points (Figure 1) is explained by the presence of the terms and in (3) and (4), respectively. Since the factors and are significantly larger than 1 in the data region, the resulting fit parameters and in the NNLO formulae are much lower than those of the NLO fits. This indicates that the determination of the NLO LECs is quite sensitive to whether the NNLO terms are included in the analysis, while the leading order LECs are stable.
From Figure 2 we can explicitly observe the convergence behavior of the chiral expansion. For instance, at the kaon mass region 500 MeV, the NLO term contributes at a () level to (), and the correction at NNLO is about (). At least, the expansion is converging (NNLO is smaller than NLO) for both of these quantities, but quantitatively the convergence behavior depends significantly on the quantity of interest. For the NNLO contribution is already substantial at the kaon mass region.
From the ξ-fit, we extract the LECs of ChPT, i.e. the decay constant in the chiral limit f, the chiral condensate Σ, and the NLO LECs l̄_3 and l̄_4. For each quantity, a comparison of the results between the NLO and the NNLO fits is shown in Figure 3. In each panel, the results with the 5 and 6 lightest data points are plotted for the NNLO fit. The correlated fits give χ²/dof = 1.94 and 1.40, respectively. For the NLO fits, we plot results obtained with 4, 5 and 6 points to show the stability of the fit. The χ²/dof is less than 1.94. The results for these physical quantities are consistent within either the NLO or the NNLO fit. On the other hand, as is seen most prominently for the NLO LECs, there is a significant disagreement between NLO and NNLO. This is due to the large NNLO coefficients as already discussed.
We quote our final results from the NNLO fit with all data points: MeV, , , and . From the value at the neutral pion mass MeV, we obtain the average up and down quark mass and the pion decay constant as MeV and MeV. In these results, the first error is statistical, where the error of the renormalization constant is included in quadrature for and . The second error is systematic due to the truncation of the higher order corrections, which is estimated by an order counting with a coefficient of as appeared at NNLO. For quantities carrying mass dimensions, the third error is from the ambiguity in the determination of . We estimate these errors from the difference of the results with our input fm and that with fm Aubin et al. (2004). The third errors for and reflect an ambiguity of choosing the renormalization scale of ChPT ( or ). There are other possible sources of systematic errors that are not reflected in the error budget. They include the discretization effect, remaining finite volume effect and the effect of missing strange quark in the sea.
In each panel of Figure 3, we also plot reference points (pluses and star) for comparison. Overall, with the NNLO fits, we find good agreement with those reference values. For f, our result is significantly lower than the two-loop result in two-flavor ChPT Colangelo and Durr (2004), but taking account of the scale uncertainty, which is not shown in the plot, the agreement is more reasonable. We also plot the lattice results from our independent simulation in the ε-regime Fukaya et al. (2007), and observe a good agreement with the NNLO fits. Comparison of the LECs l̄_3 and l̄_4 with the phenomenological values Colangelo et al. (2001) also favors the NNLO fits.
With the presently available computational power, the chiral extrapolation is still necessary in lattice QCD calculations. The consistency test of the lattice data with ChPT as described in this paper is crucial for a reliable chiral extrapolation of any physical quantity to be calculated on the lattice. With a two-flavor simulation preserving exact chiral symmetry, we demonstrate that the lattice data are well described with the use of the resummed expansion parameter ξ. Extension of the analysis to the case of partially quenched QCD Aoki et al. and to other physical quantities, such as the pion form factor Kaneko et al. (2007), is ongoing. Also, simulations with exact chiral symmetry including a dynamical strange quark are underway Hashimoto et al. (2007).
###### Acknowledgements.
Numerical simulations are performed on Hitachi SR11000 and IBM System Blue Gene Solution at High Energy Accelerator Research Organization (KEK) under the support of its Large Scale Simulation Program (Nos. 07-16). HF was supported by the Nishina Foundation. This work is supported in part by the Grant-in-Aid of the Ministry of Education (Nos. 17740171, 18034011, 18340075, 18740167, 18840045, 19540286, 19740121, 19740160, 20025010, 20039005, 20340047, 20740156) and the National Science Council of Taiwan (Nos. NSC96-2112-M-002-020-MY3, NSC96-2112-M-001-017-MY3).
https://en.wikipedia.org/wiki/Chow_test | # Chow test
The Chow test, proposed by econometrician Gregory Chow in 1960, is a test of whether the coefficients in two linear regressions on different data sets are equal. In econometrics, it is most commonly used in time series analysis to test for the presence of a structural break at a period which can be assumed to be known a priori (for instance, a major historical event such as a war). In program evaluation, the Chow test is often used to determine whether the independent variables have different impacts on different subgroups of the population.
At ${\displaystyle x=1.7}$ there is a structural break; regressions on the subintervals ${\displaystyle [0,1.7]}$ and ${\displaystyle [1.7,4]}$ deliver better modelling than the combined regression (dashed) over the whole interval.
Comparison of two different programs (red, green) existing in a common data set: separate regressions for the two programs deliver better modelling than a combined regression (black).
Suppose that we model our data as
${\displaystyle y_{t}=a+bx_{1t}+cx_{2t}+\varepsilon .\,}$
If we split our data into two groups, then we have
${\displaystyle y_{t}=a_{1}+b_{1}x_{1t}+c_{1}x_{2t}+\varepsilon .\,}$
and
${\displaystyle y_{t}=a_{2}+b_{2}x_{1t}+c_{2}x_{2t}+\varepsilon .\,}$
The null hypothesis of the Chow test asserts that ${\displaystyle a_{1}=a_{2}}$, ${\displaystyle b_{1}=b_{2}}$, and ${\displaystyle c_{1}=c_{2}}$, and there is the assumption that the model errors ${\displaystyle \varepsilon }$ are independent and identically distributed from a normal distribution with unknown variance.
Let ${\displaystyle S_{C}}$ be the sum of squared residuals from the combined data, ${\displaystyle S_{1}}$ be the sum of squared residuals from the first group, and ${\displaystyle S_{2}}$ be the sum of squared residuals from the second group. ${\displaystyle N_{1}}$ and ${\displaystyle N_{2}}$ are the number of observations in each group and ${\displaystyle k}$ is the total number of parameters (in this case, 3). Then the Chow test statistic is
${\displaystyle {\frac {(S_{C}-(S_{1}+S_{2}))/k}{(S_{1}+S_{2})/(N_{1}+N_{2}-2k)}}.}$
The test statistic follows the F distribution with ${\displaystyle k}$ and ${\displaystyle N_{1}+N_{2}-2k}$ degrees of freedom.
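A minimal implementation of this procedure is sketched below; it assumes i.i.d. normal errors as stated above, and the data are synthetic, with a structural break at x = 1.7 as in the first figure (all names are illustrative).

```python
import numpy as np
from scipy import stats

def chow_test(y1, X1, y2, X2):
    """Chow test for equal regression coefficients in two groups."""
    def ssr(y, X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid

    k = X1.shape[1]                    # total number of parameters (incl. intercept)
    n1, n2 = len(y1), len(y2)
    s_c = ssr(np.concatenate([y1, y2]), np.vstack([X1, X2]))   # combined (restricted) fit
    s1, s2 = ssr(y1, X1), ssr(y2, X2)                          # separate fits
    f_stat = ((s_c - (s1 + s2)) / k) / ((s1 + s2) / (n1 + n2 - 2 * k))
    p_value = stats.f.sf(f_stat, k, n1 + n2 - 2 * k)
    return f_stat, p_value

# synthetic data with a break in the regression at x = 1.7
rng = np.random.default_rng(0)
x = np.linspace(0, 4, 80)
y = np.where(x < 1.7, 1.0 + 2.0 * x, 4.0 + 0.5 * x) + rng.normal(0, 0.3, x.size)
X = np.column_stack([np.ones_like(x), x])
mask = x < 1.7
print(chow_test(y[mask], X[mask], y[~mask], X[~mask]))   # large F, tiny p-value
```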
Remarks
• The global sum of squares (SSE) is often called the Restricted Sum of Squares (RSSM), as we basically test a constrained model where we have 2K assumptions (with K the number of regressors).
• Some software like SAS will use a predictive Chow test when the size of a subsample is less than the number of regressors.
http://math.eretrandre.org/tetrationforum/showthread.php?tid=17&pid=2455
Matrix Operator Method Gottfried Ultimate Fellow Posts: 753 Threads: 114 Joined: Aug 2007 04/18/2008, 01:55 PM (This post was last modified: 04/18/2008, 01:57 PM by Gottfried.) I've got some mail, where the author considered the case from the view of generation-functions, which may be interesting for a needed proof of the method. Here some exchange: Code:It seems that e.g.f. of your triangle is: (exp(y*(exp(y*(exp(x)-1))-1))-1)/y^2 which gives 1, 1,1,1, 1,3,4,3,1, 1,7,13,19,13,6,1, ...Next mail: Code:And the e.g.f. connected to your coefficient a_3() is (exp(y*(exp(y*(exp(y*(exp(x)-1))-1))-1))-1)/y^3 which gives triangle 1, 1,1,1,1, 1,3,4,6,4,3,1, 1,7,13,26,31,31,25,13,6,1, ... And so on.Then I did not understand, how the generation-function was related to the bivariate array: Code:´ > However, I did not understand how you arrived > via the generation-function process at the > actual K-matrices. Would you mind to explain > this to me? > > GottfriedFinal mail: Code:An example: Substituting t = exp(y) in your U_t(x,2) we get exp(y*(exp(y*(exp(x)-1))-1))-1 = y^2*x+(1/2*y^2+1/2*y^3+1/2*y^4)*x^2+(1/6*y^2+1/2*y^3+2/3*y^4+1/2*y^5+1/6*y^6)*x^3+(1/24*y^2+7/24*y^3+13/24*y^4+19/24*y^5+13/24*y^6+1/4*y^7+1/24*y^8)*x^4+... . Gottfried Helms, Kassel bo198214 Administrator Posts: 1,384 Threads: 90 Joined: Aug 2007 04/25/2008, 03:39 PM Gottfried Wrote:The symbolic eigensystem-decomposition is enormous resources-consuming and also not yet well documented and verified in a general manner. Here I propose a completely elementary (while still matrix-based) way to arrive at the same result as by eigensystem-analysis. ---------------------------------------------------- As you might recall from my "continuous iteration" article I've found a matrix representation for the coefficients of the powerseries for the U-tetration $\hspace{24} U_t^{^oh}(x)$, where $U_t(x)= U_t^{^o1}(x) = t^x - 1$. Gottfried, as long as you deal with functions with fixed point at 0, for example $b^x-1$ or $xb^x$ there is no need to apply the Diagonalization method in its full generality. Note, that I would like to change the name from matrix operator method to diagonalization method, as this is more specific (for example Walker uses the name "matrix method" for his natural Abel method). For the case of a fixed point at 0, we can use the ordinary formulas of regular iteration. You know there are two cases, first $f'(0)=1$ (which Daniel Geisler calls parabolic) and $0 or $f'(0)>1$ (which Daniel Geisler calls hyperbolic, however I am not really happy with those names, because where is the elliptic case? Is this $0 in opposite to the $f'(0)>1$ which we then would call "properly hyperbolic". Also the hyperbolic and elliptic cases (in the just mentioned sense) exchange by using $f^{-1}$ and hence can be treated quite similarly. This is not the case for hyperbolas and ellipses.) The formula for the parabolic iteration was already mentioned on this forum under the name double binomial formula. The formula for hyperbolic iteration (or elliptic iteration) can be derived from the fact that $f^{\circ t}\circ f=f\circ f^{\circ t}$ and by the regularity condition: $(f^{\circ t})'(0)=(f'(0))^t$ For abbreviation let $g:=f^{\circ t}$, then the first equation can be written as $g_n (f^n)_n + \sum_{m=1}^{n-1} g_m (f^m)_n = f_1 g_n + \sum_{m=2}^n f_m (g^m)_n$ where $f^n$ means the $n$th power, by the subscript index $i$ we denote the $i$th coefficients of the indexed powerseries. 
We can rearrange the above formula to get a recurrence relation for the coefficients of $g$: $g_n = \frac{1}{(f_1)^n-f_1}\left( f_n (g_1)^n - g_1 f_n + \sum_{m=2}^{n-1} (f_m (g^m)_n - g_m (f^m)_n) \right)$ the only undetermined coefficient is now $g_1$ but we know already that we set $g_1=f_1^t$. The coefficient $(f^m)_n$ is the entry at the m-th row and the n-th column of the Carleman matrix of $f$. We see that in the above formula also the term $(g^m)_n$ occurs though we dont know all the coefficients of $g$ yet, however we have the power formula: $\left(f^p\right)_n=\sum_{ p_1+\dots+p_n=p\\ 1p_1+\dots+np_n=n\\ p_1,\dots,p_n\ge 0 } \frac{p!}{p_1!\dots p_n!} (f_1)^{p_1}\dots (f_{n})^{p_n}$ and see that in $(f^p)_n$ the biggest index taken at $f$ that can occur is $n$. However $p_n$, the exponent of $f_n$, can be bigger than 0 only in the case $p_n=1$ (second line below the sum) and all other $p_i=0$. This however can only happen for $p=1$ (first line below the sum). This case is excluded in $(g^m)_n$ because $m\ge 2$ and so we have indeed a recurrence formula which gives a polynomial in $g_1=f_1^t$ for $g_n$. You see, we not even have to solve a linear equation system for using the diagonalization method on a fixed point at 0. Gottfried Ultimate Fellow Posts: 753 Threads: 114 Joined: Aug 2007 04/26/2008, 06:09 PM (This post was last modified: 04/26/2008, 06:40 PM by Gottfried.) Hi Henryk - bo198214 Wrote:Gottfried, as long as you deal with functions with fixed point at 0, for example $b^x-1$ or $xb^x$ there is no need to apply the Diagonalization method in its full generality. Note, that I would like to change the name from matrix operator method to diagonalization method, as this is more specific (for example Walker uses the name "matrix method" for his natural Abel method). - yes, I've no problem with it. My motive to retain the name was not to indicate something as new, but to not mix methods, which are possibly different. I hate the cases, where names are borrowed for something other than their original meaning. You said it several times, that my approach is just the regular tetration; but since I've dealt with something more (most prominently infinite series of tetration/its associated matrices), for which I've not seen a reference before although such generalizations are smehow obvious, I could not be without doubt, that I was introducing something going away. If you identify it fully with the name "diagonalization"-method, I've no problem to adapt this in the future, because it contains the name of the general idea behind (which is also widely acknowledged), and that is always a good manner of names... So I'll try to get used to it hoping to not provoke protests and demonstrations... What I'll keep anyway will be the term "operator" to indicate the class of matrices, whose second column are defined as the coefficients of a function (represented by a polynomial or powerseries) and the other columns are configured such that they provide the consecutive powers of that function, so that the matrix-multiplication of a vandermonde-vector (consecutive powers of one parameter) will again give a vandermonde-vector, and is thus (at least finitely often) iterable. (The Bell- and Carleman-matrices are -mutatis mutandis- instances of such matrix-"operators"). ------------------- (for the following formulae - I've to go through them in detail to see, whether they are of help to compute the coefficients. 
I think with the help of Aldrovandis Bell-/Carleman examples I'll be able to relate the both ways of views soon) ------------------- Quote:You see, we not even have to solve a linear equation system for using the diagonalization method on a fixed point at 0. Yepp, I already seem to have arrived at such a view. My latest Eigensystem-solver for this class of triangular matrices is already very simple, it's just a ~ten-liner in Pari/GP Code:\\ Ut is the lower triangular matrix-operator for the function \\ Ut := x -> (t^x - 1) \\ for t=exp(1) Ut is the factorially scaled matrix of Stirling-numbers 2'nd kind \\ Ut may even contain the log(t)-parameter u in *symbolic* form \\ use dim<=32 in this case to prevent excessive memory and time-consumtion. \\ Also use exact arithmetic then; provide numfmt=1 instead of =1.0 { APT_Init2EW(Ut,dim=9999,numfmt=1.0)=local(tu=Ut[2,2],UEW,UEWi,tt=exp(tu),tuv) ; dim=min(rows(Ut),dim); tuv=vectorv(dim,r,tu^(r-1)); \\ the eigenvalues UEW=numfmt*matid(dim); \\ the first eigenmatrix UEW for(c=2,dim-1, for(r=c+1,dim, UEW[r,c] = sum(k=c,r-1,Ut[r,k]*UEW[k,c])/(tuv[c]-tuv[r]) )); UEWi=numfmt*matid(dim); \\ the second eigenmatrix UEWi = UEW^-1 for(r=3,dim, forstep(c=r-1,2,-1, UEWi[r,c]=sum(k=0,r-1-c,Ut[r-k,c]*UEWi[r,r-k]) /(tuv[r] -tuv[c]) )); return([[tu,tt,exp(tu/tt)],UEW,tuv,UEWi]); \\ Ut = UEW * matdiagonal(tuv) * UEWi } This is a recursive approach, very simple and I could imagine it implements just the g- and f- recursions in your post. Gottfried Helms, Kassel bo198214 Administrator Posts: 1,384 Threads: 90 Joined: Aug 2007 04/26/2008, 06:47 PM (This post was last modified: 04/26/2008, 06:48 PM by bo198214.) Gottfried Wrote:You said it several times, that my approach is just the regular tetration; but since I've dealt with something more (most prominently infinite series of tetration/its associated matrices), for which I've not seen a reference before although such generalizations are smehow obvious, I could not be without doubt, that I was introducing something going away. Nono, I didnt say that it is just regular tetration, but it is regular tetration if you do it at a fixed point. Strangely we may have different opinions about the interestingness of the application types of this method. I am fascinated to apply this method to non-fixed points, e.g. for the case $b>e^{1/e}$ while at fixed points the case is already explored by the regular iteration and yields no new results. While you, (if I understand a discussion right, that we had some time ago on this forum), whom introduced the method to this forum, rather are fascinated with finding patterns in the matrices, which rather occur at fixed points. Quote:This is a recursive approach, very simple and I could imagine it implements just the g- and f- recursions in your post. Wow, yes this may indeed be, however I am currently too lazy to check this in detail. But at least regarding our comparison here accutely supports that they are equal. Gottfried Ultimate Fellow Posts: 753 Threads: 114 Joined: Aug 2007 07/08/2008, 06:46 AM Just read the book "Advanced combinatorics" of Louis Comtet (pg 143-14 about his method of fractional iteration for powerseries. This is just a binomial-expansion using the Bell-matrix $\hspace{24}B^t = \sum_{k=0}^{\infty} ({t \\k})*B1^k$ where $\hspace{24} B1 = B - ID$ However, with one example, with the matrix for dxp_2(x) (base= 2) the results of all three methods (Binomial-expansion, Matrix- logarithm, Diagonalization) converge to the same result. 
For matrix-log and binomial-expansion I need infinitely many terms to arrive at exact results (because for the general case the diagonal of the matrix is not the unit-diagonal and so the terms of the expansions are not nilpotent) while the diagonali- zation-method needs only as many terms as the truncation-size of the matrix determines, and is then constant for increasing sizes. (Well, I'm talking of triangular matrices here and without thoroughly testing...) $\hspace{24} U_t = {}^dV(\log(2))* fS2F$ So Ut is the Bell-matrix for the function $\hspace{24} Ut(x) = 2^x - 1$ Then I determined the coefficients for half-iteration Ut°0.5(x) by Ut^0.5 using all three methods. Result by Diagonalization / Binomial / Matrix-log (differences are vanishing when using more terms for the series-expansion of Binomial / matrix-logarithm; I used 200 terms here) $\hspace{24} \begin{matrix} {rrrrrrr} 1.00000000000 & . & . & . & . & . & . & . \\ 0 & 0.832554611158 & . & . & . & . & . & . \\ 0 & 0.157453119779 & 0.693147180560 & . & . & . & . & . \\ 0 & 0.0100902384840 & 0.262176641827 & 0.577082881386 & . & . & . & . \\ 0 & -0.000178584914170 & 0.0415928340834 & 0.327414558137 & 0.480453013918 \\ 0 & 0.0000878420556305 & 0.00288011566971 & 0.0829028563527 & 0.363454000182 \\ 0 & -0.00000218182495620 & 0.000191842025839 & 0.0114684142796 & 0.126396502873 \\ 0 & -0.00000702051219082 & 0.0000204251058104 & 0.00104695045599 & 0.0258020272404 \end{matrix}$ Differences: Diagonalization - binomial (200 terms) $\hspace{24} \begin{matrix} {rrrrrrr} 0.E-414 & . & . & . & . & . & . & . \\ 0 & -1.06443358765E-107 & . & . & . & . & . & . \\ 0 & 1.60236818692E-61 & -1.41872090005E-61 & . & . & . & . & . \\ 0 & -1.75734170509E-39 & 2.93458536041E-39 & -1.29912638612E-39 & . & . & . & . \\ 0 & 8.68085535070E-27 & -2.05384773365E-26 & 1.74805552817E-26 & -5.15903675680E-27 & . & . \\ 0 & -7.73773729412E-19 & 2.30948495672E-18 & -2.83611081486E-18 & 1.62496041665E-18 \\ 0 & 1.30102806362E-13 & -4.60061066251E-13 & 7.25093796854E-13 & -6.05100795132E-13 \\ 0 & -0.000000000461000932910 & 0.00000000185781313787 & -0.00000000352662825863 & 0.00000000381278773133 \end{matrix}$ Binomial - matrixlog (200 terms) $\hspace{24} \begin{matrix} {rrrrrrr} 0.E-414 & . & . & . & . & . & . & . \\ 0 & 1.55109064503E-66 & . & . & . & . & . & . \\ 0 & -5.98902754444E-56 & 5.30262558750E-56 & . & . & . & . & . \\ 0 & -1.75734170498E-39 & 2.93458536023E-39 & -1.29912638604E-39 & . & . & . & . \\ 0 & 8.68085535070E-27 & -2.05384773365E-26 & 1.74805552817E-26 & -5.15903675680E-27 & . \\ 0 & -7.73773729412E-19 & 2.30948495672E-18 & -2.83611081486E-18 & 1.62496041665E-18 \\ 0 & 1.30102806362E-13 & -4.60061066251E-13 & 7.25093796854E-13 & -6.05100795132E-13 \\ 0 & -0.000000000461000932910 & 0.00000000185781313787 & -0.00000000352662825863 \end{matrix}$ Diagonalization - matrixlog (200 terms) $\hspace{24} \begin{matrix} {rrrrrrr} 0.E-414 & . & . & . & . & . & . & . \\ 0 & -1.55109064503E-66 & . & . & . & . & . & . \\ 0 & 5.98904356812E-56 & -5.30263977471E-56 & . & . & . & . & . \\ 0 & -1.03921639872E-49 & 1.73538878994E-49 & -7.68248541586E-50 & . & . & . & . \\ 0 & 3.03477814704E-45 & -7.18038111513E-45 & 6.11155244587E-45 & -1.80377946480E-45 & . 
\\ 0 & -9.50474725632E-42 & 2.83773329816E-41 & -3.48603689927E-41 & 1.99810508742E-41 \\ 0 & 7.26187711067E-39 & -2.57084380097E-38 & 4.05741175331E-38 & -3.39109179435E-38 \\ 0 & -1.42479127409E-29 & 5.74050200445E-29 & -1.08939157931E-28 & 1.17741370976E-28 \end{matrix}$ Gottfried Helms, Kassel Gottfried Ultimate Fellow Posts: 753 Threads: 114 Joined: Aug 2007 08/08/2008, 01:12 PM (This post was last modified: 08/09/2008, 06:39 PM by Gottfried.) I've collected basic details of the diagonalization-method as I use it for fractional U-tetration, including the Pari/GP-routine and some more sample coefficients in [update 9.8.2008] http://go.helms-net.de/math/tetdocs/APT.htm [/update] Comments welcome - Gottfried Gottfried Helms, Kassel Gottfried Ultimate Fellow Posts: 753 Threads: 114 Joined: Aug 2007 09/24/2008, 08:22 PM (This post was last modified: 09/25/2008, 07:38 AM by Gottfried.) Just derived a method to compute exact entries for powers of the (square) matrix-operator for T-tetration. It is applicable to positive integer powers only, but for any base. The restriction to positive integer powers lets look such solutions useless, since integer iteration-height can easily computed just using the scalar values. But I'll use this for further analysis of powerseries, series of powertowers and hopefully one time for the fractional iteration... Let's use the following notational conventions: Code:´ b^^h - the powertower of height h using base b V(x) - the vandermonde-column-vector containing consecutive powers of its parameter x: V(x) = column(1,x,x^2,x^3,...) dV(x) - used as diagonal-matrix T - the matrix which performs T-tetration to base b (in our forum:"exp_b°t(x)" ): V(x)~ * T = V(b^x)~ U - the matrix which performs U-tetration to base b (in our forum:"dxp_b°t(x)" ): V(x)~ * U = V(b^x - 1) ~ Note, that U is lower triangular. The triangularity allows to compute exact entries for the integer matrix-powers. Then the entries for positive integer powers of T can be finitely computed and are "exact", as far as we assume scalar logarithms and exponentials as exact: Code:´ T^2 = U*dV(b^^0) * T*dV(b^^1) T^3 = U*dV(b^^0) * U*dV(b^^1) * T*dV(b^^2) ... T^h = prod_{k=0}^{h-2} (U * dV(b^^k)) * (T * dV(b^^(h-1)))This finding is interesting, because in my matrix-method I had to use fixpoint-shift to get exact entries even for integer powers, and since the fixpoints for T-tetration are real only for a small range of bases we had to deal with complex-valued U-matrices when considering the general case. Here we do not need a fixpoint-shift. I did not check how this computation is related to Ioannis Galidakis' method for exact entries yet, but I think, this is interesting too. Here is the top left of the symbolic T^2, where lambda=log(b). Each row has to be multiplied by the entry in the most left column and each column must also be multiplied by the entry in the first row. Here is the top left of the symbolic T^3. (b^^2 means b^b). Legend as before The difference between the symbolic computation and the simple matrixpower is interesting. I used dim=64x64, base b=sqrt(2), which provides a good approximation when the simple matrix-power is computed. Here are two (zoomed) images: very good aproximation in the leading 12 columns (abs differences to the exact values <1e-20 ), but in the columns 52 to 63 the differences grow up to absolute values greater than 1e10. Surely I "knew" that differences should occur, but I hadn't guessed, that they are so large - I just didn't investigate this in detail. 
The leading first twelve columns of the matrix of differences: The twelve rightmost columns: The large errors are actually still relatively small for that base b=sqrt(2). A measure for the quality of approximation is, whether the resulting vector of V(x)~*T^3 = Y~ is actually vandermonde and thus Y = V(y) . This means, that the ratios of logarithms of its entries : log(Y[k])/log(Y[1]), k=0..63, should give the exact sequence [0,1,2,3,...], because this means, that Y contains indeed the consecutive powers of Y[1]. Here is a table of that ratios. ( Remember: we check the col-sums of the third power of T, using x=1) Code:´ column symbolic "naive" difference ------------------------------------------------------------ 0 -1.13831366798E-19 0.E-201 -1.13831366798E-19 1 1.00000000000 1.00000000000 0.E-202 2 2.00000000000 2.00000000000 1.11189479752E-19 3 3.00000000000 3.00000000000 2.20208219209E-19 4 4.00000000000 4.00000000000 3.27448139227E-19 .... 42 42.0000000000 42.0000000000 1.19448154596E-11 43 43.0000000000 43.0000000000 3.09972987348E-11 44 44.0000000000 43.9999999999 7.77154497620E-11 45 45.0000000000 44.9999999998 1.88541576938E-10 46 46.0000000000 45.9999999996 0.000000000443257385765 47 47.0000000000 46.9999999990 0.00000000101122118773 48 48.0000000000 47.9999999978 0.00000000224146913405 49 49.0000000000 48.9999999952 0.00000000483322217091 50 50.0000000000 49.9999999899 0.0000000101495726916 51 51.0000000000 50.9999999792 0.0000000207790885254 52 52.0000000000 51.9999999585 0.0000000415151935602 53 53.0000000000 52.9999999190 0.0000000810212593226 54 54.0000000000 53.9999998454 0.000000154592714129 55 55.0000000000 54.9999997114 0.000000288630924056 56 56.0000000000 55.9999994723 0.000000527723771045 57 57.0000000000 56.9999990544 0.000000945602091679 58 58.0000000000 57.9999983383 0.00000166172556358 59 59.0000000000 58.9999971341 0.00000286585838319 60 60.0000000000 59.9999951463 0.00000485372880576 61 61.0000000000 60.9999919223 0.00000807772027087 62 62.0000000000 61.9999867825 0.0000132174922318 63 63.0000000000 62.9999787236 0.0000212764324046 For base b=2 this looks already catastrophic for the "naive"-computation: Code:´ column symbolic "naive" difference ------------------------------------------------------------ 0 -4.36636233681E-20 0.E-202 -4.36636233681E-20 1 1.00000000000 1.00000000000 0.E-202 2 2.00000000000 2.00000000000 2.46662212171E-14 3 3.00000000000 2.99999999998 2.31650095614E-11 4 4.00000000000 3.99999999669 0.00000000331064284896 5 5.00000000000 4.99999985867 0.000000141325875791 6 6.00000000000 5.99999733964 0.00000266036059669 .... 50 50.0000000000 37.9413247398 12.0586752602 51 51.0000000000 38.3796009369 12.6203990631 52 52.0000000000 38.8098393554 13.1901606446 53 53.0000000000 39.2323121795 13.7676878205 54 54.0000000000 39.6472796473 14.3527203527 55 55.0000000000 40.0549905348 14.9450094652 56 56.0000000000 40.4556826502 15.5443173498 57 57.0000000000 40.8495833279 16.1504166721 58 58.0000000000 41.2369099170 16.7630900830 59 59.0000000000 41.6178702587 17.3821297413 60 60.0000000000 41.9926631496 18.0073368504 61 61.0000000000 42.3614787893 18.6385212107 62 62.0000000000 42.7244992089 19.2755007911 63 63.0000000000 43.0818986820 19.9181013180 It is obvious, that we should use the "exact" (symbolic) description, if we ever explicitely consider powers of the tetration-matrix T. 
Gottfried Helms, Kassel bo198214 Administrator Posts: 1,384 Threads: 90 Joined: Aug 2007 09/26/2008, 07:30 AM Gottfried Wrote:Just derived a method to compute exact entries for powers of the (square) matrix-operator for T-tetration. It is applicable to positive integer powers only, but for any base. ... Code:T^2 = U*dV(b^^0) * T*dV(b^^1) T^3 = U*dV(b^^0) * U*dV(b^^1) * T*dV(b^^2) ... T^h = prod_{k=0}^{h-2} (U * dV(b^^k)) * (T * dV(b^^(h-1)))So the finding is that though the matrix multiplication of the T's is infinite, the result is expressible in finite terms (polynomials) of $\text{log}(b)$? Quote:This finding is interesting, because in my matrix-method I had to use fixpoint-shift to get exact entries even for integer powers, and since the fixpoints for T-tetration are real only for a small range of bases we had to deal with complex-valued U-matrices when considering the general case. Here we do not need a fixpoint-shift. I did not check how this computation is related to Ioannis Galidakis' method for exact entries yet, but I think, this is interesting too. That would be useful to compare. Can you derive a recurrence from your matrix formula? Gottfried Ultimate Fellow Posts: 753 Threads: 114 Joined: Aug 2007 09/26/2008, 09:56 AM (This post was last modified: 09/26/2008, 11:28 AM by Gottfried.) bo198214 Wrote:So the finding is that though the matrix multiplication of the T's is infinite, the result is expressible in finite terms (polynomials) of $\text{log}(b)$? Exactly. The occuring infinite sums are decomposable into sums of simple exponential-series for which then exact values are defined (and computable to arbitrary precision). The compositions of the evaluated exponentials contain then only finitely many terms. See the display of the matrix T^2. I found these by studying the dot-products of the rows and second column of T*T which was then easily generalizable to the other columns of the matrix-product. Quote:That would be useful to compare. Can you derive a recurrence from your matrix formula? Yes. For this I see another streamlining of the formula first: Note, that T = U * P~ , which can also be seen by the operation Code:´ V(x)~ * T = V(b^x)~ V(x)~ * U = V(b^x -1)~ also: V(b^x -1)~ * P~ = V(b^x)~ thus: V(x)~ * (U * P~ ) = V(b^x)~ ==> U * P~ = T A further adaption can be made. We have, that the final term in T^2 = U*dV(b^^0) * T*dV(b^^1) is Code:´ T*dV(b^^1) = U *P~ * dV(b^^1) We can rewrite this in terms of a power of P ( use notation "PPow()" ) by expanding with the trivial product dV(b^^1)*dV(1/b^^1)= I Code:´ T*dV(b^^1) = U * dV(b^^1)*dV(1/b^^1) * P~ * dV(b^^1) = U * dV(b^^1)* (dV(1/b^^1) * P~ * dV(b^^1)) = U * dV(b^^1)* PPow(b^^1) ~The general product-formula changes then to Code:´ T^h = prod_{k=0}^{h-1} (U * dV(b^^k)) * PPow(b^^(h-1)) ~ and a recursion is then Code:´ T^(h+1) = T^h * PPow(-b^^(h-1))~ * U * dV(b^^h) * PPow(b^^h)~ or, even more simple T^(h+1) = T^h * PPow(-b^^(h-1))~ * T * dV(b^^h) where the first part of the product , T^h * PPow(-b^^(h-1))~ , gives a triangular matrix. The recursion may also be seen "in action", when evaluated for a parameter x. We need an ascii notation for iterated exponentiation, I use x.b^^h for exp_b°h(x) here, or if x=1, simply b^^h . 
We have by definition, that Code:' V(x)~ * T^(h+1) = V(x.b^^(h+1)) ~ Using the recursion-formula we get Code:' V(x)~ * T^(h+1) = V(x) ~*T^h * PPow(-b^^(h-1))~ * T * dV(b^^h) = V(x.b^^h) ~ * PPow(-b^^(h-1))~ * T * dV(b^^h) = V(x.b^^h - b^^(h-1))~ * T * dV(b^^h) = V(b^(x.b^^h - b^^(h-1)))~ * dV(b^^h) = V(b^(x.b^^h)/b^(b^^(h-1)))~ * dV(b^^h) = V(x.b^^(h+1)/b^^h)~ * dV(b^^h) = V(x.b^^(h+1))~ * dV(1/b^^h) * dV(b^^h) = V(x.b^^(h+1))~ * I = V(x.b^^(h+1))~which is the same result. Gottfried Helms, Kassel « Next Oldest | Next Newest »
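As an independent cross-check of the half-iterate coefficients of Ut(x) = 2^x - 1 tabulated earlier in this thread, the same numbers can be reproduced outside Pari/GP by building a truncated Carleman-type matrix for Ut and taking its matrix square root; the Python sketch below (truncation order and variable names chosen arbitrarily) prints approximately (0.83255, 0.15745, 0.01009, -0.00018), in line with the diagonalization result.

```python
import numpy as np
from math import factorial, log
from scipy.linalg import fractional_matrix_power

N = 12                      # truncation order of the power series
u = log(2.0)                # f(x) = 2^x - 1, so f'(0) = log 2

# coefficients of f(x) = sum_{n>=1} u^n x^n / n!
f = np.array([0.0] + [u**n / factorial(n) for n in range(1, N)])

# Carleman-type matrix: row j holds the coefficients of f(x)^j truncated at x^(N-1),
# so that composition of power series corresponds to matrix multiplication.
M = np.zeros((N, N))
M[0, 0] = 1.0
row = M[0].copy()
for j in range(1, N):
    new = np.zeros(N)
    for a in range(N):
        if row[a] != 0.0:
            new[a:] += row[a] * f[:N - a]   # multiply the truncated series by f once more
    row = new
    M[j] = row

# Half-iterate f^(1/2): its coefficients sit in row 1 of the matrix square root.
H = fractional_matrix_power(M, 0.5)
print(np.real(H[1, 1:5]))
```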
https://phys.libretexts.org/Bookshelves/Quantum_Mechanics/Book%3A_Introductory_Quantum_Mechanics_(Fitzpatrick)/14%3A_Scattering_Theory/14.3%3A_Partial_Waves
# 14.3: Partial Waves
We can assume, without loss of generality, that the incident wavefunction is characterized by a wavevector $${\bf k}$$ that is aligned parallel to the $$z$$-axis. The scattered wavefunction is characterized by a wavevector $${\bf k}'$$ that has the same magnitude as $${\bf k}$$, but, in general, points in a different direction. The direction of $${\bf k}'$$ is specified by the polar angle $$\theta$$ (i.e., the angle subtended between the two wavevectors), and an azimuthal angle $$\phi$$ about the $$z$$-axis. Equations ([e17.38]) and ([e17.39]) strongly suggest that for a spherically symmetric scattering potential [i.e., $$V({\bf r}) = V(r)$$] the scattering amplitude is a function of $$\theta$$ only: that is, $f(\theta, \phi) = f(\theta).$ It follows that neither the incident wavefunction,
$\label{e17.52} \psi_0({\bf r}) = \sqrt{n}\,\exp(\,{\rm i}\,k\,z)= \sqrt{n}\,\exp(\,{\rm i}\,k\,r\cos\theta),$ nor the large-$$r$$ form of the total wavefunction,
$\label{e17.53} \psi({\bf r}) = \sqrt{n} \left[ \exp(\,{\rm i}\,k\,r\cos\theta) + \frac{\exp(\,{\rm i}\,k\,r)\, f(\theta)} {r} \right],$ depend on the azimuthal angle $$\phi$$.
Outside the range of the scattering potential, both $$\psi_0({\bf r})$$ and $$\psi({\bf r})$$ satisfy the free-space Schrödinger equation,
$\label{e17.54} (\nabla^{\,2} + k^{\,2})\,\psi = 0.$ What is the most general solution to this equation in spherical polar coordinates that does not depend on the azimuthal angle $$\phi$$? Separation of variables yields
$\label{e17.55} \psi(r,\theta) = \sum_l R_l(r)\, P_l(\cos\theta),$ because the Legendre functions, $$P_l(\cos\theta)$$, form a complete set in $$\theta$$-space. The Legendre functions are related to the spherical harmonics, introduced in Chapter [sorb], via $P_l(\cos\theta) = \sqrt{\frac{4\pi}{2\,l+1}}\, Y_{l,0}(\theta,\varphi).$ Equations ([e17.54]) and ([e17.55]) can be combined to give
$r^{\,2}\,\frac{d^{\,2} R_l}{dr^{\,2}} + 2\,r \,\frac{dR_l}{dr} + [k^{\,2} \,r^{\,2} - l\,(l+1)]\,R_l = 0.$ The two independent solutions to this equation are the spherical Bessel functions, $$j_l(k\,r)$$ and $$y_l(k\,r)$$, introduced in Section [rwell]. Recall that
\begin{aligned} \label{e17.58a} j_l(z) &= z^{\,l}\left(-\frac{1}{z}\frac{d}{dz}\right)^l\left( \frac{\sin z}{z}\right), \\[0.5ex]\label{e17.58b} y_l(z) &= -z^{\,l}\left(-\frac{1}{z}\frac{d}{dz}\right)^l \left(\frac{\cos z}{z}\right).\end{aligned} Note that the $$j_l(z)$$ are well behaved in the limit $$z\rightarrow 0$$ , whereas the $$y_l(z)$$ become singular. The asymptotic behavior of these functions in the limit $$z\rightarrow \infty$$ is
\begin{aligned} \label{e17.59a} j_l(z) &\rightarrow \frac{\sin(z - l\,\pi/2)}{z},\\[0.5ex] y_l(z) &\rightarrow - \frac{\cos(z-l\,\pi/2)}{z}.\label{e17.59b}\end{aligned}
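These limiting forms are easy to check numerically with SciPy's spherical Bessel routines; the test value $$z=50$$ below is arbitrary.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

z = 50.0
for l in range(4):
    print(l,
          spherical_jn(l, z), np.sin(z - l * np.pi / 2) / z,    # j_l and its asymptotic form
          spherical_yn(l, z), -np.cos(z - l * np.pi / 2) / z)   # y_l and its asymptotic form
```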
We can write $\exp(\,{\rm i}\,k\,r \cos\theta) = \sum_l a_l\, j_l(k\,r)\, P_l(\cos\theta),$ where the $$a_l$$ are constants. Note there are no $$y_l(k\,r)$$ functions in this expression because they are not well-behaved as $$r \rightarrow 0$$. The Legendre functions are mutually orthogonal,
$\label{e17.61} \int_{-1}^1 P_n(\mu) \,P_m(\mu)\,d\mu = \frac{\delta_{nm}}{n+1/2},$ so we can invert the previous expansion to give $a_l \,j_l(k\,r) = (l+1/2)\int_{-1}^1 \exp(\,{\rm i}\,k\,r \,\mu) \,P_l(\mu) \,d\mu.$ It is well known that $j_l(y) = \frac{(-{\rm i})^{\,l}}{2} \int_{-1}^1 \exp(\,{\rm i}\, y\,\mu) \,P_l(\mu)\,d\mu,$ where $$l=0, 1, 2, \cdots$$ . Thus, $a_l = {\rm i}^{\,l} \,(2\,l+1),$ giving
$\label{e15.49} \psi_0({\bf r}) = \sqrt{n}\,\exp(\,{\rm i}\,k\,r \cos\theta) =\sqrt{n}\, \sum_l {\rm i}^{\,l}\,(2\,l+1)\, j_l(k\,r)\, P_l(\cos\theta).$ The previous expression tells us how to decompose the incident plane-wave into a series of spherical waves. These waves are usually termed “partial waves”.
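The expansion can be verified numerically by truncating the sum at a moderate maximum $$l$$; in the sketch below the values of $$k$$, $$r$$, and $$\theta$$ are arbitrary test inputs.

```python
import numpy as np
from scipy.special import spherical_jn, eval_legendre

k, r, theta = 1.3, 2.0, 0.7          # arbitrary test values
mu = np.cos(theta)
lmax = 40                            # truncation of the partial-wave sum

ls = np.arange(lmax + 1)
series = np.sum((1j ** ls) * (2 * ls + 1) * spherical_jn(ls, k * r) * eval_legendre(ls, mu))
print(series, np.exp(1j * k * r * mu))   # the truncated sum reproduces the plane wave
```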
The most general expression for the total wavefunction outside the scattering region is $\psi({\bf r}) = \sqrt{n}\sum_l\left[ A_l\,j_l(k\,r) + B_l\,y_l(k\,r)\right] P_l(\cos\theta),$ where the $$A_l$$ and $$B_l$$ are constants. Note that the $$y_l(k\,r)$$ functions are allowed to appear in this expansion because its region of validity does not include the origin. In the large-$$r$$ limit, the total wavefunction reduces to $\psi ({\bf r} ) \simeq \sqrt{n} \sum_l\left[A_l\, \frac{\sin(k\,r - l\,\pi/2)}{k\,r} - B_l\,\frac{\cos(k\,r -l\,\pi/2)}{k\,r} \right] P_l(\cos\theta),$ where use has been made of Equations ([e17.59a]) and ([e17.59b]). The previous expression can also be written
$\label{e17.68} \psi ({\bf r} ) \simeq \sqrt{n} \sum_l C_l\, \frac{\sin(k\,r - l\,\pi/2+ \delta_l)}{k\,r}\, P_l(\cos\theta),$ where the sine and cosine functions have been combined to give a sine function which is phase-shifted by $$\delta_l$$. Note that $$A_l=C_l\,\cos\delta_l$$ and $$B_l=-C_l\,\sin\delta_l$$.
Equation ([e17.68]) yields $\psi({\bf r}) \simeq \sqrt{n} \sum_l C_l\left[ \frac{{\rm e}^{\,{\rm i}\,(k\,r - l\,\pi/2+ \delta_l)} -{\rm e}^{-{\rm i}\,(k\,r - l\,\pi/2+ \delta_l)} }{2\,{\rm i}\,k\,r} \right] P_l(\cos\theta),\label{e17.69}$ which contains both incoming and outgoing spherical waves. What is the source of the incoming waves? Obviously, they must be part of the large-$$r$$ asymptotic expansion of the incident wavefunction. In fact, it is easily seen from Equations ([e17.59a]) and ([e15.49]) that
$\psi_0({\bf r}) \simeq \sqrt{n} \sum_l {\rm i}^{\,l}\, (2l+1)\left[\frac{ {\rm e}^{\,{\rm i}\,(k\,r - l\,\pi/2)} -{\rm e}^{-{\rm i}\,(k\,r - l\,\pi/2)}}{2\,{\rm i}\,k\,r} \right]P_l(\cos\theta)\label{e17.70}$ in the large-$$r$$ limit. Now, Equations ([e17.52]) and ([e17.53]) give
$\label{e17.71} \frac{\psi({\bf r} )- \psi_0({\bf r}) }{ \sqrt{n}} = \frac{\exp(\,{\rm i}\,k\,r)}{r}\, f(\theta).$ Note that the right-hand side consists of an outgoing spherical wave only. This implies that the coefficients of the incoming spherical waves in the large-$$r$$ expansions of $$\psi({\bf r})$$ and $$\psi_0({\bf r})$$ must be the same. It follows from Equations ([e17.69]) and ([e17.70]) that $C_l = (2\,l+1)\,\exp[\,{\rm i}\,(\delta_l + l\,\pi/2)].$ Thus, Equations ([e17.69])–([e17.71]) yield
$\label{e17.73} f(\theta) = \sum_{l=0,\infty} (2\,l+1)\,\frac{\exp(\,{\rm i}\,\delta_l)} {k} \,\sin\delta_l\,P_l(\cos\theta).$ Clearly, determining the scattering amplitude, $$f(\theta)$$, via a decomposition into partial waves (i.e., spherical waves) is equivalent to determining the phase-shifts, $$\delta_l$$.
Now, the differential scattering cross-section, $$d\sigma/d{\mit\Omega}$$, is simply the modulus squared of the scattering amplitude, $$f(\theta)$$. [See Equation ([e15.17]).] The total cross-section is thus given by \begin{aligned} \sigma_{\rm total}& = \int |f(\theta)|^{\,2}\,d{\mit\Omega}\\[0.5ex] &= \frac{1}{k^{\,2}} \oint d\phi \int_{-1}^{1} d\mu \sum_l \sum_{l'} (2\,l+1)\,(2\,l'+1) \exp[\,{\rm i}\,(\delta_l-\delta_{l'})]\, \sin\delta_l \,\sin\delta_{l'}\, P_l(\mu)\, P_{l'}(\mu),\nonumber\end{aligned} where $$\mu = \cos\theta$$. It follows that $\label{e17.75} \sigma_{\rm total} = \frac{4\pi}{k^{\,2}} \sum_l (2\,l+1)\,\sin^2\delta_l,$ where use has been made of Equation ([e17.61]).
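As a consistency check, integrating $$|f(\theta)|^{\,2}$$ over solid angle reproduces the phase-shift sum above; the phase shifts in the following sketch are arbitrary illustrative values.

```python
import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import quad

k = 1.0
delta = np.array([0.9, 0.4, 0.1, 0.02])      # illustrative phase shifts delta_0..delta_3
ls = np.arange(len(delta))

def f(theta):
    # scattering amplitude built from the phase shifts
    return np.sum((2 * ls + 1) * np.exp(1j * delta) * np.sin(delta)
                  * eval_legendre(ls, np.cos(theta))) / k

sigma_int = 2 * np.pi * quad(lambda t: abs(f(t)) ** 2 * np.sin(t), 0, np.pi)[0]
sigma_sum = 4 * np.pi / k ** 2 * np.sum((2 * ls + 1) * np.sin(delta) ** 2)
print(sigma_int, sigma_sum)    # the two values agree
```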
# Contributors
• Richard Fitzpatrick (Professor of Physics, The University of Texas at Austin)
https://iwaponline.com/hr/article-abstract/33/1/75/836/Influence-of-Sub-Debris-Thawing-on-Ablation-and?redirectedFrom=PDF | Superficial moraines grew in size during the entire 32-year-long period of direct monitoring of the water and ice balance of the Djankuat Glacier in the Caucasus. The total area of debris cover on the glacier increased from 0.104 km2 (3% of the entire glacier surface) in 1968 to 0.266 km2 (8% of the glacier) in 1996. Such rapid moraine formation greatly influences the ablation rate and distorts the fields of the mass-balance components. Sub-debris thawing can be calculated by means of a model which describes the role of debris cover in the thermal properties of a glacier; its meltwater equivalent depends mainly on debris thickness. In 1983 and 1994 the debris cover was repeatedly mapped over the whole glacier portion covered with morainic material. Sub-moraine ablation increases (relative to a pure ice surface) under a thin debris layer of less than ca. 7-8 cm, whereas a thicker debris cover reduces the liquid runoff due to its shielding effect. Zones differing in their hydrological effect are depicted on the glacier map and the degree of debris influence on ablation is estimated quantitatively. As a whole, runoff from the debris-covered parts of the Djankuat Glacier has diminished due to the dominant shielding effect. Variation of the terminus is also shown to depend on the evolution of the superficial moraine.
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=E1BMAX_2014_v51n3_831 | A FURTHER INVESTIGATION OF GENERATING FUNCTIONS RELATED TO PAIRS OF INVERSE FUNCTIONS WITH APPLICATIONS TO GENERALIZED DEGENERATE BERNOULLI POLYNOMIALS
Gaboury, Sebastien; Tremblay, Richard;
Abstract
In this paper, we obtain new generating functions involving families of pairs of inverse functions by using a generalization of Srivastava's theorem [H. M. Srivastava, Some generalizations of Carlitz's theorem, Pacific J. Math. 85 (1979), 471-477] obtained by Tremblay and Fugère [Generating functions related to pairs of inverse functions, Transform methods and special functions, Varna '96, Bulgarian Acad. Sci., Sofia (1998), 484-495]. Special cases are given. These can be seen as generalizations of the generalized Bernoulli polynomials and the generalized degenerate Bernoulli polynomials.
Keywords
generating functions; multiparameter and multivariate generating functions; inverse functions; Bernoulli polynomials; Nörlund polynomials
References
1. L. Carlitz, A degenerate Staudt-Clausen theorem, Arch. Math. 7 (1956), 28-33.
2. L. Carlitz, A class of generating functions, SIAM J. Math. Anal. 8 (1977), no. 3, 518-532.
3. L. Carlitz, Degenerate Stirling, Bernoulli and Eulerian numbers, Utilitas Math. 15 (1979), 51-88.
4. L. Carlitz and H. M. Srivastava, Some new generating functions for the Hermite polynomials, J. Math. Anal. Appl. 149 (1990), 513-520.
5. R. Donaghey, Two transformations of series that commute with compositional inversion, J. Combin. Theory Ser. A 27 (1979), no. 3, 360-364.
6. A. Erdelyi, W. Magnus, F. Oberhettinger, and F. Tricomi, Higher Transcendental Functions. Vols. 1-3, New York, Toronto and London, McGraw-Hill Book Company, 1953.
7. M. Garg, K. Jain, and H. M. Srivastava, Some relationships between the generalized Apostol-Bernoulli polynomials and Hurwitz-Lerch zeta functions, Integral Transform Spec. Funct. 17 (2006), no. 11, 803-815.
8. A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, Elsevier, 2006.
9. J.-L. Lavoie, T. J. Osler, and R. Tremblay, Fundamental properties of fractional derivatives via Pochhammer integrals, Lecture Notes in Mathematics, 1974.
10. Y. Luke, The Special Functions and Their Approximations. Vols. 1-2, Mathematics in Science and Engineering, New York and London, Academic Press, 1969.
11. Q.-M. Luo, Apostol-Euler polynomials of higher order and Gaussian hypergeometric functions, Taiwanese J. Math. 10 (2006), no. 4, 917-925.
12. Q.-M. Luo, The multiplication formulas for the Apostol-Bernoulli and Apostol-Euler polynomials of higher order, Integral Transform Spec. Funct. 20 (2009), no. 5-6, 377-391.
13. Q.-M. Luo, B.-N. Guo, F. Qui, and L. Debnath, Generalizations of Bernoulli numbers and polynomials, Int. J. Math. Math. Sci. 2003 (2003), no. 59, 3769-3776.
14. Q.-M. Luo and H. M. Srivastava, Some generalizations of the Apostol-Bernoulli and Apostol-Euler polynomials, J. Math. Anal. Appl. 308 (2005), no. 1, 290-302.
15. Q.-M. Luo and H. M. Srivastava, Some relationships between the Apostol-Bernoulli and Apostol-Euler polynomials, Comput. Math. Appl. 51 (2006), no. 3-4, 631-642.
16. N. Nielsen, Traite elementaire des nombres de bernoulli, Gauthier-Villars, Paris, 1923.
17. N. E. Norlund, Vorlesungen der differenzenrechnung, Springer, Berlin, 1924.
18. T. J. Osler, Fractional derivatives of a composite function, SIAM J. Math. Anal. 1 (1970), 288-293.
19. T. J. Osler, Leibniz rule for fractional derivatives and an application to infinite series, SIAM J. Appl. Math. 18 (1970), 658-674.
20. T. J. Osler, Leibniz rule, the chain rule and Taylor's theorem for fractional derivatives, Ph.D. thesis, New York University, 1970.
21. T. J. Osler, A further extension of the Leibniz rule to fractional derivatives and its relation to Parseval's formula, SIAM J. Math. Anal. 3 (1972), 1-16.
22. G. Polya and G. Szego, Problems and Theorems in Analysis. Vol. 1, (Translated from the German by D. Aeppli), Springer-Verlag, New York, Heidelberg and Berlin, 1972.
23. H. M. Srivastava, Some generalizations of Carlitz's theorem, Pacific J. Math. 85 (1979), no. 2, 471-477.
24. H. M. Srivastava, Some bilateral generating functions for a certain class of special functions. I, II, Nederl. Akad. Wetensch. Indag. Math. 42 (1980), no. 2, 221-233, 234-246.
25. H. M. Srivastava, Some generating functions for Laguerre and Bessel polynomials, Bull. Inst. Math. Acad. Sinica (1980), no. 4, 571-579.
26. H. M. Srivastava, Some formulas for the Bernoulli and Euler polynomials at rational arguments, Math. Proc. Cambridge Philos. Soc. 129 (2000), no. 1, 77-84.
27. H. M. Srivastava and J. P. Singhal, New generating functions for Jacobi and related polynomials, J. Math. Anal. Appl. 41 (1973), 748-752.
28. H. M. Srivastava, M. Garg, and S. Choudhary, A new generalization of the Bernoulli and related polynomials, Russ. J. Math. Phys. 17 (2010), no. 2, 251-261.
29. H. M. Srivastava, M. Garg, and S. Choudhary, Some new families of generalized Euler and Genocchi polynomials, Taiwanese J. Math. 15 (2011), no. 1, 283-305.
30. H. M. Srivastava and H. L. Manocha, A treatise on generating functions, Transform methods & special functions, Varna '96, 484-495, Bulgarian Acad. Sci., Sofia, 1998.
31. H. M. Srivastava and A. Pinter, Remarks on some relationships between the Bernoulli and Euler polynomials, Appl. Math. Lett. 17 (2004), no. 4, 375-380.
32. J. Touchard, Sur certaines equations fonctionnelles, Proc. Int. Cong. Math. Toronto 1924 (1928), 456-472.
33. R. Tremblay and B.-J. Fugere, Generating functions related to pairs of inverse functions, Transform methods & special functions, Varna '96, Bulgarian Acad. Sci., Sofia (1998), 484-495.
34. R. Tremblay, S. Gaboury, and B.-J. Fugere, A new Leibniz rule and its integral analogue for fractional derivatives, Integral Transforms Spec. Funct. 24 (2013), no. 2, 111-128.
35. R. Tremblay, S. Gaboury, and B.-J. Fugere, A new transformation formula for fractional derivatives with applications, Integral Transforms Spec. Funct. 24 (2013), no. 3, 172-186.
36. R. Tremblay, S. Gaboury, and B.-J. Fugere, Taylor-like expansion in terms of a rational function obtained by means of fractional derivatives, Integral Transforms Spec. Funct. 24 (2013), no. 1, 50-64.
37. W. Wang, C. Jia, and T. Wang, Some results on the Apostol-Bernoulli and Apostol-Euler polynomials, Comput. Math. Appl. 55 (2008), no. 6, 1322-1332.
http://www.sydneytutors.com.au/hsc-chemistry-the-students-guide/hsc-chemistry-industrial-chemistry/equilibrium-equilibrium-constant/
# Identify data, plan and perform a first-hand investigation to model an equilibrium reaction
This experiment really just aims at showing you what equilibrium 'looks' like by using a model. Provided you understand Le Chatelier's Principle thoroughly, I would not be overly concerned about this experiment.
Materials:
• Two 50mL measuring cylinders
• Two pipettes of differing diameters (e.g. one 10ml, one 5ml)
Procedure:
1. Fill one 50 mL measuring cylinder with water, leaving the other measuring cylinder empty.
2. Place one pipette into the first measuring cylinder, letting the pipette lightly touch the bottom. Place your finger over the top of the pipette and move its contents to the second measuring cylinder, letting your finger off the top and releasing the water. Do so carefully to ensure minimum spillage.
3. Place the other pipette into the second measuring cylinder, repeating the second step by emptying the contents of the second pipette into the first measuring cylinder.
4. Note the amount of water in both measuring cylinders and tabulate the results. These results represent the results after 'Cycle 1'. Repeat steps 2 & 3 until Cycle 30 is recorded. Graph the results against each other, with volume on the y-axis and cycle count on the x-axis.
5. Repeat steps 1 to 4, but transfer 10 mL of water from the second measuring cylinder to the first after the 15th cycle and continue as normal. Graph the results using the same axes as before.
Expected results:
You will find that equilibrium will be seen as the water levels on both cylinders tend towards, but do not reach, the halfway mark- 25ml of water in each measuring cylinder. The change will be relatively quick at first, but will eventually slow down to the point where the water level does not change considerably.
Step number 5 will lead to a temporary spike in the volume of water in the first measuring cylinder, and a decrease in the second measuring cylinder. However this will also be rapidly corrected at first, and then more gradually after a while.
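If you want to see the expected shape of these graphs without doing thirty transfers by hand, the model is easy to simulate. The sketch below is illustrative only: it assumes each pipette moves a fixed fraction of whatever is in its cylinder on every cycle, and the fractions 0.2 and 0.1 are made-up stand-ins for the two pipette sizes.

```python
# Simple model of the two-cylinder "equilibrium" experiment.
# Each cycle, pipette 1 moves a fixed fraction of cylinder A into B,
# then pipette 2 moves a fixed fraction of cylinder B back into A.
f_ab, f_ba = 0.2, 0.1        # transfer fractions A->B and B->A (illustrative values)
a, b = 50.0, 0.0             # starting volumes in mL

for cycle in range(1, 31):
    moved = f_ab * a
    a, b = a - moved, b + moved
    moved = f_ba * b
    a, b = a + moved, b - moved
    print(f"cycle {cycle:2d}: A = {a:5.1f} mL, B = {b:5.1f} mL")

# The volumes change quickly at first and then level off where the two
# transfers balance each other, which is the model's version of equilibrium.
```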
# Choose equipment and perform a first-hand investigation to gather information and qualitatively analyse an equilibrium reaction
Given that the nature of this experiment is qualitative, this experiment is quite easily accomplished, as all you need to do is disturb a system in equilibrium, and note the changes. Clearly you must pick a disturbance which has a visible effect, and the simplest disturbance is simply a change in temperature.
Select two ampoules of nitrogen dioxide (NO2) gas. Place one ampoule in a beaker of warm-hot water, and one in a beaker of cold water. You will soon notice that one ampoule- the one which was heated- has turned a reddish brown, whereas the other- the one that was placed in cold water- has become almost completely colourless.
In effect, you are simply observing the following equilibrium:
$N_2O_{4 (g)}$ ↽⇀ $2 NO_{2 (g)}$
As $NO_2$ is a reddish brown, and $N_2O_4$ is colourless, we can effectively deduce that this equation is endothermic (Note that if the reaction was reversed such that nitrogen dioxide gas was a reactant it would be exothermic).
Confirm this by stating the effects as per Le Chatelier’s Principle. If the equation is endothermic, then an increase in heat should see a shift in the equilibrium towards the right, making the gas appear more reddish as more NO2 is produced. If there is a decrease in heat, then the gas would become colourless as more N2O4 is produced. The results are consistent with this hypothesis.
# Explain the effect of changing the following factors on identified equilibrium reactions: Pressure, volume, concentration and temperature
This dotpoint is simply a review of Le Chatelier’s Principle.
Using the following equation as an example, and assuming it is an exothermic reaction:
$2 A_{(g)} + B_{(g)}$ ↽⇀ $C_{(g)} + D_{(g)}$
An increase in pressure would shift the equilibrium to the right. An increase in the volume of the substances is essentially an increase in pressure (Same space, more atoms). As such the equilibrium would shift once more to the right to account for this. An increase in the concentration of one substance will result in the system working to minimise this change. For example, an increase in B would result in a decrease in A and B and an increase in C and D. Therefore the equilibrium has shifted to the right. An increase in temperature in an exothermic reaction will result in a shift in the equilibrium to the left as the system works to reduce the amount of heat produced.
Remember- A closed system will always work to minimise the impact of a disturbance upon the system.
# Interpret the equilibrium constant expression (no units required) from the chemical equation of equilibrium reactions
Let the following equation be any given chemical equation, where A, B, C, and D are any given substances, and a, b, c, and d are their respective molar ratios:
$aA + bB$ ↽⇀ $cC + dD$
When this equation is in equilibrium, a constant K, also known as the equilibrium constant, can be obtained using the following expression:
$K = \frac{[C]^c[D]^d}{[A]^a[B]^b}$
It helps to keep in your mind that the products are always on top of the reactants when calculating K. One way to remember this is that P comes before R alphabetically.
K provides much information if it can be read successfully, as it tells us how far towards completion an equilibrium currently is. Think of it as a scale. At one end no reactions have occurred and there are only reactants and no products. At the other end, everything has been reacted and there are no reactants, only products. A point of equilibrium always lies between these two points. The value of K tells us exactly where on this scale the equilibrium currently sits.
The smaller the value of K, the lower on the scale you'll find the equilibrium: mostly reactants, very little product (generally where $K < 10^{-4}$). The larger the value of K, the higher up on the scale you'll find the equilibrium: mostly products, very little reactant (generally where $K > 10^{4}$). In between these two values there is a moderate mixture of products and reactants, with the ratio depending upon the exact value of K, and therefore where on the scale the equilibrium can be found.
When a reaction is not at equilibrium, Q replaces K in the same expression. If Q is less than K, then the reaction is lower on the scale than the point of equilibrium, and thus the concentration of products must be increased in order to achieve equilibrium. Conversely, if Q is larger than K, then the reaction is higher on the scale than the point of equilibrium, and thus the concentration of reactants must be increased in order to achieve equilibrium.
Remember- K is the equilibrium constant, calculated by dividing the concentrations of the products, each raised to the power of its number of moles, by the corresponding expression for the reactants. A lower value of K indicates an equilibrium where very few products are produced, whereas a high value of K indicates an equilibrium which is close to completion.
Example:
$N_{2(g)} + 3 H_{2(g)}$ ↽⇀ $2 NH_{3(g)}$
If the above equation is at equilibrium when there are 3 mol of nitrogen gas, 2 mol of hydrogen gas, and 2 mol of ammonia (taking the mole amounts as concentrations, i.e. assuming a 1 L vessel), then:
$K = \frac{[NH_3]^2}{[N_2][H_2]^3} = \frac{2^2}{3 \times 2^3} = \frac{4}{24} \approx 0.17$
# Process and present information from secondary sources to calculate K from equilibrium conditions
The easiest way to grasp how to calculate K is really to go through a worked example.
Continuing from the Haber process example, let us begin with the equation:
$N_{2 (g)} + 3 H_{2 (g)}$ ↽⇀ $2 NH_{3 (g)}$
At the beginning of an experiment, there were 2.1mol of nitrogen gas, and 6.9mol of hydrogen gas. The reaction was allowed to proceed to equilibrium in a 10L container, at which point 1.2mol of N2 was remaining. What is the value of K, assuming a fixed temperature?
The first step is to note the concentration of each substance individually at equilibrium. If 1.2 mol of N2 is remaining, then 0.9 mol must have been converted. Therefore 3 x 0.9 mol of H2 must also have been converted, leaving 4.2 mol of H2 (for every one mole of nitrogen gas, 3 moles of hydrogen gas are converted, as per the chemical equation). In addition, there must now be 2 x 0.9 mol of NH3.
We now know the concentrations of each substance. The next trick is to note that, when dealing with gases, you must account for the size of the container, as it is very rarely a simple 1L in exams. You must do this because concentration is proportional to pressure, and the size of a container influences the pressure.
Once the 10 L has been taken into account, K can be calculated:
$K = \frac{[NH_3]^2}{[N_2][H_2]^3} = \frac{(0.18)^2}{(0.12)(0.42)^3} \approx 3.6$
Note that the mole amounts have been divided by 10 (converted into concentrations) to account for the 10 L container.
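The same bookkeeping can be scripted, which is a handy way to check the arithmetic. The Python sketch below simply re-does the worked example above; the variable names are illustrative only.

```python
# Worked Haber-process example: N2 + 3 H2 <=> 2 NH3 in a 10 L vessel.
V = 10.0                                   # container volume in L
n2_initial, h2_initial = 2.1, 6.9          # initial amounts in mol
n2_eq = 1.2                                # N2 remaining at equilibrium (given)

reacted = n2_initial - n2_eq               # 0.9 mol of N2 reacted
h2_eq = h2_initial - 3 * reacted           # 3 mol H2 consumed per mol N2
nh3_eq = 2 * reacted                       # 2 mol NH3 formed per mol N2

# Convert amounts to concentrations before substituting into K
N2, H2, NH3 = n2_eq / V, h2_eq / V, nh3_eq / V

K = NH3**2 / (N2 * H2**3)
print(f"[N2]={N2:.2f}  [H2]={H2:.2f}  [NH3]={NH3:.2f}  K={K:.2f}")
```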
# Identify that temperature is the only factor that changes the value of the equilibrium constant (K) for a given equation
If volume, concentrations or pressure change, then the numerator and denominator used in the calculation of the equilibrium constant K shift correspondingly, cancelling out the effect of one another. As such, these disturbances do not impact upon K.
However, when the temperature is changed, then K does in fact change.
For endothermic reactions, if the temperature increases then K increases (remember how the equilibrium 'shifts to the right', towards the products, according to Le Chatelier's Principle; this means it moves up the scale). Thus for exothermic reactions, if the temperature increases then K decreases.
Remember- A change in temperature is the only factor which changes the value of K. Changes in volume, concentration, and pressure all have no effect on the value of the equilibrium constant.
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-7-exponents-and-exponential-functions-7-3-multipying-powers-with-the-same-base-practice-and-problem-solving-exercises-page-430/56
## Algebra 1
$1.5 \times 10^{8}$
$a^m \cdot a^n = a^{m+n}$ Multiply the corresponding parts and use the rule above when necessary to obtain: $=(0.5\cdot 0.3) \times (10^{13} \cdot 10^{-4}) \\=0.15 \times 10^{13+(-4)} \\=0.15 \times 10^{9} \\=1.5(10^{-1}) \times 10^{9} \\=1.5 \times 10^{-1+9} \\=1.5 \times 10^{8}$
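A quick numerical check of the result (assuming the original product is $(0.5 \times 10^{13}) \times (0.3 \times 10^{-4})$, which is what the working implies):

```python
# Verify the scientific-notation product numerically
result = 0.5e13 * 0.3e-4
print(result)             # 150000000.0
print(f"{result:.1e}")    # 1.5e+08, i.e. 1.5 x 10^8
```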
https://stacktuts.com/how-to-obtain-the-absolute-path-of-a-file-via-shell-bash-zsh-sh
# How to obtain the absolute path of a file via shell (bash/zsh/sh)?
## Introduction
Obtaining the absolute path of a file via Shell (BASH/ZSH/SH) can be done in a few different ways. Here are several methods that you can use:
## Method 1: Using the `pwd` and `dirname` commands
• Step 1: Navigate to the directory containing the file for which you want to obtain the absolute path.
• Step 2: Run the command `pwd` to print the current working directory. This will give you the absolute path of the directory containing the file.
• Step 3: Use the command `dirname` followed by the path of the file to obtain the directory that contains it. For example, if the file is named "example.txt" and you are currently in the directory "/home/user/documents", the command `dirname /home/user/documents/example.txt` will return "/home/user/documents"; appending the file name to this gives the full path "/home/user/documents/example.txt".
## Method 2: Using the `realpath` command
• Step 1: Run the command `realpath` followed by the name of the file for which you want to obtain the absolute path. For example, if the file is named "example.txt" and it is located in the directory "/home/user/documents", the command `realpath /home/user/documents/example.txt` will return the full path of the file "/home/user/documents/example.txt".
## Method 3: Using the `readlink` command
• Step 1: Run the command `readlink -f` followed by the name of the file for which you want to obtain the absolute path. For example, if the file is named "example.txt" and it is located in the directory "/home/user/documents", the command `readlink -f /home/user/documents/example.txt` will return the full path of the file "/home/user/documents/example.txt".
In all the above examples, the commands assume that the file exists and you have read permissions to that file.
It's important to understand what each command does:
• `pwd` command prints the current working directory.
• `dirname` command is used to obtain the name of the directory containing a file.
• `realpath` command is used to obtain the real path of a file, resolving any symlinks in the process.
• `readlink` command is used to obtain the real path of a file, resolving any symlinks in the process.
You can use any of these commands based on your shell and the scenario.
## Method 4: Using the `basename` command
• Step 1: Run the command `basename` followed by the name of the file for which you want to obtain the absolute path. For example, if the file is named "example.txt" and it is located in the directory "/home/user/documents", the command `basename /home/user/documents/example.txt` will return the name of the file "example.txt".
## Method 5: Using the `cd` command
• Step 1: Run the command `cd` followed by the directory containing the file for which you want to obtain the absolute path.
• Step 2: Run the command `pwd` to print the current working directory. This will give you the absolute path of the directory containing the file.
## Method 6: Using `$(pwd)`
• Step 1: Run the command `cd` followed by the directory containing the file for which you want to obtain the absolute path.
• Step 2: Run the command `echo $(pwd)` to print the current working directory. This will give you the absolute path of the directory containing the file.
It's important to understand what each command does:
• `basename` command is used to obtain the name of a file without the directory path.
• `cd` command is used to change the current working directory.
• `$(pwd)` command is used to expand the value of the command `pwd` in the current shell.
In all the above examples, the commands assume that the file exists and you have read permissions to that file. You can use any of these commands based on your shell and the scenario.
It's also important to mention that, in all the above methods, if you're providing a relative path, it will be resolved to an absolute path based on the current working directory.
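If you need the same information from inside a program rather than an interactive shell, the Python standard library offers direct equivalents. This sketch is not part of the original article, and the example path is made up:

```python
# Programmatic equivalents of the shell commands discussed above
import os

path = "documents/example.txt"                 # a relative path, for illustration

print(os.path.abspath(path))                   # resolves against the current working directory
print(os.path.realpath(path))                  # additionally resolves symbolic links
print(os.path.dirname(os.path.abspath(path)))  # directory part, like dirname
print(os.path.basename(path))                  # file name part, like basename
```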
## Conclusion
In conclusion, obtaining the absolute path of a file via Shell (BASH/ZSH/SH) can be done using a variety of different commands such as `pwd`, `dirname`, `realpath`, `readlink`, `basename`, `cd`, and `$(pwd)`. Each command has its own specific use case and can be used in different scenarios. It is important to understand the usage of each command and how it can be used to obtain the absolute path of a file. It's also important to mention that, in all the above methods, if you're providing a relative path, it will be resolved to an absolute path based on the current working directory. I hope this tutorial has provided you with a good understanding of how to obtain the absolute path of a file via Shell.
https://lavelle.chem.ucla.edu/forum/search.php?author_id=8599&sr=posts
## Search found 9 matches
Sat Mar 18, 2017 6:39 pm
Forum: *Alcohols
Topic: difference between hydroxy and alcohol
Replies: 2
Views: 836
### Re: difference between hydroxy and alcohol
I think it has to do with where the OH is attached. I think an OH on a main carbon is an alcohol. E.g. carboxylic acid has an OH, but also has another O attached to the C.
Sun Mar 12, 2017 7:41 pm
Forum: *Alkanes and Substituted Alkanes (Staggered, Eclipsed, Gauche, Anti, Newman Projections)
Topic: Staggered vs. Eclipsed
Replies: 2
Views: 338
### Re: Staggered vs. Eclipsed
Yes. In the eclipsed formation, the H atoms in an organic molecule are closer together. Electrostatics dictates that this is less favorable, and thus requires energy to achieve this orientation.
Fri Mar 03, 2017 2:50 pm
Forum: *Calculations Using ΔG° = -RT ln K
Topic: Kinetics in organic rxns
Replies: 1
Views: 589
### Kinetics in organic rxns
I was just a little confused about to what extent kinetics of organic reactions will be discussed on the quiz next week. I know that the Arrhenius equation and activation energy will be one of the topics discussed, but I remember hearing that there would be content about the kinetics of organic rxns...
Wed Feb 22, 2017 7:26 pm
Forum: Arrhenius Equation, Activation Energies, Catalysts
Topic: Activation energy
Replies: 3
Views: 361
### Re: Activation energy
The reaction with the higher activation energy essentially requires more ambient energy to initiate the reaction. Changing the temperature means changing the amount of ambient energy, so a rxn with a high activation energy is initiated more easily when there is ample energy (higher temp). A rxn with...
Mon Feb 13, 2017 5:19 pm
Forum: Appications of the Nernst Equation (e.g., Concentration Cells, Non-Standard Cell Potentials, Calculating Equilibrium Constants and pH)
Topic: Faraday Constant
Replies: 3
Views: 389
### Faraday Constant
For the Value of F in the Nernst equation and any other application of the Faraday constant, is it scaled in response to the number of electrons being transferred in the redox reaction in question? Or is it always the value of 96,485 C/mol?
Wed Feb 01, 2017 10:03 pm
Forum: Thermodynamic Definitions (isochoric/isometric, isothermal, isobaric)
Topic: Adiabatic vs Isothermal
Replies: 2
Views: 520
### Re: Adiabatic vs Isothermal
Based on that definition, an adiabatic process is by its very nature isothermal, as a process that involves no heat exchange would experience a constant temperature. I don't believe this would affect calculations, as the isothermal designation is the most common.
Thu Jan 26, 2017 6:14 pm
Forum: Heat Capacities, Calorimeters & Calorimetry Calculations
Topic: Molar or Specific heat
Replies: 2
Views: 307
### Re: Molar or Specific heat
Usually only one will be given to you, but if there is some ambiguity, you usually can tell by the amount that's given in the problem (i.e. if they give a smaller amount, use specific heat capacity, or if they give you one mol, it'd be molar heat capacity).
Fri Jan 20, 2017 11:34 pm
Forum: Calculating Work of Expansion
Topic: entropy equations
Replies: 3
Views: 326
### Re: entropy equations
I don't think it means volume is equal to work, but work is measured in the change in the volume of a system (the work done by the system or the work done to a system). I believe it is related to the relation q=-w, which is equal to nRT ln V2/V1. I believe thats how the W2/W1 becomes the V2/V1 in th...
Thu Jan 12, 2017 9:56 pm
Forum: Reaction Enthalpies (e.g., Using Hess’s Law, Bond Enthalpies, Standard Enthalpies of Formation)
Topic: Standard Enthalpy of Formation
Replies: 2
Views: 384
### Re: Standard Enthalpy of Formation
It is my understanding that standard enthalpies of formation can have both negative and positive values because it indicates the heat involved in the reaction that forms a substance. A negative enthalpy of formation implies that the formation releases heat, and a positive enthalpy of formation impli...
https://www.physicsforums.com/threads/related-rates-homework-problem.143549/
# Related Rates Homework Problem
1. Nov 13, 2006
### muna580
I am trying to do the problem below but I don't understand how to do it. Can you please show me how to do it? DON'T give me the answer, explain to me how to get the answer.
Point C moves at a constant rate along a semicircle centered at O from A to B. The radius of the semicircle is 10 cm, and it takes 30 sec for C to move from A to C. Angle COB has measure y radians, angle OCA has measure z radians, and AC = x cm as indicated in the figure.
a) What is the rate of change, in radians per sec, of x with respect to time?
b) What is the rate of change, in radians per sec, of y with respect to time?
c) x and y are related by the Law of Cosines; that is, y^2 = 10^2 + 10^2 - 2(10)(10)cos y. What is the rate of change of x with respect to time when y = π/2 radians?
d) Let D be the area of ΔOAC. Show that D is largest when x = π/2 radians.
Last edited by a moderator: Apr 22, 2017
2. Nov 13, 2006
### HallsofIvy
Staff Emeritus
The first thing you should do is go back and take the time to copy the problem correctly! You have consistently confused x and y!
You just told us that x is measured in cm, not radians! Do you mean "in cm per sec" or do you mean rate of change of y?
Well, they've pretty much given you the answer right there! Except that, of course you mean x^2= 10^2+ 10^2- 2(10)(10)cos y. Differentiate both sides of that with respect to t. You were also told that "it takes 30 sec for C to move from A to C" which doesn't really make sense. I think you meant that it take 30 sec for the moving point to move from A to C. Unless you are given some information about exactly where C is, I don't see how that helps you! Since they specify y= $\pi$/2 radians, do they mean it take 30 seconds to go from A to $\pi$/2 radians?
The altitude of that triangle is 10 sin(y). (And again, x cannot be "$\pi/2$ radians", it is a length. Presumably, you meant y.)
Last edited by a moderator: Apr 22, 2017
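For the differentiation step HallsofIvy describes (differentiating the Law of Cosines relation with respect to t), here is a small symbolic sketch using sympy. It only illustrates that single step and is not a solution to the assignment; the symbol names are just illustrative.

```python
# Differentiate x^2 = 10^2 + 10^2 - 2*10*10*cos(y) with respect to t,
# treating x and y as functions of t.
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)
y = sp.Function('y')(t)

relation = sp.Eq(x**2, 200 - 200*sp.cos(y))
differentiated = sp.Eq(sp.diff(relation.lhs, t), sp.diff(relation.rhs, t))
print(differentiated)   # 2*x*(dx/dt) = 200*sin(y)*(dy/dt)
```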
3. Mar 9, 2011
### alyssajune
Can you help me with the problem below except:
angle AOC = x
angle ACO = y
and AC = s
Thanks!!!
4. Mar 9, 2011
### Char. Limit
Aside from the fact that you're hijacking a four-year-old thread, you need to show some work before we help you with a problem. It's forum policy.
http://tex.stackexchange.com/questions/85564/creating-two-filled-plots-that-overlap-with-transparency
# Creating two filled plots that overlap with transparency
I'm using pgfplots to plot two filled curves. I'd like them to overlap transparently so that the "lower" figure can be seen through the "upper one". Using the forget style mode will draw the first figure transparently, but that's the one that is below !
MWE:
\begin{tikzpicture}
\begin{axis}[ybar,bar width=2pt]
\end{axis}
\end{tikzpicture}
Is there a direct way to get somewhat transparent overlapping plots ?
add fill opacity=0.5 to the addplot command and please include a simple MWE which in this case 6-7 lines long so easier than typing out the question :). – percusse Dec 5 '12 at 0:34
The problem with what you recommend is that it doesn't appear to work for bar charts. – Suresh Dec 5 '12 at 0:48
I think I figured out my problem. I need to use draw opacity instead of fill opacity. Not sure whether I should delete the question. Ideally @percusse can post an answer and I'll accept it, since that's what got me on the right track. – Suresh Dec 5 '12 at 0:53
You can use fill opacity key to change the ...hmm... fill opacity.
\documentclass{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.7}
\begin{document}
\begin{tikzpicture}
\begin{axis}[enlargelimits=false]
\addplot+[ybar, bar width=4pt, fill opacity=0.5] coordinates {(1,3) (2,5) (3,2)}; % made-up data; the rest of the original answer was cut off
\addplot+[ybar, bar width=4pt, fill opacity=0.5] coordinates {(1,4) (2,1) (3,6)};
\end{axis}
\end{tikzpicture}
\end{document}
https://iwaponline.com/wst/article-abstract/54/11-12/257/13877/Temperature-and-conductivity-as-control-parameters?redirectedFrom=fulltext
Most sewer system performance indicators are not easily measurable online at high frequencies in wastewater systems, which hampers real-time control with those parameters. Instead of using a constituent of wastewater, an alternative could be to use characteristics of wastewater that are relatively easily measurable in sewer systems and could serve as indicator parameters for the dilution process of wastewater. This paper focuses on the possibility of using the parameters of temperature and conductivity. It shows a good relation of temperature and conductivity with the dilution of DWF (dry weather flow) during WWF (wet weather flow), using a monitoring station in Graz, Austria, as an example. The simultaneous monitoring of both parameters leads to valuable back-up information in case one parameter (temperature) shows no reaction to a storm event. However, for various reasons, anomalies occur in the typical behaviour of both parameters. The frequency and extent of these anomalies will determine the usefulness of the proposed parameters in a system for pollution-based real-time control. Both the normal behaviour and the anomalies will be studied further by means of trend and correlation analyses of data to be obtained from a monitoring network for the parameters of interest that is currently being set up in the Netherlands.
http://mathhelpforum.com/algebra/214046-need-assistance-algebra-word-problem.html
# Thread: Need assistance on algebra word problem
1. ## Need assistance on algebra word problem
solve the multiplication problem 23*27 by writing equations that use expanded forms and the distributed property. relate the equation to the steps in partial-products algorithm for calculating 23*27. Use this to explain why the partial-products algorithm calculates the correct answer to 23*27.
2. ## Re: Need assistance on algebra word problem
Originally Posted by green11
solve the multiplication problem 23*27 by writing equations that use expanded forms and the distributed property. relate the equation to the steps in partial-products algorithm for calculating 23*27. Use this to explain why the partial-products algorithm calculates the correct answer to 23*27.
What is a "partial-products algorithm?" Do you perhaps mean something like this: (20 + 3)(20 + 7)?
-Dan
3. ## Re: Need assistance on algebra word problem
no, something like this
Partial-products algorithm example:
Original problem:
764
x 58
------
32 (8 x 4)
480 (8 x 60)
5,600 (8 x 700)
200 (50 x 4)
3,000 (50 x 60)
35,000 (50 x 700)
------
44,312 (total after adding the amounts, 32 + 480 + ...)
4. ## Re: Need assistance on algebra word problem
Dan's interpretation of your problem is correct.
We can consider the product of (764)x(58) as (700 + 60 + 4) x (50 + 8) = (8+50) x (4 + 60 + 700) by commutativity of addition and multiplication
(8+50) x (4+60+700), using the FOIL method / distributive property, is 8*4 + 8*60 + 8*700 + 50*4 + 50*60 + 50*700
For 23 x 27 it's exactly the same process and you start with the form Dan outlined for you already.
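To make the link between the expanded forms and the partial products concrete, here is a small Python sketch (purely illustrative; the helper names are made up) that rebuilds the 764 x 58 table and checks 23 x 27 the same way:

```python
# Partial-products multiplication via expanded (place-value) forms
def place_value_parts(n):
    """764 -> [700, 60, 4]; zero digits are dropped."""
    parts, place = [], 1
    while n:
        n, digit = divmod(n, 10)
        if digit:
            parts.append(digit * place)
        place *= 10
    return parts[::-1]

def partial_products(a, b):
    return [(pa, pb, pa * pb) for pb in place_value_parts(b) for pa in place_value_parts(a)]

table = partial_products(764, 58)
for pa, pb, prod in table:
    print(f"{pb} x {pa} = {prod}")
print("total =", sum(prod for _, _, prod in table))                       # 44312
print("23 x 27 =", sum(prod for _, _, prod in partial_products(23, 27)))  # 621
```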
http://mathhelpforum.com/calculus/60706-real-analysis-help-series-convergence.html
Math Help - Real Analysis Help. Series Convergence
1. Real Analysis Help. Series Convergence
i would love if someone could help me with 2 problems im having trouble with.
1) Let $a_1, a_2, a_3$, ... be a decreasing sequence of positive numbers. Show that $a_1+a_2+a_3+$... converges if and only if $a_1+2a_2+4a_4+8a_8+$... converges. i saw a similar one on here a couple days ago but this is slightly different
2) Show that a power series $\sum\limits_{n = 1}^\infty {c_nx^n }$ has the same radius of convergence as $\sum\limits_{n = 1}^\infty {c_{n+m}x^n }$, for any positive integer m
thanks
2. Here is the problem: $a_n > a_{n + 1} > 0\,,\,\sum\limits_{n = 1}^\infty {a_n } \,\& \,\sum\limits_{n = 0}^\infty {2^n a_{2^n } }$.
Using that $a_n$ is decreasing, then following shows it in one direction.
$\begin{gathered} a_1 + \underbrace {a_2 + a_3 }_{} + \underbrace {a_4 + a_5 + a_6 + a_7 }_{} + \underbrace {a_8 + a_9 + a_{10} + a_{11} + a_{12} + a_{13} + a_{14} + a_{15} }_{} + \cdots \hfill \\
\leqslant a_1 + 2a_2 + 4a_4 + 8a_8 \cdots \hfill \\
\end{gathered}$
Now note that:
$\begin{gathered} a_2 + \underbrace {a_3 + a_4 }_{} + \underbrace {a_5 + a_6 + a_7 + a_8 }_{} + \underbrace {a_9 + a_{10} + a_{11} + a_{12} + a_{13} + a_{14} + a_{15} + a_{16} }_{} + \cdots \hfill \\ \geqslant a_2 + 2a_4 + 4a_8 + 8a_{16}\cdots \hfill \\ \end{gathered}$
Also note that if $\sum\limits_{n = 1}^\infty {a_n }$ converges then $\sum\limits_{n = 2}^\infty {2a_n }$ converges.
Can you finish?
3. Originally Posted by Plato
Here is the problem: $a_n > a_{n + 1} > 0\,,\,\sum\limits_{n = 1}^\infty {a_n } \,\& \,\sum\limits_{n = 0}^\infty {2^n a_{2^n } }$.
Using that $a_n$ is decreasing, then following shows it in one direction.
$\begin{gathered} a_1 + \underbrace {a_2 + a_3 }_{} + \underbrace {a_4 + a_5 + a_6 + a_7 }_{} + \underbrace {a_8 + a_9 + a_{10} + a_{11} + a_{12} + a_{13} + a_{14} + a_{15} }_{} + \cdots \hfill \\
\leqslant a_1 + 2a_2 + 4a_4 + 8a_8 \cdots \hfill \\
\end{gathered}$
Now note that:
$\begin{gathered} a_2 + \underbrace {a_3 + a_4 }_{} + \underbrace {a_5 + a_6 + a_7 + a_8 }_{} + \underbrace {a_9 + a_{10} + a_{11} + a_{12} + a_{13} + a_{14} + a_{15} + a_{16} }_{} + \cdots \hfill \\ \geqslant a_2 + 2a_4 + 4a_8 + 8a_{16}\cdots \hfill \\ \end{gathered}$
Also note that if $\sum\limits_{n = 1}^\infty {a_n }$ converges then $\sum\limits_{n = 2}^\infty {2a_n }$ converges.
Can you finish?
thanks for the help i appreciate it. proofs have never been my strong point would u mind helping me finish this?
also can anyone help me with the second question?
4. Originally Posted by megamet2000
2) Show that a power series $\sum\limits_{n = 1}^\infty {c_nx^n }$ has the same radius of convergence as $\sum\limits_{n = 1}^\infty {c_{n+m}x^n }$, for any positive integer m
thanks
http://www.mathhelpforum.com/math-he...er-series.html
5. awesome thank you very much
6. does anyone have any tips on how to finish number 1?
7. Originally Posted by megamet2000
does anyone have any tips on how to finish number 1?
What are you having problems with? Plato practically gave you the solution.
8. i dont understand the purpose of the braces. i get how $a_n<2^na_{2^n}$ but after he says "now note that" i dont get how that sequence is now greater than the last sequence
9. The first inequality tells us that $\sum\limits_{k = 1}^\infty {a_k } \leqslant \sum\limits_{k = 0}^\infty {2^k a_{2^k } }$.
So if $\sum\limits_{k = 0}^\infty {2^k a_{2^k } }$ converges then $\sum\limits_{k = 1}^\infty {a_k }$ converges.
Likewise, the second inequality tells us $\sum\limits_{k = 1}^\infty {2^{k - 1} a_{2^k } } \leqslant \sum\limits_{k = 2}^\infty {a_k }$.
By comparison both converge.
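Not a substitute for the comparison argument above, but a quick numerical illustration of how the original series and the condensed series behave together; the test cases 1/n^2 (convergent) and 1/n (divergent) are chosen only for illustration.

```python
# Compare partial sums of sum a_n with the condensed series sum 2^k * a_(2^k)
def partial_sums(a, N, K):
    plain = sum(a(n) for n in range(1, N + 1))
    condensed = sum(2**k * a(2**k) for k in range(0, K + 1))
    return plain, condensed

cases = [("1/n^2 (convergent)", lambda n: 1.0 / n**2),
         ("1/n   (divergent) ", lambda n: 1.0 / n)]
for name, a in cases:
    plain, condensed = partial_sums(a, N=2**15, K=15)
    print(f"{name}: sum a_n ~ {plain:.3f}, sum 2^k a_(2^k) ~ {condensed:.3f}")
```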
https://charalampakis.com/publications/conference-proceedings/c05-robust-identification-of-bouc-wen-hysteretic-systems-by-sawtooth-ga-and-bounding
C05 - Robust identification of Bouc-Wen hysteretic systems by sawtooth GA and bounding
Title: Robust identification of Bouc-Wen hysteretic systems by sawtooth GA and bounding
Authors: Charalampakis AE, Koumousis VK
Type: Conference paper
Conference: COMPDYN 2007
Venue: Rethimno, Crete, Greece
Date: 2007
Language: English
[abstract]
Hysteresis is a term that describes macroscopically many phenomena observed in engineering. The complexity of the actual mechanism behind hysteresis has given rise to the extended use of phenomenological models, such as the Bouc-Wen model. This paper presents a new stochastic identification scheme for Bouc-Wen systems that combines Sawtooth Genetic Algorithm and a Bounding technique that gradually focuses into smaller and better regions of the search space. Numerous studies show that the proposed scheme is very robust and insensitive to noise-corrupted data. Apart from frequency-independent hysteretic characteristics, the method is also able to identify viscous-type damping.
[cite as]
Charalampakis AE, Koumousis VK. Robust identification of Bouc-Wen hysteretic systems by sawtooth GA and bounding. Proc COMPDYN 2007, Rethimno, Crete, Greece; 2007, paper 1588.
[attached software : myBWID : executable manual]
https://www.physicsforums.com/threads/trying-to-fully-grasp-limit-definition.272009/
# Trying to fully grasp limit definition
1. Nov 15, 2008
### snipez90
1. The problem statement, all variables and given/known data
An example in the text that involves showing that $$x^2sin\frac{1}{x}$$ approaches 0 as x approaches 0.
2. Relevant equations
$$\epsilon -\delta$$ argument
3. The attempt at a solution
I can prove many limits efficiently now using $$\epsilon -\delta$$ but I don't think I am that flexible with the definition. I don't feel that I fully understand it. For instance, in this example, it's easy to choose $$\delta = \sqrt{\epsilon}$$ and after noting that $$|sin\frac{1}{x}| \leq 1$$, the proof is very short.
But in this part of the text, Spivak assumes that the reader does not know the definition yet. He argues that for $$|x^2sin\frac{1}{x}|$$ to be less than $$\epsilon$$, it is only required that $$|x| < \epsilon$$ and $$x \neq 0$$, provided that $$\epsilon \leq 1$$. This makes sense because $$|x^2| = |x|^2 \leq |x|$$ for $$|x| \leq 1$$ and hence the stated bound $$\epsilon \leq 1$$. If $$\epsilon > 1$$, then it is required that $$|x| < 1$$ and $$x \neq 0$$.
This approach may seem more trouble than it's worth since $$\delta = \sqrt{\epsilon}$$ apparently works well. But when trying to write up a proof based on the above approach, I had a hard time. I understood his approach but it seemed weird to be considering two different epsilon cases. After working off of $$|f(x) - L|$$ , I quickly got to $$|x^2sin\frac{1}{x}| \leq |x|^2$$, but am now stuck. I know from Spivak's argument that I can have $$|f(x)-L| < |x|^2 < |x| < \epsilon$$ for $$\epsilon \leq 1$$ but how do I choose $$\delta$$? Does choosing delta = min{a,b}, where a and b are real numbers, have something to do with this approach?
I guess choosing delta to be the min of two real quantities is still unclear to me. Basically for harder limit proofs, I manipulate the |f(x) - L| term until I get a quantity that has the |x-a| term (which is < delta). Then if multiple terms are involved, I assume that delta is bounded above by some z > 0 (usually 1 or a fraction less than 1) and find a bound for each of the terms beside the |x-a| term. Multiplying the bounds on these terms gives a bound B on the |x-a| term so that $$B|x-a| < B\delta$$. Then I just choose delta to be $$\frac{\epsilon}{B}$$.
Right now, my idea of why this approach is justified is that we can choose delta. Once we do enough manipulations to find the other delta choice dependent on epsilon, then epsilon can vary so that we can always find a delta for which the limit definition holds. But this is all very hazy and if someone could clarify my reasoning or justify my approach and why it works it would be greatly appreciated.
2. Nov 15, 2008
### Office_Shredder
Staff Emeritus
That's exactly it. If you can find an easy proof of the limit for small epsilon, say epsilon smaller than M, then when epsilon is larger than M, you can pick a delta that worked when epsilon was smaller than M and it still trivially works
3. Nov 15, 2008
### snipez90
Hmm ok I think that makes sense. For some reason I kept thinking that min meant that I could only pick one of the quantities that delta can equal. But since delta bounds two quantities, then no matter what epsilon > 0 is chosen, then the |f(x) - L| term will be less than epsilon.
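For reference, here is one standard way the min-based choice gets written out for this particular example (just a sketch, and certainly not the only valid write-up): given $$\epsilon > 0$$, take $$\delta = \min(1, \epsilon)$$. If $$0 < |x| < \delta$$, then both $$|x| < 1$$ and $$|x| < \epsilon$$, so
$$\left|x^2\sin\frac{1}{x} - 0\right| \leq |x|^2 \leq |x| < \epsilon.$$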
https://physics.stackexchange.com/questions/514297/why-doesnt-the-block-fall/514313
# Why doesn't the block fall?
I came upon this question as I was going through the concepts of tension. Well according to Newton's third law- every action has an equal and opposite reaction. Here my question is that if the tension at point B balances the tension at point A then which force balances mg as it can't be balanced by the reaction force of mg which attracts the earth towards the mass as it is not in contact with the mass. Then why doesn't the mass fall?
• Be careful of third-law reaction forces. They act on different bodies. The tension force at A acts on the block, the tension force at B acts on the ceiling. – electronpusher Nov 16 '19 at 22:52
• What keeps the ceiling from falling? – Adrian Howard Nov 16 '19 at 23:53
• Please someone draw the diagram of forces which balance each other – Janstew Nov 16 '19 at 23:59
• Is B attached to something that is falling, or the ceiling of a building, or what? – Adrian Howard Nov 17 '19 at 0:06
• @AdrianHoward it's a fixed platform – Janstew Nov 17 '19 at 0:09
The two forces acting on the mass are gravity and tension from the rope. This is seen in your free body diagram on the right side of your image. The mass does not fall (or rise) because these forces are of equal magnitude and opposite direction, thus the net force acting on the mass is $$0$$.
Yes, if we look at the rope the force pulling it down at $$A$$ is equal and opposite to the force pulling it up at $$B$$, but this doesn't make the force at $$A$$ suddenly nonexistent for the mass. There is a downward force acting on the rope at $$A$$, and by Newton's third law this means there is an upward force acting on the mass at $$A$$. This is the upward force discussed above. Note that the forces at $$A$$ and $$B$$ acting on the rope do not form a Newton's third law pair.
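Written out as equations (a sketch, assuming the usual idealisation of a massless rope): for the block, taking up as positive,
$$T - mg = 0 \quad\Rightarrow\quad T = mg,$$
while for the rope, the downward pull of the block at $$A$$ is balanced by the upward pull of the ceiling at $$B$$, so the rope also has zero net force.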
http://bootmath.com/find-all-real-functions-fmathbb-r-to-mathbb-r-satisfying-the-equation-fx2y-fxx-fxy.html
# Find all real functions $f:\mathbb {R} \to \mathbb {R}$ satisfying the equation $f(x^2+y.f(x))=x.f(x+y)$
Find all real functions $f:\mathbb {R} \to \mathbb {R}$ satisfying the equation
$f(x^2+y.f(x))=x.f(x+y)$
My attempt –
Clearly $f(0)=0$
Putting $x^2=x,y.f(x)=1$, we have $f(x+1)=x.f(x+y)$.
Now putting $x=x-1$,we have $f(x)=(x-1)f(x-1+y)$
Putting $x=0$, we have $f(0)=-1.f(y-1)$ or $f(y-1)=0$ (since $f(0)=0$)
Finally putting $y=(x+1)$ gives us $f(x)=0$.
This is one of the required functions. But $f(x)=x$ also satisfies the equation. How to achieve this? One of my friends said that the answer $f(x)=x$ could be obtained by using Cauchy's theorem, but when I searched the internet, I could not find any theorem of Cauchy related to functions. Does any such theorem exist? If yes, what is it and how can it be used to solve the functional equation? Is there a way similar to the method of getting the first solution to achieve the second one?
#### Solutions
You proved that $f(0)=0$, we can continue from here
Suppose that there exist $k \neq 0$ such that $f(k)=0$.
Then plugging $x=k,y=y-k$ gives, $f(k^2) = f(k^2 + (y-k)f(k)) = kf(k+(y-k))=kf(y)$, which means that $f(x)$ is a constant function and so $f(x)=0$.
Now suppose that there exist no $k \neq 0$ such that $f(k)=0$.
Then plugging $x=x,y=-x$ gives $f(x^2-xf(x)) = xf(0)= 0$, which by assumption means $x^2=xf(x)$ or $f(x)=x$.
So we conclude that the possible functions are $f(x)=x$ and $f(x)=0$, $\forall x \in \mathbb{R}$.
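As a quick sanity check of this conclusion (not part of the argument), both candidates can be substituted back into the original equation symbolically; the sympy snippet below is illustrative only.

```python
# Verify that f(x) = x and f(x) = 0 satisfy f(x^2 + y*f(x)) = x*f(x + y)
import sympy as sp

x, y = sp.symbols('x y', real=True)

for f in (lambda t: t, lambda t: 0*t):
    lhs = f(x**2 + y*f(x))
    rhs = x*f(x + y)
    print(sp.simplify(lhs - rhs) == 0)   # True for both candidates
```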
As you said $f(0)=0$
$$f(x^2+y.f(x))=x.f(x+y)$$
Take $y=0\implies f(x^2)=xf(x)$. So for $p>0$ (p for positive; also I took this partition because $x^{1/2^n}\mid n\in\mathbb N$ is defined only for $x>0$): $$f(p^2)=p(\sqrt pf(\sqrt p))=\lim_{n\to\infty}p^{\displaystyle \left(\sum_{k=0}^{n}\frac1{2^k}\right)}f\left(p^{1/2^n}\right)=p^2f(1)\\f(p)=kp\tag{p>0}$$ (the last limit step assumes $f$ is continuous at $1$).
Now to find the function for $n<0$(n for negative):
$$f((-x)^2)=f(x^2)\implies -xf(-x)=xf(x)\implies f(x)+f(-x)=0$$
$$f(n)=-f(-n)=-(-kn)=kn\tag{n<0}$$
So $f(x)$ is odd and we can say that it is:
$$f(x)=kx$$
https://cs.stackexchange.com/questions/10977/finding-recurrence-when-master-theorem-fails | # Finding recurrence when Master Theorem fails
The following method was explained to me by my senior. I want to know whether I can use it in all cases or not. When I solve the recurrence manually, I arrive at the same answer.
$T(n)= 4T(n/2) + \frac{n^2}{\lg n}$
In the above recurrence the master theorem fails, but he gave me this solution:
for $T(n) = aT(n/b) + \Theta(n^d \lg^kn)$
if $d = \log_b a$
if $k\geq0$ then $T(n)=\Theta(n^d \lg^{k+1}n)$
if $k=-1$ then $T(n)=\Theta(n^d\lg\lg n)$
if $k<-1$ then $T(n)=\Theta(n^{\log_ba})$
Using the above formulae, the recurrence is solved to $\Theta(n^2\lg\lg n)$. When I solved it manually, I came up with the same answer. If it is some standard method, what is it called?
• See also our reference question for solving recurrences. In particular, the first case you have been given is covered by the master theorem. But then, even the Akra-Bazzi method does not cover your example. Oh well. By manually, do you mean using recursion trees? – Raphael Apr 2 '13 at 19:16
• ^yes. Basically I meant without using Master Theorem or Akra-Bazi method. Here's one solution : chuck.ferzle.com/Notes/Notes/DiscreteMath/… – avi Apr 3 '13 at 12:38
• I see; that would be guess & proof, then. Legit, but arduous: you need to deal with lower and upper bound separately and perform induction proofs for both. – Raphael Apr 3 '13 at 14:08
OK, try Akra-Bazzi (even if Raphael thinks it doesn't apply...) $$T(n) = 4 T(n / 2) + n^2 / \lg n$$ We have $g(n) = n^2 / \ln n = O(n^2)$, check. We have that there is a single $a_1 = 4$, $b_1 = 1 / 2$, which checks out. Assuming that the $n / 2$ is really $\lfloor n / 2 \rfloor$ and/or $\lceil n / 2 \rceil$, the implied $h_i(n)$ also check out. So we need: $$a_1 b_1^p = 4 \cdot (1 / 2)^p = 1$$ Thus $p = 2$, and: $$T(n) = \Theta\left(n^2 \left( 1 + \int_2^n \frac{u^2 du}{u^3 \ln u} \right) \right) = \Theta\left(n^2 \left( 1 + \int_2^n \frac{du}{u \ln u} \right) \right) = \Theta(n^2 \ln \ln n)$$ (The integral as given with lower limit 1 diverges, but the lower limit should be the $n_0$ for which the recurrence starts being valid, the difference will usually just be a constant, so using 1 or $n_0$ won't make a difference; check the original paper.)
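As a rough numerical cross-check of the $\Theta(n^2 \ln \ln n)$ answer (not part of the Akra-Bazzi argument), one can evaluate the recurrence directly and watch the ratio $T(n)/(n^2 \ln\ln n)$; the base case $T(n)=1$ for $n\le 2$ is an assumption and only shifts constants:

```python
# Evaluate T(n) = 4*T(n/2) + n^2/lg(n) exactly (with an assumed base case)
# and compare it against the claimed n^2 * ln(ln(n)) growth.
from functools import lru_cache
from math import log, log2

@lru_cache(maxsize=None)
def T(n):
    if n <= 2:           # assumed base case; changes constants only
        return 1.0
    return 4 * T(n // 2) + n * n / log2(n)

for k in range(4, 25, 4):
    n = 2 ** k
    print(f"n = 2^{k:2d}   T(n)/(n^2 ln ln n) = {T(n) / (n * n * log(log(n))):.4f}")
```

The ratio settles down only very slowly, because $\ln\ln n$ itself grows extremely slowly.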
• Ah, so you are allowed/supposed to change the lower boundary of the integral -- that was my problem exactly! Your explanation does not make a lot of sense to me, though: the integral does not converge on $[1,2]$, so the difference it not a constant! I guess I'll have to look at the paper at some point... if only it was readily available. – Raphael Apr 2 '13 at 22:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9538403749465942, "perplexity": 472.56080490673787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621627.41/warc/CC-MAIN-20210615211046-20210616001046-00617.warc.gz"} |
https://research-portal.uea.ac.uk/en/publications/the-feller-diffusion-filter-rules-and-abnormal-stock-returns | # The Feller Diffusion, Filter Rules and Abnormal Stock Returns
Paul Docherty, Yizhe Dong, Xiaojong Song, Mark Tippett
Research output: Contribution to journal › Article
3 Citations (Scopus)
## Abstract
We determine the conditional expected logarithmic (that is, continuously compounded) return on a stock whose price evolves in terms of the Feller diffusion and then use it to demonstrate how one must know the exact probability density that describes a stock’s return before one can determine the correct way to calculate the abnormal returns that accrue on the stock. We show in particular that misspecification of the stochastic process which generates a stock’s price will lead to systematic biases in the abnormal returns calculated on the stock. We examine the implications this has for the proper conduct of empirical work and for the evaluation of stock and portfolio performance.
Original language: English
Pages (from-to): 426-438
Number of pages: 13
Journal: The European Journal of Finance
Volume: 24
Issue number: 5
Early online date: 5 Apr 2017
DOI: https://doi.org/10.1080/1351847X.2017.1309328
Publication status: Published - 2018
## Keywords
• Feller diffusion
• Fokker-Planck equation
• Geometric Brownian Motion
• Logarithmic return | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8019700646400452, "perplexity": 2357.6143326729625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057830.70/warc/CC-MAIN-20210926053229-20210926083229-00455.warc.gz"} |
http://en.wikipedia.org/wiki/Earth's_radius | For the historical development of the concept, see Spherical Earth.
Earth radius is the distance from Earth's center to its surface, about 6,371 kilometers (3,959 mi). This length is also used as a unit of distance, especially in astronomy and geology, where it is usually denoted by $R_\oplus$.
This article deals primarily with spherical and ellipsoidal models of the Earth. See Figure of the Earth for a more complete discussion of models. The Earth is only approximately spherical, so no single value serves as its natural radius. Distances from points on the surface to the center range from 6,353 km to 6,384 km (3,947–3,968 mi). Several different ways of modeling the Earth as a sphere each yield a mean radius of 6,371 kilometers (3,959 mi).
While "radius" normally is a characteristic of perfect spheres, the term as used in this article more generally means the distance from some "center" of the Earth to a point on the surface or on an idealized surface that models the Earth. It can also mean some kind of average of such distances, or of the radius of a sphere whose curvature matches the curvature of the ellipsoidal model of the Earth at a given point.
The first scientific estimation of the radius of the earth was given by Eratosthenes about 240 BC. Estimates of the accuracy of Eratosthenes’s measurement range from within 2% to within 15%.
## Introduction
Main article: Figure of the Earth
Earth's rotation, internal density variations, and external tidal forces cause it to deviate systematically from a perfect sphere.[a] Local topography increases the variance, resulting in a surface of unlimited complexity. Our descriptions of the Earth's surface must be simpler than reality in order to be tractable. Hence we create models to approximate the Earth's surface, generally relying on the simplest model that suits the need.
Each of the models in common use come with some notion of "radius". Strictly speaking, spheres are the only solids to have radii, but looser uses of the term "radius" are common in many fields, including those dealing with models of the Earth. Viewing models of the Earth from less to more approximate:
In the case of the geoid and ellipsoids, the fixed distance from any point on the model to the specified center is called "a radius of the Earth" or "the radius of the Earth at that point".[d] It is also common to refer to any mean radius of a spherical model as "the radius of the earth". On the Earth's real surface, on the other hand, it is uncommon to refer to a "radius", since there is no practical need. Rather, elevation above or below sea level is useful.
Regardless of model, any radius falls between the polar minimum of about 6,357 km and the equatorial maximum of about 6,378 km (≈3,950 – 3,963 mi). Hence the Earth deviates from a perfect sphere by only a third of a percent, sufficiently close to treat it as a sphere in many contexts and justifying the term "the radius of the Earth". While specific values differ, the concepts in this article generalize to any major planet.
### Physics of Earth's deformation
Rotation of a planet causes it to approximate an oblate ellipsoid/spheroid with a bulge at the equator and flattening at the North and South Poles, so that the equatorial radius $a$ is larger than the polar radius $b$ by approximately $a q$ where the oblateness constant $q$ is
$q=\frac{a^3 \omega^2}{GM}\,\!$
where $\omega$ is the angular frequency, $G$ is the gravitational constant, and $M$ is the mass of the planet.[e] For the Earth $q^{-1} \approx 289$, which is close to the measured inverse flattening $f^{-1} \approx 298.257$. Additionally, the bulge at the equator shows slow variations. The bulge had been declining, but since 1998 the bulge has increased, possibly due to redistribution of ocean mass via currents.[2]
The variation in density and crustal thickness causes gravity to vary on the surface, so that the mean sea level will differ from the ellipsoid. This difference is the geoid height, positive above or outside the ellipsoid, negative below or inside. The geoid height variation is under 110 m on Earth. The geoid height can change abruptly due to earthquakes (such as the Sumatra-Andaman earthquake) or reduction in ice masses (such as Greenland).[3]
Not all deformations originate within the Earth. The gravity of the Moon and Sun cause the Earth's surface at a given point to undulate by tenths of meters over a nearly 12 hour period (see Earth tide).
Biruni's (973–1048) method for calculation of Earth's radius improved accuracy.
Given local and transient influences on surface height, the values defined below are based on a "general purpose" model, refined as globally precisely as possible within 5 m of reference ellipsoid height, and to within 100 m of mean sea level (neglecting geoid height).
Additionally, the radius can be estimated from the curvature of the Earth at a point. Like a torus the curvature at a point will be largest (tightest) in one direction (North-South on Earth) and smallest (flattest) perpendicularly (East-West). The corresponding radius of curvature depends on location and direction of measurement from that point. A consequence is that a distance to the true horizon at the equator is slightly shorter in the north/south direction than in the east-west direction.
In summary, local variations in terrain prevent the definition of a single absolutely "precise" radius. One can only adopt an idealized model. Since the estimate by Eratosthenes, many models have been created. Historically these models were based on regional topography, giving the best reference ellipsoid for the area under survey. As satellite remote sensing and especially the Global Positioning System rose in importance, true global models were developed which, while not as accurate for regional work, best approximate the earth as a whole.
The following radii are fixed and do not include a variable location dependence. They are derived from the WGS-84 ellipsoid.[4]
The value for the equatorial radius is defined to the nearest 0.1 meter in WGS-84. The value for the polar radius in this section has been rounded to the nearest 0.1 meter, which is expected to be adequate for most uses. Please refer to the WGS-84 ellipsoid if a more precise value for its polar radius is needed.
The radii in this section are for an idealized surface. Even the idealized radii have an uncertainty of ± 2 meters.[5] The discrepancy between the ellipsoid radius and the radius to a physical location may be significant. When identifying the position of an observable location, the use of more precise values for WGS-84 radii may not yield a corresponding improvement in accuracy.
The Earth's equatorial radius $a$, or semi-major axis, is the distance from its center to the equator and equals 6,378.1370 kilometers (3,963.1906 mi). The equatorial radius is often used to compare Earth with other planets.
The Earth's polar radius $b$, or semi-minor axis, is the distance from its center to the North and South Poles, and equals 6,356.7523 kilometers (3,949.9028 mi).
• Maximum: The summit of Chimborazo is 6,384.4 km (3,968 mi) from the Earth's center.
• Minimum: The floor of the Arctic Ocean is ≈6,352.8 kilometers (3,947.4 mi) from the Earth's center.[6]
The distance from the Earth's center to a point on the spheroid surface at geodetic latitude $\varphi\,\!$ is:
$R=R(\varphi)=\sqrt{\frac{(a^2\cos\varphi)^2+(b^2\sin\varphi)^2}{(a\cos\varphi)^2+(b\sin\varphi)^2}}\,\!$
where $a$ and $b$ are the equatorial radius and the polar radius, respectively.
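As an illustration, the formula above is easy to evaluate numerically for the WGS-84 values of $a$ and $b$ (a small sketch, not a geodesy library):

```python
# Geocentric radius R(phi) of the WGS-84 ellipsoid at a geodetic latitude,
# straight from the formula above (a, b in kilometres).
from math import radians, sin, cos, sqrt

A = 6378.1370   # equatorial radius a, km
B = 6356.7523   # polar radius b, km

def geocentric_radius(lat_deg):
    phi = radians(lat_deg)
    num = (A * A * cos(phi)) ** 2 + (B * B * sin(phi)) ** 2
    den = (A * cos(phi)) ** 2 + (B * sin(phi)) ** 2
    return sqrt(num / den)

for lat in (0, 30, 45, 60, 90):
    print(f"{lat:3d} deg  ->  {geocentric_radius(lat):9.3f} km")
```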
Eratosthenes used two points, one almost exactly north of the other. The points are separated by distance $D$, and the vertical directions at the two points are known to differ by angle of $\theta$, in radians. A formula used in Eratosthenes' method is
$R= \frac{D}{\theta}\,\!$
which gives an estimate of radius based on the north-south curvature of the Earth.
#### Meridional
In particular the Earth's radius of curvature in the (north-south) meridian at $\varphi\,\!$ is:
$M=M(\varphi)=\frac{(ab)^2}{((a\cos\varphi)^2+(b\sin\varphi)^2)^{3/2}}\,\!$
#### Normal
If one point had appeared due east of the other, one finds the approximate curvature in east-west direction.[f]
This radius of curvature in the prime vertical, which is perpendicular, or normal, to M at geodetic latitude $\varphi\,\!$ is:[g]
$N=N(\varphi)=\frac{a^2}{\sqrt{(a\cos\varphi)^2+(b\sin\varphi)^2}}\,\!$
Note that N=R at the equator:
At geodetic latitude 48.46791 degrees (e.g., Lèves, Alsace, France), the radius R is 20000/π ≈ 6,366.197 km, namely the radius of a perfect sphere for which the meridian arc length from the equator to the North Pole is exactly 10000 km, the originally proposed definition of the meter.
The Earth's meridional radius of curvature at the equator equals the meridian's semi-latus rectum:
$\frac{b^2}{a}\,\!$ = 6,335.437 km
The Earth's polar radius of curvature is:
$\frac{a^2}{b}\,\!$ = 6,399.592 km
#### Combinations
It is possible to combine the meridional and normal radii of curvature above.
The Earth's Gaussian radius of curvature at latitude $\varphi\,\!$ is:[7]
$R_a=\sqrt{MN}=\frac{a^2b}{(a\cos\varphi)^2+(b\sin\varphi)^2}\,\!$
The Earth's radius of curvature along a course at an azimuth (measured clockwise from north) $\alpha\,\!$, at $\varphi\,\!$ is derived from Euler's curvature formula as follows:[7]
$R_c=\frac{1}{\frac{\cos^2\alpha}{M}+\frac{\sin^2\alpha}{N}}\,\!$
The Earth's mean radius of curvature at latitude $\varphi\,\!$ is:[7]
$R_m=\frac{2}{\frac{1}{M}+\frac{1}{N}}\,\!$
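The curvature formulas above can likewise be checked numerically; the sketch below evaluates $M$, $N$, the Gaussian radius $\sqrt{MN}$ and the mean $2/(1/M+1/N)$ for a few latitudes:

```python
# Radii of curvature of the WGS-84 ellipsoid at a geodetic latitude (km):
# meridional M, prime-vertical N, Gaussian sqrt(M*N), and mean 2/(1/M + 1/N).
from math import radians, sin, cos, sqrt

A, B = 6378.1370, 6356.7523  # km

def curvature_radii(lat_deg):
    phi = radians(lat_deg)
    w = (A * cos(phi)) ** 2 + (B * sin(phi)) ** 2
    M = (A * B) ** 2 / w ** 1.5      # north-south (meridional) curvature radius
    N = A * A / sqrt(w)              # east-west (prime vertical) curvature radius
    return M, N, sqrt(M * N), 2.0 / (1.0 / M + 1.0 / N)

for lat in (0, 45, 90):
    M, N, Ra, Rm = curvature_radii(lat)
    print(f"{lat:2d} deg  M={M:8.3f}  N={N:8.3f}  Ra={Ra:8.3f}  Rm={Rm:8.3f}")
# At 0 deg M equals b^2/a and N equals a; at 90 deg both equal a^2/b,
# matching the equatorial and polar values quoted above.
```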
The Earth can be modeled as a sphere in many ways. This section describes the common ways. The various radii derived here use the notation and dimensions noted above for the Earth as derived from the WGS-84 ellipsoid;[4] namely,
$\textstyle a =$ Equatorial radius (6,378.1370 km)
$\textstyle b =$ Polar radius (6,356.7523 km)
A sphere being a gross approximation of the spheroid, which itself is an approximation of the geoid, units are given here in kilometers rather than the millimeter resolution appropriate for geodesy.
The International Union of Geodesy and Geophysics (IUGG) defines the mean radius (denoted $R_1$) to be[8]
$R_1 = \frac{2a+b}{3}\,\!$
For Earth, the mean radius is 6,371.009 kilometers (3,958.761 mi).[9]
Earth's authalic ("equal area") radius is the radius of a hypothetical perfect sphere which has the same surface area as the reference ellipsoid. The IUGG denotes the authalic radius as $R_2$.[8]
A closed-form solution exists for a spheroid:[10]
$R_2=\sqrt{\frac{a^2+\frac{ab^2}{\sqrt{a^2-b^2}}\ln{\left(\frac{a+\sqrt{a^2-b^2}}b\right)}}{2}}=\sqrt{\frac{a^2}2+\frac{b^2}2\frac{\tanh^{-1}e}e} =\sqrt{\frac{A}{4\pi}}\,\!$
where $e^2=(a^2-b^2)/a^2$ and $A$ is the surface area of the spheroid.
For Earth, the authalic radius is 6,371.0072 kilometers (3,958.7603 mi).[9]
Another spherical model is defined by the volumetric radius, which is the radius of a sphere of volume equal to the ellipsoid. The IUGG denotes the volumetric radius as $R_3$.[8]
$R_3=\sqrt[3]{a^2b}\,\!$
For Earth, the volumetric radius equals 6,371.0008 kilometers (3,958.7564 mi).[9]
Another mean radius is the rectifying radius, giving a sphere with circumference equal to the perimeter of the ellipse described by any polar cross section of the ellipsoid. This requires an elliptic integral to find, given the polar and equatorial radii:
$M_r=\frac{2}{\pi}\int_{0}^{\frac{\pi}{2}}\sqrt{{a^2}\cos^2\varphi + {b^2} \sin^2\varphi}\,d\varphi$.
The rectifying radius is equivalent to the meridional mean, which is defined as the average value of M:[10]
$M_r=\frac{2}{\pi}\int_{0}^{\frac{\pi}{2}}\!M(\varphi)\,d\varphi\!$
For integration limits of [0...π/2], the integrals for rectifying radius and mean radius evaluate to the same result, which, for Earth, amounts to 6,367.4491 kilometers (3,956.5494 mi).
The meridional mean is well approximated by the semicubic mean of the two axes:
$M_r\approx\left[\frac{a^{3/2}+b^{3/2}}{2}\right]^{2/3}\,$
yielding, again, 6,367.4491 km; or less accurately by the quadratic mean of the two axes:
$M_r\approx\sqrt{\frac{a^2+b^2}{2}}\,\!$;
about 6,367.454 km; or even just the mean of the two axes:
$M_r\approx\frac{a+b}{2}\,\!$; about 6,367.445 km.
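A short script makes the comparison between these approximations and a direct quadrature value concrete (plain midpoint-rule integration; only the WGS-84 $a$ and $b$ above are used):

```python
# Meridional mean (rectifying radius) by midpoint-rule quadrature, compared
# with the semicubic, quadratic and arithmetic means of a and b (km).
from math import pi, sin, cos, sqrt

A, B = 6378.1370, 6356.7523  # km

def meridional_mean(steps=200_000):
    h = (pi / 2) / steps
    s = sum(sqrt((A * cos((i + 0.5) * h)) ** 2 + (B * sin((i + 0.5) * h)) ** 2)
            for i in range(steps))
    return (2 / pi) * s * h

print("quadrature     :", round(meridional_mean(), 4))
print("semicubic mean :", round(((A ** 1.5 + B ** 1.5) / 2) ** (2 / 3), 4))
print("quadratic mean :", round(sqrt((A * A + B * B) / 2), 4))
print("arithmetic mean:", round((A + B) / 2, 4))
```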
## Osculating sphere
The best spherical approximation to the ellipsoid in the vicinity of a given point is given by the osculating sphere. Its radius equals the Gaussian radius of curvature as above, and its radial direction coincides with the ellipsoid normal direction. This concept aids the interpretation of terrestrial and planetary radio occultation refraction measurements.
## Notes
1. ^ For details see Figure of the Earth, Geoid, and Earth tide.
2. ^ There is no single center to the geoid; it varies according to local geodetic conditions.
3. ^ In a geocentric ellipsoid, the center of the ellipsoid coincides with some computed center of the earth, and best models the earth as a whole. Geodetic ellipsoids are better suited to regional idiosyncrasies of the geoid. A partial surface of an ellipsoid gets fitted to the region, in which case the center and orientation of the ellipsoid generally do not coincide with the earth's center of mass or axis of rotation.
4. ^ The value of the radius is completely dependent upon the latitude in the case of an ellipsoid model, and nearly so on the geoid.
5. ^ This follows from the International Astronomical Union definition rule (2): a planet assumes a shape due to hydrostatic equilibrium where gravity and centrifugal forces are nearly balanced.[1]
6. ^ East-west directions can be misleading. Point B which appears due East from A will be closer to the equator than A. Thus the curvature found this way is smaller than the curvature of a circle of constant latitude, except at the equator. West can be exchanged for east in this discussion.
7. ^ N is defined as the radius of curvature in the plane which is normal to both the surface of the ellipsoid at, and the meridian passing through, the specific point of interest.
## References
1. ^ IAU 2006 General Assembly: Result of the IAU Resolution votes
2. ^
3. ^
4. ^ a b http://earth-info.nga.mil/GandG/publications/tr8350.2/wgs84fin.pdf
5. ^ http://earth-info.nga.mil/GandG/publications/tr8350.2/tr8350.2-a/Chapter%203.pdf
6. ^ "Discover-TheWorld.com - Guam - POINTS OF INTEREST - Don't Miss - Mariana Trench". Guam.discover-theworld.com. 1960-01-23. Retrieved 2013-09-16.
7. ^ a b c Torge (2001), Geodesy, p.98, eq.(4.23), [1]
8. ^ a b c Moritz, H. (1980). Geodetic Reference System 1980, by resolution of the XVII General Assembly of the IUGG in Canberra.
9. ^ a b c Moritz, H. (March 2000). "Geodetic Reference System 1980". Journal of Geodesy 74 (1): 128–133. Bibcode:2000JGeod..74..128.. doi:10.1007/s001900050278.
10. ^ a b Snyder, J.P. (1987). Map Projections – A Working Manual (US Geological Survey Professional Paper 1395) p. 16–17. Washington D.C: United States Government Printing Office. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 47, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9612550735473633, "perplexity": 1006.2159402713742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500812867.24/warc/CC-MAIN-20140820021332-00070-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://www.nag.com/numeric/nl/nagdoc_27/flhtml/g01/g01ezf.html | # NAG FL Interfaceg01ezf (prob_kolmogorov2)
## 1 Purpose
g01ezf returns the probability associated with the upper tail of the Kolmogorov–Smirnov two sample distribution.
## 2 Specification
Fortran Interface
Function g01ezf (n1, n2, d, ifail)
Real (Kind=nag_wp) :: g01ezf
Integer, Intent (In) :: n1, n2
Integer, Intent (Inout) :: ifail
Real (Kind=nag_wp), Intent (In) :: d
#include <nag.h>
double g01ezf_ (const Integer *n1, const Integer *n2, const double *d, Integer *ifail)
The routine may be called by the names g01ezf or nagf_stat_prob_kolmogorov2.
## 3 Description
Let ${F}_{{n}_{1}}\left(x\right)$ and ${G}_{{n}_{2}}\left(x\right)$ denote the empirical cumulative distribution functions for the two samples, where ${n}_{1}$ and ${n}_{2}$ are the sizes of the first and second samples respectively.
The function g01ezf computes the upper tail probability for the Kolmogorov–Smirnov two sample two-sided test statistic ${D}_{{n}_{1},{n}_{2}}$, where
$D_{n_1,n_2}=\sup_x\left|F_{n_1}(x)-G_{n_2}(x)\right|.$
The probability is computed exactly if ${n}_{1},{n}_{2}\le 10000$ and $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({n}_{1},{n}_{2}\right)\le 2500$ using a method given by Kim and Jenrich (1973). For the case where $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({n}_{1},{n}_{2}\right)\le 10%$ of the $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({n}_{1},{n}_{2}\right)$ and $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({n}_{1},{n}_{2}\right)\le 80$ the Smirnov approximation is used. For all other cases the Kolmogorov approximation is used. These two approximations are discussed in Kim and Jenrich (1973).
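The routine itself is only available through the NAG Library, but the quantities involved are easy to illustrate. The sketch below (plain Python, not the NAG implementation, and using only the Kolmogorov asymptotic series rather than the exact Kim and Jenrich method) computes $D_{n_1,n_2}$ for two small samples and an approximate upper tail probability:

```python
# Illustration only (not the NAG routine): the two-sample statistic D_{n1,n2}
# and the Kolmogorov asymptotic upper-tail probability
#   P(D > d) ~ 2 * sum_{k>=1} (-1)^(k-1) * exp(-2 k^2 z^2),  z = d*sqrt(n1*n2/(n1+n2)).
from math import exp, sqrt

def ks_two_sample_statistic(x, y):
    n1, n2 = len(x), len(y)
    d = 0.0
    for t in sorted(set(x) | set(y)):
        # check the empirical CDF difference just before and at each jump point
        for cmp in (lambda v: v < t, lambda v: v <= t):
            f1 = sum(map(cmp, x)) / n1
            f2 = sum(map(cmp, y)) / n2
            d = max(d, abs(f1 - f2))
    return d

def kolmogorov_upper_tail(d, n1, n2, terms=100):
    z = d * sqrt(n1 * n2 / (n1 + n2))
    return 2.0 * sum((-1) ** (k - 1) * exp(-2.0 * (k * z) ** 2) for k in range(1, terms + 1))

x = [0.1, 0.4, 0.7, 1.2, 1.9, 2.3, 3.1, 3.8]
y = [0.5, 0.9, 1.1, 1.6, 2.8, 3.3, 4.0, 4.4, 5.2, 6.0]
d = ks_two_sample_statistic(x, y)
print("D =", d, "  approximate upper-tail probability =", kolmogorov_upper_tail(d, len(x), len(y)))
```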
## 4 References
Conover W J (1980) Practical Nonparametric Statistics Wiley
Feller W (1948) On the Kolmogorov–Smirnov limit theorems for empirical distributions Ann. Math. Statist. 19 179–181
Kendall M G and Stuart A (1973) The Advanced Theory of Statistics (Volume 2) (3rd Edition) Griffin
Kim P J and Jenrich R I (1973) Tables of exact sampling distribution of the two sample Kolmogorov–Smirnov criterion $D_{mn}\left(m<n\right)$ Selected Tables in Mathematical Statistics 1 80–129 American Mathematical Society
Siegel S (1956) Non-parametric Statistics for the Behavioral Sciences McGraw–Hill
Smirnov N (1948) Table for estimating the goodness of fit of empirical distributions Ann. Math. Statist. 19 279–281
## 5 Arguments
1: $\mathbf{n1}$Integer Input
On entry: the number of observations in the first sample, ${n}_{1}$.
Constraint: ${\mathbf{n1}}\ge 1$.
2: $\mathbf{n2}$Integer Input
On entry: the number of observations in the second sample, ${n}_{2}$.
Constraint: ${\mathbf{n2}}\ge 1$.
3: $\mathbf{d}$Real (Kind=nag_wp) Input
On entry: the test statistic ${D}_{{n}_{1},{n}_{2}}$, for the two sample Kolmogorov–Smirnov goodness-of-fit test, that is the maximum difference between the empirical cumulative distribution functions (CDFs) of the two samples.
Constraint: $0.0\le {\mathbf{d}}\le 1.0$.
4: $\mathbf{ifail}$Integer Input/Output
On entry: ifail must be set to $0$, $-1$ or $1$. If you are unfamiliar with this argument you should refer to Section 4 in the Introduction to the NAG Library FL Interface for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1$ or $1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this argument, the recommended value is $0$. When the value $-1$ or $1$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
## 6 Error Indicators and Warnings
If on entry ${\mathbf{ifail}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
${\mathbf{ifail}}=1$
On entry, ${\mathbf{n1}}=〈\mathit{\text{value}}〉$ and ${\mathbf{n2}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n1}}\ge 1$ and ${\mathbf{n2}}\ge 1$.
${\mathbf{ifail}}=2$
On entry, ${\mathbf{d}}<0.0$ or ${\mathbf{d}}>1.0$: ${\mathbf{d}}=〈\mathit{\text{value}}〉$.
${\mathbf{ifail}}=3$
The Smirnov approximation used for large samples did not converge in $200$ iterations. The probability is set to $1.0$.
${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this routine. Please contact NAG.
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.
## 7 Accuracy
The large sample distributions used as approximations to the exact distribution should have a relative error of less than 5% for most cases.
## 8 Parallelism and Performance
g01ezf is not threaded in any implementation.
## 9 Further Comments

The upper tail probability for the one-sided statistics, ${D}_{{n}_{1},{n}_{2}}^{+}$ or ${D}_{{n}_{1},{n}_{2}}^{-}$, can be approximated by halving the two-sided upper tail probability returned by g01ezf, that is $p/2$. This approximation to the upper tail probability for either ${D}_{{n}_{1},{n}_{2}}^{+}$ or ${D}_{{n}_{1},{n}_{2}}^{-}$ is good for small probabilities (e.g., $p\le 0.10$) but becomes poor for larger probabilities.
The time taken by the routine increases with ${n}_{1}$ and ${n}_{2}$, until ${n}_{1}{n}_{2}>10000$ or $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({n}_{1},{n}_{2}\right)\ge 2500$. At this point one of the approximations is used and the time decreases significantly. The time then increases again modestly with ${n}_{1}$ and ${n}_{2}$.
## 10 Example
The following example reads in $10$ different sample sizes and values for the test statistic ${D}_{{n}_{1},{n}_{2}}$. The upper tail probability is computed and printed for each case.
### 10.1 Program Text
Program Text (g01ezfe.f90)
### 10.2 Program Data
Program Data (g01ezfe.d)
### 10.3 Program Results
Program Results (g01ezfe.r) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 57, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9731475114822388, "perplexity": 4677.291718892577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057524.58/warc/CC-MAIN-20210924110455-20210924140455-00634.warc.gz"} |
https://chemistry.stackexchange.com/questions/50656/how-is-volatility-useful-in-the-production-of-acids | # How is volatility useful in the production of acids?
Sulfuric acid because of its low volatility can be used to manufacture more volatile acids from their corresponding salts.
How does volatility affect the production of acids? Isn't it that sulfuric acid being stronger than the salt of weak acid replaces the anion part of salt so the weak acid is produced or does just the low volatility of sulfuric acid matter? Will it be feasible to produce a highly volatile acid even stronger than $\ce{H2SO4}$ by exploiting the low volatile nature of $\ce{H2SO4}$? What is the real process involved?
• Can you please tell where did you quote that statement from? Aug 26 '16 at 14:35
• @Kartik From NCERT chemistry class 12 p block pg.no:190 under uses of sulfuric acid.
– JM97
Aug 26 '16 at 14:39
Yes, this is indeed the case. The reasoning behind it is using chemical equilibria to their fullest.
If you have a Brønsted acid and a Brønsted base in the same vessel, you will always have an equilibrium of the following kind:
$$\ce{HA + B- <=> A- + HB}$$
It depends on the nature of the acid and the base — i.e. their $\mathrm{p}K_\mathrm{a}/\mathrm{p}K_\mathrm{b}$ values — which side of the equation is favoured in a closed system (i.e. solution). For example, dissolving $\ce{NaHSO4}$ and $\ce{NaSH}$ in the same sample of water will result in the following set of equations:
$$\ce{H2SO4 + Na2S <=>> NaHSO4 + NaSH <=>> Na2SO4 + H2S}$$
Because the $\mathrm{p}K_\mathrm{a}$ values of both compare as follows:
$$\begin{array}{ccc}\hline \text{acid} & \mathrm{p}K_\mathrm{a1} & \mathrm{p}K_\mathrm{a2}\\ \hline \ce{H2SO4} & \approx -3 & 1.9\\ \ce{H2S} & 7.0 & 12.9\\ \hline\end{array}$$
We already have a way of producing $\ce{H2S}$ by means of protonating sulfide ions in solution here. However, $\mathrm{p}K_\mathrm{a1}\left (\ce{HCl}\right ) \approx -6$, which is more acidic than sulphuric acid, so adding $\ce{NaCl}$ to $\ce{H2SO4}$ should result in a net reaction of nearly nothing.
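To put rough numbers on that claim: for a proton transfer $\ce{HA + B- <=> A- + HB}$ the equilibrium constant is $K = K_\mathrm{a}(\ce{HA})/K_\mathrm{a}(\ce{HB}) = 10^{\,\mathrm{p}K_\mathrm{a}(\ce{HB})-\mathrm{p}K_\mathrm{a}(\ce{HA})}$. A two-line sketch with the approximate $\mathrm{p}K_\mathrm{a}$ values quoted above (so the numbers are only order-of-magnitude):

```python
# Order-of-magnitude equilibrium constants for HA + B- <=> A- + HB,
# using K = Ka(HA)/Ka(HB) = 10**(pKa(HB) - pKa(HA)) and the rough pKa values above.
def proton_transfer_K(pKa_reactant_acid, pKa_product_acid):
    return 10.0 ** (pKa_product_acid - pKa_reactant_acid)

# H2SO4 (pKa1 ~ -3) protonating SH-: product acid is H2S (pKa1 = 7.0) -> strongly right
print("H2SO4 + SH- :", proton_transfer_K(-3.0, 7.0))
# H2SO4 (pKa1 ~ -3) protonating Cl-: product acid is HCl (pKa ~ -6) -> 'nearly nothing'
print("H2SO4 + Cl- :", proton_transfer_K(-3.0, -6.0))
```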
This is where vapour pressure comes in as a second factor. $\ce{HCl}$ is a gas under standard temperature and pressure while $\ce{H2SO4}$ is a liquid. ($\ce{H2S}$ is also a gas. If we added liquid sulphuric acid to sodium sulfide, $\ce{H2S}$ gas would evolve, but that’s what the $\mathrm{p}K_\mathrm{a}$ value predicts anyway.) If we have an open vessel — i.e. gases can diffuse away — then any $\ce{HCl}$ produced in equilibrium will dissociate away in the long run. This is due to a second equilibrium that we could write as follows:
$$\ce{HCl (g) <=> HCl (aq)}$$
The ‘equilibrium constant’ of this physical process is more or less what we call vapour pressure, and for hydrogen chloride both sides are significantly more balanced than for sulphuric acid. Therefore, we should describe the entire system in the following way:
$$\ce{NaCl + H2SO4 <=> NaHSO4 + HCl (diss) <=>> NaHSO4 + HCl (g) ^}$$
Where $\ce{(diss)}$ is a shorthand for dissolved in sulphuric acid, our reaction medium.
Drawing away the hydrogen chloride gas in the rightmost equation, for example by reducing pressure or inducing an air flow, will shift all equilibria to the right since that product is constantly being removed. And thus we can liberate $\ce{HCl}$ gas with sulphuric acid, even though the $\mathrm{p}K_\mathrm{a}$ of hydrogen chloride is lower than that of sulphuric acid.
An example of this would be the reaction between sulfuric acid and potassium nitrate, the salt of nitric acid. When this salt is mixed with sulfuric acid, nitric acid and potassium acid sulfate are the result, although the reaction is incomplete. If the resulting mixture is then heated the nitric acid evolves as a vapor because it is much more volatile than sulfuric acid. This vapor can then be condensed separately. In this way relatively pure nitric acid can be produced.
• Great, that's exactly what I needed in solving a textbook problem right now. +1 May 4 '16 at 18:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8311052918434143, "perplexity": 811.5416844547699}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301217.83/warc/CC-MAIN-20220119003144-20220119033144-00101.warc.gz"} |
https://intl.siyavula.com/read/maths/grade-12/probability/10-probability-04 | We think you are located in South Africa. Is this correct?
# The Fundamental Counting Principle
## 10.4 The fundamental counting principle (EMCJZ)
Mathematics began with counting. Initially, fingers, beans and buttons were used to help with counting, but these are only practical for small numbers. What happens when a large number of items must be counted?
This section focuses on how to use mathematical techniques to count different assortments of items.
### Introduction (EMCK2)
An important aspect of probability theory is the ability to determine the total number of possible outcomes when multiple events are considered.
For example, what is the total number of possible outcomes when a die is rolled and then a coin is tossed? The roll of a die has six possible outcomes ($$1;2;3;4;5$$ or $$\text{6}$$) and the toss of a coin, $$\text{2}$$ outcomes (heads or tails). The sample space (total possible outcomes) can be represented as follows:
$S = \left\{\begin{array}{cccccc} (1;H); & (2;H); & (3;H); & (4;H); & (5;H); & (6;H); \\ (1;T); & (2;T); & (3;T); & (4;T); & (5;T); & (6;T) \end{array}\right\}$
Therefore there are $$\text{12}$$ possible outcomes.
The use of lists, tables and tree diagrams is only feasible for events with a few outcomes. When the number of outcomes grows, it is not practical to list the different possibilities and the fundamental counting principle is used instead.
The fundamental counting principle
The fundamental counting principle states that if there are $$n(A)$$ outcomes in event $$A$$ and $$n(B)$$ outcomes in event $$B$$, then there are $$n(A) \times n(B)$$ outcomes in event $$A$$ and event $$B$$ combined.
If we apply this principle to our previous example, we can easily calculate the number of possible outcomes by multiplying the number of possible die rolls with the number of outcomes of tossing a coin: $$6 \times 2 = 12$$ outcomes. This allows us to formulate the following:
If there $$n_{1}$$ possible outcomes for event $$A$$ and $$n_{2}$$ outcomes for event $$B$$, then the total possible number of outcomes for both events is $$n_1 \times n_2$$
This can be generalised to $$k$$ events, where $$k$$ is the number of events. The total number of outcomes for $$k$$ events is:
${n}_{1}\times {n}_{2}\times {n}_{3}\times \cdots \times {n}_{k}$
The order in which the experiments are done does not affect the total number of possible outcomes.
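Before the worked examples, here is a brute-force check of the principle on the die-and-coin example above (a throwaway sketch using Python's itertools):

```python
# Enumerate the die-and-coin sample space and compare with the counting principle.
from itertools import product

die = [1, 2, 3, 4, 5, 6]
coin = ["H", "T"]

sample_space = list(product(die, coin))
print("enumerated outcomes:", len(sample_space))       # 12
print("counting principle :", len(die) * len(coin))    # 6 x 2 = 12
```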
## Worked example 10: Choices without repetition
A take-away has a 4-piece lunch special which consists of a sandwich, soup, dessert and drink for R $$\text{25,00}$$. They offer the following choices for:
Sandwich: chicken mayonnaise, cheese and tomato, tuna mayonnaise, ham and lettuce
Soup: tomato, chicken noodle, vegetable
Dessert: ice-cream, piece of cake
Drink: tea, coffee, Coke, Fanta, Sprite
How many possible meals are there?
### Determine how many parts to the meal there are
There are 4 parts: sandwich, soup, dessert and drink.
### Identify how many choices there are for each part
| Meal component | Sandwich | Soup | Dessert | Drink |
| --- | --- | --- | --- | --- |
| Number of choices | $$\text{4}$$ | $$\text{3}$$ | $$\text{2}$$ | $$\text{5}$$ |
### Use the fundamental counting principle to determine how many different meals are possible
$$4\times 3\times 2\times 5=120$$
So there are $$\text{120}$$ possible meals.
In the previous example, there were a different number of options for each choice. But what happens when the number of choices is unchanged each time you choose?
For example, if a coin is flipped three times, what is the total number of different results? Each time a coin is flipped, there are two possible outcomes, namely heads or tails. The coin is flipped $$\text{3}$$ times. We can use a tree diagram to determine the total number of possible outcomes:
From the tree diagram, we can see that there is a total of 8 different possible outcomes.
Drawing a tree diagram is possible to draw for three different coin flips, but as soon as the number of events increases, the total number of possible outcomes increases to the point where drawing a tree diagram is impractical.
For example, think about what a tree diagram would look like if we were to flip a coin six times. In this case, using the fundamental counting principle is a far easier option. We know that each time a coin is flipped that there are two possible outcomes. So if we flip a coin six times, the total number of possible outcomes is equivalent to multiplying 2 by itself six times:
$2 \times 2 \times 2 \times 2 \times 2 \times 2 = 2^{6} = 64$
Another example is if you have the letters A, B, C, and D and you wish to discover the number of ways of arranging them in three-letter patterns if repetition is allowed, such as ABA, DCA, BBB etc. You will find that there are $$\text{64}$$ ways. This is because for the first letter of the pattern, you can choose any of the four available letters, for the second letter of the pattern, you can choose any of the four letters, and for the final letter of the pattern you can choose any of the four letters. Multiplying the number of available choices for each letter in the pattern gives the total available arrangements of letters:
$4 \times 4 \times 4 = 4^{3} = 64$
This allows us to formulate the following:
When you have $$n$$ objects to choose from and you choose from them $$r$$ times, then the total number of possibilities is $n \times n \times n \ldots \times n \enspace (r \text{ times}) = n^{r}$
## Worked example 11: Choices with repetition
A school plays a series of $$\text{6}$$ soccer matches. For each match there are $$\text{3}$$ possibilities: a win, a draw or a loss. How many possible results are there for the series?
### Determine how many outcomes you have to choose from for each event
There are $$\text{3}$$ outcomes for each match: win, draw or lose.
### Determine the number of events
There are $$\text{6}$$ matches, therefore the number of events is $$\text{6}$$.
### Determine the total number of possible outcomes
There are $$\text{3}$$ possible outcomes for each of the $$\text{6}$$ events. Therefore, the total number of possible outcomes for the series of matches is
$3 \times 3 \times 3 \times 3 \times 3 \times 3 = 3^6 = 729$
## Number of possible outcomes if repetition is allowed
Exercise 10.4
Tarryn has five different skirts, four different tops and three pairs of shoes. Assuming that all the colours complement each other, how many different outfits can she put together?
$5 \times 4 \times 3 = \text{60} \text{ different outfits}$
In a multiple-choice question paper of $$\text{20}$$ questions the answers can be A, B, C or D. How many different ways are there of answering the question paper?
$4^{20} = \text{1,0995} \times \text{10}^{\text{12}} \text{ different ways of answering the exam paper}$
A debit card requires a five digit personal identification number (PIN) consisting of digits from 0 to 9. The digits may be repeated. How many possible PINs are there?
$10^{5} = \text{100 000} \text{ possible PINs}$
The province of Gauteng ran out of unique number plates in 2010. Prior to 2010, the number plates were formulated using the style LLLDDDGP, where L is any letter of the alphabet excluding vowels and Q, and D is a digit between 0 and 9. The new style the Gauteng government introduced is LLDDLLGP. How many more possible number plates are there using the new style when compared to the old style?
\begin{align*} \text{Old style: } 20^{3} \times 10^{3} &= \text{8 000 000} \text{ possible arrangements} \\ \text{New style: } 20^{4} \times 10^{2} &= \text{16 000 000} \text{ possible arrangements} \\ \text{16 000 000} - \text{8 000 000} &= \text{8 000 000} \end{align*}
Therefore there are $$\text{8 000 000}$$ more possible number plates using the new style.
A gift basket is made up from one CD, one book, one box of sweets, one packet of nuts and one bottle of fruit juice. The person who makes up the gift basket can choose from five different CDs, eight different books, three different boxes of sweets, four kinds of nuts and six flavours of fruit juice. How many different gift baskets can be produced?
$5 \times 8 \times 3 \times 4 \times 6 = \text{2 880} \text{ possible gift baskets}$
The code for a safe is of the form XXXXYYY where X is any number from 0 to 9 and Y represents the letters of the alphabet. How many codes are possible for each of the following cases:
the digits and letters of the alphabet can be repeated.
$10^{4} \times 26^{3} = \text{175 760 000} \text{ possible codes}$
the digits and letters of the alphabet can be repeated, but the code may not contain a zero or any of the vowels in the alphabet.
We exclude the digit $$\text{0}$$ and the vowels (A; E; I; O; U), leaving $$\text{9}$$ other digits and $$\text{21}$$ letters to choose from.
$9^{4} \times 21^{3} = \text{60 761 421} \text{ possible codes}$
the digits and letters of the alphabet can be repeated, but the digits may only be prime numbers and the letters X, Y and Z are excluded from the code.
The prime digits are $$\text{2}$$, $$\text{3}$$, $$\text{5}$$ and $$\text{7}$$. This gives us 4 possible digits. If we exclude the letters X, Y and Z, we are left with $$\text{23}$$ letters to choose from.
$4^{4} \times 23^{3} = \text{3 114 752} \text{ possible codes}$
A restaurant offers four choices of starter, eight choices for the main meal and six choices for dessert. A customer can choose to eat just one course, two different courses or all three courses. Assuming that all courses are available, how many different meal options does the restaurant offer?
• A person who eats only a starter has 4 choices
• A person who eats only a main meal has 8 choices
• A person who eats only a dessert has 6 choices
• A person who eats a starter and a main course has $$4 \times 8 = 32$$ choices
• A person who eats a starter and a dessert has $$4 \times 6 = 24$$ choices
• A person who eats a main meal and a dessert has $$8 \times 6 = 48$$ choices
• A person who eats all three courses has $$4 \times 8 \times 6 = 192$$ choices.
$\text{Therefore, there are } 4 + 8 + 6 + 32 +24 + 48 + 192 = \text{314} \text{ different meal options}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8067324161529541, "perplexity": 400.95780979760474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735851.15/warc/CC-MAIN-20200804014340-20200804044340-00012.warc.gz"} |
http://www.ask.com/math/write-decimals-standard-form-ea9924c75308e6bb | Q:
# How do you write decimals in standard form?
A:
To write decimals in standard form, move the decimal point to the right until it is at the right of the first nonzero digit. Then, multiply the number by 10 to the power of the negative of the number of spaces the decimal point was moved.
For example, the decimal 0.0000005467 can be expressed in standard form as 5.467 * 10^-7. This is because in order for the decimal to be to the right of the first nonzero digit, it has to be moved seven places to the right. After moving the decimal seven places to the right, the resulting number is 5.467. Then, this number is multiplied by 10^-7, because when moving the decimal to the right, one must multiply by 10 to the power of the negative of the number of places the decimal point moved.
Another simple example is the decimal 0.01. It can be expressed as 1 * 10^-2, because the decimal has to move two places to the right in order to be to the right of the first nonzero digit.
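The same conversions can be checked with any scientific-notation formatter; for instance, in Python:

```python
# Check the worked examples with Python's scientific-notation ("e") formatting.
for text in ("0.0000005467", "0.01"):
    print(text, "->", format(float(text), ".3e"))
# prints:
#   0.0000005467 -> 5.467e-07   (i.e. 5.467 * 10^-7)
#   0.01 -> 1.000e-02           (i.e. 1 * 10^-2)
```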
## Related Questions
• A:
To read a decimal, say the number to the left of the decimal point as a whole number. Use the word "and" to indicate the decimal point. Read the number to the right of the decimal point as a whole number, ending with the place value of the last digit.
• A:
Long division with decimals is functionally the same as long division without decimals, but you have to move the decimal point of the divisor and dividend so you have a whole number.
• A:
One hundred one million, two hundred thirty thousand and four is written as 101,230,004 in standard form. The standard form is the way people usually write numbers. Since the given number is expressed in words, standard form requires conversion to its numeral form.
• A:
A nonterminating decimal refers to any number that has a fractional value (numbers to the right of the decimal point) that will continue on infinitely. The most well-known example of one of these numbers is pi; another example is 1/3, which is 0.333 with the three repeating forever. A nonterminating decimal is indicated by placing a short bar (ellipsis) over the final two decimal values that are recorded. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9053000211715698, "perplexity": 382.5602106200045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461266.22/warc/CC-MAIN-20150226074101-00229-ip-10-28-5-156.ec2.internal.warc.gz"} |
https://bird.bcamath.org/handle/20.500.11824/1/browse?type=subject&value=maximal+operator | Now showing items 1-1 of 1
• #### Maximal estimates for a generalized spherical mean Radon transform acting on radial functions
(Annali de Matematica Pura et Applicata, 2020)
We study a generalized spherical means operator, viz.\ generalized spherical mean Radon transform, acting on radial functions. As the main results, we find conditions for the associated maximal operator and its local ... | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9819360375404358, "perplexity": 2065.5984078046868}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538082.57/warc/CC-MAIN-20210123125715-20210123155715-00123.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/148251-isomorphism.html | # Math Help - Isomorphism
1. ## Isomorphism
Let V and W be linear spaces over a field F.
T:V $\rightarrow$W is called an isomorphism from V onto W iff it has these three properties:
1) T is a linear transformation
2) T is one-to-one
3) T is onto W
So by that definition of Isomorphism can I say that if KerT has more than one vector that transforms to $0_w$ then those subspaces are not isomorphic.
Basically, I'm concluding that if two spaces are isomorphic to each other, only the zero vector of V transforms to the zero vector of W
2. Originally Posted by jayshizwiz
Let V and W be linear spaces over a field F.
T:V $\rightarrow$W is called an isomorphism from V onto W iff it has these three properties:
1) T is a linear transformation
2) T is one-to-one
3) T is onto W
So by that definition of Isomorphism can I say that if KerT has more than one vector that transforms to $0_w$ then those subspaces are not isomorphic.
Basically, I'm concluding that if two spaces are isomorphic to each other, only the zero vector of V transforms to the zero vector of W
You are right, since if the kernel has more than one vector, say $u,v$ with $u\ne v \;$, then
T(u) = T(v) = 0 not one-one | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9706900119781494, "perplexity": 703.8641674384878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010292910/warc/CC-MAIN-20140305090452-00084-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://encyclopediaofmath.org/wiki/Completion | Completion
2010 Mathematics Subject Classification: Primary: 54E50 [MSN][ZBL]
of a metric space $(X,d)$
Given a metric space $(X,d)$, a completion of $X$ is a triple $(Y,\rho,i)$ such that:
- $(Y,\rho)$ is a complete metric space;
- $i: X\to Y$ is an isometric embedding;
- $i(X)$ is dense in $Y$.
Often people refer to the metric space $(Y, \rho)$ as the completion. Both the space and the isometric embedding are unique up to isometries.
The standard construction of the completion is through Cauchy sequences and can be described as follows. Consider the set $Z$ of all possible Cauchy sequences $\{x_k\}$ of $X$ and introduce the equivalence relation $\{x_k\} \sim \{y_k\} \quad \iff \quad \lim_{k\to\infty} d (x_k, y_k)= 0\, .$ $Y$ is then the quotient space $Z/\sim$ endowed with the metric $\rho \left(\left[\{x_k\}\right], \left[\{y_k\}\right]\right) = \lim_{k\to\infty} d (x_k, y_k)\, .$ The map $i:X\to Y$ maps each element $x\in X$ to the class of the constant sequence $x_n=x$.
A notion of completion can be introduced in general uniform spaces: the completion of a metric space is then just a special example. Another notable special example is the completion of a topological vector space.
The concept of completion was first introduced by Cantor, who defined the space of real numbers as the completion of that of rational numbers, see real number
References
[Al] P.S. Aleksandrov, "Einführung in die Mengenlehre und die allgemeine Topologie", Deutsch. Verlag Wissenschaft. (1984) (Translated from Russian)
[Du] J. Dugundji, "Topology", Allyn & Bacon (1966) MR0193606 Zbl 0144.21501
[Ke] J.L. Kelley, "General topology", Springer (1975)
[Ko] G. Köthe, "Topological vector spaces", 1, Springer (1969)
How to Cite This Entry:
Completion. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Completion&oldid=33803
This article was adapted from an original article by M.I. Voitsekhovskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9812594056129456, "perplexity": 902.5818395932876}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572021.17/warc/CC-MAIN-20220814083156-20220814113156-00273.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/206704-factorization-polynomials-irreducibles.html | # Math Help - Factorization of Polynomials - Irreducibles
1. ## Factorization of Polynomials - Irreducibles
I am reading Anderson and Feil - A First Course in Abstract Algebra.
On page 56 (see attached) Anderson and Feil show that the polynomial $f = x^2 + 2$ is irreducible in $\mathbb{Q} [x]$
After this they challenge the reader with the following exercise:
Show that $x^4 + 2$ is irreducible in $\mathbb{Q} [x]$, taking your lead from the discussion of $x^2 + 2$ above. (see attached)
Can anyone help me to show this in the manner requested. Would appreciate the help.
Peter
2. ## Re: Factorization of Polynomials - Irreducibles
Originally Posted by Bernhard
I am reading Anderson and Feil - A First Course in Abstract Algebra.
On page 56 (see attached) Anderson and Feil show that the polynomial $f = x^2 + 2$ is irreducible in $\mathbb{Q}[x]$.
After this they challenge the reader with the following exercise:
Show that $x^4 + 2$ is irreducible in $\mathbb{Q}[x]$, taking your lead from the discussion of $x^2 + 2$ above. (see attached)
Can anyone help me to show this in the manner requested. Would appreciate the help.
Peter
You have the polynomial
$f(x)=\underbrace{x^4}_{p}+\underbrace{2}_{q}$
Then by the rational roots theorem if it has any linear factors they have to be of the form
$\frac{q_{i}}{p_{i}}$, where the $q_i$ run over the factors of the constant term $q$ and the $p_i$ over the factors of the leading coefficient. So the possible rational zeros are $\pm 1, \pm 2$.
If you check these in $f(x)$, none of them gives zero, so $f$ does not have any linear factors.
Now comes the harder work (it isn't too bad )
If it has a nontrivial factorization it must be a product of two quadratic factors. Since the leading coefficient is 1 and 2 only has two factors, it would have to look like
$(x^2+bx+1)(x^2+cx+2)=x^4+2$
If we expand the left hand side we get
$x^4+(b+c)x^3+(2+b+c)x^2+(2b+c)x+2$
This gives us 3 equations that must hold
$b+c=0 \quad 2+b+c=0 \quad 2b+c=0$
The first equation and the third equation force $b=-c=0$ but this contradicts the 2nd equation. So the equation does not factor
3. ## Re: Factorization of Polynomials - Irreducibles
one need not appeal to the rational roots theorem, because Q is an ORDERED field.
therefore: q^4 = (q^2)^2 ≥ 0 for all rational q.
if q is a root of x^4 + 2, we have: q^4 = -2 < 0, a contradiction.
this shows x^4 + 2 has no rational roots (indeed, no REAL roots), thus no linear factors.
x^4 + 2 = (x^2 + ax + b)(x^2 + cx + d) = x^4 + (a+c)x^3 + (b+d+ac)x^2 + (ad+bc)x + bd, with a,b,c,d rational.
this gives:
a + c = 0
b + d + ac = 0
bd = 2
the first equation tells us c = -a. this gives by substitution into the other 3 equations:
b + d = a^2
a(d - b) = 0
bd = 2
the equation a(d - b) = 0 tells us either a = 0, or b = d. we can look at each of these in turn.
a = 0 leads to: b = -d, in which case, bd = -b^2 ≤ 0, and thus cannot equal 2.
on the other hand, if b = d, then we have bd = b^2 = 2, in which case b = ±√2, which cannot happen because b is rational.
**********
with all due respect to TheEmptySet, his proof is incomplete (although what is missing isn't hard to show).
it may be that we have factors of the form:
(x^2 + bx - 1)(x^2 + cx - 2), as well.
also, the expansion seems to be incorrect, i get:
(x^2 + bx + 1)(x^2 + cx + 2) = x^4 + (b+c)x^3 + (3+bc)x^2 + (2b+c)x + 2
b + c = 0
3 + bc = 0
2b + c = 0
the same contradiction does hold, though. and the first and third equations are still the same, so we have b = 2b + c - (b + c) = 0 - 0 = 0, and thus c = 0, so that:
3 + 0 = 0, which certainly is never true.
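A quick computer-algebra cross-check of the same conclusion — a minimal SymPy sketch (my own illustration, assuming a recent SymPy; the hand argument above is of course the point of the exercise):

```python
from sympy import Poly, symbols

x = symbols('x')
# irreducibility over Q of the exercise polynomial and of the book's example
print(Poly(x**4 + 2, x, domain='QQ').is_irreducible)  # True
print(Poly(x**2 + 2, x, domain='QQ').is_irreducible)  # True
```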
4. ## Re: Factorization of Polynomials - Irreducibles
Thanks Deveno
Will work through your post now
Peter
https://www.fachschaft.informatik.tu-darmstadt.de/forum/viewtopic.php?f=311&t=14233 | ## A3 P2
Moderator: Computer Vision 2
Dragon
Mausschubser
Posts: 80
Joined: 18 Apr 2006, 15:36
### A3 P2
I don't manage to calculate the mrf_grad_log_prior correctly.
I tried to compute the derivative analytically (computing the derivative of the Student's t distribution and dividing it by the Student's t distribution) and then do this for all four neighbors of a pixel.
My idea was to construct a Matrix for every neighborhood.
One for the left neighbor of each pixel in my original Matrix, one for the right and so on.
Can anyone tell me what's wrong here?
Another problem is that I don't know how to treat the borders efficiently.
>flo<
Erstie
Posts: 20
Joined: 6 Sep 2005, 18:08
### Re: A3 P2
Try computing the gradient potientials for the horizontal direction and for the vertical direction likewise. Then you can add/subtract these results in order to get the result composed of the several neighborhood potentials. Try sketching the situation for a small example (e.g. 3x3) on a sheet of paper--that helps.
Computerversteher
Posts: 353
Joined: 2 Oct 2006, 18:53
### Re: A3 P2
could someone give me a hint on what
should be small (in absolute value)
means?
Is small < 10^-5 or is small < 1 ?
I think I implemented it correctly and I'm getting a maximum absolute difference of 0.25 and an average distance of 0.09. This somehow seems to be too big.
SebFreutel
Computerversteher
Posts: 317
Joined: 30 Oct 2006, 21:54
### Re: A3 P2
Maradatscha wrote: I think I implemented it correctly and I'm getting a maximum absolute difference of 0.25 and an average distance of 0.09.
I get very similar values.
If I make a scatterplot of the gradient values of each pixel (of a 10x10 patch from la.png or a random image) and corresponding estimated gradients (something like plot(g1(:),g2(:),'rx') at the end of test_grad.m), I get an uncorrelated point cloud in the x and y range -.25 to .25, so the analytical and estimated gradients seem to have nothing in common.
btw, this is actually A3 P3.
/edit, solved: okay, the mistake was that in the gradient computation, one cannot simply take the sum of the partial derivatives, but has to pay attention to the signs, since there is sometimes an inner derivative -1 coming from terms like $$\frac{\partial}{\partial x_{i,k}}(x_{i+1,k} - x_{i,k})$$.
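In case it helps someone later, a minimal NumPy sketch of that sign handling (my own illustrative version, not the official solution; the Student-t form and the parameters alpha, sigma are assumptions):

```python
import numpy as np

def dlog_potential(d, alpha=0.8, sigma=0.2):
    # d/dd of log rho(d) for the Student-t potential rho(d) = (1 + d**2/(2*sigma**2))**(-alpha)
    return -alpha * d / (sigma**2 + 0.5 * d**2)

def mrf_grad_log_prior(x, alpha=0.8, sigma=0.2):
    g = np.zeros_like(x, dtype=float)
    dh = x[:, 1:] - x[:, :-1]   # horizontal differences x[i, k+1] - x[i, k]
    dv = x[1:, :] - x[:-1, :]   # vertical differences
    gh = dlog_potential(dh, alpha, sigma)
    gv = dlog_potential(dv, alpha, sigma)
    # each difference contributes to both of its pixels, with inner derivative
    # +1 for the "+" pixel and -1 for the "-" pixel
    g[:, 1:] += gh
    g[:, :-1] -= gh
    g[1:, :] += gv
    g[:-1, :] -= gv
    return g
```

Border pixels simply receive fewer neighbour contributions, which the slicing above handles automatically.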
https://byjus.com/area-of-a-trapezoid-formula/ | # Area of a Trapezoid Formula
A trapezoid is described as a 2-dimensional geometric figure which has four sides, at least one pair of which are parallel. The parallel sides are called the bases, while the other sides are called the legs. There are different types of trapezoids: the isosceles trapezoid, the right trapezoid and the scalene trapezoid. A trapezoid whose two non-parallel sides have the same length is called an isosceles trapezoid. A right trapezoid is a trapezoid that has at least two right angles. A right isosceles trapezoid is a trapezoid that is simultaneously a right trapezoid and an isosceles trapezoid. In Euclidean geometry, such trapezoids are automatically rectangles.
Area of a Trapezoid: $A = \frac{1}{2} \times h \times (a + b)$
Where:
h = height (Note – This is the perpendicular height, not the length of the legs.)
a = the short base
b = the long base
### Solved Examples
Question 1: Find the area of a trapezoid whose bases are 17 cm and 12 cm and height is 7 cm ?
Solution:
Given,
a = 17 cm
b = 12 cm
h = 7 cm
Area of a trapezoid
= $\frac{1}{2}\, h(a + b)$
= $\frac{1}{2} \times 7 \times (17 + 12)$ cm²
= $\frac{1}{2} \times 7 \times 29$ cm²
= $\frac{1}{2} \times 203$ cm²
= 101.5 cm²
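The same computation as a small Python helper (an illustrative sketch; the function name is ours):

```python
def trapezoid_area(a, b, h):
    """Area of a trapezoid: (1/2) * h * (a + b), with h the perpendicular height."""
    return 0.5 * h * (a + b)

print(trapezoid_area(a=17, b=12, h=7))  # 101.5, i.e. 101.5 cm^2 for the example above
```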
https://forum.allaboutcircuits.com/threads/current-and-voltage-source.5128/#post-65394 | Current and voltage source
jasonhaykin
Joined Mar 8, 2007
23
Hi guys,
what is a current source ?
if someone says a supply of 5A , what do they mean ?
does current source have a voltage rating also ?
Thanks
beenthere
Joined Apr 20, 2004
15,819
The basic electronic formula, E = IR, says it must. E is voltage, I is current, R is resistance.
The 5 amp rating means that the regulator can supply up to 5 amps of current before the output voltage sags or the ripple increases beyond spec.
tiger_prodigy
Joined Mar 10, 2007
2
Just a thought, but is a current source the same as a voltage source except that it provides a constant current value and an adjustable voltage, so as to allow for a higher amount of watts to be drawn on? From the formula W=EI, when someone says a 5A source, does that not mean that you have a constant supply of 5A but a varying voltage value according to the amount of watts demanded by the circuit? Please correct me if I'm wrong, thanks.
beenthere
Joined Apr 20, 2004
15,819
Most power supplies work in the volltage mode, so they maintain a constant voltage despite a varying load. Some have a current regulation mode where the voltage changes to maintain a constant current.
It's easier maintaining a constant voltage, unless the amperage limit is exceeded. Constant current is not possible when the load gets too resistive - not enough voltage to push the current. And there is also the limit to the amount of current that can be supplied.
awright
Joined Jul 5, 2006
91
Current sources and voltage sources are exact analogues of each other and they each have their appropriate applications. We are most familiar with voltage sources as power supplies for most applications.
A VOLTAGE SOURCE such as a common bench power supply or a battery ideally provides a fixed voltage independent of the amount of current required to maintain the fixed voltage. It appears to the load to have a ZERO SOURCE RESISTANCE. That is, viewing the junction of the voltage source output and the load input as the center point of a two-resistor voltage divider fed from an infinitely stiff, fixed voltage source, if the top (source) resistor is zero, the voltage at the junction ramains fixed for any value of the lower (load) resistance other than zero. Of course, the whole analogy breaks down at the value of load resistance to which the source cannot supply enough current. That point would be the knee of the voltage/current curve of the supply. Any voltage source will have some rated maximum current it can supply before voltage deviates more than some specified amount from the set voltage.
The degree to which the voltage source deviates from zero source resistance is the degree to which it deviates from perfection, as all real devices must. On a voltage/current graph, the ideal voltage source would have a straight, horizontal line out to the point at which it cannot supply any more current and would then have a vertical line down to zero volts at its limiting current. In real life, the horizontal region slopes slightly or dramatically out to a sharp or very gradual knee (or burnout), followed by a steep or gradual slope down to zero volts.
A CURRENT SOURCE ideally supplies a fixed current independent of the voltage required to maintain that current. It appears to the load ideally to have an INFINITE SOURCE RESISTANCE supplied by an infinite voltage (neither being actually true). A current source will have a voltage compliance specification, that is, some voltage at which it can no longer maintain the desired current within some specified accuracy. Note the inverse analogy with a voltage source. Viewed as a two-resistor voltage divider supplied by a quasi-infinite voltage, if the upper resistor is extremely large (approaching infinite) then the value of the lower (load) resistor will have no influence on the total current flowing and the voltage at the junction will vary in exact proportion to the load resistance. This is a current source.
Both devices are imperfect, the degree of imperfection depending upon the intended application, the quality of the design, and where in the range of capabilities of the source you are operating, among many other factors.
A voltage (vertical axis) vs. current (horizontal axis) graph of the output of the two types of sources would be a straight horizontal line for a perfect voltage source and a straight, vertical line for a perfect current source.
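To put rough numbers on the divider picture above (illustrative values only, not taken from any particular supply), the following sketch treats a "stiff" 5 V voltage source as having a tiny source resistance and an approximate 5 mA current source as a large voltage behind a very large resistance:

```python
def load_voltage(v_src, r_src, r_load):
    # two-resistor divider: source resistance on top, load resistance below
    return v_src * r_load / (r_src + r_load)

for r_load in (1.0, 10.0, 100.0, 1000.0):
    v = load_voltage(5.0, 0.01, r_load)                # ~ideal voltage source
    i = load_voltage(5000.0, 1.0e6, r_load) / r_load   # ~ideal current source
    print(f"R_load={r_load:7.1f}  V_load={v:.3f} V   I_load={i*1000:.3f} mA")
```

The load voltage stays essentially at 5 V in the first case and the load current essentially at 5 mA in the second, no matter which load resistance is chosen — until, as described above, the source runs out of current or of voltage compliance.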
Many modern bench power supplies can be set up by the user for either constant voltage or constant current function by proper use of the voltage limit and current limit knobs.
A common application of a current source that you use every day would be a battery charger that limits the initial charging current to some safe value specified by the battery manufacturer. Another is as the "long tail" of a differential amplifier stage. Schematic diagrams will show such "long tailed" configurations in one or more stages of an operational amplifier IC and in many audio power amplifiers. In real applications, of course, there is no infinite resistance or infinite voltage supply - it is all simulated with active electronics. Take a look at the schematic of an op amp and look for two mirror-imaged transistors and notice how the emitters are connected directly or via a low resistance. The circuit below those connected emitters is a constant current source.
Another common application of current sources is in circuits powering resistance strain gauges. If the simulated source resistance appears to be high enough, the output voltage from the strain gauge will be directly proportional to the resistance of the gauge and, therefore, the amount of strain.
Hope this is interesting to you.
awright
jasonhaykin
Joined Mar 8, 2007
23
thanks , this is very interesting .
solarnovice
Joined Apr 12, 2008
3
I wondered if I could deviate slightly with my question for a home-made solar charger... Say you have a solar panel putting out 15 V at 100 mA, giving a 1.5 W power rating. Hope my numbers are right. Now most USB devices require a 5 to 5.3 V supply (max 500 mA) to charge up. So I looked around and found it was easy enough to limit the voltage to 5.3 V. But the thing that is bothering me is that the extra volts were dissipated via a resistor into heat. Could there be a way to use those extra volts to push up the amps put out by the circuit instead? I mean, the solar panel produces the extra volts but not the amps. Further, if I attached a device that required more amps than the solar panel can generate, what will be the effect?
My name says it all - I am a novice to this subject. Could be I am no novice in others! Thanks for any suggestions/clarifications.
Joined Apr 16, 2008
1
Very Thanx
Caveman
Joined Apr 15, 2008
471
solarnovice,
Shoulda been a new post, not a thread hijack, but since you asked...
What you want is a switching power supply. It is able to convert high voltages at lower currents to lower voltages at higher currents (minus some efficiency loss, which becomes heat).
If you want to do it really simply, look into the TI PTH08080W. It costs a bit more ($8.60), but it can be done. If you want to do it cheaper, you will have to learn a lot more.
Oh and read the entire datasheet. These kinds of things require attention to detail.
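For the solar question above, a rough back-of-envelope check (the converter efficiency is an assumed, typical value):

```python
p_in = 15.0 * 0.100        # panel: about 1.5 W available
efficiency = 0.85          # assumed efficiency of a small buck converter
v_out = 5.0
i_out_max = efficiency * p_in / v_out
print(round(i_out_max * 1000), "mA")   # roughly 255 mA, under the 500 mA USB limit
```

A device that asks for more current than this will typically just drag the output (and the panel voltage) down; neither the panel nor the converter can deliver more power than the panel produces.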
http://math.stackexchange.com/questions/116716/is-the-pythagorean-closure-of-mathbb-q-equal-to-the-field-of-constructible-nu | # Is the Pythagorean closure of $\mathbb Q$ equal to the field of constructible numbers?
A Pythagorean field is one in which every sum of two squares is again a square. $\mathbb Q$ is not Pythagorean, which is easy to see. I have read a theorem online which says that every field has a unique (up to isomorphism) Pythagorean closure. I haven't found the proof so I thought I should start with the most familiar field, $\mathbb Q$.
I was thinking if it would be possible to somehow imagine or characterize the Pythagorean closure $\mathbb P$ of $\mathbb Q.$ I know that in the field $\mathbb E$ of all constructible numbers, every positive number is a square because it is possible to construct square roots of already constructed numbers (by drawing a certain right triangle and its altitude). So $\mathbb E$ must be Pythagorean. $(1)$ But is it equal to $\mathbb P?$ If it's not, then what is the $\mathbb P$-dimension of $\mathbb E$?
Surely, for any $q_1,q_2,\ldots,q_n\in \mathbb Q,$ we must have $\sqrt{q_1^2+q_2^2+\cdots + q_n^2}\in \mathbb P$ by a simple induction. $(2)$ Is this all we have to adjoin to $\mathbb Q$ to obtain $\mathbb P?$ It looks like it is but I haven't done anything with infinite extensions and I don't know how to handle this.
-
I imagine "unique Pythagorean closure" could mean unique up to isomorphism. This leaves a question of whether two distinct but isomorphic subfields of $\mathbb{R}$ could be Pythagorean closures of $\mathbb{Q}$. But certainly there's one that you get just by closing under the usual field operations and taking square roots. – Michael Hardy Mar 5 '12 at 18:25
@CamMcLeman I'm looking for a basis. I think the two equations I wrote down may be helpful in finding it because they give a way to think about less complicated expressions. – user23211 Mar 5 '12 at 22:05
@CamMcLeman Could you please comment here? I think if I reply here it will create a mess. – user23211 Mar 5 '12 at 22:13
@ymar: Already each non-negative rational is the sum of four squares of rationals, so the $n$ above need not be large! – André Nicolas Mar 5 '12 at 22:31
@ymar: Look at $a/b=ab/b^2$. Express $ab$ as the sum of the squares of the integers $s$, $t$, $u$, $v$. Then $a/b=(s/b)^2+(t/b)^2+(u/b)^2+(v/b)^2$. – André Nicolas Mar 5 '12 at 23:01
No, the two fields are not the same: It is a result of Hilbert that the Pythagorean field is the maximal totally real subfield of the field of constructible numbers. So any constructible number which is not totally real (i.e., its minimal polynomial has complex roots) gives a non-Pythagorean Euclidean number. An easy example is $\sqrt{1+\sqrt{2}}$ (with non-real conjugate $\sqrt{1-\sqrt{2}}$). Generalizing this example, it's easy to see that $\mathbb{E}$ is infinite-dimensional over $\mathbb{P}$.
I believe your second question is answered in the affirmative by using that $\mathbb{P}$ is the smallest subfield of the Euclidean numbers closed under the operation of $x\rightarrow \sqrt{1+x^2}$, and inducting on the number of such operations you'd have to apply. I'd take a look at Roger Alperin's series of papers on trisections and origami for a good discussion of the fields in question (and others!).
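The non-totally-real example is easy to check directly — a small SymPy sketch (just one way to do it):

```python
from sympy import sqrt, symbols, minimal_polynomial, Poly

x = symbols('x')
m = minimal_polynomial(sqrt(1 + sqrt(2)), x)
print(m)                    # x**4 - 2*x**2 - 1
print(Poly(m, x).nroots())  # two real roots and two purely imaginary ones
```

Since the minimal polynomial has non-real roots, $\sqrt{1+\sqrt 2}$ is constructible but not totally real, so it lies in $\mathbb E$ but not in $\mathbb P$.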
-
Thank you very much. Could you please take a look at my edited question? – user23211 Mar 5 '12 at 21:51
@ymar: You're welcome! Though I think it makes more sense to ask a separate follow-up question than to append to the current one (to avoid confusion about which answers are answering which parts). In any case, I agree that it's clear that $(\mathbb{P}:\mathbb{Q})$ is infinite. – Cam McLeman Mar 5 '12 at 21:59
OK, I'll cut-paste this to a new question. – user23211 Mar 5 '12 at 22:07
I treat the concept of Pythagorean closure in these notes. In fact, I speak briefly about a class of fields being "closable" and give the Pythagorean closure as an example of that.
Let $F$ be a field. Here are two key (but easy) observations:
(CC1) Any algebraically closed field extension of $F$ is Pythagorean.
(CC2) If $\{K_i\}_{i \in I}$ is a family of Pythagorean field extensions of $F$ -- all contained in some large field $L$ -- then also $K = \bigcap_{i \in I} K_i$ is a Pythagorean field extension of $F$.
It follows that inside any algebraic closure $\overline{F}$ of $F$ there is a literally unique Pythagorean closure: the intersection of all Pythagorean algebraic extensions of $F$. (When $F = \mathbb{Q}$, Cam McLeman has given a spot-on description of it.)
-
They seem to be fantastic notes. I love that you included references. I've never seen lecture notes with references before! I will try to read them, but I may have to return to them later when I learn more about field theory. – user23211 Mar 7 '12 at 16:52
@ymar: thanks. Actually they are not really lecture notes in the sense that I have not (at least not yet) taught any corresponding course. They are more like "study notes" for me and my students...hence the references. (Although I do try to include references in my lecture notes as well...) – Pete L. Clark Mar 7 '12 at 17:19
(Same as my answer, but you explicitly treated uniqueness.) I might have written something like "If $\mathcal{K}$ is a family of Pythagorean field extensions of $F$ -- all contained in some large field $L$ -- then also $k=\bigcap\limits_{K\in\mathcal{K}} K$ is a Pythagorean field extension of $F$." Do the indices $i\in I$ add anything? – Michael Hardy Mar 8 '12 at 13:50
@Michael: Yes, my answer is quite similar to yours...but the uniqueness of the Pythagorean closure was explicitly part of the OP's question, so I felt it was worth it to nail down an answer to that. "Do the indices $i \in I$ add anything?" Not really, no. The way you might have written it seems just as good... – Pete L. Clark Mar 8 '12 at 13:58
When I think of the question of how to prove that every field has a unique Pythagorean closure, the first thing that comes to mind is that I might start with the fact that every field has a unique algebraic closure. Once you've got that, you wouldn't have to construct a field from scratch; you'd just look at the intersection of all subfields of the algebraic closure that include the field you started with and are closed under $(a,b)\mapsto\sqrt{a^2+b^2}$. The algebraic closure is closed under that operation, so you'd get something.
https://www.arxiv-vanity.com/papers/1409.5335/ | # Pimsner algebras and Gysin sequences from principal circle actions
Francesca Arici, Jens Kaad, Giovanni Landi International School of Advanced Studies (SISSA), Via Bonomea 265, 34136 Trieste, Italy International School of Advanced Studies (SISSA), Via Bonomea 265, 34136 Trieste, Italy Matematica, Università di Trieste, Via A. Valerio 12/1, 34127 Trieste, Italy and INFN, Sezione di Trieste, Trieste, Italy , ,
v1 September 2014; v2 February 2015
###### Abstract.
A self Morita equivalence over an algebra $B$, given by a $B$-bimodule $E$, is thought of as a line bundle over $B$. The corresponding Pimsner algebra $\mathcal{O}_E$ is then the total space algebra of a noncommutative principal circle bundle over $B$. A natural Gysin-like sequence relates the $KK$-theories of $\mathcal{O}_E$ and of $B$. Interesting examples come from a quantum lens space over a quantum weighted projective line (with arbitrary weights). The $KK$-theory of these spaces is explicitly computed and natural generators are exhibited.
###### Key words and phrases:
KK-theory, Pimsner algebras, Gysin sequences, circle actions, quantum principal bundles, quantum lens spaces, quantum weighted projective spaces.
###### 2010 Mathematics Subject Classification:
19K35, 55R25, 46L08, 58B32
Thanks. GL was partially supported by the Italian Project “Prin 2010-11 – Operator Algebras, Noncommutative Geometry and Applications”.
## 1. Introduction
In the present paper we put in close relation two notions that seem to have touched each other only occasionally in the recent literature. These are the notion of a Pimsner (or Cuntz-Krieger-Pimsner) algebra on one hand and that of a noncommutative (in general) principal circle bundle on the other.
At the -algebraic level one needs a self Morita equivalence of a -algebra , thus we look at a full Hilbert -module over together with an isomorphism of with the compacts on . Through a natural universal construction this data gives rise to a -algebra, the Pimsner algebra generated by . In the case where both and its Hilbert -module dual are finitely generated projective over one obtains that the -subalgebra generated by the elements of and becomes the total space of a noncommutative principal circle bundle with base space .
At the purely algebraic level we start from a -graded -algebra which forms the total space of a quantum principal circle bundle with base space the -subalgebra of invariant elements and with a coaction of the Hopf algebra coming from the -grading. Provided that comes equipped with a -norm, which is compatible with the circle action likewise defined by the -grading, we show that the closure of has the structure of a Pimsner algebra. Indeed, the first spectral subspace is then finitely generated and projective over the algebra . The closure of will become a Hilbert -module over , the closure of , and the couple will lend itself to a Pimsner algebra construction.
The commutative version of this part of our program was spelled out in [11, Prop. 5.8]. This amounts to showing that the continuous functions on the total space of a (compact) principal circle bundle can be described as a Pimsner algebra generated by a classical line bundle over the compact base space.
With a Pimsner algebra there come two natural six term exact sequences in -theory, which relate the -theories of the Pimsner algebra with that of the -algebra of (the base space) scalars . The corresponding sequences in -theory are noncommutative analogues of the Gysin sequence which in the commutative case relates the -theories of the total space and of the base space. The classical cup product with the Euler-class is in the noncommutative setting replaced by a Kasparov product with the identity minus the generating Hilbert -module . Predecessors of these six term exact sequences are the Pimsner-Voiculescu six term exact sequences of [19] for crossed products by the integers.
Interesting examples are quantum lens spaces over quantum weighted projective lines. The latter spaces are defined as fixed points of weighted circle actions on the quantum -sphere . On the other hand, quantum lens spaces are fixed points for the action of a finite cyclic group on . For general coprime positive integers and any positive integer , the coordinate algebra of the lens space is a quantum principal circle bundle over the corresponding coordinate algebra for the quantum weighted projective space, thus generalizing the cases studied in [5].
At the -algebra level the lens spaces are given as Pimsner algebras over the -algebra of the continuous functions over the weighted projective spaces (see §6). Using the associated exact sequences coming from the construction of [18], we explicitly compute in §7 the -theory of these spaces for general weights. A central character in this computation is played by an integer matrix whose entries are index pairings. These are in turn computed by pairing the corresponding Chern-Connes characters in cyclic theory. The computation of the -theory of our class of -deformed lens spaces is, to the best of our knowledge, a novel one. Also, it is worth emphasizing that the quantum lens spaces and weighted projective spaces are in general not -equivalent to their commutative counterparts.
Pimsner algebras were introduced in [18]. This notion gives a unifying framework for a range of important -algebras including crossed products by the integers, Cuntz-Krieger algebras [9, 8], and -algebras associated to partial automorphisms [10]. Generalized crossed products, a notion which is somewhat easier to handle, were independently invented in [3]. More recently, Katsura has constructed Pimsner algebras for general -correspondences [15]. In the present paper we work in a simplified setting (see Assumption 2.1 below) which is close to the one of [3].
#### Acknowledgments
We are very grateful to Georges Skandalis for many suggestions and to Ralf Meyer for useful discussions. We thank Tomasz Brzeziński for making us aware of the reference [17]. This paper was finished at the Hausdorff Research Institute for Mathematics in Bonn during the 2014 Trimester Program “Non-commutative Geometry and its Applications”. We thank the organizers of the Program for the kind invitation and all people at HIM for the nice hospitality.
## 2. Pimsner algebras
We start by reviewing the construction of Pimsner algebras associated to Hilbert -modules as given in [18]. Rather than the full fledged generality we aim at a somewhat simplified version adapted to the context of the present paper, and motivated by our geometric intuition coming from principal circle bundles.
Our reference for the theory of Hilbert -modules is [16]. Throughout this section will be a countably generated (right) Hilbert -module over a separable -algebra , with -valued (and right -linear) inner product denoted ; or simply to lighten notations. Also, is taken to be full, that is the ideal is dense in .
Given two Hilbert -modules and over the same algebra , we denote by the space of bounded adjointable homomorphisms . For each of these there exists a homomorphism (the adjoint) with the property that for any and . Given any pair , an adjointable operator is defined by
$$\theta_{\xi,\eta}(\zeta)=\xi\,\langle\eta,\zeta\rangle\,, \qquad \forall\,\zeta\in E.$$
The closed linear subspace of spanned by elements of the form as above is denoted , the space of compact homomorphisms. When , it results that is a -algebra with the (sub) -algebra of compact endomorphisms of .
### 2.1. The algebras and their universal properties
On top of the above basic conditions, the following will remain in effect as well:
###### Assumption 2.1.
There is a -homomorphism which induces an isomorphism .
Next, let be the dual of (when viewed as a Hilbert -module):
$$E^*:=\left\{\phi\in \operatorname{Hom}_B(E,B)\ \middle|\ \exists\,\xi\in E\ \text{with}\ \phi(\eta)=\langle\xi,\eta\rangle\ \ \forall\,\eta\in E\right\}.$$
Thus, with , if is the operator defined by , for all , every element of is of the form for some . By its definition, . The dual can be given the structure of a (right) Hilbert -module over . Firstly, the right action of on is given by
$$\lambda_\xi\, b:=\lambda_\xi\circ\phi(b).$$
Then, with operator for , the inner product on is given by
$$\langle\lambda_\xi,\lambda_\eta\rangle:=\phi^{-1}(\theta_{\xi,\eta})\,,$$
and is full as well. With the -homomorphism defined by , the pair satisfies the conditions in Assumption 2.1.
We need the interior tensor product of with itself over . As a first step, one constructs the quotient of the vector space tensor product by the ideal generated by elements of the form
$$\xi b\otimes\eta-\xi\otimes\phi(b)\eta\,, \qquad \text{for } \xi,\eta\in E,\ b\in B. \tag{2.1}$$
There is a natural structure of right module over with the action given by
$$(\xi\otimes\eta)\,b=\xi\otimes(\eta b)\,, \qquad \text{for } \xi,\eta\in E,\ b\in B,$$
and a -valued inner product given, on simple tensors, by
$$\langle\xi_1\otimes\eta_1,\,\xi_2\otimes\eta_2\rangle=\langle\eta_1,\phi(\langle\xi_1,\xi_2\rangle)\,\eta_2\rangle \tag{2.2}$$
and extended by linearity. The inner product is well defined and has all required properties; in particular, the null space is shown to coincide with the subspace generated by elements of the form in (2.1). One takes and defines to be the Hilbert module obtained by completing with respect to the norm induced by (2.2). The construction can be iterated and, for , we denote by , the -fold interior tensor power of over . Like-wise, denotes the -fold interior tensor power of over .
To lighten notation, in the following we define, for each , the modules
$$E^{(n)}:=\begin{cases} E^{\hat{\otimes}_\phi n} & n>0\\ B & n=0\\ (E^*)^{\hat{\otimes}_{\phi_*}(-n)} & n<0.\end{cases}$$
Clearly, and . We define the Hilbert -module over :
$$E_\infty:=\bigoplus_{n\in\mathbb{Z}}E^{(n)}.$$
For each we have a bounded adjointable operator defined component-wise by
\begin{align*}
S_\xi(b) &:= \xi\cdot b, & b&\in B,\\
S_\xi(\xi_1\otimes\cdots\otimes\xi_n) &:= \xi\otimes\xi_1\otimes\cdots\otimes\xi_n, & n&>0,\\
S_\xi(\lambda_{\xi_1}\otimes\cdots\otimes\lambda_{\xi_{-n}}) &:= \lambda_{\xi_2}\cdot\phi^{-1}(\theta_{\xi_1,\xi})\otimes\lambda_{\xi_3}\otimes\cdots\otimes\lambda_{\xi_{-n}}, & n&<0.
\end{align*}
In particular, .
The adjoint of is easily found to be given by :
\begin{align*}
S_{\lambda_\xi}(b) &:= \lambda_\xi\cdot b, & b&\in B,\\
S_{\lambda_\xi}(\xi_1\otimes\dots\otimes\xi_n) &:= \phi(\langle\xi,\xi_1\rangle)(\xi_2)\otimes\xi_3\otimes\dots\otimes\xi_n, & n&>0,\\
S_{\lambda_\xi}(\lambda_{\xi_1}\otimes\dots\otimes\lambda_{\xi_{-n}}) &:= \lambda_\xi\otimes\lambda_{\xi_1}\otimes\dots\otimes\lambda_{\xi_{-n}}, & n&<0;
\end{align*}
and in particular .
From its definition, each has a natural structure of Hilbert -module over and, with again denoting the Hilbert -module compacts, we have isomorphisms
$$\mathcal{K}(E^{(n)},E^{(m)})\simeq E^{(m-n)}.$$
###### Definition 2.2.
The Pimsner algebra of the pair is the smallest -subalgebra of which contains the operators for all . The Pimsner algebra is denoted by with inclusion .
There is an injective -homomorphism . This is induced by the injective -homomorphism defined component-wise by
\begin{align*}
\phi(b)(b') &:= b\cdot b',\\
\phi(b)(\xi_1\otimes\dots\otimes\xi_n) &:= \phi(b)(\xi_1)\otimes\xi_2\otimes\dots\otimes\xi_n,\\
\phi(b)(\lambda_{\xi_1}\otimes\dots\otimes\lambda_{\xi_n}) &:= \phi_*(b)(\lambda_{\xi_1})\otimes\lambda_{\xi_2}\otimes\dots\otimes\lambda_{\xi_n}=\lambda_{\xi_1\cdot b^*}\otimes\lambda_{\xi_2}\otimes\dots\otimes\lambda_{\xi_n},
\end{align*}
and which factorizes through the Pimsner algebra . Indeed, for all it holds that , that is the operator on is right-multiplication by the element .
A Pimsner algebra is universal in the following sense [18, Thm. 3.12]:
###### Theorem 2.3.
Let be a -algebra and let be a -homomorphism. Suppose that there exist elements for all such that
1. for all and ,
2. and for all and ,
3. for all ,
4. for all .
Then there is a unique -homomorphism with for all .
Also, in the context of this theorem the identity follows automatically.
###### Remark 2.4.
In the paper [18], the pair was referred to as a Hilbert bimodule, since the map (taken to be injective there) naturally endows the right Hilbert module with a left module structure. As mentioned, our Assumption 2.1 simplifies the construction to a great extent (see also [3]). For the pair with a general -homomorphism , (in particular, a non necessarily injective one), the name -correspondence over has recently emerged as a more common one, reserving the terminology Hilbert bimodule to the more restrictive case where one has both a left and a right inner product satisfying an extra compatibility relation.
### 2.2. Six term exact sequences
With a Pimsner algebra there come two six term exact sequences in -theory. Firstly, since factorizes through the compacts , the following class is well defined.
###### Definition 2.5.
The class in defined by the even Kasparov module (with trivial grading) will be denoted by .
Next, let denote the orthogonal projection with
$$\operatorname{Im}(P)=\Big(\bigoplus_{n=1}^{\infty}E^{(n)}\Big)\oplus B\ \subseteq\ E_\infty.$$
Notice that for all and thus for all .
Then, let and recall that is the inclusion.
###### Definition 2.6.
The class in defined by the odd Kasparov module will be denoted by .
For any separable $C^*$-algebra $C$ we then have the group homomorphisms
$$[E]:KK_*(B,C)\to KK_*(B,C)\,, \qquad [E]:KK_*(C,B)\to KK_*(C,B)$$
and
$$[\partial]:KK_*(C,\mathcal{O}_E)\to KK_{*+1}(C,B)\,, \qquad [\partial]:KK_*(B,C)\to KK_{*+1}(\mathcal{O}_E,C)\,,$$
which are induced by the Kasparov product.
The six term exact sequences in -theory given in the following theorem were constructed by Pimsner, see [18, Thm. 4.8].
###### Theorem 2.7.
Let $\mathcal{O}_E$ be the Pimsner algebra of the pair $(E,\phi)$ over the $C^*$-algebra $B$. If $C$ is any separable $C^*$-algebra, there are two exact sequences:
$$KK_0(C,B)\xrightarrow{\,1-[E]\,}KK_0(C,B)\xrightarrow{\,i_*\,}KK_0(C,\mathcal{O}_E)\xrightarrow{\,[\partial]\,}KK_1(C,B)\xrightarrow{\,1-[E]\,}KK_1(C,B)\xrightarrow{\,i_*\,}KK_1(C,\mathcal{O}_E)\xrightarrow{\,[\partial]\,}KK_0(C,B)$$
and
$$KK_0(\mathcal{O}_E,C)\xrightarrow{\,i^*\,}KK_0(B,C)\xrightarrow{\,1-[E]\,}KK_0(B,C)\xrightarrow{\,[\partial]\,}KK_1(\mathcal{O}_E,C)\xrightarrow{\,i^*\,}KK_1(B,C)\xrightarrow{\,1-[E]\,}KK_1(B,C)\xrightarrow{\,[\partial]\,}KK_0(\mathcal{O}_E,C)$$
with $i_*$, $i^*$ the homomorphisms in $KK$-theory induced by the inclusion $i: B\to\mathcal{O}_E$.
###### Remark 2.8.
For $C=\mathbb{C}$, the first sequence above reduces to
$$K_0(B)\xrightarrow{\,1-[E]\,}K_0(B)\xrightarrow{\,i_*\,}K_0(\mathcal{O}_E)\xrightarrow{\,[\partial]\,}K_1(B)\xrightarrow{\,1-[E]\,}K_1(B)\xrightarrow{\,i_*\,}K_1(\mathcal{O}_E)\xrightarrow{\,[\partial]\,}K_0(B).$$
This could be considered as a generalization of the classical Gysin sequence in $K$-theory (see [14, IV.1.13]) for the ‘line bundle’ $E$ over the ‘noncommutative space’ $B$ and with the map $1-[E]$ having the role of the Euler class of the line bundle. The second sequence would then be an analogue in $K$-homology:
$$K^0(\mathcal{O}_E)\xrightarrow{\,i^*\,}K^0(B)\xrightarrow{\,1-[E]\,}K^0(B)\xrightarrow{\,[\partial]\,}K^1(\mathcal{O}_E)\xrightarrow{\,i^*\,}K^1(B)\xrightarrow{\,1-[E]\,}K^1(B)\xrightarrow{\,[\partial]\,}K^0(\mathcal{O}_E).$$
Examples of Gysin sequences in -theory were given in [2] for line bundles over quantum projective spaces and leading to a class of quantum lens spaces. These examples will be generalized later on in the paper to a class of quantum lens spaces as circle bundles over quantum weighted projective spaces with arbitrary weights.
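Recall, for comparison, that for a complex line bundle $L$ over a compact space $X$, with associated unit circle bundle $\pi: S(L)\to X$, the classical Gysin sequence in topological $K$-theory reads
$$\cdots\to K^0(X)\xrightarrow{\,1-[L]\,}K^0(X)\xrightarrow{\,\pi^*\,}K^0(S(L))\to K^1(X)\xrightarrow{\,1-[L]\,}K^1(X)\xrightarrow{\,\pi^*\,}K^1(S(L))\to\cdots$$
with $1-[L]$ the $K$-theoretic Euler class of $L$; up to the usual identifications, the sequences above reduce to this one in the commutative case.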
## 3. Pimsner algebras and circle actions
An interesting source of Pimsner algebras consists of -algebras which are equipped with a circle action and subject to an extra completeness condition on the associated spectral subspaces. We now investigate this relationship.
Throughout this section will be a -algebra and will be a strongly continuous action of the circle on .
### 3.1. Algebras from actions
For each , define the spectral subspace
$$A^{(n)}:=\{\xi\in A \mid \sigma_z(\xi)=z^{-n}\,\xi\ \ \text{for all } z\in S^1\}.$$
Then the invariant subspace is a -subalgebra and each is a (right) Hilbert -module over with right action induced by the algebra structure on and -valued inner product just , for all .
###### Assumption 3.1.
The data as above is taken to satisfy the conditions:
1. The -algebra is separable.
2. The Hilbert -modules and are full and countably generated over the -algebra .
###### Lemma 3.2.
With the -homomorphism simply defined by , the pair satisfies the conditions of Assumption 2.1.
###### Proof.
To prove that is injective, let and suppose that for all . It then follows that for all . But this implies that for all . Since is full this shows that . We may thus conclude that is injective, and the image of is therefore closed.
To conclude that it is now enough to show that the operator for all . But this is clear since .
To prove that it suffices to check that for all (again since is full). But this is true being . ∎
The condition that both and are full over has the important consequence that the action is semi-saturated in the sense of the following:
###### Definition 3.3.
A circle action on a -algebra is called semi-saturated if is generated, as a -algebra, by the fixed point algebra together with the first spectral subspace .
###### Proposition 3.4.
Suppose that and are full over . Then the circle action is semi-saturated.
###### Proof.
With refering to the norm-closure, we show that the Banach algebra
cl(∞∑n=0A(n))⊆A
is generated by and . A similar proof in turn shows that
cl(∞∑n=0A(−n))⊆A
is generated by and . Since the span is norm-dense in (see [10, Prop. 2.5]), this proves the proposition. We show by induction on that
(A(1))n:=span{x1⋅…⋅xn∣x1,…,xn∈A(1)}
is dense in . For the statement is void.
Suppose thus that the statement holds for some . Then, let and choose a countable approximate identity for the separable -algebra . Let be given. We need to construct an element such that
∥x−y∥<ε .
To this end we first remark that the sequence converges to . Indeed, this follows due to and since, for all ,
∥x⋅um−x∥2=∥umx∗xum+x∗x−x∗xum−umx∗x∥.
We may thus choose an such that
∥x⋅um−x∥<ε/3 .
Since is full over , there are elements and so that
∥x⋅um−k∑j=1x⋅ξ∗j⋅ηj∥<ε/3 .
Furthermore, since we may apply the induction hypothesis to find elements such that
∥k∑j=1x⋅ξ∗j⋅ηj−k∑j=1zj⋅ηj∥<ε/3 .
Finally, it is straightforward to verify that for the element
y:=k∑j=1zj⋅ηj∈(A(1))n+1
it holds that: . This proves the present proposition. ∎
Having a semi-saturated action one is lead to the following theorem [3, Thm. 3.1].
###### Theorem 3.5.
The Pimsner algebra is isomorphic to . The isomorphism is given by for all .
In much of what follows, the -algebras of interest with a circle action, will come from closures of dense -graded -algebras, with the -grading defining the circle action in a natural fashion.
Let be a -graded unital -algebra. The grading is compatible with the involution , this meaning that whenever for some . For , define the -automorphism by
σw:x↦w−nxforx∈A(n)n∈Z.
We will suppose that we have a -norm on satisfying
∥σw(x)∥≤∥x∥forallw∈S1x∈A,
thus the action has to be isometric. The completion of is denoted by .
The following standard result is here for the sake of completeness and its use below. The proof relies on the existence of a conditional expectation naturally associated to the action.
###### Lemma 3.6.
The collection extends by continuity to a strongly continuous action of on . Each spectral subspace agrees with the closure of .
###### Proof.
Once is shown to be dense in the rest follows from standard arguments. Thus, for , define the bounded operator by
E(n):x↦∫S1wnσw(x) dw,
where the integration is carried out with respect to the Haar-measure on . We have that for all and then that . This implies that is dense. ∎
Let now and consider the unital -subalgebra . Then is a -graded unital -algebra as well and we denote the associated circle action by . Let and choose a such that . Then
σ1/dw(xnd)=wn⋅xnd=znd⋅xnd=σz(xnd),forallxnd∈A(nd),
and it follows that for all . With the -norm obtained by restriction , it follows in particular that
∥σ1/dw(x)∥≤∥x∥
by our standing assumption on the compatibility of with the norm on . The -completion of is denoted by .
###### Proposition 3.7.
Suppose that is semi-saturated on and let . Then we have unitary isomorphisms of Hilbert -modules
(A(1))ˆ⊗ϕd≃(A1/d)(1)and(A(−1))ˆ⊗ϕd≃(A1/d)(−1)
induced by the product .
###### Proof.
We only consider the case of since the the proof for is the same.
Observe firstly that . Thus Lemma 3.6 yields . This implies that the product is a well-defined homomorphism of right modules over (here “” refers to the algebraic tensor product of bimodules over ). Furthermore, since
⟨x1⊗…⊗xd,y1⊗…⊗yd⟩=x∗d⋅…⋅x∗1⋅y1⋅…⋅yd,
we get that extends to a homomorphism of Hilbert -modules over with for all .
It is therefore enough to show that is dense. But this is a consequence of [10, Prop. 4.8]. ∎
###### Lemma 3.8.
Suppose that satisfies the conditions of Assumption 3.1. Then satisfies the conditions of Assumption 3.1 for all .
###### Proof.
We only need to show that the Hilbert -modules and are full and countably generated over .
By Proposition 3.4 we have that is semi-saturated. It thus follows from Proposition 3.7 that
A(d)≃(A(1))ˆ⊗ϕdandA(−d)≃(A(−1))ˆ⊗ϕd . (3.1)
Since both and are full and countably generated by assumption these unitary isomorphisms prove the lemma. ∎
The following result is a stronger version of Theorem 3.5 since it incorporates all the spectral subspaces and not just the first one.
###### Theorem 3.9.
Suppose that the circle action on satisfies the conditions in Assumption 3.1. Then the Pimsner algebra is isomorphic to the -algebra for all . The isomorphism is given by for all .
###### Proof.
This follows by combining Lemma 3.8, Proposition 3.7 and Theorem 3.5. ∎
We finally investigate what happens when the -norm on is changed. Thus, let be an alternative -norm on satisfying
∥σw(x)∥′≤∥x∥′forallw∈S1andx∈A.
The corresponding completion will carry an induced circle action . The next theorem can be seen as a manifestation of the gauge-invariant uniqueness theorem, [15, Thm. 6.2 and Thm. 6.4]. This property was indirectly used already in [18, Thm. 3.12] for the proof of the universal properties of Pimsner algebras.
###### Theorem 3.10.
Suppose that for all . Then satisfies the conditions of Assumption 3.1 if and only if satisfies the conditions of Assumption 3.1. And in this case, the identity map induces an isomorphism of -algebras. In particular, we have that for all .
###### Proof.
Remark first that the identity map induces an isometric isomorphism of Hilbert -modules for all . This is a consequence of the identity for all . But then we also have isomorphisms
(A(1))ˆ⊗ϕn≃(A′(1))ˆ⊗ϕnand(A(−1))ˆ⊗ϕn≃(A′(−1))ˆ⊗ϕn
for all . These observations imply that satisfies the conditions of Assumption 3.1 if and only if satisfies the conditions of Assumption 3.1. But it then follows from Theorem 3.5 that
A≃OA(1)≃OA′(1)≃A′,
with corresponding isomorphism induced by the identity map . ∎
## 4. Quantum principal bundles and Z-graded algebras
We start by recalling the definition of a quantum principal -bundle.
Later on in the paper we shall exhibit a novel class of quantum lens spaces as principal -bundles over quantum weighted projective lines with arbitrary weights.
### 4.1. Quantum principal bundles
Define the unital complex algebra
$$\mathcal{O}(U(1)):=\mathbb{C}[z,z^{-1}]\big/\big\langle 1-zz^{-1}\big\rangle$$
where denotes the ideal generated by in the polynomial algebra in two variables. The algebra is a Hopf algebra by defining, for all , coproduct , antipode and counit . We simply write for short.
Let be a complex unital algebra and suppose in addition that it is a right comodule algebra over , that is we have a homomorphism of unital algebras
$$\Delta_R: A\to A\otimes\mathcal{O}(U(1))\,,$$
which also provides a coaction of the Hopf algebra on .
Let denote the unital subalgebra of consisting of coinvariant elements for the coaction.
###### Definition 4.1.
One says that the datum is a quantum principal -bundle when the canonical map
$$\mathrm{can}: A\otimes_B A\to A\otimes\mathcal{O}(U(1))\,, \qquad x\otimes y\mapsto x\cdot\Delta_R(y)\,,$$
is an isomorphism.
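As a sketch of the standard correspondence used in what follows (with the sign convention chosen to match the circle action $\sigma_z(\xi)=z^{-n}\xi$ of Section 3; other conventions in the literature differ by a sign): if $A=\oplus_{n\in\mathbb{Z}}A^{(n)}$ is a $\mathbb{Z}$-graded unital algebra, a coaction is defined on homogeneous elements by
$$\Delta_R(x)\;=\;x\otimes z^{-n}, \qquad x\in A^{(n)},$$
and the coinvariant subalgebra is then the degree-zero part $B=A^{(0)}$.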
http://www.zndxzk.com.cn/paper/paperView.aspx?id=paper_319915 | ## Journal of Central South University
Volume 50, No. 7 (Serial No. 299), July 2019
Electrochemical synthesis of nickel isooctanoate
School of Metallurgy and Environment, Central South University, Changsha 410083, China
Abstract:The electrochemical synthesis of nickel isooctanoate was studied systematically. The effects of supporting electrolyte NaOH concentration, NaCl concentration, current density and temperature on the electrochemical synthesis process were investigated, and the optimal condition for each influencing factor was determined. The impurities were removed by dissolving and filtering, and finally the organic solvent was separated by distillation to obtain a final product. The product was qualitatively analyzed using FT-IR to determine the structure of the product, and the product was analyzed by dilute hydrochloric acid lysate. Finally, the electrolyte was repeatedly recycled. The results show that the optimal experimental condition is as follows: NaOH concentration of 0.2 mol/L, NaCl concentration of 2.0 mol/L, current density of 2 380 A/m2, temperature of 60 ℃. The yield is above 95%, the nickel content of the product is above 10%, and the technical indicators meet the requirements of Q/FMH02—2011. The product is identified as nickel isooctanoate by FT-IR. Through the circulation use experiment of the electrolyte, the nickel mass fraction, the yield and the impurities sodium mass fraction of the product in the five-cycle experiments are relatively stable, and the electrolyte can be recycled.
Key words: electrochemical synthesis; nickel isooctanoate; recycling; FT-IR
https://www.physicsforums.com/threads/matrix-of-a-linear-mapping.623390/ | # Matrix of a linear mapping
1. Jul 25, 2012
### juanma101285
Hi, I have the following problem that is solved, but I get lost at one step and cannot find how to do it in the notes. I would really appreciate it if someone could tell me where my teacher gets the result from.
The problem says:
"Find the matrix of linear mapping $T:P_3 → P_3$ defined by
$(Tp)(t)=p(t)+p'(t)+p(0)$
with respect to the basis {$1,t,t^2,t^3$} of $P_3$. Deduce that, given $q \in P_3$, there exists $p \in P_3$ such that
$q(t)=p(t)+p'(t)+p(0)$."
And I get lost here... It says:
"We have
$T(1)=2$
$T(t)=1+t$
$T(t^2)=2t+t^2$
$T(t^3)=3t^2+t^3$"
So I don't know why it says $T(1)=2$.... I think $T(t)=1+t$ because it is the derivative of t plus t, and $T(t^2)$ is the derivative of $t^2$ plus $t^2$... But why T(1)=2?
Thanks a lot!
2. Jul 25, 2012
### Muphrid
I think this would be clearer written slightly differently.
$$T[p(t)] = p(0) + \frac{dp}{dt} + p(t)$$
When you say $T(1)$, you substitute $p(t) = 1$ into the above. Clearly, then, $p(0) = 1$ because $p(t) = 1$ for any $t$. That takes care of the first and last terms. The derivative is zero because $p$ is a constant.
In short, you get
$$T(1) = 1 + 0 + 1 = 2$$
3. Jul 25, 2012
### Fredrik
Staff Emeritus
T takes polynomials to polynomials, so when he writes T(1), 1 denotes a polynomial. The only polynomial that it makes sense to denote by 1 is the function that takes every real number to 1. It might be less confusing to denote it by a symbol like I instead. Then for all real numbers t, we have $(T(I))(t)=I(t)+I'(t)+I(0)=1+0+1$, as Muphrid has already said. So T(I) is the polynomial that takes every real number t to 2. In this context, it seems to be standard to denote this polynomial by 2.
I think this notation is worse, because now it looks like T is acting on the real number p(t) instead of on the polynomial p.
Last edited: Jul 25, 2012
4. Jul 25, 2012
### Fredrik
Staff Emeritus
I don't understand this argument. To find T(t), you need to do something similar to what I did above.
Here t denotes the identity map on the set of real numbers, i.e. the function that takes every real number to itself. If we use this notation, then for all real numbers s, we have
$(T(t))(s)=t(s)+t'(s)+t(0)=s+1+0$. So T(t) is the polynomial that takes s to 1+s. In this notation, that polynomial is denoted by 1+t.
This problem shows how confusing it can be to use notations like t2 both for a number (the square of the number t) and a function (the function that takes every real number to its square). I would prefer to use a different notation for the basis vectors, for example $\{e_1,e_2,e_3,e_4\}$ instead of $\{1,t,t^2,t^3\}$, where the $e_i$ are defined by
$e_1(s)=1$ for all s.
$e_2(s)=s$ for all s.
...and so on.
Now what the problem writes as T(1) and T(t) can be written as $Te_1$ and $Te_2$ respectively, and for all $t\in\mathbb R$,
$Te_1(t)=e_1(t)+e_1'(t)+e_1(0)=1+0+1=2=2(e_1(t))=(2e_1)(t),$
$Te_2(t)=e_2(t)+e_2'(t)+e_2(0)=t+1+0=e_2(t)+e_1(t)=e_1(t)+e_2(t)=(e_1+e_2)(t)$.
Since this holds for all t, we have $Te_1=2e_1$ and $Te_2=e_1+e_2$.
Can you do $T(t^2)$ and $T(t^3)$ now? You can stick to the t^something notation if it doesn't confuse you, but then you will have to write weird things like ${t^2}'(s)=2s$.
Last edited: Jul 25, 2012
5. Jul 26, 2012
### HallsofIvy
Staff Emeritus
The derivative of the polynomial p(t)= 1 (for all t) is 0 so p(t)+ p'(t)+ p(0)= 1+ 0+ 1= 2.
More precisely p(t)+ p'(t)+ p(0)= t+ 1+ 0= t+ 1
Again, $p(t)+ p'(t)+ p(0)= t^2+ 2t+ 0= t^2+ 2t$
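For completeness, the whole matrix can also be assembled mechanically — a minimal SymPy sketch (the variable names are mine):

```python
import sympy as sp

t = sp.symbols('t')
basis = [sp.Integer(1), t, t**2, t**3]

def T(p):
    # (Tp)(t) = p(t) + p'(t) + p(0)
    return sp.expand(p + sp.diff(p, t) + p.subs(t, 0))

# column j = coordinates of T(basis[j]) in the basis {1, t, t^2, t^3}
cols = [[sp.Poly(T(b), t).coeff_monomial(t**i) for i in range(4)] for b in basis]
M = sp.Matrix(cols).T
print(M)        # rows: [2, 1, 0, 0], [0, 1, 2, 0], [0, 0, 1, 3], [0, 0, 0, 1]
print(M.det())  # 2, nonzero, so T is invertible and every q in P_3 equals T(p) for some p
```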
https://community.canvaslms.com/t5/Studio-Users/Can-I-move-where-where-closed-captions-appear-in-Arc/m-p/157838/highlight/true
Surveyor
Can I move where where closed captions appear in Arc?
When using the closed captions function in Arc, the captions appear at the top of the screen. Is there a way to have the closed captions to appear at the bottom of the screen?
3 Replies
Navigator
Hi Taylor,
I'm going to share this question over with the Arc Users Group. I'm sure someone in that group can help answer your question.
Navigator
Not currently. When we met in July with the Arc team at InstructureCon, they indicated that it was put at the top so that it didn't appear over the user comments that are shown at the bottom of the screen.
This was in response to the concern I voiced that the font is too small and the WebVTT standard allows for positioning information, yet they are ignoring it.
Surveyor
Thanks, James. I agree. The font is too small.
https://mathoverflow.net/questions/244300/arakelov-divisors-and-the-meaning-of-real-coefficients | Arakelov divisors and the meaning of real coefficients
I'm learning Arakelov theory on arithmetic surfaces and I have the following general question.
Let $K$ be a number field and consider its ring of integers $O_K$. Moreover let $S:=\operatorname{Spec} O_K$ and consider a regular, projective arithmetic surface $\pi: X\to S$. For every embedding $\sigma:K\to\mathbb C$ we have the fiber at infinity $X_\sigma$ which is a Riemann surface. If we fix a Kähler metric $\Omega_\sigma$ on each $X_\sigma$, then an Arakelov divisor can be written uniquely as: $$\hat D:=D+\sum_{\sigma}\alpha_\sigma X_\sigma$$
where $D$ is a usual divisor of $X$ and $\alpha_\sigma\in\mathbb R$.
I don't understand why the coefficients for the fibers at infinity are in $\mathbb R$ and not in $\mathbb Z$, why do we need this? For example consider the real vector $\varepsilon=(\varepsilon_\sigma)_\sigma$ where for every $\sigma$ the real number $\varepsilon_\sigma$ is small enough and construct the Arakelov divisor $$\hat D_\varepsilon:=D+\sum_{\sigma}(\alpha_\sigma+\varepsilon_\sigma) X_\sigma$$ (note that $D$ is fixed)
What are the geometrical differences between $\hat D$ and $\hat D_\varepsilon$?
• You can define formal linear combinations with integer coefficients if you want, but already at very early steps in the theory you will find difficulties. For instance, check the definition of principal Arakelov divisors. The coefficients at infinity are not integers (in general). Jul 14 '16 at 19:25
• Arakelov's arithmetic intersection pairing on arithmetic surfaces has been extended by Deligne and Gillet-Soulé. In these more general approaches, an arithmetic divisor is a pair $(D,g_D)$, where $g_D$ is a Green current at infinity. The choice of a Kähler form allows to normalize the Green current up to scalars (at each place). — Although this does not answer the question, it may help explain the appearance of real coefficients.
– ACL
Jul 15 '16 at 0:23
• You should take a look at Durov's work. Your intuition that $\mathbb{R}$ is "too many" coefficients is correct. I think Durov just uses $Log(\mathbb{Q}^+)$. Apr 13 '17 at 12:56
Changing the coefficient of a divisor by some $\epsilon > 0$ can have a very significant effect, for example it can move you out of the nef cone. Indeed, even in non-Arakelov algebraic geometry it is often very useful to allow real coefficients on divisors. As a specific example, let $A$ be a simple abelian variety with CM by a real quadratic field. Then $\text{NS}(A)\otimes\mathbb Q$ has rank 2, say generated by $D_1$ and $D_2$, but the boundary of the nef cone in $\text{NS}(A)\otimes\mathbb R$ is spanned by divisors of the form $aD_1+bD_2$ with $a/b\notin\mathbb Q$. Another reason one might use real coefficients is that the image of $\text{NS}(X)$ in $\text{NS}(X)\otimes\mathbb R$ is a lattice in a finite dimensional vector space, so now one can use geometry to study $\text{NS}(X)$.
http://mathhelpforum.com/calculus/33243-couple-more-optimization-problems-print.html | # A couple more optimization problems
• April 4th 2008, 07:04 PM
NAPA55
A couple more optimization problems
Couldn't figure this one out for the life of me... got a few steps down and then it all went downhill from there.
If the concentration of a drug in the bloodstream at time "t" is given by the function $c(t)=\frac{k}{b-a}\left[ e^{-at}-e^{-bt}\right]$,
where a, b(b>a), and k are positive constants that depend on the drug, at what time is the concentration at its highest level?
And the second question:
The motion of a particle is given by s(t) = 5cos(2t + pi/4). What are the maximum values of the displacement, the velocity, and the acceleration?
• April 4th 2008, 07:06 PM
Mathstud28
I'd help
but I cant read the paper...rewrite it...preferably in LaTeX
• April 4th 2008, 07:27 PM
TheEmptySet
Quote:
Originally Posted by NAPA55
Couldn't figure this one out for the life of me... got a few steps down and then it all went downhill from there.
If the concentration of a drug in the bloodstream at time "t" is given by the function $c(t)=\frac{k}{b-a}\left[ e^{-at}-e^{-bt}\right]$,
where a, b(b>a), and k are positive constants that depend on the drug, at what time is the concentration at its highest level?
And the second question:
The motion of a particle is given by s(t) = 5cos(2t + pi/4). What are the maximum values of the displacement, the velocity, and the acceleration?
$c(t)=\frac{k}{b-a}\left[ e^{-at}-e^{-bt}\right]$
Taking the derivative we get...
$\frac{dc}{dt}=\frac{k}{b-a}\left[ -ae^{-at}+be^{-bt}\right]$
setting equal to zero and solving we get
$-ae^{-at}+ be^{-bt}=0 \iff \frac{a}{b}=\frac{e^{-bt}}{e^{-at}}\iff \frac{a}{b}=e^{(a-b)t} \iff ln \left( \frac{a}{b}\right)=(a-b)t$
$t= \frac{ln \left( \frac{a}{b}\right)}{a-b}$
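As a quick sanity check (my own addition, not part of the thread), one can verify symbolically that this critical time is indeed a maximum; the constants a = 1/2, b = 2, k = 3 below are assumptions chosen only for illustration.

```python
# Sketch: verify t* = ln(a/b)/(a-b) is a maximum of c(t) = k/(b-a)*(e^{-at} - e^{-bt}).
import sympy as sp

t = sp.symbols('t', positive=True)
a, b, k = sp.Rational(1, 2), sp.Integer(2), sp.Integer(3)  # sample constants, b > a > 0
c = k / (b - a) * (sp.exp(-a*t) - sp.exp(-b*t))

t_star = sp.log(a / b) / (a - b)                     # the claimed critical time
print(sp.simplify(sp.diff(c, t).subs(t, t_star)))    # 0  -> critical point
print(sp.diff(c, t, 2).subs(t, t_star).evalf())      # negative -> a maximum
```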
http://math.stackexchange.com/questions/19424/describing-a-particular-quotient-ring-involving-a-subring-of-mathbbqx | # Describing a particular quotient ring involving a subring of $\mathbb{Q}[x]$
I had to do some homework problems involving the polynomial ring $R=\mathbb{Z}+x\,\mathbb{Q}[x]\subset\mathbb{Q}[x]$. This is an integral domain but not a UFD. Further, $x$ is not prime in $R$.
One of the problems was to describe $R/(x)$.
Since $x$ is not a prime element, we know $(x)$ is not a prime ideal. So at the very least, $R/(x)$ is not an integral domain.... but what else can I say? This is perhaps something I should not admit, but problems of this form have always befuddled me. I know there's not any one "answer" they're looking for, but I never quite know what to say.
Anyway, this homework has been submitted already, so I am not including a homework tag. I'm just curious how you all would describe this particular quotient.
-
The key is to understand what that ideal $(x)$ looks like in the first place. So, clearly all polynomials in that ideal have degree at least 1, but what degree 1 polynomials can occur? For $f(x)*x$ to have degree 1, $f(x)$ must have a non-zero constant term. That term must be in $\mathbb{Z}$, and it is easy to see that you get any coefficients for higher degrees. So $(x)=\mathbb{Z}x + \mathbb{Q}x^2+\ldots+\mathbb{Q}x^n+\ldots$.
Thus, in the quotient, any coset is represented by some $n + ax$, $n\in \mathbb{Z},a\in \mathbb{Q}$, and another element $m+bx$ represents a different coset if and only if $n\neq m$ or $a-b\notin\mathbb{Z}$. In other words, the quotient is isomorphic to $\left(\mathbb{Z} + (\mathbb{Q}/\mathbb{Z})\,x\right)/(x^2).$
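To make the failure of the integral-domain property concrete, here is a small sketch (my own illustration, not from the answer) that models a coset n + ax in R/(x) as a pair (n, a mod 1) with x^2 ≡ 0:

```python
# Sketch: cosets of (x) in R = Z + xQ[x], written n + a*x with n in Z and a in Q taken mod 1.
from fractions import Fraction

class Coset:
    def __init__(self, n, a):
        self.n = int(n)
        self.a = Fraction(a) % 1              # the coefficient of x only matters modulo Z
    def __mul__(self, other):
        # (n + a x)(m + b x) = nm + (nb + ma) x + ab x^2, and x^2 lies in (x)
        return Coset(self.n * other.n, self.n * other.a + other.n * self.a)
    def __repr__(self):
        return f"{self.n} + ({self.a})x"

half_x = Coset(0, Fraction(1, 2))
print(half_x * half_x)   # 0 + (0)x: a nonzero element squares to zero, so R/(x) is not a domain
```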
http://math.stackexchange.com/questions/76844/solution-to-an-ode-cant-follow-a-step-of-a-stability-example | # Solution to an ODE, can't follow a step of a Stability Example
In my course notes, we are working on the stability of solutions, and in one example we start out with:
Consider the IVP on $(-1,\infty)$:
$x' = \frac{-x}{1 + t}$ with $x(t_{0}) = x_{0}$.
Integrating, we get $x(t) = x(t_{0})\frac{1 + t_{0}}{1 + t}$.
I can't produce this integration but the purpose of the example is to show that $x(t)$ is uniformly stable, and asymptotically stable, but not uniformly asymptotically stable.
But I can't verify the initial part and don't want to just skip over it.
Can someone help me with the details here?
Update: the solution has been pointed out to me and is in the answer below by Bill Cook (Thanks!).
-
## 1 Answer
Separate variables and get $\int 1/x \,dx = \int -1/(1+t)\,dt$. Then $\ln|x|=-\ln|1+t|+C$
Exponentiate both sides and get $|x| = e^{-\ln|1+t|+C}$ and so $|x|=e^{\ln|(1+t)^{-1}|}e^C$
Relabel the constant, drop the absolute values, and recover the lost zero solution (due to division by $x$) to get $x=Ce^{\ln|(1+t)^{-1}|}=C(1+t)^{-1}$.
Finally plug in the IC $x_0 = x(t_0)=C(1+t_0)^{-1}$ so that $C=x_0(1+t_0)$ and there you go the solution is
$$x(t) = x_0 \frac{1+t_0}{1+t}$$
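For readers who want to double-check the integration, here is a short sympy sketch (my own addition, not part of the original answer) that solves the ODE and then fits the initial condition:

```python
# Sketch: solve x' = -x/(1+t), impose x(t0) = x0, and compare with x0*(1+t0)/(1+t).
import sympy as sp

t, t0, x0 = sp.symbols('t t_0 x_0')
x = sp.Function('x')

sol = sp.dsolve(sp.Eq(x(t).diff(t), -x(t) / (1 + t)), x(t))   # x(t) = C1/(1 + t)
C1 = (sol.rhs.free_symbols - {t}).pop()                       # the integration constant
C1_val = sp.solve(sp.Eq(sol.rhs.subs(t, t0), x0), C1)[0]      # fit the IC x(t0) = x0
final = sol.rhs.subs(C1, C1_val)
print(sp.simplify(final - x0 * (1 + t0) / (1 + t)))           # 0, matching the answer above
```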
-
Wow thank you so much. I can't believe I forgot how to solve a separable first order ODE. :S Thanks very much though! – Kyle Schlitt Oct 29 '11 at 2:07
https://www.physicsforums.com/threads/how-to-simplify-the-expression-boolean-algebra.808260/ | # Homework Help: How to simplify the expression -- Boolean algebra
1. Apr 13, 2015
### physics=world
1. prove that:
X'Y'Z + X'YZ' + XY'Z' + XYZ = (X⊕Y)⊕Z
2. Relevant equations
Use postulates and theorems.
3. The attempt at a solution
X'Y'Z + X'YZ' + XY'Z' + XYZ (original expression)
X'Y'Z + X'YZ' + X(Y'Z' + YZ) (distributive)
X'Y'Z + X'YZ' + X.1 (complement)
X'Y'Z + X'YZ' + X (identity)
need help.
Also, could you expand the expression (X⊕Y)⊕Z?
I think it is like this: (XY' + X'Y)(ZXY' +Z'X'Y)
but I do not know if it's right.
2. Apr 13, 2015
### BvU
Hi,
Not familiar with the ' notation, but I suppose X' means $\neg X$
When you expand $(X\oplus Y) \oplus Z$ you just write it out using $(A\oplus B) = A'B + AB'$ twice.
For me the easiest part is the second term $(X'Y + XY') Z' = X'Y Z'+ XY' Z'$, which gives the 2nd and 3rd terms in the left hand side of your original statement.
Leaves you to prove $(X'Y + XY')' Z = X'Y' Z + XY Z$; not that hard, I hope ?
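A brute-force truth-table check (my own addition, not from the thread) confirms the identity to be proved, and may help in seeing which minterms each side picks out:

```python
# Sketch: verify X'Y'Z + X'YZ' + XY'Z' + XYZ == (X xor Y) xor Z over all 8 assignments.
from itertools import product

for X, Y, Z in product([0, 1], repeat=3):
    lhs = ((1-X) & (1-Y) & Z) | ((1-X) & Y & (1-Z)) | (X & (1-Y) & (1-Z)) | (X & Y & Z)
    rhs = (X ^ Y) ^ Z
    assert lhs == rhs, (X, Y, Z)
print("identity holds for all 8 assignments")
```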
http://plantsinaction.science.uq.edu.au/content/114-light-and-co2-effects-leaf-photosynthesis | # 1.1.4 - Light and CO2 effects on leaf photosynthesis
Figure 1.6 Photosynthetic response to photon irradiance for a Eucalyptus maculata leaf measured at three ambient CO2 concentrations, 140, 350 and 1000 µmol mol⁻¹. Irradiance is expressed as µmol quanta of photosynthetically active radiation absorbed per unit leaf area per second, and net CO2 assimilation is inferred from a drop in CO2 concentration of gas passing over a leaf held in a temperature-controlled cuvette. CO2 evolution in darkness is shown on the ordinate as an extrapolation below zero. The irradiance at which net CO2 exchange is zero is termed the light compensation point (commonly 15-50 µmol quanta m⁻² s⁻¹, shade to sun species respectively). The initial slope of light-response curves for CO2 assimilation per absorbed quanta represents maximum quantum yield for a leaf. (Based on E. Ögren and J.R. Evans, Planta 189: 182-190, 1993)
Light impinging on plants arrives as discrete particles we term photons, so that a flux of photosynthetically active photons can be referred to as ‘photon irradiance’. Each photon carries a quantum of electromagnetic (light) energy. In biology the terms photon and quantum (plural quanta) tend to be used interchangeably.
CO2 assimilation varies according to both light and CO2 partial pressure. At low light (low photon irradiance in Figure 1.6) assimilation rate increases linearly with increasing irradiance, and the slope of this initial response represents maximum quantum yield (mol CO2 fixed per mol quanta absorbed). Reference to absorbed quanta in this expression is important. Leaves vary widely in surface characteristics (hence reflectance) as well as internal anatomy and chlorophyll content per unit leaf area. Therefore, since absorption of photosynthetically active quanta will vary, quantum yield expressed in terms of incident irradiance does not necessarily reflect the photosynthetic efficiency of the mesophyll. In the case of comparisons between sun and shade leaves, it has led to a widely held but mistaken belief that shade leaves (thinner and with higher chlorophyll content) are more efficient. Expressed in terms of absorbed quanta, sun and shade leaves have virtually identical quantum efficiencies for CO2 assimilation.
Assimilation rate increases more slowly at higher irradiances until eventually a plateau is reached where further increases in irradiance do not increase the rate of CO2 assimilation (Figure 1.6). Chloroplasts are then light saturated. Absolute values for both quantum yield and light-saturated plateaux depend on CO2 concentration. Quantum yield increases as CO2 concentration increases as it competes more successfully with other species such as oxygen, at the binding site on Rubisco. Leaf absorptance has a hyperbolic dependence on chlorophyll content. For most leaves, 80–85% of 400–700 nm light is absorbed and it is only in leaves produced under severe nitrogen deficiency where there is less than 0.25 mmol Chl m⁻² that absorptance falls below 75%.
The plateau in Figure 1.6 at high irradiance is set by maximum Rubisco activity. With increasing CO2 partial pressure, the rate of carboxylation increases. The transition from light-limited to Rubisco-limited CO2 assimilation as irradiance increases becomes progressively more gradual at higher CO2 partial pressures. In part, this gentle transition reflects the fact that a leaf is a population of chloroplasts which have different photosynthetic properties depending on their position within that leaf. As discussed above, the profile of photosynthetic capacity per chloroplast changes less than the profile of light absorption per chloroplast (Figure 1.4). This results in an increase in CO2 fixed per quanta absorbed with increasing depth. A transition from a light to a Rubisco limitation therefore occurs at progressively higher incident irradiances for each subsequent layer and results in a more gradual transition in the irradiance response curve of a leaf compared to that of a chloroplast.
Photosynthetic capacity of leaves varies widely according to light, water and nutrient availability and these differences in capacity usually reflect Rubisco content. Leaves in high light environments (‘sun’ leaves) have greater CO2 assimilation capacities than those in shaded environments and this is reflected in the larger allocation of nitrogen-based resources to photosynthetic carbon reduction (PCR cycle; Section 2.1). Sun leaves have a high stomatal density, are thicker and have a higher ratio of Rubisco to chlorophyll in order to utilise the larger availability of photons (and hence ATP and NADPH). Shade leaves are larger and thinner, but have more chlorophyll per unit leaf dry weight than sun leaves. They can have a greater quantum yield per unit of carbon invested in leaves, but with a relatively greater allocation of nitrogen-based resources to photon capture, shade leaves achieve a lower maximum rate of assimilation.
Despite such differences in leaf anatomy and chloroplast composition, leaves sustain energy transduction and CO2 fixation in an efficient and closely coordinated fashion. Processes responsible are discussed below (Section 1.2).
https://www.thestudentroom.co.uk/showthread.php?t=4537726 | # What is momentum
#1
I understand conservation of momentum and equations related to it, but what is the very nature of it? I am speaking in regards to GCSE physics
4 years ago
#2
This is a very good question, I hope you are thinking of pursuing physics further? I'm going to try and answer your question as best I can, but if I say something you don't understand/you think might be wrong please just say and I'll try and explain it better!
Okay so in terms of Newton's laws: "Newton's second law of motion states that the change in linear momentum of a body is equal to the net impulse acting on it" (Wikipedia). This is another way of saying that momentum is a property that can only be changed by the action of a force. In other words, you have to do work (have you done work/energy yet?) to change momentum, so it can be thought of as being a bit like kinetic energy, in that it is a property that cannot be created or destroyed, and any gain in momentum by one object must equal a loss in momentum by another object.
This is not a rigorous definition, because momentum and energy are actually fundamentally different things. More generally, linear momentum is the manifestation of a much more complex concept called 'translational symmetry of space time.' Now this is something I don't actually know anything about myself (I'm in my second year of doing physics at uni so by no means an expert), however from a quick read of the wikipedia article, I'll try and explain it :P. Basically this means that the laws of physics are the same everywhere, i.e. the rules that govern our universe don't change depending on where you are. However this is far too complex for me to understand so probably will confuse you as well.
4 years ago
#3
I think the explanation by Darth_Narwhale is very accurate, detailed and high power, which might be good for the clever GCSE student that ra1500 seems to be.
If I were to answer your question, especially considering you are only doing GCSE (and my highest knowledge is only as a minor subject [Medical Physics] as a medical student), I would (and ONLY could!) put it more simply as below:
Think of momentum as the tendency of a moving object to carry on moving further in the direction it is moving, and if it were to collide with another object, to "push" it in the direction of the first object, or, if the momentum of the second object is in a different direction and is greater than the momentum of the first object, then for the second object to push the first object.
You can imagine that if a rugby player (probably a massive guy) collided into a netball player (probs a much smaller girl!!), it is likely the two together will move in the same direction as the rugby player - this explains why momentum has m (mass) as one item in its calculation.
Secondly, take two identical cars A and B (of the same weight) moving in opposite directions. If car A was moving East at 70mph and car B was moving West at 10mph, and they collide head on, surely it is easy to work out that the two will after the collision be more likely to move Eastwards (because the momentum of the faster car would be greater, i.e. the tendency of the two cars to move eastwards will be greater than the tendency for them to move westwards). This explains the second variable v (velocity) in the calculation of momentum.
In a way, momentum is the opposite of inertia. (inertia is the tendency of an object to stay put where it is and not to move).
Hope this helps further.
Mukesh (science tutor)
4 years ago
#4
Momentum does not actually exist as a physical force or object. It is simply the product of mass and velocity of an object.
Much like gravity, gravity itself does not exist, it's just the result of mass in space.
4 years ago
#5
Answering what momentum is is a very difficult question, especially sticking to GCSEs, so I will try and go a little bit further.
Intuitively, momentum is a sort of measure of how hard it is to stop something. The more momentum the object has, the longer you have to apply a force, or the larger that force will be. This is Newton's second law. But this intuition fails when it comes to fields. Fields (like the gravity field or the electric field) and waves can have momentum, yet you can't stop a field. Actually what does it even mean to stop a field?
So basically momentum must have some other definition. Perhaps "the ability to change the motion of other objects" works. A field transfers its momentum to accelerate a particle and waves do the same. Of course this definition is really vague; we need to concretise it. Objects carrying momentum can transfer that momentum to other objects by exerting a force. As long as they are moving they can exert a force, and the heavier they are, the more force they can exert. This prompts one to define momentum as mv. Note that this definition implies conservation of momentum. What I'm trying to say is that momentum is simply a mathematical aid in calculations which does not have a physical representation like distance or speed. As succinctly put by AishaGirl:
(Original post by AishaGirl)
Momentum does not actually exist as a physical force or object. It is simply the product of mass and velocity of an object.
Ultimately, it is just the product of mass and velocity, which is something that just so happens to be useful, because it is conserved so we retain it and give it a name.
If you want to delve deeper, momentum becomes a lot of things. In fact you can show that if energy is to be conserved, "something else" must be conserved, and that "something else" is called the momentum. The reason why energy has to be conserved is that the laws of physics don't change with time, so "something" must remain the same, and that "something" is energy. Based on this definition of momentum, you can show that if you took very small changes in kinetic energy, and divided that by the velocity and then summed them all, the resultant is the momentum. (if you are doing addmaths, what I'm saying here is that the integral of the reciprocal of velocity with respect to the kinetic energy is the momentum.). Then the next question is what is energy? By the way the process I'm describing is totally backwards: historically, the concept of momentum came first, then energy. But conceptually, this is better.
Now energy is something else where our definitions are arbitrary. In fact, the energy and the momentum share very intricate relationships, which ultimately boil down to this: the laws of physics are the same regardless of your velocity (if constant). Thus no law can tell you whether you truly are moving or not, provided you have constant speed. This is known as the principle of relativity, from Galileo. It is a very powerful principle, and ultimately it describes the momentum and the energy, which describes the forces. Therefore, it is at the roots of all laws.
In conclusion, momentum is basically a quantity that is defined as the thing that is conserved because energy is conserved, and energy is this conserved thing that arises from the laws of physics being constant in time, and constant in velocity.
4 years ago
#6
(Original post by Darth_Narwhale)
This is not a rigorous definition, because momentum and energy are actually fundamentally different things. More generally, linear momentum is the manifestation of a much more complex concept called 'translational symmetry of space time.' Now this is something I don't actually know anything about myself (I'm in my second year of doing physics at uni so by no means an expert), however from a quick read of the wikipedia article, I'll try and explain it :P. Basically this means that the laws of physics are the same everywhere, i.e. the rules that govern our universe don't change depending on where you are. However this is far too complex for me to understand so probably will confuse you as well
I'll try and explain it.
Consider a quantity L = K - U, where K is the kinetic energy and U the potential energy. A fundamental law in (classical) physics says that for each coordinate, if you differentiate L with respect to the velocity in that coordinate's direction, then differentiate that with respect to time, that's the same as differentiating L with respect to that coordinate.
Because the potential energy does not depend on the speed (in general), the time derivative of dL/dx' is the time derivative of dK/dx', which is mx', the momentum. And since the potential is defined as ∫-Fdx, and since the kinetic energy does not depend on position, dL/dx is just F. So basically:
d(mv)/dt = F,
where I replaced the x' by v so that the law stands out better. This is Newton's 2nd law. Now suppose that for one coordinate, dL/dx = 0, so that L does not depend on that coordinate. Then the right hand side becomes zero, and you have an equation telling you that the momentum in that direction is conserved. Since the derivative of L is zero, this means that L does not change if you moved your entire system along that coordinate. This implies that the system behaves the same no matter the value of that coordinate: the laws of physics are the same under translation.
In fact L is the Lagrangian for classical mechanics, so there is no magnetic field term in there. In the case of a magnetic field, the Lagrangian is no longer K - U, but K - f(v) where f(v) is a function of velocity.
You can extend the approach: symmetry under rotation is angular momentum, and symmetry under time shifts is energy, albeit this one is a bit harder to show.
4 years ago
#7
(Original post by dbs1984)
I'll try and explain it.
Consider a quantity L = K - U, where K is the kinetic energy and U the potential energy. A fundamental law in (classical) physics says that for each coordinate, if you differentiate L with respect to the velocity in that coordinate's direction, then differentiate that with respect to time, that's the same as differentiating L with respect to that coordinate.
Because the potential energy does not depend on the speed (in general), the time derivative of dL/dx' is the time derivative of dK/dx', which is mx', the momentum. And since the potential is defined as ∫-Fdx, and since the kinetic energy does not depend on position, dL/dx is just F. So basically:
d(mv)/dt = F,
where I replaced the x' by v so that the law stands out better. This is Newton's 2nd law. Now suppose that for one coordinate, dL/dx = 0, so that L does not depend on that coordinate. Then the right hand side becomes zero, and you have an equation telling you that the momentum in that direction is conserved. Since the derivative of L is zero, this means that L does not change if you moved your entire system along that coordinate. This implies that the system behaves the same no matter the value of that coordinate: the laws of physics are the same under translation.
In fact L is the Lagrangian for classical mechanics, so there is no magnetic field term in there. In the case of a magnetic field, the Lagrangian is no longer K - U, but K - f(v) where f(v) is a function of velocity.
You can extend the approach: symmetry under rotation is angular momentum, and symmetry under time shifts is energy, albeit this one is a bit harder to show.
Thanks, this is a brilliant explanation. We are actually doing a classical dynamics lecture course atm, and have literally just covered this, but your wording is very clear.
4 years ago
#8
(Original post by ra1500)
I understand conservation of momentum and equations related to it, but what is the very nature of it? I am speaking in regards to GCSE physics
An older term for momentum was simply "motion". Momentum is the quantity that tells you how much motion there is in a system, in some sense.
Intuitively, a mass of 1 kg moving in a straight line at 1 m/s has less "motion" than a mass of 1 kg moving in a straight line at 10 m/s. However, a mass of 10 kg moving in a straight line at 1 m/s has the same amount of motion as 10 1 kg masses moving in a straight line at 1 m/s - that's because we can imagine the 10 kg mass as 10 1 kg masses stuck together.
However, we also need to take direction into account. Consider the following experiment. You are floating in space with two 1 kg masses. You are in the centre of a large, stationary, rectangular metal box. You push the two masses at the same time with the same speed towards opposite ends of the box. By symmetry, when the masses collide with the walls of the box, you expect it to remain at rest.
From the outside of the box, an observer sees a box at rest, and says that the system has no motion at any time. Inside the box, you see two identical masses moving with the same speed in opposite directions. You must conclude that the two moving masses have no total motion, even though each has an individual motion.
We can also reason about motion using other thought experiments. Imagine you are an observer watching two lumps of clay, each of mass m, travel towards each other at speed u. When they collide, they stick together to form a lump of mass 2m. By symmetry, it must be at rest relative to you.
Now imagine a lump of clay of mass m moving at 2u towards an identical lump of clay at rest. When they collide, they move off relative to you at some speed. However, an observer moving at u in the same direction as the moving mass sees a mass moving at speed u towards a mass moving at speed u in the opposite direction i.e. she sees the original situation. So from her POV, the combined mass is stationary after collision, which from your POV means that it moves with speed u (the same as the moving observer). If we now tot up the quantities mass x velocity before and after collision, we find that they are the same (2mu).
Arguments like this allow you to conclude that the "motion" in a system is conserved by collisions, if we define motion = mass x velocity (But of course, you need to do experiments to find out if the universe agrees with your reasoning - this is physics, not abstract maths). These days we call it "momentum", of course.
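To make the bookkeeping in this thought experiment explicit, here is a small numerical sketch (my own illustration, with sample values m = 1 kg and u = 3 m/s, which are assumptions for illustration only) checking that mass x velocity tallies in both frames:

```python
# Sketch: the clay-lump argument with numbers; mass*velocity is conserved in both frames.
m, u = 1.0, 3.0                       # sample mass (kg) and speed (m/s)

# Ground frame: mass m at speed 2u hits an identical mass at rest; they stick and move at u.
p_before = m * (2*u) + m * 0.0
p_after  = (2*m) * u
assert p_before == p_after == 2*m*u

# Frame of an observer moving at u: two masses approach at +u and -u, and end at rest.
p_before_obs = m * (2*u - u) + m * (0.0 - u)
p_after_obs  = (2*m) * (u - u)
assert p_before_obs == p_after_obs == 0.0
print("mass x velocity conserved in both frames")
```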
4 years ago
#9
(Original post by atsruser)
An older term for momentum was simply "motion". Momentum is the quantity that tells you how much motion there is in a system, in some sense.
Intuitively, a mass of 1 kg moving in a straight line at 1 m/s has less "motion" than a mass of 1 kg moving in a straight line at 10 m/s. However, a mass of 10 kg moving in a straight line at 1 m/s has the same amount of motion as 10 1 kg masses moving in a straight line at 1 m/s - that's because we can imagine the 10 kg mass as 10 1 kg masses stuck together.
However, we also need to take direction into account. Consider the following experiment. You are floating in space with two 1 kg masses. You are in the centre of a large, stationary, rectangular metal box. You push the two masses at the same time with the same speed towards opposite ends of the box. By symmetry, when the masses collide with the walls of the box, you expect it to remain at rest.
From the outside of the box, an observer sees a box at rest, and says that the system has no motion at any time. Inside the box, you see two identical masses moving with the same speed in opposite directions. You must conclude that the two moving masses have no total motion, even though each has an individual motion.
We can also reason about motion using other thought experiments. Imagine you are an observer watching two lumps of clay, each of mass m, travel towards each other at speed u. When they collide, they stick together to form a lump of mass 2m. By symmetry, it must be at rest relative to you.
Now imagine a lump of clay of mass m moving at 2u towards an identical lump of clay at rest. When they collide, they move off relative to you at some speed. However, an observer moving at u in the same direction as the moving mass sees a mass moving at speed u towards a mass moving at speed u in the opposite direction i.e. she sees the original situation. So from her POV, the combined mass is stationary after collision, which from your POV means that it moves with speed u (the same as the moving observer). If we now tot up the quantities mass x velocity before and after collision, we find that they are the same (2mu).
Arguments like this allow you to conclude that the "motion" in a system is conserved by collisions, if we define motion = mass x velocity (But of course, you need to do experiments to find out if the universe agrees with your reasoning - this is physics, not abstract maths). These days we call it "momentum", of course.
I thought momentum was a fictitious force, much like gravity... https://www.scientificamerican.com/a...titious-force/
I don't know I think whether certain forces are actually real or not is a philosophical debate. Correct me if I'm wrong though.
4 years ago
#10
(Original post by AishaGirl)
I thought momentum was a fictitious force, much like gravity...
Momentum isn't a force at all, so this doesn't make much sense, I'm afraid. A force is what acts on a body when its momentum is changing, and the size of the force in Newtons is equal to the rate of change of that body's momentum.
I don't know I think whether certain forces are actually real or not is a philosophical debate. Correct me if I'm wrong though.
I'm not sure I really understand the question. However, generally in physics "real" forces are produced by interactions between bodies e.g. due to the EM field, and they come in Newton III force pairs. "Fictitious" forces have to be invented when a system is viewed from a non-inertial frame e.g. one that is accelerating.
For example, consider an ice hockey puck of mass m tied to a post in the ice with a spring, and made to move in a circle on the ice. From the POV of an observer who is not rotating relative to the pole, we say that the puck is accelerating with magnitude v^2/r towards the pole and consequently a force acts on it towards the pole of size mv^2/r. The force is due to the stretching of the spring, and we call it centripetal force.
From the POV of an observer rotating at the same rate as the puck and in the same sense, the puck is not moving at all. However, she can see that the stretched spring is pulling on it, so by Newton II it should accelerate. To ensure that Newton's laws still make sense in her rotating frame, she has to invent a new force of the same magnitude as the spring force, in the opposite direction, to "balance" the force of the spring. This appears (to the rotating observer) to be pulling the puck away from the pole, and we call it centrifugal force.
However, the centrifugal force doesn't really exist - there is no interaction with a physical body that is causing it - it is merely invented to make sure Mrs Rotating can make Newton's law work from her POV.
4 years ago
#11
(Original post by Darth_Narwhale)
Thanks, this is a brilliant explanation. We are actually doing a classical dynamics lecture course atm, and have literally just covered this, but your wording is very clear.
Thanks for the compliment. I'm hoping to teach physics some day, so comments like those show I'm on the right track.
http://mathoverflow.net/questions/44062/computing-pi-1-s1-using-groupoids?sort=votes | Computing $\pi_1 S^1$ using groupoids
I believe it is possible to compute $\pi_1 S^1$ by applying the groupoid version of the Seifert-Van Kampen Theorem (in the version presented in May's Concise Course) to a covering of the circle by three arcs. Is there an account like this somewhere in the literature? Ideally I'd like a discussion that a student familiar with May's book would be able to read. (May doesn't take a 2-categorical approach to groupoids, and so he does not discuss the fact that a diagram of groupoids that is a point-wise equivalence induces an equivalence of colimits. This is rather important for computations.)
Edit: this last statement is false in general! I was thinking of homotopy colimits. The relevant (correct) fact appears in Ronnie Brown's book: retracts of pushouts are pushouts. This is the means by which one compares the Van Kampen theorem for the full fundamental groupoid - as in May's book - with the Van Kampen theorem for the fundamental groupoid on a set of basepoints.)
-
The book of Tammo Tom Dieck "Algebraic Topology" takes a related but different approach. – j.c. Oct 29 '10 at 7:09
The isomorphism $\pi_1(S^1)\simeq \mathbf{Z}$ is just a triviality:if we define $S^1$ as the pushout of the diagram of simplicial sets $\Delta^0 \leftarrow \partial\Delta^1\to \Delta^1$, then we see immediately that the set of maps from $S^1$ to the nerve of a groupoid $G$ is simply the set of arrows $x\to y$ in $G$ such that $x=y$. Therefore, the groupoid $\pi_1(S^1)$ is canonically isomorphic to the group $\mathbf{Z}$ (seen as a groupoid with one object), simply because they both share the same universal property. – Denis-Charles Cisinski Nov 14 '10 at 0:13
Denis-Charles, what exactly do you mean by "the groupoid $\pi_1 S^1$"? Are you referring to a combinatorial definition of the fundamental groupoid of a simplicial set? I'm familiar with Kan's combinatorial description of the homotopy groups of a Kan complex, and I guess you could mean something similar? Most importantly, how easy is it to identify this object with the fundamental groupoid of the geometric realization? Milnor's original proof that Kan's homotopy groups agree with the homotopy groups of the realization used the Van Kampen theorem. – Dan Ramras Nov 14 '10 at 6:01
Using the version of Van Kampen in Brown's book, you can even compute the fundamental group of the circle by using a cover with two intervals: just pick one basepoint in each component of the intersection. – Omar Antolín-Camarena Nov 14 '10 at 14:43
Omar, yes that's right! May's book has a seemingly unnecessary connectedness assumption on the intersections of sets in the cover, which is why I was thinking of 3 arcs. – Dan Ramras Nov 14 '10 at 18:10
I can only point to the place where this was originally done (or rather, the latest edition thereof):
Topology and Groupoids by Ronnie Brown
It's a fantastic textbook and easy to read (and cheap, if you buy the electronic copy - the best £5 I've spent). Ideally what you'd do is calculate the equivalent subgroupoid $\Pi_1(S^1,\{a,b,c\})$ where $a,b,c$ are three points in $S^1$, one in each intersection of opens.
-
This is what I had in mind. I did look in Chapter 9 of Brown's book (Chap. 9 is free) and didn't see this. (Brown seems to compute $\pi_1 S^1$ in some other way in Corollary 9.1.5; I haven't digested his notation yet, though.) Maybe this argument appears somewhere else? I'll have to get myself the full book, I guess. – Dan Ramras Oct 29 '10 at 5:50
Actually this isn't how he computes it, but it should be possible to do so. – David Roberts Oct 29 '10 at 5:52
I second David's recommendation. – jd.r Oct 29 '10 at 11:33
Chapter 6 is the place to look, not Chapter 9. – Dan Ramras Nov 13 '10 at 20:59
A rather belated comment on these! I like the comparison between the circle $S^1$ as obtained from the unit interval $[0,1]$ by identifying $0$ and $1$ in the category of spaces, and the group of integers $\mathbb Z$ as obtained from the groupoid $\mathcal I$, which has objects $0$ and $1$ and exactly one arrow $\iota:0 \to 1$, by identifying $0$ and $1$ in the category of groupoids.
I got hold of the idea in the 1960s from writing the first edition of this book that all of 1-dimensional homotopy theory was better expressed in terms of groupoids rather than groups. This led to the question: are groupoids useful in higher homotopy theory? Is the 1-dimensional case a "one-off", or not?
I liked the more exciting prospect, but it took 9 years to get with Philip Higgins in 1974 a good definition in dimension 2, namely the homotopy double groupoid of a pair of spaces, and a 2-dimensional van Kampen theorem.
-
Further comment: although I was attracted by the retraction argument, it turned out that it did not work easily for any family of open sets, and did not work in higher dimensions. So I returned to the original argument of Crowell and this is given in the paper: R. Brown and A. Razak, "A van Kampen theorem for unions of non-connected spaces", Archiv. Math. 42 (1984) 85-88, which also gives the best possible connectivity condition, that the set $A$ of base points meets each path component of each $3$-fold intersection of the cover. This is related to the Lebesgue Covering Dimension. – Ronnie Brown Jan 13 '13 at 22:19
I also prefer, contrary to Gramain and other authors, to give a proof by verification of the required universal property, rather than relying on a specific construction of a pushout, or colimit, of groups, or groupoids. One reason is to have a proof which generalises to higher dimensions, given the appropriate gadget, and so obtain new results and methods in homotopy theory. – Ronnie Brown Nov 8 '13 at 18:33
The reference to Brown is probably the best one for the moment. Unfortunately, May's book doesn't seem to be so useful because, even in the theorem about groupoids, there is a connectedness assumption on the interesections of the open sets covering the space. Moreover, once one has a general pushout theorem for groupoids, one needs to know how to compute the isotropy groups of the given groupoid, which May doesn't explain.
André Gramain has written a short account of that, Le théorème de van Kampen.
For a good space, the theory of coverings gives an equivalence of categories between coverings and sets with an action of the fundamental group. (This determines the group.) This gives various ways of computing the fundamental group(oid) of a space via descent theory. In the setting of schemes, Grothendieck gives the relevant theorems and formulae in a few lines in SGA 1.
I can phrase Denis-Charles's answer above in a slightly more elementary way, using the formulation via coverings.
The circle $S^1$ is the interval $[0,1]$ with endpoints attached; therefore, a covering of $S^1$ can be described as a covering of $[0,1]$ together with an identification of the fibers at $0$ and $1$. We thus have a covering $A\times [0,1]$, with a bijection of $A\times\{0\}$ with $A\times\{1\}$. That is, a set $A$, with a bijection of~$A$. That is, a set $A$ with an action of the group $\mathbf Z$. So $\pi_1(S^1)=\mathbf Z$.
-
May's path connectedness assumption seems to be unnecessary. I don't see anywhere in his proof that path connectedness is used (am I missing something?). The covering space perspective sounds a lot like the way Quillen computed the fundamental group of the Q-construction. – Dan Ramras Nov 14 '10 at 18:10
It is indeed unnecessary. But the computation of the isotropy groups of pushouts is definitely lacking without the assumption... – ACL Nov 14 '10 at 23:50
http://www.ntg.nl/pipermail/ntg-context/2004/007509.html | # [NTG-context] \presentationstep
Mon Nov 8 13:59:49 CET 2004
At 19:57 -0500 7/11/04, David Munger wrote:
>Oh sorry for not being clear about it. I was assuming that some
>presentation module would be imported, for instance:
>\usemodule[pre-original]
>
>So probably there lacks a \page command in your \Subject definition.
>
>About the $and$: you're right. I was using the amsl module from
>Giuseppe Bilotta.
>
>
>
>So, assuming that the steps code is in a file name t-rsteps.tex, the
>complete example would be:
>
>----------------------------------------------------------
>
>\usemodule [pre-original]
>\usemodule [rsteps]
Hi David,
Thanks for the details. Indeed I get now what is expected from your
macros, and as a matter of fact the result is much much better than
that of my crude macros... You did a great improvement!
So I am going to use yours from now on: thanks again!
If I can suggest a possible improvement to the t-rsteps.tex macros,
it is the following:
When one uses these macros with an automatic numbering such as
\placeformula[equation-reference] (see the example below), with each
invocation of \page (that is a step) the number increases, and this
is an unwanted side result. Would it be possible to "freeze" the
numbering procedure in such a way that the number doesn't change in
each step? (When I was using my macros, I didn't use \placeformula in
slides with steps, but rather an old \leqno from plain TeX).
Best regards: OK
%%%%%%%%%%%%%%%%%%%%%%% example steps-david-2.tex
\usemodule [pre-original]
\usemodule [rsteps]
\starttext
\StartSteps[Slide Title] % the title is passed to
% the \Subject macro defined in pre-original
\startitemize
\FromStep[1] {\item {\bf Lemma. } {\it For any $u,v \in H$, a Hilbert
space, we have the following Cauchy-Schwarz inequality\/}
\placeformula[Cauchy-Schwarz]
\startformula
|(u|v)| \leq \Vert u\Vert \cdot \Vert v \Vert.
\stopformula}
\FromStep[2] {\item {\bf Proof. } Consider $f(t):= (u+tv|u+tv)$ for
$t\in {\Bbb C}$.}
\FromStep[3]{\item We have $f(t) \geq 0$ for all $t\in {\Bbb C}$.}
\stopitemize
\StopSteps
\stoptext
%%%%%%%%%%%%%%%%%% end example steps-david-2.tex
http://www.astroexplorer.org/details/apjlab380df4 | Image Details
On the Diversity of Fallback Rates from Tidal Disruption Events with Accurate Stellar Structure
• Authors: E. C. A. Golightly, C. J. Nixon, and E. R. Coughlin
2019 The Astrophysical Journal Letters 882 L26.
• Provider: AAS Journals
Caption: Figure 4.
Fallback rate onto the 10^6 M_⊙ SMBH in units of Solar masses per year as a function of time in years. Here solid curves correspond to the density profiles generated from MESA, dashed curves are γ = 5/3 polytropes matched to the stellar mass and radius of the MESA star, and dotted–dashed curves are γ = 1.35 polytropes matched to the MESA star mass and radius; dotted black lines show the power-law ∝ t^(−5/3), while the dotted–dotted–dashed line in the bottom-right panel shows the scaling ∝ t^(−9/4). The long-dashed black line gives the Eddington luminosity of the BH, assuming a radiative efficiency of 10% and an electron-scattering opacity of 0.34 cm^2 g^(−1). The specific star is shown by the name in the legend, and panels on the left side show the fallback from stars at ZAMS, while those on the right are more highly evolved. It is apparent from the top-left and middle-left panels that the fallback curves from the 0.3 M_⊙, ZAMS and the 1 M_⊙, ZAMS progenitors are very well-reproduced by γ = 5/3 and γ = 1.35 polytropes, respectively. Every other fallback curve from a MESA-generated density profile, however, shows significant deviations from the polytropic approximations. We also see that the 3.0 M_⊙, MAMS follows ∝ t^(−9/4) at late times, which results from the presence of a bound core that survives the encounter (Coughlin & Nixon 2019; no bound core is left when the star is modeled as a polytrope). The 0.3 M_⊙, MAMS MESA star also shows enhanced variability in the fallback rate, which arises from the fact that the stream—unlike the polytropic models for the same MESA star mass and radius—has fragmented vigorously into small-scale clumps.
https://brilliant.org/problems/3-variables-2-equations-1-answer/ | # 3 variables, 2 equations, 1 answer
$\large x + y = z^2\\ \large x^2 + y^2 = z^3$
If positive integral solutions $$(x_1, y_1, z_1), (x_2, y_2, z_2), \ldots, (x_n, y_n, z_n)$$ satisfy the system of equations above, find the value of
$\large \displaystyle \sum_{i=1}^n (x_i + y_i + z_i)$