221,762
<p>Let $f$ be a newform of level $\Gamma_1(N)$ and character $\chi$ which is not induced by a character mod $N/p$. I learned from <a href="http://wstein.org/books/ribet-stein/main.pdf" rel="nofollow">these notes</a> by Ribet and Stein that $|a_p|=p^{(k-1)/2}$ where $k$ is the weight of $f$. So I wonder:</p> <p>1. What is the proof of this statement?</p> <p>2. Is $a_p$ for $p|N$ in the number field $K_f$ of $f$, i.e. the number field generated by all $a_q$ for $(q,N)=1$?</p> <p>Now I see that 2 is true. The reason I thought it was wrong is that $p^{(k-1)/2}$ has degree $2$ over $\mathbb{Q}$ when $k$ is even, which would imply $2\mid[K_f:\mathbb{Q}]$ in this case. That seems strange...</p>
Community
-1
<p>A proof that $|a_p|=p^\frac{k-1}{2}$ in the situation you are considering is given in the proof of Theorem 3 of Winnie Li's paper <em>Newforms and Functional Equations</em>, where it is deduced from a result of Ogg. (She also considers the case in which the character of the newform in question is also a character mod $N/p$.) </p>
692,376
<p>Can a vector space over an infinite field be a finite union of proper subspaces?</p>
BlackAdder
74,362
<p>Short answer: No. </p> <p>Long answer: No. Assume that our vector space $V$ is a finite union of proper subspaces, hence $$V=\bigcup_{i=1}^n U_i.$$</p> <p>Now, pick a non-zero vector $x\in U_1$, and pick another vector $y\in V\,\backslash U_1$.</p> <p>There are infinitely many vectors $x+ky$, where $k\in K^*$ ($K$ is our infinite field). Note that $x+ky$ is never in $U_1$ (otherwise $y=k^{-1}((x+ky)-x)$ would lie in $U_1$), hence each must be contained in some $U_j$ with $j\neq 1$.</p> <p>Since $K^*$ is infinite and there are only finitely many subspaces, some $U_j$ must contain $x+k_1y$ and $x+k_2y$ for distinct $k_1,k_2\in K^*$; taking differences shows that $U_j$ also contains $y$, and hence $x$. Thus every $x\in U_1$ lies in $\bigcup_{i=2}^n U_i$, so $$V=\bigcup_{i=2}^n U_i.$$ Evidently this can be continued until $V=U_n$, contradicting the assumption that $U_n$ is a proper subspace.</p>
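Incidentally, the hypothesis that the field is infinite is essential: over a finite field the conclusion fails. A quick brute-force illustration over $\Bbb{F}_2$ (my own example, not part of the argument above):

```python
from itertools import product

# F_2^2 has four vectors; each nonzero vector v spans the 1-dim subspace {0, v}.
vectors = list(product([0, 1], repeat=2))
subspaces = [{(0, 0), v} for v in vectors if v != (0, 0)]

# The three proper subspaces together cover the whole space.
union = set().union(*subspaces)
print(union == set(vectors))  # True: F_2^2 IS a finite union of proper subspaces
```

So the pigeonhole step of the proof, which needs infinitely many vectors $x+ky$, is exactly where finiteness of the field would break the argument.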
441,448
<p><strong>Contextual Problem</strong></p> <p>A PhD student in Applied Mathematics is defending his dissertation and needs to make a 10 gallon keg consisting of vodka and beer to placate his thesis committee. Suppose that all committee members, being stubborn people, refuse to sign his dissertation paperwork until the next day. Since all committee members will be driving home immediately after his defense, he wants to make sure that they all drive home safely. To do so, he must ensure that his mixture doesn't contain too much alcohol in it! </p> <p>Therefore, his goal is to make a 10 gallon mixture of vodka and beer such that the total alcohol content of the mixture is only $12$ percent. Suppose that beer has $8\%$ alcohol while vodka has $40\%$. If $x$ is the volume of beer and $y$ is the volume of vodka needed, then clearly the system of equations is </p> <p>\begin{equation} x+y=10 \\ 0.08 x +0.4 y = 0.12\times 10 \end{equation}</p> <p><strong>My Question</strong></p> <p>The eigenvalues and eigenvectors of the corresponding matrix</p> <p>\begin{equation} \left[ \begin{array}{cc} 1 &amp; 1\\ 0.08 &amp; 0.4 \end{array} \right] \end{equation} </p> <p>are</p> <p>\begin{align} \lambda_1\approx 1.1123 \\ \lambda_2\approx 0.2877 \\ v_1\approx\left[\begin{array}{c} 0.9938 \\ 0.1116 \end{array} \right] \\ v_2\approx\left[\begin{array}{c} -0.8145 \\ 0.5802 \end{array} \right] \end{align}</p> <p>How do I interpret their physical meaning in the context of this particular problem?</p>
Robert Mastragostino
28,869
<p>First we need to interpret the transformation "physically". It takes the volumes of beer and vodka as input, and returns the total volume and the alcohol volume as output.</p> <p>I'd be surprised if this has a serious "physical interpretation", because the input and output are of different types. I suppose I'd say that the eigenvectors are the alcohol mixes where the ratio of beer to vodka is the same as the percentage of alcohol in the final mixture. Technically interpretable, but probably no more than a curiosity.</p>
441,448
user26872
26,872
<p>While the eigenvectors and eigenvalues don't play their usual role in this problem (as argued in the other answers) the eigensystem still has a physical interpretation.</p> <p>The eigenvector $v_2$ is unphysical since it corresponds to a negative volume.</p> <p>Let $M$ represent the matrix above, $$M v_1 = \lambda_1 v_1.$$ Since $\lambda_1 y/(\lambda_1 x) = y/x$, the alcohol content of the final mixture is equal to the ratio of vodka to beer. In addition $$x+y = \lambda_1 x,$$ so the eigenvalue is the alcohol content of the final mixture plus one. </p> <p>This mixture could be used since the alcohol content is about 11 percent. To get a 10 liter mixture scale the eigenvector, $v_1 \rightarrow 10v_1/(x+y)$. Perhaps the committee would find it a more mathematically interesting mixture. </p>
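As a quick numerical cross-check of the numbers quoted in the question (a NumPy sketch; note that `numpy.linalg.eig` does not guarantee eigenvalue ordering, so they are sorted here):

```python
import numpy as np

# Mixture matrix from the question: row 1 gives total volume, row 2 alcohol volume.
M = np.array([[1.0, 1.0],
              [0.08, 0.4]])

# Solve the original mixing problem: x + y = 10, 0.08x + 0.4y = 1.2.
x, y = np.linalg.solve(M, np.array([10.0, 1.2]))
print(round(x, 2), round(y, 2))  # 8.75 1.25 gallons of beer and vodka

# Eigenvalues/eigenvectors quoted in the question, sorted largest first.
lam, V = np.linalg.eig(M)
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]
print(np.round(lam, 4))  # [1.1123 0.2877]

# First row of M v1 = lam1 v1 reads v1[0] + v1[1] = lam1 * v1[0],
# i.e. lam1 = 1 + v1[1]/v1[0], matching "eigenvalue = 1 + alcohol content".
v1 = V[:, 0]
print(np.isclose(lam[0], 1 + v1[1] / v1[0]))  # True
```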
1,225,551
<p>I have a vector $\mathbf{x}$ (10 $\times$ 1), a vector $\mathbf{f}(\mathbf{x})$ (10 $\times$ 1) and an invertible square matrix $\mathbf{Z}(\mathbf{x})$ (10 $\times$ 10) which is block lower triangular:</p> <p>$$\mathbf{Z}(\mathbf{x}) = \begin{bmatrix}\mathbf{A} &amp; 0\\\mathbf{C} &amp; \mathbf{B}\end{bmatrix}$$</p> <p>$\mathbf{A}$ is 4x4, $\mathbf{B}$ is 6x6 diagonal, $\mathbf{C}$ is 6x4 with columns 3 and 4 null.</p> <p>I need to compute (numerically) $\Delta(\mathbf{x})$</p> <p>$$\Delta(\mathbf{x}) = \frac{\partial(\mathbf{Z}^{-1}\mathbf{f})}{\partial \mathbf{x}}$$</p> <p>I came up with the following:</p> <p>$$\Delta(\mathbf{x}) = \frac{\partial(\mathbf{Z}^{-1})}{\partial \mathbf{x}}\mathbf{f}+ \mathbf{Z}^{-1}\frac{\partial\mathbf{f}}{\partial \mathbf{x}}$$</p> <p>The second term is trivial, but I can't compute $\frac{\partial(\mathbf{Z}^{-1})}{\partial \mathbf{x}}$.<br> I've mixed the matrix-by-vector derivative and the derivative of an inverse (I'm not sure it is legal, however) and got this:</p> <p>$$\frac{\partial(\mathbf{Z}^{-1})}{\partial \mathbf{x}} = -\mathbf{Z}^{-1}\frac{\partial\mathbf{Z}}{\partial \mathbf{x}}\mathbf{Z}^{-1}$$</p> <p>The middle term is an $n \times 1$ vector of $n \times n$ matrices, <a href="http://en.wikipedia.org/wiki/Matrix_calculus#Other_matrix_derivatives" rel="nofollow">according to Wikipedia</a>.</p> <p>How am I supposed to multiply a vector of matrices and two matrices in order to obtain an $n \times n$ matrix?</p> <p>And if the last deduction is wrong, how to compute $\frac{\partial(\mathbf{Z}^{-1})}{\partial \mathbf{x}}$?</p>
greg
357,854
<p><span class="math-container">$\def\v{{\rm vec}}\def\p#1#2{\frac{\partial #1}{\partial #2}}$</span>Assume the following gradients are known <span class="math-container">$$\eqalign{ J &amp;= \p{f}{x},\qquad {\mathcal H} = \p{Z}{x} \\ }$$</span> Note that <span class="math-container">$(J,{\mathcal H})\,$</span> are <span class="math-container">$\,(2^{nd},\,3^{rd})$</span> order tensors, respectively.</p> <p>Flattening <span class="math-container">$(Z,{\cal H})$</span> makes them much easier to work with <span class="math-container">$$z=\v(Z),\qquad H = \p{z}{x}$$</span> Let's give the objective function a convenient name <span class="math-container">$$w = Z^{-1}f$$</span> and find its differential and gradient <span class="math-container">$$\eqalign{ dw &amp;= Z^{-1}\,df + dZ^{-1}\,f \\ &amp;= Z^{-1}J\,dx - Z^{-1}\,dZ\,Z^{-1}f \\ &amp;= Z^{-1}J\,dx - Z^{-1}\,dZ\,w \\ &amp;= Z^{-1}J\,dx - \left(w^T\otimes Z^{-1}\right)dz \\ &amp;= \Big(Z^{-1}J - \left(w^T\otimes Z^{-1}\right)H\Big)\,dx \\ \p{w}{x} &amp;= Z^{-1}J - \left(w^T\otimes Z^{-1}\right)H \\ }$$</span> where <span class="math-container">$\otimes$</span> denotes the Kronecker product which is required to <a href="https://en.wikipedia.org/wiki/Vectorization_(mathematics)#Compatibility_with_Kronecker_products" rel="nofollow noreferrer">vectorize</a> a matrix expression.</p> <p>The problem with the accepted answer is that it <strong>assumes</strong> a product rule for the gradient with respect to a vector, which turns out to be <strong>false</strong>, i.e. 
<span class="math-container">$$\eqalign{ \frac{\partial (My)}{\partial x} \ne \bigg(\frac{\partial M}{\partial x}\bigg)y + M\bigg(\frac{\partial y}{\partial x}\bigg) \cr }$$</span> For differentials however, the analogous rule is valid <span class="math-container">$$d(My) = \big(dM\big)\,y + M\,\big(dy\big) $$</span> The underlying issue is that taking the gradient of a variable <em>changes</em> its tensorial character, while taking its differential does not.</p> <p>For example, <span class="math-container">$\frac{\partial y}{\partial x}$</span> is a matrix whereas <span class="math-container">$dy$</span> is just a vector.</p> <p>Similarly, <span class="math-container">$\frac{\partial M}{\partial x}$</span> is a third-order tensor while <span class="math-container">$dM$</span> is a matrix.</p>
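The final formula is easy to validate numerically. Below is a sketch with made-up smooth $Z(x)$ and $f(x)$ (both are illustrative assumptions, not from the question), comparing the vectorized gradient $Z^{-1}J - (w^T\otimes Z^{-1})H$ against a direct finite-difference derivative of $w(x)=Z(x)^{-1}f(x)$:

```python
import numpy as np

def f(x):
    # illustrative smooth vector function (an assumption for the test)
    return np.array([np.sin(x[0]), x[1] * x[2], np.exp(0.1 * x[2])])

def Z(x):
    # illustrative smooth matrix function, diagonally dominant so it stays invertible
    return 3.0 * np.eye(3) + 0.5 * np.array([
        [np.sin(x[0]), x[1],         0.0],
        [x[2],         np.cos(x[1]), x[0] * x[1]],
        [0.2,          x[0],         np.sin(x[2])]])

def fd_jac(g, x, h=1e-6):
    # forward-difference Jacobian; column j is d g / d x_j
    g0 = np.atleast_1d(g(x))
    J = np.zeros((g0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (np.atleast_1d(g(xp)) - g0) / h
    return J

x0 = np.array([0.3, -0.7, 1.1])
Z0 = Z(x0)
w0 = np.linalg.solve(Z0, f(x0))                    # w = Z^{-1} f

J = fd_jac(f, x0)                                  # J = df/dx
H = fd_jac(lambda x: Z(x).flatten(order='F'), x0)  # H = d vec(Z)/dx, column-major vec
Zi = np.linalg.inv(Z0)

# dw/dx = Z^{-1} J - (w^T kron Z^{-1}) H, as derived above
grad = Zi @ J - np.kron(w0.reshape(1, -1), Zi) @ H

# compare with a direct finite difference of w(x) = Z(x)^{-1} f(x)
grad_fd = fd_jac(lambda x: np.linalg.solve(Z(x), f(x)), x0)
print(np.max(np.abs(grad - grad_fd)) < 1e-4)  # True
```

The column-major `flatten(order='F')` matters: the identity $\mathrm{vec}(ABC)=(C^T\otimes A)\,\mathrm{vec}(B)$ assumes column-stacking.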
519,093
<p>Let $C[0,1]$ be the set of all continuous real-valued functions on $[0,1]$.</p> <p>Consider the following three metrics on $C[0,1]$:</p> <p>$p(f,g)=\sup_{t\in[0,1]}|f(t)-g(t)|$</p> <p>$d(f,g)=(\int_0^1|f(t)-g(t)|^2dt)^{1/2}$</p> <p>$t(f,g)=\int_0^1|f(t)-g(t)|dt$</p> <p>Prove that for every $f,g\in C[0,1]$ the following holds: $t(f,g)\le d(f,g)\le p(f,g)$.</p> <p>I understand that $t(f,g)\le p(f,g)$ since $t(f,g)=\int_0^1|f(t)-g(t)|dt \le \int_0^1\sup_{t\in[0,1]}|f(t)-g(t)|dt =\sup_{t\in[0,1]}|f(t)-g(t)|=p(f,g)$.</p> <p>But I can't get the others.</p> <p>I think using Schwarz's inequality might be useful: $(\int_0^1 w(t)v(t)dt)^2 \le (\int_0^1w^2(t)dt)(\int_0^1v^2(t)dt)$</p>
Keshav Srinivasan
71,829
<p><strong>Hint</strong>: That's equivalent to showing that $y^2 - x^2 = (y+x) (y-x)$ is greater than or equal to $0$. (Try proving that with the axioms.) So you just need to show that $y+x$ and $y-x$ are both greater than or equal to $0$.</p>
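The chain of inequalities is also easy to sanity-check numerically before proving it. A sketch with arbitrarily chosen $f=\sin$ and $g=\cos$ (illustrative choices, not from the question), discretizing $[0,1]$:

```python
import numpy as np

# Discretize [0,1] and compare the three "distances" between f = sin and g = cos.
ts = np.linspace(0.0, 1.0, 10001)
diff = np.abs(np.sin(ts) - np.cos(ts))

p = diff.max()                    # sup metric
d = np.sqrt((diff ** 2).mean())   # L^2 metric (interval has length 1, so mean = integral)
t = diff.mean()                   # L^1 metric

print(t <= d <= p)  # True: t(f,g) <= d(f,g) <= p(f,g)
```

The step $t\le d$ is exactly Cauchy–Schwarz with $w=|f-g|$ and $v=1$; the step $d\le p$ follows by bounding the integrand by its supremum.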
4,306,419
<p>Can I write <span class="math-container">$\frac{dy}{dx}=\frac{2xy}{x^2-y^2}$</span> implies <span class="math-container">$\frac{dx}{dy}=\frac{x^2-y^2}{2yx}$</span> ?</p> <p>Actually I have been given two differential equations to solve.</p> <p><span class="math-container">$\frac{dy}{dx}=\frac{2xy}{x^2-y^2}$</span> , <span class="math-container">$\frac{dy}{dx}=\frac{y^2-x^2}{2yx}$</span></p> <p>They have done in the following manner.</p> <p><span class="math-container">$\frac{dy}{dx}=\frac{2xy}{x^2-y^2} \implies \frac{dx}{dy}=\frac{x^2-y^2}{2xy}$</span></p> <p>Now they have solved this <span class="math-container">$\frac{dx}{dy}=\frac{x^2-y^2}{2xy}$</span>.</p> <p>Solution of this differential equation is the solution of the first one and after that they interchanged <span class="math-container">$x$</span> and <span class="math-container">$y$</span> to get the solution of second one.</p> <p>I can not understand how they can do it.</p> <p>Can anyone please tell me when we can do this ?</p>
Amrit Awasthi
717,462
<p>Let <span class="math-container">$y=f(x)$</span> represent some function, and assume that it is invertible in a neighbourhood of, and differentiable at, some point <span class="math-container">$(x_0,y_0)$</span> at which <span class="math-container">$\frac{dy}{dx}\neq 0$</span>; then the identity holds. <br /> <br /> Note that this formula holds in general whenever <span class="math-container">${\displaystyle f}$</span> is continuous and injective on an interval <span class="math-container">$I$</span>, with <span class="math-container">${\displaystyle f}$</span> differentiable at <span class="math-container">${\displaystyle f^{-1}(a)\in I}$</span> and with <span class="math-container">${\displaystyle f'(f^{-1}(a))\neq 0}$</span>.</p>
2,583,232
<p>Prove that $\lim_{n \to \infty} \dfrac{2n^2}{5n^2+1}=\dfrac{2}{5}$</p> <p>$$\forall \epsilon&gt;0 \ \ :\exists N(\epsilon)\in \mathbb{N} \ \ \text{such that} \ \ \forall n &gt;N(\epsilon) \ \ \text{we have} |\dfrac{-2}{25n^2+5}|&lt;\epsilon$$ $$ |\dfrac{-2}{25n^2+5}|&lt;\epsilon \\ |\dfrac{2}{25n^2+5}|&lt;\epsilon \\ \dfrac{5(n^2+1)}{2}&lt;\epsilon \\ {5(n^2+1)}&lt;2\epsilon \\ {n}&gt;\sqrt{\dfrac{2\epsilon -5}{25}}$$</p> <p>What do I do?</p>
user
505,767
<p>$$\forall \epsilon&gt;0 \ \ :\exists n_0\in \mathbb{N} \ \ \text{such that} \ \ \forall n &gt;n_0 \quad \left|\dfrac{2n^2}{5n^2+1}-\dfrac{2}{5}\right|&lt;\epsilon$$</p> <p>$$\left|\dfrac{-2}{25n^2+5}\right|&lt;\epsilon\iff \dfrac{2}{25n^2+5}&lt;\epsilon\iff 25n^2+5&gt;\frac{2}{\epsilon}\iff25n^2&gt; \frac{2}{\epsilon}-5\iff n^2&gt;\frac{2}{25\epsilon}-\frac15\implies n&gt; \sqrt{\frac{2}{25\epsilon}-\frac15}$$</p>
2,583,232
M.R. Yegan
134,463
<p>$\left|{\dfrac{2}{5(5n^2+1)}}\right|&lt;\dfrac{2}{25n^2}&lt;\varepsilon$ holds whenever $n&gt;\dfrac{\sqrt2}{5\sqrt\varepsilon}$, so we may take $N=\left[{\dfrac{\sqrt2}{5\sqrt\varepsilon}}\right]+1$.</p>
2,583,232
John Wayland Bales
246,513
<p>If you want an $\epsilon$-$N$ proof of the limit then given an $\epsilon&gt;0$ you must find an $N&gt;0$ such that if $n&gt;N$ then</p> <p>$$ \left\vert \frac{2n^2}{5n^2+1}-\frac{2}{5}\right\vert&lt;\epsilon $$</p> <p>That is,</p> <p>$$ \frac{2}{25n^2+5}&lt;\epsilon $$</p> <p>You may replace the expression on the left by a larger expression to simplify the exercise, such as for example</p> <p>$$\frac{2}{25n^2+5}&lt;\frac{2}{25n^2}&lt;\frac{1}{n^2}$$</p> <p>Then if you can find for which values of $n$ it is true that </p> <p>$$ \frac{1}{n^2}&lt;\epsilon $$ then it will also be true that </p> <p>$$ \frac{2}{25n^2+5}&lt;\epsilon $$</p> <p>So clearly we want $n&gt;N&gt;\dfrac{1}{\sqrt{\epsilon}}$.</p> <p>But this analysis is just the prequel to the actual proof.</p> <p>$$\text{Prove that } \lim_{n \to \infty} \dfrac{2n^2}{5n^2+1}=\dfrac{2}{5} $$</p> <p>Let $\epsilon&gt;0$ and let $N&gt;\dfrac{1}{\sqrt{\epsilon}}$. Then $\frac{1}{N^2}&lt;\epsilon$. Let $n&gt;N$. Then</p> <p>\begin{eqnarray} \left\vert \frac{2n^2}{5n^2+1}-\frac{2}{5}\right\vert&amp;=&amp;\frac{2}{25n^2+5}\\ &amp;&lt;&amp;\frac{1}{n^2}\\ &amp;&lt;&amp;\frac{1}{N^2}\\ &amp;&lt;&amp;\epsilon \end{eqnarray}</p>
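The choice $N>\frac{1}{\sqrt\epsilon}$ is easy to test empirically (a quick sketch; the sampled values of $\epsilon$ and the number of trial indices are arbitrary):

```python
import math

def check(eps, trials=1000):
    # N > 1/sqrt(eps), as in the proof above
    N = math.floor(1.0 / math.sqrt(eps)) + 1
    return all(
        abs(2 * n * n / (5 * n * n + 1) - 2 / 5) < eps
        for n in range(N + 1, N + 1 + trials)
    )

print(all(check(eps) for eps in [0.1, 0.01, 1e-4, 1e-8]))  # True
```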
722,663
<p>I am totally stuck on how to do this exercise: How can I find the irreducible components of $ V(X^2 - XY - X^2 Y + X^3) $ in $\mathbb{A}^2(\mathbb{R})$?</p> <p>Given an algebraic set, what considerations do I have to make in order to find them? I know the definitions of components and irreducible components, but these don't help me solve this kind of exercise in the end.</p> <p>Can anyone suggest a good reference book also?</p> <p>Thanks</p>
Peter Crooks
101,240
<p>Write your polynomial as a product of irreducible polynomials. The vanishing loci of these irreducible polynomials are then the irreducible components of your variety. </p>
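Here the factorization can be done by hand: $X^2 - XY - X^2Y + X^3 = X(X+1)(X-Y)$, so the irreducible components are the lines $X=0$, $X=-1$ and $Y=X$. If SymPy is available, the factorization is quick to check (a sketch; SymPy may order the factors differently):

```python
import sympy as sp

x, y = sp.symbols('x y')
expr = x**2 - x*y - x**2*y + x**3

fac = sp.factor(expr)
print(fac)  # x*(x + 1)*(x - y), up to ordering of the factors

# Verify the factorization is exact.
print(sp.expand(fac - expr) == 0)  # True
```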
2,025,007
<p>One can show that if $n \geq 3$ is a positive integer, $d=n^2-4$, and $\varepsilon = 1$ if $n$ is odd and $\varepsilon = 0$ if $n$ is even, then the continued fraction expansion of $\frac{\sqrt{d}+\varepsilon}{2}$ has period of even length of the form $(1,n-2)$. One can show that such a continued fraction has period of odd length if and only if the negative Pell equation $t^2 - du^2 = -4$ has a solution. Thus it has no solution when $d=n^2-4$ except when $d=5$, since then the period degenerates to $(1)$. It takes some work to prove all of the details here.</p> <p>However, in Barbeau's book "Pell's equation", Exercise 3.11, he says to take an odd positive integer $n$ and set $d = n^2-4$. Part (b) of the exercise says to show that $d$ has a prime factor congruent to 3 modulo 4 and hence show that the negative Pell equation $t^2 - du^2 = -4$ has no solution. But the claim is false: when $n=3$ or $n=15$, for instance, $d$ has no prime factor congruent to 3 modulo 4. </p> <p>It gets me wondering, though, if he had something else in mind. Is there a short, simple proof that $t^2 - du^2 = -4$ has no solution for $d = n^2-4$, excepting the case $d=5$, perhaps just using some tricky algebra, congruence conditions, quadratic reciprocity, etc.?</p>
franz lemmermeyer
23,365
<p>Start with the unit $\eta = \frac{n + \sqrt{d}}2$. It has norm $+1$, so if there is a unit with norm $-1$ then $\eta$ is a square, say $\eta = \varepsilon^2$ for $\varepsilon = \frac{a+b\sqrt{d}}2$. This leads to $ab=1$, and there are finitely many possibilities left.</p>
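A brute-force search is consistent with the claim: apart from $d=5$ (i.e. $n=3$, where $t=u=1$ works), no small solutions of $t^2-du^2=-4$ turn up. A sketch (a finite search, of course not a proof; the search bound is arbitrary):

```python
import math

def has_small_solution(d, u_max=2000):
    # look for t^2 = d*u^2 - 4 with t a nonnegative integer
    for u in range(1, u_max + 1):
        t2 = d * u * u - 4
        if t2 >= 0 and math.isqrt(t2) ** 2 == t2:
            return True
    return False

for n in range(3, 20, 2):  # odd n, d = n^2 - 4
    d = n * n - 4
    print(n, d, has_small_solution(d))  # only n = 3 (d = 5) should report True
```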
3,760,140
<p>My professor asked us to find the boundary points and isolated points of <span class="math-container">$\Bbb{Q}$</span>; my answer was that the set of rational numbers has no boundary points and no isolated points. However, the question confuses me: the instruction &quot;to find&quot; them suggests that boundary points and isolated points exist and that I have to find them. So does the set of rational numbers have any boundary points or isolated points?</p>
peek-a-boo
568,204
<p>There actually exist infinitely many functions with those properties (these are the basic functions involved in constructing a partition of unity).</p> <blockquote> <p>with &quot;finite interval&quot; support [what is this property called?]</p> </blockquote> <p>You probably mean &quot;with compact support&quot;, which in <span class="math-container">$\Bbb{R}^n$</span> because of Heine-Borel theorem simply means that the support is bounded.</p> <p>Anyway, if we cheat slightly, then we know instantly that your proof is incorrect somewhere. Why? Because you seem to know about the existence of partitions of unity. In particular <span class="math-container">$\Bbb{R}$</span> admits a partition of unity (which is smooth, has compact support and is subordinate to the trivial open cover <span class="math-container">$\{\Bbb{R}\}$</span>). And any function in a partition of unity satisfies all the conditions you're asking for (amongst other things).</p> <hr /> <p>Here's one possible explicit construction. Let <span class="math-container">$h:\Bbb{R} \to \Bbb{R}$</span> be defined by <span class="math-container">\begin{align} h(x) := \begin{cases} e^{-1/x} &amp; \text{if $x&gt;0$}\\ 0 &amp; \text{if $x \leq 0$} \end{cases} \end{align}</span> Verify for yourself that <span class="math-container">$h$</span> is <span class="math-container">$C^{\infty}$</span>, <span class="math-container">$0 \leq h(\cdot) &lt; 1$</span>, and <span class="math-container">$h(x) &gt; 0$</span> if and only if <span class="math-container">$x&gt;0$</span>. 
Next, for any <span class="math-container">$a,b \in \Bbb{R}$</span> with <span class="math-container">$a&lt;b$</span>, define <span class="math-container">$H_{a,b}: \Bbb{R} \to \Bbb{R}$</span> by <span class="math-container">\begin{align} H_{a,b}(x) := h(x-a) \cdot h(b-x) \end{align}</span> Then, <span class="math-container">$H_{a,b}$</span> is <span class="math-container">$C^{\infty}$</span>, <span class="math-container">$0 \leq H_{a,b}(\cdot) &lt; 1$</span>, and <span class="math-container">$H_{a,b}(x) &gt; 0$</span> if and only if <span class="math-container">$a&lt;x&lt;b$</span>. In particular, we have that <span class="math-container">$\text{support}(H_{a,b}) = [a,b]$</span> is compact.</p> <hr /> <p>This is one of those situations where a picture is worth a thousand words. Simply sketch the graph of <span class="math-container">$h$</span> using your knowledge of the exponential function, and prove the somewhat technical part that <span class="math-container">$h$</span> is smooth at the origin. Once you prove this, the rest is really a matter of messing around with various translations and reflections, and if you have the right picture in mind, you'll be able to reconstruct such a function anytime you want.</p>
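The claimed properties of $h$ and $H_{a,b}$ can be probed numerically. A sketch (the interval $[a,b]=[-1,2]$ and the sample points are arbitrary choices of mine):

```python
import math

def h(x):
    # smooth transition: 0 for x <= 0, e^{-1/x} for x > 0
    return math.exp(-1.0 / x) if x > 0 else 0.0

def H(a, b, x):
    # smooth bump whose support is exactly [a, b]
    return h(x - a) * h(b - x)

a, b = -1.0, 2.0
print(H(a, b, a), H(a, b, b))                              # 0.0 0.0 at the endpoints
print(H(a, b, 0.5) > 0)                                    # True strictly inside (a, b)
print(all(H(a, b, x) == 0 for x in [-3, -1.5, 2.5, 10]))   # True outside [a, b]
```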
1,926,383
<p>Recently I baked a spherical cake ($3$ cm radius) and invited over a few friends, $6$ of them, for dinner. When done with the main course, I thought of serving this spherical cake, and to avoid uninvited disagreements over the size of the shares, I took my egg slicer with parallel wedges (designed to cut $6$ slices at a go; my slicer has $5$ wedges) and placed my spherical cake right in the exact middle of it before pressing uniformly upon the slicer.</p> <p>What should the relative placement of the wedges of the slicer be like so that I am able to equally distribute the cake among my $6$ friends? The set-up of the egg slicer looks something like this: <a href="https://i.stack.imgur.com/Mx0xL.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mx0xL.jpg" alt="enter image description here"></a></p>
Bobson Dugnutt
259,085
<p>Let $R$ be the radius of your spherical cake. We will consider the case when we need to divide the cake into $n$ equally sized pieces. </p> <p>Each piece will then have volume $\frac{4}{3}\frac{\pi R^3}{n}.$ </p> <p>We can calculate the volume of each slice by setting up a series of integrals, where we integrate over an infinitesimally thin cylinder with volume $dV=\pi r^2 dy,$ where $r\leq R$ is the radius of the cylinder (like <a href="http://s3.amazonaws.com/answer-board-image/1c3eb690-aa4b-44d1-9b8a-e80e01d286c5.gif" rel="nofollow noreferrer">this</a>, but where $r$ here is the radius at height $h=R-y$). We have (draw this for yourself to see it) $r^2+y^2=R^2,$ which gives </p> <p>$$\frac{4}{3}\frac{\pi R^3}{n}=\pi\int_{a_i}^{b_i}(R^2-y^2)dy=\pi\left(R^2(b_i-a_i) +\frac{a_i^3-b_i^3}{3} \right),$$ where $a_i$ is the value of $y$ where the $i$th slice starts and $b_i$ is the value where it ends, with $a_1=-R$ and $b_n=R$. Note that we have $a_{i+1}=b_i.$</p> <p>We thus have $n$ equations with $n-1$ unknowns:</p> <p>\begin{align} \frac{4}{3}\frac{ R^3}{n}&amp;=R^2(b_1-a_1) +\frac{a_1^3-b_1^3}{3}=R^2(b_1-(-R)) +\frac{(-R)^3-b_1^3}{3}\\ &amp;=R^2(b_2-a_2) +\frac{a_2^3-b_2^3}{3}=R^2(b_2-b_1) +\frac{b_1^3-b_2^3}{3} \\ &amp;\quad\quad \quad\;\;\;\;\quad \quad\quad\quad\;\quad\quad \quad\vdots \\ &amp;= R^2(b_n-a_n) +\frac{a_n^3-b_n^3}{3}=R^2(R-b_{n-1}) +\frac{b_{n-1}^3-R^3}{3}. \end{align}</p> <p>As this is non-linear, it is probably most fruitful to solve it numerically. </p> <p>For $n=6$, in units of cake-radii (so $R=1$, but we can always scale), we have </p> <p>$$b_1\approx-0.4817, \quad b_2\approx-0.2261,$$</p> <p>which is all we need because of the symmetry ($b_3=0$). 
So your cake/egg-slicer should have distance </p> <p>$$\Delta b_{01}=0-b_1\approx 0.4817,\Delta b_{12}=b_2-b_1\approx 0.2556,\Delta b_{23}=0-b_2 \approx 0.2261.$$</p> <p>So your egg-slicer should look a little something like this:</p> <p>$\quad\quad\quad\;\;\;$<a href="https://i.stack.imgur.com/WItMs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WItMs.png" alt="enter image description here"></a></p> <p>Here is the same as above, but for $n\in[1,20]$:</p> <p><a href="https://i.stack.imgur.com/PJixn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PJixn.png" alt="enter image description here"></a></p> <p>I must say that I'm surprised how broad the outermost pieces should be in order for them to have a volume equal to the rest, but this is probably my brain not being able to grasp exponentiation (here taking the third power).</p> <p>Here is the (probably very ugly, but working) Mathematica code:</p> <pre><code>ClearAll["Global`*"]; M = 20; For[ n = 1, n &lt; M, n++, ans = b /. FullSimplify[ Assuming[Element[n, Integers], Solve[4/n == 3 (b - a) + a^3 - b^3, b]]]; B[a_] = N[Re[ans[[3]]]]; tabs[n] = {RecurrenceTable[{h[k + 1] == B[h[k]], h[1] == -1}, h, {k, 1, n + 1}]}; ] NumberLinePlot[Table[tabs[k], {k, 1, n}], {x, -1, 1}] </code></pre> <p>As shown in <a href="https://math.stackexchange.com/a/1928624/259085">this answer</a>, we can find the position of the <em>first</em> blade to be (in the case of $R=1$)</p> <p>$$ x(n)=2\cos \left( {\frac{{\alpha - 2\pi }} {3}} \right), \quad\quad \text{with } {\,\alpha = \arctan \left( {\frac{{2\sqrt {\left( {n - 1} \right)} }} {{n - 2}}} \right)}, \text{ for } n&gt;2.$$</p> <p>But feeding this answer into the next cubic equation and trying to solve just seems masochistic, hence the above call for numerical methods. Note that as $\tan (0)=0$ we have $\lim_{n\rightarrow \infty}x(n)=-1$ as expected. </p>
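The same cut positions can be found with plain bisection, one cut at a time (a Python sketch paralleling the Mathematica code above, with $R=1$; the function and variable names are mine):

```python
def cap_cumulative(b):
    # volume of the unit sphere below height y = b, divided by pi:
    # integral of (1 - y^2) dy from -1 to b
    return b - b**3 / 3.0 + 2.0 / 3.0

def solve_cuts(n, tol=1e-12):
    # find b_1 < ... < b_{n-1} with cap_cumulative(b_k) = k * (4/3)/n
    cuts = []
    for k in range(1, n):
        target = k * (4.0 / 3.0) / n
        lo, hi = -1.0, 1.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if cap_cumulative(mid) < target:
                lo = mid
            else:
                hi = mid
        cuts.append(0.5 * (lo + hi))
    return cuts

b = solve_cuts(6)
print([round(v, 4) + 0.0 for v in b])  # [-0.4817, -0.2261, 0.0, 0.2261, 0.4817]
```

The values match those quoted above, including the symmetry $b_3=0$ and $b_{4,5}=-b_{2,1}$.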
989,740
<p>How do I prove the following statement?</p> <blockquote> <p>If $x^2$ is irrational, then $x$ is irrational. The number $y = π^2$ is irrational. Therefore, the number $x = π$ is irrational</p> </blockquote>
Community
-1
<p><strong>Hint</strong></p> <p>$$(A\implies B)\iff (\neg B\implies \neg A)$$</p>
989,740
Timbuc
118,527
<p>$$\;(A\implies B)\equiv(\neg B\implies\neg A)\;$$</p> <p>and this means that</p> <p>($\;x^2\;$ is irrational $\;\implies\;x\;$ is irrational)$\;\equiv (x\;$ is rational $\;\implies\;x^2\;$ is rational) , which seems way simpler to prove.</p>
989,740
Cameron Buie
28,900
<p>As written, it doesn't appear that you have a statement that needs to be proved, but an argument (consisting of <em>multiple</em> statements) that needs to be validated.</p> <p>It appears that you have a <a href="http://en.wikipedia.org/wiki/Modus_ponens" rel="nofollow">modus ponens</a> argument. All such arguments are valid, so you're done, as far as I can tell.</p> <p>In order to prove that this particular argument is sound, you would need to prove that both premises are true. The other answers suggest how you may show that the first premise is true. Showing that $\pi^2$ is irrational, though, is likely not something that you're being asked to do. Hence, I strongly suspect that a proof isn't what's required here.</p>
283,816
<p>$ 2 \ln (5x) = 16$</p> <p>$ \ln (5x) = 8 $</p> <p>$ 5x = e^8 $</p> <p>$ x = \dfrac {1}{5}e^8$</p> <p>But why can't we do it like this:</p> <p>$ \ln(5x)^2 = 16$</p> <p>I thought that was a possibility with logarithms?</p>
André Nicolas
6,312
<p>Sure it's possible. We have $\ln((5x)^2)=16$, and therefore $(5x)^2=e^{16}$. Take square roots, remembering that $x$ must be positive for the logarithm to be defined. We get the right answer, a little more slowly than before. </p>
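Both routes lead to the same $x=\frac15 e^8$, which a quick numerical check confirms:

```python
import math

x = math.exp(8) / 5

print(round(2 * math.log(5 * x), 10))     # 16.0, the original equation
print(round(math.log((5 * x) ** 2), 10))  # 16.0, the "squared" form
```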
283,816
Andrew Maurer
51,043
<p>If you mean $\ln\left[(5x)^2\right]$, then yes! </p> <p>You are allowed to do that!</p>
1,946,535
<p>A Liouville number is an irrational number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q &gt; 1$ and such that $0 &lt; \mid x - \frac{p}{q} \mid &lt; \frac{1}{q^n} $.</p> <p>I'm looking for either hints or a complete proof for the fact that $e$ is not a Liouville number. I can prove that $e$ is irrational and even that it is transcendental, but I'm a bit stuck here.</p> <p>Here's my research:</p> <p><a href="https://en.wikipedia.org/wiki/Liouville_number" rel="noreferrer">The wikipedia article about Liouville numbers</a> states:</p> <blockquote> <p>[...] not every transcendental number is a Liouville number. The terms in the continued fraction expansion of every Liouville number are unbounded; using a counting argument, one can then show that there must be uncountably many transcendental numbers which are not Liouville. Using the explicit continued fraction expansion of $e$, one can show that e is an example of a transcendental number that is not Liouville.</p> </blockquote> <p>However, there's clearly more to the argument than just the (un)boundedness of the continued fraction terms: the terms of $e$'s continued fraction expansion <em>are</em> unbounded, and yet $e$ is not a Liouville number. Also, if possible, I would like to avoid using continued fractions at all.</p> <p><a href="https://books.google.de/books?id=R70lCQAAQBAJ&amp;pg=PA43&amp;lpg=PA43&amp;dq=e%20not%20a%20liouville%20number%20proof&amp;source=bl&amp;ots=DyJ4TDhjaR&amp;sig=8Cjvcnj9fOXhLBda9I9DMKNe4TE&amp;hl=de&amp;sa=X&amp;ved=0ahUKEwj45q7pubTPAhUmQpoKHeSjDQY4ChDoAQgbMAA#v=onepage&amp;q=e%20not%20a%20liouville%20number%20proof&amp;f=false" rel="noreferrer">This book</a> has the following as an exercise:</p> <blockquote> <p>Prove that $e$ is not a Liouville number. 
(Hint: Follow the irrationality proof of $e^n$ given in the supplements to Chapter 1.)</p> </blockquote> <p>Unfortunately, the supplements to Chapter 1 are not publicly available in the sample, and I do not want to buy that book.</p> <p><a href="https://books.google.de/books?id=TRNxKFMuNPkC&amp;pg=PA25&amp;lpg=PA25&amp;dq=e%20not%20a%20liouville%20number%20proof&amp;source=bl&amp;ots=wyHE_EcTUE&amp;sig=-MsBOxHYKLh-K_xGKBIB2JthBM8&amp;hl=de&amp;sa=X&amp;ved=0ahUKEwjSvP2S0LTPAhWlNJoKHcR3Dwg4FBDoAQgjMAE#v=onepage&amp;q=e%20not%20a%20liouville%20number%20proof&amp;f=false" rel="noreferrer">This book</a> states:</p> <blockquote> <p>Given any $\varepsilon &gt; 0$, there exists a constant $c(e,\varepsilon) &gt; 0$ such that for all $p/q$ there holds $\frac{c(e,\varepsilon)}{q^{2+\varepsilon}} &lt; \mid e - \frac{p}{q} \mid$. [...] Using [this] inequality, show that $e$ is not a Liouville number.</p> </blockquote> <p>This, given the inequality, I managed to do. But I do not have any idea of how one would go about proving that inequality.</p> <p>I greatly appreciate any help!</p>
Jack D'Aurizio
44,121
<p>Using <a href="https://en.wikipedia.org/wiki/Gauss%27s_continued_fraction" rel="noreferrer">Gauss continued fraction for $\tanh$</a>, it is not difficult to show that the continued fraction of $e$ has the following structure: $$ e=[2;1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,\ldots]\tag{1}$$ then by studying the sequence of convergents $\left\{\frac{p_n}{q_n}\right\}_{n\geq 1}$ through $$\left|\frac{p_n}{q_n}-\frac{p_{n+1}}{q_{n+1}}\right|=\frac{1}{q_n q_{n+1}}=\frac{1}{q_n(\alpha_{n+1}q_n+q_{n-1})}\tag{2}$$ and $$ \left|e-\frac{p_n}{q_n}\right| = \left|\sum_{k\geq n}\frac{(-1)^k}{q_k q_{k+1}}\right| \tag{3} $$ we may easily get that there is no rational approximation such that $$ \left|e-\frac{p_n}{q_n}\right|\leq \frac{1}{q_n^4}\tag{4} $$ hence $e$ is not a Liouville number. It is not difficult to use $(1)$ to prove the stronger statement</p> <blockquote> <p>The irrationality measure of $e$ is $2$.</p> </blockquote>
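The bound $(4)$ can be probed numerically from the pattern $(1)$. Below, $e$ is replaced by the very accurate rational approximation $\sum_{k\le 40} 1/k!$ (error $<1/41!$, negligible at these scales), and $|e-p_n/q_n|>q_n^{-4}$ is checked for the convergents with $q_n>1$ (a finite check, not a proof; since convergents are the best rational approximations, they are the only candidates that matter):

```python
from fractions import Fraction
from math import factorial

# High-precision rational approximation of e (error < 1/41!).
e_approx = sum(Fraction(1, factorial(k)) for k in range(41))

# Continued fraction of e: [2; 1,2,1, 1,4,1, 1,6,1, ...]
terms = [2]
for m in range(1, 8):
    terms += [1, 2 * m, 1]

# Convergents via the standard recurrence p_k = a_k p_{k-1} + p_{k-2}.
p0, q0, p1, q1 = 1, 0, terms[0], 1
ok = True
for a in terms[1:]:
    p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
    if q1 > 1:
        ok = ok and abs(e_approx - Fraction(p1, q1)) > Fraction(1, q1 ** 4)

print(ok)  # True: no convergent with q > 1 beats 1/q^4
```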
3,655,215
<p>There are many contradictions in the literature on tensors and differential forms. Authors use the words coordinate-free and geometric. For example, the book Tensor Analysis and Elementary Differential Geometry for Physicists and Engineers says differential forms are coordinate-free while tensors are coordinate-dependent. But when you look at the Wikipedia article on tensor calculus, it says that tensors are a coordinate-free representation. Another example would be Kip Thorne's Modern Classical Physics, where he explains that he develops physics in a coordinate-free way using tensors. Other authors say, we develop differential geometry in a geometric way, or we develop physics in a geometric way. Is geometry synonymous with coordinate-free? This is all very confusing. There are many more examples in the literature, but I don't see a definitive answer; the further I look, the more contradictions I find between authors. I am looking for an authoritative textbook that I can learn from. What do you think about Chris Isham's Modern Differential Geometry for Physicists? Also, is it better to use tensors or differential forms in theoretical physics?</p>
Traves Wood
1,048,652
<p>I have a book on tensors that teaches both coordinate-free (geometric) tensors and coordinate-dependent ones. It is published by Springer and is called <em>Introduction to Tensor Analysis and the Calculus of Moving Surfaces</em> by Pavel Grinfeld. This could be why there is an argument: people assume a mathematical tool must belong exclusively to one system or the other.</p>
3,655,215
<p>There are many contradictions in the literature on tensors and differential forms. Authors use the words coordinate-free and geometric. For example, the book Tensor Analysis and Elementary Differential Geometry for Physicists and Engineers says differential forms are coordinate-free while tensors are coordinate-dependent. But when you look at the Wikipedia article on tensor calculus, it says that tensors are a coordinate-free representation. Another example would be Kip Thorne's Modern Classical Physics, where he explains that he develops physics in a coordinate-free way using tensors. Other authors say, we develop differential geometry in a geometric way, or we develop physics in a geometric way. Is geometry synonymous with coordinate-free? This is all very confusing. There are many more examples in the literature, but I don't see a definitive answer; the further I look, the more contradictions I find between authors. I am looking for an authoritative textbook that I can learn from. What do you think about Chris Isham's Modern Differential Geometry for Physicists? Also, is it better to use tensors or differential forms in theoretical physics?</p>
Deane
10,584
<p>Yes, &quot;in a geometric way&quot; means coordinate-independent. Here's the story I tell my students: Although units (meters, grams, seconds, etc.) are used all the time in physics, we do not believe that the laws of physics change if the units are changed. Coordinates on 2-dimensional or 3-dimensional space are just higher dimensional versions of units. So, even though we do most of our calculations using coordinates, the initial assumptions and the final conclusions should not depend on the coordinates we used to show that the assumptions imply the conclusions.</p> <p>The same is true with geometry. Geometric facts should not depend on coordinates. Recall that until Descartes came along, geometry was always done without the use of coordinates. Coordinates are extraordinarily useful, because the purely geometric arguments, such as what you learn in high school geometry, are painful to do and find. Descartes showed us how to do it all using algebra and, later, calculus far more easily.</p> <p>By now I can identify at least 3 different ways to do calculations. One is to use local coordinates and write tensors as higher dimensional generalizations of vectors in <span class="math-container">$\mathbb{R}^n$</span>. Another is to use differential forms and the method of moving frames as developed by Elie Cartan. The third is to avoid using coordinates completely and use abstract definitions for everything. I find that I use all three. Which one is easier to use depends on the specific calculation or proof.</p> <p>But in the end you usually (but not 100% of the time) want to show that what you've derived is independent of coordinates.</p>
1,268
<p>There has long been debate about whether a first year undergraduate course in discrete mathematics would be better for students than the traditional calculus sequence. The purpose of this question is not to further that debate, but to inquire whether any text has tried to teach calculus by emphasizing probability as a primary motivating example.</p> <p>It seems that such a text would introduce integrals before derivatives, which I know many authors have tried.</p> <p>I am asking this question because it seems <a href="https://mathoverflow.net/questions/162125/what-areas-of-pure-mathematics-research-are-best-for-a-post-phd-transition-to-in">industry has use for people proficient with probability</a>, and pure mathematics makes great use of these ideas as well (think of probabilistic methods in number theory, for example). It seems that a good course of the above type would be cosmopolitan enough to attract students with tastes in either pure or applied mathematics to major in mathematics.</p> <p>EDIT: In the absence of a perfect text for this, it may be worth a collaborative effort by mathematicians to write such a text. Is there a way to "crowdsource" writing a text like this as a wiki?</p>
neuron
660
<p>I think this one may suit your needs, at least partially: Richard W. Hamming, <em>Methods of Mathematics Applied to Calculus, Probability, and Statistics</em>, <a href="http://store.doverpublications.com/0486439453.html" rel="nofollow">http://store.doverpublications.com/0486439453.html</a></p>
2,413,899
<p>When I was 7 or 8 years old, I was playing with drawing circles. For some reason, I thought about taking a 90° angle and placing the vertex on the circle and marking where the two sides intersected the circle.</p> <p><a href="https://i.stack.imgur.com/oskEP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oskEP.png" alt="90° angle on a circle"></a></p> <p>It appeared to me that connecting the two points of intersection created a line through the center regardless of how the square was rotated.</p> <p><a href="https://i.stack.imgur.com/IIMqF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IIMqF.png" alt="90° angle on a circle with line connecting points of intersection"></a></p> <p>I tested this many times and, within the error of my tools, my conjecture seemed to be correct. I had indeed discovered a method for finding the center of a circle. Simply create at least two of these bisecting lines and voila!</p> <p><a href="https://i.stack.imgur.com/MrFPw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MrFPw.png" alt="3 90° angles on a circle with lines connecting points of intersection that appear to meet in the center of the circle"></a></p> <p>The problem I have now, many years later, is that I am not satisfied with simply checking a bunch of times; I want a proof.</p> <p>I can intuitively explain two extremes and the middle without any fancy math, but I still have not come up with a general proof.</p> <p>As one of the points of intersection approaches the vertex of the angle, the leg with that point approaches a tangent of the circle.
It is intuitive to me that a line, which is perpendicular to a tangent line and passes through the point at which the tangent line touches the circle, would bisect the circle.</p> <p><a href="https://i.stack.imgur.com/y1uqm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y1uqm.png" alt="90° angle on a circle being rotated until one side is tangent to the circle"></a></p> <p>In addition, when the angle is rotated such that the two line segments formed between the vertex and the points of intersection are equal, these line segments form half of an inscribed square. A line that passes through two opposite vertices of that square would also bisect the circle.</p> <p><a href="https://i.stack.imgur.com/wjDJf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wjDJf.png" alt="Square inscribed in circle"></a></p> <p>Can you prove whether placing the vertex of a 90° angle on a circle creates a bisector by connecting the points of intersection?</p>
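For reference, here is a sketch of the standard argument (the inscribed angle theorem); it confirms the conjecture by showing that the chord joining the two intersection points is a diameter:

```latex
Let $P$ be the vertex of the right angle on the circle with centre $O$, and let
$A$, $B$ be the two points where its sides meet the circle. The inscribed angle
theorem says the central angle subtending a chord is twice any inscribed angle
subtending the same chord, so
\[
  \angle AOB \;=\; 2\,\angle APB \;=\; 2 \cdot 90^{\circ} \;=\; 180^{\circ}.
\]
Hence $A$, $O$, $B$ are collinear: the segment $AB$ passes through the centre
$O$ and is a diameter, which is exactly the observed bisecting line.
```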
Satish Ramanathan
99,745
<p>Hint:</p> <p>Put <span class="math-container">$y = 2^{x^2+x}$</span></p> <p>Integral becomes <span class="math-container">$\int_{1}^{4} \frac{1}{\sqrt{1+ \frac{4}{\ln(2)}\ln(y)}}dy$</span></p> <p>Again put <span class="math-container">$\sqrt{1+ \frac{4}{\ln(2)}\ln(y)}= u$</span></p> <p>Integral becomes <span class="math-container">$\int_{1}^{3}\frac{\ln(2)}{2e^a} e^{au^2} du$</span></p> <p>where <span class="math-container">$a = \frac{\ln(2)}{4}$</span></p> <p>It resembles the standard integral <span class="math-container">$\int_{1}^{3} e^{au^2}du$</span></p> <p><span class="math-container">$$\int e^{au^2}du = \frac{-i\sqrt{\pi}}{2\sqrt{a}} \text{erf}\left(iu\sqrt{a}\right)$$</span></p> <p>I hope you can take it from there</p> <p>I am attaching the table of standard integrals for your reference</p> <p><a href="http://integral-table.com/downloads/integral-table.pdf" rel="nofollow noreferrer">http://integral-table.com/downloads/integral-table.pdf</a></p> <p>see page 7, integral number 67</p> <p>Good luck</p>
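The substitution chain can be sanity-checked numerically (a sketch added here, using a simple midpoint rule). Note that $dy = y\,\frac{\ln 2}{2}\,u\,du$, so the constant in front of the transformed integral works out to $\frac{\ln 2}{2}e^{-a}$ with $a = \frac{\ln 2}{4}$:

```python
import math

a = math.log(2) / 4

def midpoint(f, lo, hi, n=20000):
    """Composite midpoint rule; accurate enough for a sanity check."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

lhs = midpoint(lambda y: 1 / math.sqrt(1 + 4 * math.log(y) / math.log(2)), 1, 4)
rhs = (math.log(2) / 2) * math.exp(-a) * midpoint(lambda u: math.exp(a * u * u), 1, 3)
assert abs(lhs - rhs) < 1e-6
```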
2,413,899
<p>When I was 7 or 8 years old, I was playing with drawing circles. For some reason, I thought about taking a 90° angle and placing the vertex on the circle and marking where the two sides intersected the circle.</p> <p><a href="https://i.stack.imgur.com/oskEP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oskEP.png" alt="90° angle on a circle"></a></p> <p>It appeared to me that connecting the two points of intersection created a line through the center regardless of how the square was rotated.</p> <p><a href="https://i.stack.imgur.com/IIMqF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IIMqF.png" alt="90° angle on a circle with line connecting points of intersection"></a></p> <p>I tested this many times and, within the error of my tools, my conjecture seemed to be correct. I had indeed discovered a method for finding the center of a circle. Simply create at least two of these bisecting lines and voila!</p> <p><a href="https://i.stack.imgur.com/MrFPw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MrFPw.png" alt="3 90° angles on a circle with lines connecting points of intersection that appear to meet in the center of the circle"></a></p> <p>The problem I have now, many years later, is that I am not satisfied with simply checking a bunch of times; I want a proof.</p> <p>I can intuitively explain two extremes and the middle without any fancy math, but I still have not come up with a general proof.</p> <p>As one of the points of intersection approaches the vertex of the angle, the leg with that point approaches a tangent of the circle.
It is intuitive to me that a line, which is perpendicular to a tangent line and passes through the point at which the tangent line touches the circle, would bisect the circle.</p> <p><a href="https://i.stack.imgur.com/y1uqm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y1uqm.png" alt="90° angle on a circle being rotated until one side is tangent to the circle"></a></p> <p>In addition, when the angle is rotated such that the two line segments formed between the vertex and the points of intersection are equal, these line segments form half of an inscribed square. A line that passes through two opposite vertices of that square would also bisect the circle.</p> <p><a href="https://i.stack.imgur.com/wjDJf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wjDJf.png" alt="Square inscribed in circle"></a></p> <p>Can you prove whether placing the vertex of a 90° angle on a circle creates a bisector by connecting the points of intersection?</p>
Bill Donald
476,771
<p>Hint:</p> <p>Using $u=\frac{2x+1}{2}$ yields an <a href="http://mathworld.wolfram.com/Erfi.html" rel="nofollow noreferrer">imaginary error function</a>.</p>
745,876
<p>$$\exists! x : A(x) \Rightarrow \exists x : A(x)$$ Assuming that $A(x)$ is an open sentence. I'm new to abstract mathematics and proofs, so I came here to ask for some simplification. Thanks</p>
gt6989b
16,192
<p>If there is a unique $x$ such that $A(x)$ is true,</p> <p>then there must be some $x$ for which $A(x)$ is true.</p>
4,230,782
<blockquote> <p>Compute <span class="math-container">$\frac 17$</span> in <span class="math-container">$\Bbb{Z}_3.$</span></p> </blockquote> <p>We will have to solve <span class="math-container">$7x\equiv 1\pmod p,~~p=3.$</span></p> <ul> <li>We get <span class="math-container">$x\equiv 1\pmod 3.$</span></li> <li>Then <span class="math-container">$x\equiv 1+3a_1\pmod 9,$</span> so <span class="math-container">$7(1+3a_1)\equiv 1 \pmod 9$</span> basically lifting the exponent of <span class="math-container">$p=3,$</span> we get <span class="math-container">$1+3a_1\equiv 4\pmod 9\implies a_1\equiv 1\pmod 3.$</span></li> <li>So let <span class="math-container">$$x\equiv 1+3\cdot 1+3^2\cdot a_2 \pmod {27}\implies 7(4+3^2\cdot a_2)\equiv 1\pmod {27}\implies 4+3^2\cdot a_2\equiv 4\pmod {27}\implies a_2\equiv 0 \pmod 3.$$</span></li> <li>So let <span class="math-container">$$ x\equiv 1+3\cdot 1+3^2\cdot 0+ 3^3\cdot a_3 \pmod {81}\implies 7(4+3^2\cdot 0+3^3\cdot a_3)\equiv 1\pmod {81}\implies 4+3^3\cdot a_3\equiv 58\pmod {81}\implies a_3\equiv 2 \pmod 3.$$</span></li> <li>So let <span class="math-container">$$ x\equiv 1+3\cdot 1+3^2\cdot 0+ 3^3\cdot 2+3^4\cdot a_4 \pmod {243}\implies 7(4+3^2\cdot 0+3^3\cdot 2+3^4\cdot a_4)\equiv 1\pmod {243}\implies 1+3+54+3^4\cdot a_4\equiv 139\pmod {243}\implies a_4\equiv 1 \pmod 3.$$</span></li> <li>So let <span class="math-container">$$ x\equiv 1+3\cdot 1+3^2\cdot 0+ 3^3\cdot 2+3^4\cdot 1+ 3^5\cdot a_5 \pmod {729}\implies 7(4+3^2\cdot 0+3^3\cdot 2+3^4\cdot 1+3^5\cdot a_5)\equiv 1\pmod {729}\implies 1+3+54+81+3^5\cdot a_5\equiv 625\pmod {729}\implies a_5\equiv 2 \pmod 3.$$</span></li> </ul> <p>I haven't worked it out, but I think <span class="math-container">$a_6$</span> is <span class="math-container">$0.$</span></p> <p>So the sequence we are getting is <span class="math-container">$(a_0,a_1,a_2,a_3,a_4,\dots)=(1,1,0,2,1,2,\dots).$</span></p> <p>But I am not sure if it is correct, since the digit sequence does not look periodic. Any help?</p>
SSA
863,540
<p>Let's use the 3-adic expansion here.</p> <ul> <li><em>The theorem says: a rational number with p-adic absolute value 1 has a purely periodic p-adic expansion if and only if it lies in the real interval [-1, 0)</em>. So <span class="math-container">${-\frac{1}{7}}$</span> is purely periodic, while <span class="math-container">${\frac{1}{7}}$</span> is only eventually periodic.</li> <li>We will first work with <span class="math-container">${-\frac{1}{7}}$</span> and then negate it.</li> <li>We know that <span class="math-container">${\overline {n_0n_1...n_{k-1}} = \frac {n_0+n_1p+\dots+n_{k-1}p^{k-1}}{1-p^k}}$</span>, with the digits listed least significant first.</li> <li>The least <span class="math-container">${k \geq 1}$</span> making <span class="math-container">${3^k \equiv 1 \pmod 7}$</span> is <span class="math-container">$k=6$</span>.</li> <li><span class="math-container">${3^6-1 =728\equiv 0 \pmod 7}$</span>, and <span class="math-container">${728=104 \cdot 7}$</span>, hence</li> <li><span class="math-container">${-\frac{1}{7}}$</span> = <span class="math-container">${- \frac{104}{3^6-1}}$</span> = <span class="math-container">${\frac{104}{1-3^6}}$</span>, and <span class="math-container">$104=10212_3$</span>, whose digits listed least significant first are <span class="math-container">$2,1,2,0,1,0$</span>.</li> <li>So <span class="math-container">${-\frac{1}{7}=}$</span> <span class="math-container">${\overline{2,1,2,0,1,0}}$</span>; now negate to get the answer, as we know</li> <li>if <span class="math-container">${x=c_dp^d+c_{d+1}p^{d+1}+...+ c_ip^i+... }$</span> with <span class="math-container">${c_d \neq 0}$</span>, then</li> <li><span class="math-container">${-x = (p-c_d)p^d+ (p-1-c_{d+1})p^{d+1}+...+(p-1-c_i)p^i+...}$</span></li> <li>This gives <span class="math-container">${\frac {1}{7}= 1,\overline{1,0,2,1,2,0}}$</span>: the digit <span class="math-container">$1$</span> followed by the repeating block <span class="math-container">$1,0,2,1,2,0$</span>.</li> </ul>
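As a sanity check (an addition, not part of the original answer), the digits can be read off from the inverse of $7$ modulo $3^N$; this reproduces the hand computation above, confirms $a_6=0$, and exhibits the eventual period $6$ with a preperiod of one digit (consistent with the theorem, since $\frac17\notin[-1,0)$):

```python
# First N digits a_0, a_1, ... of the 3-adic expansion of 1/7, obtained from
# the inverse of 7 modulo 3^N (pow with exponent -1 needs Python 3.8+).
N = 20
x = pow(7, -1, 3**N)          # the unique x in [0, 3^N) with 7*x ≡ 1 (mod 3^N)
digits = []
for _ in range(N):
    digits.append(x % 3)      # extract base-3 digits, least significant first
    x //= 3

assert digits[:7] == [1, 1, 0, 2, 1, 2, 0]            # matches the hand computation
assert digits[1:7] == digits[7:13] == digits[13:19]   # eventually periodic, period 6
```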
2,513,509
<p>If $f,g,h$ are functions defined on $\mathbb{R},$ the function $(f\circ g \circ h)$ is even:</p> <p>i) If $f$ is even.</p> <p>ii) If $g$ is even.</p> <p>iii) If $h$ is even.</p> <p>iv) Only if all the functions $f,g,h$ are even. </p> <p>Shall I take different examples of functions and see?</p> <p>Could anyone explain this for me please?</p>
lab bhattacharjee
33,337
<p>If $\arcsin h=u,\sin u=h,\cos u=\sqrt{1-h^2},\tan u=\dfrac h{\sqrt{1-h^2}},u=\arctan\dfrac h{\sqrt{1-h^2}}$</p> <p>$$\dfrac{\arcsin h-\arctan h}{\sin h-\tan h}=-\cos h\cdot\dfrac{\arctan\dfrac h{\sqrt{1-h^2}}-\arctan h}{\sin h(1-\cos h)}$$</p> <p>Now $\arctan\dfrac h{\sqrt{1-h^2}}-\arctan h=\arctan\dfrac{\dfrac h{\sqrt{1-h^2}}-h}{1+\dfrac{h^2}{\sqrt{1-h^2}}}$</p> <p>$=\arctan\dfrac{h(1-\sqrt{1-h^2})}{\sqrt{1-h^2}+h^2}=\arctan\dfrac{h^3}{(\sqrt{1-h^2}+h^2)(1+\sqrt{1-h^2})}$</p> <p>So, $\displaystyle\lim_{h\to0}\dfrac{\arctan\dfrac{h^3}{(\sqrt{1-h^2}+h^2)(1+\sqrt{1-h^2})}}{h^3}=\cdots=\dfrac1{1+\sqrt1}=?$</p> <p>and $\displaystyle\lim_{h\to0}\dfrac{\sin h(1-\cos h)}{h^3}=\lim_{h\to0}\dfrac1{(1+\cos h)}\left(\lim_{h\to0}\dfrac{\sin h}h\right)^3=\dfrac1{1+\cos0}=?$</p>
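Putting the two pieces together, the original ratio tends to $-1\cdot\frac{1/2}{1/2}=-1$. A quick numerical check (added here; the choice $h=10^{-3}$ keeps floating-point cancellation harmless):

```python
import math

def ratio(h):
    """The ratio (arcsin h - arctan h) / (sin h - tan h)."""
    return (math.asin(h) - math.atan(h)) / (math.sin(h) - math.tan(h))

h = 1e-3
assert abs(ratio(h) - (-1)) < 1e-4                            # full limit is -1
assert abs(math.sin(h) * (1 - math.cos(h)) / h**3 - 0.5) < 1e-4   # second piece -> 1/2
```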
99,312
<p>Let C be a geometrically integral curve over a number field K and let K' be a number field containing K. Does there exist a number field L containing K such that</p> <ul> <li>$L \cap K' = K$, and</li> <li>$C(L) \neq \emptyset$?</li> </ul> <p>Note that the hypotheses on C are necessary -- the curve x^2 + y^2 = 0, with the origin removed, is not geometrically integral, but gives a counterexample for K = Q and K' = Q(i).</p> <p>Also, I can prove that this is true when C has prime gonality. It would be odd, though, for this to be a necessary hypothesis.</p>
Community
-1
<p>Yes, this follows from a Theorem of Moret-Bailly, see for example Corollary 1.5</p> <p><a href="http://math.stanford.edu/~conrad/vigregroup/vigre05/mb.pdf" rel="noreferrer">http://math.stanford.edu/~conrad/vigregroup/vigre05/mb.pdf</a></p> <p>Roughly speaking, given a finite set $S$ of primes $v$ for which $C(K_v)$ is non-empty, this produces a field $L$ with $C(L) \neq \emptyset$ and $L_v = K_v$ for all $v \in S$.</p> <p>To guarantee that $L \cap K' = K$, one may as well assume that $K'/K$ is Galois with Galois group $G$. Then for every conjugacy class $g \in G = \mathrm{Gal}(K'/K)$, let $v$ be a prime such that $\langle \mathrm{Frob}_v \rangle = \langle g \rangle \in G$ and $C(K_v) \ne \emptyset$. (The existence of such $v$ follows from Cebotarev, the Weil conjectures, and Hensel's Lemma.) If $S$ is the resulting set, then one may find $L$ with $C(L)$ non-empty and $L_v = K_v$ for all $v \in S$, and so (by Cebotarev) that $L \cap K' = K$.</p> <p>This theorem gets used all the time in "potential modularity" theorems.</p>
116,201
<p>I am curious if the "<a href="http://neumann.math.tufts.edu/~mduchin/UCD/111/readings/architecture.pdf" rel="nofollow noreferrer">Bourbaki's approach</a>" to mathematics is still a viable point of view in modern mathematics, despite the fact that Bourbaki is vilified by many.</p> <p>Even more specifically, does anyone actively approach mathematics from the more "yielding" <a href="http://www.math.jussieu.fr/~leila/grothendieckcircle/chap1.pdf" rel="nofollow noreferrer">point of view</a> famously practiced by Grothendieck? Which, or what type of, research areas are welcoming to (or practicing) Grothendieck's approach to mathematics?</p> <p><strong>Motivation:</strong></p> <p>To me, there is a deep question regarding motivation of mathematicians over time which is addressed by this viewpoint. An emphasis on resolving hard technical problems is quite depressing, generally, whereas the idea of finding a general framework which presents a natural and explanatory solution through the development of a vast theory seems very motivating. In such a view, the open problem only serves to motivate a better development of the general theory surrounding the core difficulty, bringing into focus a clearer picture of the essential issue at hand.</p> <p>It seems to me that carefully developing a general (sometimes axiomatic) theory is analogous to performing scientific experiment. One is not looking to be clever, but instead is filling in data which may, when examined later, reveal clear and natural answers to mathematical questions. Obviously such an approach can be exhausting, in that one must spend much more time to fill in an entire picture than to, at some point, jump to a resolution of a particular question. On the other hand, It may be possible to persevere longer at such a task, as one is not so sensitive to one's loss of quickness or cleverness and can simply engage the task at hand.</p> <p>Is this viewpoint valid?</p> <hr> <p>[<strong>Edited (Dec. 17, 2012) by A. 
Caicedo</strong>, following suggestions <a href="http://mathoverflow.tqft.net/discussion/1490/bourbaki-thread/" rel="nofollow noreferrer">here</a>. Question originally asked by user <a href="https://mathoverflow.net/users/29891/curious1">curious1</a>.]</p>
Nut immerser
29,893
<p>Jacob Lurie seems the most obvious answer. His publication history (deep books published in his own time rather than a bunch of small articles) is indeed of the sort that you allude to at the end, but fortunately he had no trouble being offered a suitable position (while still quite young).</p>
1,260,945
<p>$\textbf{My understanding of divergence:}$ Consider any vector field $\textbf{u}$, then $\operatorname{div}(u) = \nabla \cdot u$. More conceptually, if I place an arbitrarily small sphere around any point of the vector field $\textbf{u}$, divergence measures the amount of "particles" exiting the sphere, i.e. positive divergence represents a vector field which is "moving faster" as we move to the right. However, how do I interpret $$ \int_U \operatorname{div}(u) \, dx$$ where $U$ is any bounded open subset of $\mathbb{R}^n$?</p>
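For reference, the standard interpretation comes from the divergence theorem: for a sufficiently nice $U$ (say, bounded with Lipschitz boundary), $\int_U \operatorname{div}(u)\,dx = \int_{\partial U} u\cdot n\,dS$, the net flux of $u$ out of $U$. A small numerical sketch illustrating the equality (the field $u=(x^2,y^2)$ on the unit square is an example chosen here, not taken from the question):

```python
n = 400                      # grid resolution for the midpoint rule
h = 1.0 / n

def u(x, y):                 # example field u = (x^2, y^2)
    return (x * x, y * y)

def div_u(x, y):             # div u = 2x + 2y
    return 2 * x + 2 * y

# volume integral of div u over the unit square [0,1]^2
volume_integral = sum(div_u((i + 0.5) * h, (j + 0.5) * h) * h * h
                      for i in range(n) for j in range(n))

# outward flux of u through the four sides of the square
flux = 0.0
for j in range(n):
    t = (j + 0.5) * h
    flux += u(1.0, t)[0] * h     # right side,  outward normal (+1, 0)
    flux -= u(0.0, t)[0] * h     # left side,   outward normal (-1, 0)
    flux += u(t, 1.0)[1] * h     # top side,    outward normal (0, +1)
    flux -= u(t, 0.0)[1] * h     # bottom side, outward normal (0, -1)

assert abs(volume_integral - flux) < 1e-6    # both equal 2 for this field
```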
Joffan
206,402
<p>The number of <a href="https://en.wikipedia.org/wiki/Quadratic_residue" rel="nofollow noreferrer">quadratic residues</a> <span class="math-container">$\bmod p$</span> (prime), <span class="math-container">$|\mathcal{QR}| = \dfrac{p+1}{2}\qquad$</span> [1].</p> <p>If we form pairs of residues <span class="math-container">$(a,b)$</span> such that <span class="math-container">$a+b = p-1$</span>, we have <span class="math-container">$\frac{p-1}{2}$</span> distinct pairs plus <span class="math-container">$\frac{p-1}{2}$</span> paired with itself.</p> <p>If <span class="math-container">$\frac{p-1}{2}$</span> is a quadratic residue, we can form the required expression using this. Otherwise, by the pigeonhole principle, one of the other residue pairs must both consist of quadratic residues - so there is <span class="math-container">$(x,y)$</span> with <span class="math-container">$x^2\equiv a$</span> and <span class="math-container">$y^2\equiv b\bmod p$</span>. This gives the required assurance that we can form <span class="math-container">$x^2+y^2 \equiv -1 \bmod p$</span>.</p> <hr /> <p>[1] The number of quadratic residues can be seen as a consequence of the existence of primitive roots. If <span class="math-container">$g$</span> is a primitive root <span class="math-container">$\bmod p$</span>, this means that the smallest <span class="math-container">$k$</span> for which <span class="math-container">$g^k\equiv 1\bmod p$</span> is <span class="math-container">$k=p-1$</span>. Now all the <em>even</em> powers of <span class="math-container">$g$</span>, <span class="math-container">$(g^{2j})$</span> are quadratic residues since <span class="math-container">$g^{2j} \equiv (g^j)^2 \bmod p$</span>, giving <span class="math-container">$(p-1)/2$</span> quadratic residues. 
Then there's one more: <span class="math-container">$0$</span> is a quadratic residue since <span class="math-container">$p^2\equiv 0 \bmod p$</span>, so <span class="math-container">$(p+1)/2$</span> quadratic residues all together.</p>
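The pigeonhole argument can be checked by brute force (a sketch added here; it searches for $x,y$ with $x^2+y^2\equiv -1 \pmod p$ over a range of odd primes):

```python
def qr_pair_exists(p):
    """Is there a pair x, y with x^2 + y^2 ≡ -1 (mod p)?  Brute force."""
    squares = {x * x % p for x in range(p)}
    return any((-1 - a) % p in squares for a in squares)

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
assert all(qr_pair_exists(p) for p in odd_primes)
```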
2,369,274
<p>More specifically, I am trying to solve the problem in Ravi Vakil's notes:</p> <blockquote> <p>Make sense of the following sentence: &quot;The map <span class="math-container">$$\mathbb{A}^{n+1}_k \setminus \{0\} \rightarrow \mathbb{P}^n_k$$</span> given by <span class="math-container">$$(x_0,x_1,..x_n) \rightarrow [x_0,x_1,..,x_n]$$</span> is a morphism of schemes.&quot; Caution: you can’t just say where points go; you have to say where functions go. So you may have to divide these up into affines, and describe the maps, and check that they glue.</p> </blockquote> <p>I understand why this works on a very technical level, essentially it seems to be because this diagram commutes (which is mostly because a map of affine schemes is given by its map of global sections):</p> <p><a href="https://i.stack.imgur.com/DKoEU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DKoEU.jpg" alt="enter image description here" /></a></p> <p>However, in <a href="https://math.stackexchange.com/questions/114165/on-a-certain-morphism-of-schemes-from-affine-space-to-projective-space">this</a> answer to the same problem, Michael Joyce argues that there is no need to check gluing because the map is globally defined. So my question is:</p> <p>Is this map globally defined on points or on functions? If it is globally defined on points, then what makes this example different from the different maps of schemes <span class="math-container">$Spec(\mathbb{C}) \to Spec(\mathbb{C})$</span> given by the identity and conjugation maps <span class="math-container">$\mathbb{C} \to \mathbb{C}$</span>, which are different maps but agree on points? If the map is defined on functions, then why does it seem like the arrow is going in the wrong direction?</p>
mlc
360,141
<p>Both components are concave parabolas, with vertices at $x=2$ and $x=0$ respectively. So the first component is increasing for $x&lt;1$ and the second component is decreasing for $x \ge 1$. The function is continuous at $x=1$. Therefore (B) is the correct answer.</p>
2,006,565
<p>I was solving a question to find $\lambda$ and this is the current situation:</p> <p>$$\begin{bmatrix} 5 &amp; 10\\ 15 &amp; 10 \end{bmatrix} =\lambda\begin{bmatrix}2\\3\end{bmatrix}$$</p> <p>My algebra suggested that I should divide the right-hand matrix by the left-hand one to find $\lambda$. Then after a few searches I found out that $A/B = AB^{-1}$, so now I think I need to find the inverse of that $2\times1$ matrix to multiply it with the other matrix to find $\lambda$, but the inverse is only defined for a square matrix. I also have a feeling that this is not the right approach, so I am here for help. Basically I have no idea how to find $\lambda$.</p>
Jan Eerland
226,665
<p>HINT:</p> <p>$$\mathcal{I}\left(x\right)=\int\left(2^2+\arccos\left(\sqrt{x}\right)\right)\space\text{d}x=2^2\int1\space\text{d}x+\int\arccos\left(\sqrt{x}\right)\space\text{d}x$$</p> <p>Now substitute $u=\sqrt{x}$ and $\text{d}u=\frac{1}{2\sqrt{x}}\space\text{d}x$, after that use integration by parts:</p> <p>$$\int\arccos\left(\sqrt{x}\right)\space\text{d}x=2\int u\arccos(u)\space\text{d}u=u^2\arccos(u)+\int\frac{u^2}{\sqrt{1-u^2}}\space\text{d}u$$</p>
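A numerical sanity check of the substitution and the integration by parts (an addition; the integration is stopped at $x=0.9$ to stay away from the endpoint $u=1$, where the last integrand blows up):

```python
import math

def midpoint(f, lo, hi, n=20000):
    """Composite midpoint rule; never evaluates the endpoints."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

b = math.sqrt(0.9)            # upper limit after the substitution u = sqrt(x)

direct    = midpoint(lambda x: math.acos(math.sqrt(x)), 0, 0.9)
after_sub = 2 * midpoint(lambda u: u * math.acos(u), 0, b)
by_parts  = b * b * math.acos(b) + midpoint(lambda u: u * u / math.sqrt(1 - u * u), 0, b)

assert abs(direct - after_sub) < 1e-6
assert abs(direct - by_parts) < 1e-6
```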
828,148
<p>In Euclidean geometry, we know that SSS, SAS, AAS (or equivalently ASA) and RHS are the only 4 tools for proving the congruence of two triangles. I am wondering if the following can be added to that list:-</p> <p>If ⊿ABC~⊿PQR and the area of ⊿ABC = that of ⊿PQR, then the two triangles are congruent.</p>
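For reference, the proposed criterion does hold; a short sketch (added here), using the standard fact that areas of similar triangles scale with the square of the ratio of similitude:

```latex
Let $k > 0$ be the ratio of similitude, so $PQ = k\,AB$, $QR = k\,BC$, $RP = k\,CA$.
Areas of similar triangles scale by the square of this ratio:
\[
  [PQR] = k^{2}\,[ABC].
\]
If $[PQR] = [ABC] \neq 0$, then $k^{2} = 1$, so $k = 1$; corresponding sides are
therefore equal, and $\triangle ABC \cong \triangle PQR$ by SSS.
```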
user145570
145,570
<p>Yeah you're right, 4 is correct. </p>
1,879,440
<p>I am reading the definition of associative <span class="math-container">$R$</span>-algebra, and am confused about the following definition from Wiki:</p> <p><a href="https://en.wikipedia.org/wiki/Associative_algebra" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Associative_algebra</a></p> <p><a href="https://i.stack.imgur.com/M6HMf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M6HMf.png" alt="enter image description here" /></a></p> <p>The first equality comes from the associativity of being an algebra.</p> <p>However, how do I obtain the second equality?</p> <p>I believe <span class="math-container">$\mathcal{A}$</span> does not have to be a commutative ring though <span class="math-container">$\mathcal R$</span> is a commutative ring.</p>
Alon Amit
308
<p>You don't <em>obtain</em> these equalities. They are part of the definition. An $R$-algebra is defined as in the text you quoted, and it is <em>required</em> that those equalities hold true. If they do, you have an $R$-algebra, and if they don't, you don't. </p> <p>There's no way to derive those equalities from any of the other assumptions made here. </p>
4,333,062
<p>I stumbled upon the following identity <span class="math-container">$$\sum_{k=0}^n(-1)^kC_k\binom{k+2}{n-k}=0\qquad n\ge2$$</span> where <span class="math-container">$C_n$</span> is the <span class="math-container">$n$</span>th Catalan number. Any suggestions on how to prove it are welcome!</p> <p>This came up as a special case of a generating function for labeled binary trees. Actually I can directly prove the identity is zero by showing that certain trees don't exist, but I expect that seeing a direct proof will help me find nice closed formulae for other coefficients of the generating function.</p>
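For the record, a quick numerical check of the identity (added here; it relies on Python's math.comb returning $0$ when the lower index exceeds the upper, so the out-of-range terms vanish automatically):

```python
from math import comb

def catalan(k):
    """The k-th Catalan number C_k = binom(2k, k) / (k + 1)."""
    return comb(2 * k, k) // (k + 1)

def S(n):
    """The alternating sum in the identity."""
    return sum((-1)**k * catalan(k) * comb(k + 2, n - k) for k in range(n + 1))

assert all(S(n) == 0 for n in range(2, 60))
assert (S(0), S(1)) == (1, 1)      # the identity genuinely starts at n = 2
```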
ryang
21,813
<blockquote> <p><span class="math-container">$$\sqrt{2k+1}-\sqrt{2k-5}=4$$</span> Somewhere along the way in solving for <span class="math-container">$k$</span> we come to the equation <span class="math-container">$$\sqrt{2k-5} =-\frac{3}{2}.$$</span></p> </blockquote> <blockquote> <p>I would love to be able to say that if we ever come to an equation like <span class="math-container">$$\sqrt{a}=-b$$</span> for <span class="math-container">$a$</span>, <span class="math-container">$b\in \mathbf{R}^+$</span> we can stop, and conclude that our original equation has no solution.</p> </blockquote> <p>To be clear, I think you are asking whether <span class="math-container">$$\sqrt{f(x)}=-b\quad\text{and}\quad b&gt;0\tag1$$</span> can have a solution.</p> <p>The answer is No, simply by definition: since <span class="math-container">$\sqrt{f(x)}$</span> is <a href="https://math.stackexchange.com/a/4633335/21813">defined</a> to never be negative, no complex (including real) value of <span class="math-container">$x$</span> can ever satisfy <span class="math-container">$(1),$</span> that is, make it true. As such, that equation in <span class="math-container">$(1)$</span> is called an <a href="https://math.stackexchange.com/a/4254850/21813"><strong>inconsistent equation</strong></a>.</p>
1,466,281
<p>Let $X$ and $Y$ be two absolutely continuous random variables on some probability space having joint distribution function $F$ and joint density $f$. Find the joint distribution and the joint density of the random variables $W=X^2$ and $Z=Y^2$.</p> <p>I tried the following. I know that $$F_{W,Z}(w,z)=P[W \leq w,Z \leq z]=P[X^2 \leq w,Y^2 \leq z]$$ Equivalently $$P[-\sqrt{w} \leq X \leq \sqrt{w},-\sqrt{z} \leq Y \leq \sqrt{z}]=\int_{-\sqrt{z}}^{\sqrt{z}}\int_{-\sqrt{w}}^{\sqrt{w}} f(x,y) \, dx \, dy$$ Then I tried the change of variable $x=\sqrt{w}$, but I can't see how to recover the joint density and distribution of $W$ and $Z$.</p>
Graham Kemp
135,106
<p>$$\begin{align}\int_a^b\int_c^d f_{X,Y}(x,y)\operatorname d y \operatorname d x &amp; =\int_a^b\int_c^d f_{X}(x) f_{Y\mid X=x}(y)\operatorname d y\operatorname d x \\[1ex] &amp; = \int_a^b f_X(x)(F_{Y\mid X=x}(d)-F_{Y\mid X=x}(c))\operatorname d x \\[1ex] &amp; = F_{X,Y}(b, d) - F_{X,Y}(a, d)- F_{X,Y}(b,c)+F_{X,Y}(a, c)\end{align}$$</p>
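The identity can be illustrated numerically with a concrete density (an example chosen here, not part of the original answer: two independent Exp(1) variables, for which $F_{X,Y}(x,y)=(1-e^{-x})(1-e^{-y})$ when $x,y>0$):

```python
import math

def f(x, y):        # joint density of two independent Exp(1) variables
    return math.exp(-x - y)

def F(x, y):        # their joint CDF (valid for x, y > 0)
    return (1 - math.exp(-x)) * (1 - math.exp(-y))

a, b, c, d = 0.3, 1.7, 0.5, 2.2
n = 400
hx, hy = (b - a) / n, (d - c) / n

# left-hand side: midpoint-rule double integral of the density over [a,b] x [c,d]
integral = sum(f(a + (i + 0.5) * hx, c + (j + 0.5) * hy) * hx * hy
               for i in range(n) for j in range(n))

# right-hand side: the four-term CDF expression
rhs = F(b, d) - F(a, d) - F(b, c) + F(a, c)

assert abs(integral - rhs) < 1e-5
```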
748,442
<p>I have $\lceil x \rceil = -\lfloor -x \rfloor$, but I can't figure out how to rely on this in order to get $\lfloor x \rfloor$ from $\lceil x−1 \rceil$.</p> <p>For the record, I am only interested in non-negative real values.</p> <p>I wish to avoid the use of $modulo$, $abs$, $round$ and $if$.</p> <p>The motivation behind this question is as follows:</p> <p>$$x-\frac{1}{2}-\frac{\arctan(\tan(\pi(x-\frac{1}{2})))}{\pi}=\lceil x-1 \rceil$$</p> <p>How do I "manipulate" the value of $x$ in order to get $\lfloor x \rfloor$ instead of $\lceil x-1 \rceil$?</p> <p><strong>UPDATE:</strong></p> <p>I also have this if it helps:</p> <p>$$x+\frac{1}{2}+\frac{\arctan(\tan(\pi(-x-\frac{1}{2})))}{\pi}=\lceil x \rceil$$</p>
AlexR
86,940
<p>If $x\in\mathbb N$, $\lceil x-1 \rceil = x-1 = \lfloor x \rfloor - 1$, else $\lceil x-1\rceil = \lfloor x \rfloor$. Without additional information, this is impossible.</p>
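The dichotomy in this answer is easy to check numerically; a small Python sketch (the sample points are arbitrary):

```python
import math

for x in [0.2, 1.0, 2.5, 3.0, 7.9]:
    if x == int(x):
        # At integers, ceil(x - 1) is one less than floor(x).
        assert math.ceil(x - 1) == math.floor(x) - 1
    else:
        # Everywhere else the two agree.
        assert math.ceil(x - 1) == math.floor(x)
```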
4,140,881
<p>I want to show the statement in the title.</p> <blockquote> <p>If <span class="math-container">$M$</span> is a finitely generated module over a local ring <span class="math-container">$A$</span>, then there is a free <span class="math-container">$A$</span>-module <span class="math-container">$L$</span> such that <span class="math-container">$L/mL\simeq M/mM$</span>, where <span class="math-container">$m$</span> is the unique maximal ideal of <span class="math-container">$A$</span>.</p> </blockquote> <p>My attempt: Let <span class="math-container">$M$</span> be a finitely generated <span class="math-container">$A$</span>-module, where <span class="math-container">$A$</span> is a local ring with maximal ideal <span class="math-container">$m$</span>. Then <span class="math-container">$M/mM$</span> is an <span class="math-container">$A/m$</span>-vector space, so there is a basis <span class="math-container">$x_1+mM,...,x_n+mM$</span> of <span class="math-container">$M/mM$</span>. Now let <span class="math-container">$L$</span> be a free <span class="math-container">$A$</span>-module generated by <span class="math-container">$x_1,...,x_n$</span>. Define <span class="math-container">$\phi:L\to M/mM$</span> by <span class="math-container">$x_i\mapsto x_i+mM$</span>. Then I want to show that the kernel <span class="math-container">$\{\sum_{i=1}^na_ix_i| \sum_{i=1}^na_ix_i \in mM\}=mL$</span>. <span class="math-container">$\supset$</span> is clear, but how can I prove the reverse inclusion?</p>
Empy2
81,790
<p>Have you heard of Inclusion-Exclusion?</p> <p>Start with the full set. <span class="math-container">$68^{10}$</span></p> <p>Subtract those without numbers <span class="math-container">$58^{10}$</span>, without lowercase <span class="math-container">$39^{10}$</span>, and without uppercase <span class="math-container">$39^{10}$</span></p> <p>Add back in (because you subtracted them twice) those without letters, those with neither numbers nor lowercase, and those with neither numbers nor uppercase.</p>
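The count sketched above can be automated. A short Python sketch of the inclusion-exclusion sum, assuming (as the figures $58^{10}$ and $39^{10}$ suggest) three character classes of sizes 10, 29 and 29 making up a 68-symbol alphabet:

```python
from itertools import combinations

classes = [10, 29, 29]   # numbers, "lowercase", "uppercase" (sizes assumed)
total = sum(classes)     # 68 symbols in all
length = 10

# Inclusion-exclusion: for each subset of classes to exclude,
# count the strings avoiding all of them, with alternating signs.
count = 0
for r in range(len(classes) + 1):
    for excluded in combinations(classes, r):
        count += (-1) ** r * (total - sum(excluded)) ** length

# Matches the hand computation from the answer.
assert count == 68**10 - 58**10 - 2 * 39**10 + 2 * 29**10 + 10**10
```

The final subtraction of strings avoiding all three classes contributes $0^{10}=0$, which is why it does not appear in the hand count.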
1,783,837
<p>I'm studying how to use Laplace transform to solve ODEs. <br/><br/> I have thought to use this very simple example: $$y'(t)=t+1 \qquad y(0)=0$$</p> <p>I can use integration to find $y(t)$: $$y(t)=\int (t+1) \ \ dt=\frac{1}{2} t^2+t+C$$</p> <p>$C \in \mathbb{R} $, $C=0$ for the initial condition, so:</p> <p>$$y(t)=\frac{1}{2} t^2+t$$</p> <p><br /></p> <p>I consider $F(s)$ the Laplace transform of $f(t)=t+1$: $$F(s)=\frac{1}{s}+\frac{1}{s^2}=\frac{1+s}{s^2}$$</p> <p>I consider, now, the coefficient of the linear ODE: $$H(s)=\frac{1}{s}$$</p> <p>So: $$Y(s)=H(s) \ F(s)=\frac{1+s}{s^3}$$</p> <p>Partial Fraction Decomposition of $Y(s)$:</p> <p>$$Y(s)=\frac{1}{s^2}+\frac{1}{s^3}$$</p> <p>Antitransform: $$y(t)=t^2+t \ne \frac{1}{2} t^2+t $$</p> <p>Where is the mistake?</p> <p>Thanks!</p>
Jan Eerland
226,665
<p>$$y'(t)=t+1\Longleftrightarrow$$ $$\mathcal{L}_t\left[y'(t)\right]_{(s)}=\mathcal{L}_t\left[t+1\right]_{(s)}\Longleftrightarrow$$ $$sy(s)-y(0)=\frac{1}{s^2}+\frac{1}{s}\Longleftrightarrow$$</p> <hr> <p>Use $y(0)=0$:</p> <hr> <p>$$sy(s)-0=\frac{1}{s^2}+\frac{1}{s}\Longleftrightarrow$$ $$sy(s)=\frac{1}{s^2}+\frac{1}{s}\Longleftrightarrow$$ $$y(s)=\frac{\frac{1}{s^2}+\frac{1}{s}}{s}\Longleftrightarrow$$ $$y(s)=\frac{1+s}{s^3}\Longleftrightarrow$$ $$y(s)=\frac{1}{s^3}+\frac{1}{s^2}\Longleftrightarrow$$</p> <hr> <p>Use $$\mathcal{L}_s^{-1}\left[\frac{1}{s^n}\right]_{(t)}=\frac{t^{n-1}}{\Gamma(n)}$$</p> <hr> <p>$$y(s)=\frac{1}{s^3}+\frac{1}{s^2}\Longleftrightarrow$$ $$\mathcal{L}_s^{-1}\left[y(s)\right]_{(t)}=\mathcal{L}_s^{-1}\left[\frac{1}{s^3}+\frac{1}{s^2}\right]_{(t)}\Longleftrightarrow$$ $$y(t)=\frac{t^2}{2}+t$$</p>
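A quick numerical check of the final answer (and a way to spot the error in the question: $\mathcal{L}^{-1}[1/s^3]=t^2/2$, not $t^2$): the recovered $y(t)$ should satisfy the original ODE and initial condition. A Python sketch using a central difference:

```python
# y(t) = t^2/2 + t, the solution obtained above.
def y(t):
    return t**2 / 2 + t

assert y(0.0) == 0.0          # initial condition

h = 1e-6
for t in [0.0, 0.5, 1.0, 2.0]:
    deriv = (y(t + h) - y(t - h)) / (2 * h)   # numerical y'(t)
    assert abs(deriv - (t + 1)) < 1e-6        # y' = t + 1
```

The incorrect candidate $t^2+t$ from the question fails the same check (its derivative is $2t+1$).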
2,403,851
<p>Here is Proposition 3 (page 12) from Section 2.1 in <em>A Modern Approach to Probability</em> by Bert Fristedt and Lawrence Gray. </p> <blockquote> <p>Let $X$ be a function from a measurable space $(\Omega,\mathcal{F})$ to another measurable space $(\Psi,\mathcal{G})$. Suppose $\mathcal{E}$ is a family of subsets of $\Psi$ that generates $\mathcal{G}$ and that $X^{-1}(B) \in \mathcal{F}$ for every $B \in \mathcal{E}$. Then $X$ is a measurable function.</p> </blockquote> <p>I'm trying to prove this proposition, but I don't know how.</p> <p>I understand that this result (once proved) says that I can show a random variable is measurable by checking only a generator of the $\sigma$-algebra rather than all of its elements.</p> <p>Can someone help me prove this, please? I'm really stuck on this proposition from the book. Thanks for your help and time.</p>
Albert
251,592
<p>I suggest you use $\ln$ in the paper and $\log$ when you have order results (since the base of $\log$ does not matter, but it is conventional to use $\log$ when stating order results). </p>
1,031,464
<p>I am supposed to simplify this:</p> <p>$$(x^2-1)^2 (x^3+1) (3x^2) + (x^3+1)^2 (x^2-1) (2x)$$</p> <p>The answer is supposed to be this, but I can not seem to get to it:</p> <p>$$x(x^2-1)(x^3+1)(5x^3-3x+2)$$</p> <p>Thanks</p>
Community
-1
<p>$$(x^2-1)^2 (x^3+1) (3x^2) + (x^3+1)^2 (x^2-1) (2x)\\=x(x^2-1)(x^3+1)\left[3x(x^2-1)+2(x^3+1)\right] \color{blue}{\text{(Factor out common factors)}}\\=x(x^2-1)(x^3+1)\left[3x^3-3x+2x^3+2\right] \color{blue}{\text{ (Distribute inside brackets)}}\\=x(x^2-1)(x^3+1)(5x^3-3x+2) \color{blue}{\text{ (Simplify)}}$$</p>
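Since both sides are polynomials of degree 9, agreement at any 10 distinct points already forces them to be identical. A quick Python spot-check of the factorization:

```python
def original(x):
    return ((x**2 - 1)**2 * (x**3 + 1) * 3 * x**2
            + (x**3 + 1)**2 * (x**2 - 1) * 2 * x)

def factored(x):
    return x * (x**2 - 1) * (x**3 + 1) * (5 * x**3 - 3 * x + 2)

# Two degree-9 polynomials agreeing at 10 points are identical.
for x in range(-5, 5):
    assert original(x) == factored(x)
```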
734,203
<blockquote> <p>Find all vectors of $V_{3}$ which are perpendicular to the vector $(7,0,-7)$ and belong to the subspace $L((0,-1,4), (6,-3,0))$.</p> </blockquote> <p>As a note, this is an extra part of a long exercise; the vectors above come from results I found in the earlier parts.</p> <p>Let $\overline{x} = (x_{1}, x_{2}, x_{3})$. Then, since it is perpendicular to $(7,0, -7)$,</p> <p>from there I get </p> <p>$&lt;\overline{x}, 2\overline{u_{2}} + \overline{u_{3}} &gt; = 0$</p> <p>$7x_{1} - 7x_{3} = 0$</p> <p>$x_{1} = x_{3}$</p> <p>So $\overline{x} = ( x_{1}, x_{2}, x_{1} )$, $x_{i} \in \mathbb{R}$, $i=1,2,3$</p> <p>How do I proceed from here?</p>
Lutz Lehmann
115,115
<p>You can determine the resultant $R(x)=Res_y(y^n-P(x),Q(x,y))$ of both polynomials eliminating $y$. If it is the zero polynomial, then the polynomials have a common factor, which means that for any $x$ you find at least one $y$ so that $(x,y)$ a solution (of the common factor and thus of the system).</p> <p>If the resultant is not zero, then $R(x)$ is a polynomial. Only at the roots of this polynomial you will then find at least one $y$ so that $(x,y)$ is a solution of the system.</p>
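A tiny worked instance of this elimination, assuming the shape from the question with $n=2$: take $y^2 - x$ and $Q = y - (x-1)$. Since $Q$ is linear in $y$, the resultant is just $Q$'s root in $y$ substituted into the first polynomial, $R(x) = (x-1)^2 - x = x^2 - 3x + 1$. A Python check that a root of $R$ really yields a common solution:

```python
# R(x) = Res_y(y^2 - x, y - (x - 1)) for this example.
def R(x):
    return (x - 1) ** 2 - x        # = x^2 - 3x + 1

x0 = (3 + 5 ** 0.5) / 2            # larger root of x^2 - 3x + 1
y0 = x0 - 1                        # the y-value forced by Q(x0, y) = 0

assert abs(R(x0)) < 1e-9           # x0 kills the resultant
assert abs(y0 ** 2 - x0) < 1e-9    # (x0, y0) also solves y^2 = x
```

Away from the roots of $R$, no $y$ can satisfy both equations simultaneously, as the answer states.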
4,232,341
<p>To prove <span class="math-container">$\lim\limits_{n\to\infty} \left(1+\frac{x}{n}\right)^{n}$</span> exists, we prove that the sequence <span class="math-container">$$f_n=\left(1+\frac{x}{n}\right)^n$$</span> is bounded and monotonically increasing toward that bound.</p> <p><strong>Proof Attempt:</strong></p> <hr /> <p>We begin by showing <span class="math-container">$f_n=\left(1+\frac{x}{n}\right)^n$</span> is monotonically increasing by looking at the ratio of consecutive terms: <span class="math-container">\begin{align*} \frac{f_{n+1}}{f_n} &amp;=\frac{\left(1+\frac{x}{n+1}\right)^{n+1}}{\left(1+\frac{x}{n}\right)^{n}} \tag{Definition of $f_n$} \\ &amp;=\frac{\left(1+\frac{x}{n+1}\right)^{n+1}\left(1+\frac{x}{n}\right)}{\left(1+\frac{x}{n}\right)^{n}\left(1+\frac{x}{n}\right)} \tag{Multiplication by $\frac{\left(1+\frac{x}{n}\right)}{\left(1+\frac{x}{n}\right)}$} \\ &amp;=\frac{\left(1+\frac{x}{n+1}\right)^{n+1}}{\left(1+\frac{x}{n}\right)^{n+1}}\left(1+\frac{x}{n}\right) \tag{Simplify $a^n\cdot a = a^{n+1}$} \\ &amp;=\left(\frac{1+\frac{x}{n+1}}{1+\frac{x}{n}}\right)^{n+1}\left(1+\frac{x}{n}\right) \tag{Simplify $\frac{a^{n+1}}{b^{n+1}}=\left(\frac{a}{b}\right)^{n+1}$} \\ &amp;=\left(\frac{\frac{n+1+x}{n+1}}{\frac{n+x}{n}}\right)^{n+1}\left(1+\frac{x}{n}\right) \tag{Common denominators} \\ &amp;=\left(\frac{n+1+x}{n+1}\cdot \frac{n}{n+x}\right)^{n+1}\left(1+\frac{x}{n}\right) \tag{Simplify $\frac{\frac{a}{b}}{\frac{c}{d}}=\frac{a}{b}\cdot \frac{d}{c}$} \\ &amp;=\left(\frac{n^2+n+nx}{(n+1)(n+x)}\right)^{n+1}\left(1+\frac{x}{n}\right) \tag{Distribute $(n+1+x)n$} \\ &amp;=\left(\frac{n^2+n+nx+x-x}{(n+1)(n+x)}\right)^{n+1}\left(1+\frac{x}{n}\right) \tag{Add and subtract $x$} \\ &amp;=\left(\frac{(n+1)(n+x)-x}{(n+1)(n+x)}\right)^{n+1}\left(1+\frac{x}{n}\right) \tag{Factor $n^2+n+nx+x$} \\ &amp;=\left(1+\frac{-x}{(n+1)(n+x)}\right)^{n+1}\left(1+\frac{x}{n}\right) \tag{Simplify $\frac{a+b}{c}=\frac{a}{c}+\frac{b}{c}$} \\ 
&amp;\ge\left(1+\frac{-x}{(n+x)}\right)\left(1+\frac{x}{n}\right) \tag{Bernoulli: $(1+x)^n \ge 1+nx$} \\ &amp;=\left(\frac{n}{n+x}\right)\left(\frac{n+x}{n}\right) \tag{Common denominators} \\ &amp;=1 \tag{Simplify $\frac{a}{b} \cdot \frac{b}{a}=1$} \end{align*}</span> Since <span class="math-container">$\frac{f_{n+1}}{f_n}&gt;1$</span>, then <span class="math-container">$f_{n+1}&gt;f_n$</span>, which shows the sequence <span class="math-container">$f_n$</span> is monotonically increasing for all <span class="math-container">$n \in \mathbb{N}$</span>.</p> <p>Next, we show <span class="math-container">$f_n=\left(1+\frac{x}{n}\right)^n$</span> is bounded above. Note that <span class="math-container">\begin{align*} f_n &amp;=\left(1+\frac{x}{n}\right)^n \tag{Definition of $f_n$} \\ &amp;=\sum_{k=0}^n \binom{n}{k} (1)^{n-k} \left(\frac{x}{n}\right)^{k} \tag{Binomial Theorem} \\ &amp;=1+\frac{n}{1!}\left(\frac{x}{n}\right)+\frac{n(n-1)}{2!}\left(\frac{x}{n}\right)^2+\frac{n(n-1)(n-2)}{3!}\left(\frac{x}{n}\right)^3+\cdots+\frac{n!}{n!}\left(\frac{x}{n}\right)^n \\ &amp;=1+\frac{\frac{n}{n}}{1!}x+\frac{\frac{n(n-1)}{n^2}}{2!}x^2+\frac{\frac{n(n-1)(n-2)}{n^3}}{3!}x^3+\cdots+\frac{\frac{n!}{n^n}}{n!}x^n \tag{Simplify}\\ &amp;=1+\frac{1}{1!}x+\frac{\left(1-\frac{1}{n}\right)}{2!}x^2+\frac{\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)}{3!}x^3+\cdots+\frac{\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots \left(1-\frac{n-1}{n}\right)}{n!}x^n \\ &amp; \le 1+\frac{1}{1!}x+\frac{1}{2!}x^2+\frac{1}{3!}x^3+\cdots+\frac{1}{n!}x^n \tag{$1-\frac{k}{n}&lt;1$} \\ &amp; = \sum_{k=0}^n \frac{1}{k!} x^k \tag{Sigma notation}\\ &amp; \to %\underset{n \to \infty}{\to} \sum_{k=0}^\infty \frac{1}{k!}x^k \tag{as $n \to \infty$} \\ &amp; = \sum_{k=0}^K \frac{1}{k!} x^k + \sum_{k=K+1}^\infty \frac{1}{k!}x^k \tag{$\exists K$, $k&gt;K$ implies $k! \ge (2x)^k$}\\ &amp; \le \sum_{k=0}^K \frac{1}{k!} x^k + \sum_{k=K+1}^\infty \frac{1}{(2x)^k} x^k \tag{$k! 
\ge (2x)^k$ implies $\frac{1}{k!} \le \frac{1}{(2x)^k}$}\\ &amp; = \sum_{k=0}^K \frac{1}{k!} x^k + \sum_{k=K+1}^\infty \frac{1}{2^k} \tag{$\frac{1}{(2x)^k}x^k=\frac{1}{2^k x^k}x^k = \frac{1}{2^k}$}\\ &amp;= \sum_{k=0}^K \frac{1}{k!} x^k + \frac{1}{2^K} \tag{Geometric series evaluation} \end{align*}</span> which is finite. Thus, the sequence <span class="math-container">$f_n$</span> is bounded. Since it is both monotonically increasing and bounded, it is convergent by the Monotone convergence theorem.</p> <hr /> <p>Is my proof correct? I am suspicious of the step which says &quot;<span class="math-container">$\rightarrow \sum_{k=0}^n \frac{1}{k!}x^k$</span>&quot;, and would like to avoid taking another limit in the middle of the boundedness proof.</p> <p>I also compared my proof to the following references and saw something worrisome:</p> <ul> <li><p><a href="http://www.sci.brooklyn.cuny.edu/%7Emate/misc/exp_x.pdf" rel="nofollow noreferrer">Reference 1</a> &lt;- Assumes <span class="math-container">$x\ge 0$</span> (Why?)</p> </li> <li><p><a href="https://math.stackexchange.com/a/1590263/100167">Reference 2</a> &lt;- Assumes <span class="math-container">$x \ge -1$</span> (Why?)</p> </li> <li><p><a href="https://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html?m=0#.YSUsR8pUviD" rel="nofollow noreferrer">Reference 3 </a> &lt;- Considers <span class="math-container">$x=0$</span>, <span class="math-container">$x&gt;0$</span>, and <span class="math-container">$x&lt;0$</span> separately (Why?)</p> </li> </ul> <p>All of the above proofs either assumed <span class="math-container">$x&gt;0$</span> or considered cases where <span class="math-container">$x&gt;0$</span> and <span class="math-container">$x&lt;0$</span> separately, but I do not know why. 
In fact, the third reference considers <span class="math-container">$\left(1-\frac{x}{n}\right)^{-n}$</span> for <span class="math-container">$x&gt;0$</span> (I think this is a typo and should read <span class="math-container">$x&lt;0$</span>), but I am not sure why the negative exponent is needed (we are talking about a negative value of <span class="math-container">$x$</span>, not negative <span class="math-container">$n$</span>.)</p> <p>I could only find one proof that did not consider different cases on the sign of <span class="math-container">$x$</span>:</p> <ul> <li><a href="https://proofwiki.org/wiki/Exponential_Function_is_Well-Defined/Real/Proof_2" rel="nofollow noreferrer">Bonus reference 4</a> &lt;- Uses absolute values, but I am not sure why these are necessary either.</li> </ul> <p>I would like to verify my proof and ask 3 questions:</p> <ol> <li><p>Why is it necessary to consider cases <span class="math-container">$x&gt;0$</span> and <span class="math-container">$x&lt;0$</span> separately? Did any step in my proof implicitly assume that <span class="math-container">$x&gt;0$</span>? If so, which one?</p> </li> <li><p>Is there any way to avoid taking a limit in the middle of the boundedness proof?</p> </li> <li><p>Substituting <span class="math-container">$n=1$</span> in my boundedness proof shows <span class="math-container">$1+x \le \sum_{k=0}^n \frac{1}{k!}x^k$</span>. Does this imply <span class="math-container">$1+x \le \lim\limits_{n\to \infty} \left(1+\frac{x}{n}\right)^n$</span>, since <span class="math-container">$f_n$</span> is an increasing function of <span class="math-container">$n$</span>? Can this be seen explicitly, or would that require a separate proof?</p> </li> </ol> <p>Thank you.</p>
Aidan Lytle
902,851
<p>I'm not certain that they are necessarily related, other than that there are two dummy variables being integrated up to x and y respectively, and differentiating w/r/t one gives you the other. Maybe I am misunderstanding the question.</p> <p>After rereading, I think maybe the question is just asking you to differentiate w/r/t x and then y, then switch the order of the integrals, and differentiate w/r/t y and x, in that order. This will show that the order of the partials does not matter, if we assume Fubini's theorem.</p>
3,427,794
<p>Say I have the following equation</p> <p><span class="math-container">$$ x=1 $$</span></p> <p>I take the derivative of both sides with respect to <span class="math-container">$x$</span>:</p> <p><span class="math-container">$$ \frac{\partial }{\partial x} x = \frac{\partial }{\partial x}1 $$</span></p> <p>Therefore, <span class="math-container">$1=0$</span>. Clearly, that is not the right approach. </p> <p>So what is the right way to think of <span class="math-container">$x=1$</span>? What kind of object is it?</p>
Mohammad Riazi-Kermani
514,496
<p>You can take derivative of both sides of an identity not an equation.</p> <p>For example <span class="math-container">$$\sin^2 x + \cos^2 x =1$$</span> is an identity, so we can differentiate to get <span class="math-container">$$2\sin x \cos x -2\sin x\cos x =0$$</span> or <span class="math-container">$$\cos 2x = \cos ^2 x - \sin ^2 x $$</span> gives <span class="math-container">$$-2\sin 2x = -2\sin x \cos x-2\sin x \cos x $$</span></p> <p>Which is <span class="math-container">$$\sin 2x = 2\sin x \cos x$$</span></p> <p>But you can not differentiate the equation <span class="math-container">$$\sin x =x$$</span> to get <span class="math-container">$$\cos x =1$$</span></p>
1,900,640
<p>I have a function $$f(x)=(x+1)^2+(x+2)^2 + \dots + (x+n)^2 = \sum_{k=1}^{n}(x+k)^2$$ for some positive integer $n$. I started wondering if there is an equivalent expression for $f(x)$ that can be calculated more directly (efficiently). </p> <p>I began by expanding some terms to look for a pattern. $$ (x^2+2x+1) + (x^2+4x+4) + (x^2+6x+9) + (x^2+8x+16) + \dots $$ By regrouping the $x^2$, $x$, and constant terms, I can see that $$ f(x) = \sum_{k=1}^{n}x^2 + \sum_{k=1}^{n}2kx + \sum_{k=1}^{n} k^2 $$ for which I've found some identities to get $$ f(x) = n x^2 + n(n+1)x + \frac{1}{6}n(n+1)(2n+1) $$ and simplifying some (attempting to make it computationally efficient) $$ f(x) = n \left[ x^2 + (n+1)x + \frac{1}{6}(n+1)(2n+1) \right] $$<br> $$ f(x) = n \left[ x^2 + (n+1) \left( x + \frac{2n+1}{6} \right) \right] $$ </p> <blockquote> <p>Is this a particular type of summation (maybe just exponential?), and if so is there a standard way to write it?<br> Along those lines, is there a more direct derivation than what I've attempted here, perhaps using an identity I don't know?</p> </blockquote>
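The closed form derived above is easy to validate against the direct sum. A Python sketch using exact rationals to avoid rounding in the division by 6:

```python
from fractions import Fraction

def direct(x, n):
    return sum((x + k) ** 2 for k in range(1, n + 1))

def closed(x, n):
    # f(x) = n [ x^2 + (n+1)( x + (2n+1)/6 ) ]
    return n * (x * x + (n + 1) * (x + Fraction(2 * n + 1, 6)))

for n in (1, 2, 5, 10, 50):
    for x in (-3, 0, 2, 7):
        assert closed(x, n) == direct(x, n)
```

For fixed $n$ this evaluates $f$ with a constant number of arithmetic operations instead of $O(n)$ additions.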
Community
-1
<p>Tony already explained the meaning... you take the "frequency" ("probability") of something happening on $[1,n]$, and then you take the limit as $n \to \infty$. This is not a <em>probability</em> in the sense of probability theory. There is no probability measure on a $\sigma$ - algebra on $\Bbb Z$ or $\Bbb N$ under which this number is the "probability" of an event. In fact no such probability space can exist. It is an unfortunate (confusing) abuse of terminology.</p> <p>It is the same sense in which the "probability" of a randomly chosen integer being even (or odd) is 0.5. </p>
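The "frequency on $[1,n]$, then let $n \to \infty$" recipe is easy to demonstrate for the even numbers; a short Python sketch:

```python
# Frequency of even numbers among 1..n; this tends to 1/2 as n grows,
# even though no probability measure on the integers realizes it.
def frequency(n):
    return sum(1 for k in range(1, n + 1) if k % 2 == 0) / n

assert frequency(10) == 0.5
assert abs(frequency(10**5) - 0.5) < 1e-6
```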
2,108,352
<p>It is a question in functional analysis from the book by Erwin Kreyszig.</p>
Ahmed S. Attaalla
229,023
<p>In order for it to be a metric it must follow these properties by definition,</p> <p>$$d(x,y) \geq 0$$</p> <p>$$d(x,y)=0 \iff x=y$$</p> <p>$$d(x,y)=d(y,x)$$</p> <p>$$d(x,z) \leq d(x,y)+d(y,z)$$</p> <p>Does it?</p>
3,746,412
<blockquote> <p>Prove: For all sets <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, ( <span class="math-container">$A \cap B = A \cup B \implies A = B$</span> )</p> </blockquote> <p>In the upcoming proof, we make use of the next lemma.</p> <p>Lemma: For all sets <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, <span class="math-container">$A = B$</span> iff <span class="math-container">$A - B = B - A$</span>.</p> <p>Proof: Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be arbitrary sets and let <span class="math-container">$S = A \cap B$</span> and <span class="math-container">$T = A \cup B$</span>. If <span class="math-container">$S = T$</span>, then <span class="math-container">$S$</span> and <span class="math-container">$T$</span> have got the same elements. Thus, by the simplification law, we can state that</p> <p><span class="math-container">$\forall x \in A \cup B \, ( x \in A \cap B ) \tag 1 \label 1$</span></p> <p>To prove <span class="math-container">$A = B$</span> we might as well harness our lemma and establish that <span class="math-container">$A - B = B - A$</span>, which can be verified through the axiom of extension by showing that <span class="math-container">$\forall y \in A - B \,( y \in B - A )$</span> and <span class="math-container">$\forall y^* \in B - A \, (y^* \in A - B )$</span>. So, let</p> <p><span class="math-container">$y \in A - B \tag 2 \label 2$</span></p> <p>and let <span class="math-container">$y^* \in B - A \tag 3 \label 3$</span></p> <p>From \eqref{2} we rest assured that <span class="math-container">$y\in A$</span> and <span class="math-container">$y\not\in B$</span>.
Furthermore, by \eqref{1} we know that</p> <p><span class="math-container">$y \in A \land y \in B \tag 4 \label 4$</span></p> <p>Nonetheless \eqref{4} is an antilogy since from \eqref{2} we deduced, inter alia, that <span class="math-container">$y\not\in B$</span>. Notice thus far we are trying to prove that \eqref{3} and <span class="math-container">$F$</span> implies <span class="math-container">$y \in B - A $</span> and <span class="math-container">$y^* \in A - B$</span>. Therefore, it’s vacuously true that</p> <p><span class="math-container">$A = B$</span></p> <p>Q.E.D</p> <p>I believe I could've done this proof more easily by using the contrapositive method, but my question is, is this proof right?</p>
Sahiba Arora
266,110
<p>Your proof is correct up to <span class="math-container">$y \notin B.$</span> Note that <span class="math-container">$y \in A \setminus B \implies y \in A \cup B =A\cap B \implies y \in B,$</span> which is a contradiction. This implies <span class="math-container">$A\setminus B=\emptyset$</span> and similarly <span class="math-container">$B\setminus A =\emptyset.$</span> Therefore <span class="math-container">$A \setminus B=B\setminus A.$</span> Moreover, the use of &quot;vacuously true&quot; in the end is also incorrect.</p> <p>Alternatively, the statement can be proved in a much simpler way as follows:</p> <p><span class="math-container">$$A\subseteq A \cup B = A \cap B \subseteq B.$$</span> Similarly <span class="math-container">$B\subseteq A$</span> and hence <span class="math-container">$A=B.$</span></p>
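The statement can also be checked exhaustively on a small universe; a brute-force Python sketch over all pairs of subsets of a 3-element set:

```python
from itertools import combinations

universe = (0, 1, 2)
subsets = [set(c) for r in range(len(universe) + 1)
           for c in combinations(universe, r)]

# For every pair with A ∩ B = A ∪ B, conclude A = B.
for A in subsets:
    for B in subsets:
        if (A & B) == (A | B):
            assert A == B
```

This is no substitute for the two-line inclusion argument above, but it confirms the claim on every one of the $8 \times 8$ pairs.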
334,351
<p>I am a student of mathematics, and have some background in </p> <ul> <li>Algebraic Topology (Hatcher, Bott-Tu, Milnor-Stasheff), </li> <li>Differential Geometry (Lee, Kobayashi-Nomizu), </li> <li>Riemannian Geometry (Do Carmo), </li> <li>Symplectic Geometry (Ana Cannas da Silva) and </li> <li>Differential Topology (Hirsch, Milnor (Morse Theory), Milnor (h-cobordism theorem)). </li> </ul> <p>I would like to learn Floer homology and Khovanov homology. </p> <p>Q1. Is my background sufficient to learn these topics right now? What are the standard first-level and second-level sources for these topics? </p> <p>Q2. At the research level, will I need to know any other areas to work on these topics? In particular, would I need to know algebraic geometry and to what extent? To give you an idea of my present knowledge, I currently know absolutely no algebraic geometry, and even my commutative algebra background has several gaps, especially in parts about DVRs.</p> <p>Q3. Once I finish books recommended in Q1, what are some good papers to start reading on these areas? </p> <p>I would greatly appreciate reasoning behind any comments that aren't strictly factual and are opinions of the writer.</p> <p>Thank you in advance! :)</p>
Greg Friedman
6,646
<p>I'm not an expert in Floer homology or Khovanov homology, but if that's your goal I don't think you need quite as wide a background as suggested in the other current answer (though admittedly that answer was written when the title of the question was much broader). For example, in the talks I've seen about these things, I don't think much in the way of spectra comes up, or even homotopy theory at the level of May. My more modest suggestion is first to acquaint yourself with some more classical knot theory. Rolfsen is a very good start for the really classical stuff, while there are now many places to learn about the more modern skein-type invariants. My personal recommendation would probably be Lickorish, but there are several good books and surveys now about that material. After that, some symplectic topology might be useful, though you say you've already got some of that. Beyond that background, I think that this is an area that does not yet have many secondary textbook sources, but there are very many survey articles and I think the primary material tends to be written fairly well compared to some other areas of mathematics. Just Googling "Introduction to Khovanov homology" and "Introduction to Floer homology" brings up a lot of surveys and lecture notes. I would suggest diving into that stuff and then referring outward once you hit something specific that you don't understand or want more background about. Based on what you wrote in your question, I think your broad topological background is already in pretty good shape to get started.</p>
3,266,367
<p>Find the solution set of <span class="math-container">$\frac{3\sqrt{2-x}}{x-1}&lt;2$</span></p> <p>Start by squaring both sides <span class="math-container">$$\frac{-4x^2-x+14}{(x-1)^2}&lt;0$$</span> Factoring and multiplying both sides by -1 <span class="math-container">$$\frac{(4x-7)(x+2)}{(x-1)^2}&gt;0$$</span> I got <span class="math-container">$$(-\infty,-2)\cup \left(\frac{7}{4},\infty\right)$$</span> Since <span class="math-container">$x\leq2$</span> then <span class="math-container">$$(-\infty,-2)\cup \left(\frac{7}{4},2\right]$$</span></p> <p>But the answer should be <span class="math-container">$(-\infty,1)\cup \left(\frac{7}{4},2\right]$</span>. Did I miss something?</p>
Allawonder
145,126
<p>The radicand cannot be negative, so we must have <span class="math-container">$x\le 2.$</span> Also, the denominator is negative for <span class="math-container">$x\lt 1.$</span> Thus, you must consider this in the two cases when</p> <p>(1) <span class="math-container">$x\lt 1,$</span> or when</p> <p>(2) <span class="math-container">$1\lt x\le 2.$</span></p> <p>What you've done (which depends crucially on your first step of <em>squaring both sides</em>) works only with the assumption in case (2), since then we have that <span class="math-container">$\text {LHS}\ge 0.$</span> Thus you can square both sides. In the first case, you <strong>cannot</strong> since then <span class="math-container">$\text {LHS}\lt 0$</span> whereas <span class="math-container">$\text {RHS}\gt 0.$</span> This is not true in general since, for example, the fact that <span class="math-container">$-3&lt;1$</span> does not imply that <span class="math-container">$9=(-3)^2&lt;1.$</span></p> <p>Thus, in the first case you need to approach with a different method. In particular multiply both sides by the negative quantity <span class="math-container">$x-1$</span> to get <span class="math-container">$$3\sqrt{2-x}&gt;2(x-1),$$</span> which is obviously true for any <span class="math-container">$x&lt;1,$</span> since then <span class="math-container">$\text {LHS}\gt 0$</span> and <span class="math-container">$\text {RHS}\le 0.$</span> Thus the solution in this case is <span class="math-container">$$(-\infty,1),$$</span> as wanted.</p>
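One can spot-check the final solution set $(-\infty,1)\cup\left(\frac74,2\right]$ numerically; a short Python sketch:

```python
import math

def holds(x):
    # Original inequality; only defined for x <= 2, x != 1.
    return 3 * math.sqrt(2 - x) / (x - 1) < 2

# Points in the claimed solution set ...
for x in (-5.0, 0.0, 0.99, 1.8, 2.0):
    assert holds(x)

# ... and points in the domain but outside it (case (2) with x < 7/4).
for x in (1.2, 1.5, 1.7):
    assert not holds(x)
```

In particular the points just below $1$ pass, which is exactly the region case (1) recovers and the squaring step loses.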
106,560
<p>Mochizuki has recently announced a proof of the ABC conjecture. It is far too early to judge its correctness, but it builds on many years of work by him. Can someone briefly explain the philosophy behind his work and comment on why it might be expected to shed light on questions like the ABC conjecture?</p>
Vesselin Dimitrov
26,522
<p><em>Last revision: 10/20.</em> (Probably the last for at least some time to come: until Mochizuki uploads his revisions of IUTT-III and IUTT-IV. My apology for the multiple revisions. )</p> <p><strong>Completely rewritten. (9/26)</strong></p> <p>It seems indeed that nothing like Theorem 1.10 from Mochizuki's IUTT-IV could hold. </p> <p>Here is an infinite set of counterexamples, assuming for convenience two standard conjectures (the first being in fact a consequence of ABC), that contradict Thm. 1.10 <em>very</em> badly. </p> <p><em>Assumptions:</em> </p> <ul> <li><p>A (Consequence of ABC) <em>For all but finitely many elliptic curves over $\mathbb{Q}$, the conductor $N$ and the minimal discriminant $\Delta$ satisfy $\log{|\Delta|} &lt; (\log{N})^2$.</em></p></li> <li><p>B (Uniform Serre Open Image conjecture) <em>For each</em> $d \in \mathbb{N}$, <em>there is a constant</em> $c(d) &lt; \infty$ <em>such that for every number field</em> $F/\mathbb{Q}$ with $[F:\mathbb{Q}] \leq d$, <em>and every non-CM elliptic curve</em> $E$ <em>over</em> $F$, <em>and every prime</em> $\ell \geq c(d)$, <em>the Galois representation of</em> $G_F$ <em>on</em> $E[\ell]$ <em>has full image</em> $\mathrm{GL}_2(\mathbb{Z}/{\ell})$. (In fact, it is sufficient to take the weaker version in which $F$ is held fixed. )</p></li> </ul> <p>Further, as far as I can tell from the proof of Theorem 1.10 of IUTTIV, the only reason for taking $F := F_{\mathrm{tpd}}\big( \sqrt{-1}, E_{F_{\mathrm{tpd}}}[3\cdot 5] \big)$ --- rather than simply $F := F_{\mathrm{tpd}}(\sqrt{-1})$ --- was to ensure that $E$ has semistable reduction over $F$. 
<em>Since I will only work in what follows with semistable elliptic curves over</em> $\mathbb{Q}$, <em>I will assume, for a mild technical convenience in the examples below, that for elliptic curves already semistable over</em> $F_{\mathrm{tpd}}$, <em>we may actually take</em> $F := F_{\mathrm{tpd}}(\sqrt{-1})$ <em>in Theorem 1.10.</em></p> <p><em>The infinite set of counterexamples.</em> They come from Masser's paper [Masser: Note on a conjecture of Szpiro, <em>Asterisque</em> 1990], as follows. Masser has produced an infinite set of Frey-Hellegouarch (i.e., semistable and with rational 2-torsion) elliptic curves over $\mathbb{Q}$ whose conductor $N$ and minimal discriminant $\Delta$ satisfy $$ (1) \hspace{3cm} \frac{1}{6}\log{|\Delta|} \geq \log{N} + \frac{\sqrt{\log{N}}}{\log{\log{N}}}. $$ (Thus, $N$ in these examples may be taken arbitrarily large. ) By (A) above, taking $N$ big enough will ensure that $$ (2) \hspace{3cm} \log{|\Delta|} &lt; (\log{N})^2. $$ Next, the sum of the logarithms of the primes in the interval $\big( (\log{N})^2, 3(\log{N})^2 \big)$ is $2(\log{N})^2 + o((\log{N})^2)$, so it is certainly $&gt; (\log{N})^2$ for $N \gg 0$ big enough. Thus, by (2), it is easy to see that the interval $\big( (\log{N})^2, 3(\log{N})^2 \big)$ contains a prime $\ell$ which divides neither $|\Delta|$ nor any of the exponents $\alpha = \mathrm{ord}_p(\Delta)$ in the prime factorization $|\Delta| = \prod p^{\alpha}$ of $|\Delta|$.</p> <p>Consider now the pair $(E,\ell)$: it has $F_{\mathrm{mod}} = \mathbb{Q}$, and since $E$ has rational $2$-torsion, $F_{\mathrm{tpd}} = \mathbb{Q}$ as well. Let $F := \mathbb{Q} \big( \sqrt{-1}\big)$. I claim that, upon taking $N$ big enough, the pair $(E_F,\ell)$ arises from an <strong>initial $\Theta$-datum</strong> as in IUTT-I, Definition 3.1. Indeed:</p> <ul> <li>Certainly (a), (e), (f) of IUTT-I, Def. 3.1 are satisfied (with appropriate $\underline{\mathbb{V}}, \, \underline{\epsilon}$);</li> <li>(b) of IUTT-I, Def. 
3.1 is satisfied since by construction $E$ is semistable over $\mathbb{Q}$;</li> <li>(c) of IUTT-I, Def. 3.1 is satisfied, in view of (B) above and the choice of $\ell$, as soon as $N \gg 0$ is big enough (recall that $\ell &gt; (\log{N})^2$ by construction!), and by the observation that, for $v$ a place of $F = \mathbb{Q}(\sqrt{-1})$, the order of the $v$-adic $q$-parameter of $E$ equals $\mathrm{ord}_v (\Delta)$, which equals $\mathrm{ord}_p(\Delta)$ for $v \mid p &gt; 2$, and $2\cdot\mathrm{ord}_2(\Delta)$ for $v \mid 2$; </li> </ul> <p>while $\mathbb{V}_{\mathrm{mod}}^{\mathrm{bad}}$ consists of the primes dividing $\Delta$;</p> <ul> <li>Finally, (d) of IUTT-I, Def. 3.1 is satisfied upon excluding at most four of Masser's examples $E$. (See page 37 of IUTT-IV).</li> </ul> <p><strong>Now</strong>, take $\epsilon := \big( \log{N} \big)^{-2}$ in Theorem 1.10 of IUTT-IV; this is certainly permissible for $N \gg 0$ large enough. <em>I claim that the conclusion of Theorem 1.10 contradicts (1) as soon as $N \gg 0$ is large enough.</em></p> <p>For note that Mochizuki's quantity $\log(\mathfrak{q})$ is precisely $\log{|\Delta|}$ (reference: see e.g. Szpiro's article in the Grothendieck Festschrift, vol. 3); his $\log{(\mathfrak{d}^{\mathrm{tpd}})}$ is zero; his $d_{\mathrm{mod}}$ is $1$; and his $\log{(\mathfrak{f}^{\mathrm{tpd}})}$ is our $\log{N}$. 
By construction, our choice $\epsilon := \big( \log{N} \big)^{-2}$ then makes $1/\ell &lt; \epsilon$ and $\ell &lt; 3/\epsilon$, whence the final display of Theorem 1.10 would yield $$ \frac{1}{6} \log{|\Delta|} \leq (1+29\epsilon) \cdot \log{N} + 2\log{(3\epsilon^{-8})} &lt; \log{N} + 16\log{\log{N}} + 32, $$ where we have used $\epsilon \log{N} = (\log{N})^{-1} &lt; 1$ for $N &gt; 3$, and $2\log{3} &lt; 3$.</p> <p><em>The last display contradicts (1) as soon as $N \gg 0$ is big enough.</em></p> <p>Thus Masser's examples yield infinitely many counterexamples to Theorem 1.10 of IUTT-IV (as presently written).</p> <p><strong>Added on 10/15, and revised 10/20.</strong> Mochizuki has commented on the apparent contradiction between Masser's examples and Theorem 1.10: </p> <p><a href="http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-universal%20Teichmuller%20Theory%20IV%20(comments).pdf">http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-universal%20Teichmuller%20Theory%20IV%20(comments).pdf</a></p> <p>He writes that he will revise portions of IUTT-III and IUTT-IV, and will make them available in the near future. (He estimates January 2013 to be a reasonable period). He confirms the following ["essentially"] anticipated revision of Theorem 1.10:</p> <p>Let $E/\mathbb{Q}$ be a semistable elliptic curve with [say, for the sake of simplifying] rational $2$-torsion [i.e., a Frey-Hellegouarch curve] of minimal discriminant $\Delta$ and conductor $N$ (square-free). For $\epsilon &gt; 0$, let $N_{\epsilon} := \prod_{p \mid N, p &lt; \epsilon^{-1}} p$. Then: $$ \frac{1}{6} \log{|\Delta|} &lt; \big( 1 + \epsilon \big) \log{N} + \Big( \omega(N_{\epsilon}) \cdot \log{(1/\epsilon)} - \log{N_{\epsilon}} \Big) + O\big( \log{(1/\epsilon)} \big) $$ $$ &lt; \log{N} + \Big( \epsilon \log{N} + \big( \epsilon \log{(1/\epsilon)} \big)^{-1} \Big) + o\Big( \big( \epsilon \log{(1/\epsilon)} \big)^{-1} \Big), $$ where $\omega(\cdot)$ denotes "number of prime factors." 
The second estimate comes from the prime number theorem in the form $\pi(t) = t/\log{t} + t/(\log{t})^2 + o\big( t/(\log{t})^2 \big)$, applied to $t := \epsilon^{-1}$, and is sharp if you restrict $\epsilon$ to the range $\epsilon^{-1} &lt; (\log{N})^{\xi}$ with $\xi &lt; 1$, as nothing prevents $N$ from being divisible by all primes $p &lt; (\log{N})^{\xi}$. In particular, as the Erdos-Stewart-Tijdeman-Masser construction is based on the pigeonhole principle, which cannot preclude that $N$ be divisible by all the primes $&lt; (\log{N})^{2/3}$, the second estimate could very well be sharp in all the Masser examples. As it is easily seen that the bracketed term exceeds the range $\sqrt{\log{N}}/(\log{\log{N}})$ of Masser's examples, this has the implication that </p> <p><em>the Erdos-Stewart-Tijdeman-Masser method cannot disprove Mochizuki's revised inequality,</em> </p> <p>which therefore seems reasonable.</p> <p>On the other hand, if we take $\epsilon := (\log{N})^{-1}$ <em>and</em> assume $\omega(N_{\epsilon})$ bounded, this would yield $(1/6)\log{|\Delta|} &lt; \log{N} + O(\log{\log{N}})$, just as before. (<em>Thus, Mochizuki predicts that this last bound must hold for $N$ a large enough square-free integer such that the number of primes $&lt; \log{N}$ dividing $N$ is bounded</em>. I can see no evidence either for or against this at the moment: again, the Masser and Erdos-Stewart-Tijdeman constructions are based on the pigeonhole principle, and do not seem to be able to exclude the small primes $&lt; \log{N}$. So here we have an open problem by which one could probe Mochizuki's revised inequality. 
A reminder: in terms of the $abc$-triple, $\Delta$ is <em>essentially</em> $(abc)^2$, and $N = \mathrm{rad}(abc)$).</p> <p>A side remark: note that the inverse $1/\ell$ of the prime level from the de Rham-Etale correspondence $(E^{\dagger}, &lt; \ell) \leftrightarrow E[\ell]$ in Mochizuki's "Hodge-Arakelov theory" ultimately figures as the $\epsilon$ in the ABC conjecture. </p> <p><em>[I have deleted the remainder of the 10/15 Addendum, since it is now obsolete after Mochizuki's revised comments. ]</em></p>
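For concreteness, the quantities $N_{\epsilon}$ and $\omega(N_{\epsilon})$ appearing in the revised inequality are elementary to compute. A small Python sketch (the helper names and the sample $N$ are mine, chosen only for illustration; the sample is not one of Masser's examples):

```python
def prime_factors(n):
    """Distinct prime factors of n, by trial division."""
    ps, p = [], 2
    while p * p <= n:
        if n % p == 0:
            ps.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        ps.append(n)
    return ps

def N_eps(N, eps):
    """Return (N_eps, omega(N_eps)), where N_eps is the product of the
    primes p dividing N with p < 1/eps."""
    small = [p for p in prime_factors(N) if p < 1 / eps]
    prod = 1
    for p in small:
        prod *= p
    return prod, len(small)

# Toy square-free N = 2*3*5*7*11; with eps = 0.2 only the primes < 5 count.
N = 2 * 3 * 5 * 7 * 11
print(N_eps(N, 0.2))   # → (6, 2)
```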
155,237
<p>The axiom of constructibility $V=L$ leads to some very interesting consequences, one of which is that it becomes possible to give explicit constructions of some of the "weird" results of AC. For instance, in $L$, there is a definable well-ordering of the real numbers (since there is a definable well-ordering of the universe).</p> <p>Since AC holds true in $L$, the ultrafilter lemma must be true. Does this mean that a definable non-principal ultrafilter on $\mathbb{N}$ exists in $L$, given by an explicit formula?</p> <p>If so, what is the formula?</p>
Asaf Karagila
7,206
<p>Generally formulas in set theory are not "very nice". There is very little structure to work with.</p> <p>We can define when $U$ is a filter over $X$, simply by saying that every element of $U$ is a subset of $X$, and $U$ is closed under supersets (below $X$), and under finite intersections (recall that in set theory we can say this within a first-order formula, simply by stating that the intersection over a finite subset of $U$ is in $U$).</p> <p>Then we can define $U$ to be an ultrafilter as a filter which is maximal with respect to inclusion. And $U$ is free if it contains no finite sets, or is not generated by a singleton, or its entire intersection is empty. Take your pick.</p> <p>Finally, from the axiom of choice we can prove that the set of free ultrafilters over $\Bbb N$ is non-empty. Therefore using the canonical well-ordering of $L$ we can fully state "the least free ultrafilter over $\Bbb N$". It's just an insanely long formula (especially if you unwind all the recursive definitions everywhere and the abbreviations for subsets, finite, etc.) which is not very pretty.</p>
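The defining clauses above (closure under supersets and finite intersections, maximality, freeness) can be checked mechanically on a finite base set, where, in contrast to $\Bbb N$, every ultrafilter turns out to be principal. A brute-force sketch in Python (the names are mine, purely illustrative):

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2})
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

def is_filter(U):
    # proper (no empty set, contains X), upward closed, intersection closed
    if frozenset() in U or X not in U:
        return False
    for A in U:
        for B in subsets:
            if A <= B and B not in U:
                return False
        for B in U:
            if A & B not in U:
                return False
    return True

def is_ultra(U):
    # maximality: for every subset A, either A or its complement is in U
    return is_filter(U) and all(A in U or (X - A) in U for A in subsets)

# enumerate all 2^8 families of subsets and keep the ultrafilters
ultras = [U for U in (frozenset(F) for F in chain.from_iterable(
          combinations(subsets, r) for r in range(len(subsets) + 1)))
          if is_ultra(U)]

# each ultrafilter on a finite set is principal: its intersection is a singleton
principal = all(len(frozenset.intersection(*U)) == 1 for U in ultras)
print(len(ultras), principal)   # → 3 True
```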
17,270
<p>I just joined MathSE and it's beautiful here, except for the fact that some unregistered users ask a question and never come back. Most of the time these questions are trivial, though they still consume answerers' (valuable) time which never gets rewarded. I thought it was okay until I saw someone's profile with the following statistics: active $1$ year $7$ Months, $0$ Answers , $72$ Questions, $0$ accept votes. Yes, I agree that the answers are up-voted in this case but is it really okay to never accept any answers?</p>
dustin
78,317
<p>Regardless of the gratitude argument, accepting answers makes the site's questions easier to sift through. When a question has no accepted answer, people are more inclined to spend their time perusing the question and its answers. If they have then wasted their time because one or more perfectly acceptable answers already existed, that is a loss to the site: that user could have been reading over another question which needed attention. Accepting answers therefore helps more users get to more questions and, hopefully, add helpful answers.</p> <p>Of course there are questions with answers that some people will be interested in reading with or without accepted answers, but I am not speaking of those questions that speak to someone's interest or speciality, since readers of those aren't wasting their time in that sense.</p>
3,841,266
<p>Let <span class="math-container">$E$</span> be a Hilbert space and <span class="math-container">$f(x)=\| x\|$</span> for all <span class="math-container">$x\in E$</span>. Study the differentiability of <span class="math-container">$f$</span> at <span class="math-container">$0$</span> and find <span class="math-container">$df(x)h$</span> for all <span class="math-container">$h\in E$</span> and <span class="math-container">$x\neq0$</span>.</p> <p><strong>My attempt :</strong> <span class="math-container">$f$</span> is not differentiable at <span class="math-container">$x=0$</span>, in fact consider <span class="math-container">$E=\mathbb R^n$</span> and <span class="math-container">$\|x\|=\sqrt{x_1^2+\cdots+x_n^2}=f(x_1,\cdots,x_n).$</span> <span class="math-container">$\displaystyle\lim_{x_i \to 0^+} \dfrac {f(0,..,0,x_i,0,..,0)-f(0,\cdots,0)}{x_i}=1$</span> and <span class="math-container">$\displaystyle\lim_{x_i \to 0^-} \dfrac {f(0,..,0,x_i,0,..,0)-f(0,\cdots,0)}{x_i}=-1$</span>, so <span class="math-container">$f$</span> is not differentiable at <span class="math-container">$x=0$</span>.</p> <p>Let <span class="math-container">$x\neq 0$</span>, I have proved in a previous question that <span class="math-container">$ \psi : x \mapsto\|x\|^2$</span> is differentiable and that <span class="math-container">$d\psi(x)h=2\langle x,h\rangle$</span>, so <span class="math-container">$f(x)=\sqrt{\psi(x)}=\varphi\circ\psi(x)$</span> with <span class="math-container">$\varphi(t)=\sqrt t$</span> for all <span class="math-container">$t\geqslant 0$</span>, so <span class="math-container">$$df(x)h=d\varphi(\psi(x))\circ d\psi(x)h=\frac{1}{2\|x\|}2\langle x,h\rangle=\langle\frac{x}{\|x\|},h \rangle.$$</span> Is my attempt correct? Thanks in advance !</p>
hardmath
3,111
<p>Given your background, I'll make some suggestions about how one might try to solve:</p> <p><span class="math-container">$$ \ln |y| + y^2 = \sin(x) + 1 $$</span></p> <p>for <span class="math-container">$y$</span> when <span class="math-container">$x$</span> is known.</p> <p>There are many root finding algorithms, and since calculus knowledge is assumed in any differential equations course, Newton's method is not excluded as a possibility. First year calculus courses often make mention of it in connection with applications of the derivative or (later on) with power series, etc.</p> <p>But there is an approach you can test out with a spreadsheet application: reframe the root-finding problem as a fixed-point problem.</p> <p>Here we have a left hand side that depends only on <span class="math-container">$y$</span> and a right hand side that depends only on <span class="math-container">$x$</span>. So once argument <span class="math-container">$x$</span> is chosen, finding <span class="math-container">$y$</span> becomes a matter of solving <span class="math-container">$f(y) = C$</span> where <span class="math-container">$C = \sin(x) + 1$</span> and:</p> <p><span class="math-container">$$ f(y) = \ln |y| + y^2 $$</span></p> <p>A fixed-point iteration comes about if we can rewrite <span class="math-container">$f(y) = C$</span> in the form:</p> <p><span class="math-container">$$ y = g(y) $$</span></p> <p>Then with a suitable &quot;initial guess&quot; <span class="math-container">$y_0$</span>, one hopes that the sequence:</p> <p><span class="math-container">$$ y_{k+1} = g(y_k) $$</span></p> <p>will converge. 
If it does, and function <span class="math-container">$g(y)$</span> is continuous, then the limit of that iteration will satisfy <span class="math-container">$y = g(y)$</span> and hence (if we did the algebra correctly) also satisfy <span class="math-container">$f(y) = C$</span> as originally desired.</p> <p>There are many ways to do this, and indeed Newton's method falls into this pattern of &quot;root finding&quot; too. With a bit of experience one learns that the goal is to choose a rewritten <span class="math-container">$y = g(y)$</span> so that <span class="math-container">$g(y)$</span> varies <em>slowly</em> with <span class="math-container">$y$</span>. Indeed one measure of this is that <span class="math-container">$|g'(y)| \lt 1$</span> in the region where we seek a solution, and sometimes this &quot;contraction mapping property&quot; will come up in a differential equations class in the treatment of solutions to <em>nonlinear</em> ODEs (of which the current initial value problem is an example).</p> <p>In any event I'd propose putting the rapidly changing part of <span class="math-container">$f(y)$</span>, the <span class="math-container">$y^2$</span> term, on one side of the equation and the slowly changing part on the other side, lumped with the &quot;constant&quot; term <span class="math-container">$C = \sin(x) + 1$</span>:</p> <p><span class="math-container">$$ y^2 = C - \ln |y| $$</span></p> <p>Then take the square root of both sides:</p> <p><span class="math-container">$$ y = \sqrt{C - \ln |y|\,} $$</span></p> <p>There are some details to think about here, and these are connected with the side question of whether the absolute values around <span class="math-container">$y$</span> are really necessary. As the Comments below the Question have already started to sketch out, because the initial value <span class="math-container">$y(0) = 1$</span> is positive, it is conceivable that <span class="math-container">$y(x)$</span> will stay positive. 
If it does, the absolute value of <span class="math-container">$y$</span> is just <span class="math-container">$y$</span>, and the &quot;modulus operator&quot; is indeed unnecessary. A more rigorous, but still elementary treatment can be built, but this would take us on a detour from the present narrative.</p> <p>Suffice it to say that choosing the <em>positive square root</em> above for <span class="math-container">$y$</span> is the right choice, and that while we can allow <span class="math-container">$\ln y$</span> to become negative (so the term under the square root remains nonnegative), we should avoid steps in the iteration where <span class="math-container">$\ln y \gt 0$</span> is so large that the term under the square root becomes negative. This affects our choice of starting guesses <span class="math-container">$y_0$</span> in particular.</p> <p>So it is left as an exercise for the Reader to carry out a few iterations of the function <span class="math-container">$g(y) = \sqrt{\sin(x) + 1 - \ln y \;}$</span> using <span class="math-container">$y_0 = 1$</span> and a sampling of arguments for <span class="math-container">$x$</span>. Note that because the right hand side <span class="math-container">$\sin(x) + 1$</span> is a <em>periodic function</em> of <span class="math-container">$x$</span>, the solution <span class="math-container">$y(x)$</span> will also be periodic. So it is only necessary to experiment with values <span class="math-container">$x \in [0,2\pi]$</span>, or more restricted to the half-period <span class="math-container">$x \in [-\pi/2,+\pi/2]$</span>.</p> <p>I've done such an experiment, and it seems that when <span class="math-container">$\sin(x) \ge 0$</span> the iterations converge reliably. 
However for <span class="math-container">$\sin(x)$</span> sufficiently negative the term under the square root becomes negative after a few iterations, which of course makes the iterations undefined.</p> <p>For example, with <span class="math-container">$x = -1$</span> (or any argument differing by an integer multiple of <span class="math-container">$2\pi$</span>), the iteration breaks down after about half a dozen steps. With <span class="math-container">$x = -1.5$</span> the iteration breaks down more quickly, after a couple of steps. This problem is mitigated by using more &quot;accurate&quot; starting guesses than the simple <span class="math-container">$y_0 = 1$</span>, but it seems that this simple method is not reliably converging when <span class="math-container">$C = \sin(x) + 1$</span> is close to zero.</p> <p>After thinking more carefully about this equation, I see that in the region where <span class="math-container">$C = \sin(x) + 1$</span> is <em>small</em> (positive), the term <span class="math-container">$\ln y$</span> is changing more rapidly than the term <span class="math-container">$y^2$</span>. Hence I'm going to try the iteration &quot;the other way around&quot;:</p> <p><span class="math-container">$$ \ln y = C - y^2 $$</span></p> <p><span class="math-container">$$ y = e^{C - y^2} $$</span></p> <p>In this approach an initial guess <span class="math-container">$y_0 = 1$</span> isn't appropriate when <span class="math-container">$C$</span> is small. In the extreme, when <span class="math-container">$C = 0$</span> we need <span class="math-container">$y^2 + \ln y = 0$</span>, which occurs around <span class="math-container">$y \approx 0.65$</span>. 
So I'll report my attempts with that <span class="math-container">$y_0$</span> as a starting value.</p> <hr /> <p>Using the alternative function iteration isn't particularly successful except very near <span class="math-container">$C=0$</span>, although better starting guesses <span class="math-container">$y_0$</span> are an improvement over the earlier <span class="math-container">$y_0 = 1$</span> when <span class="math-container">$0\le C \lt 0.7$</span>. The following graph (made with online graphing calculator <a href="https://www.desmos.com/calculator/imodjckz0c" rel="nofollow noreferrer">Desmos</a>) shows how the function <span class="math-container">$f(y) = \ln |y| + y^2$</span> behaves over the range <span class="math-container">$C\in [0,2]$</span>:</p> <p><img src="https://i.stack.imgur.com/995F1m.png" alt="graph of ln y + y^2" /></p> <p>With this insight I can see how the fixed point iteration can be improved, albeit the smoothness of this curve makes it clear to trained eyes that bisection, regula falsi, or even Newton's method would be well-behaved for the purpose of inverting this function.</p>
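For readers who prefer code to a spreadsheet, the first iteration $y_{k+1} = \sqrt{\sin(x)+1-\ln y_k}$ described above is only a few lines of Python (the function name is mine; as noted, expect failures where $\sin(x)+1$ is close to zero):

```python
import math

def solve_y(x, y0=1.0, tol=1e-12, max_iter=500):
    """Iterate y <- sqrt(sin(x) + 1 - ln y), seeking ln y + y^2 = sin(x) + 1."""
    C = math.sin(x) + 1.0
    y = y0
    for _ in range(max_iter):
        t = C - math.log(y)
        if t < 0:
            # the term under the square root went negative: iteration breaks down
            raise ValueError("iteration left the domain")
        y_new = math.sqrt(t)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

for x in (0.0, 0.5, 1.0):
    y = solve_y(x)
    residual = math.log(y) + y * y - (math.sin(x) + 1.0)
    print(x, y, residual)   # residuals are essentially zero
```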
3,841,266
<p>Let <span class="math-container">$E$</span> be a Hilbert space and <span class="math-container">$f(x)=\| x\|$</span> for all <span class="math-container">$x\in E$</span>. Study the differentiability of <span class="math-container">$f$</span> at <span class="math-container">$0$</span> and find <span class="math-container">$df(x)h$</span> for all <span class="math-container">$h\in E$</span> and <span class="math-container">$x\neq0$</span>.</p> <p><strong>My attempt :</strong> <span class="math-container">$f$</span> is not differentiable at <span class="math-container">$x=0$</span>, in fact consider <span class="math-container">$E=\mathbb R^n$</span> and <span class="math-container">$\|x\|=\sqrt{x_1^2+\cdots+x_n^2}=f(x_1,\cdots,x_n).$</span> <span class="math-container">$\displaystyle\lim_{x_i \to 0^+} \dfrac {f(0,..,0,x_i,0,..,0)-f(0,\cdots,0)}{x_i}=1$</span> and <span class="math-container">$\displaystyle\lim_{x_i \to 0^-} \dfrac {f(0,..,0,x_i,0,..,0)-f(0,\cdots,0)}{x_i}=-1$</span>, so <span class="math-container">$f$</span> is not differentiable at <span class="math-container">$x=0$</span>.</p> <p>Let <span class="math-container">$x\neq 0$</span>, I have proved in a previous question that <span class="math-container">$ \psi : x \mapsto\|x\|^2$</span> is differentiable and that <span class="math-container">$d\psi(x)h=2\langle x,h\rangle$</span>, so <span class="math-container">$f(x)=\sqrt{\psi(x)}=\varphi\circ\psi(x)$</span> with <span class="math-container">$\varphi(t)=\sqrt t$</span> for all <span class="math-container">$t\geqslant 0$</span>, so <span class="math-container">$$df(x)h=d\varphi(\psi(x))\circ d\psi(x)h=\frac{1}{2\|x\|}2\langle x,h\rangle=\langle\frac{x}{\|x\|},h \rangle.$$</span> Is my attempt correct? Thanks in advance !</p>
Narasimham
95,860
<p>If you have not done numerical methods so far (Regula Falsi, Newton Raphson &amp;c.) then an approximate way is to plot the graph of the (inverse) function on paper and read off an approximate <span class="math-container">$y$</span> for a given <span class="math-container">$x:$</span></p> <p><span class="math-container">$$ x= \sin^{-1}( \log |y| +y^2-1) $$</span></p> <p>The online graphing calculator <a href="https://www.desmos.com/calculator/ucpipjtdyj" rel="nofollow noreferrer">Desmos</a> can do this for us (click on image for larger size):</p> <p><a href="https://i.stack.imgur.com/6ksCh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6ksChm.png" alt="x = arcsin(log |y| + y^2 - 1)" /></a></p> <p>Keep in mind that by convention <span class="math-container">$\sin^{-1}$</span> here only returns <span class="math-container">$x \in [-\pi/2,+\pi/2]$</span>. The two branches correspond to using the absolute value of <span class="math-container">$y$</span>, but since the initial point <span class="math-container">$(0,1)$</span> is on the upper branch, the continued solution remains on that branch. Visualize the complete solution by reflecting that upper branch backward and forward to yield a periodic &quot;wave&quot;.</p>
2,938,372
<p>Seems to me like it is. There are only finitely many distinct powers of <span class="math-container">$x$</span> modulo <span class="math-container">$p$</span>, by Fermat's Little Theorem (they are <span class="math-container">$\{1, x, x^2, ..., x^{p-2}\}$</span>), and the coefficient that I choose for each of these powers can only be taken from <span class="math-container">$\{0,1,2,..., p-1\}$</span>. So essentially I'm choosing amongst <span class="math-container">$p$</span> things <span class="math-container">$p-1$</span> many times, resulting in at most <span class="math-container">$p^{p-1}$</span> distinct polynomials. </p> <p>Yet an assignment claims that <span class="math-container">$\mathbb{Z}_p[x]$</span> is infinite. </p>
Community
-1
<p><span class="math-container">$x^p-x$</span> is a degree <span class="math-container">$p$</span> polynomial. While Fermat's little theorem gives <span class="math-container">$a^p=a$</span> for every <span class="math-container">$a\in\mathbb Z_p$</span>, the polynomial identity <span class="math-container">$x^p=x$</span> is not true. </p> <p>In fact (by the factor theorem), <span class="math-container">$p(x)=x^p-x$</span> has at most <span class="math-container">$p$</span> roots.</p> <p>So <span class="math-container">$n\not=m\implies x^n\not=x^m$</span>.</p> <p><span class="math-container">$\mathbb Z_p[x]$</span> is in fact an infinite dimensional vector space over <span class="math-container">$\mathbb Z_p$</span>, with basis <span class="math-container">$\{1,x,x^2,\dots\}$</span>.</p> <p>Hence it is an infinite set.</p>
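The contrast drawn above, $a^p=a$ for every element by Fermat's little theorem versus $x^p\neq x$ as polynomials, can be illustrated in Python by representing a polynomial over $\mathbb Z_p$ as its tuple of coefficients (an illustrative sketch):

```python
p = 5

def x_power(n):
    """Coefficient tuple (c_0, c_1, ..., c_n) of the monomial x^n over Z_p."""
    return (0,) * n + (1,)

# As *functions* on Z_p, x^p and x agree (Fermat's little theorem) ...
assert all(pow(a, p, p) == a % p for a in range(p))

# ... but as *polynomials* their coefficient tuples differ, so the powers
# 1, x, x^2, ... are pairwise distinct and Z_p[x] is infinite.
assert x_power(p) != x_power(1)
assert len({x_power(n) for n in range(100)}) == 100
print("equal as functions on Z_p, distinct as polynomials")
```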
216,473
<p>Let me begin with some background: I used to enjoy mathematics immensely in school, and wanted to pursue higher studies. However, everyone around me at that time told me it was a stupid area (that I should focus on earning as soon as possible), and so instead I opted for an engineering degree. While in college, I used to find myself fascinated by higher math and other stuff, and used to score near-perfect all the time (but only in mathematics!). I must make it clear that I don't have an extraordinary talent for math; it's just that I enjoy exploring it very much, and then describing it to others in interesting ways. Anyway, it so happened that I went on changing job after job and was never satisfied. Now at the age of 27, I'm sitting home and realizing that I should have given mathematics a thought. So I'm thinking of taking up a graduate course and picking up where I left off.</p> <p>HOWEVER . . .</p> <p>I dream of becoming a teacher cum researcher some day, even if it's at school-level only. Before my graduate course begins (in 2-3 months), I've started reviewing math from my school books. Now the point is that I expect myself to be much more mature and smarter by now. That means I should be able to solve any problem and prove any theorem given in school math. But I can't, and it's shattering me. I mean, if I can't even prove basic theorems related to Euclid's geometry, how can I ever hope to do some authentic research like the mathematicians I so admire? This makes me wonder if I’ll ever be fit for teaching. If, at the age of 27, I can’t even master basic school mathematics with certainty, how can I ever hope to tackle problems in calculus and polynomials that students bring me tomorrow? It’s as if I’m cheating myself and those who’ll come to me for instruction.</p> <p>Am I being too hard on myself? Am I expecting too much too soon? Does there come a point in a person’s math studies when he is able to discern properties and theorems all on his own? 
Or are all students of mathematics struggling and hiding their weaknesses? Or do I really have no talent for math, and it's just an idle indulgence of mine? I mean, I'm not sure "how good" I'm supposed to be in order to feel confident that I can pull it off. In general, I find myself wondering how well others know math. Do all the teachers have perfect knowledge of, say, geometry, and can tackle every problem? If not, what gives them the right to call themselves teachers?</p> <p>I have a feeling most of these questions are absurd, but I'll be very thankful if someone can put me out of my misery.</p>
Morten Jensen
41,838
<p>I took a bachelor's in software engineering at an engineering college. 3 years coursework/projects and a 6 month obligatory internship -- I'm from Denmark in Scandinavia so YMMV.</p> <p>We had calculus, algorithms, discrete maths and circuit analysis. We had theory and application but with little emphasis on the proofs come the exams. </p> <p>When I graduated I continued to pursue a Masters degree at the local university's faculty of computer science. Now everything is about proofs and I have had more maths than I thought I could bear. I <strong>REALLY</strong> struggled with the proofs at first, but to me it comes down to method and practice. You have to get comfortable with various ways of proof: by induction, by construction, by exhaustion etc. I think it's because many of the basic prerequisites of being a mathematician aren't clear to "outsiders". They certainly weren't, and still aren't, for me.</p> <p>I find that I cannot follow advanced courses on certain subjects, if I haven't done introductory coursework that was related. Most of that comes naturally by the course having prerequisite courses, but it still is a factor.</p> <p>Personally I do great in courses that interest me, so if you're hungry for learning maths, I say go for it! I think you'll do well and have fun at it. Don't let it scare you away - math is very hard to learn for many people, and understandably so. Many branches are very abstract and hard to visualize mentally. Just be devoted and take your time, and don't be afraid to flunk a course or two. I think that if you've never flunked or just baaarely passed a course, then you've never challenged yourself hard enough.</p>
3,105,184
<p>Three shooters shoot at the same target; each of them shoots just once. The three of them respectively hit the target with a probability of 60%, 70%, and 80%. What is the probability that the shooters will hit the target</p> <p>a) at least once</p> <p>b) at least twice</p> <hr> <p>I have an approach to this but I'm not sure if there's just a formula for this type of thing. For part (a) I was thinking of simply adding the probability of all of the valid scenarios. For instance <span class="math-container">$$[P(A)*(1-P(B))*(1-P(C))]+[(1-P(A))*P(B)*(1-P(C))]+...$$</span></p> <p>and so on until I cover all scenarios in which "at least one" hits. Is there a simpler way? (I assume this would also apply to part b.)</p>
jvdhooft
437,988
<p>Let <span class="math-container">$X$</span> be the number of times the target is hit. The probability <span class="math-container">$P(X \ge 1)$</span> then equals 1 minus the probability of missing the target three times:</p> <p><span class="math-container">$$P(X \ge 1) = 1 - (1 - P(A)) (1 - P(B)) (1 - P(C)) = 1 - 0.4 \cdot 0.3 \cdot 0.2 = 0.976$$</span></p> <p>To find the probability <span class="math-container">$P(X \ge 2)$</span> of hitting the target at least twice, you can consider two cases: either two people hit the target and one does not, or all people hit the target. We find:</p> <p><span class="math-container">$$P(X \ge 2) = 0.4 \cdot 0.7 \cdot 0.8 + 0.6 \cdot 0.3 \cdot 0.8 + 0.6 \cdot 0.7 \cdot 0.2 + 0.6 \cdot 0.7 \cdot 0.8 = 0.788$$</span></p>
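Both computations can be confirmed by brute force over the $2^3$ hit/miss outcomes, e.g. with a short Python check (the names are mine, purely illustrative):

```python
from itertools import product

probs = [0.6, 0.7, 0.8]   # hit probabilities of the three shooters

def p_at_least(k):
    """Sum the probabilities of the hit/miss outcomes with at least k hits."""
    total = 0.0
    for hits in product([0, 1], repeat=3):   # 0 = miss, 1 = hit
        if sum(hits) >= k:
            w = 1.0
            for h, p in zip(hits, probs):
                w *= p if h else 1.0 - p
            total += w
    return total

print(round(p_at_least(1), 3), round(p_at_least(2), 3))   # → 0.976 0.788
```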
1,847,609
<blockquote> <p>Let $R$ be a ring with identity, every ideal of which is idempotent or nilpotent. Is it true that the Jacobson radical $J(R)$ of $R$ is nilpotent?</p> </blockquote> <p>If $R$ is Noetherian and $J(R)$ is idempotent, the Nakayama lemma yields $J(R)=0$. So, for Noetherian rings whose Jacobson radicals are nonzero, we have an affirmative answer to the raised question.</p> <p>Thanks for any help or suggestion!</p>
E.R
325,912
<p>Let $a$ be an element of $R$. If $\left&lt;a\right&gt;=\left&lt;a\right&gt;^2$, then it is clear that there exists $e^2=e\in \left&lt;a\right&gt;$ such that $\left&lt;a\right&gt;=\left&lt;e\right&gt;$. Now, if $0\not= a\in J(R)$, then $\left&lt;a\right&gt;$ is nilpotent or idempotent. If $\left&lt;a\right&gt;$ is idempotent, by the above argument there exists $e^2=e\in\left&lt;a\right&gt;$ such that $\left&lt;a\right&gt;=\left&lt;e\right&gt;$. Since $e\in J(R)$, $1-e$ is a unit idempotent and so $1-e=1$. Thus, $\left&lt;a\right&gt;=0$, a contradiction. Therefore, every element of $J(R)$ is nilpotent and so $J(R)$ is nil. If $J(R)$ is finitely generated, $J(R)$ is nilpotent.</p>
1,847,609
<blockquote> <p>Let $R$ be a ring with identity, every ideal of which is idempotent or nilpotent. Is it true that the Jacobson radical $J(R)$ of $R$ is nilpotent?</p> </blockquote> <p>If $R$ is Noetherian and $J(R)$ is idempotent, the Nakayama lemma yields $J(R)=0$. So, for Noetherian rings whose Jacobson radicals are nonzero, we have an affirmative answer to the raised question.</p> <p>Thanks for any help or suggestion!</p>
Jason Juett
214,136
<p>As noted in Rostami's answer, you do get that $J(R)$ is nil, hence is nilpotent if it is finitely generated. (Here is an alternative proof, just for fun. Since idempotents are locally 0 or 1, these assumptions imply $J(R) = $nil$(R)$ holds locally, hence globally.)</p> <p>But here is the main point of my answer: an example of a quasilocal ring whose maximal ideal is idempotent and all other proper ideals are nilpotent. So the answer to your question is "no".</p> <p>(Edited with parenthetical remarks justifying the claims, as requested.)</p> <p>Let $K$ be a field, $D := K[\{X^s \mid s \in \mathbb{Q}^+\}]$, $M$ be the maximal ideal consisting of elements with zero constant term, and $\overline D_M := D_M/(X)_M$. Note that the nonzero elements of $D_M$ are each a unit multiple of a power of $X$. (Given $f \in D$, factor out the biggest power of $X$ that you can to get $f = X^sf_0$, where $f_0 \notin M$. Since elements of $D$ that are not in $M$ are units in $D_M$, any element of $D_M$ with a numerator of $f$ is a unit multiple of $X^s$.) Thus $M_M = M_M^2$, hence $\overline M_M = \overline M_M^2$. If $I_M$ is any other nonzero proper ideal of $D_M$, then there is a positive lower bound on the powers of $X$ it contains, hence $\overline I_M$ is nilpotent. (If $I_M$ is a proper ideal and there is no positive lower bound on the powers of $X$ that $I_M$ contains, then for each positive rational $s$, there is a $t &lt; s$ with $X^t \in I_M$, hence $X^s = X^{s-t}X^t \in I_M$. So $I_M$ would contain every power of $X$, hence $I_M = M_M$.)</p>
256,138
<p>I need to generate four positive random values in the range [.1, .6] with (at most) two significant digits to the right of the decimal, and which sum to exactly 1. Here are three attempts that do not work.</p> <pre><code>x = {.15, .35, .1, .4}; While[Total[x] != 1, x = Table[Round[RandomReal[{.1, .6}], .010], 4]]; x = {.25, .25, .25, .25}; While[Total[x] == 1, x = Table[Round[RandomReal[{.1, .6}], .010], 4]]; NestWhileList[Total[x], x = Table[Round[RandomReal[{.1, .6}], .010], 4], Plus @@ x == 1][[1]] </code></pre>
Gosia
5,372
<p>If the four numbers are to add to 1, then only three of them can be 'random'.</p> <pre><code> In[98]:= sim = {1, 1, 1}; While[6/10 &gt; Total[sim] || Total[sim] &gt; 9/10, sim = RandomReal[{1/10, 6/10}, 3, WorkingPrecision -&gt; 2] ] In[100]:= sim Out[100]= {0.18, 0.37, 0.23} In[101]:= res = Append[sim, 1 - Total[sim]] Out[101]= {0.18, 0.37, 0.23, 0.22} In[102]:= Total[res] Out[102]= 1.0 </code></pre>
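The same rejection idea can be sketched in plain Python, working in integer hundredths so that the two-decimal constraint and the exact sum come for free (an illustration alongside the Mathematica code above, not a replacement for it):

```python
import random

def four_parts(rng=random):
    """Four values in [0.10, 0.60] with two decimals, summing to exactly 1."""
    while True:
        cents = [rng.randint(10, 60) for _ in range(3)]   # work in hundredths
        fourth = 100 - sum(cents)                         # forced fourth part
        if 10 <= fourth <= 60:
            cents.append(fourth)
            return [c / 100 for c in cents]

random.seed(2023)   # seeded only to make the illustration reproducible
print(four_parts())
```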
636,359
<p>How do you prove that $\angle CTP = \angle CBP+ \angle BCN$? Please check the image below:</p> <p><img src="https://i.stack.imgur.com/SqeXa.png" alt="enter image description here"></p>
vadim123
73,324
<ol> <li>angles in a triangle sum to 180 degrees</li> <li>supplementary angles sum to 180 degrees</li> </ol>
3,612,916
<p>Find a function <span class="math-container">$f$</span> such that <span class="math-container">$f(x)=0$</span> for <span class="math-container">$x\leq0$</span>, <span class="math-container">$f(x)=1$</span> for <span class="math-container">$x\geq1$</span>, and <span class="math-container">$f$</span> is infinitely differentiable. </p> <p>I've tried conjoining two quarters of a circle to get an S-shaped graph of the function and tried with different combinations of <span class="math-container">$e^{\frac{-1}{x}}$</span> and <span class="math-container">$e^{\frac{1}{x-1}}$</span> but so far nothing has worked. </p> <p>Any help would be appreciated.</p> <p>Edit: Not allowed to use integrals</p>
AlanD
356,933
<p>Try <span class="math-container">$$ f(x)= \begin{cases} 0, &amp; x\leq 0\\ C\int_0^x \exp\left(\frac{r}{t(t-1)}\right)\,dt,&amp; 0&lt;x&lt;1 \\ 1, &amp;x\geq 1 \end{cases} $$</span> for <span class="math-container">$r&gt;0$</span>. What is <span class="math-container">$C$</span>?</p>
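With the normalization $C = 1\big/\int_0^1 \exp\big(\tfrac{r}{t(t-1)}\big)\,dt$, the middle branch rises smoothly from $0$ to $1$. A quick numeric sanity check of the construction in Python, using a crude midpoint rule with $r=1$ (by the symmetry of the integrand about $t=\tfrac12$ one should find $f(\tfrac12)=\tfrac12$):

```python
import math

r, n = 1.0, 20000

def g(t):
    # integrand of the bump: positive on (0, 1), vanishing to all orders at 0 and 1
    return math.exp(r / (t * (t - 1.0)))

mids = [(k + 0.5) / n for k in range(n)]
weights = [g(t) / n for t in mids]
total = sum(weights)        # midpoint-rule approximation of the integral over [0, 1]
C = 1.0 / total

def f(x):
    """Midpoint-rule approximation of the middle branch C * integral_0^x g(t) dt."""
    return C * sum(w for t, w in zip(mids, weights) if t < x)

print(f(0.25), f(0.5), f(0.75))   # increases from near 0 toward 1; f(0.5) = 0.5
```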
1,916,232
<p>I have massive problems with questions like these:</p> <p>Let $\{v_1, . . . , v_r\}$ be a set of linearly independent vectors in $\mathbb{R}^n$ (with $r &lt; n$), and let $w\in\mathbb{R}^n$ be a vector such that $w \notin \mathrm{span}\{v_1, . . . , v_r\}$. Prove that $\{v_1, . . . , v_r, w\}$ is a linearly independent set.</p> <p>Let $U$ and $V$ be subspaces of $\mathbb{R}^n$. Define the set $U + V = \{u + v|u ∈ U, v ∈ V \}$. Prove that $U + V$ is a subspace of $\mathbb{R}^n$.</p> <p>I'm not looking for the answers to these 2 questions but instead I want to know how to learn to approach these problems. These proving problems are my absolute Achilles' heel. I can't get the solution at all. What can I do to learn to solve these problems? Any online resources you guys can recommend? I usually learn how to do real questions by following examples but I get nothing from watching people talk about these theories and principles behind how it's done...</p>
JMP
210,189
<p>Linear Algebra questions are usually based on the definition of linear independence. Once you get the hang of it, the problems make much more sense.</p> <p>For a good on-line resource, you could try <a href="https://www.khanacademy.org/math/linear-algebra" rel="nofollow">Khan Academy</a>.</p>
1,438,512
<p>I was given this problem at school to work on at home as a challenge. After spending a good 2 hours on this I can't seem to get further than the last part of the equation. I'd love to see the way to get through 2) before tomorrow's lesson as a head start.</p> <p>So the problem is as follows:</p> <p>1) Quadratic Equation $$2x^2 + 8x + 1 = 0$$ </p> <p>i. Find the sum of the roots, $$\alpha + \beta$$</p> <p>ii. Find the product of the roots, $$\alpha\beta$$</p> <p>2) Find an equation with integer coefficients whose roots are:</p> <p>$$2\alpha^4+\frac{1}{\beta^2}$$$$2\beta^4+\frac{1}{\alpha^2}$$</p> <p>I'm completely puzzled by the second part of the question and I've tried following the method I was taught. Sorry if formatting is a bit off, first time posting here :) </p> <p>Thanks in advance for any help!</p>
Bernard
202,857
<p><strong>Hint:</strong></p> <p>You must calculate $S=\alpha^4+\dfrac1{\beta^2}+\beta^4+\dfrac1{\alpha^2}$ and $P=\Bigl(\alpha^4+\dfrac1{\beta^2}\Bigr)\Bigl(\beta^4+\dfrac1{\alpha^2}\Bigr)$. The two desired numbers will then be the roots of the quadratic equation $\;x^2-Sx+P=0$.</p> <p>Now any symmetric rational function of $\alpha$ and $\beta$, by a theorem of Newton, can be expressed as a rational function of the elementary symmetric functions $s=\alpha+\beta$ and $p=\alpha\beta$: \begin{align*} \alpha^2+\beta^2&amp;=s^2-2p,\enspace\text{hence}\quad\frac1{\alpha^2}+\frac1{\beta^2}=\frac{s^2-2p}{p^2},\\ \alpha^4+\beta^4&amp;=\bigl(\alpha^2+\beta^2\bigr)^2-2\alpha^2\beta^2=(s^2-2p)^2-2p^2=s^4-4ps^2+2p^2 \end{align*} whence $S$.</p> <p>Similarly: $$P=\alpha^4\beta^4+\alpha^2+\beta^2+\frac1{\alpha^2\beta^2}=p^4+s^2-2p+\frac1{p^2}.$$</p>
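The identities are easy to verify numerically for the actual roots of $2x^2+8x+1=0$, where $s=-4$ and $p=\frac12$. A plain-Python check (note it uses $\alpha^4+\frac1{\beta^2}$ as in the hint, without the factor $2$ from the original roots; that factor is handled the same way):

```python
import math

# Roots of 2x^2 + 8x + 1 = 0, so s = alpha + beta = -4 and p = alpha*beta = 1/2.
d = math.sqrt(8 * 8 - 4 * 2 * 1)
alpha = (-8 + d) / 4
beta = (-8 - d) / 4
s, p = alpha + beta, alpha * beta

# Sum of the two target numbers, directly and via s, p:
# alpha^4 + beta^4 = s^4 - 4 p s^2 + 2 p^2, and 1/alpha^2 + 1/beta^2 = (s^2 - 2p)/p^2.
S = alpha**4 + 1 / beta**2 + beta**4 + 1 / alpha**2
S_formula = (s**4 - 4 * p * s**2 + 2 * p**2) + (s**2 - 2 * p) / p**2

# Product, directly and via p^4 + s^2 - 2p + 1/p^2.
P = (alpha**4 + 1 / beta**2) * (beta**4 + 1 / alpha**2)
P_formula = p**4 + s**2 - 2 * p + 1 / p**2
```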
2,830,969
<p>I could not find an example where the denominator $x$ approaches $0$.</p> <p>$f(0) + f'(0)x$ does not work because $f(0)$ would be $+\infty$. </p>
James
549,970
<p>Well, $f(x) = \frac{1}{x}$ is not even defined at $x=0$, which means it cannot be differentiated there, and in particular we cannot find a linear approximation at that point.</p>
4,512,164
<p>Given that the correlation coefficient between <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> is <span class="math-container">$\rho(X, Y)$</span> and <span class="math-container">$a, b, c, d$</span> are constants, prove that the correlation coefficient between <span class="math-container">$U = aX + b$</span> and <span class="math-container">$V = cY + d$</span> satisfies <span class="math-container">$\rho(U, V) = \rho(X, Y)$</span>.</p> <p>My take on the problem:</p> <p>Prove that <span class="math-container">$p(U, V)=p(X, Y)$</span> <span class="math-container">$$ \begin{aligned} &amp;\operatorname{Cov}(X, Y)=E[X Y]-E[X] \cdot E[Y] \\ &amp;\rho(U, V)=p(X, Y) \\ &amp;\frac{\operatorname{Cov}(U, V)}{\sqrt{\operatorname{Var}(U) \operatorname{Var}(V)}}=\frac{\operatorname{Cov}(X, Y)}{\sqrt{\operatorname{Var}(X) \operatorname{Var}(Y)}} \\ &amp;\frac{E[U V]-E[U] E[V]}{\sqrt{\operatorname{Var}(U) \operatorname{Var}(V)}}=\frac{E[X Y]-E[X] E[Y]}{\sqrt{\operatorname{Var}(X) \operatorname{Var}(U)}} \end{aligned} $$</span> <span class="math-container">$$ \begin{aligned} &amp;\frac{a c \cdot E[X Y]+a d E[X]+b c E[Y]-E[X] \cdot a E[Y] \cdot c}{a c \sqrt{\operatorname{Var}(X) \operatorname{Var}(Y)}} \\ &amp;=\frac{E[X Y]-E[X] E[Y]}{\sqrt{\operatorname{Var}(X) \operatorname{Var}(Y)}} \\ &amp;\frac{E[X Y]+\frac{a d}{a c} E[X]+\frac{b c}{a c} E[Y]-E[Y] E[Y]}{a c / a c} \\ &amp;=E[X Y]-E[X] E[Y] \\ &amp;\frac{d}{c} E[X]+\frac{b}{a} E[Y]=0 \\ &amp;\frac{d}{c} E[X]=-\frac{b}{a} E[Y] \end{aligned} $$</span></p> <p>How to get to a conclusion from here?</p>
user1080911
1,080,911
<p>Hint: use <span class="math-container">$\rho(X,Y)=Cov(\frac{X-\mathbb{E}X}{\sigma(X)},\frac{Y-\mathbb{E}Y}{\sigma(Y)})$</span>.</p> <p>Should be a one line exercise. Here <span class="math-container">$\sigma$</span> is the standard deviation. Explore the properties of variance and mean value.</p> <blockquote class="spoiler"> <p><span class="math-container">$\rho(aX+b,cY+d)=Cov(\frac{aX+b-a\mathbb{E}X-b}{\sigma(aX+b)},\frac{cY+d-c\mathbb{E}Y-d}{\sigma(cY+d)})=Cov(\frac{a(X-\mathbb{E}X)}{|a|\sigma(X)},\frac{c(Y-\mathbb{E}Y)}{|c|\sigma(Y)})=\frac{ac}{|a||c|}\rho(X,Y)$</span>, which equals <span class="math-container">$\rho(X,Y)$</span> when <span class="math-container">$ac&gt;0$</span>.</p> </blockquote>
4,512,164
<p>Given that the correlation coefficient between <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> is <span class="math-container">$\rho(X, Y)$</span> and <span class="math-container">$a, b, c, d$</span> are constants, prove that the correlation coefficient between <span class="math-container">$U = aX + b$</span> and <span class="math-container">$V = cY + d$</span> satisfies <span class="math-container">$\rho(U, V) = \rho(X, Y)$</span>.</p> <p>My take on the problem:</p> <p>Prove that <span class="math-container">$p(U, V)=p(X, Y)$</span> <span class="math-container">$$ \begin{aligned} &amp;\operatorname{Cov}(X, Y)=E[X Y]-E[X] \cdot E[Y] \\ &amp;\rho(U, V)=p(X, Y) \\ &amp;\frac{\operatorname{Cov}(U, V)}{\sqrt{\operatorname{Var}(U) \operatorname{Var}(V)}}=\frac{\operatorname{Cov}(X, Y)}{\sqrt{\operatorname{Var}(X) \operatorname{Var}(Y)}} \\ &amp;\frac{E[U V]-E[U] E[V]}{\sqrt{\operatorname{Var}(U) \operatorname{Var}(V)}}=\frac{E[X Y]-E[X] E[Y]}{\sqrt{\operatorname{Var}(X) \operatorname{Var}(U)}} \end{aligned} $$</span> <span class="math-container">$$ \begin{aligned} &amp;\frac{a c \cdot E[X Y]+a d E[X]+b c E[Y]-E[X] \cdot a E[Y] \cdot c}{a c \sqrt{\operatorname{Var}(X) \operatorname{Var}(Y)}} \\ &amp;=\frac{E[X Y]-E[X] E[Y]}{\sqrt{\operatorname{Var}(X) \operatorname{Var}(Y)}} \\ &amp;\frac{E[X Y]+\frac{a d}{a c} E[X]+\frac{b c}{a c} E[Y]-E[Y] E[Y]}{a c / a c} \\ &amp;=E[X Y]-E[X] E[Y] \\ &amp;\frac{d}{c} E[X]+\frac{b}{a} E[Y]=0 \\ &amp;\frac{d}{c} E[X]=-\frac{b}{a} E[Y] \end{aligned} $$</span></p> <p>How to get to a conclusion from here?</p>
Clarinetist
81,560
<p>A huge problem with what you're doing is you're assuming what you want to show to start with, which is a bad practice. You should also make use of covariance <a href="https://en.wikipedia.org/wiki/Covariance#Properties" rel="nofollow noreferrer">properties</a> to make your life easier.</p> <p>Let <span class="math-container">$U = aX + b$</span> and <span class="math-container">$V = cY + d$</span>. Then <span class="math-container">\begin{align} \text{Cov}(U, V) &amp;= \text{Cov}(aX + b, cY + d) \\ &amp;= \text{Cov}(aX, cY) \\ &amp;= ac\text{Cov}(X, Y)\text{.} \end{align}</span> Furthermore, we have that <span class="math-container">$$\text{Var}(U) = \text{Var}(aX + b) = \text{Var}(aX) = a^2\text{Var}(X)$$</span> and similarly, <span class="math-container">$\text{Var}(V) = c^2\text{Var}(Y)$</span>.</p> <p>Do not forget that <span class="math-container">$\sqrt{x^2} = |x|$</span>, hence</p> <p><span class="math-container">\begin{align} &amp;\sqrt{\text{Var}(U)} = \sqrt{a^2\text{Var}(X)} = |a|\sqrt{\text{Var}(X)} \\ &amp;\sqrt{\text{Var}(V)} = \sqrt{c^2\text{Var}(Y)} = |c|\sqrt{\text{Var}(Y)} \end{align}</span></p> <p>Hence the correlation coefficient <span class="math-container">\begin{align} \rho(U, V) &amp;= \dfrac{\text{Cov}(U, V)}{\sqrt{\text{Var}(U)}\sqrt{\text{Var}(V)}} \\ &amp;= \dfrac{ac\text{Cov}(X, Y)}{|a|\sqrt{\text{Var}(X)} \cdot |c|\sqrt{\text{Var}(Y)}} \\ &amp;= \dfrac{ac}{|a||c|} \cdot \dfrac{\text{Cov}(X, Y)}{\sqrt{\text{Var}(X)}\sqrt{\text{Var}(Y)}} \\ &amp;= \dfrac{ac}{|a||c|} \cdot \rho(X, Y)\text{.} \end{align}</span></p> <p>We now have to consider <span class="math-container">$\dfrac{ac}{|a||c|}$</span>.</p> <p>Obviously if both <span class="math-container">$a, c$</span> are positive, we have <span class="math-container">$|a||c| = ac$</span>, so <span class="math-container">$\dfrac{ac}{|a||c|} = 1$</span>.</p> <p>Suppose, without loss of generality, that <span class="math-container">$a$</span> is negative and <span 
class="math-container">$c$</span> is positive. Then <span class="math-container">$|a| = -a$</span> by <a href="https://en.wikipedia.org/wiki/Absolute_value#Definition_and_properties" rel="nofollow noreferrer">definition of the absolute value</a> and <span class="math-container">$|c| = c$</span>. Hence <span class="math-container">$$\dfrac{ac}{|a||c|} = \dfrac{ac}{(-a)c} = -1\text{.}$$</span></p> <p>If both <span class="math-container">$a$</span> and <span class="math-container">$c$</span> are negative, then <span class="math-container">$$\dfrac{ac}{|a||c|} = \dfrac{ac}{(-a)(-c)} = \dfrac{ac}{ac} = 1\text{.}$$</span></p> <p>Thus, we conclude:</p> <p><span class="math-container">$\fbox{If both $a$ and $c$ are of the same sign, then $\rho(U, V) = \rho(X, Y)$. Otherwise, $\rho(U, V) = -\rho(X, Y)$.}$</span></p>
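The sign analysis is easy to confirm numerically. A small plain-Python sketch (the data values are arbitrary, chosen only to give a nonzero correlation):

```python
def corr(xs, ys):
    # Sample correlation computed from the definition Cov / (sd_x * sd_y).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

X = [1.0, 2.0, 4.0, 8.0, 16.0]
Y = [3.0, 1.0, 5.0, 9.0, 7.0]

base = corr(X, Y)
same_sign = corr([2 * x + 3 for x in X], [5 * y - 1 for y in Y])   # a = 2,  c = 5
flipped = corr([-2 * x + 3 for x in X], [5 * y - 1 for y in Y])    # a = -2, c = 5
```

With `a` and `c` of the same sign the correlation is unchanged; flipping the sign of one of them negates it.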
3,460,749
<p>I'm an undergraduate student currently studying mathematical analysis. </p> <p>Our professor uses Zorich's Mathematical Analysis, but I found the text too difficult to understand. </p> <p>After exploring some textbooks, I found that Abbott was easier to follow, so I studied Abbott until I realized that there's a significant amount of content in Zorich that Abbott doesn't cover.</p> <p>So I was wondering if there's a book out there that covers as much content as Zorich but is more readable?</p> <p>Thank you for any help.</p>
UESTCfresh
798,645
<p>Herbert Amann's <em>Analysis</em>, which has three volumes, is more detailed than Zorich's. However, I think it is too difficult to read.</p>
2,533,186
<p>How would you find all pairs $(z,w)$ that satisfy $zw=-1$?</p> <p>I'm stuck on this problem, and the best approach I can think of would be to guess and check.</p>
Jepsilon
505,973
<p>For $zw=-1$, $w$ is the multiplicative inverse of $z$ in the complex field, but with a minus sign:</p> <p>$w=-\frac{\bar{z}}{z\bar{z}}=-\frac{\bar{z}}{|z|^2}$ and $z=-\frac{\bar{w}}{w\bar{w}}=-\frac{\bar{w}}{|w|^2}$ for $w,z\neq0$</p>
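A quick check with <code>cmath</code> that the formula $w=-\bar z/|z|^2$ (equivalently $w=-1/z$) really gives $zw=-1$; the sample points are arbitrary:

```python
import cmath

def partner(z):
    # The w paired with z in z*w = -1:  w = -conj(z)/|z|^2 = -1/z.
    return -z.conjugate() / abs(z) ** 2

samples = [1 + 2j, -3 + 0.5j, cmath.exp(0.7j)]
products = [z * partner(z) for z in samples]
```

So every solution pair has the form $(z, -1/z)$ with $z\neq 0$.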
1,511,518
<p>Let <span class="math-container">$GL_n^+$</span> be the group of <span class="math-container">$n \times n$</span> real invertible matrices with positive determinant</p> <p>Does there exist a left invariant isotropic Riemannian metric on <span class="math-container">$GL_n^+$</span>?</p> <p>(By "isotropic" I mean that the sectional curvature <span class="math-container">$k(p,\sigma)$</span> for <span class="math-container">$\sigma \subseteq T_pM$</span> does not deped on <span class="math-container">$\sigma$</span>. By left invariance it does not depend on <span class="math-container">$p$</span> as well).</p> <hr> <p>I present a proof below, but I suspect that there are simpler ones:</p> <p>Suppose such a metric <span class="math-container">$g$</span> exists. Then the left invariance implies <span class="math-container">$(GL_n^+,g)$</span> is complete, hence it's universal covering space is complete, simply connected, and with constant curvature, so it must be diffeomorphic to <span class="math-container">$\mathbb{R}^{n^2}$</span> or <span class="math-container">$\, \mathbb{S}^{n^2}$</span>. </p> <p>Since compact spaces can cover only compact spaces, this rules out <span class="math-container">$\mathbb{S}^{n^2}$</span>, so the universal cover of <span class="math-container">$GL_n^+$</span> must be <span class="math-container">$\mathbb{R}^{n^2}$</span>.</p> <p>However, <a href="http://www.sciencedirect.com/science/article/pii/S0001870876800023" rel="nofollow noreferrer">Milnor states here</a> that the universal cover of a Lie group <span class="math-container">$G$</span> is <strong>not</strong> homeomorphic to Euclidean space iff <span class="math-container">$G$</span> contains a compact non-abelian subgroup*. Since <span class="math-container">$GL_n^+$</span> contains <span class="math-container">$SO(n)$</span> we have a contradiction, at least for <span class="math-container">$n \ge 3$</span>.</p> <hr> <p>*I actually don't know how to prove this. 
Any hints or references would be welcome. Milnor states this in theorem 3.4.</p>
Qiaochu Yuan
232
<p>$GL_n^{+}$ deformation retracts onto $SO(n)$. The universal cover of this (for $n \ge 3$) is a double cover, since $\pi_1(SO(n)) = \mathbb{Z}_2$. It's known as the Spin group $Spin(n)$, and in particular, because it's a double cover it's compact. Hence it's certainly not homotopy equivalent to a Euclidean space (e.g. because its top $\mathbb{Z}_2$ homology is nontrivial), which means the universal cover of $GL_n^{+}$ can't be either.</p> <p>Various other arguments are possible. For example, we also know that (again, for $n \ge 3$) $\pi_3(Spin(n)) \cong H_3(Spin(n)) \cong \mathbb{Z}$, so the same is true of the universal cover of $GL_n^{+}$. </p>
1,266,525
<p>I'm shown part of the function $g(x) = \sqrt{4x-3}$. Is it injective? I said yes, since by definition a function is injective if $f(x) = f(y)$ implies $x = y$. Is this right? </p> <p>Under what criteria is $g(x)$ bijective? For what domain and codomain does $g(x)$ meet these criteria? I'm totally lost on this one! </p> <p>Find the inverse of $g(x)$, sketch the graph and explain how $g(x)$ and $g^{-1}(x)$ relate geometrically. </p> <p>Any help on this is greatly appreciated. </p>
ajotatxe
132,456
<p>It really makes no sense to ask whether a function is surjective if you don't have the codomain.</p> <p>That is: you have something like $f(x)=\sqrt{4x-3}$. If you are working with real numbers, $x$ must be $\ge\frac34$ in order for the square root to make sense. Furthermore, the values that $f(x)$ takes are nonnegative.</p> <p>So we can, in most cases, assume that the domain of $f$ is "as large as possible"; in this case, $[\frac34,\infty)$. But a function is not only an expression and a domain. It has a codomain, too.</p> <p>The codomain is the set in which the values of $f$ lie. But this set can contain elements that $f$ never reaches.</p> <p>For example: for our $f(x)=\sqrt{4x-3}$, we can set $f:[\frac34,\infty)\to [0,\infty)$, and if $f$ is defined this way, $f$ is surjective, because every element of $[0,\infty)$ (the codomain) is reached by $f$.</p> <p>But if we set $f:[\frac34,\infty)\to\Bbb R$, then $f$ is not surjective, because $f$ never takes negative values.</p>
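For the restricted function $g:[\frac34,\infty)\to[0,\infty)$, solving $y=\sqrt{4x-3}$ for $x$ gives the inverse $g^{-1}(y)=(y^2+3)/4$. A short Python round-trip check (sample points arbitrary):

```python
def g(x):
    return (4 * x - 3) ** 0.5        # needs x >= 3/4

def g_inv(y):
    # Solve y = sqrt(4x - 3):  y^2 = 4x - 3  =>  x = (y^2 + 3)/4, for y >= 0.
    return (y * y + 3) / 4

xs = [0.75, 1.0, 2.5, 10.0]
ys = [0.0, 1.0, 3.5]
ok_x = all(abs(g_inv(g(x)) - x) < 1e-9 for x in xs)
ok_y = all(abs(g(g_inv(y)) - y) < 1e-9 for y in ys)
```

Geometrically, the graphs of $g$ and $g^{-1}$ are reflections of each other in the line $y=x$.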
345,260
<p>I'm trying to prove that there exists a multiplicative linear functional in $\ell_\infty^*$ that extends the limit funcional that is defined in $c$ (i.e., im looking for a linear functional $f \colon \ell_\infty \to \mathbb K$ such that $f( (x_n * y_n) ) = f (x_n) f(y_n)$, for every $(x_n), (y_n) \in \ell_\infty$, and such that $f((x_n)) = \lim x_n$ , if $(x_n)$ converges). </p> <p>I found a lot of references saying that it exists but I can't find a detailed proof. The usual Hahn-Banach approach doesn't work because I get a shift invariant functional, and thats inconsistent with multiplicavity. The references I found suggest that using ultrafilters to define the limit should work. I found this interesting proof of the reciprocal: every multiplicative linear functional in $\ell_\infty^*$ is a limit along an ultrafilter: <a href="https://math.stackexchange.com/questions/175553/every-multiplicative-linear-functional-on-ell-infty-is-the-limit-along-an">Every multiplicative linear functional on $\ell^{\infty}$ is the limit along an ultrafilter.</a> it assumes $\mathbb K = \mathbb R$, but I could adapt so I think (assuming I made no mistakes) it works for $\mathbb C$ as well. </p> <p>In sum, I'm looking for a proof of the result here <a href="http://planetmath.org/BasicPropertiesOfALimitAlongAFilter" rel="nofollow noreferrer">http://planetmath.org/BasicPropertiesOfALimitAlongAFilter</a> but it should work for $\mathbb C$ and the $(x_n) \mapsto \mathcal F -\lim (x_n)$ functional should be continuous. Is there a good reference for this? or is it just trivial?</p> <p>thanks</p>
Nate Eldredge
822
<p>This argument avoids the use of ultrafilters.</p> <p>Let $e_n \in \ell_\infty^*$ be the evaluation map $e_n(x) = x_n$. The set $\{e_n\}$ is contained in the unit ball of $\ell_\infty^*$, which by Alaoglu's theorem is weak-* compact, hence $\{e_n\}$ has a weak-* cluster point; call it $f$. I claim this $f$ has the properties you desire.</p> <p>It is easy to check that the set of multiplicative linear functionals is weak-* closed in $\ell_\infty^*$; each $e_n$ is multiplicative and hence so is $f$.</p> <p>For each $x \in \ell^\infty$, let $\pi_x : \ell_\infty^* \to \mathbb{C}$ be the evaluation functional $\pi_x(g) = g(x)$. By definition of the weak-* topology, $\pi_x$ is weak-* continuous. Suppose $x \in c \subset \ell_\infty$ is convergent, with $x_n \to a$. By continuity of $\pi_x$, $f(x) = \pi_x(f)$ must be a cluster point of $\{\pi_x(e_n)\} = \{x_n\}$. But $x$ is a convergent sequence so the only cluster point of $\{x_n\}$ is $a$. Thus $f(x) = a$.</p> <p>$f$ has another interesting property: since $\mathbb{C}$ is metric, all cluster points in $\mathbb{C}$ are subsequential limits. Thus for any $x \in \ell^\infty$, $f(x)$ is a subsequential limit of $\{x_n\}$. For instance, if $x_n = (-1)^n$, $f(x)$ must be either -1 or 1, whereas a Banach limit must assign $x$ the limit value of 0. A corollary of this is that for any real sequence $\{x_n\}$, we must have $\liminf x_n \le f(x) \le \limsup x_n$.</p> <p>It is also interesting to note that $f$ is an element of $\ell_\infty^*$ that cannot correspond to an element of $\ell^1$. So this gives an alternate proof that $\ell^1$ is not reflexive.</p> <p>If you like, you can instead produce $f$ as a limit of a subnet of $\{e_n\}$, or a limit of $\{e_n\}$ along an ultrafilter. In any case it is a cluster point.</p>
345,260
<p>I'm trying to prove that there exists a multiplicative linear functional in $\ell_\infty^*$ that extends the limit funcional that is defined in $c$ (i.e., im looking for a linear functional $f \colon \ell_\infty \to \mathbb K$ such that $f( (x_n * y_n) ) = f (x_n) f(y_n)$, for every $(x_n), (y_n) \in \ell_\infty$, and such that $f((x_n)) = \lim x_n$ , if $(x_n)$ converges). </p> <p>I found a lot of references saying that it exists but I can't find a detailed proof. The usual Hahn-Banach approach doesn't work because I get a shift invariant functional, and thats inconsistent with multiplicavity. The references I found suggest that using ultrafilters to define the limit should work. I found this interesting proof of the reciprocal: every multiplicative linear functional in $\ell_\infty^*$ is a limit along an ultrafilter: <a href="https://math.stackexchange.com/questions/175553/every-multiplicative-linear-functional-on-ell-infty-is-the-limit-along-an">Every multiplicative linear functional on $\ell^{\infty}$ is the limit along an ultrafilter.</a> it assumes $\mathbb K = \mathbb R$, but I could adapt so I think (assuming I made no mistakes) it works for $\mathbb C$ as well. </p> <p>In sum, I'm looking for a proof of the result here <a href="http://planetmath.org/BasicPropertiesOfALimitAlongAFilter" rel="nofollow noreferrer">http://planetmath.org/BasicPropertiesOfALimitAlongAFilter</a> but it should work for $\mathbb C$ and the $(x_n) \mapsto \mathcal F -\lim (x_n)$ functional should be continuous. Is there a good reference for this? or is it just trivial?</p> <p>thanks</p>
Chao You
62,630
<p>I have done similar work before, which is included in the paper: Chao You, <em>A note on $\tau$-convergence, $\tau$-convergent algebra and applications</em>, Topology Appl. 159, No. 5, 1433-1438 (2012). It may happen that you cannot define a multiplicative linear functional on $l^{\infty}(\mathbb{N})$, but on a subalgebra of it. I used the concept of multiplier there. Hopefully, this will be useful to you. Good luck!</p>
423,158
<p>Maybe I'm too naive in asking this question, but I think it's important and I'd like to know your answer. So, for example I always see that people just write something like "let $f:R\times R\longrightarrow R$ such that $f(x,y)=x^2y+1$", or for example "Let $g:A\longrightarrow A\times \mathbb{N}$ such that for every function $f:\mathbb{N}\longrightarrow A$ we have $g(f(n))=(f(n),n)$)". They never justify the existence, but rather they assume the function to exist. So my question is why? How do you justify the existence? </p>
Zen
72,576
<p>In general it suffices to prove that for your function, each $x$ in the domain maps to exactly one $f(x)$ in the codomain. Assuming the existence of a function is therefore equivalent to assuming this property, which is rather trivial to prove in most cases (but not all; Cantor-Bernstein is one notable example where the existence of a function with certain properties is <em>non</em>-obvious).</p>
423,158
<p>Maybe I'm too naive in asking this question, but I think it's important and I'd like to know your answer. So, for example I always see that people just write something like "let $f:R\times R\longrightarrow R$ such that $f(x,y)=x^2y+1$", or for example "Let $g:A\longrightarrow A\times \mathbb{N}$ such that for every function $f:\mathbb{N}\longrightarrow A$ we have $g(f(n))=(f(n),n)$)". They never justify the existence, but rather they assume the function to exist. So my question is why? How do you justify the existence? </p>
Dan Christensen
3,515
<p>Your first example is just the composition of several real-valued functions. For simplicity, let's recast it as $f(x)=xxy+1$. It is just the composition of known functions: the multiplication and addition of real numbers. Multiplication maps every pair of real numbers to a real number. Likewise for addition. (Things get more complicated with division and exponentiation.) So, it is easy to verify that for every pair of real numbers $x$ and $y$, $xxy+1$ gives you a real number, and that $f$ is indeed a function.</p> <p>Not all functions are defined in terms of simple compositions of known functions, however. If you are starting from, say, Peano's Axioms, and want to prove the existence of an add function on the natural numbers, you would have to construct a suitable subset $A$ of the set of ordered triples of natural numbers (tricky!) and prove that:</p> <p>$\forall a,b\in N (\exists c\in N((a,b,c)\in A))$</p> <p>$\forall a,b,c_1, c_2\in N((a,b,c_1)\in A\land(a,b,c_2)\in A \rightarrow c_1=c_2)$ </p> <p>Then you would be entitled to use the function notation $A: N^2 \rightarrow N$, or (my preference) $\forall a,b\in N (A(a,b)\in N)$. Then, of course, you would have to prove that the function $A$ has all the required properties of an add function: associativity, commutativity, etc.</p> <p>BTW, that subset $A$ is such that:</p> <p>$\forall a,b,c ((a,b,c)\in A \leftrightarrow ((a,b,c)\in N^3$ </p> <p>$\land \forall d\in P(N^3) (\forall e\in N ((e,1,s(e))\in d)$ </p> <p>$\land \forall e,f,g\in N ((e,f,g)\in d \rightarrow (e,s(f),s(g))\in d)$ </p> <p>$ \rightarrow (a,b,c)\in d)))$</p> <p>where $1$ is the first natural number and $s$ is the usual successor function.</p> <p>From this construction, you can derive:</p> <p>$\forall a\in N (A(a,1)=s(a))$</p> <p>$\forall a,b\in N (A(a,s(b))=s(A(a,b)))$</p>
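The two derived clauses at the end translate directly into a recursive sketch (plain Python, with $1$ as the first natural number as in the answer; this only illustrates the recursion, not the set-theoretic construction of $A$):

```python
def s(n):
    # successor function
    return n + 1

def add(a, b):
    # A(a, 1) = s(a)   and   A(a, s(b)) = s(A(a, b))
    if b == 1:
        return s(a)
    return s(add(a, b - 1))
```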
4,563,725
<p>Is it possible for two non-real complex numbers $a$ and $b$ to be squares of each other (<span class="math-container">$a^2=b$</span> and <span class="math-container">$b^2=a$</span>)?</p> <p>My answer was that it is not possible, because for <span class="math-container">$a^2$</span> to be equal to <span class="math-container">$b$</span> means that the argument of <span class="math-container">$b$</span> is twice arg(a), and for <span class="math-container">$b^2$</span> to be equal to <span class="math-container">$a$</span> means that arg(a) = 2·arg(b); but the given answer is that it is possible.</p> <p>How is it possible when arg(b) = 2·arg(a) and arg(a) = 2·arg(b) seem to contradict each other?</p>
Misha Lavrov
383,078
<p>The arguments <span class="math-container">$\arg(a)$</span> and <span class="math-container">$\arg(b)$</span> are only defined up to a multiple of <span class="math-container">$2\pi$</span>. Or, if we require that <span class="math-container">$\arg(z) \in (-\pi,\pi]$</span> for all <span class="math-container">$z$</span>, then it's not necessarily the case that <span class="math-container">$\arg$</span> is multiplicative: we might end up having to add or subtract <span class="math-container">$2\pi$</span>.</p> <p>If we go with the first option, we know that <span class="math-container">$\arg(a) \equiv 2\arg(b) \pmod{2\pi}$</span> and <span class="math-container">$\arg(b) \equiv 2\arg(a) \pmod{2\pi}$</span>, but this is not a problem. Substituting, we get <span class="math-container">$\arg(a) \equiv 4 \arg(a) \pmod{2\pi}$</span>, or <span class="math-container">$3\arg(a) \equiv 0 \pmod{2\pi}$</span>; this is possible and requires <span class="math-container">$\arg(a)$</span> to be a multiple of <span class="math-container">$\frac{2\pi}{3}$</span>.</p> <p>Or, if we go with the second option, we should consider alternatives to <span class="math-container">$\arg(a) = 2\arg(b)$</span> and <span class="math-container">$\arg(b) = 2\arg(a)$</span>. For example, if we have <span class="math-container">$\arg(a) = 2\arg(b) + 2\pi$</span> and <span class="math-container">$\arg(b) = 2\arg(a) - 2\pi$</span>, then we get <span class="math-container">$\arg(a) = 4\arg(a) - 2\pi$</span>, which leads to <span class="math-container">$\arg(a) = \frac{2\pi}{3}$</span> as the answer.</p>
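Concretely, taking $\arg(a)=\frac{2\pi}{3}$ works: with $a=e^{2\pi i/3}$ and $b=a^2$ we have $b^2=a^4=a$, since $a^3=1$. A quick numerical check:

```python
import cmath

a = cmath.exp(2j * cmath.pi / 3)   # argument 2*pi/3, a non-real cube root of 1
b = a * a

err_ab = abs(a * a - b)            # a^2 = b by construction
err_ba = abs(b * b - a)            # b^2 = a^4 = a since a^3 = 1
```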
369,435
<p>My task is as in the topic, I'm given the function $$f(x)=\frac{1}{1+x+x^2+x^3}$$ My solution is the following (when $|x|&lt;1$):$$\frac{1}{1+x+x^2+x^3}=\frac{1}{(x+1)(x^2+1)}=\frac{1}{1-(-x)}\cdot\frac{1}{1-(-x^2)}=$$$$=\sum_{k=0}^{\infty}(-x)^k\cdot \sum_{k=0}^{\infty}(-x^2)^k$$ Now I try to calculate it the following way:</p> <p>\begin{align} &amp; {}\qquad \sum_{k=0}^{\infty}(-x)^k\cdot \sum_{k=0}^{\infty}(-x^2)^k \\[8pt] &amp; =(-x+x^2-x^3+x^4-x^5+x^6-x^7+x^8-x^9+\cdots)\cdot(-x^2+x^4-x^6+x^8-x^{10}+\cdots) \\[8pt] &amp; =x^3-x^4+0 \cdot x^5+0 \cdot x^6 +x^7-x^8+0 \cdot x^9 +0 \cdot x^{10} +x^{11}+\cdots \end{align}</p> <p>And now I conclude that it is equal to $\sum_{k=0}^{\infty}(x^{3+4 \cdot k}-x^{4+4 \cdot k})$ ($|x|&lt;1$) Is it correct? Are there any faster ways to solve that types of tasks? Any hints will be appreciated, thanks in advance.</p>
Fly by Night
38,495
<p>If I recall correctly, if you have two power series, based at the same point, then the radius of convergence of their product is at least the smaller of the two radii of convergence. Generally it will be the smaller of the two radii of convergence. The "at least" part comes from the possibility of numerators in one cancelling with denominators in another. This isn't the case in this problem.</p> <p>Note that $1+x+x^2+x^3 \equiv (1+x)(1+x^2)$, and so:</p> <p>$$\frac{1}{1+x+x^2+x^3} \equiv \frac{1}{1+x} \times \frac{1}{1+x^2}$$</p> <p>Both factors on the right come from geometric series with initial terms $1$, and common ratios of $-x$ and $-x^2$ respectively. They converge for $|x|&lt;1$ and $|x^2|&lt;1$ respectively. These both have radius of convergence $\rho=1$, so the radius of convergence of their product is $\rho=1$.</p>
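Whichever closed form one settles on, the series coefficients can be generated mechanically, which makes a useful check on hand computations. A plain-Python long-division sketch (not tied to any particular factorization):

```python
def reciprocal_series(denom, n):
    # First n coefficients of 1/denom(x) as a power series; denom is the
    # coefficient list [c0, c1, ...] with c0 != 0.  Long division:
    # c0*a_k + c1*a_{k-1} + ... must equal 1 for k = 0 and 0 otherwise.
    out = []
    for k in range(n):
        acc = 1.0 if k == 0 else 0.0
        for j in range(1, min(k, len(denom) - 1) + 1):
            acc -= denom[j] * out[k - j]
        out.append(acc / denom[0])
    return out

coeffs = reciprocal_series([1.0, 1.0, 1.0, 1.0], 12)   # 1/(1 + x + x^2 + x^3)
```

For this denominator the pattern $1,-1,0,0$ repeats with period $4$, consistent with $\frac{1}{1+x+x^2+x^3}=\frac{1-x}{1-x^4}$.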
1,632,677
<p>How can I arrive at a series expansion for $$\frac{1}{\sqrt{x^3-1}}$$ at $x \to 1^{+}$? Experimentation with WolframAlpha shows that all expansions of things like $$\frac{1}{\sqrt{x^y - 1}}$$ have $$\frac{1}{\sqrt{y}\sqrt{x-1}}$$ as the first term, which I don’t know how to obtain.</p>
marty cohen
13,079
<p>I like to expand around zero. So, in $\frac{1}{\sqrt{x^3-1}} $, let $x = 1+y$. Then</p> <p>$\begin{array}\\ x^3-1 &amp;=(1+y)^3-1\\ &amp;=1+3y+3y^2+y^3-1\\ &amp;=3y+3y^2+y^3\\ &amp;=y(3+3y+y^2)\\ \end{array} $</p> <p>so</p> <p>$\begin{array}\\ \frac{1}{\sqrt{x^3-1}} &amp;=\frac{1}{\sqrt{y(3+3y+y^2)}}\\ &amp;=\frac1{\sqrt{y}}\frac{1}{\sqrt{3+3y+y^2}}\\ &amp;=\frac1{\sqrt{3y}}\frac{1}{\sqrt{1+y+y^2/3}}\\ &amp;=\frac1{\sqrt{3y}}(1+y+y^2/3)^{-1/2}\\ \end{array} $</p> <p>Now apply the generalized binomial theorem in the form $(1+a)^b =\sum_{n=0}^{whatever}\binom{b}{n}a^n $ with $a=y+y^2/3$ and $b = -\frac12$.</p> <p>Remember that $\binom{b}{n} =\frac{\prod_{k=0}^{n-1} (b-k)}{n!} $, so that $\binom{-\frac12}{0} =1 $, $\binom{-\frac12}{1} =-\frac12 $, and $\binom{-\frac12}{2} =\frac12(-\frac12)(-\frac12-1) =\frac12(-\frac12)(-\frac32) =\frac38 $.</p> <p>This will give you the first few terms in terms of $y$. To get them in terms of $x$, replace $y$ by $x-1$ and expand.</p>
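A numerical spot check of the truncated expansion near $x=1$ (plain Python; the first three binomial terms $1-\frac12 a+\frac38 a^2$ with $a=y+y^2/3$):

```python
import math

def exact(x):
    return 1.0 / math.sqrt(x**3 - 1.0)

def approx(x):
    # x = 1 + y:  1/sqrt(3y) * (1 + a)^(-1/2) with a = y + y^2/3,
    # truncated after the a^2 term: 1 - a/2 + 3a^2/8.
    y = x - 1.0
    a = y + y * y / 3.0
    return (1.0 - 0.5 * a + 0.375 * a * a) / math.sqrt(3.0 * y)

rel_close = abs(approx(1.001) / exact(1.001) - 1.0)
rel_far = abs(approx(1.01) / exact(1.01) - 1.0)
```

The relative error shrinks roughly like $y^3$ as $x\to 1^+$, as expected for a truncation after the $a^2$ term.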
96,952
<p>How would I go about solving the following for <code>c</code>?</p> <pre><code>Solve[0 == Sum[(t[i]*m[i] - c*t[i]^2)/s[i]^2, {i, 1, n}], c, Reals] </code></pre> <p>I get the error </p> <blockquote> <p>Solve::nsmet : This system cannot be solved with the methods available to Solve</p> </blockquote> <p>but it is fairly straight forward to get that </p> <pre><code>c = Sum[m[i]*t[i]/s[i]^2, {i, 1, n}] / Sum[t[i]^2/s[i]^2, {i, 1, n}] </code></pre> <p>I need to repeat this for a similar equation that is not quite so straight forward to do on paper. I intended to use <em>Mathematica</em> to check my result but since I cannot verify the one I am confident in, I may be out of luck. I am new to <em>Mathematica</em> so I would not be surprised if the problem is my ignorance.</p> <p><strong>Edit:</strong> A friend pointed out that <code>N</code> was a defined function in <em>Mathematica</em> (also mentioned in a comment below). I replaced that with <code>n</code> with the same overall outcome. </p>
A.G.
7,060
<p>If your goal is to check the work you do on paper you may want to go this route :</p> <p>First define</p> <pre><code>c[n_Integer] := Sum[m[i]*t[i]/s[i]^2, {i, 1, n}]/Sum[t[i]^2/s[i]^2, {i, 1, n}]; ShouldBeZero[n_Integer] := Sum[(t[i]*m[i] - c[n]*t[i]^2)/s[i]^2, {i, 1, n}] // FullSimplify </code></pre> <p>then check a few values :</p> <pre><code>Table[ShouldBeZero[n], {n, 1, 20}] (* {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} *) </code></pre>
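If Mathematica is not to hand, the same identity can be spot-checked numerically. A plain-Python analogue with arbitrary sample values:

```python
def best_c(m, t, s):
    # Closed form from the question:
    # c = sum(m_i t_i / s_i^2) / sum(t_i^2 / s_i^2)
    num = sum(mi * ti / si**2 for mi, ti, si in zip(m, t, s))
    den = sum(ti**2 / si**2 for ti, si in zip(t, s))
    return num / den

m = [1.0, 2.5, -0.5, 4.0]
t = [0.3, 1.7, 2.2, 0.9]
s = [1.0, 0.5, 2.0, 1.5]

c = best_c(m, t, s)
# The defining equation: this weighted residual should vanish at c.
residual = sum((ti * mi - c * ti**2) / si**2 for mi, ti, si in zip(m, t, s))
```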
62,444
<p>I have not seen this question treated in the literature. Does anyone have more information? There are several OEIS sequences (<a href="http://oeis.org/A097056" rel="noreferrer">A097056</a>, <a href="http://oeis.org/A117896" rel="noreferrer">A117896</a>, <a href="http://oeis.org/A117934" rel="noreferrer">A117934</a>) dealing with this question, but no answers.</p>
André Henriques
5,690
<p>From <a href="http://oeis.org/A097056/internal" rel="nofollow">http://oeis.org/A097056/internal</a>:<br> <i>"Empirically, there seem to be no intervals between consecutive squares containing more than two nonsquare perfect powers."</i></p> <p>is probably the closest you'll ever get to an answer.</p>
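That empirical statement is easy to reproduce on a small scale. A plain-Python scan of the intervals up to $10^6$ (far smaller than the OEIS searches, but enough to find pairs such as $5^3=125$ and $2^7=128$ between $11^2$ and $12^2$):

```python
import math

def max_nonsquare_powers_between_squares(limit):
    # Collect perfect powers m^k <= limit (k >= 3, m >= 2) that are not squares.
    powers = set()
    for k in range(3, limit.bit_length() + 1):
        m = 2
        while m**k <= limit:
            p = m**k
            if math.isqrt(p) ** 2 != p:
                powers.add(p)
            m += 1
    # Largest number of them found strictly between consecutive squares.
    best = 0
    for n in range(1, math.isqrt(limit)):
        count = sum(1 for p in powers if n * n < p < (n + 1) * (n + 1))
        best = max(best, count)
    return best

result = max_nonsquare_powers_between_squares(10**6)
```

In this range the maximum is two, matching the empirical observation quoted above.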
62,444
<p>I have not seen this question treated in the literature. Does anyone have more information? There are several OEIS sequences (<a href="http://oeis.org/A097056" rel="noreferrer">A097056</a>, <a href="http://oeis.org/A117896" rel="noreferrer">A117896</a>, <a href="http://oeis.org/A117934" rel="noreferrer">A117934</a>) dealing with this question, but no answers.</p>
Gjergji Zaimi
2,384
<p>If you let $a_1,a_2,\dots$ represent the sequence $1,4,8,9,\dots$ of perfect powers, it is a conjecture of Erdos that $$a_{n+1}-a_n &gt;c'n^c.$$ The weaker conjecture $$\liminf a_{n+1}-a_n =\infty$$ is known as Pillai's conjecture. This seems to be much beyond the reach of the strongest methods available today. Note, however, that your conjecture in the OP implies $$a_{n+2}-a_n&gt;cn^{1/2}$$ which is a very strong, related result. To give an idea of how hard it is to prove that consecutive powers tend to space out more and more, consider how hard it was to prove Catalan's conjecture (consecutive powers can not be 1 apart, except for 8 and 9) or even the still open Hall's conjecture which asserts that if $k=x^3-y^2\neq 0$ then $$k&gt;\max\{x^3,y^2\}^{1/6}-o(1)$$ which is related to some abc type conjectures.</p> <p>Now on the specific problem you mention, there is some work done on powers in relatively short intervals. Start with </p> <blockquote> <p>J. H. Loxton, Some problems involving powers of integers, Acta Arith., 46 (1986), pp. 113–123</p> </blockquote> <p>where it is proved that the interval $[N,N+\sqrt{N}]$ contains at most $$\exp(40(\log\log N \log\log\log N)^{1/2})$$ integer powers. This is far from the bound of $3$ but it is a nice result. There was a gap in the paper above, but it was corrected by two different authors, see</p> <blockquote> <p>D. J. Bernstein, Detecting perfect powers in essentially linear time, Math. Comput., 67 (1998), pp. 1253–1283</p> </blockquote> <p>and </p> <blockquote> <p>C. L. Stewart, On heights of multiplicatively dependent algebraic numbers, Acta Arith., 133 (2008), pp. 97–108</p> <p>C. L. Stewart, On sets of integers whose shifted products are powers, J. Comb. Theory, Ser. A, 115 (2008), pp. 
662–673</p> </blockquote> <p>In the first paper by Stewart above, it is conjectured that there are infinitely many integers $N$ for which the interval $[N,N +\sqrt{N}]$ contains three integers one of which is a square, one a cube and one a fifth power. He also conjectures that for sufficiently large $N$ this interval does not contain four distinct powers, and that if it contains three distinct powers then one of them is a square, one is a cube and the third is a fifth power, which is a refined version of your conjecture.</p> <p>For more references and a longer survey see Waldschmidt's <a href="http://www.math.jussieu.fr/~miw/articles/pdf/PerfectPowers.pdf" rel="noreferrer">"Perfect Powers: Pillai’s works and their developments"</a>.</p>
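As an editorial aside (not from the original answer), the perfect powers in a short interval $[N, N+\sqrt{N}]$ are easy to enumerate by brute force, which makes the counting results above concrete; the function below is an illustrative sketch, not taken from any of the cited papers:

```python
from math import isqrt

def perfect_powers_in(lo, hi):
    """All perfect powers m^k (m >= 2, k >= 2) in the interval [lo, hi]."""
    found = set()
    k = 2
    while 2 ** k <= hi:  # exponents beyond log2(hi) contribute nothing
        m = 2
        while m ** k <= hi:
            if m ** k >= lo:
                found.add(m ** k)
            m += 1
        k += 1
    return sorted(found)

# The interval [N, N + sqrt(N)] for N = 121 contains three perfect powers:
# 121 = 11^2, 125 = 5^3, 128 = 2^7.
N = 121
print(perfect_powers_in(N, N + isqrt(N)))
```

For instance, $[121, 132]$ contains the three powers $121=11^2$, $125=5^3$ and $128=2^7$.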
62,444
<p>I have not seen this question treated in the literature. Does anyone have more information? There are several OEIS sequences (<a href="http://oeis.org/A097056" rel="noreferrer">A097056</a>, <a href="http://oeis.org/A117896" rel="noreferrer">A117896</a>, <a href="http://oeis.org/A117934" rel="noreferrer">A117934</a>) dealing with this question, but no answers.</p>
Felipe Voloch
2,290
<p>It follows from the ABC conjecture that there are only finitely many. First, show (unconditionally) that for all but finitely many such triples of perfect powers, at most one of them is a cube. So, the others are fifth or higher powers, say $n^2&lt;A&lt;B&lt;(n+1)^2$. Now, apply ABC to $B-A=C$. Then $B&gt;n^2, C = O(n)$ and the radicals of $A$ and $B$ are $O(n^{2/5})$, so the ABC conjecture bounds $n$.</p>
2,227,135
<p>I have this information from my notes:<span class="math-container">$\def\rk{\operatorname{rank}}$</span></p> <p>Let <span class="math-container">$A ∈ \mathbb{R}^{m\times n}$</span>. Then </p> <ul> <li><span class="math-container">$\rk(A) = n$</span></li> <li><span class="math-container">$\rk(A^TA) = n$</span></li> <li><span class="math-container">$A^TA$</span> is invertible.</li> </ul> <p>In my case, <span class="math-container">$n = 1$</span>, so I would need to show <span class="math-container">$\rk(vv^T) = \rk(v^Tv) = \rk(v) = 1$</span>. Suppose <span class="math-container">$A^TAx = 0$</span>. Because <span class="math-container">$A^TA$</span> is invertible, I can multiply both sides by its inverse to get <span class="math-container">$x = 0$</span>, meaning the nullity of <span class="math-container">$A^TA$</span> is <span class="math-container">$0$</span>. Can I apply the same logic to <span class="math-container">$AA^T$</span>? i.e. I have some matrix <span class="math-container">$B = A^T$</span>, so <span class="math-container">$B^TB$</span> = <span class="math-container">$AA^T $</span> has a nullity of <span class="math-container">$0$</span> (and therefore they have the same rank by the rank-nullity theorem)?</p>
Harambe
357,206
<p>You don't need to use any machinery to prove this. The <strong>rank</strong> of a matrix is the <strong>dimension of the column space</strong> of the matrix. It's easy to see that \begin{align*} vv^T = \begin{pmatrix} v_1v_1 &amp; v_1v_2 &amp; \cdots &amp; v_1v_n\\ v_2v_1 &amp; v_2v_2 &amp; \cdots &amp; v_2v_n\\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots\\ v_nv_1 &amp; v_nv_2 &amp; \cdots &amp; v_nv_n \end{pmatrix}, \mathrm{where}\ v = \pmatrix{v_1\\v_2\\\vdots\\v_n} \end{align*} Then the column space of $vv^T$ is the span of the column vectors, i.e. \begin{align*} \mathrm{span} \begin{Bmatrix} \begin{pmatrix} v_1v_1\\ v_2v_1\\ \vdots\\ v_nv_1\\ \end{pmatrix}, \begin{pmatrix} v_1v_2\\ v_2v_2\\ \vdots\\ v_nv_2\\ \end{pmatrix}, \cdots, \begin{pmatrix} v_1v_n\\ v_2v_n\\ \vdots\\ v_nv_n\\ \end{pmatrix} \end{Bmatrix} \end{align*} But clearly each column vector is a scalar multiple of $v$, for example the first column is equal to $v_1v$ where $v_1$ is a scalar. Thus the span of all of these vectors is just equal to the span of $v$, so the dimension of the column space is $1$.</p> <p>Hence rank$(vv^T)= 1$.</p>
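As a quick sanity check (my addition, not part of the answer above, and assuming $v \neq 0$), one can verify that every column of $vv^T$ is a scalar multiple of $v$; here in plain Python for a hypothetical sample vector:

```python
# Sample vector (hypothetical, for illustration); the argument needs v != 0.
v = [3, -1, 4, 2]

# Build the outer product: (v v^T)[i][j] = v_i * v_j.
outer = [[vi * vj for vj in v] for vi in v]

# Column j of v v^T equals the scalar v_j times v, so the column space
# is span{v} and the rank is 1.
n = len(v)
for j in range(n):
    col = [outer[i][j] for i in range(n)]
    assert col == [v[j] * vi for vi in v]
```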
2,830,926
<p>I am implementing a program in C which requires that given 4 points should be arranged such that they form a quadrilateral.(assume no three are collinear)<br> Currently , I am ordering the points in the order of their slope with respect to origin.<br> See <a href="https://ibb.co/cfDHeo" rel="nofollow noreferrer">https://ibb.co/cfDHeo</a> .<br> In this case a,b,c,d are in descending order of slope but on joining a to b , b to c , c to d , d to a - I don't get a quadrilateral .<br> So my method fails in such cases.<br></p> <p>I need suggestion.</p> <p>Thanks.</p>
heropup
118,193
<p>Another way to do this is to observe that there is a relationship between the moment generating function (MGF) and the <strong>probability generating function</strong> (PGF); namely, $$m_X(\log u) = \operatorname{E}[e^{X \log u}] = \operatorname{E}[e^{\log u^X}] = \operatorname{E}[u^X] = P_X(u).$$ Furthermore, the PGF has the property that $$\operatorname{E}[X(X-1)\ldots(X-n+1)] = \frac{d^n}{du^n}\left[P_X(u)\right]_{u=1}.$$ So for the negative binomial case, $$P_X(u) = \left(\frac{p}{1-(1-p)u}\right)^r,$$ and the first derivative is $$\frac{dP}{du} = p^r (-r)(-(1-p))(1-(1-p)u)^{-r-1} = \frac{r(1-p)}{p} \left(\frac{p}{1-(1-p)u}\right)^{r+1}.$$ Evaluating at $u = 1$ gives $$\operatorname{E}[X] = \frac{r(1-p)}{p}$$ as desired. But now note $$\frac{dP}{du} = \operatorname{E}[X] P_{X^*}(u),$$ where $X^* \sim \operatorname{NegBinomial}(r+1,p)$. So the second derivative is trivial: $$\frac{d^2 P}{du^2} = \operatorname{E}[X] \operatorname{E}[X^*] P_{X^{**}}(u),$$ where $X^{**} \sim \operatorname{NegBinomial}(r+2,p)$, and the pattern continues, letting us conclude in general that $$\frac{d^n}{du^n}\left[P_X(u)\right]_{u=1} = \prod_{k=0}^{n-1} (r+k) \frac{1-p}{p} = \left(\frac{1-p}{p}\right)^n \frac{(r+n-1)!}{(r-1)!},$$ which is rather nice since now $$\operatorname{E}\left[\binom{X}{n}\right] = \binom{r+n-1}{n} \left(\frac{1-p}{p}\right)^n.$$ This is quite above and beyond the original request, but we can now easily compute the variance: $$\begin{align*} \operatorname{Var}[X] &amp;= \operatorname{E}[X^2]-\operatorname{E}[X]^2 \\ &amp;= \operatorname{E}[X(X-1)+X] - \operatorname{E}[X]^2 \\ &amp;= \operatorname{E}[X(X-1)]+\operatorname{E}[X] - \operatorname{E}[X]^2 \\ &amp;= \operatorname{E}[X](\operatorname{E}[X^*] + 1 - \operatorname{E}[X]) \\ &amp;= \frac{r(1-p)}{p} \left( \frac{(r+1)(1-p)}{p} + 1 - \frac{r(1-p)}{p}\right) \\ &amp;= \frac{r(1-p)}{p^2} ((r+1)(1-p) - r(1-p) + p) \\ &amp;= \frac{r(1-p)}{p^2} (1-p + p) \\ &amp;= \frac{r(1-p)}{p^2}. \end{align*}$$</p>
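As an editorial check (not part of the original derivation), the closed forms for the mean and variance can be confirmed numerically by truncating the negative binomial pmf $P(X=k)=\binom{k+r-1}{k}p^r(1-p)^k$; the parameter values below are arbitrary:

```python
from math import comb

def negbin_pmf(k, r, p):
    """P(X = k): probability of k failures before the r-th success."""
    return comb(k + r - 1, k) * p**r * (1 - p) ** k

r, p = 5, 0.4  # arbitrary illustrative parameters

# Truncate the infinite sums far enough out that the tail is negligible.
ks = range(2000)
mean = sum(k * negbin_pmf(k, r, p) for k in ks)
second_moment = sum(k * k * negbin_pmf(k, r, p) for k in ks)
var = second_moment - mean**2

assert abs(mean - r * (1 - p) / p) < 1e-9    # E[X]   = r(1-p)/p
assert abs(var - r * (1 - p) / p**2) < 1e-9  # Var[X] = r(1-p)/p^2
```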
2,727,974
<blockquote> <p>I am in need of this important inequality $$\log(x+1)\leqslant x$$.</p> </blockquote> <p>I understand that $\log(x)\leqslant x$. For $c\in\mathbb{R}$. However is it true that $\log(x+c)\leqslant x$?</p> <p>It is hard to accept because it seems like $c$ cannot be arbitrary. I have tried to prove this inequality:</p> <p>$\log(x+c)\leqslant x\iff x+c\leqslant e^x$</p> <p>It is true that $f(x)=x$ grows much faster than $g(x)=\log(x+c)$, since the $\frac{df(x)}{dx}=1\geqslant \frac{1}{x+c}=\frac{dg(x)}{dx}$ </p> <p><strong>Question:</strong></p> <p>Is the derivative argument enough to prove the more general inequality $\log(x+c)\leqslant x$?</p> <p>Thanks in advance!</p>
Hagen von Eitzen
39,174
<p><strong>Hint:</strong> The claim is equivalent to $$ e^x\ge 1+x$$</p>
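Filling in the hint (this sketch is my addition): $e^x \ge 1+x$ follows from a one-line minimization, and substituting back gives the logarithm inequality.

```latex
% Let g(x) = e^x - 1 - x. Then g'(x) = e^x - 1, which is negative for
% x < 0 and positive for x > 0, so g attains its global minimum at x = 0,
% where g(0) = 0. Hence
\[
  e^x \ge 1 + x \quad\text{for all } x \in \mathbb{R},
\]
% and replacing x by \log(1+x) (valid for x > -1) yields
\[
  1 + x = e^{\log(1+x)} \ge 1 + \log(1+x),
  \qquad\text{i.e.}\qquad \log(1+x) \le x .
\]
```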
1,900,788
<blockquote> <p>The probability of rain in Greg's area on Tuesday is $0.3$. The probability that Greg's teacher will give him a pop quiz in Tuesday is $0.2$. The events occur independently of each other. What is the probability of neither events occur?</p> </blockquote> <p>My approach: </p> <p>Probability of rain or quiz or both = $0.3+0.2= 0.5$</p> <p>So, probability of neither= $1-0.5$ = $0.5$</p> <p>Question: Actual probability of this problem is $0.7\cdot0.8$ = $0.56$. But, I don't understand what is the mistake in above approach? </p>
Reese Johnston
351,805
<p>Your approach double-counts some possibilities. The $0.3$ covers all possibility of rain - so, it includes both "rain and quiz" and "rain and no quiz". The $0.2$ covers all possibility of a quiz, so it includes both "rain and quiz" and "quiz and no rain". So $0.3 + 0.2$ covers "rain and quiz", "rain and no quiz", "rain and quiz" <em>again</em>, and "quiz and no rain". To cut out one of the countings of "rain and quiz", we need to subtract the probability of that event, which is $0.3 \cdot 0.2 = 0.06$. So the probability of "rain or quiz or both" is $0.2 + 0.3 - 0.06 = 0.44$. The probability of neither is then $1 - 0.44 = 0.56$.</p>
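As a small numerical footnote (mine, not the answerer's), the inclusion-exclusion computation and the direct complement product agree:

```python
# Probabilities from the problem statement.
p_rain, p_quiz = 0.3, 0.2

p_both = p_rain * p_quiz                   # independence: 0.06
p_either = p_rain + p_quiz - p_both        # inclusion-exclusion: 0.44
p_neither_a = 1 - p_either                 # complement of "rain or quiz"
p_neither_b = (1 - p_rain) * (1 - p_quiz)  # directly: no rain AND no quiz

assert abs(p_both - 0.06) < 1e-12
assert abs(p_either - 0.44) < 1e-12
assert abs(p_neither_a - 0.56) < 1e-12
assert abs(p_neither_a - p_neither_b) < 1e-12
```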
72,012
<p>I'm trying to discretize a region with "pointy" boundaries to study dielectric breaking on electrodes with pointy surfaces. So far I've tried 3 types of boundaries, but the meshing functions stall indefinitely. I've let it run for several hours and it doesn't complete. Seems strange that it would take so long</p> <pre><code>ParametricPlot[{-Cos[u] (1.2 - Cos[(u - Pi)/2]^6) (0.2 + Cos[10 u]^10), Sin[u] (1.2 - Cos[(u - Pi)/2]^6) (0.2 + Cos[10 u]^10)}, {u, 0, 2 Pi}, PlotRange -&gt; All] ir = ParametricRegion[{{r Cos[u], r Sin[u]}, 0 &lt;= u &lt;= 2 Pi &amp;&amp; 0 &lt;= r &lt;= (0.5 + Cos[(u - (Pi/2))/2]^8 Cos[14 (u - (Pi/2))]^8)}, {r, u}] ird = DiscretizeRegion[ir, MaxCellMeasure -&gt; 0.02] (* never completes *) Needs["NDSolve`FEM`"] m = ToElementMesh[ir, "BoundaryMeshGenerator" -&gt; {"RegionPlot", "SamplePoints" -&gt; 50}, "MeshOrder" -&gt; 1] (* never completes *) </code></pre> <p>Another attempt:</p> <pre><code>ParametricPlot[{ Cos[u] Max[ 0.1, -Cos[u]] (1.2 - Abs[Sin[10 Abs[((u - Pi))^(1/2)]]^(1/3)]), (1.2 - Abs[Sin[10 Abs[((u - Pi))^(1/2)]]^(1/3)]) Sin[u] Max[ 0.1, -Cos[u]]}, {u, 0, 2 Pi}, PlotRange -&gt; All] ir2 = ParametricRegion[{{r Cos[u], r Sin[u]}, 0 &lt;= u &lt;= 2 Pi &amp;&amp; 0 &lt;= r &lt;= Max[0.1, -Cos[u]] (1.2 - Abs[Sin[10 Abs[((u - Pi))^(1/2)]]^(1/3)])}, {r, u}] m = ToElementMesh[ir2, "BoundaryMeshGenerator" -&gt; {"RegionPlot", "SamplePoints" -&gt; 50}, "MeshOrder" -&gt; 1] (* never completes *) </code></pre> <p>Last attempt:</p> <pre><code>ParametricPlot[{-Cos[u] (1.2 - Cos[(u - Pi)/2]^6) (0.2 + Cos[10 u]^10), Sin[u] (1.2 - Cos[(u - Pi)/2]^6) (0.2 + Cos[10 u]^10)}, {u, 0, 2 Pi}, PlotRange -&gt; All] ir3 = ParametricRegion[{{r Cos[u], r Sin[u]}, 0 &lt;= u &lt;= 2 Pi &amp;&amp; 0 &lt;= r &lt;= (1.2 - Cos[(u - Pi)/2]^6) (0.2 + Cos[10 u]^10)}, {r, u}] m = ToElementMesh[ir3, "BoundaryMeshGenerator" -&gt; {"RegionPlot", "SamplePoints" -&gt; 50}, "MeshOrder" -&gt; 1] (* never completes *) </code></pre>
Michael E2
4,999
<pre><code>fn = {-Cos[u] (1.2 - Cos[(u - Pi)/2]^6) (0.2 + Cos[10 u]^10), Sin[u] (1.2 - Cos[(u - Pi)/2]^6) (0.2 + Cos[10 u]^10)}; plot = ParametricPlot[fn, {u, 0, 2 Pi}, PlotPoints -&gt; Round[2 Pi (Sqrt@MaxValue[#.# &amp;@D[fn, u], u])/0.2], PlotRange -&gt; All]; Cases[plot, Line[p_] :&gt; Polygon[p], Infinity] // First // DiscretizeRegion </code></pre> <p><img src="https://i.stack.imgur.com/5EjmH.png" alt="Mathematica graphics"></p> <p>Or:</p> <pre><code>DiscretizeGraphics[Show[plot /. Line -&gt; Polygon, PlotRange -&gt; All]] </code></pre> <p><img src="https://i.stack.imgur.com/86jz2.png" alt="Mathematica graphics"></p> <p>I get an error if I don't reset <code>PlotRange</code>. Perhaps a bug.</p>
2,078,142
<p>Let $X$ be locally compact metric space which is $\sigma$-compact also. I want to show that $X=\bigcup\limits_{n=1}^{\infty}K_n$, where $K_n$'s are compact subsets of $X$ satisfying $K_n\subset K_{n+1}^{0}$ for all $n\in \mathbb N$. </p> <p>I know that since $X$ is $\sigma-$compact, therefore there exists a sequence of compact subsets $(C_n)$ of $X$ such that $X=\bigcup\limits_{n=1}^{\infty}C_n$. I am not getting any idea how to construct $K_n$'s. Please help!</p>
bof
111,012
<p>Since $X$ is locally compact, every compact subset of $X$ is contained in an open set whose closure is compact.</p> <p>Since $X$ is $\sigma$-compact, there are compact subsets $C_n$ such that $X=\bigcup_{n=1}^\infty C_n.$</p> <p>Since $C_1$ is compact, there is an open set $U_1$ such that $\overline{U_1}$ is compact and $C_1\subseteq U_1.$</p> <p>Since $\overline{U_1}\cup C_2$ is compact, there is an open set $U_2$ such that $\overline{U_2}$ is compact and $\overline{U_1}\cup C_2\subseteq U_2.$</p> <p>Continue in this manner. Let $K_n=\overline{U_n}.$ Then $K_n$ is compact, $\bigcup_{n=1}^\infty K_n=\bigcup_{n=1}^\infty \overline{U_n}\supseteq\bigcup_{n=1}^\infty U_n\supseteq\bigcup_{n=1}^\infty C_n=X,$ and $K_n=\overline{U_n}\subseteq U_{n+1}\subseteq K_{n+1}^\circ.$</p>
195,556
<p>My math background is very narrow. I've mostly read logic, recursive function theory, and set theory.</p> <p>In recursive function theory one studies <a href="http://en.wikipedia.org/wiki/Partial_functions">partial functions</a> on the set of natural numbers. </p> <p>Are there other areas of mathematics in which (non-total) partial functions are important? If so, would someone please supply some references?</p> <p>Thanks!</p>
William
13,579
<p>I think that the reason you have this particular interest in partial functions, or have even heard the term, is mostly because you have read recursion theory. </p> <p>To elaborate, in most other areas of mathematics, functions are by definition totally defined on their domain. To say that $f : X \rightarrow Y$ but $f$ is not defined on all of $X$ is not a usual mathematical practice. Most people would restrict the domain so that $f$ is totally defined. For example, consider the function $f(x) = \frac{1}{x}$ where $x$ comes from some field. As $f : \mathbb{R} \rightarrow \mathbb{R}$, $f$ would be partial. Most people would likely modify this to say $f : \mathbb{R} - \{0\} \rightarrow \mathbb{R}$. In other areas, people would likewise be careful to specify the domain and range. </p> <p>Partial functions do appear in other areas of mathematics. Qiaochu Yuan seems to mention the study of the poles of complex-valued functions. For example, the residue formula gives some useful information about complex-valued functions using the poles. </p> <p>However, in computability theory or recursion theory, there is a useful significance to introducing total and partial functions. In computability theory, you study subsets of the natural numbers and functions defined on these subsets. Because these functions and sets correspond to Turing machines and algorithms, their inputs (for simplicity) can be coded as natural numbers. Specifying the domain of a function, in the context of recursion theory, to be $\omega$ represents the intuitive idea of an algorithm. Moreover, many computer programs, Turing machines, or algorithms used in real practice don't terminate on all inputs. Because recursion theory is the study of the computational aspect of sets, it makes sense to include these sorts of partial functions. </p> <p>The above is an intuitive idea of why recursion theory (the study of computation) should naturally consider partial functions: algorithms naturally don't halt on all inputs. </p> <p>Moreover, partial functions play a major role in recursion theory that is not seen in many other areas of mathematics. The partial functions are essential to computability theory. First, you are aware of the enumeration theorem for partial computable functions (i.e. the universal Turing machine exists). Almost every theorem in computability theory uses this fact. Moreover, it can be proved that there is no effective enumeration of the total computable functions. This enumeration is absolutely necessary in many of the constructions, such as finite injury arguments. Moreover, the c.e. sets play a very important role in computability theory. They are defined to be the domains of the partial computable functions. Many natural problems in mathematics, such as the Halting Problem, diophantine equations, etc., have a corresponding set that is c.e. </p> <p>The term partial function is more ubiquitous in computability theory because the most fundamental objects of the theory, the enumeration theorem and the c.e. sets, are naturally expressed using them.</p>
1,691,825
<p>My textbook goes from</p> <p>$$\frac{\left( \frac{6\ln^22x}{2x} \right)}{\left(\frac{3}{2\sqrt{x}}\right)}$$</p> <p>to:</p> <p>$$\frac{6\ln^22x}{3\sqrt{x}}$$</p> <p>I don't see how this is right. Could anyone explain?</p>
Michael Hardy
11,667
<p>$$ \frac{\left( \frac{6\ln^22x}{2x} \right)}{\left(\frac{3}{2\sqrt{x}}\right)} = \frac{6\ln^2 2x}{2x} \cdot \frac{2\sqrt x}{3} $$</p> <p>Then use $\dfrac 6 3 = 2$ and $\dfrac {\sqrt x} x = \dfrac{\sqrt x}{\sqrt x \cdot\sqrt x} = \dfrac 1 {\sqrt x}$.</p>
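As a quick numerical cross-check (my addition), both forms evaluate identically for sample values of $x>0$, each equal to $\frac{2\ln^2 2x}{\sqrt x}$:

```python
from math import log, sqrt

# Spot-check that the original compound fraction and the simplified
# form agree at a few sample points x > 0.
for x in (0.5, 1.0, 3.7, 10.0):
    lhs = (6 * log(2 * x) ** 2 / (2 * x)) / (3 / (2 * sqrt(x)))
    rhs = 6 * log(2 * x) ** 2 / (3 * sqrt(x))
    assert abs(lhs - rhs) < 1e-12 * max(1.0, abs(rhs))
```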
1,691,825
<p>My textbook goes from</p> <p>$$\frac{\left( \frac{6\ln^22x}{2x} \right)}{\left(\frac{3}{2\sqrt{x}}\right)}$$</p> <p>to:</p> <p>$$\frac{6\ln^22x}{3\sqrt{x}}$$</p> <p>I don't see how this is right. Could anyone explain?</p>
John_dydx
82,134
<p>The textbook has <a href="http://www.bbc.co.uk/education/guides/z7fbkqt/revision/2" rel="nofollow">rationalised the denominator</a> for $\frac{3}{2\sqrt{x}}$ before simplifying further </p> <p>i.e.</p> <p>$$\frac{3}{2\sqrt{x}} = \frac{3\times \sqrt{x}}{2\sqrt{x}\times \sqrt{x}}= \frac{3\sqrt{x}}{2x}$$</p> <p>Now Simplify:</p> <p>$$ \frac{\left(\frac{6\ln^2 2x}{2x}\right)}{\left(\frac{3}{2\sqrt{x}}\right)}= \frac{\left(\frac{6\ln^2 2x}{2x}\right)}{\left(\frac{3\sqrt{x}}{2x}\right)} = \frac{6 \ln^2 2x}{3\sqrt{x}}$$</p>
732,334
<p>How can we solve this equation? $x^4-8x^3+24x^2-32x+16=0.$ </p>
Community
-1
<p>To start, this is a <strong>quartic equation</strong>, and the left-hand side turns out to be a perfect fourth power.</p> <p>First, factor the polynomial. $$x^4 - 2x^3 - 6x^3 + 12x^2 + 12x^2 - 24x - 8x + 16 = 0$$ $$x^3(x - 2) - 6x^2(x - 2) + 12x(x - 2) - 8(x - 2) = 0$$ $$(x^3 - 6x^2 + 12x - 8)(x - 2) = 0$$</p> <p>Next, recognize the binomial expansion pattern.</p> <p>$$(a + b)^2 = a^2 + 2ab + b^2$$ $$(a + b)^3 = a^3 + 3a^2 b + 3ab^2 + b^3$$ $$(a + b)^4 = a^4 + 4a^3b + 6a^2 b^2 + 4ab^3 + b^4$$ $$x^4 + 4 \times x^3 \times (-2) + 6 \times x^2 \times (-2)^2 + 4 \times x \times (-2)^3 + (-2)^4 = 0$$ </p> <p>So $\mathbf{(x - 2)^4 = 0}$, and $x = 2$ is the (fourfold) root. </p>
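A quick verification (editorial addition) that the left-hand side really is $(x-2)^4$, using the binomial coefficients:

```python
from math import comb

# Expand (x - 2)^4 via the binomial theorem and compare coefficients
# with x^4 - 8x^3 + 24x^2 - 32x + 16 (listed from x^4 down to the constant).
coeffs = [comb(4, k) * (-2) ** k for k in range(5)]
assert coeffs == [1, -8, 24, -32, 16]

# The quartic therefore vanishes only at x = 2:
p = lambda x: x**4 - 8 * x**3 + 24 * x**2 - 32 * x + 16
assert p(2) == 0
assert all(p(x) > 0 for x in (-1, 0, 1, 3, 5))
```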
2,082,815
<p>Find the $100^{th}$ power of the matrix $\left( \begin{matrix} 1&amp; 1\\ -2&amp; 4\end{matrix} \right)$.</p> <p>Can you give a hint/method?</p>
gowrath
255,605
<p>You want to find the <a href="https://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix" rel="nofollow noreferrer">eigendecomposition</a> of the matrix and transform it into a diagonal matrix (for which computing powers is easy). </p> <p>The eigenvectors of the matrix $M$ given by </p> <p>$$ M = \left( \begin{matrix} 1&amp; 1\\ -2&amp; 4\end{matrix} \right) $$</p> <p>are:</p> <p>$$ v_1 = \left ( \begin{matrix} 1\\ 2\end{matrix} \right) \qquad v_2 = \left ( \begin{matrix} 1\\ 1\end{matrix} \right) $$</p> <p>giving us the change of basis matrix $B$ as:</p> <p>$$ B = \left( \begin{matrix} 1&amp; 1\\ 2&amp; 1\end{matrix} \right) $$ </p> <p>The matrix $\Gamma = B^{-1}MB$ is diagonal:</p> <p>$$ \begin{align} \Gamma &amp;= B^{-1}MB \\ &amp;= \left( \begin{matrix} -1&amp; 1\\ 2&amp; -1\end{matrix} \right) \left( \begin{matrix} 1&amp; 1\\ -2&amp; 4\end{matrix} \right) \left( \begin{matrix} 1&amp; 1\\ 2&amp; 1\end{matrix} \right) \\ &amp;= \left( \begin{matrix} 3&amp; 0\\ 0&amp; 2\end{matrix} \right) \\ \end{align} $$ </p> <p>Now we compute $\Gamma^{100}$, which is easy since $\Gamma$ is diagonal:</p> <p>$$ \Gamma^{100} = \left( \begin{matrix} 3^{100} &amp; 0\\ 0&amp; 2^{100}\end{matrix} \right) $$ </p> <p>Now we convert it back to the standard basis:</p> <p>$$ \begin{align} M^{100} &amp;= B\Gamma^{100}B^{-1} \\ &amp;= \left( \begin{matrix} 1&amp; 1\\ 2&amp; 1\end{matrix} \right) \left( \begin{matrix} 3^{100}&amp; 0\\ 0 &amp; 2^{100}\end{matrix} \right) \left( \begin{matrix} -1&amp; 1\\ 2&amp; -1\end{matrix} \right) \\ &amp;= \left( \begin{matrix} -3^{100} + 2^{101}&amp; 3^{100} - 2^{100}\\ -2\cdot 3^{100} + 2^{101} &amp; 2\cdot 3^{100} - 2^{100}\end{matrix} \right) \\ \end{align} $$</p>
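As an editorial cross-check (not part of the original answer), the closed form can be confirmed with exact integer arithmetic in Python:

```python
# Verify M^100 against the closed form, entry by entry, with exact ints.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, e):
    R = [[1, 0], [0, 1]]  # 2x2 identity
    for _ in range(e):
        R = matmul(R, A)
    return R

M = [[1, 1], [-2, 4]]
a, b = 3**100, 2**100
expected = [[-a + 2 * b, a - b],
            [-2 * a + 2 * b, 2 * a - b]]
assert matpow(M, 100) == expected
```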
2,983,199
<p>I am looking for an example demonstrating that <span class="math-container">$\lim\inf x_n+\lim \inf y_n&lt;\lim \inf(x_n+y_n)$</span> but for the life of me i can't find one. any suggestions?</p>
Alex Ortiz
305,215
<p>Try some alternating sequences where <span class="math-container">$\liminf x_n = \liminf y_n = -1$</span> but <span class="math-container">$x_n + y_n = 0$</span> for each <span class="math-container">$n$</span>.</p>
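To make the hint concrete (example mine): take $x_n=(-1)^n$ and $y_n=(-1)^{n+1}$; then $\liminf x_n=\liminf y_n=-1$ while $x_n+y_n=0$ for every $n$, so $-2 = \liminf x_n + \liminf y_n < \liminf(x_n+y_n) = 0$. A quick numeric check:

```python
N = 1000
x = [(-1) ** n for n in range(N)]
y = [(-1) ** (n + 1) for n in range(N)]
s = [a + b for a, b in zip(x, y)]

# For these periodic sequences, the liminf is just the minimum of a tail.
tail_min = lambda seq: min(seq[N // 2:])

assert tail_min(x) == -1 and tail_min(y) == -1
assert all(v == 0 for v in s)
assert tail_min(x) + tail_min(y) < tail_min(s)
```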
1,091,653
<p>Is this correct ?</p> <p>$$ \frac{d}{dt} \left( \int_0^t \phi(t)dt \right) = \phi(t) $$</p> <p>If not, how can I recover $$ \phi(t) $$ knowing only $$ \int_0^t \phi(t)dt $$ ?</p>
Scientifica
164,983
<p>Yes, this is correct: it is the fundamental theorem of calculus. Let $\Phi$ be the antiderivative of $\phi$ (i.e. $\Phi'(t)=\phi(t)$) such that $\Phi(0)=0$.</p> <p>We have $\Phi(t)=\int_0^t \phi(s)\,ds$ (using a dummy variable $s$ to avoid clashing with the upper limit $t$), thus $\dfrac{d}{dt}\left(\int_0^t \phi(s)\,ds\right)=\dfrac{d\Phi}{dt}=\phi(t)$.</p>
1,787,460
<blockquote> <p>Suppose the n<em>th</em> pass through a manufacturing process is modelled by the linear equations <span class="math-container">$x_n=A^nx_0$</span>, where <span class="math-container">$x_0$</span> is the initial state of the system and</p> <p><span class="math-container">$$A=\frac{1}{5} \begin{bmatrix} 3 &amp; 2 \\ 2 &amp; 3 \end{bmatrix}$$</span></p> <p>Show that</p> <p><span class="math-container">$$A^n= \begin{bmatrix} \frac{1}{2} &amp; \frac{1}{2} \\ \frac{1}{2} &amp; \frac{1}{2} \end{bmatrix}+\left( \frac{1}{5} \right)^n \begin{bmatrix} \frac{1}{2} &amp; -\frac{1}{2} \\ -\frac{1}{2} &amp; \frac{1}{2} \end{bmatrix}$$</span></p> <p>Then, with the initial state <span class="math-container">$x_0=\begin{bmatrix} p \\ 1-p \end{bmatrix}$</span> , calculate <span class="math-container">$\lim_{n \to \infty} x_n$</span>.</p> </blockquote> <p><a href="https://i.stack.imgur.com/1G6NO.png" rel="nofollow noreferrer">(the original is here)</a></p> <p>I am not sure how to do the proof part</p> <p>The <a href="https://i.stack.imgur.com/FI5XE.png" rel="nofollow noreferrer">hint</a> is:</p> <blockquote> <p>First diagonalize the matrix; eigenvalues are <span class="math-container">$1, \frac{1}{5}$</span>.</p> </blockquote> <p>I understand the hint and have diagonalised it but I don't know how to change it into the given form? After diagonalisation, I just get 3 matrices multiplied together</p>
Dark
208,508
<p><strong>Hint :</strong></p> <p>$A^{n}=(P^{-1} D P)^{n}=P^{-1} D^{n} P$</p>
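Spelling the hint out numerically (this check is my addition, not the answerer's): exact rational arithmetic confirms the claimed closed form $A^n = J + (1/5)^n K$ for small $n$, where $J$ is the all-$\frac12$ matrix and $K$ its signed counterpart; since the $(1/5)^n$ term dies off, $x_n \to (1/2, 1/2)^T$ for any $x_0$ with entries summing to $1$.

```python
from fractions import Fraction as F

A = [[F(3, 5), F(2, 5)], [F(2, 5), F(3, 5)]]
J = [[F(1, 2), F(1, 2)], [F(1, 2), F(1, 2)]]
K = [[F(1, 2), F(-1, 2)], [F(-1, 2), F(1, 2)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[F(1), F(0)], [F(0), F(1)]]  # identity; becomes A^n in the loop
for n in range(1, 9):
    P = matmul(P, A)
    t = F(1, 5) ** n
    assert P == [[J[i][j] + t * K[i][j] for j in range(2)] for i in range(2)]
```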
2,965,756
<p>I want to intuitively say tht the answer is yes, but if it so happens that <span class="math-container">$|\mathbf{a}|=|\mathbf{b}|cos(\theta)$</span>, where <span class="math-container">$\theta$</span> is the angle between the two vectors, then the equation will be satisfied without the two vectors being the same.</p> <p>However, my friend keeps telling me that I'm wrong and that this would contradict the given result in our homework question anyway, which tells us that <span class="math-container">$\mathbf{a}\times\mathbf{b} = \mathbf{a}-\mathbf{b}$</span> and then asks us to prove <span class="math-container">$\mathbf{a}=\mathbf{b}$</span> (the equation in the question was obtained by dotting both sides with <span class="math-container">$\mathbf{a}$</span>. </p> <p>Which one of us is wrong, and why?</p>
YukiJ
380,302
<p>As a simple counter-example see</p> <p><span class="math-container">$$\begin{pmatrix}1\\2\\3\end{pmatrix}.\begin{pmatrix}1\\2\\3\end{pmatrix}=14 = \begin{pmatrix}1\\2\\3\end{pmatrix}.\begin{pmatrix}1\\5\\1\end{pmatrix}$$</span> and clearly </p> <p><span class="math-container">$$\begin{pmatrix}1\\2\\3\end{pmatrix} \not = \begin{pmatrix}1\\5\\1\end{pmatrix}.$$</span></p>
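A one-line check of the counterexample (editorial addition):

```python
# Dot product of two integer vectors.
dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))

a = [1, 2, 3]
b = [1, 5, 1]

assert dot(a, a) == 14 and dot(a, b) == 14  # equal dot products
assert a != b                               # yet the vectors differ
```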
2,947,911
<p>I have a homework with this equation: <span class="math-container">$$(x+2y)dx + ydy = 0$$</span></p> <p>However I have no idea how to solve it. I tried couple things:</p> <ol> <li>Is it linear equation, or does it have "standard form" : <span class="math-container">$\frac{dy}{dx}+\frac{x}{y} + 2 = 0$</span>. Well <span class="math-container">$\frac{1}{y}$</span> is not a standard form.</li> <li>Is it exact? <span class="math-container">$\frac{∂M}{∂y}=2$</span> and <span class="math-container">$\frac{∂N}{∂x}=0$</span>. I tried using <span class="math-container">$\mu(y)=e^{\int\frac{2}{y}dy}=y^2$</span> to make it exact, but that didn't work</li> <li>Is it homogeneous? In theory yes as <span class="math-container">$M(tx,ty)=tx+2ty=t(x+2y)=tM()$</span> and <span class="math-container">$N(tx,ty)=ty=tN()$</span> and I reached up to <span class="math-container">$$x(dx)+2ux(dx)+u^2x(dx)+x^2u(du)=0$$</span> But I can't separate it.</li> </ol> <p>Tried searching online and going with subsitution (method #3) was suggested, but they never went beyond that point. Maybe it's just too late and I'm not thinking straight.</p> <p>If you could point me in the right direction, or list steps instead of giving solution if would be great.</p>
Graham Kemp
135,106
<p>Without a falsum constant, your rule of <em>ex falso quodlibet</em> should look something like <span class="math-container">$\phi, \neg \phi \vdash \psi$</span>. &nbsp; So from the contradictory assumptions, <span class="math-container">$\neg\neg a$</span> and <span class="math-container">$\neg a$</span>, you may immediately derive <span class="math-container">$a$</span>.</p> <p><span class="math-container">$$\def\fitch#1#2{~~\begin{array}{|l}#1\\\hline #2\end{array}}\fitch{\neg\neg a}{\vdots\\\fitch{\neg a}{a \qquad \text{ex falso quodlibet }\neg a, \neg\neg a \vdash a }\\\vdots}$$</span></p> <hr> <p><strong>PS:</strong> If <em>ex falso quodlibet</em> looks like <span class="math-container">$\phi\wedge\neg\phi\vdash\psi$</span>, simply introduce a conjunction <em>then</em> use it.</p> <hr> <p><strong>PPS</strong></p> <p>If <em>ex falso quodlibet</em> is not accepted as fundamental, but <em>disjunctive syllogism</em> is, then introduce a disjunction and use that. <span class="math-container">$$\fitch{\neg\neg a}{\vdots\\\fitch{\neg a}{\neg a\vee a\qquad:\text{disjunctive introduction }\neg a\vdash \neg a\vee a\\a \qquad\qquad:\text{disjunctive syllogism }\neg a\vee a, \neg\neg a \vdash a }\\\vdots}$$</span></p> <hr> <p><strong>P<span class="math-container">$^3$</span>S</strong></p> <p>Disjunctive Syllogism and Ex Falso Quodlibet are interprovable - each may be justified only by accepting the other - so which is considered fundamental by a proof system is a matter of preference. &nbsp; (EFQ has a smaller format.) <span class="math-container">$$\tiny\fitch{((p\lor q)\land\lnot p)\to q:\text{Premise of Disj.Syllogism}}{\fitch{p\land\lnot p}{\lnot p:\text{Conj.Elim.}\\p:\text{Conj.Elim.}\\p\lor q:\text{Disj.Intro}\\(p\lor q)\land\lnot p:\text{Conj.Intro.}\\q:\text{Cond.Elim.}}\\(p\land\lnot p)\to q:\text{Cond.Intro}}\fitch{(p\land\lnot p)\to q:\text{Premise of Ex Falso Quodlibet}}{\fitch{(p\vee q)\land\lnot p}{\lnot p:\text{Conj.Elim.}\\p\lor q:\text{Conj.Elim.}\\\fitch{p}{p\land\lnot p:\text{Conj.Intro.}\\q:\text{Cond.Elim.}}\\p\to q:\text{Cond.Intro.}\\\fitch{q}{}\\q\to q:\text{Cond.Intro.}\\q:\text{Disj.Elim.}}\\((p\vee q)\land\lnot p)\to q:\text{Cond.Intro.}}$$</span></p> <hr> <p><strong>P<span class="math-container">$^4$</span>S</strong> Oh! &nbsp; As Mauro comments, if <span class="math-container">$\phi\to\psi,\phi\to\neg \psi\vdash \neg \phi$</span> is considered as the negation introduction rule, then <span class="math-container">$\phi,\neg\phi\vdash\psi$</span> (EFQ) could be considered the corresponding negation elimination rule. &nbsp; Thereby cutting out the need for a falsum symbol.</p>
306,744
<p>So if the definition of continuity is: $\forall$ $\epsilon \gt 0$ $\exists$ $\delta \gt 0:|x-t|\lt \delta \implies |f(x)-f(t)|\lt \epsilon$. However, I get confused when I think of it this way because it's first talking about the $\epsilon$ and then it talks of the $\delta$ condition. Would it be equivalent to say: $\forall$ $\delta \gt 0$ $\exists$ $\epsilon \gt0$ $:|x-t|\lt \delta \implies|f(x)-f(t)|\lt \epsilon$. I guess what I'm asking is whether there is a certain order proofs or more formal statements need to follow. I know I only changed the place where I said there is a $\delta$ but is that permissable in a "formal" way of writing?</p>
Cameron Buie
28,900
<p>Let's translate it into words, to see how they compare. Continuity at a given point (in this context) means that if we want to keep the $y$-coordinate within a certain interval (no matter how small), then all we have to do is keep the $x$-coordinate from straying too far. (You actually gave the definition for uniform continuity, as it turns out.)</p> <p>If we simply swap "$\delta&gt;0$" for "$\epsilon&gt;0$" in the definition of continuity, then it means that no matter how far we let the $x$-coordinate stray, the $y$-coordinate will at least manage to stay within <strong>some</strong> interval (which could be arbitrarily large).</p> <p>Do you see how (perhaps surprisingly) substantially different the two are?</p>
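To see numerically why the order of quantifiers matters (illustration mine): the swapped definition is satisfied by the discontinuous step function below, because any bounded function admits some $\epsilon$ for every $\delta$, while genuine continuity at $t=0$ fails.

```python
# A step function: discontinuous at 0, but bounded by 1 in absolute value.
f = lambda x: 1.0 if x >= 0 else 0.0
t = 0.0

# Swapped quantifiers (forall delta, exists eps): eps = 3 works for EVERY
# delta, since |f(x) - f(t)| <= 1 < 3 regardless of x.
for delta in (10.0, 1.0, 1e-3):
    xs = [t + delta * k / 100 for k in range(-99, 100)]
    assert all(abs(f(x) - f(t)) < 3 for x in xs)

# Genuine continuity (forall eps, exists delta) fails at eps = 1/2:
# every candidate delta admits a point x with |f(x) - f(t)| >= 1/2.
for delta in (1.0, 0.1, 1e-6):
    x = t - delta / 2
    assert abs(x - t) < delta
    assert abs(f(x) - f(t)) >= 0.5
```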