qid | question | author | author_id | answer
---|---|---|---|---|
3,911,297 | <p>Chapter 12 - Problem 26)</p>
<blockquote>
<p>Suppose that <span class="math-container">$f(x) > 0$</span> for all <span class="math-container">$x$</span>, and that <span class="math-container">$f$</span> is decreasing. Prove that there is a <em>continuous</em> decreasing function <span class="math-container">$g$</span> such that <span class="math-container">$0 < g(x) \le f(x)$</span> for all <span class="math-container">$x$</span>.</p>
</blockquote>
<p>So this question has already been asked and "solved" on MSE 9 years ago <a href="https://math.stackexchange.com/questions/30777/spivaks-calculus-chapter-12-problem-26">here</a>, but the accepted answer isn't very detailed, and I think it is in fact flawed (or I've just misunderstood it). I tried commenting to reopen the discussion, but the thread seems pretty dead now, hence this follow-up.</p>
<p>The answer says to <em>"make <span class="math-container">$g$</span> piecewise linear with <span class="math-container">$g(n) = f(n+1)$</span>"</em>. Can someone explain what this means exactly? I will write my thoughts below, but it's a lot, so feel free to skip.</p>
<p>My thoughts: Notice that if we try to simply let <span class="math-container">$g(x) = f(x+1)$</span>, then it works perfectly except for the fact that <span class="math-container">$g$</span> may not be continuous, because <span class="math-container">$f$</span> need not be continuous (otherwise we could just let <span class="math-container">$g(x) = f(x)$</span> in that case!). So if we could just modify this <span class="math-container">$g$</span> to make it continuous somehow then we're done.</p>
<p>Fortunately, <span class="math-container">$f$</span> is decreasing on <span class="math-container">$\mathbb{R}$</span>, which means the left and right limits do exist, however they might disagree. This means <span class="math-container">$f$</span> can only have jump discontinuities that jump downwards.</p>
<p>So what if we took all the points in <span class="math-container">$\mathbb{R}$</span> where <span class="math-container">$f$</span> has a jump discontinuity, and just joined lines between them? (I think this is what the answer meant by piecewise linear function?) This would guarantee that <span class="math-container">$g$</span> is continuous, however, this approach has some fixable flaws.</p>
<p>First flaw, for starters, it isn't necessarily true that this <span class="math-container">$g$</span> would be always smaller than <span class="math-container">$f$</span>! For example, consider this picture, where <span class="math-container">$f$</span> is the red function, and <span class="math-container">$g$</span> is the black function:</p>
<p><a href="https://i.stack.imgur.com/WrfB5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WrfB5.png" alt="" /></a></p>
<p>Sure <span class="math-container">$g$</span> is continuous now, but we've lost the <span class="math-container">$g(x) \leq f(x)$</span> property! We can fix this easily by letting <span class="math-container">$g$</span> be the smaller of the piecewise linear function and <span class="math-container">$f$</span>. Then the picture becomes like this:</p>
<p><a href="https://i.stack.imgur.com/2if25.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2if25.png" alt="" /></a></p>
<p>To make this a bit more rigorous, first we need the set of all the points where <span class="math-container">$f$</span> is discontinuous:</p>
<p><span class="math-container">$S = \big\{x: \lim_{y \rightarrow x^-} f(y) \neq \lim_{y \rightarrow x^+} f(y) \big\}$</span></p>
<p>Then let <span class="math-container">$l(x)$</span> be the piecewise linear function joining all points <span class="math-container">$\big(x, \lim_{y \rightarrow x^+} [f(y)] \big)$</span>, where <span class="math-container">$x \in S$</span>.</p>
<p>Then finally let <span class="math-container">$g(x) = \text{Min}\big(f(x), l(x)\big)$</span>.</p>
<p>Now this would work fine, so long as <span class="math-container">$l(x)$</span> is well defined. But must it necessarily be so? I'm not sure, and this is where I'm stuck. For example, what if the set <span class="math-container">$S$</span> contains not isolated points, but an entire interval of points? For example, what if <span class="math-container">$f$</span> is a function that has a jump discontinuity at every point in <span class="math-container">$[0,1]$</span>? Then to construct <span class="math-container">$l(x)$</span>, we'd need to join all these jump discontinuity points in <span class="math-container">$[0,1]$</span>, of which it isn't obvious at all we can do that.</p>
<p>Now you might say that an interval of jump discontinuities is impossible, and you'd be right. However the proof of that comes much much later in the book and is certainly beyond the knowledge of this chapter. But more importantly, even if <span class="math-container">$f$</span> doesn't have an interval of jump discontinuities, there are other ways <span class="math-container">$l(x)$</span> can be questionable.</p>
<p>Consider this monstrous example: <span class="math-container">$f(x) =
\begin{cases}
1-\frac{x}{2^{\lfloor 1 - \log_2(|x|)-1 \rfloor}} & :x \leq \frac{1}{2} \\
\frac{6}{6x+5} & :x > \frac{1}{2}
\end{cases}$</span></p>
<p>Looks something like this (click <a href="https://www.desmos.com/calculator/j13mpqs9io" rel="nofollow noreferrer">here</a> to view in Desmos): <a href="https://i.stack.imgur.com/3dPdp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3dPdp.png" alt="" /></a></p>
<p>As it turns out, this <span class="math-container">$f$</span> satisfies the question's premises, but it also has the cool property that it has an infinite number of jump discontinuities in any neighbourhood of 0! As such, in order to construct the piecewise linear function for it, you'd have to join lines between an infinite number of points and still have a function, which might be possible? But it certainly isn't obvious that it is so...</p>
<p>Those are my thoughts on the problem. So Q1) Is my approach so far in the right direction? Or could it be that I've missed some super simple trick that will make the problem trivial and everything I've said above redundant? Q2) If I am in the right direction, how can I justify taking a linear piecewise function for an infinite number of points in a given interval?</p>
| J. W. Tanner | 615,567 | <p>Using the Bezout relation <span class="math-container">$1=12\times3-7\times5$</span>, we have <span class="math-container">$(a^{12})^3/(a^7)^5=a\in\mathbb Q$</span>.</p>
|
1,276,177 | <p>Four fair 6-sided dice are rolled. The sum of the numbers shown on the dice is 8. What is the probability that 2's were rolled on all four dice?</p>
<hr>
<p>The answer should be 1/(# of ways 4 numbers sum to 8).
However, I can't find a way, other than listing all possibilities, to find the denominator. </p>
| WW1 | 88,679 | <p>I proved a relevant theorem <a href="https://math.stackexchange.com/questions/1259869/rocks-and-squares-balls-and-sticks/1260278#1260278">here</a> (for some reason this was closed)</p>
<p>Theorem 1: The number of distinct $n$-tuples of whole numbers whose components sum to a whole number $m$ is given by </p>
<p>$$ N^{(m)}_n \equiv \binom{m+n-1}{n-1}.$$</p>
<p>To apply this to your problem, you need to take into account that the theorem allows a minimum value of zero, but each die shows a minimum value of one. So subtract one from each die: we are then looking for the number of 4-tuples of whole numbers whose components sum to 4. I guess we are lucky that no component could possibly be greater than 6, so it will work for 6-sided dice.</p>
<p>The denominator is thus given by $$\binom 73 = 35 $$ which I have verified by enumeration.</p>
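<p>The enumeration mentioned at the end can be reproduced with a short brute-force script (a sketch; the variable names are my own):</p>

```python
from itertools import product

# Brute-force check: ordered rolls of four 6-sided dice summing to 8.
rolls = [r for r in product(range(1, 7), repeat=4) if sum(r) == 8]
assert len(rolls) == 35
# Exactly one of these outcomes is (2, 2, 2, 2), matching the probability 1/35
# under the question's framing that these outcomes are the sample space.
assert rolls.count((2, 2, 2, 2)) == 1
```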
|
2,473,132 | <p>Compute $$ \int_{0}^{ \infty }\frac{\ln x\,dx}{x^{2}+ax+b}$$
I tried to compute the indefinite integral by factorisation and partial fraction decomposition but it became nasty pretty soon. There must be another way to directly evaluate it without actually computing the indefinite integral which I don't know!</p>
| DXT | 372,201 | <p>Let $$I = \int^{\infty}_{0}\frac{\ln x}{x^2+ax+b}dx$$</p>
<p>Put $\displaystyle x = \frac{b}{t}$ and $\displaystyle dx = -\frac{b}{t^2}dt$</p>
<p>So $$I = \int^{\infty}_{0}\frac{\ln (b)-\ln(t)}{t^2+at+b}dt = \ln (b)\int^{\infty}_{0}\frac{1}{t^2+at+b}dt-I$$</p>
<p>So $$I = \frac{\ln (b)}{2}\int^{\infty}_{0}\frac{1}{\left(t+\frac{a}{2}\right)^2+\left(b-\frac{a^2}{4}\right)}dt = \frac{\ln (b)}{2}\cdot \frac{2}{\sqrt{4b-a^2}}\cdot \tan^{-1}\left(\frac{2t+a}{\sqrt{4b-a^2}}\right)\Bigg|^{\infty}_{0}$$</p>
<p>for $\displaystyle b-\frac{a^2}{4}>0$</p>
|
749,035 | <p>Call a square-free number a 3-prime if it is the product of three primes. Similarly for 2-primes, 4-primes , 5-primes, etc. Are there two consecutive 3-primes with no 2-prime between them?Are there infinitely many?</p>
| Robert Israel | 8,508 | <p>I hope I got the programming right: I get $679$ pairs less than $10000$, of which the first few are
$$
\eqalign{
[102,105], &[170,174], &[230,231], &[238,246], &[255,258], &[282,285], &[285,286],\cr
[366,370], &[399,402], &[429,430], &[430,434], &[434,435], &[438,442], &[598,602],\cr
[602,606], &[606,609], &[609,610], &[615,618], &[638,642], &[642,645], &[645,646],\cr
[651,654], &[663,665], &[741,742], &[759,762], &[805,806], &[822,826], &[826,830],\cr
[854,861], &[902,903], &[935,938], &[969,970], &[986,987], &[1001,1002], &[1022,1023],\cr
[1030,1034], &[1065,1066], &[1085,1086], &[1086,1090], &[1102,1105], &[1105,1106], &[1130,1131],\cr
[1178,1182], &[1182,1185], &[1221,1222], &[1245,1246], &[1265,1266], &[1295,1298], &[1309,1310],\cr
[1310,1311], &[1334,1335], &[1358,1362], &[1374,1378], &[1406,1407], &[1419,1426], &[1426,1434],\cr
[1434,1435], &[1442,1443], &[1443,1446], &[1462,1463], &[1490,1491], &[1491,1495], &[1505,1506],\cr
[1533,1534], &[1542,1545], &[1547,1551], &[1578,1581], &[1581,1582], &[1595,1598], &[1598,1599],\cr
[1605,1606], &[1606,1614], &[1614,1615], &[1626,1630], &[1634,1635], &[1662,1670], &[1695,1698],\cr
}$$
The sequence 102, 170, 230, 238, ... doesn't seem to be in the OEIS, although <a href="http://oeis.org/A215217" rel="nofollow">http://oeis.org/A215217</a> has the list of n such that n and n+1 are
both products of three distinct primes.</p>
<p>Given any three distinct odd primes $a,b,c$, there are positive integers $s$ and $t$ such that $2as - bct = 1$, and then $2ax - bcy = 1$ for $x = s + bcn$, $y = t + 2an$ for any integer $n$. The arithmetic progressions $\{s+bcn: n \in {\mathbb N}\}$ and $\{t + 2an: n \in {\mathbb N}\}$ each have infinitely many
primes by Dirichlet's theorem. I think it's likely that there are infinitely many $n$ for which both $x = s + bcn$ and $y = t + 2an$ are prime, so that $[bcy, 2ax]$ is in the list, but I suspect that proving this is far beyond our current capabilities.</p>
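<p>A sketch of the kind of search described above (my own code, not the original program) reproduces the first few listed pairs:</p>

```python
# Search for consecutive "3-primes" (squarefree products of three primes)
# with no "2-prime" (squarefree product of two primes) strictly between them.
def squarefree_kind(n):
    # returns k if n is squarefree with exactly k prime factors, else None
    k, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return None  # repeated prime factor: not squarefree
            k += 1
        d += 1
    if n > 1:
        k += 1
    return k

three_primes = [n for n in range(2, 2000) if squarefree_kind(n) == 3]
pairs = []
for a, b in zip(three_primes, three_primes[1:]):
    if not any(squarefree_kind(m) == 2 for m in range(a + 1, b)):
        pairs.append([a, b])

assert pairs[:3] == [[102, 105], [170, 174], [230, 231]]
```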
|
825,848 | <p>I'm trying to show that the minimal polynomial of a linear transformation $T:V \to V$ over some field $k$ has the same irreducible factors as the characteristic polynomial of $T$. So if $m = {f_1}^{m_1} ... {f_n}^{m_n}$ then $\chi = {f_1}^{d_1} ... {f_n}^{d_n}$ with $f_i$ irreducible and $m_i \le d_i$.</p>
<p>Now I've managed to prove this using the primary decomposition theorem and then restricting $T$ to $ker({f_i}^{m_i})$ and then using the fact that the minimal polynomial must divide the characteristic polynomial (Cayley-Hamilton) and then the irreducibility of $f_i$ gives us the result.</p>
<p>However I would like to be able to prove this directly using facts about polynomials/fields without relying on the primary decomposition theorem for vector spaces. Is this fact about polynomials true in general?</p>
<p>We know that $m$ divides $\chi$ and so certainly $\chi = {f_1}^{m_1} ... {f_n}^{m_n} \times g$, but then how do we show that $g$ must have only the $f_i$ as its factors? I'm guessing I need to use the fact that they share the same roots. And I'm also guessing that it depends on $k$, i.e. if $k$ is algebraically closed then it is easy because the polynomials split completely into linear factors.</p>
<p>Help is much appreciated,
Thanks</p>
| Marc van Leeuwen | 18,880 | <p>If you believe in invariant factors, this is easily seen as follows. The minimal polynomial is the last of the invariant factors (arranged so that each divides the next) and the characteristic polynomial is their product. It follows that the minimal polynomial divides the characteristic one (Cayley-Hamilton) and that the characteristic polynomial divides the minimal polynomial to the power the number of invariant factors. With each one dividing a power of the other, they must have the same irreducible factors.</p>
<p>Or if you don't (want to) believe in invariant factors, you can see the non-C-H direction by induction on the dimension, as follows. Leaving the $0$-dimensional base case as exercise, choose any nonzero vector$~v$, let $W$ be the cyclic submodule it generates and let $P\in k[X]$ be the minimal degree monic polynomial such that $P[T](v)=0$. Then $P$ is the characteristic polynomial of the restriction to$~W$ of$~T$ (a well known fact about cyclic modules; it also follows from C-H and dimension consideration). Then the full characteristic polynomial$~\chi$ of$~T$ is the product of $P$ and the characteristic polynomial$~\chi'$ of the linear operator $T_{/W}$ that $T$ induces in the quotient module $V/W$ (this follows by considering the block triangular matrix of$~T$ on a basis of$~V$ that extends one of$~W$). Now let $f$ be an irreducible factor of$~\chi$. If $f$ divides$~P$ we are done, since the minimal polynomial$~\mu$ of$~T$ is a multiple of$~P$ (since $\mu(T)(v)=0$). So suppose the contrary, then $f$ must divide$~\chi'$. By the inductive hypothesis $f$ divides the minimal polynomial$~\mu'$ of $T_{/W}$, which is the minimal degree monic polynomial such that $\mu'[T](V)\subseteq W$. But $\mu$ satisfies $\mu[T](V)=\{0\}\subseteq W$, so $\mu$ is a multiple of $\mu'$, and we conclude $f\mid\mu'\mid\mu$. QED</p>
|
2,340,606 | <p>$$x^2-2(3m-1)x+2m+3=0$$
Find the sum of the solutions. It says that the sum equals $-1$. I just can't wrap my head around this. Any help? Thx</p>
| Dr. Sonnhard Graubner | 175,066 | <p>$$x_{1,2}=3m-1\pm\sqrt{9m^2-8m-2}$$ therefore, by Vieta's formulas,
$$x_1+x_2=2(3m-1),$$ which equals $-1$ exactly when $m=\frac{1}{6}$.</p>
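<p>Setting the sum $2(3m-1)$ equal to $-1$ gives $m=\frac16$; a quick numerical check (the roots happen to be complex for this $m$, but Vieta's sum is unaffected):</p>

```python
import cmath

m = 1.0 / 6.0
# x^2 - 2(3m-1)x + (2m+3) = 0, written as x^2 + bx + c
b, c = -2 * (3 * m - 1), 2 * m + 3
disc = cmath.sqrt(b * b - 4 * c)
x1, x2 = (-b + disc) / 2, (-b - disc) / 2
# the roots are complex here, yet Vieta's sum x1 + x2 = -b = 2(3m-1) still holds
assert abs((x1 + x2) - (-1)) < 1e-12
```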
|
3,129,852 | <p>My question is pretty basic.</p>
<p>Here it goes:</p>
<blockquote>
<p>Is it always true that <span class="math-container">$$\prod_{j=1}^{w}{(2s_j + 1)} \equiv 1 \pmod 4$$</span>
where the <span class="math-container">$s_j$</span>'s are positive integers, and may be odd or even?</p>
</blockquote>
<p>We can perhaps assume that <span class="math-container">$w \geq 10$</span>.</p>
<p>I am thinking that it might be possible to disprove this, but I cannot think of a counterexample at this moment.</p>
<p><strong>MY ATTEMPT</strong></p>
<p>How about
<span class="math-container">$$w=10$$</span>
<span class="math-container">$$s_1 = s_2 = s_3 = s_4 = s_5 = s_6 = s_7 = s_8 = s_9 = 2$$</span>
and
<span class="math-container">$$s_{10} = 1$$</span>
for a counterexample?</p>
| Fabio Lucchini | 54,738 | <p>Since <span class="math-container">$2x+1\equiv(-1)^x\pmod 4$</span>, we have
<span class="math-container">$$\prod_{j=1}^{w}{(2s_j + 1)} \equiv (-1)^{\sum_{j}s_j} \pmod 4$$</span>
hence
<span class="math-container">$$\prod_{j=1}^{w}{(2s_j + 1)} \equiv 1\pmod 4\iff\sum_{j=1}^ws_j\equiv 0\pmod 2$$</span></p>
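<p>The equivalence can be checked exhaustively for small cases, and it confirms that the tuple proposed in the question is indeed a counterexample (an illustrative script, not part of the proof):</p>

```python
from itertools import product
from math import prod

# Exhaustive check of the equivalence above over small tuples of positive integers.
for w in (1, 2, 3):
    for s in product(range(1, 5), repeat=w):
        lhs = prod(2 * sj + 1 for sj in s) % 4 == 1
        rhs = sum(s) % 2 == 0
        assert lhs == rhs

# The attempted counterexample in the question: nine 2's and one 1 give an odd sum,
# so the product is 3 (mod 4) -- it is a counterexample.
s = (2,) * 9 + (1,)
assert prod(2 * sj + 1 for sj in s) % 4 == 3
```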
|
873,582 | <p>How check that $ \sqrt[3]{\frac{1}{9}}+\sqrt[3]{-\frac{2}{9}}+\sqrt[3]{\frac{4}{9}}=\sqrt[3]{\sqrt[3]2-1} $?</p>
| Martial | 99,769 | <p>use three equations:
$$a^3+b^3=(a+b)(a^2-ab+b^2)\quad (1)$$
$$a^3-b^3=(a-b)(a^2+ab+b^2)\quad (2)$$
$$(a+b)^3=a^3+3a^2b+3ab^2+b^3\quad (3)$$
for your problem:
$$\text{LHS}\\=(\sqrt[3]{\frac{1}{3}})^2-\sqrt[3]{\frac{1}{3}}\sqrt[3]{\frac{2}{3}}+(\sqrt[3]{\frac{2}{3}})^2\\=\frac{\frac{1}{3}+\frac{2}{3}}{\sqrt[3]{\frac{1}{3}}+\sqrt[3]{\frac{2}{3}}}\quad using(1)\\=\sqrt[3]{\frac{1}{(\sqrt[3]{\frac{1}{3}}+\sqrt[3]{\frac{2}{3}})^3}}\\=\frac{\sqrt[3]{3}}{1+\sqrt[3]{2}}\\=\sqrt[3]{\frac{3}{(1+\sqrt[3]{2})^3}}\\=\sqrt[3]{\frac{3}{1+3\sqrt[3]{2}+3\sqrt[3]{2^2}+2}}\quad using(3)\\=\sqrt[3]{\frac{1}{1+\sqrt[3]{2}+\sqrt[3]{2^2}}}\\=\sqrt[3]{\sqrt[3]{2}-1}\quad using(2)$$</p>
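<p>A floating-point check of the identity (illustration only; it uses the real cube root for the negative term):</p>

```python
# Real cube root, valid for negative arguments as well.
cbrt = lambda t: -((-t) ** (1.0 / 3)) if t < 0 else t ** (1.0 / 3)

lhs = cbrt(1 / 9) + cbrt(-2 / 9) + cbrt(4 / 9)
rhs = cbrt(cbrt(2) - 1)
assert abs(lhs - rhs) < 1e-9
```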
|
317,981 | <p>Prove the following:</p>
<p>$$\lim_{n \to \infty} \displaystyle \int_0 ^{2\pi} \frac{\sin nx}{x^2 + n^2} dx = 0$$</p>
<p>How would I prove this? I know you have to show your steps, but I'm literally stuck on the first one, so I can't. </p>
| robjohn | 13,854 | <p>The simplest approach seems to be to note that
$$
\left|\frac{\sin(nx)}{x^2+n^2}\right|\le\frac1{n^2}
$$
so that
$$
\left|\int_0^{2\pi}\frac{\sin(nx)}{x^2+n^2}\,\mathrm{d}x\right|\le\frac{2\pi}{n^2}
$$</p>
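<p>For illustration, a crude midpoint-rule computation confirms the bound numerically (this is of course not needed for the proof; grid size is an arbitrary choice):</p>

```python
import math

# Numerically confirm |∫₀^{2π} sin(nx)/(x²+n²) dx| ≤ 2π/n² for several n.
def I(n, steps=100000):
    h = 2 * math.pi / steps
    return sum(math.sin(n * (k + 0.5) * h) / (((k + 0.5) * h) ** 2 + n * n)
               for k in range(steps)) * h

for n in (1, 5, 20, 50):
    assert abs(I(n)) <= 2 * math.pi / (n * n) + 1e-6
```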
|
602,973 | <p>I have a problem:</p>
<blockquote>
<p>For $\Omega$ be a domain in $\Bbb R^n$. Show that the function $u(x)=1 \in W^{m,\ 2}(\Omega)$, but not in $W_0^{m,\ 2}(\Omega)$, for all $m \ge 1$.</p>
</blockquote>
<hr>
<p>Any help will be appreciated! Thanks!</p>
| user42070 | 113,916 | <p>First I would like to answer the first comment: it is only if you take the whole $\mathbb{R}^n$ that these two spaces coincide; they do not coincide for a bounded domain. </p>
<p>For the problem: I will do it on $\mathbb{R}$ with $m=1$ and $\Omega=(0,1)$ (easy to adapt). To show that $u$ belongs to $W^{m,2}(\Omega)$ you only need to show that the integration by parts formula (against functions in $\mathcal{C}^\infty_c(\Omega)$) holds. Or just notice that $u\in \mathcal{C}^{\infty}(\Omega)$, therefore the classical (hence the weak) derivatives exist and are $0$.</p>
<p>Now $W^{m,2}_0(\Omega)$ is by definition the closure of $\mathcal{C}^\infty_c(\Omega)$ with respect to the norm of $W^{m,2}(\Omega)$:
$$\|\phi\|_{W^{1,2}(\Omega)}^2=\int|\phi|^2+\int|\phi^{\prime}|^2.$$ </p>
<p>Now assume that $u$ belongs to $W^{m,2}_0(\Omega)$; then there exists a sequence $\phi_n$ in $\mathcal{C}^\infty_c(\Omega)$ such that $\|\phi_n-u\|_{W^{1,2}(\Omega)}\rightarrow 0$, which by definition implies the following:
$$\int|\phi_n-1|^2\rightarrow 0\\ \int|\phi_n^{\prime}|^2\rightarrow 0\\
\phi_n(0)=\phi_n(1)=0$$ and integration by
parts of the quantity $\int(\phi_n-1)$ leads easily to a contradiction. Generally the only constant function which belongs to this space is $0$.</p>
<p>Roughly speaking, $W^{m,2}_0(\Omega)$ consists of functions (with appropriate weak derivatives) which vanish in some sense at the boundary (rigorously, in the trace sense), and this is not the case for $u=1$. </p>
<p>Note: if you are already familiar with the boundary trace of functions in Sobolev spaces it is obvious: $W^{m,p}_0(\Omega)$ consists of the functions of $W^{m,p}(\Omega)$ whose trace at the boundary is $0$. But since $u$ is continuous, the trace of $u$ on $\partial\Omega$ is just $u_{|_{\partial\Omega}}=1\neq 0$,
and we are done.</p>
|
3,455,967 | <p>Solve: (Hint: use <span class="math-container">$x\ln y=t$</span>)<span class="math-container">$$(xy+2xy\ln^2y+y\ln y)\text{d}x+(2x^2\ln y+x)\text{d}y=0$$</span></p>
<p>My Work:</p>
<p><span class="math-container">$$x\ln y=t, \text{ d}t=\ln y \text{ d}x+\frac{x}{y} \text{ d}y$$</span></p>
<p><span class="math-container">$$(x+2x\ln^2y+\ln y)\text{ d}x+\left(\frac{2x^2}{y}\ln y+\frac{x}{y}\right)\text{d}y=0$$</span></p>
<p><span class="math-container">$$(x+2x\ln^2y)\text{ d}x+\left(\frac{2x^2}{y}\ln y\right)\text{d}y+\text{d}t=0$$</span></p>
<p><span class="math-container">$$(x+2t\ln y)\text{ d}x+\left(\frac{2x}{y}t\right)\text{d}y+\text{d}t=0$$</span></p>
<p><span class="math-container">$$\left(\frac{x}{t}+2\ln y\right)\text{d}x+\left(2\frac{x}{y}\right)\text{d}y+\frac{\text{d}t}{t}=0$$</span></p>
<p><span class="math-container">$$\frac{x}{t}\text{d}x+2\text{ d}t+\frac{\text{d}t}{t}=0$$</span></p>
<p><span class="math-container">$$\frac{x}{t}\text{d}x=\left(-2-\frac{1}{t}\right)\text{d}t$$</span></p>
<p><span class="math-container">$$\frac{x^2}{2}=-t^2-t+c$$</span></p>
<p><span class="math-container">$$\frac{x^2}{2}=-x^2\ln^2y-x\ln y+c$$</span></p>
<p><strong>1.</strong> Is my answer correct? </p>
<p><strong>2.</strong> How could we have recognized that we should use <span class="math-container">$x\ln y=t$</span> if the question hadn't given the hint? </p>
<p><strong>3.</strong> The whole approach felt strange to me (for example, we had <span class="math-container">$dx,dy,dt$</span> together in line 3), and I had not seen this way of solving differential equations before. Is there any other way to simplify it?</p>
| J.G. | 56,861 | <ol>
<li>You're right. Your final result implies <span class="math-container">$x=-2x\ln^2y-\frac{2x^2\ln y}{y}y^\prime-\ln y-\frac{x}{y}y^\prime$</span> and <span class="math-container">$y^\prime=-\frac{(2x\ln y+1)y\ln y+x}{x(1+2x\ln y)}$</span>, which is equivalent to the original equation.</li>
<li>The coefficients of <span class="math-container">$dx,\,dy$</span> are polynomials in <span class="math-container">$x,\,y,\,\ln y$</span>, so an Ansatz <span class="math-container">$t=x^a\ln^by$</span> is natural. The result ends up especially simple if <span class="math-container">$a=b=1$</span>.</li>
<li>Given <span class="math-container">$f(x,\,y)dx+g(x,\,y)dy=0$</span>, we hope some functions <span class="math-container">$h(x,\,y),\,j(x,\,y)$</span> satisfy <span class="math-container">$jf=\partial_xh,\,jg=\partial_yh$</span> (so that the equation is equivalent to <span class="math-container">$dh=0$</span>), whence <span class="math-container">$\partial_x(jg)=\partial_y(jf)$</span>. Again, there's a natural Ansatz, <span class="math-container">$j=x^cy^d\ln^ky$</span>.</li>
</ol>
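<p>One way to double-check the implicit solution $\frac{x^2}{2}+x^2\ln^2y+x\ln y=c$ obtained above is to verify numerically that its differential is proportional to the original form (a finite-difference sketch; the sample point is an arbitrary choice of mine):</p>

```python
import math

# F(x, y) = x²/2 + x² ln²y + x ln y should satisfy dF = (1/y) · (original form),
# i.e. ∂F/∂x = M/y and ∂F/∂y = N/y for the coefficients M, N of dx, dy.
def F(x, y):
    L = math.log(y)
    return x * x / 2 + x * x * L * L + x * L

def M(x, y):  # coefficient of dx in the original equation
    L = math.log(y)
    return x * y + 2 * x * y * L * L + y * L

def N(x, y):  # coefficient of dy in the original equation
    return 2 * x * x * math.log(y) + x

x, y, h = 1.3, 2.7, 1e-6
Fx = (F(x + h, y) - F(x - h, y)) / (2 * h)  # central differences
Fy = (F(x, y + h) - F(x, y - h)) / (2 * h)
assert abs(Fx - M(x, y) / y) < 1e-6
assert abs(Fy - N(x, y) / y) < 1e-6
```

<p>So the original equation is $\frac1y\,dF=0$, consistent with the level sets $F=c$ being solutions.</p>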
|
348,395 | <p>If <span class="math-container">$f$</span> is a distribution with compact support, then there exist <span class="math-container">$m$</span> and measures <span class="math-container">$f_\beta$</span>, <span class="math-container">$|\beta|\leq m$</span>, such that
<span class="math-container">$$f=\sum_{|\beta|\leq m}\frac{\partial^\beta f_\beta}{\partial x^\beta}$$</span></p>
<p>How can one prove this result?</p>
| paul garrett | 15,629 | <p>In addition to the other two good answers, one can make a somewhat stronger assertion: let <span class="math-container">$u$</span> be a distribution on <span class="math-container">$\mathbb R^n$</span>, in the Sobolev space <span class="math-container">$H^{-\infty}$</span> (which contains all compactly-supported distributions, since <span class="math-container">$H^{+\infty}\subset C^\infty$</span> by Sobolev imbedding, and compactly-supported distributions are the dual of <span class="math-container">$C^\infty$</span>). Then for index <span class="math-container">$s$</span> such that <span class="math-container">$u\in H^s$</span>, for sufficiently large positive <span class="math-container">$k$</span>, <span class="math-container">$f=(|x|^2+1)^{-k} \widehat{u}$</span> gives <span class="math-container">$\widehat f$</span> inside <span class="math-container">$H^{{n\over 2}+\epsilon}\subset C^o$</span>, again by Sobolev imbedding. Then (up to normalization) <span class="math-container">$(\Delta -1)^k\widehat{f}=u$</span>.</p>
|
4,093,406 | <p>Which of the equations have at least two real roots?
<span class="math-container">\begin{aligned}
x^4-5x^2-36 & = 0 & (1) \\
x^4-13x^2+36 & = 0 & (2) \\
4x^4-10x^2+25 & = 0 & (3)
\end{aligned}</span>
I wasn't able to notice something clever, so I solved each of the equations. The first one has <span class="math-container">$2$</span> real roots, the second one <span class="math-container">$4$</span> real roots and the last one does not have real roots. I am pretty sure that the idea behind the problem wasn't solving each of the equations. What can we note to help us solve it faster?</p>
| Will Jagy | 10,400 | <p><span class="math-container">$$ x^4 -13 x^2 + 36 = (x^2 + 6)^2 - 25 x^2 = (x^2 + 5x+6) (x^2 - 5x+6) $$</span>
and both quadratic factors have real roots(positive discriminants).</p>
<p><span class="math-container">$$ 4 x^4 -10 x^2 + 25 = (2x^2 + 5)^2 - 30 x^2 = (2x^2 + x \sqrt{30}+5) (2x^2 - x \sqrt{30}+5) $$</span>
and both quadratic factors have negative discriminants. OR
<span class="math-container">$$ 4 x^4 -10 x^2 + 25 = (2x^2 - 5)^2 +10 x^2 $$</span>
is always positive</p>
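<p>The substitution $u=x^2$ also makes the real-root counts easy to confirm programmatically (a quick sketch; $x$ is real exactly when $u$ is a real nonnegative root of the quadratic in $u$):</p>

```python
import math

# Count real roots of A x⁴ + B x² + C via the quadratic A u² + B u + C in u = x².
def real_root_count(A, B, C):
    disc = B * B - 4 * A * C
    if disc < 0:
        return 0
    us = {(-B + math.sqrt(disc)) / (2 * A), (-B - math.sqrt(disc)) / (2 * A)}
    count = 0
    for u in us:
        if u > 0:
            count += 2   # x = ±sqrt(u)
        elif u == 0:
            count += 1   # x = 0
    return count

counts = [real_root_count(1, -5, -36),   # (1)
          real_root_count(1, -13, 36),   # (2)
          real_root_count(4, -10, 25)]   # (3)
assert counts == [2, 4, 0]
```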
|
2,549,834 | <p>Ive been looking at this problem and trying to use examples online to try to solve it but I get stuck. </p>
<p>It says to use mathematical induction to prove
$$\frac{1}{1\cdot 4}+\frac{1}{4\cdot 7}+\frac{1}{7\cdot 10}+\cdots+\frac{1}{(3n-2)(3n+1)} = \frac{n}{3n+1}.$$</p>
<p>I solve for n=1 and substitute k for n, but I don’t really know what to do after that step.</p>
<p>It would really help to see this problem laid out step by step. Thank you!!</p>
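<p>Before setting up the induction, it can help to confirm the claimed identity in exact arithmetic (an illustrative check, not a proof):</p>

```python
from fractions import Fraction

# Exact check of the claimed identity for the first 50 values of n.
for n in range(1, 51):
    s = sum(Fraction(1, (3 * k - 2) * (3 * k + 1)) for k in range(1, n + 1))
    assert s == Fraction(n, 3 * n + 1)
```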
| Community | -1 | <p>Not sure if you are allowed to use this in your answer, but note that $|f(x)|=g(f(x))$ where $g:\mathbb R\to \mathbb R$, $g(x)=|x|$. Thus, you can first prove that $g$ is continuous, and then invoke the theorem that the composition of two continuous functions is continuous.</p>
<p>Now, note that $g$ is continuous on $(-\infty,0]$ (where it coincides with $x\mapsto -x$) and on $[0,+\infty)$ (where it coincides with $x\mapsto x$), so it is continuous on the whole $\mathbb R$.</p>
|
507,975 | <p>I'm new here and unsure if this is the right way to format a problem, but here goes nothing. I'm currently trying to solve an inequality proof to show that $n^3 > 2n+1$ for all $n \geq 2$.</p>
<p>I proved the first step $(P(2))$, which comes out to $8>5$, which is true.</p>
<p>In the next step we assume that for some $k \geq 2$, $k^3 > 2k+1$.
Then we consider the quotient $\frac{f(k+1)}{f(k)} > \frac{g(k+1)}{g(k)}$. </p>
<p>I so far have simplified it to the following:</p>
<p>$$\begin{align*}
\frac{(k+1)^3}{k^3} &> \frac{2(k+1)+1}{2k+1}\\
&= \frac{k^3+3k^2+3k+1}{k^3}\\ &> \frac{2k+3}{2k+1}\\
&= 1 + \frac{3}{k} + \frac{3}{k^2} + \frac{1}{k^3}\\ &> \frac{2k+3}{2k+1}
\end{align*}$$</p>
<p>I don't know how to simplify the right side anymore (my algebra is terrible). I know that I have to simplify that inequality and multiply it by our previous assumption. </p>
<p>I should end up with some variant of $(k+1)^3 > 2(k+1)+1$. (This is $P(k+1)$). I just need help simplifying. Thanks!</p>
| njguliyev | 90,209 | <p>Hint: $$\lim_{t\to 0} \sin t \ln \frac{\cos t}{\sin^2t} = -2 \lim_{t\to 0} (\sin t \cdot \ln \sin t) = 0.$$</p>
|
507,975 | <p>I'm new here and unsure if this is the right way to format a problem, but here goes nothing. I'm currently trying to solve an inequality proof to show that $n^3 > 2n+1$ for all $n \geq 2$.</p>
<p>I proved the first step $(P(2))$, which comes out to $8>5$, which is true.</p>
<p>In the next step we assume that for some $k \geq 2$, $k^3 > 2k+1$.
Then we consider the quotient $\frac{f(k+1)}{f(k)} > \frac{g(k+1)}{g(k)}$. </p>
<p>I so far have simplified it to the following:</p>
<p>$$\begin{align*}
\frac{(k+1)^3}{k^3} &> \frac{2(k+1)+1}{2k+1}\\
&= \frac{k^3+3k^2+3k+1}{k^3}\\ &> \frac{2k+3}{2k+1}\\
&= 1 + \frac{3}{k} + \frac{3}{k^2} + \frac{1}{k^3}\\ &> \frac{2k+3}{2k+1}
\end{align*}$$</p>
<p>I don't know how to simplify the right side anymore (my algebra is terrible). I know that I have to simplify that inequality and multiply it by our previous assumption. </p>
<p>I should end up with some variant of $(k+1)^3 > 2(k+1)+1$. (This is $P(k+1)$). I just need help simplifying. Thanks!</p>
| Felix Marin | 85,343 | <p>$\lim_{x \to \pi/2}\left[\sec\left(x\right)\tan\left(x\right)\right]^{\cos\left(x\right)} = \lim_{x \to 0}x^{-2x} = \lim_{x \to 0}{\rm e}^{-2x\ln\left(x\right)}$.</p>
<p>Since
$\lim_{x \to 0}\left[-2x\ln\left(x\right)\right] = -2\lim_{x \to 0}{\ln\left(x\right) \over 1/x} = -2\lim_{x \to 0}{1/x \over -1/x^{2}} = 0$, $\displaystyle{\color{#ff0000}{\large\lim_{x \to \pi/2}\left[\sec\left(x\right)\tan\left(x\right)\right]^{\cos\left(x\right)} = 1}}$</p>
<p>$\newcommand{\abs}[1]{\left\vert #1\right\vert}$
${\bf\mbox{Without L'H$\hat{\rm o}$pital}}$:
$$
\abs{x\ln\left(x\right)}
=
2x\ln\left(\frac{1}{\sqrt{x}}\right)
\leq
2x\left(\frac{1}{\sqrt{x}} - 1\right)
\leq
2\sqrt{x}
\to
0\quad\mbox{when}\quad x \to 0^{+}
$$
using $\ln\left(u\right) \leq u - 1$ with $u = 1/\sqrt{x}$.</p>
|
2,814,793 | <p>I don't know what I did wrong. Can anyone point out my mistake? The problem is:</p>
<blockquote>
<p>For $\lim\limits_{x\to1}(2-1/x)=1$, finding $\delta$, such that if $0<|x-1|<\delta$, then $|f(x)-1|<0.1$</p>
</blockquote>
<p>Here is what I did:</p>
<p>Since $|f(x)-1|=|2-1/x-1|=|1-1/x|=|x-1|/x<0.1$, then $|x-1|<0.1x$.
We know that $0<|x-1|<\delta$, so $\delta=0.1x=x/10$.</p>
<p>The answer sheet says $\delta=1/11$. I don't understand it.</p>
| ShyGuy | 563,183 | <p>The answer means:</p>
<p><strong>Claim.</strong> If $|x - 1| < 1/11$, then $|f(x) - 1| < 1/10$.</p>
<p>Now we prove this. If $|x - 1| < 1/11$, then $10/11 < x < 12/11$. It means</p>
<p>\begin{align}
f(x) - 1 &= 1 - 1/x < 1 - 11/12 = 1/12 < 1/ 10 \\
f(x) - 1 &= 1 - 1/x > 1 - 11/10 = - 1/10.
\end{align}</p>
<p>So $|f(x) - 1| < 1/10$.</p>
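<p>A grid check of the claim (illustration only; the grid and step size are arbitrary choices of mine):</p>

```python
# Check: if |x - 1| < 1/11 then |f(x) - 1| < 1/10, for f(x) = 2 - 1/x.
f = lambda x: 2 - 1 / x

delta = 1 / 11
# sample points strictly inside (1 - delta, 1 + delta)
xs = [1 - delta + k * (2 * delta) / 10000 for k in range(1, 10000)]
assert all(abs(f(x) - 1) < 0.1 for x in xs)
```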
|
1,678,922 | <p>Let $B$ be a commutative unital real algebra and $C$ its complexification,
viewed as the cartesian product of $B$ with itself.
If $M$ is a maximal ideal in $B$, is the cartesian product of $M$ with itself
a maximal ideal in $C$? </p>
| ray | 319,283 | <p>Indeed, $(z,iz)(z,-iz)=(0,0)$; so $(z,iz)$ belongs to a maximal ideal which strictly contains the ideal $\{(0,0)\}$. What then is the relation between the
maximal ideals in $B$ and its complexification $C$?</p>
|
1,920,776 | <p>As I know there is a theorem, which says the following: </p>
<blockquote>
<p>The prime ideals of $B[y]$, where $B$ is a PID, are $(0), (f)$, for irreducible $f \in B[y]$, and all maximal ideals. Moreover, each maximal ideal is of the form $m = (p,q)$, where $p$ is an irreducible element in $B$ and $q$ is an irreducible element in $(B/(p))[y]$.</p>
</blockquote>
<p>I managed to prove, that any prime non-principal ideal $m \subset B[y]$ is maximal. This gives the first part of the theorem. Now I want to prove, that $m = (p,q)$. I can show, that $m \cap B \neq (0)$. Thus $m$ contains a non-zero element $a$ from B. Since $B$ is PID and $m$ is prime, $m$ must contain an irreducible element from $B$ (we can take an irreducible factor $p$ of $a$). </p>
<p>As I understand, now I need to show, that $m$ contains an element $q$ (see the theorem above). Then it follows that $(p,q) \subseteq m$. And after that I have to prove, that $m \subseteq (p,q)$. How do I make these last steps?</p>
<p>Yes, I have seen similar questions with $B = k[x]$ or $\mathbb{Z}$, but I still cannot prove the last part.</p>
| Rob | 151,459 | <p>The way we represent it is ambiguous. If you are explicit about all unstated bits, then simply flipping all bits negates the number. But you need to have a representation with a decimal point and bits stating what all unstated bits are. For example, say that we are explicit about what unstated bits are so that we can combine signed, unsigned, and uncarried numbers explicitly:</p>
<p>This is zero:
0..0000.0000..0</p>
<p>Use "..1" to represent that there are no zeroes on the right:
0..0000.1111..1</p>
<p>If you perform the carry, you get zeroes on the right and it becomes 1.
This is a mechanical definition that can be checked and done in an actual (finite) machine. Then -1 is:</p>
<p>1..1111.0000..0</p>
<p>Add 1 + -1:</p>
<p>0..0000.1111..1 + 1.1111.0000..0</p>
<p>That gives you: 1..1111.1111..1, which has a carry to be done due to the "..1". So after carry, it's 0..0000.0000..0.</p>
<p>So now we negate an integer. It has all zero bits on the right. So it triggers a carry, which is the same as adding 1.</p>
<p>0..010.000..0</p>
<p>1..101.111..1</p>
<p>1..110.000..0</p>
<p>So the fact that you "flip the bits and add 1" is a little bit of an illusion created by being ambiguous about what all the bits are. It is an emergent optimization for finite integer representations to state it that way.</p>
<p>Negate 1/2:</p>
<p>0..000.100..0</p>
<p>1..111.011..1</p>
<p>1..111.100..0</p>
<p>That's -1 + 1/2.</p>
|
1,624,888 | <h2>Question</h2>
<p>In the following expression can $\epsilon$ be a matrix?</p>
<p>$$ (H + \epsilon H_1) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) = (E + \epsilon E_1 + \epsilon^2 E_2 + \dots) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) $$</p>
<h2>Background</h2>
<p>So in quantum mechanics we generally have a solution $|m\rangle$ to a Hamiltonian:</p>
<p>$$ H | m\rangle = E |m\rangle $$</p>
<p>Now using perturbation theory:</p>
<p>$$ (H + \epsilon H_1) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) = (E + \epsilon E_1 + \epsilon^2 E_2 + \dots) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) $$</p>
<p>I was curious and substituted $\epsilon$ as a matrix:</p>
<p>$$ \epsilon =
\left( \begin{array}{cc}
0 & 0 \\
1 & 0 \end{array} \right) $$</p>
<p>where $\epsilon$ is now a nilpotent matrix, we get:</p>
<p>$$ \left( \begin{array}{cc}
H | m \rangle & 0 \\
H_1 |m_1 \rangle + H | m\rangle & H |m_1 \rangle \end{array} \right) = \left( \begin{array}{cc}
E | m \rangle & 0 \\
E_1 |m_1 \rangle + E | m\rangle & E |m_1 \rangle \end{array} \right)$$</p>
<p>This is what we'd expect if we compared powers of $\epsilon$. All this made me wonder: can $\epsilon$ be a matrix? Say something like $| m_k\rangle \langle m_k |$? If we choose $\epsilon \to \hat I \epsilon$,
then there exists a radius of convergence. What is the radius of convergence in the general case of an arbitrary matrix?</p>
| user103093 | 303,143 | <p>Yes and no. When I say yes, I mean it is possible in several ways. But it does not make sense.</p>
<p>As usual for a physicist, you did not specify the space in which your states (kets) live, and thus not what $H$ and $H_1$ are. But of course they are meant to be operators, which you can consider to be generalizations of matrices to infinite dimensions. So when you assume $\epsilon$ to be a matrix, you could just as well absorb it into $H_1$. </p>
<p>Further, you take $\epsilon$ to be a constant matrix. Then you can just leave the $\epsilon$ away. The epsilon is there to control the perturbation $H_1$. If you set $\epsilon$ to be a constant, you directly solve the problem.</p>
|
142,939 | <p>I have the following question: Let $M$ be an even-dimensional Riemannian manifold. Under which conditions does there exist a homotopy to some symplectic manifold? Is there any chance that such a homotopy exists even if $M$ is not symplectic? What does the homotopy look like? Is it differentiable, only continuous...? Is there any chance that $M$ is homotopic to a complex manifold? Is there any reference in this direction?</p>
<p>greetings
mirta</p>
| Igor Rivin | 11,142 | <p>This question is discussed at length in <a href="http://www.emis.de/journals/UIAM/actamath/PDF/38-105-128.pdf" rel="nofollow">the very nice survey by A. Tralle.</a> (Homotopy properties of closed symplectic manifolds).</p>
|
142,939 | <p>I have the following question: Let $M$ be an even-dimensional Riemannian manifold. Under which conditions does there exist a homotopy to some symplectic manifold? Is there any chance that such a homotopy exists even if $M$ is not symplectic? What does the homotopy look like? Is it differentiable, only continuous...? Is there any chance that $M$ is homotopic to a complex manifold? Is there any reference in this direction?</p>
<p>greetings
mirta</p>
| Ian Agol | 1,345 | <p>A <a href="http://www.ams.org/mathscinet-getitem?mr=1625732" rel="nofollow">result of Szabo</a> implies that there are infinitely many homeomorphic but non-diffeomorphic 4-manifolds which do not admit a symplectic structure (the fact that they are homeomorphic is not explicitly stated, but follows from Freedman's classification). However, they are homeomorphic to a symplectic manifold, in fact a Kahler surface, from Freedman's classification. </p>
|
2,470,062 | <blockquote>
<p><span class="math-container">$$\sqrt{k-\sqrt{k+x}}-x = 0$$</span></p>
<p>Solve for <span class="math-container">$k$</span> in terms of <span class="math-container">$x$</span></p>
</blockquote>
<p>I got all the way to
<span class="math-container">$$x^{4}-2kx^{2}-x+k^{2}-k=0$$</span>
but could not factor afterwards. My teacher mentioned that there was grouping involved</p>
<p>Thanks Guys!</p>
<p>Edit 1 : The exact problem was solve for <span class="math-container">$x$</span> given that <span class="math-container">$$\sqrt{4-\sqrt{4+x}}-x = 0$$</span> with a hint of substitute 4 with k</p>
| zwim | 399,263 | <p>In this kind of problem, you have to be very careful about the domain of definition. Squaring the equation and finding an equivalent polynomial equation is not enough; you need to verify whether the solutions found are actual solutions of the original equation.</p>
<p>First two remarks : </p>
<ul>
<li>$x=\sqrt{\cdots}\quad$ thus $x\ge 0$</li>
<li>If $k<0$ then $k-\sqrt{k+x}<0$ and we cannot take the square root of this, so $k\ge 0$</li>
</ul>
<p><br/>
In particular: with $x\ge 0$ and $k\ge 0$</p>
<ul>
<li>$\sqrt{x+k}$ is also well defined </li>
<li>$\sqrt{(x+a)^2}=x+a$ for any $a\ge 0$, we will use that property later.</li>
</ul>
<p><br/></p>
<p>The equation squared twice becomes $(x^2-k)^2=x+k$</p>
<p>$\iff k^2-k(2x^2+1)+(x^4-x)=0$ </p>
<p>with $\Delta=(2x^2+1)^2-4(x^4-x)=(2x+1)^2$</p>
<p>So $k=\frac 12(2x^2+1\pm(2x+1))=x(x-1)$ or $(x^2+x+1)$</p>
<p><br/></p>
<p>Let's now substitute back into the original problem to eliminate superfluous solutions.</p>
<ul>
<li>$k=x(x-1)$</li>
</ul>
<p>$\sqrt{k-\sqrt{k+x}}=\sqrt{x^2-x-\sqrt{x^2}}=\sqrt{x^2-x-x}=\sqrt{x^2-2x}$ </p>
<p>This can be equal to $x$ if and only if $x=0\qquad$ [$x^2-2x=x^2\iff x=0$]</p>
<ul>
<li>$k=x^2+x+1$</li>
</ul>
<p>$\sqrt{k-\sqrt{k+x}}=\sqrt{x^2+x+1-\sqrt{(x+1)^2}}=\sqrt{x^2+x+1-(x+1)}=\sqrt{x^2}=x$ </p>
<p>So the equation is always verified</p>
<blockquote>
<p>Finally the solutions are $(0,0)$ and $(x\ge 0,k=x^2+x+1)$</p>
</blockquote>
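<p>A quick numerical sanity check of the surviving branch $k=x^2+x+1$ (and of the spurious branch $k=x(x-1)$ away from $x=0$):</p>

```python
import math

def f(x, k):
    # left-hand side of the original equation
    return math.sqrt(k - math.sqrt(k + x)) - x

# k = x^2 + x + 1 satisfies the equation for every x >= 0
for x in [0.0, 0.5, 1.0, 2.0, 3.7]:
    assert abs(f(x, x * x + x + 1)) < 1e-9

# k = x(x - 1) fails for x > 0: sqrt(x^2 - 2x) != x there
x = 3.0
assert abs(f(x, x * (x - 1))) > 1e-9
```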
|
2,263,431 | <p>Let $$v=\sqrt{a^2\cos^2(x)+b^2\sin^2(x)}+\sqrt{b^2\cos^2(x)+a^2\sin^2(x)}$$</p>
<p>Then find difference between maximum and minimum of $v^2$.</p>
<p>I understand that both of them are distances of a point on an ellipse from the origin, but how do we find the maximum and minimum?</p>
<p>I tried guessing, and got the maximum $v$ when $x=45^{o}$ and the minimum when $x=0$, but how do we justify this?</p>
| Andreas | 317,854 | <p>Set $\sin^2(x) = 1 - \cos^2(x) = y^2$ Then you have
$$
v=\sqrt{a^2(1-y^2)+b^2 y^2}+\sqrt{b^2 (1-y^2)+a^2 y^2}
$$
One extremum is obtained (differentiate w.r.t. y) at
$$
\sqrt{a^2(1-y^2)+b^2 y^2}= \sqrt{b^2 (1-y^2)+a^2 y^2}
$$
or
$$
y^2 = 1/2
$$</p>
<p>The other extremum is obtained at $y=0$. </p>
<p>So the difference of $v^2$ between maximum and minimum is
$$
2(a^2 + b^2) - (a+b)^2 = (a-b)^2
$$ </p>
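<p>A quick numerical check of this value, sampling $t$ over $[0,\pi]$ for one arbitrary choice of $a$ and $b$:</p>

```python
import math

a, b = 3.0, 1.0

def v(t):
    return (math.sqrt(a**2 * math.cos(t)**2 + b**2 * math.sin(t)**2)
            + math.sqrt(b**2 * math.cos(t)**2 + a**2 * math.sin(t)**2))

vals = [v(math.pi * k / 20000) for k in range(20001)]
diff = max(vals)**2 - min(vals)**2
assert abs(diff - (a - b)**2) < 1e-4   # here (a-b)^2 = 4
```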
|
2,263,431 | <p>Let $$v=\sqrt{a^2\cos^2(x)+b^2\sin^2(x)}+\sqrt{b^2\cos^2(x)+a^2\sin^2(x)}$$</p>
<p>Then find difference between maximum and minimum of $v^2$.</p>
<p>I understand that both of them are distances of a point on an ellipse from the origin, but how do we find the maximum and minimum?</p>
<p>I tried guessing, and got the maximum $v$ when $x=45^{o}$ and the minimum when $x=0$, but how do we justify this?</p>
| Narasimham | 95,860 | <p>There is no need to do a lot of calculus. The ellipses are symmetric with respect to axes of symmetry which fact should be exploited. </p>
<p>It is a periodic function with period $2\pi$. The maximum inter-distance occurs at $ \theta= n \pi/2 $, between the ends of the major/minor axes as shown, and the minimum distance $ =0 $ occurs at $ \theta= (2 m-1) \pi/4$, where the ellipses intersect. </p>
<p>Imho, when you see history of calculus... it came out of geometry and not really so much the other way round .. symbols never created geometry. So the above is adequate proof for your question.</p>
<p>EDIT1:</p>
<p>For example, if we had 4th or 6th order ellipses instead.. we satisfy same conditions of symmetry about the coordinate axes ..and consequently obtain the very same locations of $\theta$ extrema.</p>
<p><a href="https://i.stack.imgur.com/QmdEX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QmdEX.png" alt="enter image description here"></a></p>
|
1,781,467 | <p>First, I know what the right answer is, and I know how to solve it. What I'm trying to figure out is why I can't get the following process to work.</p>
<p>The probability that we get 2 consecutive heads with one flip is 0. The probability that we get 2 consecutive heads with 2 flips = 1/4. The probability of getting 2 consecutive heads with 3 flips = 1/6. The probability of getting 2 consecutive heads with 4 flips = 2/10 = 1/5. And the probability of getting 2 consecutive heads with 5 flips = 3/16. </p>
<p>Am I doing something wrong? I don't see any easy way to use these numbers to solve the original problem of finding the expected number of coin flips to get 2 consecutive heads. </p>
| Med | 261,160 | <p>The problem is that your experiment is not well-defined. To solve the mentioned expectation problem, you would define the experiment as follows:</p>
<p>You toss a coin until you get two heads in a row and then you would stop. So there are no two consecutive heads before that.</p>
<p>Each of the probabilities that you calculated in your process is related to the following experiment:</p>
<p>You toss a coin $n$ times; what is the probability of having 2 heads in a row?</p>
<p>By the way, according to the last experiment, you have miscalculated the probabilities in your process.</p>
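<p>For what it's worth, the correct per-$n$ probabilities can be checked with a short script: length-$n$ strings containing no $HH$ run are counted by Fibonacci numbers, and the tail-sum formula then recovers the well-known expectation of $6$ flips.</p>

```python
from fractions import Fraction

def p_hh_within(n):
    # P(at least one "HH" run in n fair flips); strings of length n with
    # no "HH" satisfy the Fibonacci recursion count(n) = count(n-1) + count(n-2)
    a, b = 1, 2  # counts for lengths 0 and 1
    for _ in range(n - 1):
        a, b = b, a + b
    return 1 - Fraction(b, 2**n)

assert p_hh_within(2) == Fraction(1, 4)   # agrees with the question
assert p_hh_within(3) == Fraction(3, 8)   # the question's 1/6 is off
assert p_hh_within(4) == Fraction(1, 2)   # not 1/5

# expected flips to the first HH via the tail sum E[N] = sum_n P(N > n)
E = 1 + sum(float(1 - p_hh_within(n)) for n in range(1, 200))
assert abs(E - 6) < 1e-9
```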
|
388,815 | <p>I am trying to compute the integral: $$\int_{4}^{5} \frac{dx}{\sqrt{x^{2}-16}}$$
The question is related to hyperbolic functions, so I let $x = 4\cosh(u)$ therefore the integral becomes: $$-\int_{0}^{\ln(2)}\frac{4\sinh(u)}{\sqrt{16-16\cosh^{2}(u)}}du = -\int_{0}^{\ln(2)}1du = -\ln(2)$$</p>
<p>The answer is $\ln(2)$ so if someone could point out where I went wrong that would be great, thanks </p>
| Community | -1 | <p>Setting $x = 4 \cosh(u)$, gives us $dx = 4 \sinh(u)du$. Hence the integral becomes
$$\int_0^{\ln(2)} \dfrac{4 \sinh(u) du}{\sqrt{16\cosh^2(u) - 16}} = \int_0^{\ln(2)} \dfrac{4 \sinh(u) du}{4 \sinh(u)} = \ln(2)$$</p>
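<p>As a numerical cross-check, $F(x) = \operatorname{arcosh}(x/4)$ is an antiderivative of $1/\sqrt{x^2-16}$ for $x>4$, and evaluating it between the limits gives $\ln 2$:</p>

```python
import math

# F(x) = arcosh(x/4) is an antiderivative of 1/sqrt(x^2 - 16) for x > 4
value = math.acosh(5 / 4) - math.acosh(1)
assert abs(value - math.log(2)) < 1e-12
```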
|
1,770,508 | <p>I am confronted with the following definition:</p>
<blockquote>
<p>Let <span class="math-container">$K$</span> be a field and <span class="math-container">$e_1,e_2,\ldots,e_n$</span> the standard basis of the <span class="math-container">$K$</span> vector space <span class="math-container">$K^n$</span>.</p>
<p>For <span class="math-container">$1\leq i\leq n$</span> let <span class="math-container">$V_i=Ke_1+Ke_2+\dots+Ke_i$</span>.</p>
</blockquote>
<p>For given <span class="math-container">$0<m<n$</span> let
<span class="math-container">$$P=\{~g\in GL_n(k)~|~g(V_m)=V_m~\}$$</span></p>
<hr />
<p>The part of the definition I don't understand is highlighted, the rest might use as clarification.</p>
<p>So what does <span class="math-container">$Ke_1$</span> mean? (Side question: how does picking a different <span class="math-container">$i$</span> make a difference?)</p>
| Jack D'Aurizio | 44,121 | <p>I think it is best to use some elementary geometry. The volume of a cone is one third of the product between the base area and the height, by Cavalieri's principle. The distance of the vertex of the cone (i.e. the origin) from the plane $x+4z=a$ is $\frac{a}{\sqrt{17}}$ by <a href="http://mathworld.wolfram.com/Point-PlaneDistance.html" rel="nofollow">a well-known formula</a>. The boundary of the base is the set of points $(x,y,z)$ such that $x+4z=a$ and $z^2=\frac{x^2}{4}+y^2$, that is a subset of the elliptic cylinder $\left(\frac{a-x}{4}\right)^2=\frac{x^2}{4}+y^2$. You just have to find the cross-section of such a cylinder (that depends on the determinant of the matrix associated with the last quadratic form) and the cosine of the angle between the plane $z=0$ and the plane $x+4z=0$ (easy to do with dot products) to have the base area.</p>
|
1,852,664 | <p>I'm working through Mumford's Red Book, and after introducing the definition of a sheaf, he says "Sheaves are almost standard nowadays, and we will not develop their properties in detail." So I guess I need another source to read about sheaves from. Does anybody know of any expository papers that cover them? I'd prefer to not have to dig deep into a separate textbook if possible.</p>
| Babai | 36,789 | <p>I understand that you don't want to dig deep into a separate textbook on sheaf theory. Still, my suggestion would be <strong>Sheaf Theory by B. R. Tennison</strong>. It is a 163-page book, but just for the introduction to sheaves you can read the first two chapters, which come to some 30 pages. The book is self-contained and very detailed (for example, he defines inductive limits in order to define the stalk of a sheaf at a point).</p>
|
3,070,127 | <p>Let <span class="math-container">$B := \{(x, y) \in \mathbb{R}^2
: x^2 + y^2 \le 1\}$</span> be the closed ball in <span class="math-container">$\mathbb{R^2}$</span> with center at the origin.
Let I denote the unit interval <span class="math-container">$[0, 1].$</span> Which of the following statements are true?</p>
<p><span class="math-container">$(a)$</span> There exists a continuous function <span class="math-container">$f : B \rightarrow \mathbb{R}$</span> which is one-one</p>
<p><span class="math-container">$(b)$</span> There exists a continuous function <span class="math-container">$f : B \rightarrow \mathbb{R}$</span> which is onto.</p>
<p><span class="math-container">$(c)$</span> There exists a continuous function <span class="math-container">$f : B \rightarrow I × I$</span> which is one-one.</p>
<p><span class="math-container">$(d)$</span> There exists a continuous function <span class="math-container">$f : B \rightarrow I × I$</span> which is onto.</p>
<p>I think none of the options will be correct.</p>
<p>Option <span class="math-container">$a)$</span> and option <span class="math-container">$b)$</span> are false just using compactness, that is, <span class="math-container">$\mathbb{R}$</span> is not compact.
Option $c)$ and option $d)$ are false just using connectedness, that is, <span class="math-container">$B-\{0\}$</span> is not connected but <span class="math-container">$I × I-\{0\}$</span> is connected.</p>
<p>Is my logic correct or not?</p>
<p>Any hints/solution will be appreciated </p>
<p>thanks u</p>
| jmerry | 619,637 | <p>It's correct, but we can do better - a closed form in which we're not summing an increasing number of terms.</p>
<p>Exactly half of the functions with nonzero sum have positive sum, by symmetry. There are <span class="math-container">$2^{2n}$</span> total functions. There are <span class="math-container">$\binom{2n}{n}$</span> functions with sum zero, by your argument. Therefore, our answer is
<span class="math-container">$$\frac12\left(2^{2n}-\binom{2n}{n}\right)$$</span></p>
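<p>A brute-force cross-check of this closed form for small $n$:</p>

```python
from math import comb
from itertools import product

def positive_sum_count(n):
    # count functions {1..2n} -> {+1,-1} with strictly positive sum
    return sum(1 for s in product((1, -1), repeat=2 * n) if sum(s) > 0)

for n in range(1, 6):
    assert positive_sum_count(n) == (2**(2 * n) - comb(2 * n, n)) // 2
```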
|
400,296 | <p>There are n persons.</p>
<p>Each person draws k interior-disjoint squares.</p>
<p>I want to give each person a single square out of his chosen k, so that the n squares I give are interior-disjoint.</p>
<p>What is the minimum k (as a function of n) for which I can do this?</p>
<p>NOTES:</p>
<ul>
<li>For n=1, obviously k=1.</li>
<li>For n=2, obviously k must be more than 2, since with 2 squares per person, it is easy to think of situations where both squares of person 1 intersect both squares of person 2. It seems that k=3 is enough, but I couldn't prove this formally.</li>
<li>If we don't limit ourselves to squares, but allow general rectangles, then even for n=2, no k will be large enough, as it is possible that every rectangle of person 1 intersects every rectangle of person 2. So, the square limitation is important.</li>
</ul>
<p>EDIT: The problem has two versions: in one version, the squares are all axis-aligned. In the second version, the squares may be rotated. Solutions to any of these versions are welcome. </p>
<p>EDIT: Here is a possibly useful claim, relevant for the axis-aligned version:</p>
<p><strong>Claim 1</strong>: If two axis-aligned squares, A and B, intersect, then one of the following 3 options hold:</p>
<ul>
<li>At least 2 corners of A are covered by B, and B is as large or larger than A;</li>
<li>One corner of A is covered by B, and one corner of B is covered by A,</li>
<li>At least 2 corners of B are covered by A, and A is as large or larger than B.</li>
</ul>
<p>Thus, if A intersects B, then, out of the 8 corners of A and B, at most 6 corners remain uncovered.</p>
| Erel Segal-Halevi | 29,780 | <p>This answer: <a href="https://math.stackexchange.com/questions/412831/square-coloring">Square coloring</a> proves that, in the axis-aligned version, with $n=2$ people, $k=3$ squares are enough. This is a tight bound.</p>
<p>This answer: <a href="https://cs.stackexchange.com/questions/12275/team-construction-in-tri-partite-graph">https://cs.stackexchange.com/questions/12275/team-construction-in-tri-partite-graph</a> proves that, with $n=3$ people, $k=15$ squares are always enough, but this is obviously not a tight bound (in a previous answer I showed that $k=9$ squares are enough in this case). It is possible that $k=8$ squares are also enough, but this depends on a solution of a SAT problem, and in any case, it is not provably tight.</p>
|
3,497,328 | <p>I'm stuck. Can I get a hint? I heard the answer is zero.</p>
<p>I'm guessing we use the SSA congruent triangle theorem. </p>
<p>If <span class="math-container">$m∠A = 50°$</span>, side <span class="math-container">$a = 6$</span> units, and side <span class="math-container">$b = 10$</span> units, what is the maximum number of distinct triangles that can be constructed? </p>
| Community | -1 | <p><a href="https://i.stack.imgur.com/xcF3rm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xcF3rm.png" alt="enter image description here"></a></p>
<p>The rumors were true: there are no points common to the ray and the circle.</p>
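<p>Numerically: the given side $a=6$ is shorter than the altitude $b\sin A \approx 7.66$, so in the SSA picture the arc of radius $a$ never reaches the ray. A quick check:</p>

```python
import math

# SSA feasibility: a triangle with angle A opposite side a requires
# a >= b*sin(A) (the altitude from the third vertex); here 6 < 10*sin(50°)
A = math.radians(50)
a, b = 6, 10
assert a < b * math.sin(A)  # so zero triangles can be constructed
```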
|
2,959,628 | <p>I'm struggling to find the density function of <span class="math-container">$Y=X^3$</span> where <span class="math-container">$X \sim \mathcal{N}(0,1)$</span> with density function <span class="math-container">$\phi(\cdot)$</span>. </p>
<p>Since <span class="math-container">$g(X) = X^3$</span> is monotonic I suppose that the "change of variables" formula <span class="math-container">$$f_Y(y) = f_X(g^{-1}(y)) \cdot \biggr\lvert\frac{d}{dy}g^{-1}(y) \biggr\rvert$$</span> is applicable here? With <span class="math-container">$g^{-1}(Y)=Y^{1/3}$</span> and <span class="math-container">$f_X(x) = \phi(x)$</span> this yields <span class="math-container">$$f_Y(y) = \phi(y^{1/3}) \cdot \biggr\lvert\frac{y^{-2/3}}{3}\biggr\rvert = \frac{\phi(y^{1/3})}{3y^{2/3}}$$</span> which is only defined for <span class="math-container">$y>0$</span> and does not integrate to unity so it can't be correct. </p>
<p>What am I doing wrong? </p>
| DonAntonio | 31,254 | <p>Observe that</p>
<p><span class="math-container">$$\sqrt{x^3+4x}-\sqrt{x^3+x}=\frac{3}{\sqrt x\left(\sqrt{1+\frac4{x^2}}+\sqrt{1+\frac1{x^2}}\right)}\xrightarrow[x\to\infty]{}0$$</span></p>
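<p>A numerical sketch of this decay, using the rationalized form on the right to avoid floating-point cancellation for large $x$:</p>

```python
import math

def g(x):
    # rationalized form of sqrt(x^3 + 4x) - sqrt(x^3 + x); algebraically
    # identical, but numerically stable for large x
    return 3 * x / (math.sqrt(x**3 + 4 * x) + math.sqrt(x**3 + x))

# g(x) is squeezed below 1.5/sqrt(x), so it tends to 0
for x in [1e2, 1e4, 1e6]:
    assert 0 < g(x) < 1.5 / math.sqrt(x)

assert g(1e12) < 2e-6
```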
|
2,811,908 | <p>I have this monster as part of a longer calculation. My goal would be to somehow make it... nicer.</p>
<p>Intuitively, I would try to somehow play the derivative and the integral off against each other, but I have no idea exactly how.</p>
<p>I suspect this might be a relatively common problem, i.e. differentiating a parametrized definite integral with respect to one of its parameters. Maybe there is even some trick... or methodology to handle it.</p>
<p>Of course it is not a problem if the integral remains. The primary goal at this stage would be to eliminate or significantly simplify the derivative operator.</p>
| Hashimoto | 425,635 | <p>If $f$ is continuous you can just use the <a href="https://en.wikipedia.org/wiki/Leibniz_integral_rule" rel="nofollow noreferrer">Leibniz integral rule</a> to assert: $\frac{d}{db}\int_0^1 e^{bx} f(x)dx = \int_0^1 x e^{bx} f(x)dx$</p>
<p>The Leibniz integral rule says that if $g$ and its partial derivatives are continuous, then
$$
\frac{d}{d y} \int_a^b g(x,y)dx = \int_a^b \frac{\partial}{\partial y} g(x,y)dx
$$</p>
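<p>One can also confirm the rule numerically for a particular continuous $f$ (say $f=\cos$), comparing a finite-difference derivative in $b$ against the integral of $x e^{bx} f(x)$; the quadrature routine below is an illustrative sketch:</p>

```python
import math

def integral(b, g, n=20000):
    # midpoint rule for the integral of e^(b x) g(x) over [0, 1]
    h = 1.0 / n
    return h * sum(math.exp(b * (i + 0.5) * h) * g((i + 0.5) * h)
                   for i in range(n))

f = math.cos          # any continuous f works here
b, h = 0.7, 1e-4
lhs = (integral(b + h, f) - integral(b - h, f)) / (2 * h)  # d/db numerically
rhs = integral(b, lambda x: x * f(x))                      # Leibniz rule RHS
assert abs(lhs - rhs) < 1e-6
```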
|
2,806,164 | <p>I recently came across a question that asked for the derivative of $e^x$ with respect to $y$. I answered $\frac{d}{dy}e^x$ but the answer was $e^x\frac{dx}{dy}$. How is that the answer? I am confused.</p>
| Aritra Chakraborty | 546,416 | <p>I feel the question needs to be clearer: it should be mentioned what exactly $y$ is. Like one of the comments says, $y$ may not be a dependent variable; in that case, $\frac{\partial{e^{x}}}{\partial{y}}$ equates to zero. However, in case $x$ is a function of $y$, then $\frac{de^{x}}{dy}$ can be written as $$\frac{de^{x}}{dy}×\frac{dx}{dx}$$ $$=\frac{de^{x}}{dx}×\frac{dx}{dy}$$ This gives the answer as $$e^{x}×\frac{dx}{dy}$$</p>
|
165,382 | <p>Is there any number $a+b\sqrt{5}$ with $a,b \in \mathbb{Z}$ with norm (defined by $|a^2−5b^2|$) equal 2?</p>
| Bill Dubuque | 242 | <p><strong>Hint</strong> $\rm\,\ 2\:|\:a^2\!-\!5b^2\! = (a\!-\!b)(a\!+\!b)\!-\!4b^2\Rightarrow\, 2\:|\:a\pm b\:\Rightarrow\:2\:|\:a\mp b\:\Rightarrow\:4\:|\:a^2\!-\!5b^2$</p>
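<p>A brute-force illustration of the hint's conclusion (the mod-4 argument shows no search box can ever contain a solution; the box size below is arbitrary):</p>

```python
# squares are 0 or 1 mod 4, so a^2 - 5b^2 is congruent to a^2 - b^2,
# which lies in {0, 1, 3} mod 4: it is never 2 or -2
assert all(abs(a * a - 5 * b * b) != 2
           for a in range(-40, 41) for b in range(-40, 41))
```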
|
4,220,751 | <p>In the example in the following slide, we follow the highlighted formula. With regard to the highlight, I'm confused about why the number must be greater than <strong>or equal to</strong> <span class="math-container">$2^{n-1}$</span>, while it only needs to be less than <span class="math-container">$2^n$</span> (not less than <strong>or equal to</strong> <span class="math-container">$2^n$</span>)?
<a href="https://i.stack.imgur.com/YZ9He.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YZ9He.png" alt="enter image description here" /></a></p>
| Asinomás | 33,907 | <p>You want to count the walks of length at most <span class="math-container">$20$</span> in the digraph.</p>
<p>Separate them by the different vertices that appear, since the graph without loops is just a directed path:</p>
<p>If there's only one vertex it's <span class="math-container">$20$</span>.</p>
<p>If there are exactly two vertices it's <span class="math-container">$\sum_{i=0}^{18}\binom{i+1}{1}$</span> by stars and bars, which is <span class="math-container">$\binom{19}{2}$</span> by hockey stick.</p>
<p>If there are exactly three vertices it's <span class="math-container">$\sum\limits_{i=0}^{17} \binom{i+2}{2}$</span> by stars and bars, which is <span class="math-container">$\binom{18}{3}$</span> by hockey stick.</p>
<p>It follows that the answer is <span class="math-container">$3\cdot 20 + 3\cdot \binom{19}{2} + \binom{18}{3}$</span>.</p>
|
531,080 | <blockquote>
<p>Let $P$ be a plane in $\mathbb{R}^3$ parallel to the $xy$-plane. Let $\Omega$ be a closed, bounded set in the $xy$-plane with $2$-volume $B$. Pick a point $Q$ in $P$ and make a pyramid by joining each point in $\Omega$ to $Q$ with a straight line segment. Find the $3$-volume of this pyramid.</p>
</blockquote>
<p>I know that the volume with be dependent on $B$ and the distance from $\Omega$ to $P$, and the solution probably involves multiple integration, but beyond that I don't know where to start.</p>
<p>Any help would be appreciated.</p>
<p>N.b.: Although tagged as "homework," this problem is no longer "live," i.e. the homework has already been turned in.</p>
| Bill Kleinhans | 73,675 | <p>Euclid solves this by dividing a triangular prism into three pyramids of equal volume.</p>
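<p>The same $\tfrac13 Bh$ factor can be seen by integrating cross-sectional areas: the slice at height $z$ is $\Omega$ scaled linearly toward $Q$, so its area is $B(1-z/h)^2$. A numerical sketch with arbitrary values of $B$ and $h$:</p>

```python
# integrate B*(1 - z/h)^2 over z in [0, h] by the midpoint rule
B, h = 5.0, 2.0
n = 100000
dz = h / n
vol = sum(B * (1 - (i + 0.5) * dz / h) ** 2 for i in range(n)) * dz
assert abs(vol - B * h / 3) < 1e-6   # the pyramid volume is B*h/3
```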
|
4,480,276 | <p>[This question is rather a very easy one which I found to be a little bit tough for me to grasp. If there is any other question that has been asked earlier which addresses the same topic then kindly link this here as I am unable to find such questions by far.]</p>
<p>let <span class="math-container">$f(x)=x.$</span> If we are to find limit of it at <span class="math-container">$a$</span> by the definition <span class="math-container">$|f(x)-a|<\epsilon \implies |x-a|<\epsilon$</span>. Again <span class="math-container">$|x-a|<\delta$</span>. Then how does it prove that <span class="math-container">$\epsilon=\delta$</span>. Can't <span class="math-container">$\delta$</span> be larger or smaller than <span class="math-container">$\epsilon$</span>?</p>
<p>N.B: I am self learning limits from Calculus Early Transcendentals by James Stewart. I could reach upto the lesson "Precise Definition of Limits" which involves such a problem and didn't explain much about the explained problem</p>
| Tryst with Freedom | 688,539 | <p>You want an interval around a point of the domain such that the difference between the value of the function at that point and its values at the other points of the interval is less than a given number.</p>
<p>In this case, you want to find the size of $\delta$ such that for $|x-a|< \delta$ , we have $|x-a| < \epsilon$ for some given value of $\epsilon$. The book says: why not take the interval in the domain to be exactly the same size as that of the permissible error? (error $= |f(x)-L|$)</p>
<p>Of course you could take a subset of $|x-a|<\epsilon$ centered at $a$ and it would still work; for instance you can take $ \delta = \frac{\epsilon}{2}, \frac{\epsilon}{3}... \text{etc}$ because even in these smaller intervals you'd still be below $\epsilon$ error in the output.</p>
<p>Hope this helps.</p>
|
1,648,354 | <p>We have been taught that for linear functions, usually expressed in the form $y=mx+b$, when given inputs of 0, 1, 2, 3, etc., you can get from one output to the next by adding some constant (in this case, 1).
$$
\begin{array}{c|l}
\text{Input} & \text{Output} \\
\hline
0 & 1\\
1 & 2\\
2 & 3
\end{array}
$$</p>
<p>But with exponential functions (which are usually expressed in the form $y=a\cdot b^x $), instead of adding a constant, you multiply by a constant. (In this case, 2)
$$
\begin{array}{c|l}
\text{Input} & \text{Output} \\
\hline
0 & 1\\
1 & 2\\
2 & 4\\
3 & 8
\end{array}
$$
But... we can keep going, can't we?
$$
\begin{array}{c|l}
\text{Input} & \text{Output} \\
\hline
0 & 1\\
1 & 2\\
2 & 4\\
3 & 16\\
4 & 256
\end{array}
$$
In this example, you square the last output to get to the next one. I cannot find a 'general form' for such an equation, nor can I find much information online. Is there a name for these functions? Is there a general form for them? And can we keep going even past these 'super exponential functions'?</p>
| Sri-Amirthan Theivendran | 302,692 | <p>The map satisfies the recurrence relation $x_{n+1}={x_n}^2$ with base case $x_1=2$. By backtracking or induction one can show that $x_n=2^{2^{n-1}}$ for all $n\in\mathbb{N}$.</p>
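<p>A quick check of the closed form against the recurrence:</p>

```python
# x_{n+1} = x_n^2 with x_1 = 2 gives x_n = 2^(2^(n-1))
x = 2
for n in range(1, 7):
    assert x == 2 ** (2 ** (n - 1))
    x = x * x
```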
|
2,447,677 | <p>Consider the parametric equations given by \begin{align*} x(t)&=\sin{t}-t,\\ y(t) & = 1-\cos{t}.\end{align*}</p>
<p>I want to write these parametric equations in Cartesian form. </p>
<p>In order to eliminate the sine and cosine terms I think I probably need to consider some combination of $x(t),y(t), x(t)^2$ and $y(t)^2$ but I can't see exactly how to do this. </p>
| velut luna | 139,981 | <p>$$t=\arccos(1-y)$$
$$x=\sin(\arccos(1-y))-\arccos(1-y)$$
$$=\sqrt{1-(1-y)^2}-\arccos(1-y)$$</p>
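<p>A numerical check of this Cartesian form (valid for $t\in(0,\pi)$, where $\arccos$ inverts $\cos$ and $\sin t \ge 0$):</p>

```python
import math

for t in [0.3, 1.0, 2.0, 3.0]:
    x = math.sin(t) - t
    y = 1 - math.cos(t)
    # acos(1 - y) recovers t, and sqrt(1 - (1-y)^2) recovers sin(t)
    assert abs(x - (math.sqrt(1 - (1 - y) ** 2) - math.acos(1 - y))) < 1e-9
```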
|
1,570,983 | <p>The volume of a sphere with radius $r$ is given by the formula $V(r) = \frac{4 \pi}{3} r^3$.</p>
<p>a) If $a$ is a given fixed value for $r$, write the formula for the linearization of the volume function $V(r)$ at $a$.</p>
<p>b) Use this linearization to calculate the thickness $\Delta r$ (in $cm$) of a layer of paint on the surface of a spherical ball with radius $r=52cm$ if the total volume of paint used is $340cm^3$. </p>
<p>The first part is easy to calculate, but I don't know exactly how to get the second part?</p>
| Christian Blatter | 1,303 | <p>The volume $\Delta V$ of paint is approximatively given by
$$\Delta V=V(a+\Delta r)-V(a)\doteq V'(a)\>\Delta r=4\pi a^2\>\Delta r\ .\tag{1}$$
In your problem the unknown is the thickness $\Delta r$ of the paint layer. From $(1)$ we immediately get
$$\Delta r\doteq{\Delta V\over 4\pi\>a^2}\ .$$</p>
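<p>Plugging in the given numbers (a quick sketch):</p>

```python
import math

a, dV = 52.0, 340.0                 # radius in cm, paint volume in cm^3
dr = dV / (4 * math.pi * a ** 2)    # linearization: dV ≈ 4*pi*a^2 * dr
assert abs(dr - 0.0100) < 1e-4      # about 0.01 cm of paint
```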
|
1,885,434 | <p>I'm currently reading through <em>Introductory Discrete Mathematics</em> by V.K. Balakrishnan and came across the following theorem:</p>
<p>If $X$ is a set of cardinality $n$, then the number of $r$-collections from $X$ is $\binom{r + n - 1}{n - 1}$, where $r$ is any positive integer.</p>
<p>To me, it seems like the number of such $r$-collections ought to be $\dfrac{n^r}{r!}$ since each collection will have $r$ elements and for each of those $r$ elements there are $n$ choices. But, obviously, $n^r$ would be an overestimation considering order does not matter and since each collection of $r$ elements could be permuted in $r!$ ways, we divide by $r!$. </p>
<p>It seems that either I have made a mistake (very likely), the author has made a mistake (much less likely), or maybe my solution and the author's are actually equivalent. If anyone could shed some light on this I'd greatly appreciate it!</p>
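<p>A small computational comparison of the two counts, with the illustrative values $n=3$, $r=2$:</p>

```python
from itertools import combinations_with_replacement
from math import comb, factorial

n, r = 3, 2
multisets = len(list(combinations_with_replacement(range(n), r)))
assert multisets == comb(r + n - 1, n - 1) == 6

# the naive n^r / r! over-corrects: collections with repeated elements
# have fewer than r! distinct orderings, so it is not even an integer here
assert n ** r / factorial(r) == 4.5
```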
| Community | -1 | <p>I know your point is not to get another solution.</p>
<p>Anyway, I would prefer to rewrite the terms with $b-r,b,b+r$, giving
$$(b-r)^2-b(b+r),b^2-(b-r)(b+r),(b+r)^2-(b-r)b$$
or
$$-3br+r^2,r^2,3br+r^2.$$</p>
|
165,154 | <p>a) Let $\,f\,$ be an analytic function in the punctured disk $\,\{z\;\;;\;\;0<|z-a|<r\,\,,\,r\in\mathbb R^+\}\,$ . Prove that if the limit $\displaystyle{\lim_{z\to a}f'(z)}\,$ exists finitely, then $\,a\,$ is a removable singularity of $\,f\,$</p>
<p><strong>My solution and doubt:</strong> If we develop $\,f\,$ is a Laurent series around $\,a\,$ we get
$$f(z)=\frac{a_{-k}}{(z-a)^k}+\frac{a_{-k+1}}{(z-a)^{k-1}}+\ldots +\frac{a_{-1}}{z-a}+a_0+a_1(z-a)+\ldots \Longrightarrow$$
$$\Longrightarrow f'(z)=-\frac{ka_{-k}}{(z-a)^{k+1}}-\ldots -\frac{a_{-1}}{(z-a)^2}+a_1+...$$
and since $\,\displaystyle{\lim_{z\to a}f'(z)}\,$ exists <em>finitely</em> then it must be that
$$a_{-k}=a_{-k+1}=...=a_{-1}=0$$
getting that the above series for $\,f\,$ is, in fact, a Taylor one and thus $\,f\,$ has a removable singularity at $\,a\,$ .</p>
<p><em><strong>My doubt:</strong></em> is there any other "more obvious" or more elementary way to solve the above without having to resort to term-by-term differentiation of that Laurent series? </p>
<p>b) Evaluate, using some complex contour, the integral
$$\int_0^\infty\frac{\log x}{(1+x)^3}\,dx$$</p>
<p><em><strong>First doubt:</strong></em> this exercise comes with the hint(?) to use the function
$$\frac{\log^2z}{(1+z)^3}$$ Please do note the square in the logarithm! Now, is this some typo or perhaps it really helps to do it this way? After checking with WA, the original real integral equals $\,-1/2\,$ and, in fact, it is doable without the need to use complex functions, and though the result is rather ugly <em>it nevertheless is</em> an elementary function (rational with logarithms, no hypergeometric or Li or stuff).</p>
<p>The real integral with the logarithm squared gives the beautiful result of $\,\pi^2/6\,$ but, again, I'm not sure whether "the hint" is a typo.</p>
<p><em><strong>Second doubt:</strong></em> In either case (logarithm squared or not), what would be the best contour to choose? I thought of using one quarter of the circle $\,\{z\;\;;\;\;|z|=R>1\}\,$ <em>minus one quarter of the circle</em> $\,\{z\;\;;\;\;|z|=\epsilon\,\,,0<\epsilon<<R\}\,$, both in the first quadrant, because</p>
<p>$(i)\,$ to get the correct limits on the $\,x\,$-axis when passing to the limits $\,R\to\infty\,\,,\,\epsilon\to 0\,$</p>
<p>$(ii)\,$ To avoid the singularity $\,z=0\,$ of the logarithm (not to mention going around it and changing logarithmic branch and horrible things like this!). </p>
<p>Well, I'm pretty stuck here with the evaluations on the different segments of the path, besides being baffled by "the hint", and I definitely need some help here.</p>
<p>As before: these exercises are supposed to be for a first course in complex variable and, thus, I think they should be more or less "elementary", though this integral looks really evil.</p>
<p>For the time you've taken already to read this long post I already thank you, and any help, hint or ideas will be very much appreciated.</p>
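<p>Before hunting for the right contour, it is reassuring to confirm the value $\,-1/2\,$ numerically. The substitution $x\mapsto 1/x$ folds $[1,\infty)$ onto $(0,1)$, giving $\int_0^1\log x\,\frac{1-x}{(1+x)^3}\,dx$, and $x=e^{-t}$ then removes the logarithmic singularity. A pure-Python sketch (my own check, independent of the contour question):</p>

```python
import math

# I = int_0^inf log(x)/(1+x)^3 dx.  After x -> 1/x on the tail and x = e^{-t},
# the integrand below is smooth on [0, inf) and decays like t * e^{-t}.
def integrand(t):
    e = math.exp(-t)
    return -t * (1.0 - e) / (1.0 + e) ** 3 * e

def simpson(g, a, b, n):          # composite Simpson rule, n even
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

approx = simpson(integrand, 0.0, 50.0, 5000)
print(approx)   # close to -0.5
```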
| tatterdemalion | 34,528 | <p>The baseband bandwidth is defined to be the highest frequency of the signal.</p>
<p>In our case, the baseband would be $f_{base} = (2k+1)f_{square}$, the largest frequency that is an odd multiple of the fundamental square wave frequency smaller than $4 kHz$.</p>
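<p>A small sketch of that computation in code (the $4$ kHz figure is the one quoted above; the function name and interface are mine):</p>

```python
def baseband_bandwidth(f_square, f_max=4000.0):
    """Largest odd multiple (2k+1)*f_square that does not exceed f_max."""
    k = int((f_max / f_square - 1) // 2)   # largest k with (2k+1)*f_square <= f_max
    return (2 * k + 1) * f_square

print(baseband_bandwidth(500.0))    # odd multiples 500, 1500, 2500, 3500 -> 3500.0
print(baseband_bandwidth(1000.0))   # odd multiples 1000, 3000 -> 3000.0
```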
|
1,377,927 | <p>Prove using mathematical induction that
$(x^{2n} - y^{2n})$ is divisible by $(x+y)$.</p>
<p><strong>Step 1:</strong> Proving that the equation is true for $n=1 $</p>
<p>$(x^{2\cdot 1} - y^{2\cdot 1})$ is divisible by $(x+y)$ </p>
<p><strong>Step 2:</strong> Taking $n=k$</p>
<p>$(x^{2k} - y^{2k})$ is divisible by $(x+y)$</p>
<p><strong>Step 3:</strong> proving that the above equation is also true for $(k+1)$</p>
<p>$(x^{2k+2} - y^{2k+2})$ is divisible by $(x+y)$.</p>
<p>Can anyone assist me what would be the next step? Thank You in advance!</p>
| Mythomorphic | 152,277 | <p>For $n=k$, assume $P(k)$ is true, we have</p>
<p>$$x^{2k}-y^{2k}=A(x+y)$$ where $A$ is a polynomial.</p>
<p>For $n=k+1$, </p>
<p>\begin{align}
x^{2k+2}-y^{2k+2}&=x^2[A(x+y)+y^{2k}]-y^{2k+2}\\&=A(x+y)x^2+x^2y^{2k}-y^{2k+2}\\&=A(x+y)x^2+y^{2k}(x^2-y^2)\\&=A(x+y)x^2+y^{2k}(x-y)(x+y)\\&=(x+y)[Ax^2+y^{2k}(x-y)]\\&=B(x+y)\text{, where } B \text{ is a polynomial}.
\end{align}</p>
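<p>The claim is also easy to spot-check numerically before trusting the algebra; a small sketch:</p>

```python
# Spot-check: x + y divides x^(2n) - y^(2n) for many integer cases.
for x in range(1, 12):
    for y in range(1, 12):
        for n in range(1, 6):
            assert (x ** (2 * n) - y ** (2 * n)) % (x + y) == 0
print("x + y divides x^(2n) - y^(2n) in all tested cases")
```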
|
1,363,074 | <p>Q- If roots of quad. Equation $x^2-2ax+a^2+a-3=0$ are real and less than $3$ then,</p>
<p>a) $a<2$ </p>
<p>b)$2<a<3$ </p>
<p>c)$a>4$</p>
<p>In this question, I used the quadratic formula $\frac{-b\pm\sqrt{b^2-4ac}}{2a}$; the roots are real (defined) only for values like $a=1$ or $a=2$, and if we use $a=3$ there is only one (repeated) root, yet the question speaks of roots in the plural. Is this right?</p>
| egreg | 62,967 | <p>The discriminant of the polynomial is
$$
4a^2-4(a^2+a-3)=12-4a
$$
so you know that $12-4a\ge0$ and so $a\le 3$, which excludes (c).</p>
<p>For $a=0$, the equation is
$$
x^2-3=0
$$
and the roots are less than $3$, which excludes (b).</p>
<p>Now, how can you completely verify the assert (a)? The largest root of the equation is
$$
\frac{2a+\sqrt{12-4a}}{2}=a+\sqrt{3-a}
$$
and the condition is thus
$$
a+\sqrt{3-a}<3
$$
that's the same as
$$
\sqrt{3-a}<(\sqrt{3-a})^2.
$$
Since $a=3$ does not satisfy the inequality, we can reduce this to
$$
1<\sqrt{3-a}
$$
or
$$
1<3-a
$$
that's $a<2$.</p>
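<p>A quick numerical sketch of the conclusion, using the closed form $a+\sqrt{3-a}$ for the largest root:</p>

```python
import math

def largest_root(a):
    # largest root of x^2 - 2 a x + (a^2 + a - 3), real exactly when a <= 3
    return a + math.sqrt(3 - a)

assert largest_root(1.9) < 3                   # a < 2: both roots below 3
assert largest_root(2.5) > 3                   # 2 < a <= 3: largest root exceeds 3
assert abs(largest_root(2.0) - 3) < 1e-12      # boundary case a = 2
print("largest root < 3 exactly when a < 2")
```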
|
91,766 | <p>Hello,
I am looking for a proof for the Chern-Gauss-Bonnet theorem. All I have found so far that I find satisfactory is a proof that the Euler class defined via Chern-Weil theory is equal to the pullback of the Thom class by the zero section, but I would like a proof of the fact that this class gives the Euler characteristic when coupled to the fundamental class. Thanks in advance. </p>
| Liviu Nicolaescu | 20,302 | <p>For a complete proof of the Gauss-Bonnet-Chern for <em>arbitrary</em> vector bundles (not just tangent bundles) see Section 8.3.2 of <a href="http://www.nd.edu/~lnicolae/Lectures.pdf" rel="noreferrer">these notes</a>. The proof is Chern's original proof, based on Chern-Weil theory, but the language is more modern.</p>
<p>For a purely topological proof, see Section 5.3 of <a href="http://www.nd.edu/~lnicolae/MS.pdf" rel="noreferrer">these notes</a>.</p>
|
533,628 | <p>I'm learning how to take indefinite integrals with U-substitutions on <a href="https://www.khanacademy.org/math/calculus/integral-calculus/u_substitution/v/u-substitution" rel="nofollow">khanacademy.org</a>, and in one of the videos he says that: $$\int e^{x^3+x^2}(3x^2+2x) \, dx = e^{x^3+x^2} + \text{constant}$$
I understand that the differential goes away, but not how the whole $(3x^2+2x)$ term goes away together with the $dx$.</p>
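<p>The reason the $(3x^2+2x)$ disappears is that it is exactly the derivative of the inner function $x^3+x^2$, so by the chain rule $\frac{d}{dx}e^{x^3+x^2}$ reproduces the whole integrand. A quick numerical sketch of that fact:</p>

```python
import math

F = lambda x: math.exp(x**3 + x**2)                    # candidate antiderivative
f = lambda x: math.exp(x**3 + x**2) * (3*x**2 + 2*x)   # integrand

h = 1e-6
for x in (-0.3, 0.1, 0.5, 1.0):
    numeric = (F(x + h) - F(x - h)) / (2 * h)          # central difference ~ F'(x)
    assert abs(numeric - f(x)) < 1e-4 * max(1.0, abs(f(x)))
print("F'(x) matches the integrand at the sampled points")
```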
| Arash | 92,185 | <p>Hint: Use <a href="http://en.wikipedia.org/wiki/Stolz%E2%80%93Ces%C3%A0ro_theorem" rel="noreferrer">Stolz-Cesaro Lemma</a>.</p>
<hr>
<p>To see the direct proof, consider that if :
$$
\displaystyle\left|\frac{\displaystyle\sum_{i=1}^na_i}{n}-\frac{\displaystyle\sum_{i=1}^{n-1}a_i}{n-1}\right|\\
=\displaystyle\left|\frac{\displaystyle\sum_{i=1}^{n-1}a_i}{n(n-1)}-\frac{a_n}{n}\right|\leq \displaystyle\left|\frac{\displaystyle\sum_{i=1}^{n-1}a_i}{n(n-1)}\right|+\left|\frac{a_n}{n}\right|\\
\leq \frac{2\displaystyle\max_{1\le i\le n}\left|a_i\right|}{n}\\
$$
Because $a_n$ is convergent to $a$, it is bounded, so the RHS goes to zero as $n$ goes to infinity, and therefore the LHS goes to zero too and the sequence of averages is convergent.</p>
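<p>A numerical illustration of the statement being proved (the Cesàro means of a convergent sequence converge to the same limit), with $a_n = 1 + 1/n \to 1$:</p>

```python
N = 100_000
a = [1 + 1 / n for n in range(1, N + 1)]   # a_n -> 1
mean = sum(a) / N                          # Cesaro mean, equals 1 + H_N / N here
assert abs(mean - 1) < 1e-3
print(mean)
```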
|
3,766,146 | <p>Show that:</p>
<p><span class="math-container">$$\lim\limits_{N\rightarrow\infty}\sum\limits_{n=1}^N\frac{1}{N+n}=\int\limits_1^2 \frac{dx}{x}=\ln(2)$$</span></p>
<hr />
<p><strong>My attempt:</strong></p>
<p>We build a Riemann sum with:</p>
<p><span class="math-container">$1=x_0<x_1<...<x_{N-1}<x_N=2$</span></p>
<p><span class="math-container">$x_n:=\frac{n}{N}+1,\,\,\,n\in\mathbb{N}_0$</span></p>
<p>That gives us:</p>
<p><span class="math-container">$$\sum\limits_{n=1}^N(x_n-x_{n-1})\frac{1}{x_n}=\sum\limits_{n=1}^N \left(\frac{n}{N}+1-\left(\frac{n-1}{N}+1\right)\right)\frac{1}{\frac{n}{N}+1}=\sum\limits_{n=1}^N \frac{1}{N}\frac{N}{N+n}=\sum\limits_{n=1}^N\frac{1}{N+n}$$</span></p>
<p>We know from the definition, that:</p>
<p><span class="math-container">$$\lim\limits_{N\rightarrow\infty}\sum\limits_{n=1}^N\frac{1}{N+n}=\lim\limits_{N\rightarrow\infty}\sum\limits_{n=1}^N(x_n-x_{n-1})\frac{1}{x_n}=\int\limits_1^2 \frac{dx}{x}$$</span></p>
<p>Now we show that,</p>
<p><span class="math-container">$$\int\limits_1^2 \frac{dx}{x}=\ln(2)$$</span></p>
<p>First we choose another Rieman sum with:</p>
<p><span class="math-container">$1=x_0<x_1<...<x_{N-1}<x_N=2$</span></p>
<p><span class="math-container">$x_n:=2^{\frac{n}{N}},\,\,\,n\in\mathbb{N}_0$</span></p>
<p>We get:</p>
<p><span class="math-container">$$\sum\limits_{n=1}^N(x_n-x_{n-1})\frac{1}{x_{n-1}}=\sum\limits_{n=1}^N\left(2^{\frac{n}{N}}-2^{\frac{n-1}{N}}\right)\frac{1}{2^{\frac{n-1}{N}}}=\sum\limits_{n=1}^N \left(2^{\frac{1}{N}}-1\right)=N\left(2^{\frac{1}{N}}-1\right)$$</span></p>
<p>Since we know that (with <span class="math-container">$x \in \mathbb{R})$</span>:</p>
<p><span class="math-container">$$\lim\limits_{x\rightarrow0}\frac{2^x-1}{x}=\ln(2)\Longrightarrow \lim\limits_{x\rightarrow \infty}x(2^{\frac{1}{x}}-1)=\ln(2)\Longrightarrow \lim\limits_{N\rightarrow \infty}N(2^{\frac{1}{N}}-1)=\ln(2)$$</span></p>
<p>We get:</p>
<p><span class="math-container">$$\ln(2)=\lim\limits_{N\rightarrow \infty}N(2^{\frac{1}{N}}-1)=\lim\limits_{N\rightarrow \infty}\sum\limits_{n=1}^N\left(2^{\frac{n}{N}}-2^{\frac{n-1}{N}}\right)\frac{1}{2^{\frac{n-1}{N}}}=\int\limits_1^2 \frac{dx}{x}=\lim\limits_{N\rightarrow\infty}\sum\limits_{n=1}^N\frac{1}{N+n}$$</span></p>
<p><span class="math-container">$\Box$</span></p>
<hr />
<p>Hey it would be great, if someone could check my reasoning (if its correct) and give me feedback and tips :)</p>
| Oliver Díaz | 121,671 | <p>Your solution using a Riemann sum approximation to the integral <span class="math-container">$\int^2_1\frac{dx}{x}$</span> looks fine to me. Yves Daoust is much more direct.
A similar method was developed <a href="https://www.youtube.com/watch?v=TyA8kQrYzNE" rel="nofollow noreferrer">here</a> to estimate another nice integral; I hope you appreciate it.</p>
<p>Here is a different method similar to Claude Leibovici's solution, but using a more elementary asymptotics for the harmonic sequence <span class="math-container">$H_n=\sum^n_{k=1}\frac{1}{k}$</span>.</p>
<p>It is known that</p>
<p><span class="math-container">$$
\begin{align}
0<H_n-\ln(n)-\gamma < \frac{1}{n+1}\tag{1}\label{one}
\end{align}
$$</span></p>
<p>for all <span class="math-container">$n\in\mathbb{N}$</span>, where <span class="math-container">$\gamma$</span> is a famous <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant" rel="nofollow noreferrer">Euler-Mascheroni</a> constant. The derivation of this is not difficult. It is based on a comparison between the integral <span class="math-container">$\int^n_1\frac{dx}{x}$</span> and <span class="math-container">$H_n$</span>.</p>
<p>Using <span class="math-container">$\eqref{one}$</span> with <span class="math-container">$n=2N$</span> and <span class="math-container">$n=N$</span> gives</p>
<p><span class="math-container">$$
0<H_{2N}-\ln(2N)-\gamma < \frac{1}{2N+1}\tag{2}\label{two}
$$</span></p>
<p><span class="math-container">$$
\begin{align}
0<H_{N}-\ln(N)-\gamma < \frac{1}{N+1}\tag{3}\label{three}
\end{align}
$$</span></p>
<p>Subtracting <span class="math-container">$\eqref{three}$</span> from <span class="math-container">$\eqref{two}$</span> gives
<span class="math-container">$$
-\frac{1}{N+1}< H_{2N}-H_N -\ln(2N)+\ln(N)<\frac{1}{2N+1}
$$</span></p>
<p>The term <span class="math-container">$\ln(2N)-\ln(N)=\ln(2)=\int^2_1 \frac{dx}{x}$</span>. Then applying the squeeze lemma you obtain
<span class="math-container">$$
\lim_{N\rightarrow\infty}\sum^N_{n=1}\frac{1}{N+n}=\lim_{N\rightarrow\infty}\sum^{2N}_{n=N+1}\frac{1}{n}=\lim_{N\rightarrow\infty}\big(H_{2N}-H_N\big)=\ln 2
$$</span></p>
<p>I learned this method from this <a href="https://www.youtube.com/watch?v=3AaLutx1Asg&t=6s" rel="nofollow noreferrer">source</a> where they use it to estimate another cool limit: <span class="math-container">$\lim_{n\rightarrow\infty}(H_{F_n}-H_{F_{n-1}} )$</span>, there <span class="math-container">$F_n$</span> is the Fibonacci sequence.</p>
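<p>A quick numerical check of the estimate (the error is of order $1/N$, consistent with the bounds above); pure-Python sketch:</p>

```python
import math

def tail_sum(N):
    # sum_{n=1}^{N} 1/(N+n) = H_{2N} - H_N
    return sum(1.0 / (N + n) for n in range(1, N + 1))

for N in (10, 1000, 100_000):
    assert abs(tail_sum(N) - math.log(2)) < 1.0 / N
print(tail_sum(100_000))   # close to ln 2 = 0.693147...
```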
|
4,396,765 | <p>What would be an efficient algorithm to determine if <span class="math-container">$n \in \mathbb{N}$</span> can be written as <span class="math-container">$n = a^b$</span> for some <span class="math-container">$a,b \in \mathbb{N}, b>1$</span>?</p>
<p>So far, I've tried:</p>
<pre><code>import math

def ispower(n):
    if n <= 3:
        return False
    LIM = math.floor(math.log2(n))
    for b in range(2, LIM + 1):
        a = math.pow(n, 1/b)
        a_floor = math.floor(a)
        if a == a_floor:
            return True
    return False
</code></pre>
<p>That is, checking if the <span class="math-container">$b-th$</span> roots are integer, for <span class="math-container">$b$</span> from 2 to <span class="math-container">$LIM$</span>, where <span class="math-container">$LIM$</span> stands for the ultimate limit of n being a power of 2.</p>
<p>Thanks for your comments.</p>
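<p>One way to keep the same floating-point idea but make the final check exact is to round the estimated root and verify it with integer arithmetic (a sketch of a possible fix; it still assumes <code>n ** (1.0 / b)</code> fits in a float):</p>

```python
def ispower_exact(n):
    if n < 4:
        return False
    for b in range(2, n.bit_length()):       # b can be at most log2(n)
        a = round(n ** (1.0 / b))            # float estimate of the b-th root
        if any(c >= 1 and c ** b == n for c in (a - 1, a, a + 1)):
            return True                      # exact integer verification
    return False

print(ispower_exact(1024), ispower_exact(97))   # True False
```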
| Gareth Ma | 948,125 | <p>If you restrict <span class="math-container">$b\geq 2$</span>, then the most efficient way is <em>probably</em> simply testing each root <span class="math-container">$b = 2, 3, \ldots, \lfloor\log_2 n\rfloor$</span>. This is of runtime <span class="math-container">$O(\log n)$</span> and can be implemented using the <code>gmpy2</code> module efficiently:</p>
<pre><code>import gmpy2
p = 197
e = 989
n = p ** e
b = 2
while 2 ** b <= n:
root, exact = gmpy2.iroot(n, b)
if exact:
print(f"n is a {b}-th power")
break
b += 1
else:
print(f"n is not a perfect power.")
</code></pre>
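<p>If <code>gmpy2</code> is not available, the same exact test can be done with the standard library only, using a binary-search integer $b$-th root (my own stdlib variant, not from the answer above):</p>

```python
def iroot(n, b):
    """Floor of the b-th root of n >= 1, by binary search on integers."""
    lo, hi = 1, 1 << (n.bit_length() // b + 1)   # hi is a safe upper bound
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** b <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def is_perfect_power(n):
    if n < 4:
        return False
    return any(iroot(n, b) ** b == n for b in range(2, n.bit_length() + 1))

print(is_perfect_power(197 ** 89))   # True: an 89-th power, found without floats
```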
|
2,889,651 | <p>The given points are $M(3,-1,2)$ and $M1(0,1,2)$, and plane A passes through these 2 points. Plane B is $2x-y+2z-1=0$ and it is normal to A. How do we find the equation of plane A? I have tried the cross product of vector $MM1$ and vector $b=(2,-1,2)$, but it didn't work.</p>
| P Vanchinathan | 28,915 | <p>Take an odd permutation on $n$ symbols for large $n$, for example three parallel transpositions for $n=6$ such as (12)(34)(56). The corresponding permutation matrix will be orthogonal and have determinant $-1$.</p>
<p>Its eigen space corresponding to $-1$ eigenvalue is 3-dimensional and hence not a reflection matrix.</p>
<p>Much easier to see that diagonal matrices with $\pm1$'s in the diagonal are orthogonal and among them $-1$ determinant is easy to find again providing counterexamples.</p>
<p>Third kind is: get many $2\times2$ reflection matrices, say 3 of them. Call them $A,B,C$. Now construct a $6\times6$ matrix in block-diagonal form using $A,B,C$ as diagonal blocks.</p>
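<p>The first counterexample is easy to verify mechanically; a small pure-Python sketch with $P$ the permutation matrix of $(12)(34)(56)$ (0-based indexing):</p>

```python
perm = [1, 0, 3, 2, 5, 4]                    # (12)(34)(56), 0-based
P = [[1 if perm[i] == j else 0 for j in range(6)] for i in range(6)]

# Orthogonality: P^T P = I (true for every permutation matrix).
PT = [[P[j][i] for j in range(6)] for i in range(6)]
PTP = [[sum(PT[i][k] * P[k][j] for k in range(6)) for j in range(6)] for i in range(6)]
assert PTP == [[1 if i == j else 0 for j in range(6)] for i in range(6)]

# det P = sign of the permutation = (-1)^(number of inversions) = -1 here.
inversions = sum(1 for i in range(6) for j in range(i + 1, 6) if perm[i] > perm[j])
assert inversions % 2 == 1                   # odd permutation, so det P = -1

# A 3-dimensional eigenspace for -1: e1-e2, e3-e4, e5-e6, since (Pv)_i = v_perm(i).
for v in ([1, -1, 0, 0, 0, 0], [0, 0, 1, -1, 0, 0], [0, 0, 0, 0, 1, -1]):
    assert [v[perm[i]] for i in range(6)] == [-x for x in v]
print("orthogonal, determinant -1, (-1)-eigenspace of dimension 3")
```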
|
72,943 | <p>I want to find the minima of a (multivariable) function under a constraint which has to be fulfilled on a whole interval, let's say
$$
\nabla f (\underline x) = 0 \ \\ \ c(\underline x,s)\geq0\ \forall s\in [0,1].
$$
How do I implement such a condition into the <code>Minimize[{f[x1,x2,...,xn],c[x1,...,xn,s]>=0 ?},{x1,x2,...,xn}]</code> function?
Thanks in advance!</p>
<p>Edit: Ok, small mistake. I wanted the condition to be an inequality. If I just change that in the example proposed, it is stated that these are not valid constraints:</p>
<pre><code>f[x_, y_] := x^2 + y^2;
c[x_, y_, s_] = 2 x + 3 y + s;
NMinimize[{f[x, y], c[x, y, s] >= 0, 1 >= s >= 0}, {x, y}]
</code></pre>
| Elena Fortina | 53,568 | <p>This belongs to the class of Semi-Infinite Programming problems for which ad-hoc algorithms must be used. The most intuitive one (discretization) involves building a finite grid on the interval and imposing the constraint on the grid points only, thus obtaining a classical constrained optimization problem with n constraints. One solves a sequence of such discretized problems on increasingly fine grids until a suitable convergence criterion is satisfied.
You can check out the Wikipedia page for a list of references on this and other approaches:
<a href="https://en.m.wikipedia.org/wiki/Semi-infinite_programming" rel="nofollow noreferrer">https://en.m.wikipedia.org/wiki/Semi-infinite_programming</a></p>
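<p>To make the discretization idea concrete on the toy example from the question ($f=x^2+y^2$, $c=2x+3y+s\ge0$ for $s\in[0,1]$), here is a deliberately crude pure-Python sketch; in practice the discretized constraints would be handed to a real NLP solver and the grid refined:</p>

```python
def f(x, y):
    return x * x + y * y

def c(x, y, s):
    return 2 * x + 3 * y + s

s_grid = [i / 20 for i in range(21)]            # discretize s in [0, 1]

def feasible(x, y):                             # impose c >= 0 on grid points only
    return all(c(x, y, s) >= 0 for s in s_grid)

steps = [i / 50 - 2 for i in range(201)]        # crude search grid for x and y
best = min((f(x, y), x, y) for x in steps for y in steps if feasible(x, y))
print(best)   # (0.0, 0.0, 0.0): the unconstrained minimum is already feasible here
```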
|
2,409,377 | <p>I'm trying to prove that if $F \simeq h_C(X)$ or "$X$ represents the functor $F$", then $X$ is unique up to unique isomorphism. I already know that if $h_C(X) \simeq F \simeq h_C(Y)$ that $s: X \simeq Y$ since Yoneda says that $h_C(X)$ is fully faithful, so reflects isomorphisms (in either direction). If $h_C(X) \simeq h_C(Y)$ is unique then I'm done as then $\psi(s) = \psi(s')$ ans so $s = s'$, where $\psi : h_C(X,Y) \to \text{Hom}_{C^{\wedge}}(h_C(X), h_C(Y))$ is the Yoneda bijection. </p>
<p>But how do I know that $h_C(X) \simeq h_C(Y)$ is unique?</p>
| Community | -1 | <p>You're asking the wrong question — you're considering the wrong kind of isomorphism. Natural isomorphisms between functors are irrelevant here — the subject is about natural isomorphisms between <em>functors equipped with a natural transformation from $F$</em>.</p>
<p>Put differently, the relevant notions of isomorphism are those of the coslice category $F / \widehat{C}$ </p>
<p>Simplifying, you are only interested in natural isomorphisms $h_C(X) \to h_C(Y)$ that make a commutative triangle</p>
<p>$$ \begin{matrix}
F &\to & h_C(X)
\\ & \searrow & \downarrow
\\ & & h_C(Y) \end{matrix} $$</p>
<p>and <em>this</em> isomorphism is unique.</p>
<p>Regarding the isomorphisms between $X$ and $Y$, the same applies — you are only interested in those isomorphisms that make the above triangle commute. Or put differently, the notion of isomorphism in the comma category $(F, h_C)$.</p>
|
9,540 | <p>I'm following <a href="http://reference.wolfram.com/mathematica/ref/FinancialData.html">http://reference.wolfram.com/mathematica/ref/FinancialData.html</a></p>
<p>I get the following:</p>
<pre><code>In[6]:= DateListLogPlot[FinancialData["^DJI", All]]
</code></pre>
<blockquote>
<p>During evaluation of In[6]:= DateListLogPlot::ntdt: The first argument to DateListLogPlot should be a list of pairs of dates and real values, a list of real values, or a list of several such lists. >></p>
<p>Out[6]= DateListLogPlot[Missing["NotAvailable"]]</p>
</blockquote>
<pre><code>In[8]:= FinancialData["DJI", All]
</code></pre>
<blockquote>
<p>During evaluation of In[8]:= FinancialData::notent: DJI is not a known entity, class, or tag for FinancialData. Use FinancialData[] for a list of entities. >></p>
<p>Out[8]= FinancialData["DJI", All]</p>
</blockquote>
<pre><code>In[9]:= FinancialData["^DJI"]
</code></pre>
<blockquote>
<p>Out[9]= Missing["NotAvailable"]</p>
</blockquote>
<p>What's going on here? Is the DJI data unavailable somehow?</p>
| user2047 | 2,047 | <p>The problem here is with the data provider Yahoo!. There have been intermittent problems with Yahoo! over the past few weeks with the DJI. The workaround is <code>WolframAlpha[]</code> as described in one of the answers above.</p>
<hr>
<p>(from Searke's comment)</p>
<p>Yahoo! is <a href="http://help.yahoo.com/kb/index?page=content&y=PROD_FIN&locale=en_US&id=SLN2332" rel="nofollow">no longer licensed to provide data downloads for the Dow Jones Index</a>. Since Yahoo! is (one of) the data providers used by <code>FinancialData[]</code>, <code>FinancialData[]</code> is also affected by this restriction.</p>
|
1,419,185 | <p>I am attempting to help someone with their homework and these concepts are a bit above me. I apologize for the terrible graph drawing. I am using a surface pro 3 and it has an awful camera so I can't take a picture of the problem so I attempted to trace it.</p>
<p><a href="https://i.stack.imgur.com/FzMSf.png" rel="nofollow noreferrer">http://i.stack.imgur.com/FzMSf.png</a></p>
<p>The problem shows a graph in the shape of a W. The left part comes downward to -3,0, the center is at 0,0 and the right is at 3,0. There are no numbers shown on the y axis. I believe the W shape indicates that it is a graph of a quartic equation.</p>
<p>The problem states:</p>
<p>"Find the formula for the graph above, given that it is a polynomial, that all zeroes of the polynomial are shown, that the exponents of each of the zeroes are the least possible, and that it passes through the point (-1, -8)."</p>
<p>Now, from what I could find while researching quartic equations, my formula should look like ax^4+bx^3+cx^2+dx+e, but I have no idea where to start.</p>
<p>Edit: I managed to get the solution as x^4-9x^2 by using Desmos and playing with the graph using your comments to guide me, but I am not sure how to go through the steps mathematically.</p>
| lhf | 589 | <p>If this is a quartic then you're given the three roots of its derivative. You can integrate this cubic to recover the quartic and use the known point to find the leading coefficient and the constant of integration. Note that $x=0$ is a zero of both the quartic and the cubic.</p>
<p>Solution:</p>
<blockquote class="spoiler">
<p> The derivative has roots $-3,0,3$ and so is $ax(x^2-9)$. The quartic is thus $a (x^4/4-9 x^2/2)+ c$. Since $0$ is a root, $c=0$. Now solve $-8=a(1/4-9/2)$ to find $a$.</p>
</blockquote>
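<p>Following the spoiler through numerically (under its reading of $-3,0,3$ as the critical points): solving $-8=a(1/4-9/2)$ gives $a=32/17$, and one can check the required properties in a few lines:</p>

```python
a = 32 / 17                                       # from -8 = a * (1/4 - 9/2)
q  = lambda x: a * (x**4 / 4 - 9 * x**2 / 2)      # the quartic, with c = 0
dq = lambda x: a * (x**3 - 9 * x)                 # its derivative, roots -3, 0, 3

assert abs(q(-1) + 8) < 1e-12                     # passes through (-1, -8)
assert q(0) == 0                                  # zero at the origin
assert dq(-3) == dq(0) == dq(3) == 0              # critical points at -3, 0, 3
print(a)
```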
|
698,702 | <p>Let $V$ be a vector space with finite dimension $n$ and $T:V\longrightarrow V$ is a linear transformation such that $T^{2}=0$. Then</p>
<ol>
<li><p>$rank(T)\leq\frac{n}{2}$</p></li>
<li><p>$n(T)\leq\frac{n}{2}$</p></li>
<li><p>$rank(T)\geq n(T)$</p></li>
<li><p>$rank(T)\geq \frac{n}{2}$</p></li>
</ol>
| J.R. | 44,389 | <p>I assume that you mean $n(T)=\dim \ker T$.</p>
<ol>
<li><strong>True</strong>. This follows from <a href="http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29#Properties" rel="nofollow">Sylvester's rank inequality</a>:
$$\operatorname{rank}(A)+\operatorname{rank}(B)-n\le \operatorname{rank}(AB)$$</li>
</ol>
<p>2., 3., 4. <strong>False</strong>. Counterexample: $T=0$.</p>
<p><em>Note:</em> 3. is true with the reverse inequality, because $T^2=0$ implies $\mathrm{im}\,T\subseteq\ker T$.</p>
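<p>For 1., the bound is sharp: in dimension $n=4$ the matrix sending $e_3\mapsto e_1$, $e_4\mapsto e_2$ and $e_1,e_2\mapsto0$ satisfies $T^2=0$ with rank exactly $n/2$. A self-contained sketch (with a naive Gaussian-elimination rank):</p>

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M, eps=1e-9):
    # naive Gauss-Jordan elimination over floats
    M = [row[:] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][col]) > eps), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [v / M[r][col] for v in M[r]]
        for i in range(len(M)):
            if i != r and abs(M[i][col]) > eps:
                M[i] = [x - M[i][col] * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

T = [[0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
assert matmul(T, T) == [[0] * 4 for _ in range(4)]   # T^2 = 0
assert rank(T) == 2                                  # rank = n/2: the bound is sharp
```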
|
788,814 | <p>I need to generate binomial random numbers:</p>
<blockquote>
<p>For example, consider binomial random numbers. A binomial random
number is the number of heads in N tosses of a coin with probability p
of a heads on any single toss. If you generate N uniform random
numbers on the interval $(0,1)$ and count the number less than $p$, then
the count is a binomial random number with parameters $N$ and $p$.</p>
</blockquote>
<p>In my case, my N could range from $1\times10^{3}$ to $1\times10^{10}$. My p is around $1\times10^{-7}$.</p>
<p>Often my $n\times p$ is around $1 \times 10^{-3}$.</p>
<p>There is a trivial implementation to generate such binomial random number through loops:</p>
<pre><code>public static int getBinomial(int n, double p) {
int x = 0;
for(int i = 0; i < n; i++) {
if(Math.random() < p)
x++;
}
return x;
}
</code></pre>
<p>This naive implementation is very slow. I tried the Acceptance Rejection/Inversion method [1] implemented in the Colt (<a href="http://acs.lbl.gov/software/colt/" rel="nofollow">http://acs.lbl.gov/software/colt/</a>) lib. It is very fast, but the distribution of its generated numbers only agrees with the naive implementation when $n \times p$ is not very small. In my case when $n\times p = 1\times 10^{-3}$, the naive implementation can still generate the number $1$ after many runs, but the Acceptance Rejection/Inversion method can never generate the number $1$ (it always returns $0$). </p>
<p>Does anyone know what is the problem here? Or can you suggest a better binomial random number generating algorithm that can solve my case.</p>
<p>[1] V. Kachitvichyanukul, B.W. Schmeiser (1988): Binomial random variate generation, Communications of the ACM 31, 216-222.</p>
| Greg Martin | 16,078 | <p>Here's an algorithm that gives a good approximation precisely when $np$ is small. The probability that there are exactly $k$ heads is $\binom nk p^k(1-p)^{n-k}$. Note that
$$
\frac{\binom n{k+1} p^{k+1}(1-p)^{n-(k+1)}}{\binom nk p^k(1-p)^{n-k}} = \frac p{1-p} \frac{n-k}{k+1},
$$
which is small when $np$ is small; hence each term is very large compared to the next term.</p>
<p>Now the conditional probability of having at least $j+1$ heads, given that you have at least $j$ heads, is
$$
\frac{\sum_{k=j+1}^n \binom nk p^k(1-p)^{n-k}}{\sum_{k=j}^n \binom nk p^k(1-p)^{n-k}} \approx \frac{\binom n{j+1} p^{j+1}(1-p)^{n-(j+1)}}{\binom nj p^j(1-p)^{n-j}} = \frac p{1-p} \frac{n-j}{j+1},
$$
since both sums are dominated by their initial terms.</p>
<p>Therefore you can generate the number of heads this way: provisionally assume there are $0$ heads. Then change $0$ to $1$ with probability $\frac p{1-p} n$, else halt. If you get $1$, change $1$ to $2$ with probability $\frac p{1-p} \frac{n-1}2$, else halt. And so on.</p>
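<p>A pure-Python transcription of this procedure (my own sketch; remember it is only an approximation, good when $np$ is small):</p>

```python
import random

def binomial_small_np(n, p):
    """Approximate Binomial(n, p) sampler using the step probabilities above."""
    j = 0
    while j < n:
        step = p / (1 - p) * (n - j) / (j + 1)   # ~ P(X >= j+1 | X >= j)
        if random.random() < step:
            j += 1
        else:
            break
    return j

random.seed(0)
samples = [binomial_small_np(10**6, 1e-7) for _ in range(10_000)]
print(sum(samples) / len(samples))   # should be near n*p = 0.1
```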
|
788,814 | <p>I need to generate binomial random numbers:</p>
<blockquote>
<p>For example, consider binomial random numbers. A binomial random
number is the number of heads in N tosses of a coin with probability p
of a heads on any single toss. If you generate N uniform random
numbers on the interval $(0,1)$ and count the number less than $p$, then
the count is a binomial random number with parameters $N$ and $p$.</p>
</blockquote>
<p>In my case, my N could range from $1\times10^{3}$ to $1\times10^{10}$. My p is around $1\times10^{-7}$.</p>
<p>Often my $n\times p$ is around $1 \times 10^{-3}$.</p>
<p>There is a trivial implementation to generate such binomial random number through loops:</p>
<pre><code>public static int getBinomial(int n, double p) {
int x = 0;
for(int i = 0; i < n; i++) {
if(Math.random() < p)
x++;
}
return x;
}
</code></pre>
<p>This naive implementation is very slow. I tried the Acceptance Rejection/Inversion method [1] implemented in the Colt (<a href="http://acs.lbl.gov/software/colt/" rel="nofollow">http://acs.lbl.gov/software/colt/</a>) lib. It is very fast, but the distribution of its generated numbers only agrees with the naive implementation when $n \times p$ is not very small. In my case when $n\times p = 1\times 10^{-3}$, the naive implementation can still generate the number $1$ after many runs, but the Acceptance Rejection/Inversion method can never generate the number $1$ (it always returns $0$). </p>
<p>Does anyone know what is the problem here? Or can you suggest a better binomial random number generating algorithm that can solve my case.</p>
<p>[1] V. Kachitvichyanukul, B.W. Schmeiser (1988): Binomial random variate generation, Communications of the ACM 31, 216-222.</p>
| Hagen von Eitzen | 39,174 | <p>The following method is theoretically exact, provided we have a "good" random generator <code>urand()</code>$\in[0,1)$. It uses the geometric distribution, i.e. the number of trials until a probability $p$ event occurs for the first time:</p>
<pre><code>int getGeometric(double p) {
return ceil( log( urand() ) / log(1-p) );
}
int getBinomial(int N, double p) {
int count = 0;
for (;;) {
int wait = getGeometric(p);
if (wait > N) return count;
count++;
N -= wait;
}
}
</code></pre>
<p>However, if $p$ is really small, you may want to replace <code>log(1-p)</code> with <code>(-p)</code>. Also <code>getGeometric</code> exhibits some rounding artefacts when <code>urand()</code> is <em>very</em> small.</p>
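<p>For reference, a pure-Python transcription of the same idea (my wording; <code>1 - random.random()</code> keeps the argument of <code>log</code> in $(0,1]$, and the <code>max(1, ...)</code> guards against the rounding artefact just mentioned):</p>

```python
import math
import random

def get_binomial(n, p):
    """Binomial(n, p) sampler via geometric waiting times, 0 < p < 1."""
    count = 0
    while True:
        u = 1.0 - random.random()                # u in (0, 1]
        wait = max(1, math.ceil(math.log(u) / math.log(1.0 - p)))
        if wait > n:
            return count
        count += 1
        n -= wait

random.seed(1)
mean = sum(get_binomial(50, 0.2) for _ in range(20_000)) / 20_000
print(mean)   # should be near 50 * 0.2 = 10
```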
<hr>
<p>The following is roughly based on suggestions in Knuth, <em>Seminumerical Algorithms</em>, 3.4.1.F(2). Many improvements especially for <code>getBeta</code> and <code>getGamma</code> are possible:</p>
<pre><code>#define NOTSOMANY 50
int getBinomial(int N, double p) {
if (N < NOTSOMANY) {
int count = 0;
for (int i=0; i<N; i++) if (urand() < p) count++;
return count;
}
int a = 1+N/2;
int b = N+1-a;
double X = getBeta(a,b);
if (X>p)
return getBinomial(a-1,p/X);
else
return a + getBinomial(b-1,(p-X)(1-X));
}
#define SOMELIMIT 3.0
// must be > 1.0, should be >= 3.0 to make getGamma fast,
// but should also be fairly small to make loop short
double getBeta(double a, double b) { // a>0, b>0
if (a < SOMELIMIT && b < SOMELIMIT) {
for (;;) {
double Y1 = exp(ln(urand())/a), Y2 = exp(ln(urand())/b);
if (Y1+Y2 <1) return Y1/(Y1+Y2);
}
} else {
double X1 = getGamma(a), X2 = getGamma(b);
return X1/(X1+X2);
}
}
double getGamma(double a) { // a > 1
for (;;) {
double Y = tan(Pi*urand());
double X = sqrt(2*a-1)*Y+a-1;
if ( X>0 && urand()
<= (1+Y*Y)*exp((a-1)*ln(X/(a-1))-sqrt(2*a-1)*Y) )
return X;
}
}
</code></pre>
<hr>
<p>Yet another approach: Instead of $N$ trials with $p$, make $N/2$ trials with $1-(1-p)^2=(2-p)p$ to count how many pairs ($M$, say, with $M\approx Np$ if $p$ is small) of consecutive trials have at least one hit. Among these, the pairs with two hits are $(M,p/(2-p))$ binomially distributed</p>
<pre><code>int getBinomial(int N, double p) {
if (N < NOTSOMANY) {
int count = 0;
for (int i=0; i<N; i++) if (urand() < p) count++;
return count;
}
if ( p > 0.5 )
return N-getBinomial(N,1-p);
else {
int M = getBinomial(N/2,p*(2-p));
int result = M + getBinomial(M, p/(2-p));
if (N&1)
if (urand() < p) result++;
return result;
}
}
</code></pre>
<p>I didn't investigate that further, but it is definitely useful only if $Np$ is not big (as each step of the result ultimately comes from some <code>count++</code>)</p>
|
306,337 | <p>I have a question.
Is this integral improper?
$$\int_0^\infty \frac{5x}{e^x-e^{-x}} \, dx = \int_0^a \frac{5x}{e^{x}-e^{-x}} \, dx+ \int_a^\infty \frac{5x}{e^x-e^{-x}} \, dx$$</p>
<p>Why is $\displaystyle\int_0^a \frac{5x}{e^{x}-e^{-x}}dx$ an improper integral at the point $x=0$? And why is $\displaystyle\int_a^\infty \frac{5x}{e^{x}-e^{-x}} \, dx$ an improper integral as $x \to \infty$?</p>
<p>Thanks!</p>
| Michael Hardy | 11,667 | <p>An integral
$$
\int_a^b f(x) \, dx
$$
is improper at $a$ if it must be defined as
$$
\lim_{c\downarrow a} \int_c^b f(x)\,dx.
$$
In Lebesgue's theory of integration, that would be necessary only if for every $c>a$, the integral $\displaystyle\int_a^c f(x)\,dx$ cannot be defined because the integrals of the positive and negative parts are both infinite. For example $\displaystyle \int_0^\infty \frac{\sin x}{x}\,dx$ is improper at $\infty$ because if one integrates only over those regions where $(\sin x)/x$ is positive, one gets $+\infty$, and if one integrates only over regions where $(\sin x)/x$ is negative, one gets $-\infty$. But $\displaystyle\lim_{b\to\infty} \int_0^b \frac{\sin x}{x}\,dx$ nonetheless exists and is finite.</p>
<p>This doesn't happen with $\displaystyle\int_0^b \frac{5x}{e^x-e^{-x}}\,dx$ at $0$. But in other contexts than Lebesgue's theory, when it happens that the only good way to find $\displaystyle\int_0^b$ is by finding $\displaystyle\lim_{c\downarrow 0}\int_c^b$, then one might call it an improper integral. In this case, the function approaches $5$, and thus remains bounded, as $x$ approaches $0$, and that's how one knows it's not improper in Lebesgue's sense. But the function is undefined at $0$, and therefore so is its antiderivative, so if you're using the fundamental theorem of calculus, i.e. if you're finding the integral by finding the antiderivative, it may be that taking the limit is the only way to find the integral, and so in that sense it's improper.</p>
|
120,808 | <pre><code>Limit[Sum[k/(n^2 - k + 1), {k, 1, n}], n -> Infinity]
</code></pre>
<p>This should converge to <code>1/2</code>, but <code>Mathematica</code> simply returns <code>Indeterminate</code> without calculating (or so it would appear). Any specific reason why it can't handle this? Did I make a mistake somewhere?</p>
| Jens | 245 | <p>In cases where you can't get a symbolic result, it's also possible to use a completely numerical approach:</p>
<pre><code>Needs["NumericalCalculus`"]
sum[n_?NumberQ] := NSum[k/(n^2 - k + 1), {k, 1, n}]
NLimit[sum[n], n -> Infinity]
(* ==> 0.499999 *)
</code></pre>
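<p>Independent of Mathematica, the limit is easy to confirm numerically; the partial sums behave like $1/2 + O(1/n)$ (a quick Python sketch):</p>

```python
def s(n):
    return sum(k / (n * n - k + 1) for k in range(1, n + 1))

for n in (10, 100, 10_000):
    print(n, s(n))   # approaches 1/2 from above as n grows
```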
|
2,229,244 | <p>combinatorial proof of $2^{n+1}\nmid (n+1)(n+2)\dots (2n)$.</p>
<p>I don't have any ideas about proving $sth \nmid sthx$ using combinatorics. For showing $sth \mid sthx$ we show that there is a counting problem whose answer is $\frac{sthx}{sth\cdot k}$, but what about proving that something does <em>not</em> divide something using combinatorics?</p>
| Ethan Bolker | 72,858 | <p>You have the logic backwards. The argument shows directly that "evaluation at $i$" is a surjection, without any mention or discussion of cardinality. There's no "because" needed in that part of the proof.</p>
<p>Now <em>because</em>
you have found a surjection, you know the cardinality of $\mathbb{R}[x]$ is at least as great as the cardinality of $\mathbb{C}$.</p>
<p>There are other polynomials in $\mathbb{R}[x]$ besides the linear ones. Evaluation at $i$ does map them to complex numbers, so "evaluation at $i$" is not injective.</p>
|
3,162,294 | <p>I've been trying to prove this statement by opening up things on the left hand side using the chain rule but am really getting nowhere. Any tips/hints would be very helpful and appreciated!</p>
| Peter Foreman | 631,494 | <p>I assume throughout that the die used is <span class="math-container">$6$</span> sided.</p>
<p>Taking each value of <span class="math-container">$X$</span> separately there are <span class="math-container">$11$</span> possible values the 'larger sum' can take - <span class="math-container">$\{2,3,4,5,6,7,8,9,10,11,12\}$</span>. In order to achieve <span class="math-container">$X=2$</span> in <span class="math-container">$n$</span> rolls of the die, one must roll <span class="math-container">$n$</span> <span class="math-container">$1$</span>s which has a probability of <span class="math-container">$\frac1{6^n}$</span> of occurring. For <span class="math-container">$X=3$</span> one must roll a single <span class="math-container">$2$</span> and otherwise <span class="math-container">$1$</span>s. The probability of this occurring is <span class="math-container">$\binom{n}{1}\frac{1}{6^n}=\frac{n}{6^n}$</span> because the <span class="math-container">$2$</span> can be rolled in any of the <span class="math-container">$n$</span> rolls and there are <span class="math-container">$n$</span> total rolls occurring with a probability of <span class="math-container">$\frac16$</span> of occurring. For <span class="math-container">$X=4$</span> one can roll any number of <span class="math-container">$2$</span>s greater than or equal to two and otherwise roll a <span class="math-container">$1$</span>. The probability of this occurring is equal to <span class="math-container">$\frac1{3^n}-\frac{1+n}{6^n}=\frac{2^n-n-1}{6^n}$</span> because this is just equal to the probability of rolling <span class="math-container">$n$</span> <span class="math-container">$1$</span>s or <span class="math-container">$2$</span>s and not rolling either all <span class="math-container">$1$</span>s or only one <span class="math-container">$2$</span> (<span class="math-container">$X=2$</span> and <span class="math-container">$X=3$</span> respectively). One can continue like this to formulate every possibility up to <span class="math-container">$X=12$</span>.</p>
|
1,063,774 | <blockquote>
<p>Prove that the <span class="math-container">$\lim_{n\to \infty} r^n = 0$</span> for <span class="math-container">$|r|\lt 1$</span>.</p>
</blockquote>
<p>I can't think of a sequence to compare this to that'll work. L'Hopital's rule doesn't apply. I know there's some simple way of doing this, but it just isn't coming to me. :(</p>
| Barry Cipra | 86,747 | <p>Let <span class="math-container">$u=1-|r|$</span> and note that <span class="math-container">$|r|\lt1$</span> implies <span class="math-container">$0\lt u\le1$</span>. This implies <span class="math-container">$0\le1-u^2\lt1$</span>, which in turn implies <span class="math-container">$0\le1-u\lt1/(1+u)$</span>, which in turn implies the first (strict) inequality in the string of assertions</p>
<p><span class="math-container">$$|r^n-0|=|r|^n=(1-u)^n\lt{1\over(1+u)^n}\le{1\over1+nu}\lt{1\over nu}=\epsilon{1\over nu\epsilon}\le\epsilon{\lceil1/(u\epsilon)\rceil\over n}$$</span></p>
<p>for all <span class="math-container">$n\ge1$</span> and any <span class="math-container">$\epsilon\gt0$</span>. So letting <span class="math-container">$N_\epsilon=\lceil1/(u\epsilon)\rceil$</span>, we find that <span class="math-container">$n\ge N_\epsilon$</span> implies <span class="math-container">$|r^n-0|\lt\epsilon$</span>, which tells us <span class="math-container">$\lim_{n\to\infty}r^n=0$</span>.</p>
<p>Remark: This answer follows much the same logic as those of aes and Richard, differing mainly in style of presentation. Its one advantage, if any, is that it does not split out the trivial case of <span class="math-container">$r=0$</span>. The key inequality, <span class="math-container">$(1+u)^n\ge1+nu$</span>, can, if need be, be proved by induction. The other equalities and inequalities in the display are straightforward to verify; in particular, the equal sign preceding the sudden appearance of <span class="math-container">$\epsilon$</span> is justified by the fact that all we're doing there is multiplying and dividing by a nonzero quantity.</p>
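<p><em>Added numerical illustration (not part of the original answer):</em> a quick Python check that the explicit <span class="math-container">$N_\epsilon=\lceil1/(u\epsilon)\rceil$</span> really works for some sample values.</p>

```python
import math

# For several r with |r| < 1 and several tolerances eps, verify that
# n = N_eps = ceil(1/(u*eps)) with u = 1 - |r| already gives |r^n| < eps.
for r in [0.5, -0.9, 0.99]:
    for eps in [1e-2, 1e-6]:
        u = 1 - abs(r)                  # 0 < u <= 1
        N = math.ceil(1 / (u * eps))
        assert abs(r**N) < eps, (r, eps, N)
print("the bound N_eps works for all sampled cases")
```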
|
2,940,649 | <p>solve <span class="math-container">$$\frac{|x|-1}{|x|-3} \ge 0$$</span></p>
<p>Here <span class="math-container">$x \neq 3$</span> and <span class="math-container">$x \neq -3$</span>, so the denominator is nonzero.</p>
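<p><em>Added check (not part of the original post):</em> a brute-force Python scan of the sign of the expression, which agrees with the solution set <span class="math-container">$[-1,1]\cup(-\infty,-3)\cup(3,\infty)$</span> obtained from the usual sign chart.</p>

```python
# Scan a grid and keep the points where (|x| - 1)/(|x| - 3) >= 0,
# skipping x = ±3 where the expression is undefined.
xs = [k / 100 for k in range(-600, 601)]
sol = [x for x in xs
       if abs(abs(x) - 3) > 1e-9 and (abs(x) - 1) / (abs(x) - 3) >= 0]

# Every solution point lies in [-1, 1] or has |x| > 3, matching the sign chart.
assert all(abs(x) <= 1 or abs(x) > 3 for x in sol)
print(min(sol), max(sol))   # -6.0 6.0 (the ends of the sampled grid)
```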
| Phil H | 554,494 | <p>You don't have enough information to determine the intersection of fish with red and blue stripes. Knowing the total number of fish in the tank would help. Let's call that <span class="math-container">$N$</span>.</p>
<p>Then <span class="math-container">$.45N = 70 + 50 - 2RB$</span> where <span class="math-container">$RB$</span> is the number of fish with both red and blue stripes.</p>
<p>Therefore <span class="math-container">$RB = 60 - .225N$</span></p>
<p>i) Just red striped fish <span class="math-container">$R = 70 - RB$</span> </p>
<p>ii) Just blue striped fish <span class="math-container">$B = 50 - RB$</span></p>
<p>iii) Both red and blue <span class="math-container">$= RB$</span></p>
|
1,912,628 | <p>Can someone give me an example or a hint to come up with a countable compact set in the real line with infinitely many accumulation points?
Thank you in advance!</p>
| user326210 | 326,210 | <p>What about if we define $H = \{ \frac{1}{n} : n\in \mathbb{N}\} \cup \{0\}$, a sort of standard countable compact set with 0 at its sole limit point, then define your countable compact set to be:</p>
<p>$$S = \{ x + y \mid x, y \in H\}.$$</p>
<p>To unpack the thought behind this definition:</p>
<ol>
<li>This set $S$ is countable.</li>
<li>This set $S$ is bounded, because its smallest member is 0 and its largest member is 1+1 = 2.</li>
<li>Each $x \in H$ is a limit point of $S$. If we fix $x$ and vary $y\in H$ in the definition of $S$, we can see that $x$ is a limit point of $S$.</li>
<li>This set $S$ is closed, and therefore compact, because it is also a bounded subset of the real line. ($S$ is closed because it is the sum of two compact subsets of $\mathbb{R}$, and a sum of two compact sets is itself compact, hence closed in $\mathbb{R}$.)</li>
</ol>
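<p><em>Added illustration (not part of the original answer):</em> a finite truncation of <span class="math-container">$S$</span> in Python, exhibiting points of <span class="math-container">$S$</span> accumulating at the point <span class="math-container">$1/2\in H$</span>.</p>

```python
# Finite truncations of H = {1/n : n in N} ∪ {0} and of the sumset
# S = {x + y : x, y in H}.
N = 1000
H = {0.0} | {1.0 / n for n in range(1, N + 1)}
S = {x + y for x in H for y in H}

# Points of S within 0.01 of 1/2, other than 1/2 itself:
# e.g. 1/2 + 1/n for every n > 100, so 1/2 is an accumulation point.
near = sorted(s for s in S if 0 < abs(s - 0.5) < 1e-2)
assert len(near) > 100
print(near[:3])
```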
|
<p>[This is the question; I want to know about part (iv).] Can a mapping be one-to-one and onto without being defined everywhere?
In (iv), D has four elements and B has three, but only three elements of D are used, so by definition it is not a function. Then how can it be onto or one-to-one if it is not even a function? Is that possible?</p>
<a href="https://i.stack.imgur.com/QfArD.png" rel="nofollow noreferrer">1</a></p>
| Ykh | 897,215 | <p>One- one and onto is merely a type of function . So yes the function needs to be defined everywhere in it's domain for it to be one - one or onto. The question you are referring to will therefore will be neither defined in domain nor one- one or onto</p>
|
1,715,232 | <p>Suppose $f_n$ converges uniformly to $f$ and $f_n$ are differentiable. Is it true that f will be differentiable?</p>
<p>My initial guess is no because $f_n= \frac{\sin(nx)}{\sqrt n}.$ Is this right? And more examples would be greatly appreciated.</p>
| Plutoro | 108,709 | <p>Your sequence converges uniformly to 0, which is differentiable. Consider $f_n(x)=\sqrt{x^2+1/n}$, which converges uniformly to $|x|$. Here is an outline of how you would prove that:</p>
<ol>
<li>Prove that the largest difference between $f_n(x)$ and $|x|$ occurs when $x=0$.</li>
<li>Prove that if $n$ is large enough, that $|f_n(0)-|0||$ can be made arbitrarily small.</li>
<li>Explain why this means that the convergence is uniform.</li>
<li>Use the definition of the derivative to show that $|x|$ is not differentiable at $x=0$.</li>
</ol>
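<p><em>Added numerical companion (not from the original answer):</em> the supremum in step 1 is <span class="math-container">$f_n(0)-|0|=1/\sqrt n$</span>, which can be checked on a grid.</p>

```python
import math

def f(n, x):
    return math.sqrt(x * x + 1.0 / n)

# The gap |f_n(x) - |x|| is largest at x = 0, where it equals 1/sqrt(n);
# since 1/sqrt(n) -> 0, the convergence f_n -> |x| is uniform.
for n in [1, 10, 100, 10000]:
    grid = [k / 100.0 for k in range(-500, 501)]
    sup = max(abs(f(n, x) - abs(x)) for x in grid)
    assert abs(sup - 1.0 / math.sqrt(n)) < 1e-12, (n, sup)
print("sup |f_n - |x|| = 1/sqrt(n), attained at x = 0, for each sampled n")
```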
|
535,226 | <p>I am trying to check whether or not the sequence $$a_{n} =\left\{\frac{n^n}{n!}\right\}_{n=1}^{\infty}$$ is bounded, convergent and ultimately monotonic (there exists an $N$ such that for all $n\geq N$ the sequence is monotonically increasing or decreasing). However, I'm having a lot of trouble finding a solution that sufficiently satisfies me.</p>
<p>My best argument so far is as follows,</p>
<p>$$a_{n} = \frac{n\cdot n\cdot n\cdot \ldots\cdot n}{n(n-1)(n-2)(n-3)\dots(2)(1)} = \frac{n}{n}\cdot \frac{n}{n-1}\cdot \ldots \cdot \frac{n}2\cdot n$$</p>
<p>so $\lim a_{n}\rightarrow \infty$ since $n<a_{n}$ for all $n>1$. Since the sequence is divergent, it follows that the function must be ultimately monotonic.</p>
<p>This feels a little dubious to me, I feel like I can form a much better argument than that, or at the very least a more elegant one. I've tried to assume $\{a_{n}\}$ approaches some limit $L$ so there exists some $N$ such that</p>
<p>$|a_{n} - L| < \epsilon$ whenever $n>N$ and derive a contradiction, but this approach got me nowhere.</p>
<p>Finally, I've also tried to use the fact that $\frac{a_{n+1}}{a_n}\rightarrow e$ to help me, but I couldn't find an argument where that fact would be useful.</p>
| Brian M. Scott | 12,042 | <p>HINT for the last part: Note that</p>
<p>$$\frac{a_{n+1}}{a_n}=\frac{\frac{(n+1)^{n+1}}{(n+1)!}}{\frac{n^n}{n!}}=\frac{(n+1)^{n+1}n!}{n^n(n+1)!}=\frac{(n+1)^{n+1}}{n^n(n+1)}=\left(\frac{n+1}n\right)^n\;.$$</p>
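<p><em>Added remark (illustration, my own):</em> this ratio is the classical sequence <span class="math-container">$\left(1+\frac1n\right)^n\to e$</span>, and it is <span class="math-container">$\ge 2$</span> for every <span class="math-container">$n$</span>, so <span class="math-container">$a_n$</span> at least doubles at each step: it is ultimately monotonic and divergent. A quick check:</p>

```python
import math

# a_n = n^n / n!, so a_{n+1}/a_n = (n+1)^{n+1} n! / (n^n (n+1)!) = (1 + 1/n)^n.
def ratio(n):
    return (1 + 1 / n) ** n

assert ratio(1) == 2.0                              # the ratio starts at 2 ...
assert all(ratio(n) > 2 for n in range(2, 1000))    # ... and stays above 2,
assert abs(ratio(10**6) - math.e) < 1e-5            # approaching e from below
print(ratio(10**6))
```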
|
323,781 | <p>In the <a href="https://arxiv.org/abs/1902.07321" rel="noreferrer">paper</a> by Griffin, Ono, Rolen and Zagier which appeared on the arXiv today, (Update: published now in <a href="https://www.pnas.org/content/early/2019/05/20/1902572116" rel="noreferrer">PNAS</a>) the abstract includes</p>
<blockquote>
<p>In the case of the Riemann zeta function, this proves the GUE random
matrix model prediction in derivative aspect. </p>
</blockquote>
<p>In more detail, towards the bottom of the second page they say</p>
<blockquote>
<p>Theorem 3 in the case of the Riemann zeta function is the <em>derivative
aspect Gaussian Unitary Ensemble</em> (GUE) random matrix model
prediction for the zeros of Jensen polynomials. To make this precise,
recall that Dyson, Montgomery, and Odlyzko ... conjecture that the
non-trivial zeros of the Riemann zeta function are distributed like
the eigenvalues of random Hermitian matrices. These eigenvalues
satisfy Wigner's Semicircular Law, as do the roots of the Hermite
polynomials <span class="math-container">$H_d(X)$</span>, when suitably normalized, as
<span class="math-container">$d\rightarrow+\infty$</span> ... The roots of <span class="math-container">$J_{\gamma}^{d,0}(X)$</span>, as
<span class="math-container">$d\rightarrow+\infty$</span>, approximate the zeros of
<span class="math-container">$\Lambda\left(\frac{1}{2}+z\right)$</span>, ... and so GUE predicts that
these roots also obey the Semicircular Law. Since the derivatives of
<span class="math-container">$\Lambda\left(\frac{1}{2}+z\right)$</span> are also predicted to satisfy
GUE, it is natural to consider the limiting behavior of
<span class="math-container">$J_{\gamma}^{d,n}(X)$</span> as <span class="math-container">$n\rightarrow+\infty$</span>. The work here proves
that these derivative aspect limits are the Hermite polynomials
<span class="math-container">$H_d(X)$</span>, which, as mentioned above, satisfy GUE in degree aspect.</p>
</blockquote>
<p>I am hoping someone can further explain this. In particular, does this result shed any light on the horizontal distribution of the zeros of the derivative of the Riemann zeta function?</p>
<hr>
<p>Edit: Speiser showed that the Riemann hypothesis is equivalent to <span class="math-container">$\zeta^\prime(s)\ne 0$</span> for <span class="math-container">$0<\sigma<1/2$</span>. Since then quite a lot of work has gone into studying the horizontal distribution of the zeros of <span class="math-container">$\zeta^\prime(s)$</span>. For example, <a href="https://arxiv.org/abs/1002.0372" rel="noreferrer">Duenez et al.</a> <a href="https://iopscience.iop.org/article/10.1088/0951-7715/23/10/014" rel="noreferrer">compared</a> this distribution with the radial distribution of zeros
of the derivative of the characteristic polynomial of a random unitary matrix. (Caveat: I'm not up to date on all the relevant literature.)</p>
<p>This is a very significant question. If the GUE distribution holds for the Riemann zeros, then rarely but infinitely often there will be a pair with less than half the average (rescaled) gap. From this, by the work of Conrey and Iwaniec, one gets good lower bounds for the class number problem.</p>
<p>In <a href="https://arxiv.org/abs/1002.1616" rel="noreferrer">this</a> <a href="https://www.sciencedirect.com/science/article/pii/S0001870812001600" rel="noreferrer">paper</a> Farmer and Ki showed that if the derivative of the Riemann zeta function has sufficiently many zeros close to the
critical line, then the zeta function has many closely spaced zeros, which by the above, also solves the class number problem. </p>
<p>The question of modeling the horizontal distribution of the zeros of <span class="math-container">$\zeta^\prime(s)$</span> with the radial distribution of zeros
of the derivative of the characteristic polynomial of a random unitary matrix, is intimately connected to the class number problem. Based on the answer of Griffin below, I don't think that's what the Griffin-Ono-Rolen-Zagier paper does, but it's worth asking about.</p>
| Will Sawin | 18,060 | <p>The GUE random matrix model predicts that the zeroes should satisfy the local statistics of random matrices. It doesn't predict that the zeroes should satisfy the global statistics of random matrices, because it's not clear what that would even mean unless the zeroes are all contained in some bounded interval. (In fact they get more frequent the further away from the origin).</p>
<p>The Hermite polynomials satisfy the global statistics of random matrices (the semicircular law) but not the local statistics (their zero spacing is very even). So it is not clear how a precise relationship between the GUE conjecture and this new theorem could be formulated.</p>
<p>For a toy model of what's going on here, we could take the function field model, where zeta functions look like <span class="math-container">$$ \frac{ \sum_{n=0}^{2g} a_n q^{-ns} }{ (1-q^{-s}) (1-q^{1-s} ) }.$$</span> The first step here is to multiply by a factor to make the functional equation as simple as possible, which for us is <span class="math-container">$q^{gs}$</span>, and the second step is to multiply by a factor that kills the poles, which for us is <span class="math-container">$ (1-q^{-s}) (1-q^{1-s} )$</span>. So the object being differentiated is <span class="math-container">$$\sum_{n=0}^{2g} a_n q^{(g-n) s}.$$</span> The <span class="math-container">$k$</span>th derivative (with respect to <span class="math-container">$s$</span>) is <span class="math-container">$$ (\log q)^k \sum_{n=0}^{2g} a_n (g-n)^k q^{(g-n)s}.$$</span> Renormalizing, we get <span class="math-container">$$ \sum_{n=0}^{2g} a_n \left(1 - \frac{n}{g} \right)^k q^{(g-n)s}.$$</span> As <span class="math-container">$k$</span> goes to <span class="math-container">$\infty$</span> and remains even, this converges to <span class="math-container">$$ a_0 q^{gs} + a_{2g} q^{-g s},$$</span> which has all roots on the half-line, perfectly evenly spaced, since <span class="math-container">$$a_{2g} = q^g a_0$$</span> by the functional equation. </p>
<p>If we view these roots as lying on a circle (i.e. we take <span class="math-container">$q^{-s}$</span> for <span class="math-container">$s$</span> a root), we can say that they perfectly satisfy the global GUE statistics, being evenly distributed on the circle, but do not satisfy local GUE statistics, being perfectly evenly spaced. This is true regardless of whether our original zeta function behaved like a characteristic polynomial of a random matrix, or even whether it satisfied the Riemann hypothesis.</p>
<p>It is possible that a similar phenomenon is occurring for the Jensen polynomials of high derivatives.</p>
|
1,535,914 | <p>When I plot the following function, the graph behaves strangely:</p>
<p><span class="math-container">$$f(x) = \left(1+\frac{1}{x^{16}}\right)^{x^{16}}$$</span></p>
<p>While <span class="math-container">$\lim_{x\to +\infty} f(x) = e$</span> the graph starts to fade at <span class="math-container">$x \approx 6$</span>. What's going on here? (plotted on my trusty old 32 bit PC.)
<a href="https://i.stack.imgur.com/YL7WQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/YL7WQ.png" alt="Approximating e" /></a></p>
<p>I guess it's because of computer approximation and loss of significant digits. So I started calculating the binary representation to see if this is the case. However in my calculations values of <span class="math-container">$x=8$</span> should still behave nicely.</p>
<p>If computer approximation would be the problem, then plotting this function on a 64 bit pc should evade the problem (a bit). I tried the Wolfram Alpha servers:</p>
<p><a href="https://i.stack.imgur.com/joNc2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/joNc2.png" alt="Wolfram alpha" /></a></p>
<p>The problem remains for the same values of <span class="math-container">$x$</span>.</p>
<h3>Questions</h3>
<ol>
<li>Could someone pinpoint the problem? What about the 32 vs 64 bit plot?</li>
<li>Is there a way to predict at which <span class="math-container">$x$</span>-value the graph of the function below would start to fail?
<span class="math-container">$$f_n(x) = \left(1+\frac{1}{x^n}\right)^{x^n}$$</span></li>
</ol>
| hmakholm left over Monica | 14,366 | <p>Your problem is the finite precision of floating-point arithmetic. There are only so many numbers near $1$ that can be represented by the computer's floating-point format, and the larger your $x$ is, the more of the <em>difference</em> between $1$ and $1+\frac{1}{x^{16}}$ (which is what really matters when raising to a huge power) will be lost to rounding of the latter intermediate result.</p>
<p>In order to make precise calculations with such expressions possible, many programming languages provide primitive functions for computing $x\mapsto \log(1+x)$ and $x\mapsto e^x-1$ directly, <em>without</em> ever representing the intermediate result $1+x$ and $e^x$ explicitly.</p>
<p>To use this in your case you would rewrite
$$ (1+a)^b = e^{b \log (1+a)}$$
and write something like (e.g., in C, with <code><math.h></code>):</p>
<pre><code>double x16 = pow(x,16);
double result = exp(log1p(1/x16) * x16);
</code></pre>
<hr>
<p>You shouldn't expect any difference between computing on a 32-bit or a 64-bit machine, because even on "32-bit" machines the usual precision of floating point calculations is a 64-bit representation with about 16 significant digits of precision.</p>
<p>(Conversely, both 32-bit and 64-bit machines do support a 32-bit floating-point format, but operations on it are rarely any faster (in contrast to integer arithmetic, where 64-bit operations <em>are</em> slower on 32-bit machines), so in practice it is only used when one needs to store so many floating-point numbers that the combined size of them all becomes a concern.)</p>
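<p><em>Added illustration (a Python analogue of the C snippet above; not part of the original answer):</em> comparing the naive formula with the <code>log1p</code> version shows where the naive one collapses.</p>

```python
import math

def naive(x):
    x16 = x ** 16
    return (1 + 1 / x16) ** x16          # forms 1 + tiny explicitly: rounding!

def stable(x):
    x16 = x ** 16
    return math.exp(math.log1p(1 / x16) * x16)   # never forms 1 + tiny

for x in [2.0, 6.0, 8.0, 10.0]:
    print(x, naive(x), stable(x))

# Once 1/x**16 drops below about 2**-53 (roughly x > 9.5), the naive
# version computes 1.0**big == 1.0, while the stable one stays near e.
assert naive(10.0) == 1.0
assert abs(stable(10.0) - math.e) < 1e-9
```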
|
3,217,130 | <p>How can I prove this by induction? I am stuck when there is a <span class="math-container">$\Sigma$</span> and two variables, how would I do it? I understand the first step but have problems when i get to the inductive step.</p>
<p><span class="math-container">$$\sum_{j=1}^n(4j-1)=n(2n+1)$$</span></p>
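<p><em>(Added aside, not part of the original post:)</em> before setting up the induction it can be reassuring to check the identity numerically for small <span class="math-container">$n$</span>:</p>

```python
# Sanity check of sum_{j=1}^n (4j - 1) = n(2n + 1) for n = 1, ..., 199.
for n in range(1, 200):
    assert sum(4 * j - 1 for j in range(1, n + 1)) == n * (2 * n + 1)
print("identity verified for n = 1..199")
```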
| Hongyi Huang | 619,069 | <p>Argue by contradiction: suppose there exists <span class="math-container">$a,b$</span> such that <span class="math-container">$ab\ne ba$</span>, then <span class="math-container">$b\ne a^{-1}ba$</span>. Let <span class="math-container">$c = a^{-1}ba$</span>, then <span class="math-container">$ba = ac$</span> and so <span class="math-container">$b = c = a^{-1}ba$</span> by assumption, a contradiction.</p>
|
40,463 | <p>If a rectangle is formed from rigid bars for edges and joints
at vertices, then it is flexible in the plane: it can flex
to a parallelogram.
On any smooth surface with a metric, one can define a linkage
(e.g., a rectangle) whose edges are geodesics of fixed length,
and whose vertices are joints, and again ask if it is rigid
or flexible on the surface.
This leads to my first, specific question:</p>
<blockquote>
<p><b>Q1</b>.
Is a rhombus, or a rectangle,
always flexible on a sphere?</p>
</blockquote>
<p><br /><img src="https://i.stack.imgur.com/FRW0R.jpg" alt="alt text" /><br /></p>
<p>It seems the answer should be <em>Yes</em> but I am a bit uncertain
if there must be a restriction on the edge lengths.
(In the above figure, the four arcs are each <span class="math-container">$49^\circ$</span> in length, comfortably short.)</p>
<blockquote>
<p><b>Q2</b>.
The same question for other surfaces: Arbitrary convex surfaces? A torus?</p>
</blockquote>
<p>I am especially interested to learn if there are situations where a linkage that is flexible
in the plane is rendered rigid when embedded on some surface.
It seems this should be possible...?</p>
<blockquote>
<p><b>Q3</b>.
More generally,
<a href="https://mathworld.wolfram.com/LamansTheorem.html" rel="nofollow noreferrer">Laman's theorem</a>
provides a combinatorial characterization of the rigid linkages in the plane.
The <span class="math-container">$n{=}4$</span> rectangle is not rigid because it has fewer than <span class="math-container">$2n-3 = 5$</span> bars:
it needs a 5th diagonal bar to rigidify.
Has Laman's theorem been extended to arbitrary (closed, smooth) surfaces
embedded in <span class="math-container">$\mathbb{R}^3$</span>?
Perhaps at least to spheres, or to all convex surfaces?</p>
</blockquote>
<p>Thanks for any ideas or pointers to relevant literature!</p>
<p><b>Addendum</b>.
I found one paper related to my question:
"Rigidity of Frameworks Supported on Surfaces"
by A. Nixon, J.C. Owen, S.C. Power.
<a href="https://arxiv.org/abs/1009.3772v1" rel="nofollow noreferrer" title="Journal version at doi:10.1137/110848852. zbMATH review at https://zbmath.org/?q=an:1266.52018">arXiv:1009.3772v1 math.CO</a>
In it they prove an analog of Laman's theorem for
the circular cylinder in <span class="math-container">$\mathbb{R}^3$</span>.
If one phrases Laman's theorem as requiring for
rigidity that the number of edges <span class="math-container">$E \ge 2 V - 3$</span>
in both the graph and in all its subgraphs,
then their result (Thm. 5.3) is that, on the cylinder, rigidity requires
<span class="math-container">$E \ge 2 V -2$</span> in the graph and in all its subgraphs.
This is not the precise statement of their theorem.
They must also insist that the graph be <em>regular</em>
in a sense that depends on the rigidity matrix achieving maximal rank
(Def. 3.3).
They give as examples of <em>irregular</em> linkages on a sphere
one that
contains an edge with antipodal endpoints, or one that includes
a triangle all three of whose vertices lie on a great circle.
But modulo excluding irregular graphs and other minor technical
details, they essentially replace the constant 3 in Laman's
theorem for the plane with 2 for the cylinder.</p>
<p>Theirs is a very recent paper but contains few citations to related
work on surfaces, suggesting that perhaps the area
of linkages embedded on surfaces is
not yet well explored.
In light of this apparent paucity of information, it seems appropriate that I 'accept'
one of the excellent answers received. Thanks!</p>
<p><b>Addendum [31Jan11].</b>
I just learned of a 2010 paper by Justin Malestein and Louis Theran,
"Generic combinatorial rigidity of periodic frameworks"
<a href="https://arxiv.org/abs/1008.1837v2" rel="nofollow noreferrer" title="Journal version at doi:10.1016/j.aim.2012.10.007. zbMATH review at https://zbmath.org/?q=an:1268.52021">arXiv:1008.1837v2 (math.CO)</a>,
which pretty much completely solves the problem of linkages on a flat 2-torus,
generalizing to flat orbifolds. They obtain a combinatorial characterization for generic minimal
rigidity for "planar periodic frameworks," which encompass these surfaces.</p>
| Sergei Ivanov | 4,354 | <p>Q3: Laman's theorem is the same on the sphere.</p>
<p>Indeed, a configuration with $n$ vertices and $m$ edges is defined by a system of $m$ equations in $2n-3$ variables (there are $2n$ coordinates of points, but we may assume that the first point is fixed and the direction of one of the edges from the first point is fixed too). The left-hand sides are analytic functions of our variables and the right-hand sides are the squares of the lengths of the bars (on the sphere, cosines rather than squares).</p>
<p>Consider this system as a map $f:\mathbb R^{2n-3}\to\mathbb R^m$. The rigidity means that a generic point $x\in\mathbb R^{2n-3}$ cannot be moved within the pre-image of $f(x)$. This implies that $rank(df)=2n-3$ on an open dense set. Choose a configuration from this set and project it to the sphere of radius $R\to\infty$. The equations on the sphere converge to those in the plane, hence the rank of the linearization on the sphere will be maximal ($=2n-3$) for all large $R$. So we get an open set of configuration on the sphere where the linearization has the maximal rank (and this implies rigidity). Since all functions involved are analytic and the rank is maximal on an open set, it is maximal generically. So our linkage is generically rigid on the sphere.</p>
<p>Conversely, consider a flexible linkage on the plane. If $m<2n-3$, it will be flexible on the sphere by a dimension counting argument. Otherwise, by Laman's theorem, there is a subgraph with $r$ vertices and more than $2r-3$ edges. Consider such a subgraph for which $r$ is minimal. Then, by Laman's theorem, we can remove some edges so that this subgraph remains rigid. And, by the above argument, it is rigid on the sphere too. So the edges that we removed were redundant both in the plane and in the sphere. Let's forget about them and repeat the procedure. Eventually we will get a linkage with fewer than $2n-3$ edges.</p>
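<p><em>Added numerical companion (my own; not part of the answer):</em> the rank of the linearization can be tested directly for Q1. For a generic spherical four-bar linkage the constraint Jacobian has rank <span class="math-container">$4$</span> on the <span class="math-container">$8$</span>-dimensional configuration space, leaving a <span class="math-container">$4$</span>-dimensional flex space; subtracting the <span class="math-container">$3$</span>-dimensional rotations leaves one genuine flex, so the linkage is flexible on the sphere.</p>

```python
import numpy as np

# Vertices on the unit sphere, parametrized by angles (theta_i, phi_i);
# each of the 4 bars fixes cos(length) = p_i . p_j.
def point(t, p):
    return np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])

def constraints(q):
    ts, ps = q[:4], q[4:]
    P = [point(ts[i], ps[i]) for i in range(4)]
    return np.array([P[i] @ P[j] for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]])

q0 = np.array([0.9, 1.1, 1.3, 0.8, 0.1, 1.0, 2.1, 3.0])  # a generic configuration
eps, E = 1e-6, np.eye(8)
J = np.array([(constraints(q0 + eps * E[k]) - constraints(q0 - eps * E[k])) / (2 * eps)
              for k in range(8)]).T                       # 4 x 8 Jacobian

rank = np.linalg.matrix_rank(J, tol=1e-8)
assert rank == 4                                          # constraints independent
print("flex space dimension:", 8 - rank, "= 3 rotations + 1 nontrivial flex")
```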
|
40,463 | <p>If a rectangle is formed from rigid bars for edges and joints
at vertices, then it is flexible in the plane: it can flex
to a parallelogram.
On any smooth surface with a metric, one can define a linkage
(e.g., a rectangle) whose edges are geodesics of fixed length,
and whose vertices are joints, and again ask if it is rigid
or flexible on the surface.
This leads to my first, specific question:</p>
<blockquote>
<p><b>Q1</b>.
Is a rhombus, or a rectangle,
always flexible on a sphere?</p>
</blockquote>
<p><br /><img src="https://i.stack.imgur.com/FRW0R.jpg" alt="alt text" /><br /></p>
<p>It seems the answer should be <em>Yes</em> but I am a bit uncertain
if there must be a restriction on the edge lengths.
(In the above figure, the four arcs are each <span class="math-container">$49^\circ$</span> in length, comfortably short.)</p>
<blockquote>
<p><b>Q2</b>.
The same question for other surfaces: Arbitrary convex surfaces? A torus?</p>
</blockquote>
<p>I am especially interested to learn if there are situations where a linkage that is flexible
in the plane is rendered rigid when embedded on some surface.
It seems this should be possible...?</p>
<blockquote>
<p><b>Q3</b>.
More generally,
<a href="https://mathworld.wolfram.com/LamansTheorem.html" rel="nofollow noreferrer">Laman's theorem</a>
provides a combinatorial characterization of the rigid linkages in the plane.
The <span class="math-container">$n{=}4$</span> rectangle is not rigid because it has fewer than <span class="math-container">$2n-3 = 5$</span> bars:
it needs a 5th diagonal bar to rigidify.
Has Laman's theorem been extended to arbitrary (closed, smooth) surfaces
embedded in <span class="math-container">$\mathbb{R}^3$</span>?
Perhaps at least to spheres, or to all convex surfaces?</p>
</blockquote>
<p>Thanks for any ideas or pointers to relevant literature!</p>
<p><b>Addendum</b>.
I found one paper related to my question:
"Rigidity of Frameworks Supported on Surfaces"
by A. Nixon, J.C. Owen, S.C. Power.
<a href="https://arxiv.org/abs/1009.3772v1" rel="nofollow noreferrer" title="Journal version at doi:10.1137/110848852. zbMATH review at https://zbmath.org/?q=an:1266.52018">arXiv:1009.3772v1 math.CO</a>
In it they prove an analog of Laman's theorem for
the circular cylinder in <span class="math-container">$\mathbb{R}^3$</span>.
If one phrases Laman's theorem as requiring for
rigidity that the number of edges <span class="math-container">$E \ge 2 V - 3$</span>
in both the graph and in all its subgraphs,
then their result (Thm. 5.3) is that, on the cylinder, rigidity requires
<span class="math-container">$E \ge 2 V -2$</span> in the graph and in all its subgraphs.
This is not the precise statement of their theorem.
They must also insist that the graph be <em>regular</em>
in a sense that depends on the rigidity matrix achieving maximal rank
(Def. 3.3).
They give as examples of <em>irregular</em> linkages on a sphere
one that
contains an edge with antipodal endpoints, or one that includes
a triangle all three of whose vertices lie on a great circle.
But modulo excluding irregular graphs and other minor technical
details, they essentially replace the constant 3 in Laman's
theorem for the plane with 2 for the cylinder.</p>
<p>Theirs is a very recent paper but contains few citations to related
work on surfaces, suggesting that perhaps the area
of linkages embedded on surfaces is
not yet well explored.
In light of this apparent paucity of information, it seems appropriate that I 'accept'
one of the excellent answers received. Thanks!</p>
<p><b>Addendum [31Jan11].</b>
I just learned of a 2010 paper by Justin Malestein and Louis Theran,
"Generic combinatorial rigidity of periodic frameworks"
<a href="https://arxiv.org/abs/1008.1837v2" rel="nofollow noreferrer" title="Journal version at doi:10.1016/j.aim.2012.10.007. zbMATH review at https://zbmath.org/?q=an:1268.52021">arXiv:1008.1837v2 (math.CO)</a>,
which pretty much completely solves the problem of linkages on a flat 2-torus,
generalizing to flat orbifolds. They obtain a combinatorial characterization for generic minimal
rigidity for "planar periodic frameworks," which encompass these surfaces.</p>
| j.c. | 353 | <p>The first paragraph of <a href="https://arxiv.org/abs/0709.3354" rel="nofollow noreferrer">Some notes on the equivalence of first-order rigidity in various geometries</a> by Franco V. Saliola and Walter Whiteley states:</p>
<blockquote>
<p>In this paper, we explore the connections among the theories of first-order rigidity of bar and joint frameworks (and associated structures) in various metric geometries extracted from the underlying projective space of dimension n, or <span class="math-container">$\mathbb{R}^{n+1}$</span>. The standard examples include Euclidean space, elliptical (or spherical) space, hyperbolic space, and a metric on the exterior of hyperbolic space.</p>
</blockquote>
<p>Section 4 of the notes proves that a framework is first-order rigid in the upper hemisphere <span class="math-container">$S^n_+$</span> iff a corresponding framework in <span class="math-container">$\mathbb{R}^n$</span> is first-order rigid.</p>
<p>Section 5 extends this to geometries on the space <span class="math-container">$X^n_{c,k}=\{x\in\mathbb{R}^{n+1}|\langle x,x\rangle_k=c,x_{n+1}>0\}$</span> where <span class="math-container">$\langle x,y\rangle_k=x_1y_1+\cdots+x_{n-k+1}y_{n-k+1}-x_{n-k+2}y_{n-k+2}-\cdots-x_{n+1}y_{n+1}$</span>. Note that <span class="math-container">$k=1,c=-1$</span> is hyperbolic space <span class="math-container">$\mathbb{H}^n$</span>, the case <span class="math-container">$k=0,c=1$</span> is <span class="math-container">$S^n_+$</span>.</p>
|
730,018 | <p>Assume ZFC (and AC in particular) as the background theory.</p>
<p>If $(M,\in^M)$ is a model of ZFC (not necessarily transitive or standard), must there exist a bijection between $M$ and $$\{x \in M \mid (M,\in^M) \models x \mbox{ is an ordinal number}\}?$$</p>
<p>I am also interested in the cases where $M$ is assumed to be a model of ZFC2.</p>
<p><strong>Remark.</strong> I think this is different from what was asked <a href="https://math.stackexchange.com/questions/336956/on-models-of-zfc-does-there-exist-a-bijection-between-von-neumann-universe-and">here</a>. Please comment if you believe otherwise; I am happy to discuss.</p>
| Andrés E. Caicedo | 462 | <p>The answer is yes. However, the answer is no if we require the model itself to know the bijection. More specifically, the existence of a class bijection between $V$ and its ordinals is equivalent to the axiom of global choice, and it is consistent that $\mathsf{ZFC}$ holds but global choice fails. </p>
<p>Now, given any (set) model $M$ of $\mathsf{ZFC}$, for each $\alpha$ ordinal of $M$ there is (in $M$) a bijection $f$ between $V_\alpha$ and some ordinal $\beta$ of $M$. Any such $f$ gives us a true bijection between $\hat\beta=\{a\in M\mid M\models a<\beta\}$ and $\hat V_\alpha=\{b\in M\mid M\models b\in V_\alpha\}$, simply by setting $$\hat f=\{(a,b)\in M\times M \mid M \models a<\beta,b\in V_\alpha,\& \,b=f(a)\}.$$ This proves that $$|M|=\left|\bigl(\bigcup_{a\in\mathsf{ORD}^M}V_a\bigr)^M\right|\le|\mathsf{ORD}^M|\sup_{a\in\mathsf{ORD}^M}|\hat V_a|\le |\mathsf{ORD}^M|\sup_{b\in\mathsf{ORD}^M}|\hat b|=|\mathsf{ORD}^M|.$$</p>
<p>The answer is no if we replace $\mathsf{ZFC}$ with $\mathsf{ZF}$. Indeed, we can have transitive models of $\mathsf{ZF}$ that are uncountable and have countably many ordinals, see for instance <a href="https://mathoverflow.net/a/129768/6085">here</a>.</p>
|
<p>Suppose $f:(a,b) \to \mathbb{R}$ satisfies $|f(x) - f(y)| \le M |x-y|^\alpha$ for some $\alpha >1$
and all $x,y \in (a,b) $. Prove that $f$ is constant on $(a,b)$. </p>
<p>I'm not sure which theorem I should use to prove this. Can you give me a hint? First of all, how does one prove that a function $f$ is constant on $(a,b)$? Just show $f'(x) = 0$?</p>
| Brian M. Scott | 12,042 | <p>HINT: Your idea is a good one. What happens when you divide the inequality by $|x-y|$?</p>
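<p><em>(Added completion of the hint, for the record; this is my own sketch of the intended argument:)</em></p>

```latex
% Divide the Hoelder-type bound by |x - y|:
\[
  \left| \frac{f(x) - f(y)}{x - y} \right| \le M\,|x - y|^{\alpha - 1}.
\]
% Since \alpha - 1 > 0, the right-hand side tends to 0 as y \to x, so
% f'(x) exists and equals 0 for every x in (a,b).  A differentiable
% function with identically zero derivative on an interval is constant
% there, by the mean value theorem.
```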
|
3,690,076 | <p>Is there a rigorous proof that <span class="math-container">$|G|=|\text{Ker}(f)||\text{Im}(f)|$</span>, for some homomorphism <span class="math-container">$f\,:\,G\rightarrow G'$</span>? Can anyone provide such a proof with explanations?</p>
| user792277 | 792,277 | <p><span class="math-container">$G$</span> acts on <span class="math-container">$Im(f)$</span> by <span class="math-container">$\theta_g(f(h))=f(gh)$</span>. Then we can use the formula <span class="math-container">$|G|=|Stab(e)||Orbit(e)|$</span>.</p>
|
3,690,076 | <p>Is there a rigorous proof that <span class="math-container">$|G|=|\text{Ker}(f)||\text{Im}(f)|$</span>, for some homomorphism <span class="math-container">$f\,:\,G\rightarrow G'$</span>? Can anyone provide such a proof with explanations?</p>
| diracdeltafunk | 19,006 | <p>Here is a proof in full detail.</p>
<p>You said in the comments that you know the first isomorphism theorem, which will make the proof quite simple. Let <span class="math-container">$f : G \to G'$</span> be the group homomorphism. The first isomorphism theorem tells us that <span class="math-container">$G / \ker(f)$</span> is isomorphic to <span class="math-container">$\operatorname{img}(f)$</span>; hence <span class="math-container">$\lvert G / \ker f \rvert = \lvert \operatorname{img}(f) \rvert$</span>. Next, we claim that <span class="math-container">$\lvert G \rvert = \lvert \ker f \rvert \lvert G / \ker f \rvert$</span>. Once we can show this, we will have that <span class="math-container">$\lvert G \rvert = \lvert \ker f \rvert \lvert G / \ker f \rvert = \lvert \ker(f) \rvert \lvert \operatorname{img}(f) \rvert$</span>, as desired! So here's the proof that <span class="math-container">$\lvert G \rvert = \lvert \ker f \rvert \lvert G / \ker f \rvert$</span>:</p>
<p><em>Proof</em>. Let <span class="math-container">$\pi : G \to G / \ker f$</span> be the canonical projection, and pick any splitting <span class="math-container">$\ell : G / \ker f \to G$</span> (all we're doing here is picking a representative of each coset), so that <span class="math-container">$\pi \circ \ell = \operatorname{id}_{G / \ker f}$</span>. Now define <span class="math-container">$\varphi : G \to (\ker f) \times (G / \ker f)$</span> by <span class="math-container">$$\varphi(g) = (g^{-1} \ell(\pi(g)), \pi(g)).$$</span>
<span class="math-container">$\varphi$</span> is well-defined because <span class="math-container">$$\pi(g^{-1}\ell(\pi(g))) = \pi(g)^{-1}\pi(\ell(\pi(g))) = \pi(g)^{-1} \operatorname{id}_{G / \ker f}(\pi(g)) = \pi(g)^{-1} \pi(g) = 1,$$</span>
so <span class="math-container">$g^{-1} \ell(\pi(g)) \in \ker \pi = \ker f$</span> for all <span class="math-container">$g \in G$</span>. Next, we will show that <span class="math-container">$\varphi$</span> is injective. Suppose <span class="math-container">$\varphi(x) = \varphi(y)$</span> for some <span class="math-container">$x, y \in G$</span>. Since <span class="math-container">$\pi(x) = \pi(y)$</span>, we know that <span class="math-container">$\ell(\pi(x)) = \ell(\pi(y))$</span>. At the same time, we have <span class="math-container">$x^{-1} \ell(\pi(x)) = y^{-1} \ell(\pi(y))$</span>, so we conclude that <span class="math-container">$x^{-1} = y^{-1}$</span>, whence <span class="math-container">$x = y$</span>. Finally, we need to show that <span class="math-container">$\varphi$</span> is surjective. Let <span class="math-container">$(a, b) \in (\ker f) \times (G / \ker f)$</span> be arbitrary, and let <span class="math-container">$x = \ell(b)a^{-1}$</span>. Then
<span class="math-container">$$\pi(x) = \pi(\ell(b)a^{-1}) = \pi(\ell(b)) \pi(a)^{-1} = \operatorname{id}_{G / \ker f}(b)\, 1^{-1} = b,$$</span>
so
<span class="math-container">$$\varphi(x) = (x^{-1} \ell(\pi(x)), \pi(x)) = (a \ell(b)^{-1}\ell(b),b) = (a,b).$$</span>
We have now shown that <span class="math-container">$\varphi$</span> is a bijection, so
<span class="math-container">$$\lvert G \rvert = \lvert (\ker f) \times (G / \ker f) \rvert = \lvert \ker f \rvert \lvert G / \ker f \rvert.$$</span></p>
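<p>As a sanity check (my own addition, not needed for the proof), one can verify the counting identity on a small concrete homomorphism, e.g. $f(x)=4x \bmod 12$ on $\mathbb{Z}_{12}$:</p>

```python
# Verify |G| = |ker f| * |im f| for f(x) = 4x mod 12 on the cyclic group Z_12.
n = 12
G = range(n)
f = lambda x: (4 * x) % n

kernel = [x for x in G if f(x) == 0]   # elements sent to the identity
image = sorted({f(x) for x in G})      # distinct values of f

assert kernel == [0, 3, 6, 9]
assert image == [0, 4, 8]
assert len(kernel) * len(image) == n   # 4 * 3 == 12
```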
|
1,017,738 | <p>So I know that the derivative of arccos is:
$-dx/\sqrt{1-x^2}$</p>
<p>So how would I find the derivative of $\arccos(x^2)$? What does the $-dx$ mean in the above formula?</p>
<p>Would it just be $-2x/\sqrt{1-x^2}$ ?</p>
| mfl | 148,513 | <p>If you have a composite function $h(x)=(g\circ f)(x)$ then its derivative is given by $$h'(x)=g'(f(x))f'(x).$$ (This is known as the Chain rule. See <a href="http://en.wikipedia.org/wiki/Chain_rule" rel="nofollow">http://en.wikipedia.org/wiki/Chain_rule</a> for more information.)</p>
<p>In your case, $f(x)=x^2$ and $g(x)=\arccos x.$ Since $f'(x)=2x$ and $g'(x)=\frac{-1}{\sqrt{1-x^2}},$ you can get the derivative just applying the Chain rule. (Note that $g'$ must be evaluated at $f(x)$ and not at $x.$)</p>
|
1,017,738 | <p>So I know that the derivative of arccos is:
$-dx/\sqrt{1-x^2}$</p>
<p>So how would I find the derivative of $\arccos(x^2)$? What does the $-dx$ mean in the above formula?</p>
<p>Would it just be $-2x/\sqrt{1-x^2}$ ?</p>
| Sujaan Kunalan | 77,862 | <p>$$\frac{d}{dx}\arccos x=-\frac{1}{\sqrt{1-x^2}}$$</p>
<p>Using the chain rule, we can evaluate $\frac{d}{dx}\arccos (x^2)$.</p>
<p>$$\frac{d}{dx}\arccos(x^2)=-\frac{1}{\sqrt{1-(x^2)^2}}\frac{d}{dx}(x^2)=-\frac{1}{\sqrt{1-x^4}}\frac{d}{dx}(x^2)=-\frac{1}{\sqrt{1-x^4}}\cdot 2x$$</p>
|
169,919 | <blockquote>
<p>If $p$ is a prime, show that the product of the $\phi(p-1)$ primitive roots of $p$ is congruent modulo $p$ to $(-1)^{\phi(p-1)}$.</p>
</blockquote>
<p>I know that if $a$ is a primitive root of $p$, then $a^k$ is also a primitive root of $p$ if $\gcd(k,p-1)=1$. And the sum of all those $k$'s is $\frac{1}{2}p\phi(p-1)$, but I don't know how to use these $2$ facts to show the desired result.<br>
Please help.</p>
| lab bhattacharjee | 33,337 | <p>We know that $\operatorname{ord}_m(a^k)=\frac{d}{(d,k)}$, where $d=\operatorname{ord}_m a$ and $m$ is a natural number; in particular, $\operatorname{ord}_m a=\operatorname{ord}_m(a^{-1})$.</p>
<p>So $a$ and $a^{-1}$ must belong to the same order $d$.</p>
<p>Now, by the previous solution, $a\not\equiv a^{-1}\pmod m$ if $d>2$.</p>
<p>So the product of all numbers belonging to the same order $d$ is $\equiv 1\pmod m$ if $d>2$.</p>
<p>Now, $\phi(m)>2$ if $m=5$ or $m>6$.</p>
<p>And $\phi(m)=2$ if $m=3,4,6$.</p>
<p>In all three cases, the primitive root is $m-1\equiv-1\pmod m$.</p>
<p>Here $m$ need not be prime, and $2<d\le\phi(m)$.</p>
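<p>A brute-force check of the statement (my own addition, assuming nothing beyond the definitions) for a few small primes:</p>

```python
import math

# For each prime p, multiply the phi(p-1) primitive roots of p modulo p and
# compare with (-1)^phi(p-1) mod p.
def order(a, p):
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

def primitive_roots(p):
    return [g for g in range(1, p) if order(g, p) == p - 1]

def phi(n):
    return sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1)

for p in (3, 5, 7, 11, 13):
    prod = 1
    for g in primitive_roots(p):
        prod = (prod * g) % p
    assert len(primitive_roots(p)) == phi(p - 1)
    assert prod == pow(-1, phi(p - 1), p)
```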
|
749,097 | <p>In any calculus course, one of the first thing we learn is that $(uv)'=u'v+v'u$ rather than the what I've written in the title. This got me wondering: when is this <em>dream product rule</em> true? There are of course trivial examples, and also many instances where the equality is true at a handful of points. Less obvious though, is the following:</p>
<blockquote>
<p>Are there non-constant $u,v$ such that there exists an interval $I$ where
$(uv)'=u'v'$ over $I?$</p>
</blockquote>
<p>I have a feeling there should be, but I am having trouble constructing such a pair.</p>
| pxc3110 | 139,697 | <p>Let $u$ and $v$ be functions of $t$.
<br/>then $(uv)'=u'v'$
<br/>$\iff u'v+uv'=u'v'$
<br/>$\iff u'(v-v')+v'u=0$
<br/>$\iff u'+\frac {v'}{v-v'}u=0$
<br/>Let $u$ be the unknown function, then multiply both sides by $e^{\int \frac{v'}{v-v'}\,dt}$:
$u'(e^{\int \frac{v'}{v-v'}dt})+(e^{\int \frac{v'}{v-v'}dt}\frac {v'}{v-v'})u=0 (*)$
<br/>Notice that $(e^{\int \frac{v'}{v-v'}dt})'=e^{\int \frac{v'}{v-v'}dt}\frac {v'}{v-v'}$, <br/>therefore (*) becomes: $u'(e^{\int \frac{v'}{v-v'}dt})+(e^{\int \frac{v'}{v-v'}dt})'u=0 $
<br/>or $(ue^{\int \frac{v'}{v-v'}dt})'=0$
<br/>or $ue^{\int \frac{v'}{v-v'}dt}=C$
<br/>or $u=Ce^{-\int \frac{v'}{v-v'}dt}$
<br/>Note: While evaluating $\int \frac{v'}{v-v'}dt$, please omit the integration constant. $u$ and $v$ can be replaced by each other while the equation still holds.
<br/>Replacing $u$ or $v$ by a given function and assign C a given value, we can solve for the other one.
<br/>For example, choose $v=t^2$, $C=1$, integration constant $=0$; then $u=e^{-\int \frac{2t}{t^2-2t}\,dt}=e^{-\int \frac 2{t-2}\,dt}=e^{-2\ln|t-2|}=e^{\ln \frac 1{(t-2)^2}}=\frac 1{(t-2)^2}$</p>
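<p>A numerical sanity check (my addition) that this pair really satisfies the "dream" rule on intervals away from the pole at $t=2$:</p>

```python
# Verify (uv)' = u'v' numerically for u(t) = 1/(t-2)^2, v(t) = t^2.
u = lambda t: 1.0 / (t - 2) ** 2
v = lambda t: t * t

def d(f, t, h=1e-6):                      # central-difference derivative
    return (f(t + h) - f(t - h)) / (2 * h)

for t in (0.5, 1.0, 3.0):                 # points away from t = 2
    lhs = d(lambda s: u(s) * v(s), t)
    rhs = d(u, t) * d(v, t)
    assert abs(lhs - rhs) < 1e-4 * max(1.0, abs(lhs))
```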
|
2,319,341 | <blockquote>
<p>If a rubber ball is dropped from a height of $1\,\mathrm{m}$ and continues to rebound to a height that is nine tenth of its previous fall, find the total distance in meter that it travels on falls only.</p>
</blockquote>
<h3>My Attempt:</h3>
<p>I tried if it could be solved using arithmetic progression for which the first term is $(a) = 1\,\mathrm{m}$ and the common difference is $(d) = \frac{9}{10}$. But I could not get any more information.</p>
| REVOLUTION | 454,435 | <p>During the first drop it covers a distance of $1$ m, then rises to a height of $1\cdot\frac{9}{10}$, falls that same distance, rises to $1\cdot\frac{9}{10}\cdot\frac{9}{10}$, and so on. You will notice that the fall distances form a geometric progression rather than an arithmetic one.</p>
<p>The sum of all the fall distances is</p>
<p>$1+\frac{9}{10}+\left(\frac{9}{10}\right)^2+\left(\frac{9}{10}\right)^3+\cdots$</p>
<p>$=\frac{1}{1-\frac{9}{10}}=10$</p>
|
694,090 | <p>Let $V$ and $W$ be vector spaces over $\Bbb{F}$ and $T:V \to W$ a linear map. If $U \subset V$ is a subspace, we can consider the map $T$ for elements of $U$ and call this the restriction of $T$ to $U$, $T|_{U}: U \to W$, which is a map from $U$ to $W$. Show that</p>
<p>$$\ker T|_{U} = \ker T\cap U.$$</p>
<p>I know the definition of a linear map is </p>
<p>$f(x+y)=f(x) +f(y)$ and </p>
<p>$f(ax)=a\cdot f(x)$</p>
<p>I also know the kernel is the set of points which are mapped to zero.</p>
<p>However, I am struggling to piece this all together.</p>
<p>Thanks in advance for all the help.</p>
| 5xum | 112,884 | <p>First, take a vector $v$ from $\ker T|_U$ and show that it lies both in $\ker T$ and in $U$. It is simple to do both, since $\ker T|_U$ is a subspace of $U$ and $0=T|_U(v)=T(v)$.</p>
<p>Then, show that if a vector $v$ lies both in $U$ and $\ker T$, it also lies in $\ker T|_U$. Again, a simple task, since $T_U(v)=T(v)$ for $v$ (because $v\in U$) and $T(v)=0$ (because $v\in\ker T$).</p>
|
694,090 | <p>Let $V$ and $W$ be vector spaces over $\Bbb{F}$ and $T:V \to W$ a linear map. If $U \subset V$ is a subspace, we can consider the map $T$ for elements of $U$ and call this the restriction of $T$ to $U$, $T|_{U}: U \to W$, which is a map from $U$ to $W$. Show that</p>
<p>$$\ker T|_{U} = \ker T\cap U.$$</p>
<p>I know the definition of a linear map is </p>
<p>$f(x+y)=f(x) +f(y)$ and </p>
<p>$f(ax)=a\cdot f(x)$</p>
<p>I also know the kernel is the set of points which are mapped to zero.</p>
<p>However, I am struggling to piece this all together.</p>
<p>Thanks in advance for all the help.</p>
| Riccardo | 74,013 | <p>Try to prove the double inclusion: Let $T'$ the restriction, let $x \in ker T'$, obviously $x \in U$, by definition of Kernel of a linear operator defined over $U$. But by the fact that $U \subset V$ then if $u \in U \Rightarrow u \in V$ and $0=T'(u) = T(u)$</p>
|
268,482 | <p>One has written a paper, the main contribution of which is a few conjectures. Several known theorems turned out to be special cases of the conjectures; however, no new case of the conjectures was proven in the paper. In fact, no new theorem was proven in the paper. </p>
<p>The work was reported at a few seminars, and several experts found the conjectures interesting. </p>
<p>One would like to publish this paper in a refereed journal. The paper was rejected from a certain journal just two days after its submission because "this genre of article does not fit the journal".</p>
<blockquote>
<p><strong>QUESTION.</strong> Are there examples of publications of this genre in refereed journals?</p>
</blockquote>
<p><strong>ADD:</strong> The mentioned paper explains the background, states the conjectures, discusses various special cases and consequences, and lists known cases. It is 20 pages long. </p>
| David White | 11,540 | <p>Such a paper might be appropriate for the <a href="http://www.gradmath.org/" rel="nofollow noreferrer">Graduate Journal of Mathematics</a>, since a readership of grad students might enjoy a bunch of interesting conjectures. I published a <a href="https://www.gradmath.org/article/an-overview-of-schema-theory/" rel="nofollow noreferrer">paper</a> there that was mostly a rewriting of results known in computer science, in more mathematical language, plus a list of open problems.</p>
|
19,848 | <p><strong>$\{X_n\}$</strong> is a sequence of independent random variables, each with the same
<strong>sample space $\{0,1\}$</strong> and <strong>probabilities $\{1-1/n^2,\ 1/n^2\}$</strong>.
<br> <em>Does this sequence converge with probability one (almost surely) to the <strong>constant $0$</strong>?</em>
<br><br>Essentially $\{X_n\}$ will look like this (it's random, but this shows that the frequency of ones drops):<br>
010010000100000000100000000000000001......</p>
| mercio | 17,445 | <p>Yes, but how fast does the frequency drop? The faster it does, the more probable it is that $(X_n)$ converges to $0$.</p>
<p>Lemma: if $(X_n)$ is a sequence of elements of $\{0,1\}$, then $X_n$ converges to $0$ if and only if the sequence has finitely many $1$s.</p>
<p>If a sequence $(X_n)$ converges to $0$, then by definition of the limit there exists some integer $N$ such that $|X_n| \le 1/2$ for all $n \ge N$. Since $X_n$ is $0$ or $1$, $|X_n| \le 1/2$ implies that $X_n = 0$. Thus $X_n = 0$ for all $n \ge N$, which means the sequence $(X_n)$ has finitely many (in fact, fewer than $N$) $1$s.</p>
<p>Conversely, if a sequence $(X_n)$ has finitely many $1$s, there exists an integer $N$ (the index of the last $1$ of the sequence) such that $X_n = 0$ for all $n > N$. Then the sequence $(X_n)$ converges to $0$, because for any $\varepsilon > 0$ we have $|X_n - 0| = 0 < \varepsilon$ for all $n > N$.</p>
<p>Here, define $Y_n = \sum_{k=1}^n X_k$ and $Y_\infty = \lim_{n \to \infty} Y_n$, the number of $1$s in the sequence $(X_n)$. The lemma tells us that $(X_n)$ converges to $0$ if and only if $Y_\infty < \infty$.</p>
<p>You'll notice that $E[Y_\infty] = \lim E[Y_n] \lt \infty$, thanks to the fact that $\sum \frac{1}{n^2}$ is convergent.</p>
<p>This implies that $Y_\infty < \infty$ almost surely, which in turn shows that $(X_n)$ does converge with probability $1$.</p>
<p>If the frequency drops too slowly, this won't work. For example, if $P(X_n = 1) = 1/\log(n)$ for $n \ge 3$, the sequence will almost surely diverge.</p>
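<p>A quick Monte Carlo illustration of this answer (my addition; a simulation, not a proof):</p>

```python
import random

# Simulate X_n with P(X_n = 1) = 1/n^2 and count how many 1s ever occur.
# The expected count is sum 1/n^2 = pi^2/6 ~ 1.645, so almost every run
# contains only finitely many 1s, i.e. X_n -> 0 almost surely.
random.seed(0)

def count_ones(N=2000):
    return sum(1 for n in range(1, N + 1) if random.random() < 1.0 / n ** 2)

trials = [count_ones() for _ in range(500)]
average = sum(trials) / len(trials)

assert 1.4 < average < 1.9   # close to pi^2/6 up to simulation noise
```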
|
423,479 | <p>I know that $$\lim_{n\rightarrow \infty}\frac{\Gamma(n+\frac{1}{2})}{\sqrt{n} \Gamma(n)}=1,$$ but I'm interested in the exact behaviour of </p>
<p>$$a_n =1- \left( \frac{\Gamma(n+\frac{1}{2})}{\sqrt{n} \Gamma(n)} \right) ^2$$</p>
<p>particularly compared to $$b_n = \frac{1}{4n}$$</p>
<p>I haven't studied asymptotics yet, so I have no idea how to approach this, but I need this particular result in a statistics problem I'm working on.</p>
| Mhenni Benghorbal | 35,472 | <p><strong>Hint:</strong> Use <a href="http://en.wikipedia.org/wiki/Stirling%27s_approximation" rel="nofollow">Stirling approximation</a></p>
<p>$$ n!=\Gamma(n+1) \sim \left(\frac{n}{e}\right)^n\sqrt{2 \pi n}. $$</p>
<p><strong>Added:</strong> If you use the above approximation, you will get</p>
<p>$$ a_n\sim 1-\frac{e^{-1}\,(n-1)^{1-2n}\,4^{-n}\,(2n-1)^{2n}}{n}.$$</p>
<p>Double check the calculations.</p>
|
423,479 | <p>I know that $$\lim_{n\rightarrow \infty}\frac{\Gamma(n+\frac{1}{2})}{\sqrt{n} \Gamma(n)}=1,$$ but I'm interested in the exact behaviour of </p>
<p>$$a_n =1- \left( \frac{\Gamma(n+\frac{1}{2})}{\sqrt{n} \Gamma(n)} \right) ^2$$</p>
<p>particularly compared to $$b_n = \frac{1}{4n}$$</p>
<p>I haven't studied asymptotics yet, so I have no idea how to approach this, but I need this particular result in a statistics problem I'm working on.</p>
| Julián Aguirre | 4,791 | <p>The following code in Mathematica</p>
<pre><code>Series[1 - (Gamma[x + 1/2]/(Sqrt[x] Gamma[x]))^2, {x, Infinity, 6}]
</code></pre>
<p>gives
$$
\frac{1}{4 x}-\frac{1}{32 x^2}-\frac{1}{128 x^3}+\frac{5}{2048 x^4}+\frac{23}{8192 x^5}-\frac{53}{65536 x^6}+O\left[\frac{1}{x}\right]^7
$$</p>
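<p>These coefficients can be confirmed numerically (my addition), using log-gamma to avoid overflow:</p>

```python
import math

# a_n = 1 - (Gamma(n + 1/2) / (sqrt(n) Gamma(n)))^2, compared with the
# first three terms of the asymptotic series above.
def a(n):
    ratio_sq = math.exp(2 * (math.lgamma(n + 0.5) - math.lgamma(n)))
    return 1.0 - ratio_sq / n

n = 100
series = 1 / (4 * n) - 1 / (32 * n ** 2) - 1 / (128 * n ** 3)

assert abs(a(n) - series) < 1e-9   # next term is O(1/n^4) ~ 2.4e-11 here
```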
|
2,780,832 | <p>Let's define: </p>
<p>$$\sin(z) = \frac{\exp(iz) - \exp(-iz)}{2i}$$
$$\cos(z) = \frac{\exp(iz) + \exp(-iz)}{2}$$</p>
<blockquote>
<p>We are to prove that
$$\sin(z+w)=\sin(w) \cos(z) + \sin(z)\cos(w), \forall_{z,w \in \mathbb{C}}$$
using only the following statement: $\exp(z+w) = \exp(w)\exp(z)$.</p>
</blockquote>
<p>I managed only to show that:
$$\sin(z + w) = \frac{\exp(iz)\exp(iw)}{2i} - \frac{\exp(-iz)\exp(-iw)}{2i}.$$<br>
Where can I go from here?</p>
| GNUSupporter 8964民主女神 地下教會 | 290,189 | <p>Since OP starts from left-hand-side and asks "where can I go from here", I'll give a solution staring from the left-hand-side.</p>
<p>\begin{align}
& \sin(z+w) \\
&= \frac{\exp(i(z+w)) - \exp(-i(z+w))}{2i} \\
&= \frac{\exp(iz)\exp(iw) - \exp(-iz)\exp(-iw)}{2i} \\
&= \frac{\exp(iz)\exp(iw) \color{blue}{-\exp(iz)\exp(-iw) + \exp(iz)\exp(-iw)} - \exp(-iz)\exp(-iw)}{4i} \\
&+ \frac{\exp(iz)\exp(iw) \color{blue}{-\exp(-iz)\exp(iw) + \exp(-iz)\exp(iw)} - \exp(-iz)\exp(-iw)}{4i} \\
&= \frac{\exp(iz)(\exp(iw)-\color{blue}{\exp(-iw)}) + \exp(-iz)(\color{blue}{\exp(iw)}-\exp(-iw))}{4i} \tag{terms 1,2,7,8} \\
&+ \frac{\exp(-iw)(\color{blue}{\exp(iz)}-\exp(-iz)) + \exp(iw)(\exp(iz) - \color{blue}{\exp(-iz)})}{4i} \tag{terms 3-6} \\
&= \frac{(\exp(iz)+\exp(-iz))(\exp(iw)-\color{blue}{\exp(-iw)})}{2 \cdot 2i} \\
&+ \frac{(\exp(iw)+\exp(-iw))(\color{blue}{\exp(iz)}-\exp(-iz))}{2 \cdot 2i} \\
&= \cos(z)\sin(w) + \cos(w)\sin(z)
\end{align}</p>
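<p>The identity can also be spot-checked numerically (my addition) straight from the exponential definitions:</p>

```python
import cmath

# sin and cos defined via exp, exactly as in the question.
def sin_(z):
    return (cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j

def cos_(z):
    return (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2

for z, w in [(0.3 + 1.2j, -0.7 + 0.4j), (2.0 + 0j, 1.5j), (-1 + 1j, 1 - 2j)]:
    lhs = sin_(z + w)
    rhs = sin_(w) * cos_(z) + sin_(z) * cos_(w)
    assert abs(lhs - rhs) < 1e-12
```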
|
357,138 | <blockquote>
<p>If $a_1,a_2,\dotsc,a_n $ are positive real numbers, then prove that</p>
</blockquote>
<p>$$\lim_{x \to \infty} \left[\frac {a_1^{1/x}+a_2^{1/x}+.....+a_n^{1/x}}{n}\right]^{nx}=a_1 a_2 \dotsb a_n.$$ </p>
<p>My Attempt: </p>
<p>Let $P=\lim_{x \to \infty} \left[\dfrac {a_1^{\frac{1}{x}}+a_2^{\frac {1}{x}}+.....+a_n^{\frac {1}{x}}}{n}\right]^{nx} \implies \ln P=\lim_{x \to \infty} \ln \left[\frac {a_1^{\frac{1}{x}}+a_2^{\frac {1}{x}}+.....+a_n^{\frac {1}{x}}}{n}\right]^{nx} =\lim_{x \to \infty} nx \ln \left[\frac {a_1^{\frac{1}{x}}+a_2^{\frac {1}{x}}+.....+a_n^{\frac {1}{x}}}{n}\right]= \lim_{x \to \infty} n \left[\frac {\ln (a_1^{1/x}+a_2^{1/x}+...+a_n^{1/x})-\ln n}{1/x}\right]$ </p>
<p>and this is of the form $0/0$, so I have to apply L'Hospital's rule. Now things get a bit complicated when taking the derivative.</p>
<p>Can someone point me in the right direction? Thanks in advance for your time.</p>
| Community | -1 | <p>Another method to resolve this problem is to take the inequality $\mathrm{GM} \leqslant \mathrm{AM} \leqslant x\text{th root mean}$, then use the squeeze theorem.</p>
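<p>To see the convergence concretely (my addition, with the hypothetical choice $a=(2,3,5)$), one can evaluate the expression for large $x$; it approaches the product $2\cdot 3\cdot 5=30$ from above, consistent with $\mathrm{AM}\geqslant\mathrm{GM}$:</p>

```python
# Evaluate [(a_1^(1/x) + ... + a_n^(1/x)) / n]^(n x) for large x.
a = (2.0, 3.0, 5.0)
n = len(a)

def expr(x):
    mean = sum(ai ** (1.0 / x) for ai in a) / n
    return mean ** (n * x)

assert expr(1e6) > 30.0                  # AM >= GM at every finite x
assert abs(expr(1e6) - 30.0) < 1e-3      # and the limit is the product, 30
```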
|
117,836 | <p>I'm not the best at math (but eager to learn), so please excuse me if I'm not explaining this problem correctly; I will try to add as much info as I can to make it clear. I basically receive 2 pieces of data: one is a list of integers and the other is a target_sum, and I want to figure out all the ways I can use the list to equal the target sum. So for a list of <code>[1,2,4]</code> with a target_sum of <code>10</code>, I would get:</p>
<pre><code>2 * 4 + 1 * 2 + 0 * 1
2 * 4 + 0 * 2 + 2 * 1
1 * 4 + 3 * 2 + 0 * 1
1 * 4 + 2 * 2 + 2 * 1
1 * 4 + 1 * 2 + 4 * 1
1 * 4 + 0 * 2 + 6 * 1
0 * 4 + 5 * 2 + 0 * 1
0 * 4 + 4 * 2 + 2 * 1
0 * 4 + 3 * 2 + 4 * 1
0 * 4 + 2 * 2 + 6 * 1
0 * 4 + 1 * 2 + 8 * 1
0 * 4 + 0 * 2 + 10 * 1
</code></pre>
<p>The current algorithm I'm using has two parts: one builds a lookup table of which combinations are possible, and the other constructs the actual solutions.
Table building:</p>
<pre><code>for i = 1 to k
for z = 0 to sum:
for c = 1 to z / x_i:
if T[z - c * x_i][i - 1] is true:
set T[z][i] to true
</code></pre>
<p>Possibility construction:</p>
<pre><code>function RecursivelyListAllThatWork(k, sum) // Using last k variables, make sum
/* Base case: If we've assigned all the variables correctly, list this
* solution.
*/
if k == 0:
print what we have so far
return
/* Recursive step: Try all coefficients, but only if they work. */
for c = 0 to sum / x_k:
if T[sum - c * x_k][k - 1] is true:
mark the coefficient of x_k to be c
call RecursivelyListAllThatWork(k - 1, sum - c * x_k)
unmark the coefficient of x_k
</code></pre>
<p>This is the basic idea, my actual code is a slightly different because I am using bounds to remove the possibility of infinite values(I say a single value cannot exceed the value of the sum). </p>
<p>The problem is, the table-building part does not scale. It is flawed in at least two ways: one is that it's dependent on the previous number being completed (thus I cannot break it up and run it individually for each number), and the second problem is that it requires reading the table before writing (I am learning how to get around this technically, but currently it makes the program very slow).</p>
<p>Is there a more efficient way to do this that scales?</p>
<p>Here's an approach I tried to take but failed (so far):</p>
<pre><code>create a large table full of all possible values.z to target_sum..
create another large table of T[z - c * x_i][i - 1] and compare if the values exist.
If they do exists, add T[z][i] to a third table that contains the correct master
</code></pre>
<p>I don't need code just the logic(if this is possible). If it helps you(as it often helps me understand) here is some python code with my approach/examples:</p>
<pre><code>#data = [-2,10,5,50,20,25,40]
#target_sum = 100
data = [1,2,3,4,5,6,7,8,9,10]
target_sum = 10
# T[x, i] is True if 'x' can be solved
# by a linear combination of data[:i+1]
T = [] # all values are False by default
T.append([0, 0]) # base case
R=200 # Maximum size of any partial sum
max_percent=0.3 # Maximum weight of any term
for i, x in enumerate(data): # i is index, x is data[i]
for s in range(-R,R+1): #set the range of one higher than sum to include sum itself
max_value = int(abs((target_sum * max_percent)/x))
for c in range(max_value + 1):
if [s - c * x, i] in T:
T.append([s, i+1])
coeff = [0]*len(data)
def RecursivelyListAllThatWork(k, sum): # Using last k variables, make sum
# /* Base case: If we've assigned all the variables correctly, list this
# * solution.
# */
if k == 0:
# print what we have so far
print(' + '.join("%2s*%s" % t for t in zip(coeff, data)))
return
x_k = data[k-1]
# /* Recursive step: Try all coefficients, but only if they work. */
max_value = int(abs((target_sum * max_percent)/x_k))
for c in range(max_value + 1):
if [sum - c * x_k, k - 1] in T:
# mark the coefficient of x_k to be c
coeff[k-1] = c
RecursivelyListAllThatWork(k - 1, sum - c * x_k)
# unmark the coefficient of x_k
coeff[k-1] = 0
RecursivelyListAllThatWork(len(data), target_sum)
</code></pre>
<p>Any help or suggestions would be appreciated. I have worked on this for a long time and all my experiments have failed. I'm hoping to get the correct answer but even ideas of different approaches would be great so I can experiment with them.</p>
<p>Thank you.</p>
<p>P.S. I asked a question on Stack Overflow 2 days ago about improving my existing algorithm, but I got answers from posters who admitted to not fully understanding what I was asking for, and because they answered the question, I am unable to delete it to post here. I have flagged it for deletion. </p>
<p>Update: Regarding some of the comments: I'm not looking for a fast way of doing this (although it would be nice), I'm looking for a scalable way. My method works, but each loop is dependent on the last loop, which causes it to be bound to a single process. The math is in such a way that it builds upon previous results. If I can somehow break the process up into independent parts, then I can use more CPUs/computers to handle the work. I know it'll take a long time, but if it takes 600 hours on one CPU, then two should cut it down a bit, and so on. Right now I can't use other computers, so I'm forced to wait 600 hours (while everything else on the system is idle). Please help!</p>
<p>Also the results are large but not infinite as I have bounds set so the number cannot exceed a certain percent of target_sum. </p>
| Stephen | 146,439 | <p>This is not an answer, since (as indicated in the comments) there is not going to be a fast way to write down the set you are looking at just because it can be so huge! There are reasonable algorithms for computing exactly how huge; here is one simple idea (it is not the most efficient way known to humans).</p>
<p>We want to compute the number of ways to write $n$ as a non-negative integer linear combination of a given set $A=\{a_1,a_2,\dots,a_k\}$ of positive integers (that is, for the number of integer partitions of $n$ with parts in $A$). Here is one strategy: let $p_A(n)$ denote the number of integer partitions of $n$ with parts in $A$. Let
$$f_A(x)=\sum_{n=0}^\infty p_A(n) x^n=\prod_{i=1}^k \frac{1}{1-x^{a_i}}$$ be the "generating function" for the sequence $p_A(n)$. The product expansion (which I am not proving here) shows that it is a rational generating function, and this means that the sequence $p_A(n)$ satisfies a finite linear recurrence, which you can use to compute the exact value or (much more quickly) an asymptotic formula. Rational generating functions are the subject of Chapter 4 of Stanley's famous book "Enumerative combinatorics" (vol. 1), where you might look for more information. For instance, the form of the generating function above implies that the sequence $p_A(n)$ is a "quasi-polynomial" (section 4.4 of Stanley).</p>
|
106,887 | <p>Statements like</p>
<pre><code>A) A is false.
</code></pre>
<p>or</p>
<pre><code>B1) B2 is true.
B2) B1 is false.
</code></pre>
<p>cannot be assigned a truth-value due to their paradoxical use of self-reference. Are <em>all</em> statements lacking a truth-value self-referential, or are there non-self-referential statements that also cannot be assigned a truth-value?</p>
<p>Phrased another way: Can every non-self-referential statement be assigned a truth-value?</p>
<p>Edit: I <em>think</em> what I mean by self-referential is a set of statements where at least one statement in the set refers to a statement in the set. But perhaps there is a better definition.</p>
| JDH | 413 | <p>One of the main discoveries of set-theoretic research over the past fifty years is the widespread independence phenomenon, the phenomenon by which numerous fundamental statements of set theory are independent of the principal axioms of set theory. Many instances of this ubiquitous phenomenon are described in <a href="https://mathoverflow.net/questions/1924/what-are-some-reasonable-sounding-statements-that-are-independent-of-zfc">this mathoverflow question</a>. Not only is the <a href="http://en.wikipedia.org/wiki/Continuum_hypothesis" rel="nofollow noreferrer">continuum hypothesis</a> independent of ZFC, but an enormous number of other natural questions arising in set theory, infinite combinatorics and many related fields are independent of ZFC, to the point that set-theorists now begin with the expectation of any given nontrivial set-theoretic statement, that it is reasonably likely to be independent of our axioms.</p>
<p>None of these naturally arising independent statements instantiating the independence phenomenon is self-referential, and since they are independent, in most cases set-theorists are at a loss to explain what is their correct truth value. Thus, these statements can be seen as instances of the kind you seek: non-self-referential statements, which we seem unable to assign a definite truth value.</p>
<p>The question of mathematical truth for such assertions runs into deeply philosophical issues on the nature of mathematical truth and existence. For a taste of this, I can recommend some of the literature we read for <a href="http://boolesrings.org/hamkins/philosophy-of-set-theory-fall-2011/" rel="nofollow noreferrer">my recent course at NYU on the philosophy of set theory</a>.</p>
|
106,887 | <p>Statements like</p>
<pre><code>A) A is false.
</code></pre>
<p>or</p>
<pre><code>B1) B2 is true.
B2) B1 is false.
</code></pre>
<p>cannot be assigned a truth-value due to their paradoxical use of self-reference. Are <em>all</em> statements lacking a truth-value self-referential, or are there non-self-referential statements that also cannot be assigned a truth-value?</p>
<p>Phrased another way: Can every non-self-referential statement be assigned a truth-value?</p>
<p>Edit: I <em>think</em> what I mean by self-referential is a set of statements where at least one statement in the set refers to a statement in the set. But perhaps there is a better definition.</p>
| Community | -1 | <p>JDH has given a deep, interesting answer -- but it's deep and interesting in part because it relates to ZFC, which is a deep and interesting theory. Formal mathematical theories don't have to be deep and interesting, and it is possible to answer this question using very simple and straightforward examples.</p>
<p>Here is an example of a mathematical theory:</p>
<p>There are three well-formed formulas in this theory: x, y, and z. We have an axiom that says that x is true, and another axiom that says y is false. That's it. This theory has no grammatical rules for producing more complex formulas from simpler ones, no other machinery. In this theory, z cannot be assigned a truth value.</p>
<p>A less trivial example is the following. Take <a href="http://en.wikipedia.org/wiki/Tarski%27s_axioms" rel="nofollow">Tarski's axioms</a> and delete the axiom of Euclid. This is a formal system that represents the same ideas as Euclid's original formulation of plane geometry, but without the parallel postulate. In this system, we have various statements that cannot be assigned truth values. One such statement is the axiom of Euclid (i.e., basically the parallel postulate). Another would be the Pythagorean theorem.</p>
<p>The self-referential thing is an interpretation of a particular strategy used by Godel for constructing undecidable statements in theories that can describe a certain amount of arithmetic. Note the three parts: (1) an interpretation, (2) a particular strategy, and (3) only for theories that can describe a certain amount of arithmetic.</p>
<p>The examples I've given above don't require Godel's strategy. Furthermore, Godel's strategy doesn't even work for Tarski's system, because Tarski's system can't describe the necessary amount of arithmetic.</p>
<p>Even in examples that do use Godel's strategy, the self-referentialism is only an interpretation. The undecidable statements don't literally refer to themselves -- they can just be interpreted that way.</p>
|
373,313 | <p>Given a manifold <span class="math-container">$M$</span>, we can always embed it in some Euclidean space (general position theorem). Hence we can define the minimal embedding space of <span class="math-container">$M$</span> to be the smallest Euclidean space that we can embed <span class="math-container">$M$</span> in. My question is, will this depend on the category of <span class="math-container">$M$</span> (piecewise linear or smooth)? I am not an expert in this area, and I know the difference between these two categories can be subtle. Any pointer is very appreciated.</p>
| Ryan Budney | 1,465 | <p>Similar to Sander's example, the Poincaré dodecahedral space does not smoothly embed in <span class="math-container">$\mathbb R^4$</span>, but it does embed topologically.</p>
|
147,361 | <p>I'm new to Mathematica. I want to create operators $D^{(f)}=\partial_x+f'-\partial^2_x$ and $D^{(g)}=\partial_x+g'-\partial^2_x$, put them into matrix elements, and then multiply by a vector whose components are functions of $x$, say $u(x), v(x)$. For example $$\begin{pmatrix}D^{(f)} & D^{(g)}\\ D^{(g)} & D^{(f)} \end{pmatrix} \begin{pmatrix} u(x) \\ v(x) \end{pmatrix}$$</p>
<p>I saw the posts: <a href="https://mathematica.stackexchange.com/questions/69149/computing-with-matrix-differential-operators">Computing with matrix differential operators</a>, <a href="https://mathematica.stackexchange.com/questions/120324/how-to-do-matrix-operation-if-the-first-matrix-is-an-operator">How to do matrix operation if the first matrix is an operator?</a>; but i'm very newbie on this.</p>
<p>My problem is that I have trouble with the matrix products, because the entries are operators.</p>
<p>Thanks!</p>
<p>EDIT: $f=f(x,y,z)$ and $g=g(x,y,z)$; both of them are functions of a vector $(x,y,z)$, but we can neglect the components $y,z$ and consider only the $x$ part.</p>
| Carl Woll | 45,431 | <p>You can make use of my <a href="https://mathematica.stackexchange.com/a/162590/45431"><code>DifferentialOperator</code></a> paclet to do this. Install with:</p>
<pre><code>PacletInstall["https://github.com/carlwoll/DifferentialOperator/releases/download/0.1/DifferentialOperator-0.0.2.paclet"]
</code></pre>
<p>and load with:</p>
<pre><code><<DifferentialOperator`
</code></pre>
<p>Here is an animation:</p>
<p><a href="https://i.stack.imgur.com/vWOgt.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vWOgt.gif" alt="enter image description here"></a></p>
<p>Also, notice how I used two different variations of the second order differential operator.</p>
<p>Here is the final answer so that it can be compared with other answers:</p>
<pre><code>{
u[x] f'[x] + v[x] g'[x] + u'[x] + v'[x] - u''[x] - v''[x],
v[x] f'[x] + u[x] g'[x] + u'[x] + v'[x] - u''[x] - v''[x]
}
</code></pre>
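<p>As a cross-check in a different system (Python with SymPy rather than Mathematica; the operator <code>D</code> below is my own helper, not part of the paclet), the matrix-of-operators product can be reproduced symbolically with names mirroring the question's notation:</p>

```python
import sympy as sp

x = sp.symbols('x')
u, v, f, g = (sp.Function(n)(x) for n in ('u', 'v', 'f', 'g'))

# D^(h) = d/dx + h'(x) - d^2/dx^2, applied to an expression w
def D(h, w):
    return sp.diff(w, x) + sp.diff(h, x) * w - sp.diff(w, x, 2)

# The operator matrix [[D^(f), D^(g)], [D^(g), D^(f)]] applied to (u, v)
first = D(f, u) + D(g, v)   # row 1: D^(f) u + D^(g) v
second = D(g, u) + D(f, v)  # row 2: D^(g) u + D^(f) v
```

<p>Simplifying <code>first</code> and <code>second</code> reproduces the two components given above.</p>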
|
1,197,875 | <p>I'm just starting partials and don't understand this at all. I'm told to hold $y$ "constant", so I treat $y$ like just some number and take the derivative of $\frac{1}{x}$, which I hope I'm correct in saying is $-\frac{1}{x^2}$, then multiply by $y$, getting $-\frac{y}{x^2}$.</p>
<p>But apparently the correct answer is $\frac{1}{x}$. What am I missing?</p>
| Mnifldz | 210,719 | <p>When you take the derivative of $\frac{y}{x}$ with respect to $y$ you are computing $\frac{\partial }{\partial y} \frac{y}{x} = \frac{1}{x}$ because here you are holding $x$ constant. If you take the derivative of the same expression with respect to $x$ then you compute $\frac{\partial}{\partial x} \frac{y}{x} = - \frac{y}{x^2}$ and this is when you hold $y$ constant.</p>
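<p>Both computations are easy to confirm with a quick symbolic check (a SymPy sketch; any computer algebra system would do the same):</p>

```python
import sympy as sp

x, y = sp.symbols('x y')
expr = y / x

# Holding x constant: d/dy (y/x) = 1/x
assert sp.simplify(sp.diff(expr, y) - 1/x) == 0
# Holding y constant: d/dx (y/x) = -y/x^2
assert sp.simplify(sp.diff(expr, x) - (-y/x**2)) == 0
```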
|
1,197,875 | <p>I'm just starting partials and don't understand this at all. I'm told to hold $y$ "constant", so I treat $y$ like just some number and take the derivative of $\frac{1}{x}$, which I hope I'm correct in saying is $-\frac{1}{x^2}$, then multiply by $y$, getting $-\frac{y}{x^2}$.</p>
<p>But apparently the correct answer is $\frac{1}{x}$. What am I missing?</p>
| leftaroundabout | 11,107 | <p>There is <em>nothing</em> special about the symbol $x$. Unfortunately, there's a certain bias to calling $f$ <strong>the</strong> function and $x$ <strong>the</strong> variable, but really it should never<sup>1</sup> matter how variables are labelled as long as it's done consistently.</p>
<p>So in particular, if you write
$$
f(x,y) = \tfrac{y}{x}
$$
it means exactly the same thing as
$$
f(y,x) = \tfrac{x}{y}.
$$
Note that $x$ and $y$ aren't actually part of the definition: this equation defines only $f$, and introduces two new “<a href="https://en.wikipedia.org/wiki/Class_%28computer_programming%29#Member_accessibility" rel="nofollow">private</a>” symbols, <em>locally</em>, to do that. I could also have written
$$
f(\mathscr{Y},\Xi) = \tfrac{\Xi}{\mathscr{Y}}.
$$</p>
<p>Now, what you're doing is actually
$$
f'(x,y) \equiv \tfrac{\partial}{\partial y} f(x,y),
$$
and that again is the same as
$$
f'(y,x) \equiv \partial_x f(y,x) = \partial_x \tfrac{x}{y}.
$$
You certainly won't doubt that this is $\tfrac1y$... although again that statement is meaningless without context: really I should say that $f'(y,x) = \tfrac1y$, and therefore $f'(x,y) = \tfrac1x$.</p>
<hr>
<p><sup>1</sup><sub>Alas, in many applications of maths this is widely neglected! In physics, two equations may be interpreted as something completely different depending on how the variables are labelled.</sub></p>
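<p>The "private symbol" point has a direct analogue in programming, where parameter names are local to a definition; a minimal Python illustration:</p>

```python
# Parameter names are local to each definition: these define the same function.
def f1(x, y):
    return y / x

def f2(y, x):
    return x / y

assert f1(2, 6) == f2(2, 6) == 3.0
```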
|
84,982 | <p>I am a new professor in Mathematics and I am running an independent study on Diophantine equations with a student of mine. Online I have found a wealth of very helpful expository notes written by other professors, and I would like to use them for guided reading. <strong>I am wondering whether it is customary to ask permission of the author before using his or her online notes for my own reading course.</strong> </p>
<p>Also, if anyone has suggestions for good sources on Diophantine equations, please feel free to enlighten me.</p>
| Community | -1 | <p>The world has changed rapidly with the advent of the web and desktop publishing, so it's going to be difficult to say what is customary. You will probably find that there are generational differences as well as differences between fields and institutions. For example, what is customary at MIT is for professors to put their course materials in OpenCourseware.</p>
<p>It's always been common for a set of lecture notes to be written down, then polished over the years, and eventually transformed into a textbook. That means that the distinction between a textbook and a set of lecture notes has always been fuzzy. However, I think the amount of fuzziness has increased recently, because publication in print no longer has to mark the threshold. Many people go through the whole process of evolution on the web, with no involvement from an editor or the traditional publishing industry.</p>
<p>But to the extent that the distinction between free online lecture notes and a free online textbook is detectable, detecting it may help you to guess what is the most polite and constructive approach.</p>
<p>If it's really a complete, standalone book, then the author has put a lot of effort into making it into a polished product, and will definitely be grateful to hear that the effort is paying off, and that you and your students are using it. With traditional books, the author always knows how many sales s/he's made; with a free online book, s/he doesn't know unless the users take the trouble to make contact.</p>
<p>If it's really a set of lecture notes, then it may be less polished and self-contained. The author may feel that its unfinished state doesn't reflect as well on him/her as he/she would like professionally, and may have only put the notes online with the intention of making them available to his/her own students.</p>
<p>As the author of some free online physics textbooks, I find it a nuisance that there are so many outdated versions of the books sitting on the web. The most recent version is the best and reflects the best on me. For this reason, I'd suggest not mirroring people's notes on your own server, even if they have a Creative Commons license that permits that. If you're concerned that they'll evaporate off of the author's server, just keep a private copy as insurance against that eventuality.</p>
<p>I have a hard time believing that anyone who has posted lecture notes online will be upset that a student studying them prints them out so s/he can read them, highlight them, etc. However, I think many people would be more sensitive about a situation where the notes are being reproduced and sold in a campus bookstore or at a copy shop. Sometimes they'll state their expectations explicitly on the web page. If they choose an explicit license such as a Creative Commons license, then just make sure you don't violate the license.</p>
<p>Solutions to problems can be a sticky point. E.g., if you're thinking of distributing solutions to your students, I would discuss this with the author of the problems. Distributing solutions on paper may be more acceptable than distributing them electronically.</p>
|
2,390,215 | <p>There exists a bijection from $\mathbb N$ to $\mathbb Q$, so $\mathbb Q$ is countable. And by the well ordering principle, $\mathbb N$ has a least member, say $n_1$, which is mapped to some element of $\mathbb Q$. In this way, can $\mathbb Q$ be well ordered? Is my argument correct? </p>
| John Griffin | 466,397 | <p>Yes! If you have a bijection $f:W\to S$, where $(W,\leq)$ is well ordered, then one can show that $(S,\preceq)$ is well ordered, where $\preceq$ is defined as follows.
Given $s_1,s_2\in S$, there must be unique $w_1,w_2\in W$ such that $s_1=f(w_1)$ and $s_2=f(w_2)$. We will say that
$$
s_1 \prec s_2\ \text{if}\ w_1<w_2.
$$</p>
<p>I'm assuming here that when you say "well ordering principle", that you are only using that $\mathbb{N}$ is a well ordered set. There is a stronger version, equivalent to the axiom of choice, which says that any set can be well ordered. However, for countable sets this stronger version isn't needed because of the argument described above and the fact (or assumption) that $\mathbb{N}$ is well ordered.</p>
<p>It's also useful to point out that despite being able to well order $\mathbb{Q}$, this order has <em>nothing</em> to do with its natural order. Indeed, while the nonempty set $(0,1)\cap\mathbb{Q}$ does have a $\preceq$-minimal element if $\preceq$ is defined as above using $\mathbb{N}$, it certainly does <em>not</em> have a $\leq$-minimal element.</p>
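<p>The pulled-back order $\preceq$ is easy to sketch computationally. Assuming some fixed enumeration of (an initial segment of) $\mathbb{Q}$ plays the role of the bijection, elements are compared by their indices in $\mathbb{N}$:</p>

```python
from fractions import Fraction

# A toy initial segment of an enumeration of Q; the index in N is the
# preimage under the bijection and defines the pulled-back order.
enumeration = [Fraction(0), Fraction(1), Fraction(-1), Fraction(1, 2), Fraction(-1, 2)]
index = {q: n for n, q in enumerate(enumeration)}

def precedes(s1, s2):
    # s1 precedes s2 iff its N-preimage is smaller
    return index[s1] < index[s2]

# Every nonempty subset has a least element: the one of smallest index.
subset = {Fraction(1, 2), Fraction(-1)}
least = min(subset, key=index.__getitem__)
```

<p>Note that <code>least</code> here is $-1$, not $\frac{1}{2}$: as the answer says, this order has nothing to do with the usual order on $\mathbb{Q}$.</p>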
|
188,139 | <p>Let $f$ be entire and non-constant. Assuming $f$ satisfies the functional equation $f(1-z)=1-f(z)$, can one show that the image of $f$ is $\mathbb{C}$?</p>
<p>The values $f$ takes on the unit disc seems to determine $f$...</p>
<p>Any ideas?</p>
| Arkady | 23,522 | <p>Since the function is entire and non-constant, it misses at most one point by Picard's little theorem. Suppose it misses the point $y$. If $y=1-y$, then $y=\frac{1}{2}$; but the functional equation gives $f(\frac{1}{2})=1-f(\frac{1}{2})$, i.e. $f(\frac{1}{2})=\frac{1}{2}$, so $\frac{1}{2}$ is in the range, a contradiction. Hence $y\neq 1-y$, so $1-y$ is in the range: there is a $z$ with $f(z)=1-y$, and then $f(1-z)=1-f(z)=y$, contradicting that $y$ is missed. So, the range of $f$ is $\mathbb C$.</p>
|