qid (int64, 1 to 4.65M) | question (large_string, lengths 27 to 36.3k) | author (large_string, lengths 3 to 36) | author_id (int64, -1 to 1.16M) | answer (large_string, lengths 18 to 63k)
---|---|---|---|---|
2,265,782 | <p>Number of twenty-one digit numbers such that the product of the digits is divisible by $21$.</p>
<p>Since the product is divisible by $21$, the number should contain the digits $3,6,7,9$. But I am unable to decide how to proceed... can I have a hint?</p>
| amWhy | 9,003 | <p>Yes, your initial work is correct:</p>
<p>$$(p\land q)\to p \equiv \lnot (p\land q)\lor p\tag{implication}$$</p>
<p>$$\equiv (\lnot p \lor \lnot q) \lor p\tag{DeMorgan's rule}$$</p>
<hr>
<p>Now using the properties of associativity and commutativity of the $\lor$-operator, we have:</p>
<p>$$\equiv \lnot p \lor \lnot q \lor p\tag{associativity}$$</p>
<p>$$\equiv \lnot p \lor p \lor \lnot q \tag{commutativity}$$</p>
<p>$$\equiv (\lnot p \lor p)\lor \lnot q \tag{associativity}$$</p>
<p>$$\equiv \underbrace{(\lnot p \lor p)}_{\large \top} \lor \lnot q$$</p>
<p>$$\equiv \top \lor \lnot q\tag{Law of the excluded middle}$$
$$ \equiv \top \tag{annihilator of $\lor$}$$</p>
<p>Note that $\top$ represents <em>tautologically true</em>.</p>
<p>This is true regardless of the truth values assigned to $p$ and $q$.</p>
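<p>A quick brute-force check of the same tautology (a minimal sketch in Python, writing the implication as "not-or"):</p>
<pre><code>from itertools import product

# check (p and q) -> p under every truth assignment
for p, q in product([True, False], repeat=2):
    implication = (not (p and q)) or p     # A -> B encoded as (not A) or B
    assert implication is True
print("tautology confirmed")
</code></pre>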
|
2,691,232 | <p>Let $E$ be a complex Hilbert space.</p>
<blockquote>
<p>I look for an example of $A,B\in \mathcal{L}(E)$ such that $A\neq 0$ and $B\neq 0$ but $AB=0$.</p>
</blockquote>
| José Carlos Santos | 446,262 | <p>In $\ell^2(\mathbb{C})$, define$$A(x_1,x_2,x_3,\ldots)=(x_1,0,x_3,0,x_5,0,\ldots)\text{ and }B(x_1,x_2,x_3,\ldots)=(0,x_2,0,x_4,0,x_6,\ldots).$$</p>
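<p>A quick check of why this example works: for any $(x_1,x_2,x_3,\ldots)\in\ell^2(\mathbb{C})$ we have $AB(x_1,x_2,x_3,\ldots)=A(0,x_2,0,x_4,\ldots)=(0,0,0,\ldots)$, so $AB=0$, while $A\neq 0$ and $B\neq 0$ since $Ae_1=e_1$ and $Be_2=e_2$.</p>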
|
1,959,949 | <blockquote>
<p>We introduce new variables as
$\begin{cases}\xi:=x+ct\\\eta:=x-ct\end{cases}
$
which implies that
$
\begin{cases} \partial_ x=\partial_\xi+\partial_\eta\\\partial_t=c\partial_\xi+c\partial_\eta
\end{cases}
$</p>
</blockquote>
<p>This is from page 34 of <em>Partial Differential Equation --- an Introduction</em> (2nd edition) by Strauss. I don't understand the implication here. </p>
<p>How does this proceeds? Thanks a lot.</p>
| avs | 353,141 | <p>Use your equations to solve for $x$ and for $t$ in terms of $\xi, \eta$.</p>
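<p>A sketch of where that hint leads (the standard chain-rule computation): solving gives $x=\frac{1}{2}(\xi+\eta)$ and $t=\frac{1}{2c}(\xi-\eta)$, and then
$$\partial_x=\frac{\partial\xi}{\partial x}\partial_\xi+\frac{\partial\eta}{\partial x}\partial_\eta=\partial_\xi+\partial_\eta,\qquad
\partial_t=\frac{\partial\xi}{\partial t}\partial_\xi+\frac{\partial\eta}{\partial t}\partial_\eta=c\partial_\xi-c\partial_\eta,$$
which matches the quoted display except for the sign of the second term in $\partial_t$, so the plus sign there appears to be a transcription slip.</p>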
|
1,237,077 | <p>For a periodic function we have: $$\int_{b}^{b+a}f(t)dt = \int_{b}^{na}f(t)dt+\int_{na}^{b+a}f(t)dt = \int_{b+a}^{(n+1)a}f(t)dt+\int_{na}^{b+a}f(t)dt = \int_{na}^{(n+1)a}f(t)dt = \int_{0}^{a}f(t)dt.$$ But I don't understand how we obtain $\int_{b+a}^{(n+1)a}f(t)\,dt=\int_{b}^{na}f(t)\,dt$ in this chain of equalities.</p>
| mathifold.org | 231,554 | <p>If $a$ is the length of one period, then $f(t)=f(t-a)$ and so on. Then</p>
<p>$\int_{b+a}^{na+a}f(t)dt=\int_{b+a}^{na+a}f(t-a)dt=\text{(take $u=t-a$)}=\int_{b}^{na}f(u)du$</p>
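<p>A numerical sanity check of this identity (a sketch with an assumed test function of period $a=2\pi$ and arbitrary sample values of $b$ and $n$):</p>
<pre><code>import numpy as np
from scipy.integrate import quad

a, b, n = 2*np.pi, 1.3, 3            # assumed sample values; f below has period a
f = lambda t: np.sin(t) + 2.0

lhs, _ = quad(f, b + a, n*a + a)     # integral over the interval shifted by one period
rhs, _ = quad(f, b, n*a)             # integral over the original interval
print(np.isclose(lhs, rhs))          # True
</code></pre>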
|
8,997 | <p>I have a set of data points in two columns in a spreadsheet (OpenOffice Calc):</p>
<p><img src="https://i.stack.imgur.com/IPNz9.png" alt="enter image description here"></p>
<p>I would like to get these into <em>Mathematica</em> in this format:</p>
<pre><code>data = {{1, 3.3}, {2, 5.6}, {3, 7.1}, {4, 11.4}, {5, 14.8}, {6, 18.3}}
</code></pre>
<p>I have Googled for this, but what I find is about importing the entire document, which seems like overkill. Is there a way to kind of cut and paste those two columns into <em>Mathematica</em>? </p>
| Sjoerd C. de Vries | 57 | <p>This imports a whole sheet:</p>
<pre><code>Grid[#, Dividers -> All] &@
Import["http://joliclic.free.fr/html/object-tag/en/data/test.sxc", {"Data", 1}]
</code></pre>
<p><img src="https://i.stack.imgur.com/CE4S2.png" alt="Mathematica graphics"></p>
<p>And this imports three contiguous rows and two non-contiguous columns:</p>
<pre><code>Grid[#, Dividers -> All] &@
Import["http://joliclic.free.fr/html/object-tag/en/data/test.sxc",
{"Data", 1, Range[1, 3], {1, 3}}
]
</code></pre>
<p><img src="https://i.stack.imgur.com/hF4sc.png" alt="Mathematica graphics"></p>
<p>Details can be found on the <a href="http://reference.wolfram.com/mathematica/ref/format/SXC.html" rel="nofollow noreferrer">SXC Import</a> doc page (the range selections themselves are actually not documented there).</p>
|
786,086 | <p>For my research I am working with approximations to functions which I then integrate or differentiate and I am wondering how this affects the order of approximation.</p>
<p>Consider as a minimal example the case of $e^x$, for which integration and differentiation don't change anything. Now if I approximate this function with a second-order Taylor series I get:
$$e^x\approx 1+x+\frac{x^2}{2}+O(x^3) \tag{1}$$</p>
<p>If I were to integrate this function I get:
$$ \int 1+x+\frac{x^2}{2}+O(x^3) dx = C+x+\frac{x^2}{2}+\frac{x^3}{6}+O(x^{?4?}) \tag{2}$$</p>
<p>I wrote $O(x^{?4?})$ because that is what my question is about: <strong>do I indeed get a higher-order approximation when I do this integration, or is it appropriate to cut off the solution to the integral at $O(x^3)$, thus removing the $\frac{x^3}{6}$ term?</strong></p>
<p>And what about differentiation? In that case I seem to lose an order of accuracy, is that indeed the case?</p>
| Urgje | 95,681 | <p>Suppose that $f(x)$ has a Taylor series expansion about $x=0$ with a radius
of convergence $r>0$. For convenience we set $f(0)=1$.</p>
<p>We write
$$
f(x)=1+xf^{(1)}(0)+\frac{x^{2}}{2}f^{(2)}(0)+\mathcal{O}%
(x^{3})=1+xf^{(1)}(0)+\frac{x^{2}}{2}f^{(2)}(0)+g(x),
$$
where, in a neighbourhood of $0$,
$$
|x^{-3}g(x)|<c.
$$
Then, for the primitive $F(x)$,</p>
<p>$$
F(x)-F(0)=\int_{0}^{x}dyf(y)=x\int_{0}^{1}duf(xu),
$$
where
$$
f(xu)=1+xuf^{(1)}(0)+\frac{(xu)^{2}}{2}f^{(2)}(0)+g(xu).
$$
Then
\begin{eqnarray*}
\int_{0}^{1}duf(xu) &=&1+\frac{1}{2}xf^{(1)}(0)+\frac{x^{2}}{6}%
f^{(2)}(0)+\int_{0}^{1}dug(xu) \\
&=&1+\frac{1}{2}xf^{(1)}(0)+\frac{x^{2}}{6}f^{(2)}(0)+\int_{0}^{1}du(xu)^{3}%
\frac{g(xu)}{(xu)^{3}} \\
&=&1+\frac{1}{2}xf^{(1)}(0)+\frac{x^{2}}{6}f^{(2)}(0)+x^{3}%
\int_{0}^{1}duu^{3}\frac{g(xu)}{(xu)^{3}} \\
\left\vert \int_{0}^{1}duu^{3}\frac{g(xu)}{(xu)^{3}}\right\vert &\leqslant
&c\int_{0}^{1}duu^{3}=\frac{1}{4}c,
\end{eqnarray*}
so
\begin{eqnarray*}
F(x)-F(0) &=&x\{1+\frac{1}{2}xf^{(1)}(0)+\frac{x^{2}}{6}f^{(2)}(0)+x^{3}%
\int_{0}^{1}duu^{3}\frac{g(xu)}{(xu)^{3}}\} \\
&=&x+\frac{1}{2}f^{(1)}(0)x^{2}+\frac{1}{6}f^{(2)}(0)x^{3}+x^{4}%
\int_{0}^{1}duu^{3}\frac{g(xu)}{(xu)^{3}} \\
&=&x+\frac{1}{2}f^{(1)}(0)x^{2}+\frac{1}{6}f^{(2)}(0)x^{3}+\mathcal{O}%
(x^{4}).
\end{eqnarray*}
For the derivative</p>
<p>\begin{eqnarray*}
\partial _{x}f(x) &=&\partial _{x}\{1+xf^{(1)}(0)+\frac{x^{2}}{2}%
f^{(2)}(0)+g(x)\} \\
&=&f^{(1)}(0)+xf^{(2)}(0)+\partial _{x}g(x).
\end{eqnarray*}
Now, according to l'Hôpital's rule,</p>
<p>$$
\lim_{x\rightarrow 0}\frac{g(x)}{x^{3}}=\lim_{x\rightarrow 0}\frac{\partial
_{x}g(x)}{3x^{2}},
$$
so
$$
\partial _{x}g(x)=\mathcal{O}(x^{2})
$$</p>
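<p>A quick symbolic check of the $e^x$ example above (a SymPy sketch, taking $g$ to be the exact remainder of the second-order expansion):</p>
<pre><code>import sympy as sp

x, t = sp.symbols('x t')
g = sp.exp(x) - (1 + x + x**2/2)                  # remainder of the 2nd-order expansion of e^x

print(sp.series(g, x, 0, 6))                      # x**3/6 + ... : the remainder is O(x^3)
G = sp.integrate(g.subs(x, t), (t, 0, x))         # primitive of the remainder, F(x) - F(0)
print(sp.series(G, x, 0, 6))                      # x**4/24 + ... : one order gained
print(sp.series(sp.diff(g, x), x, 0, 6))          # x**2/2 + ... : one order lost
</code></pre>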
|
1,109,552 | <p>So the norm of an element $\alpha = a + b\sqrt{-5}$ in $\mathbb{Z}[\sqrt{-5}]$ is defined as $N(\alpha) = a^2 + 5b^2$, and so I argue by contradiction: assume there exists $\alpha$ such that $N(\alpha) = 2$, so that $a^2+5b^2 = 2$. However, since $a^2$ and $5b^2$ are nonnegative integers, we must have $b=0$ and $a^2=2$, i.e. $a=\pm\sqrt{2}$; but $a$ must be an integer, and so no such $\alpha$ exists. The same goes for $3$.</p>
<p>I already proved that </p>
<ol>
<li>$N(\alpha\beta) = N(\alpha)N(\beta)$ for all $\alpha,\beta\in\mathbb{Z}[\sqrt{-5}]$.</li>
<li>if $\alpha\mid\beta$ in $\mathbb{Z}[\sqrt{-5}]$, then $N(\alpha)\mid N(\beta)$ in $\mathbb{Z}$.</li>
<li>$\alpha\in\mathbb{Z}[\sqrt{-5}]$ is a unit if and only if $N(\alpha)=1$.</li>
<li>Show that there are no elements in $\mathbb{Z}[\sqrt{-5}]$ with $N(\alpha)=2$ or $N(\alpha)=3$. (I proved it above)</li>
</ol>
<p>Now I need to prove that $2$, $3$, $1+ \sqrt{-5}$, and $1-\sqrt{-5}$ are irreducible.</p>
<p>So I also argue by contradiction: assume $1 + \sqrt{-5}$ is reducible. Then there must exist non-unit elements $\alpha,\beta \in \mathbb{Z}[\sqrt{-5}]$ such that $\alpha\beta = 1 + \sqrt{-5}$, and so $N(\alpha\beta) =N(\alpha)N(\beta)= N(1 + \sqrt{-5}) = 6$. But we already know that $N(\alpha) \neq 2$ or $3$, and so $N(\alpha) = 6$ and $N(\beta) = 1$ or vice versa; in either case this contradicts the fact that $\alpha$ and $\beta$ are both non-units. I just want to make sure I am on the right track here. Also, how can I prove that $2$, $3$, $1+ \sqrt{-5}$, and $1-\sqrt{-5}$ are not associates of each other?</p>
| azimut | 61,691 | <p>Yes, you are on the right track. All your reasoning makes sense to me.</p>
<p><strong>On your question about the associates</strong></p>
<p>By the properties of the norm, associates have the same norm. So the only possible associates in your list are $1 + \sqrt{-5}$ and $1 - \sqrt{-5}$.</p>
<p>Now determine all units of $\mathbb{Z}[\sqrt{-5}]$ by finding all integer solutions of $N(a + b\sqrt{-5}) = 1$. (The list will be quite short.)</p>
<p>Then check every unit $u$ for $u (1 + \sqrt{-5}) = (1 - \sqrt{-5})$.
You will see that this never happens and thus, the two elements are not associate.</p>
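<p>A brute-force sketch in Python (the finite search window is enough here, since $a^2+5b^2\le 3$ already forces $|a|\le 1$ and $b=0$):</p>
<pre><code>norm = lambda a, b: a*a + 5*b*b
R = range(-10, 11)
units = [(a, b) for a in R for b in R if norm(a, b) == 1]
small = [(a, b) for a in R for b in R if norm(a, b) in (2, 3)]
print(units)   # [(-1, 0), (1, 0)] -- the only units are 1 and -1
print(small)   # []                -- no elements of norm 2 or 3
</code></pre>
<p>With only $\pm 1$ as units, $u(1+\sqrt{-5})$ can only be $\pm(1+\sqrt{-5})$, which is never $1-\sqrt{-5}$, so the two elements are not associates.</p>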
|
3,058,139 | <p>Let us consider the statement <span class="math-container">$\exists x P(x)$</span> - translated into English, "there exists an <span class="math-container">$x$</span> in our universe of discourse such that <span class="math-container">$P(x)$</span> is true." In writing the negation of this, we are taught to switch quantifiers (<span class="math-container">$\exists \leftrightarrow \forall$</span>) and to negate that statement <span class="math-container">$P(x)$</span>.</p>
<p>Thus, </p>
<p><span class="math-container">$$\neg (\exists x P(x)) = \forall x (\neg P(x))$$</span></p>
<p>However, humor me for a second. Let's consider what negation is - it is the "logical complement." The negation of a statement is always false when the statement is true, and vice versa. In that light, why would we not say the following is also the negation?</p>
<p><span class="math-container">$$\neg (\exists x P(x)) = \not \exists x P(x)$$</span></p>
<p>Or, taking this a bit further, why would we not write this as well?</p>
<p><span class="math-container">$$\forall x (\neg P(x)) = \not \exists x P(x)$$</span></p>
<p>Both seem to imply the same thing: there does not exist an <span class="math-container">$x$</span> such that <span class="math-container">$P(x)$</span> is true (and thus for all <span class="math-container">$x$</span>, <span class="math-container">$P(x)$</span> is false, or, rather, <span class="math-container">$\neg P(x)$</span> is true). </p>
<p>So is there some underlying reason why we don't do negations in this way? As far as I can tell, they mean the same thing, yet I always have seen the <span class="math-container">$\forall$</span> version as above. Looking around MSE, I've only seen some posts which have <span class="math-container">$\neg \exists$</span> (basically the same as <span class="math-container">$\not \exists$</span>), but they're only in the context of simplifying a logical expression. </p>
<p>So I guess, if indeed these are logically equivalent, my follow-up question would be - why is <span class="math-container">$\forall$</span> considered a simplification of <span class="math-container">$\neg \exists$</span> or <span class="math-container">$\not \exists$</span>?</p>
<p>My only guess is that "<span class="math-container">$\not \exists$</span>" isn't standard notation, or so I recall from some notes my complex analysis professor gave us last semester. Or perhaps saying "for all <span class="math-container">$x$</span>, this is false" is more immediately understood (or a more "direct" way of saying it) than "there does not exist <span class="math-container">$x$</span> such that this is true"?</p>
<p><em>(Footnote of note: I haven't had much education in predicate logic and such. We went over it for a little while in one of my classes so I understand some basics like the above but we never went into much detail. So I apologize if this question is poorly framed or worded.)</em></p>
| hmakholm left over Monica | 14,366 | <p>Which form to consider simpler is basically a matter of taste and convention.</p>
<p>There are some accounts of predicate logic that consider <span class="math-container">$\exists$</span> the only primitive quantifier and treat <span class="math-container">$\forall x\,\varphi$</span> as an abbreviation for <span class="math-container">$\neg\exists x\neg\varphi$</span>.</p>
<p>My impression is that this is something of a minority option these days, but it is not <em>wrong</em> as such.</p>
|
2,117,420 | <p>We all know that the particular solution of $A_{n} = A_{n-1} + f(n)$,</p>
<p>where $f(n)=n^c$ and $c$ is an arbitrary positive integer,</p>
<p>can be set to $(n^c+n^{c-1}+\cdots+1)$.</p>
<p>But what about when $c\lt0$?</p>
<p>How do we find a particular solution of the form:</p>
<blockquote>
<p>$A_{n} = A_{(n-1)} + f(n)$</p>
<p>where $f(n)=n^{c}$ and $c$ is an arbitrary negative integer.</p>
</blockquote>
<p>For example, what's the particular solution of $T(n)=T(n-1)+{1\over n}$?</p>
<p>(P.S. I know it's the harmonic series and we can use the integral test to prove that it diverges by comparing its sum with an improper integral, but that doesn't matter here.)</p>
<p>I can't find any clue in "Discrete Mathematics, 7th Edition". </p>
<p>Does anyone know the answer?</p>
| Mark Fischler | 150,362 | <p><strong>HINT</strong></p>
<p>You know how to diagonalize a matrix, that is, to find a diagonal matrix $D$ and an invertible matrix $P$ (which will be the matrix whose columns are the eigenvectors of $M$) such that
$$
M = PDP^{-1}
$$
($D$ will have the eigenvalues of $M$ on the diagonal elements.)</p>
<p>Then for all $k$,
$$
M^k = PD^kP^{-1}
$$
since pairs of $P^{-1}P$ cancel out except at the left and right sides of the product.</p>
<p>Finally, notice how this lets you relate $e^M$ to $P e^D P^{-1}$ since you know how to exponentiate the diagonal matrix $D$.</p>
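<p>A numerical sketch of these identities (assuming a symmetric example matrix $M$, so that the eigenvector matrix is orthogonal; for a general diagonalizable $M$ one would use <code>numpy.linalg.eig</code> and an explicit inverse instead):</p>
<pre><code>import numpy as np
from scipy.linalg import expm

M = np.array([[2.0, 1.0], [1.0, 3.0]])   # assumed symmetric example
w, P = np.linalg.eigh(M)                 # eigenvalues w, orthonormal eigenvectors in columns of P

k = 4
print(np.allclose(P @ np.diag(w**k) @ P.T, np.linalg.matrix_power(M, k)))  # M^k = P D^k P^{-1}
print(np.allclose(P @ np.diag(np.exp(w)) @ P.T, expm(M)))                  # e^M = P e^D P^{-1}
</code></pre>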
|
136,021 | <p>There is an equivalence relation between inclusion of finite groups coming from the world of <a href="http://en.wikipedia.org/wiki/Subfactor" rel="noreferrer">subfactors</a>:</p>
<p><strong>Definition</strong>: <span class="math-container">$(H_{1} \subset G_{1}) \sim(H_{2} \subset G_{2})$</span> if <span class="math-container">$(R^{G_{1}} \subset R^{H_{1}})\cong(R^{G_{2}} \subset R^{H_{2}})$</span> as subfactors.</p>
<p>Here, <span class="math-container">$R$</span> is the hyperfinite <span class="math-container">$II_1$</span> factor (a particular von Neumann algebra), and the groups <span class="math-container">$G_1$</span> and <span class="math-container">$G_2$</span> act by outer automorphisms.
The notation <span class="math-container">$R^G$</span> refers to the fixed-point algebra.</p>
<p><strong>Theorem</strong>: Let <span class="math-container">$(H \subset G)$</span> be a subgroup and let <span class="math-container">$K$</span> be a normal subgroup of <span class="math-container">$G$</span>, contained in <span class="math-container">$H$</span>, then:<br />
<span class="math-container">$(H \subset G) \sim (H/K \subset G/K)$</span>. In particular, if <span class="math-container">$H$</span> is itself normal: <span class="math-container">$(H \subset G) \sim (\{1\} \subset G/H) $</span><br />
<strong>Theorem</strong> : <span class="math-container">$(\{1\} \subset G_{1}) \sim(\{1\} \subset G_{2})$</span> iff <span class="math-container">$G_1 \simeq G_2$</span> as groups.</p>
<p><strong>Remark</strong> : the relation <span class="math-container">$\sim$</span> remembers the groups, but not necessarily the subgroups:<br />
<strong>Example</strong> (<a href="http://www.mscand.dk/article/view/14281" rel="noreferrer">Kodiyalam-Sunder</a>, p. 47): <span class="math-container">$(\langle (1234) \rangle \subset S_4) \sim (\langle (13),(24) \rangle \subset S_4)$</span></p>
<blockquote>
<p>Is there a purely group-theoretic reformulation of the relation <span class="math-container">$\sim$</span> ?</p>
</blockquote>
<p><strong>Motivations</strong>: See <a href="https://mathoverflow.net/questions/136171/an-upper-bound-for-the-maximal-subgroups-at-fixed-index">here</a> and <a href="https://mathoverflow.net/questions/135806/are-subfactor-planar-algebras-hard-to-classify-at-index-6/135994#135994">here</a>.</p>
<hr />
<p><strong>Some definitions:</strong> A <em>subfactor</em> is an inclusion of factors. A <em>factor</em> is a von Neumann algebra with a trivial center. The <em>center</em> is the intersection with the commutant. A <em>von Neumann algebra</em> is an algebra of bounded operators on an Hilbert space, closed by taking bicommutant and dual. Here, <span class="math-container">$R$</span> is the hyperfinite <span class="math-container">$II_{1}$</span> factor. <span class="math-container">$R^{G}$</span> is the subfactor of <span class="math-container">$R$</span> containing all the elements of <span class="math-container">$R$</span> invariant under the natural action of the finite group <span class="math-container">$G$</span>. In its <a href="http://www.ams.org/books/memo/0237/" rel="noreferrer">thesis</a>, Vaughan Jones shows that, for all finite group <span class="math-container">$G$</span>, this action exists and is unique (up to outer conjugacy, see <a href="https://perswww.kuleuven.be/%7Eu0018768/artikels/bourbaki-popa.pdf" rel="noreferrer">here</a> p8), and the subfactor <span class="math-container">$R^{G} \subset R$</span> completely characterizes the group <span class="math-container">$G$</span>. See the book <a href="http://www.cambridge.org/us/academic/subjects/mathematics/abstract-analysis/introduction-subfactors" rel="noreferrer"><em>Introduction to subfactors</em></a> (1997) by Jones-Sunder.</p>
| Sebastien Palcoux | 34,538 | <blockquote>
<p>We give here an easy and purely group-theoretic sufficient condition (but not necessary) : </p>
</blockquote>
<p><strong>Definition</strong> : Let $\sim_2$ be the equivalence relation on inclusions of finite groups, defined by :<br>
$(A \subset B) \sim_2 (C \subset D)$ if $(A/A_B \subset B/A_B) \simeq (C/C_D \subset D/C_D)$ with $A_B$ the <a href="http://groupprops.subwiki.org/wiki/Normal_core" rel="nofollow noreferrer">normal core</a>. </p>
<p><strong>Remark</strong> : $\sim$ does not imply $\sim_2$ thanks to the Kodiyalam-Sunder example (see above and <a href="https://math.stackexchange.com/questions/677407/are-these-two-inclusions-of-finite-groups-equivalent">here</a>). </p>
<p><strong>Theorem</strong> : $\sim_2$ implies $\sim$.<br>
<em>Proof:</em> <a href="http://www.mscand.dk/article/view/14281" rel="nofollow noreferrer">here</a> p47 + Izumi's thm2.3 p3 <a href="http://www.math.kyoto-u.ac.jp/preprint/2002/2.ps" rel="nofollow noreferrer">here</a>, with $N= \{ 1\}$ and $\omega = 1$ (<em>there is certainly a direct proof</em>).</p>
<p><strong>Corollary</strong>: Let $G$ be a finite group, $H$ a subgroup, $\phi \in Aut(G) $ then : $(H \subset G ) \sim (\phi(H) \subset G)$. </p>
<p><strong>Warning</strong>: the dual version of the corollary means that for every $\phi \in Aut(G)$ and subgroup $H \subset G$, there exists $\Phi \in Aut(R \rtimes G)$
such that $\Phi(R \rtimes H) = R \rtimes \phi(H)$. Let $\sigma$ be a (faithful) outer action of $G$ on $R$ ($\sigma_g(x) . u_g=u_g . x$) and $\phi \neq id$; then $\Phi$ is <strong>not</strong> defined by $\Phi(x) = x$ if $x \in R$ and $\Phi(u_g) = u_{\phi(g)}$, because otherwise $\Phi(u_g . x) = \Phi(u_g).x $ and $ \Phi(\sigma_g(x) . u_g) = \sigma_g(x) . \Phi(u_g) $. So $ \sigma_g(x) . u_{\phi(g)}= u_{\phi(g)}.x$, and then $\forall g \in G$, $ \sigma_g = \sigma_{\phi(g)}$, so $\phi = id$ because $\sigma$ is faithful, a <strong>contradiction</strong>. </p>
<p><strong>Remark</strong>: We have the following useful properties (see M Izumi <a href="https://www.youtube.com/watch?v=I52MOU9F-sg&index=5&list=LLhntpxxSKIETTxsQMN8nywg" rel="nofollow noreferrer">here</a>, 23:30 and 27:50):<br>
If $(A \subset B) \sim (C \subset D)$ then $Rep(A/B_A) \simeq Rep(C/D_C)$.<br>
If moreover the inclusions are maximal then $(A \subset B) \sim_2 (C \subset D)$. </p>
<p><strong>Corollary</strong>: $\sim_2$ $\Leftrightarrow$ $\sim$ if the inclusions are maximal.<br>
<strong>Problem</strong>: Extension to all the "natural" inclusions (see the definition in the optional part of <a href="https://mathoverflow.net/questions/160577/are-the-homogeneous-single-chain-subfactors-dedekind">this post</a>).<br>
The Kodiyalam-Sunder counter-examples are not "natural" inclusions, because they are single chain but not homogeneous single chain.</p>
|
1,537,881 | <p>Find the values of $a$ and $b$ if $$ \lim_{x\to0} \dfrac{x(1+a \cos(x))-b \sin(x)}{x^3} = 1 $$
I think I should use L'Hôpital's rule, but it did not work.</p>
| parsiad | 64,601 | <p>The easiest way (in my opinion) is to plug in the power series expansion of $x(1+a\cos x)-b\sin x$ around zero. Then, the limit becomes</p>
<p>$$\lim_{x\rightarrow 0} \frac{x(a-b+1)+x^3(b-3a)/6+O(x^5)}{x^3}=1.$$</p>
<p>Now you have two equations involving $a$ and $b$ to satisfy (can you figure out what those equations are?)</p>
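<p>A sketch of the bookkeeping in SymPy (expanding the numerator, matching coefficients, and solving for $a$ and $b$):</p>
<pre><code>import sympy as sp

x, a, b = sp.symbols('x a b')
num = x*(1 + a*sp.cos(x)) - b*sp.sin(x)

expansion = sp.collect(sp.expand(sp.series(num, x, 0, 5).removeO()), x)
print(expansion)                                # x*(a - b + 1) + x**3*(-a/2 + b/6)

sol = sp.solve([a - b + 1, b/6 - a/2 - 1], [a, b])
print(sol)                                      # {a: -5/2, b: -3/2}
print(sp.limit((num/x**3).subs(sol), x, 0))     # 1
</code></pre>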
|
2,409,183 | <p>Good evening all! I'm trying to find the eigenvalues and eigenvectors of the following problem</p>
<p>$$
\begin{bmatrix}
-10 & 8\\
-18 & 14\\
\end{bmatrix}*\begin{bmatrix}
x_{1}\\
x_{2}\\
\end{bmatrix}
$$</p>
<p>I've found that $\lambda_{1,2}=2$, where $\lambda$ is a double eigenvalue of our matrix, and now I'm trying to find its eigenvectors. So we have to solve the system
$$(-10-\lambda)x_{1}+8x_{2}=0 , \quad -18x_{1}+(14-\lambda)x_{2}=0$$
So what I do here is to replace $\lambda=2$, and we have
$$-12x_{1}+8x_{2}=0 , \quad -18x_{1}+12x_{2}=0$$</p>
<p>From here I get that $3x_{1}=2x_{2}$, and I think that gives us the eigenvector $\begin{bmatrix}
2/3\\
1\\
\end{bmatrix}$</p>
<p>My book is telling me that the eigenvector for $λ=2$ is $\begin{bmatrix}
2\\
3\\
\end{bmatrix}$ but I can't find the same solution. Can someone help me?</p>
<p>EDIT: I added my own attempt at solving it.</p>
| yeahyeah | 233,771 | <p>Let me show you the universal trick in computing any integral of the form
<span class="math-container">$$
I_{k_1...k_n}= \int d \Omega \, \hat r_{k_1}...\hat r_{k_n}.
$$</span></p>
<p>Just notice that (without being too formal) <span class="math-container">$I_{k_1...k_n}$</span> can be obtained by taking derivatives <span class="math-container">$\frac{\partial}{\partial j}$</span> of
<span class="math-container">$$Z(j):=\int d \Omega \, e^{i j \cdot \hat r },$$</span>
and then setting <span class="math-container">$j=0$</span>: <span class="math-container">$$
I_{k_1...k_n} = \frac{1}{i^{n}} \frac{\partial^{n}}{\partial j_{k_1} \cdots \partial j_{k_n}} Z(j) \vert_{j=0},
$$</span>
but we can easily compute the "partition function" <span class="math-container">$Z$</span>! For three dimensions <span class="math-container">$r \in \mathbb{R}^3$</span> we have:<span class="math-container">$$
Z(j)= 4\pi \frac{\sin j }{j}= 4\pi \left(1-\frac{j\cdot j}{3!}+ \frac{(j\cdot j)^2}{5!}+ \cdots\right),$$</span>
where <span class="math-container">$j\cdot j$</span> is the euclidean scalar product.</p>
|
48,989 | <p>How to prove $\text{Rank}(AB)\leq \min(\text{Rank}(A), \text{Rank}(B))$?</p>
| xenon | 12,426 | <p>I used an approach which may not be the most concise, but it feels very intuitive to me.
The matrix $AB$ consists of linear combinations of the columns of $A$, with the entries of $B$ as the multipliers. So it looks like...
$$\boldsymbol{AB}=\begin{bmatrix}
& & & \\
a_1 & a_2 & ... & a_n\\
& & &
\end{bmatrix}
\begin{bmatrix}
& & & \\
b_1 & b_2 & ... & b_n\\
& & &
\end{bmatrix}
=
\begin{bmatrix}
& & & \\
\boldsymbol{A}b_1 & \boldsymbol{A}b_2 & ... & \boldsymbol{A}b_n\\
& & &
\end{bmatrix}$$
If the columns of $B$ satisfy a linear dependence, then the columns $\boldsymbol{A}b_1, \boldsymbol{A}b_2, ..., \boldsymbol{A}b_n$ of $AB$ satisfy the very same dependence, so $AB$ cannot have more independent columns than $B$ does. Therefore, $rank(AB) \leq rank(B)$.</p>
<p>On the other hand, every column $\boldsymbol{A}b_i$ of $AB$ lies in the column space of $A$, so no matter what $B$ is, $rank(AB)\leq rank(A)$. The rank of $AB$ is immediately capped by the rank of $A$, unless the rank of $B$ is even smaller.</p>
<p>Putting these two ideas together, the rank of $AB$ is capped by the rank of $A$ or of $B$, whichever is smaller. Therefore, $rank(AB) \leq min(rank(A), rank(B))$.</p>
<p>Hope this helps you!</p>
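<p>A quick random sanity check of the inequality (a sketch with assumed shapes and rank-deficient factors):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 6))   # rank(A) <= 3
B = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 7))   # rank(B) <= 2
r = np.linalg.matrix_rank
print(r(A @ B), r(A), r(B))   # rank(AB) never exceeds min(rank(A), rank(B))
</code></pre>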
|
772,391 | <p>The formula for the Chi-Square test statistic is the following:</p>
<p>$\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i}$</p>
<p>where O - is observed data, and E - is expected.</p>
<p>I'm curious why it depends on the absolute values: if we change the units we're measuring in, we'll get a different statistic. Suppose we're performing a test on apple weights. One of the samples weighs 165 grams, and we expect it to be 182 grams; then the corresponding term of the formula will be:</p>
<p>$\frac{(165 - 182)^2}{182} \sim 1.58791$</p>
<p><a href="http://en.wikipedia.org/wiki/Pearson's_chi-squared_test" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Pearson's_chi-squared_test</a></p>
<p>Now suppose we're living in a country where precision is paramount. We use milligrams for everything, and we get the same results expressed in different units: 165000 milligrams and 182000 milligrams, respectively. The statistic:</p>
<p>$\frac{(165000 - 182000)^2}{182000} \sim 1587.91$</p>
<p>So our conclusion will be different based on the units we used. Why? What am I missing, and why are the values not normalized in the chi-squared test?</p>
| Beojan | 531,438 | <p>The definition of $\chi^2$ you're using is for comparing <em>frequencies</em>, not measurements with units. In the latter case, you divide by the square of the error, not the value itself.</p>
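<p>A small numerical illustration of that distinction (a sketch; the count data and the measurement uncertainty $\sigma$ are assumed for the example):</p>
<pre><code>import numpy as np

# Pearson chi-square compares observed and expected *counts* (dimensionless)
observed = np.array([18, 22, 20, 40])
expected = np.array([25, 25, 25, 25])
print(np.sum((observed - expected)**2 / expected))        # 12.32

# For a measurement with units, divide by the variance of the measurement error;
# the result no longer depends on the unit, because sigma rescales with the data.
o_g, e_g, sigma_g = 165.0, 182.0, 10.0                    # grams (sigma assumed)
o_mg, e_mg, sigma_mg = 165000.0, 182000.0, 10000.0        # the same data in milligrams
print((o_g - e_g)**2 / sigma_g**2, (o_mg - e_mg)**2 / sigma_mg**2)   # 2.89 2.89
</code></pre>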
|
2,541,044 | <p>I read this argument on the internet about how the solution to the sleeping beauty problem is $\frac{1}{3}$:</p>
<p>All these events are equally likely in the experiment : </p>
<ol>
<li><p>Coin landed Heads, it's Monday and Beauty is awake</p></li>
<li><p>Coin landed Heads, it's Tuesday and Beauty is asleep</p></li>
<li><p>Coin landed Tails, it's Monday and Beauty is awake</p></li>
<li><p>Coin landed Tails, it's Tuesday and Beauty is awake</p></li>
</ol>
<p>All these are mutually exclusive and exhaustive and also equally likely. So, all four of these events have a probability $\frac{1}{4}$. But when Beauty is awakened, she knows that she isn't asleep. So, the second possibility can be ruled out. The remaining three are still equally likely, each with probability $\frac{1}{3}$. Hence the probability that the coin landed Heads is $\frac{1}{3}$.</p>
<p>But I remember something from the Monty Hall problem and this situation looks somewhat similar. The solution assumes that when possibility no. 2 is ruled out, the remaining three remain equally likely. This doesn't happen in the Monty Hall problem.</p>
<p>For example, there are 100 doors. A prize is behind one of them. Clearly, all the doors are equally likely to have the prize. We pick one random door. The probability that it has the prize is $\frac{1}{100}$. The probability that the prize is in one of the remaining doors is $\frac{99}{100}$. When doors from the set of remaining 99 doors are ruled out one by one, all the doors no longer remain equally likely. Our door still has the probability $\frac{1}{100}$ while the group of remaining doors still hold a probability of $\frac{99}{100}$.</p>
<p>Could this be true for the Sleeping Beauty Problem too? I mean the possibilities 1. and 2. that I've listed collectively hold a probability of $\frac{1}{2}$ and even when possibility no.2 is ruled out, it's probability gets transferred to possibility no.1, so that it still has a probability of $\frac{1}{2}$.</p>
<p><strong>EDIT:</strong> Suppose Beauty is the contestant on the Monty Hall Show. She is presented four doors in front of her, A, B, C, D. Clearly, all the door currently have a winning probability of $\frac{1}{4}$. But she knows that before the prize was put behind one of the doors, a coin was tossed. If it landed heads, the prize was placed in one of the doors A or B and in case it was tails, the prize was put in C or D. Beauty knows this. Now, the host rules out door B as a possibility (which is equivalent to Beauty ruling out possibility 2). Do the doors A, C and D remain equally likely to have the prize or is it safer to choose A?</p>
<p>I think it's safer to choose A because either you can assume the coin landed tails and further burden yourself in choosing between C and D or you can assume the coin landed heads and then choose A, the only remaining Heads door.</p>
| Qiaochu Yuan | 232 | <p>What's happening in the sleeping beauty problem is much worse than what's happening in the Monty Hall problem. I claim that the solution to the sleeping beauty problem is that in a world where things like the sleeping beauty problem happen to you, there is no such thing as probability. </p>
<p>One way to cash out what it means that you assign a thing happening some probability $p$ is in terms of bets: if someone were willing to offer you slightly more than $100(1 - p)$ dollars if the thing happens, in exchange for you paying them slightly less than $100p$ dollars if the thing doesn't happen, you would take that bet, because the expected value would be positive, and $p$ is the largest number for which this is true. </p>
<p>But in the sleeping beauty problem, because sleeping beauty will be 1) possibly awakened twice and 2) drugged so she doesn't remember the first awakening if the second one happens, there are effectively sometimes two copies of her when she's awakened twice. So which one of them do you offer the bet to?</p>
<ol>
<li>If you only offer bets to sleeping beauty the first time she awakens, sleeping beauty should accept offers of slightly more than $50$ dollars if the coin landed heads in exchange for paying slightly less than $50$ dollars if the coin landed tails; in other words, she should bet as if she believes the coin lands heads with probability $\frac{1}{2}$. </li>
<li>If you offer bets to sleeping beauty every time she awakens, sleeping beauty should accept offers of slightly more than $66$ dollars if the coin landed heads in exchange for paying slightly less than $33$ dollars if the coin landed tails (because in the tails world she has to pay twice); in other words, she should bet as if she believes the coin lands heads with probability $\frac{1}{3}$.</li>
</ol>
<p>My opinion is that once you notice this fact there is no longer any remaining question of what the probability "actually" is; there are only bets, and depending on the structure of the bets sleeping beauty will accept or reject accordingly. In summary:</p>
<blockquote>
<p>In a world where you are effectively sometimes copied, it becomes ambiguous what it means to offer you a bet, because I have to decide which of your copies to offer the bet to. </p>
</blockquote>
<p>(Your copies also have to decide how much they care about each other; let's assume for simplicity that they're all perfectly altruistic wrt each other, although in general this is a further complication.) </p>
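<p>A Monte Carlo sketch of the two betting schemes, assuming the setup in the question (heads gives one awakening, tails gives two) and the per-awakening stakes described in the list above:</p>
<pre><code>import random

random.seed(0)
trials = 200_000
net1 = net2 = 0.0                        # scheme 1: first awakening only; scheme 2: every awakening
for _ in range(trials):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2       # setup from the question
    net1 += 50.0 if heads else -50.0                   # one bet at even odds
    net2 += (66.0 if heads else -33.0) * awakenings    # one bet per awakening at 2:1 odds
print(net1 / trials, net2 / trials)      # both averages are close to 0: each scheme is fair
</code></pre>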
|
735,470 | <p>I am having trouble with integrating the following:</p>
<p>$$\int \frac{\cos2x}{1-\cos4x}\mathrm{d}x$$</p>
<p>I have simplified it using the double angle: </p>
<p>$$\int \frac{1-2\sin^2x}{1-\cos4x}\mathrm{d}x$$</p>
<p>But I am stuck, as I am not sure how to continue from here. Should I use the double-angle formula to simplify the denominator too?</p>
<p>Then there is this other question, which I am not sure on how to solve. </p>
<p>$$\int {(2^x+3^x)}^{2}\mathrm{d}x$$</p>
<p>Should I just expand and multiply the terms, then use "$\int a^x\mathrm{d}x = \frac{1}{\ln a} a^x$" to integrate?</p>
<p>I am confused about whether I am taking the correct approach to solving this question.</p>
<p>All help and suggestions are welcomed. Thank you very much for helping me once again, guys.</p>
| Alijah Ahmed | 124,032 | <p>Use the double angle on the $\cos 4x$ term, so you obtain $\cos 4x=1-2\sin^22x$, rather than using the double angle identity on $\cos 2x$. </p>
<p>So your integral will simplify to
$$\int\frac{\cos 2x}{1-\cos 4x}dx=\int\frac{\cos 2x}{1-(1-2\sin^22x)}dx=\frac{1}{2}\int\frac{\cos 2x}{\sin^22x}dx$$</p>
<p>Then use the substitution $u=\sin 2x$, so that $du=(2\cos 2x)dx$. I'll let you take it from here.</p>
<p>For your second question, as you suggest, you can expand the term as follows
$$\int(2^x+3^x)^2dx=\int(2^{2x}+2(2^x3^x)+3^{2x})dx=\int(2^{2x}+(2)6^x+3^{2x})dx$$</p>
<p>Simplifying further we have $$\int(2^{2x}+(2)6^x+3^{2x})dx=\int4^xdx+2\int6^xdx+\int9^xdx$$
and like you said we can use "$\int a^xdx=\frac{1}{\ln a}a^x$"</p>
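<p>A quick SymPy spot-check of both antiderivatives (a sketch; constants of integration are dropped and the check is numerical at an arbitrary point):</p>
<pre><code>import sympy as sp

x = sp.symbols('x')
pairs = [
    (-1/(4*sp.sin(2*x)), sp.cos(2*x)/(1 - sp.cos(4*x))),                      # from u = sin 2x
    (4**x/sp.log(4) + 2*6**x/sp.log(6) + 9**x/sp.log(9), (2**x + 3**x)**2),
]
for F, integrand in pairs:
    print(sp.N((sp.diff(F, x) - integrand).subs(x, sp.Rational(7, 10))))      # ~0 for both
</code></pre>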
|
3,444,556 | <blockquote>
<p>Let <span class="math-container">$f:[-1,1] \to \mathbb{R}$</span> be continuous on <span class="math-container">$[-1,1]$</span>.</p>
<p>Assume <span class="math-container">$\displaystyle \int_{-1}^{1}f(x)x^ndx = 0$</span> for <span class="math-container">$n = 0,1,2,...$</span> </p>
<p>Then show <span class="math-container">$f(x)=0, \ \forall x \in [-1,1]$</span></p>
</blockquote>
<p>I would like to use Weierstrass Approximation Theorem (WAT) to prove this. Here is my incomplete attempt:</p>
<p><span class="math-container">$f$</span> is continuous on <span class="math-container">$[-1,1]$</span> so by WAT <span class="math-container">$\exists$</span> a sequence polynomials <span class="math-container">$p_1, p_2, p_3,...,p_n,...$</span> s.t. </p>
<p><span class="math-container">$\displaystyle \lim_{n \to \infty} |p_n - f| = 0$</span>, so
<span class="math-container">$\displaystyle \lim_{n \to \infty} \int_{-1}^{1}|f-p_n| = 0$</span>.</p>
<p>Now, I would like to show <span class="math-container">$\int_{-1}^{1} f^2 = 0$</span> and from there I would like to conclude that <span class="math-container">$f=0$</span>.
So,
<span class="math-container">\begin{align}
\int_{-1}^{1} (f(x))^2 dx & = \int_{-1}^{1} f(x)f(x) \\
& = \int_{-1}^{1} \left[ f(x)f(x)-f(x)p_n(x)+f(x)p_n(x) \right] dx \\
& = \int_{-1}^{1} f(x) (f(x) - p_n(x)) dx + \int_{-1}^{1}f(x) p_n(x)dx
\end{align}</span></p>
<p><span class="math-container">$\int_{-1}^{1} f(x) p_n(x) dx = 0$</span> by the hypothesis since <span class="math-container">$\int_{-1}^{1} f(x) x^n dx = 0$</span> for <strong>all</strong> <span class="math-container">$n \in \mathbb{N}$</span>.</p>
<p>So that leaves me to show,</p>
<p><span class="math-container">$\int_{-1}^{1}f(x)(f(x)-p_n(x)) = 0$</span>. I know <span class="math-container">$\lim_{n \to \infty}|f-p_n|=0$</span>, but I am not clear how I can use this to show the integral is <span class="math-container">$0$</span>. First of all, we are working with a fixed <span class="math-container">$n$</span>, and second of all the limit of absolute difference is <span class="math-container">$0$</span>.</p>
<p>Should I start the proof with limits and show the integral is arbitrarily small?</p>
<p>Thanks.</p>
| ling | 670,949 | <p>Since
<span class="math-container">$$\lim_{n\to\infty}\sup_{x\in[-1,1]}|f(x)-p_n(x)|=0,$$</span>
we know that for any <span class="math-container">$\epsilon>0$</span> there is <span class="math-container">$N\in\mathbb{N}$</span> such that for all <span class="math-container">$n> N$</span>,
<span class="math-container">$$\sup_{x\in[-1,1]}|f(x)-p_n(x)|<\frac{\epsilon}{2 M},$$</span>
where <span class="math-container">$M:=\max_{[-1,1]} |f(x)|$</span>.</p>
<p>Hence
<span class="math-container">$$\left|\int_{-1}^1f(x) \left(f(x)-p_n(x)\right)\,dx \right|<\frac{\epsilon}{2 M}\int_{-1}^1 |f(x)|\,dx \leq\epsilon,\quad \forall n>N.$$</span></p>
|
339,880 | <p>I'm interested in examples where the sum of a set with itself is a substantially bigger set with nice structure. Here are two examples:</p>
<ul>
<li><strong>Cantor set</strong>: Let <span class="math-container">$C$</span> denote the ternary Cantor set on the interval <span class="math-container">$[0,1]$</span>. Then <span class="math-container">$C+C = [0,2]$</span>. There are several nice proofs of this result. Note that the set <span class="math-container">$C$</span> has measure zero, so is "thin" compared to the interval <span class="math-container">$[0,2]$</span> whose measure is positive. </li>
<li><strong>Goldbach Conjecture</strong>: Let <span class="math-container">$P$</span> denote the set of odd primes and <span class="math-container">$E_6$</span> the set of even integers greater than or equal to 6. Then the conjecture is equivalent to the statement <span class="math-container">$P + P = E_6$</span>. Note that the primes have asymptotic density zero on the integers, so the set <span class="math-container">$P$</span> is "thin" relative to the positive integers.</li>
</ul>
<p>Are there other nice examples?</p>
| Nik Weaver | 23,141 | <p>I proved this fact not too long ago: if <span class="math-container">$G$</span> is a finite group of cardinality <span class="math-container">$n$</span>, then there exists a subset <span class="math-container">$S$</span> of <span class="math-container">$G$</span> of cardinality no more than <span class="math-container">$\lceil 2\sqrt{n\ln n}\rceil$</span> such that <span class="math-container">$SS^{-1} = G$</span>. Possibly this is already known ...?</p>
<p>Edit: It IS known, in fact Seva points out in <a href="https://mathoverflow.net/questions/270864/decomposing-a-finite-group-as-a-product-of-subsets/282795#282795">this answer</a> that it has been shown that there exists a subset of size <span class="math-container">$\lceil \frac{4}{\sqrt{3}} \sqrt{n}\rceil$</span> satisfying <span class="math-container">$S^2 = G$</span>. (I still think it's interesting that a probabilistic argument gets us within <span class="math-container">$\sqrt{\ln n}$</span> of this. The stronger result relies on the classification of finite simple groups ...)</p>
|
339,880 | <p>I'm interested in examples where the sum of a set with itself is a substantially bigger set with nice structure. Here are two examples:</p>
<ul>
<li><strong>Cantor set</strong>: Let <span class="math-container">$C$</span> denote the ternary Cantor set on the interval <span class="math-container">$[0,1]$</span>. Then <span class="math-container">$C+C = [0,2]$</span>. There are several nice proofs of this result. Note that the set <span class="math-container">$C$</span> has measure zero, so is "thin" compared to the interval <span class="math-container">$[0,2]$</span> whose measure is positive. </li>
<li><strong>Goldbach Conjecture</strong>: Let <span class="math-container">$P$</span> denote the set of odd primes and <span class="math-container">$E_6$</span> the set of even integers greater than or equal to 6. Then the conjecture is equivalent to the statement <span class="math-container">$P + P = E_6$</span>. Note that the primes have asymptotic density zero on the integers, so the set <span class="math-container">$P$</span> is "thin" relative to the positive integers.</li>
</ul>
<p>Are there other nice examples?</p>
| Seva | 9,924 | <p>The set <span class="math-container">$Q$</span> of all squares in <span class="math-container">$\mathbb F_p$</span> is definitely thick and very nice. Can it be represented as a difference set <span class="math-container">$A-A$</span>? An open conjecture due to Sárközy is that this is impossible. (It has been <a href="https://arxiv.org/abs/1905.09134" rel="noreferrer">recently shown</a> that if <span class="math-container">$A-A=Q$</span>, then every non-zero element of <span class="math-container">$Q$</span> has exactly one representation as <span class="math-container">$a-b$</span> with <span class="math-container">$a,b\in A$</span>, so that <span class="math-container">$A$</span> must be thin.) </p>
|
3,370,750 | <p>Show that the moment of inertia of an elliptic area of mass $M$ and semi-axes $a$ and $b$ about a semi-diameter of length $r$ is <span class="math-container">$$\frac{Ma^2b^2}{4r^2}.$$</span>
My attempt:</p>
<ol>
<li>I know that the moment of inertia about the $x$-axis is ${Mb^2 \over 4}$.</li>
<li>The moment of inertia about the $y$-axis is ${Ma^2 \over 4}$.
How do I proceed further?</li>
</ol>
| Connor Harris | 102,456 | <p>Partial answer, based on the fact that the image of any ellipse under affine transformation is another ellipse.</p>
<p>Let the equation of the ellipse be <span class="math-container">$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$</span> or <span class="math-container">$$b^2 x^2 + a^2 y^2 = 1$$</span> and let the desired semi-diameter intersect the ellipse at <span class="math-container">$(h, k)$</span> where <span class="math-container">$h^2 + k^2 = r^2$</span>.</p>
<p>I claim that there is an affine transformation consisting solely of a scaling along the semi-diameter that can transform this ellipse into a circle. Specifically, write <span class="math-container">$T = PAP^{-1}$</span>, where <span class="math-container">$$P = \begin{bmatrix}h & -k \\ k & h \end{bmatrix}$$</span> and <span class="math-container">$$P^{-1} = \dfrac{1}{r^2} \begin{bmatrix} h & k \\ -k & h \end{bmatrix}$$</span> transform coordinates between the standard basis and the basis vectors <span class="math-container">$(h, k)$</span> and <span class="math-container">$(k, -h)$</span>, and <span class="math-container">$$A = \begin{bmatrix} \lambda & 0 \\ 0 & 1 \end{bmatrix}.$$</span></p>
<p>As <span class="math-container">$\det T = \lambda$</span> and <span class="math-container">$T$</span> preserves distances from the axis of rotation, the rotational inertia of the resulting shape is <span class="math-container">$\lambda$</span> times that of the original shape. The matrix expression for <span class="math-container">$T$</span> is <span class="math-container">$$T = \frac{1}{r^2} \begin{bmatrix} \lambda h^2 + k^2 & (\lambda - 1) h k \\ (\lambda - 1) hk & h^2 + \lambda k^2 \end{bmatrix}$$</span> and the squared radius <span class="math-container">$R(x, y) := ||T(x, y)||^2$</span> of the image of some point <span class="math-container">$(x, y)$</span> is thus (omitting a bunch of tedious steps in the algebra) <span class="math-container">\begin{align*}
R &= \frac{1}{r^4} \left[ (\lambda h^2 + k^2) x + (\lambda - 1) hk\, y \right]^2 + \frac{1}{r^4} \left[(\lambda - 1) hk\, x + (h^2 + \lambda k^2) y \right]^2 \\
&= \frac{1}{r^2} \left( \lambda^2 (hx + ky)^2 + (kx - hy)^2 \right)
\end{align*}</span>
and finding the value of <span class="math-container">$\lambda$</span> for which this expression is constant (and the radius of the corresponding circle) is just a straightforward tedious exercise in algebra or differential calculus from which the final answer would follow readily.</p>
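<p>A symbolic spot-check of the two displays above (a SymPy sketch using the matrices <span class="math-container">$P$</span>, <span class="math-container">$A$</span>, <span class="math-container">$P^{-1}$</span> exactly as defined):</p>
<pre><code>import sympy as sp

h, k, lam, x, y = sp.symbols('h k lambda x y', real=True)
P = sp.Matrix([[h, -k], [k, h]])
A = sp.Matrix([[lam, 0], [0, 1]])
T = sp.simplify(P * A * P.inv())
print(T)   # entries (lam*h**2 + k**2), (lam - 1)*h*k, ..., all divided by r^2 = h**2 + k**2

z = sp.Matrix([x, y])
R = (T * z).dot(T * z)
target = (lam**2*(h*x + k*y)**2 + (k*x - h*y)**2) / (h**2 + k**2)
print(sp.simplify(R - target))   # 0
</code></pre>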
|
45,441 | <p>There is a method of constructing representations of classical Lie algebras via Gelfand-Tsetlin bases. It has also been applied to symmetric groups by Vershik and Okounkov. Does anybody know of any application of the method to complex representations of $GL_n(\mathbb F_q)$? Or, at least, any results in this direction, like what the centralizer of $GL_{n-1}$ in $\mathbb C[GL_n]$ is?</p>
| Jim Humphreys | 4,231 | <p>My earlier comment was not at all well-focused. After more thought, I'm inclined to be pessimistic about using a Gelfand-Tsetlin approach here (even if it has some success for symmetric groups). Though of course it would be interesting to be proven wrong. </p>
<p>As Matt Davis reminds me, my offhand reference to Schur-Weyl duality is not helpful here since the work of Benson, Doty, and others deals mainly with the representations of various groups over fields of prime characteristic. (See especially Doty's papers on arXiv.) Irreducible representations of finite general linear groups over $\mathbb{C}$ are very difficult to construct directly and have very little in common with the finite dimensional representations of general linear groups or their Lie algebras in characteristic 0. Instead, the theory imitates more closely the infinite dimensional Harish-Chandra approach to Lie group representations in which parabolic induction is exploited together with a study of "discrete series". </p>
<p>J.A. Green's 1955 TAMS paper followed somewhat this pattern in developing combinatorially the <em>character</em> theory of finite general linear groups. But there is little insight here into constructing the elusive discrete series characters; instead orthogonality relations and the like are exploited. The best approach to an actual construction of discrete series representations was given in Lusztig's 1974 <em>Annals of Mathematics Studies</em> No. 81. Soon after that, Deligne and Lusztig pioneered a more sophisticated method for constructing generalized characters of arbitrary finite groups of Lie type. This has become the dominant influence in the subject, since Lusztig's earlier techniques don't go far enough beyond the finite general linear case.</p>
|
3,954,410 | <p>I am solving exercises from Loring Tu.</p>
<p>Show that if <span class="math-container">$L : V \rightarrow V$</span> is a linear operator on a vector space V of dimension n, then the pullback <span class="math-container">$L^{\wedge} : A_n(V) \rightarrow A_n(V)$</span> is multiplication by determinant of L.</p>
<p>Attempt:</p>
<p>This is a linear operator from a one-dimensional vector space to itself, so it must be multiplication by a constant. I don't understand why that constant is the determinant.</p>
| Robert Shore | 640,080 | <p>Note that <span class="math-container">$7 \vert 7n^3$</span> so the problem reduces to proving <span class="math-container">$n^7 \equiv n \pmod 7$</span>. This is obviously true for <span class="math-container">$n=1$</span>.</p>
<p>Assume <span class="math-container">$k^7 \equiv k \pmod 7$</span>. Then <span class="math-container">$(k+1)^7 = \sum_{i=0}^7 \binom 7i k^i \equiv k^7+1 \pmod 7$</span> because <span class="math-container">$7 \vert \binom 7i$</span> for <span class="math-container">$ 1 \leq i \leq 6$</span>. By our inductive hypothesis, <span class="math-container">$k^7 \equiv k \pmod 7$</span>, so <span class="math-container">$(k+1)^7 \equiv k^7+1 \equiv k+1 \pmod 7$</span> and we are done.</p>
|
2,631,284 | <p>I'm trying to find all $n \in \mathbb{N}$ such that</p>
<p>$(n+2) \mid (n^2+5)$ </p>
<p>As the title says, I've tried numbers up to $20$ and found that $1$ and $7$ are solutions. I suspect that those are the only two solutions; however, I have no idea how to show that.</p>
<p>I've done nothing but basic transformations:</p>
<p>$(n+2) \mid (n^2+5)$ </p>
<p>$\iff n^2+5 = k(n+2)$</p>
<p>$\iff n^2+5 \mod(n+2) = 0$</p>
<p>$\iff (n^2 \mod(n+2) + 5 \mod(n+2)) \mod(n+2) = 0$</p>
<p>Now I suspect the next step is to find all possible solutions for</p>
<p>$n^2 \mod(n+2)$, which I have no idea how to do.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>$$n+2\left|n^2+5\right. \implies k(n+2) = n^2+5 \implies k = \frac{n^2+5}{n+2}$$
Use that $$k = \frac{n^2+5}{n+2}=n-2+\frac{9}{n+2}$$</p>
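<p>A brute-force confirmation of where the hint leads (a sketch; the search bound is arbitrary, since $n+2\mid 9$ already pins the answer down):</p>
<pre><code>print([n for n in range(1, 100_000) if (n*n + 5) % (n + 2) == 0])   # [1, 7]
</code></pre>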
|
3,160,563 | <p>My classmates and I were calculating the first homology group of the klein bottle, and we saw that <span class="math-container">$Ker \, \delta_1 \cong \mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}$</span> and <span class="math-container">$Im \, \delta_2 \cong \mathbb{Z} \oplus \mathbb{Z}$</span>.</p>
<p>However, <span class="math-container">$H_1(K) \ncong \mathbb{Z}$</span>. We got the correct answer, <span class="math-container">$H_1(K) \cong \mathbb{Z} \oplus \mathbb{Z_2}$</span> once we considered the group presentation where our <span class="math-container">$Im \, \delta_2$</span> were used as relations in the presentation.</p>
<p>But this led me to wonder: why is the homology group not determined by the isomorphism classes of <span class="math-container">$Ker \, \delta_1$</span> and <span class="math-container">$Im \, \delta_2$</span> alone?</p>
| Joshua Mundinger | 106,317 | <p>Suppose that I have a chain complex <span class="math-container">$(C,\delta)$</span>, i.e. a sequence of groups <span class="math-container">$\{C_n\}_{n\in \mathbb{Z}}$</span> and differentials <span class="math-container">$\delta_n: C_n \to C_{n-1}$</span> such that <span class="math-container">$\delta_{n-1}\delta_n = 0$</span> for all <span class="math-container">$n \in \mathbb{Z}$</span>. Then indeed chain complexes with the same underlying groups may have different homology. For example, consider:</p>
<p><span class="math-container">$$ \cdots \to 0 \to \mathbb{Z} \to_\varphi \mathbb{Z} \to 0 \to \cdots$$</span>
If <span class="math-container">$\varphi$</span> is multiplication by <span class="math-container">$2$</span>, then there is a homology group isomorphic to <span class="math-container">$\mathbb{Z}/2\mathbb{Z}$</span>; if <span class="math-container">$\varphi$</span> is the identity, there is no nonzero homology. This phenomenon is not just limited to torsion; the ranks of homology groups of complexes may also differ.</p>
<p>The issue is that:</p>
<blockquote>
<p>An isomorphism of underlying groups is not the right notion of isomorphism for chain complexes.</p>
</blockquote>
<p>Homology is defined in terms of the differentials, as well as the underlying groups. A reasonable notion of isomorphism for a chain complex should somehow respect the differentials, so that homology is indeed an invariant of isomorphism of chain complexes. Here is the definition:</p>
<p><strong>Definition</strong>: An isomorphism of chain complexes <span class="math-container">$(C,\delta)$</span> and <span class="math-container">$(C', \delta')$</span> is a sequence of maps <span class="math-container">$f_n: C_n \to C_{n}'$</span> which are isomorphisms and such that <span class="math-container">$\delta'_n f_n =f_{n-1} \delta_n$</span>, that is, the following diagram commutes:
<span class="math-container">$$\require{AMScd}\begin{CD}
C_n @>f_n>> C'_n \\
@V\delta_nVV @V\delta'_nVV \\
C_{n-1} @>f_{n-1}>> C'_{n-1}\\
\end{CD}$$</span>
It is a nice exercise to show that if two chain complexes are isomorphic in the above sense, then they have isomorphic homology groups. This may also be generalized to the notion of a <em>homomorphism</em> of chain complexes; one of the beautiful elements of algebraic topology is that (appropriate) maps on spaces induce maps on chain complexes. </p>
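<p>A sketch of that exercise, for completeness: the relation $\delta'_n f_n = f_{n-1}\delta_n$ gives $f_n(\ker \delta_n) \subseteq \ker \delta'_n$, and the square one degree up gives $f_n(\operatorname{im} \delta_{n+1}) \subseteq \operatorname{im} \delta'_{n+1}$; applying the same argument to the inverse maps turns both inclusions into equalities, so each $f_n$ descends to an isomorphism $\ker\delta_n/\operatorname{im}\delta_{n+1} \to \ker\delta'_n/\operatorname{im}\delta'_{n+1}$ on homology.</p>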
|
3,160,563 | <p>My classmates and I were calculating the first homology group of the klein bottle, and we saw that <span class="math-container">$Ker \, \delta_1 \cong \mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}$</span> and <span class="math-container">$Im \, \delta_2 \cong \mathbb{Z} \oplus \mathbb{Z}$</span>.</p>
<p>However, <span class="math-container">$H_1(K) \ncong \mathbb{Z}$</span>. We got the correct answer, <span class="math-container">$H_1(K) \cong \mathbb{Z} \oplus \mathbb{Z_2}$</span> once we considered the group presentation where our <span class="math-container">$Im \, \delta_2$</span> were used as relations in the presentation.</p>
<p>But this led me to wonder: why is the homology group not determined by the isomorphism classes of <span class="math-container">$Ker \, \delta_1$</span> and <span class="math-container">$Im \, \delta_2$</span> alone?</p>
| yoyostein | 28,012 | <p>I think your question may be related to this more general question:
<a href="https://math.stackexchange.com/questions/40881/isomorphic-quotients-by-isomorphic-normal-subgroups">Isomorphic quotients by isomorphic normal subgroups</a></p>
<p>In general, even if <span class="math-container">$H\cong K$</span>, it is not guaranteed that the quotient groups <span class="math-container">$G/H$</span> and <span class="math-container">$G/K$</span> are isomorphic.</p>
<p>Since homology is defined to be the quotient group <span class="math-container">$\ker\partial/\text{Im}\partial$</span>, by the above reasoning we can't determine the homology from the isomorphism classes of the kernel and image alone.</p>
|
4,242,116 | <p>I found this question online</p>
<blockquote>
<p>Find the limit:<span class="math-container">$$\lim_{n \rightarrow \infty} n^{3/2} \int _0^1 \frac{x^2}{(x^2+1)^n}dx $$</span></p>
</blockquote>
<p>I've been told that I need to use the gamma function by converting <span class="math-container">$n^{3/2}=n\sqrt{n}$</span> and doing <span class="math-container">$u$</span> sub <span class="math-container">$u=x\sqrt{n}\rightarrow du=\sqrt{n} dx$</span></p>
<p><span class="math-container">$$\lim_{n \rightarrow \infty} n^{3/2} \int _0^1 \frac{x^2}{(x^2+1)^n}dx =
\lim_{n \rightarrow \infty} n^{3/2} \int _0^{\sqrt{n}} \frac{(u/\sqrt{n})^2}{\left(\frac{u^2}{n}+1\right)^n}\,\frac{du}{\sqrt{n}} =
\lim_{n \rightarrow \infty} \int _0^{\sqrt{n}} \frac{u^2}{\left(\frac{u^2}{n}+1\right)^n}\,du
$$</span></p>
<p>How does one proceed from here to get the gamma function?</p>
| ReinhardtΩ | 884,092 | <p><span class="math-container">$$ \int_0^{n^{1/2}}\frac{u^2}{(1+n^{-1}u^2)^{n}}\,\mathrm{d}u \overset{n\to\infty}{⟶} \int_0^\infty u^2e^{-u^2}\mathrm{d}u$$</span>
and
<span class="math-container">$$\int_0^\infty u^2e^{-u^2}\mathrm{d}u\overset{t=u^2}{=}\frac{1}{2}\int_0^\infty t^{1/2} e^{-t}\mathrm{d}t= \frac{\Gamma(3/2)}{2} = \frac{\Gamma(1/2)}{4} = \frac{\sqrt{\pi}}{4}$$</span></p>
|
4,242,116 | <p>I found this question online</p>
<blockquote>
<p>Find the limit:<span class="math-container">$$\lim_{n \rightarrow \infty} n^{3/2} \int _0^1 \frac{x^2}{(x^2+1)^n}dx $$</span></p>
</blockquote>
<p>I've been told that I need to use the gamma function by converting <span class="math-container">$n^{3/2}=n\sqrt{n}$</span> and doing <span class="math-container">$u$</span> sub <span class="math-container">$u=x\sqrt{n}\rightarrow du=\sqrt{n} dx$</span></p>
<p><span class="math-container">$$\lim_{n \rightarrow \infty} n^{3/2} \int _0^1 \frac{x^2}{(x^2+1)^n}dx =
\lim_{n \rightarrow \infty} n^{3/2} \int _0^{\sqrt{n}} \frac{(u/\sqrt{n})^2}{\left(\frac{u^2}{n}+1\right)^n}\,\frac{du}{\sqrt{n}} =
\lim_{n \rightarrow \infty} \int _0^{\sqrt{n}} \frac{u^2}{\left(\frac{u^2}{n}+1\right)^n}\,du
$$</span></p>
<p>How does one proceed from here to get the gamma function?</p>
| Mark Viola | 218,419 | <p>Using the inequalities <span class="math-container">$x^2-\frac12 x^4\le \log(1+x^2)\le x^2$</span>, we find that</p>
<p><span class="math-container">$$\int_0^1 x^2e^{-nx^2}\,dx\le \int_0^1 \frac{x^2}{(x^2+1)^n}\,dx\le \int_0^1 x^2e^{-nx^2}e^{nx^4/2}\,dx\tag1$$</span></p>
<p>Then, enforcing the substitution <span class="math-container">$x\mapsto x/\sqrt n$</span> in <span class="math-container">$(1)$</span> reveals</p>
<p><span class="math-container">$$\frac1{n^{3/2}}\int_0^{\sqrt n} x^2e^{-x^2}\,dx\le \int_0^1 \frac{x^2}{(x^2+1)^n}\,dx\le \frac1{n^{3/2}}\int_0^{\sqrt n} x^2e^{-x^2}e^{x^4/(2n)}\,dx\tag2$$</span></p>
<p>Finally, multiplying <span class="math-container">$(2)$</span> by <span class="math-container">$n^{3/2}$</span>, letting <span class="math-container">$n\to \infty$</span>, and applying the squeeze theorem yields the coveted limit</p>
<p><span class="math-container">$$\lim_{n\to\infty}n^{3/2} \int_0^1 \frac{x^2}{(x^2+1)^n}\,dx=\frac{\sqrt \pi}{4}$$</span></p>
|
2,247,798 | <p><strong>Question:</strong> If $\alpha$ is an angle in a triangle and $\tan{\alpha}=-2$, then one of the following is true:</p>
<p>a) $0<\alpha < \frac{\pi}{2}$</p>
<p>b) $\frac{\pi}{2}<\alpha < \pi$</p>
<p>c) Can't be decided.</p>
<p>d) There exist no such angle $\alpha$.</p>
<p>My reasoning was that there exists no such angle, because of the following: looking at a right triangle with an angle $\alpha$ and one of the sides equal to $1$, $\alpha$ should be positive and between zero and 90 degrees (which is wrong).</p>
<p><a href="https://i.stack.imgur.com/xzd7A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xzd7A.png" alt="enter image description here"></a></p>
<p>Since $1\cdot \tan{\alpha} = x$, I don't see how a physical side of a triangle can be negative.</p>
| bjcolby15 | 122,251 | <ul>
<li>Since $\tan \alpha$ is given a definite value (in this case, $-2$), this rules out the options that no such angle $\alpha$ exists (d) and that we cannot decide (c).</li>
<li>If $\tan \alpha$ is negative, $\alpha$ cannot be in the first or third quadrant (there $\tan \alpha$ is positive); $\tan \alpha$ is negative in the second and fourth quadrants, i.e. for $\frac {\pi}{2} < \alpha < \pi$ and $\frac {3 \pi}{2} < \alpha < 2 \pi$. Since an angle of a triangle satisfies $0 < \alpha < \pi$, only the second quadrant is possible.</li>
</ul>
<p>Thus <em>b.</em> is correct.</p>
|
3,893,908 | <p>I want to compute <span class="math-container">$$\int_{-\infty}^{\infty} \frac{1+\cos(x)}{(x -\pi)^2}dx$$</span></p>
<p>My approach is <span class="math-container">$$\int_{-\infty}^{\infty} \frac{1+\cos(x)}{(x -\pi)^2}dx=\int_{-\infty}^{\infty} \frac{1}{(x -\pi)^2}dx+\int_{-\infty}^{\infty} \frac{\cos(x)}{(x -\pi)^2}dx.$$</span></p>
<p>However, the first term on the RHS does not exist, since there is a singularity at $x=\pi$.</p>
<p>How can I overcome this problem?</p>
<p>Now, following the hint, I get $$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx$$</p>
<p>BTW, if I want to use <strong>residue calculus</strong> to do this problem, how should I do it? I am confused about the pole at the origin. Thanks!</p>
| ratatuy | 812,151 | <p>I used @Franklin Pezutti Dyer's hint:</p>
<p><span class="math-container">$-\int_{-\infty}^{+\infty}(1-\cos{x})d\left(\frac{1}{x}\right)=-\int_{-\infty}^{\infty}\frac{\sin{x}}{x}dx=-\pi$</span></p>
|
3,637,526 | <p>I'm trying to understand the proof of Theorem 16.10 in <em>Probability and Measure</em> by Patrick Billingsley; I put part of it here exactly as presented in the book.</p>
<p>Theorem: If <span class="math-container">$f,g$</span> are nonnegative and <span class="math-container">$\int_Afd\mu=\int_Agd\mu$</span> for all <span class="math-container">$A$</span> in <span class="math-container">$\mathscr{F}$</span>, and <span class="math-container">$\mu$</span> is <span class="math-container">$\sigma$</span>-finite, then <span class="math-container">$f=g$</span> almost everywhere</p>
<p>Proof: Suppose that <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are nonnegative and that <span class="math-container">$\int_Afd\mu\leq\int_Agd\mu$</span> for all <span class="math-container">$A$</span> in <span class="math-container">$\mathscr{F}$</span>. If <span class="math-container">$\mu$</span> is <span class="math-container">$\sigma$</span>-finite, there are <span class="math-container">$\mathscr{F}$</span>-sets <span class="math-container">$A_n$</span> such that <span class="math-container">$A_n\uparrow\Omega$</span> and <span class="math-container">$\mu(A_n)<\infty$</span>. If <span class="math-container">$B_n = [0\leq g<f, g\leq n]$</span>, then the hypothesized inequality applied to <span class="math-container">$A_n\cap B_n$</span> implies
<span class="math-container">$\int_{A_n\cap B_n}fd\mu\leq\int_{A_n\cap B_n}gd\mu< \infty$</span> (finite beacuse <span class="math-container">$A_n\cap B_n$</span> has finite measure and <span class="math-container">$g$</span> is bounded there) and hence <span class="math-container">$\int I_{A_n\cap B_n}(f-g)d\mu=0 \ldots$</span> (the proof continues)</p>
<p>I understand everything that follows except from one part when the author uses the fact that <span class="math-container">$B_n = [0\leq g<f, g\leq n]$</span> is a measurable set. Why is this a measurable set? Thanks in advance</p>
| Physical Mathematics | 592,278 | <p>I'm not quite familiar with your notation, but I think this is another way of writing <span class="math-container">$B_n$</span>:
<span class="math-container">$$B_n:= \{ x\in X \mid 0 \leq g(x) < f(x) \text{ and } g \leq n\}$$</span>
If that's correct, then note that:
<span class="math-container">$$B_n = \{x \in X \mid g(x) \geq 0\} \cap \{x \in X \mid (f-g)(x) > 0\} \cap \{x \in X \mid g(x) \leq n\} \\
= g^{-1}([0,\infty)) \cap (f-g)^{-1}((0,\infty)) \cap g^{-1}((-\infty,n])$$</span>
But all of the sets in the last expression are measurable as <span class="math-container">$g, f-g$</span> are both measurable (difference of measurable functions is measurable) and hence <span class="math-container">$B_n$</span> is measurable.</p>
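<p>One small addition, not needed for the question as posed: if <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are allowed to take the value <span class="math-container">$+\infty$</span> (as nonnegative measurable functions often are), then <span class="math-container">$f-g$</span> is not defined everywhere, and one can avoid it by writing</p>
<p><span class="math-container">$$\{g<f\}=\bigcup_{q\in\mathbb{Q}}\bigl(\{g<q\}\cap\{q<f\}\bigr),$$</span></p>
<p>a countable union of measurable sets, so <span class="math-container">$B_n$</span> is measurable in that setting as well.</p>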
|
1,638,051 | <p>$$\int\frac{dx}{(x^{2}-36)^{3/2}}$$</p>
<p>My attempt:</p>
<p>the factor in the denominator implies</p>
<p>$$x^{2}-36=x^{2}-6^{2}$$</p>
<p>substituting $x=6\sec\theta$, noting that $dx=6\tan\theta \sec\theta$ </p>
<p>$$x^{2}-6^{2}=6^{2}\sec^{2}\theta-6^{2}=6^{2}\tan^{2}\theta$$</p>
<p>$$\int\frac{dx}{(x^{2}-36)^{3/2}}=\int\frac{6\tan\theta \sec\theta}{36\tan^{2}\theta}=\frac{1}{6}\int\frac{\sec\theta}{\tan\theta}$$</p>
<p>using trig identities:
$$\frac{1}{6}\int\frac{\sec\theta}{\tan\theta}=\frac{1}{6}\int \sin^{-1}\theta$$</p>
<p>now using integration by parts:
$$\frac{1}{6}\int \sin^{-1}\theta$$
$$u=\sin^{-1}\theta, du=\frac{1}{\sqrt{1-\theta^{2}}}, dv=1, v=\theta$$
using $uv-\int{vdu}$</p>
<p>$$\frac{1}{6}\bigg(\theta \sin^{-1}\theta-\int{\frac{\theta}{\sqrt{1-\theta^{2}}}}d\theta\bigg)$$</p>
<p>now using simple substitution:$$z=1-\theta^{2}, dz=-2\theta d\theta, -\frac{1}{2}du=\theta d\theta$$</p>
<p>it is apparent that</p>
<p>$$\frac{1}{6}\bigg(\theta \sin^{-1}\theta-\bigg(-\frac{1}{2}\int{\frac{dz}{\sqrt{z}}}\bigg)\bigg)$$</p>
<p>$$=\frac{1}{6}\bigg(\theta \sin^{-1}\theta-\bigg(-\frac{1}{2}(2\sqrt{z})\bigg)\bigg)=\frac{1}{6}\bigg(\theta \sin^{-1}\theta+\sqrt{1+\theta^{2}}\bigg)$$</p>
<p>$$=\frac{1}{6}\theta \sin^{-1}\theta+\frac{1}{6}\sqrt{1+\theta^{2}}+C$$</p>
<p>I have the following questions:</p>
<p>1. This integral seems tricky and drawn out to me; is there another method that reduces the steps/methods of integration? I had to use trig substitution, integration by parts, and substitution in order to solve the integral. What can I do to find easier ways to complete integrals of this type?</p>
<p>2. Is this solution even correct? Wolfram Alpha says the solution to this integral is $-\frac{x}{36\sqrt{x^{2}-36}}+C$; how can I determine equivalence?</p>
| Community | -1 | <p><strong>Without substitution</strong>:</p>
<p>$$\int\frac{dx}{(x^{2}-36)^{3/2}}=\frac1{36}\int\frac{(x^2-(x^2-36))\,dx}{(x^{2}-36)^{3/2}}=\frac1{36}\int\frac{x^2\,dx}{(x^{2}-36)^{3/2}}-\frac1{36}\int\frac{dx}{(x^{2}-36)^{1/2}}.$$</p>
<p>Then by parts on the first term,</p>
<p>$$\int\frac{x\cdot x\,dx}{(x^{2}-36)^{3/2}}=-\frac x{(x^{2}-36)^{1/2}}+\int\frac{dx}{(x^{2}-36)^{1/2}}.$$</p>
<p>After scaling the variable with a factor $6$, you recognize a known derivative,</p>
<p>$$\int\frac{dx}{\sqrt{x^2-1}}=\text{arcosh}(x).$$</p>
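<p>Combining the two displays above (a step the answer leaves to the reader), the remaining square-root integrals cancel and one lands exactly on the antiderivative that Wolfram Alpha reports in the question:</p>
<p>$$\int\frac{dx}{(x^{2}-36)^{3/2}}=\frac1{36}\left(-\frac x{\sqrt{x^{2}-36}}+\int\frac{dx}{\sqrt{x^{2}-36}}\right)-\frac1{36}\int\frac{dx}{\sqrt{x^{2}-36}}=-\frac{x}{36\sqrt{x^{2}-36}}+C.$$</p>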
|
23,994 | <p>For which values of $m$ does the equation
$$3 \ln x+m x^3 = 17$$
have $1$ solution? $2$ solutions? $0$ solutions?</p>
<p>Thanks.</p>
| Jonas Meyer | 1,424 | <p>If $m\geq 0$, note that $f(x)=3\ln x +mx^3$ is always increasing, goes to positive infinity as $x$ does, and goes to minus infinity as $x$ decreases to zero. Also, it is continuous. This is enough information to answer the question in this case.</p>
<p>If $m\lt 0$, then $f'(x)=3(\frac{1}{x}+mx^2)$ has a unique positive root at $r=\sqrt[3]{-\frac{1}{m}}$. Thus $f$ is increasing on $(0,r)$ and decreasing on $(r,\infty)$. Also, $f$ goes to $-\infty$ as $x$ decreases to zero or as $x$ goes to $\infty$. The number of solutions in this case will be determined by the value of $f(r)$. The $3$ cases are $f(r)<17$, $f(r)=17$, and $f(r)>17$. To determine the condition on $m$ when each occurs, you can solve the logarithmic equation you get by plugging in $r$, $f(r)=17$.</p>
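<p>Carrying out that last step explicitly (a supplement to the hint above): since $r^3=-\frac{1}{m}$ one has $mr^3=-1$ and $3\ln r=\ln\left(-\frac1m\right)$, so</p>
<p>$$f(r)=3\ln r+mr^{3}=-\ln(-m)-1,$$</p>
<p>and $f(r)=17$ exactly when $m=-e^{-18}$. Hence there are two solutions for $-e^{-18}<m<0$, exactly one solution for $m=-e^{-18}$ as well as for every $m\geq 0$, and no solutions for $m<-e^{-18}$.</p>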
|
1,261,067 | <p>$$\int \left(\frac15 x^3 - 2x + \frac3x + e^x \right ) \mathrm dx$$</p>
<p>I came up with
$$F=x^4-x^2+\frac{3x}{\frac12 x^2}+e^x$$
but that was wrong.</p>
| atapaka | 79,695 | <p>The first term gives $\frac{x^4}{20}$, the next $-x^2$, the next $3 \ln(x)$, the last $e^x$, plus a constant</p>
|
1,261,067 | <p>$$\int \left(\frac15 x^3 - 2x + \frac3x + e^x \right ) \mathrm dx$$</p>
<p>I came up with
$$F=x^4-x^2+\frac{3x}{\frac12 x^2}+e^x$$
but that was wrong.</p>
| wythagoras | 236,048 | <p>\begin{align}
\int \frac{1}{5}(x^3)-2x+\frac{3}{x}+e^x dx & = \int \frac{1}{5}x^3 dx- \int 2x dx+ \int \frac{3}{x} dx+ \int e^x dx \\ &= \frac{1}{20}x^4 - x^2 + 3 \ln(x) + e^x + C
\end{align}</p>
<p>Important rules:</p>
<p>•Antiderivative of $x^n$ is $\frac{1}{n+1}x^{n+1}$ for $n \neq -1$</p>
<p>•Antiderivative of $x^{-1}$ is $\ln|x|$</p>
|
3,408,458 | <p>I have 4 vectors:</p>
<ul>
<li><span class="math-container">$|\vec{AC}|=|\vec{AD}|$</span></li>
<li><span class="math-container">$|\vec{BC}|=|\vec{BE}|$</span></li>
</ul>
<p><span class="math-container">$\angle (\vec{AC}, \vec{AD}) $</span> =<span class="math-container">$\angle (\vec{BC}, \vec{BE}) $</span></p>
<p><a href="https://i.stack.imgur.com/eEgie.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eEgie.png" alt="vector image" /></a></p>
<p>If I know the length and angle of all the vectors, how do I find the distance between D and E?</p>
<h2>EDIT expanding to more vectors</h2>
<p>Given all possible vector pairs that satisfy:</p>
<ul>
<li><span class="math-container">$|\vec{xC}|=|\vec{xy}|$</span> Vector ends at C</li>
<li><span class="math-container">$\angle (\vec{xC}, \vec{xy}) $</span> =<span class="math-container">$\angle (\vec{BC}, \vec{BE}) $</span></li>
</ul>
<p>How do you describe the line that is formed by all the points y (the line through y1, y2, and the black dots in the image)?</p>
<p><a href="https://i.stack.imgur.com/4K8Zc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4K8Zc.png" alt="All y points" /></a></p>
| Kitter Catter | 166,001 | <p>Define <span class="math-container">$X$</span> as the position of <span class="math-container">$C$</span> and <span class="math-container">$\theta$</span> as the angle used</p>
<p>Isn't this defined by the parametric equations:
<span class="math-container">$$x(t) = t + (X-t) \cos\theta$$</span>
<span class="math-container">$$y(t) = (X-t) \sin\theta$$</span>
solving for <span class="math-container">$t$</span> gives:
<span class="math-container">$$ t=X-\frac{y}{\sin\theta}$$</span>
Thus
<span class="math-container">$$x=X-\frac{y}{\sin \theta}+y\frac{\cos\theta}{\sin\theta}$$</span>
<span class="math-container">$$y=\frac{(X-x)\sin \theta}{1-\cos\theta}$$</span></p>
<p>The distance between two points is thus:
<span class="math-container">$$s=\sqrt{(y-y')^2+(x-x')^2}$$</span>
<span class="math-container">$$s=\sqrt{(\frac{\sin\theta}{1-\cos\theta})^2 (x-x')^2+(x-x')^2}$$</span>
<span class="math-container">$$s=|x-x'|\sqrt{\frac{2}{1-\cos \theta}}$$</span>
or if you want to put this back in terms of <span class="math-container">$t$</span>:
<span class="math-container">$$s=|t-t'|\sqrt{2(1-\cos\theta)}$$</span></p>
|
697,402 | <p>I have this limit:</p>
<p>$$ \lim_{x\to\infty}\frac{x^3+\cos x+e^{-2x}}{x^2\sqrt{x^2+1}} $$ I tried to solve it by this:</p>
<p>$$ \lim_{x\to\infty}\frac{x^3+\cos x+e^{-2x}}{x^2\sqrt{x^2+1}} = \lim_{x\to\infty}\frac{\frac{x^3}{x^3}+\frac{\cos x}{x^3}+\frac{e^{-2x}}{x^3}}{\frac{x^2\sqrt{x^2+1}}{x^3}} = \frac{0+0+0}{\frac{\sqrt{\infty^2+1}}{\infty}}$$ I do not think that I got it right there... Wolfram also says that the answer is $1$, which this does not seem to be. How do I solve this?</p>
| Raka | 120,299 | <p>I think you just divide by the highest power of $x$,
like this:</p>
<p>write the denominator as $x^2\sqrt{x^2+1}=x^3\sqrt{1+x^{-2}}$ and divide numerator and denominator by $x^3$,</p>
<p>$$\frac{x^3+\cos x+e^{-2x}}{x^2\sqrt{x^2+1}}=\frac{1+\frac{\cos x}{x^3}+\frac{e^{-2x}}{x^3}}{\sqrt{1+\frac{1}{x^2}}}.$$</p>
<p>As $x \to \infty$ the numerator tends to $1+0+0$ (note that $\frac{x^3}{x^3}=1$, not $0$) and the denominator tends to $\sqrt{1+0}$,</p>
<p>and the result is $1$.</p>
<p>Sorry if I have a mistake, I'm just trying my logic.</p>
|
66,000 | <p>In 2008 I wrote a group theory package. I've recently started using it again, and I found that one (at least) of my functions is broken in Mathematica 10. The problem is complicated to describe, but the essence of it occurs in this line:</p>
<pre><code>l = Split[l, Union[#1] == Union[#2] &]
</code></pre>
<p>Here <code>l</code> is a list of sets. The intent of the line is to split <code>l</code> into sublists of identical sets. Each set is represented as a list of group elements. I say "sets" rather than "lists" because two sets are to be considered identical if they contain the same members in any order. This is the reason for comparing <code>Union</code>s of the sets. </p>
<p>This used to work, but now it doesn't. The problem is that sets that, as far as I can tell, are equal, do not compare equal by this test. In fact, the comparison <code>e1 == e2</code> for indistinguishable group elements <code>e1</code> and <code>e2</code> also sometimes fails to yield <code>True</code>. (It remains unevaluated; <code>e1 === e2</code> evaluates to <code>False</code>.) The elements can be fairly complicated objects. For instance, in one case where I'm having this problem, <code>ByteCount[e1]</code> is 2448. But <code>e1</code> and <code>e2</code> are indistinguishable. For instance, <code>ToString[FullForm[e1]] === ToString[FullForm[e2]]</code> yields <code>True</code>. </p>
<p>I've shown one line where this failure to compare equal causes a problem. In this one case I could probably work around the problem by defining <code>UpValue</code>s for <code>e1 == e2</code> or <code>e1 === e2</code>. But, unfortunately, the problem raises its head in other contexts as well. For instance, I am trying to use <code>GraphPlot</code> to show a cycle graph of the elements. <code>GraphPlot</code> takes a list of edges of the form <code>ei->ej</code>. In order to recognize that edges <code>ei->ej</code> and <code>ei->ek</code> are both connected to <code>ei</code>, <code>GraphPlot</code> needs to know that the <code>ei</code> appearing in the first edge is the same as <code>ei</code> in the second. It doesn't, so I get a disconnected graph. Unlike <code>Split</code>, <code>GraphPlot</code> doesn't provide a hook to enable me to tell it how to test vertexes for equality, and it apparently doesn't use <code>Equal</code> or <code>SameQ</code>, either, as <code>UpValue</code>s I define for those are not used. </p>
<p>(Sorry about the generic tag -- I couldn't find anything more specific. Suggestions welcome.)</p>
<p>EDIT: In response to Szabolcs' request, here is the <code>FullForm</code> of such an object:</p>
<pre><code>a = sdp[znz[1, 3],
aut[List[Rule[znz[1, 3], znz[1, 3]]],
List[Rule[znz[0, 3], znz[0, 3]], Rule[znz[1, 3], znz[1, 3]],
Rule[znz[2, 3], znz[2, 3]]],
Dispatch[List[Rule[znz[0, 3], znz[0, 3]],
Rule[znz[1, 3], znz[1, 3]], Rule[znz[2, 3], znz[2, 3]]]]],
Function[NonCommutativeMultiply[Slot[2], Slot[1]]]]
b = sdp[znz[1, 3],
aut[List[Rule[znz[1, 3], znz[1, 3]]],
List[Rule[znz[0, 3], znz[0, 3]], Rule[znz[1, 3], znz[1, 3]],
Rule[znz[2, 3], znz[2, 3]]],
Dispatch[List[Rule[znz[0, 3], znz[0, 3]],
Rule[znz[1, 3], znz[1, 3]], Rule[znz[2, 3], znz[2, 3]]]]],
Function[NonCommutativeMultiply[Slot[2], Slot[1]]]]
a === b
(* ==> False *)
</code></pre>
<p>Note that <code>a</code> and <code>b</code> are identical and <code>ToString[a] === ToString[b]</code> gives <code>True</code>.</p>
| MikeLimaOscar | 5,414 | <p>As @Szabolcs points out, <code>Dispatch</code> does not interact well with <code>SameQ</code>, etc., in Mathematica 10.</p>
<pre><code>Dispatch[1 -> 2] === Dispatch[1 -> 2]
</code></pre>
<blockquote>
<p>False</p>
</blockquote>
<pre><code>Dispatch[1 -> 2] == Dispatch[1 -> 2]
</code></pre>
<blockquote>
<p>False</p>
</blockquote>
<p>Use <code>Normal</code> to "expand" the dispatch objects and then the comparison should work.</p>
<pre><code>Normal[Dispatch[1 -> 2]] == Normal[Dispatch[1 -> 2]]
</code></pre>
<blockquote>
<p>True</p>
</blockquote>
<p><code>Normal</code> works on your complete example expression:</p>
<pre><code>Normal[sdp[znz[1, 3],
aut[List[Rule[znz[1, 3], znz[1, 3]]],
List[Rule[znz[0, 3], znz[0, 3]], Rule[znz[1, 3], znz[1, 3]],
Rule[znz[2, 3], znz[2, 3]]],
Dispatch[
List[Rule[znz[0, 3], znz[0, 3]], Rule[znz[1, 3], znz[1, 3]],
Rule[znz[2, 3], znz[2, 3]]]]],
Function[NonCommutativeMultiply[Slot[2], Slot[1]]]]]
</code></pre>
<blockquote>
<p>sdp[znz[1, 3],
aut[{znz[1, 3] -> znz[1, 3]}, {znz[0, 3] -> znz[0, 3],
znz[1, 3] -> znz[1, 3],
znz[2, 3] -> znz[2, 3]}, {znz[0, 3] -> znz[0, 3],
znz[1, 3] -> znz[1, 3], znz[2, 3] -> znz[2, 3]}], #2 ** #1 &]</p>
</blockquote>
<p>PS As the documentation for <a href="http://reference.wolfram.com/language/ref/Dispatch.html" rel="noreferrer"><code>Dispatch</code></a> states "The use of Dispatch will never affect results that are obtained", this should probably be classified as a bug.</p>
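<p>Since <code>Normal</code> also descends into the nested expression above, one possible repair of the original <code>Split</code> line is to normalize the whole list first. This is an untested sketch, assuming every stored group element can be normalized the same way:</p>
<pre><code>(* strip Dispatch objects before comparing elements *)
l = Split[Normal[l], Union[#1] == Union[#2] &]
</code></pre>
<p>After that the elements no longer contain <code>Dispatch</code> objects, so <code>Equal</code>, <code>SameQ</code>, and functions such as <code>GraphPlot</code> should again treat identical elements as identical.</p>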
|
4,061,536 | <p>We know that in a finite group of order, say, <span class="math-container">$g$</span>, an element of the group has order <span class="math-container">$m\leq g$</span>. However, is it necessarily true that at least one element in the group <span class="math-container">$\textbf{must}$</span> have order <span class="math-container">$g$</span>?</p>
| Shaun | 104,041 | <p>No.</p>
<p>Consider the Klein four group. It has four elements and yet no element of order four.</p>
|
1,116,022 | <p>I've always had this doubt.
It's perfectly reasonable to say that, for example, 9 is bigger than 2.</p>
<p>But does it ever make sense to compare a real number and a complex/imaginary one?</p>
<p>For example, could one say that $5+2i> 3$ because the real part of $5+2i
$ is bigger than the real part of $3$? Or is it just a senseless statement?</p>
<p>Can it be stated that, say, $20000i$ is bigger than $6$ or does the fact that one is imaginary and the other is natural make it impossible to compare their 'sizes'?</p>
<p>It would seem that the 'sizes' of numbers of any type (real, rational, integer, natural, irrational) can be compared, but once imaginary and complex numbers come into the picture, it becomes a bit counter-intuitive for me.</p>
<p>So, does it ever make sense to talk about a real number being 'more than' or 'less than' a complex/imaginary one?</p>
| Serge Ballesta | 210,087 | <p>Order is easy and unambiguous in $\mathbb{R}$, because it is one-dimensional. $\mathbb{C}$, on the other hand, is generally seen as a plane. So you can easily define pre-orders on it, that is, transitive and reflexive relations, that do make sense, such as the examples in Ross Millikan's answer.</p>
<p>But except for the lexicographic order, they are not true order relations because they are not antisymmetric: you can have $a \leq b$ and $b \leq a$ without $a = b$.</p>
<p>And the lexicographic order is not <em>natural</em> because it is not compatible with the usual topology: $x+iy$ and $(x+\epsilon)+iy$ are topologically near, but if $\epsilon>0$ we get $x+iy < x+i(y+N) < (x+\epsilon)+iy$ for every large $N>0$, which is not <em>natural</em> because $x+iy$ and $x+i(y+N)$ are not topologically near.</p>
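<p>One further standard fact, added for completeness rather than taken from the answer above: no total order can turn $\mathbb{C}$ into an <em>ordered field</em>, i.e. an order compatible with addition and multiplication, because in an ordered field every nonzero square is positive, whereas</p>
<p>$$i^2=-1<0.$$</p>
<p>So any way of comparing complex numbers has to give up compatibility with the arithmetic.</p>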
|
4,554,231 | <p>How do you evaluate this limit? I can't manage to do it, even after manipulating the limit expression in several different ways, and using L'Hôpital's rule.</p>
<p><span class="math-container">$$
\lim_{h\,\to\, 0^{+}}\,
\left(\frac{{\rm e}^{-1/h^{2}}\,}{h}\right)
$$</span></p>
| user1058602 | 1,058,602 | <p>If you evaluate the limit of the reciprocal, <span class="math-container">$h\,e^{1/h^{2}}$</span>, you will find that it goes to <span class="math-container">$+\infty$</span> as <span class="math-container">$h$</span> goes to zero, since the exponential factor dominates. So the original limit is <span class="math-container">$0$</span>.</p>
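<p>In detail (a supplement, not part of the answer above), one way to make L'Hôpital's rule work is to substitute <span class="math-container">$t=1/h$</span> first:</p>
<p><span class="math-container">$$\lim_{h\to 0^{+}}\frac{e^{-1/h^{2}}}{h}=\lim_{t\to+\infty}\frac{t}{e^{t^{2}}}=\lim_{t\to+\infty}\frac{1}{2t\,e^{t^{2}}}=0,$$</span></p>
<p>where the middle equality is L'Hôpital's rule applied to the <span class="math-container">$\infty/\infty$</span> form.</p>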
|
26,823 | <p>Trying to solve for the area enclosed by $x^4+y^4=1$. A friend posed this question to me today, but I have no clue what to do to solve this. Keep in mind, we don't even know if there is a straightforward solution. I think he just likes thinking up problems out of thin air. </p>
<p>Anyway, the question becomes more general, since we <em>think</em> that </p>
<p>$\lim_{n\to\infty}\int_0^1{(1-x^n)^{1/n}}\,dx = {1\over4}$ (it approaches a square / becomes linear)</p>
<p>Can anyone confirm whether this is true or not?</p>
| Thomas Kragh | 4,500 | <p>I think this question smells of homework, but another answer, which to me totally obscures the geometric nature of the question has been posted, and I feel that this justifies the following answer (even if the question is closed):</p>
<p>The $l^p$ norms $\lvert(x,y)\rvert_p = (\lvert x\rvert^p+\lvert y \rvert^p)^{1/p}$ are norms and satisfy the following: if $\lvert(x,y)\rvert_p=1$ and $q>p$ then $\lvert(x,y)\rvert_q\leq 1$. So the unit "circles" whose area you want to find grow.</p>
<p>It is also a fact that $\lvert (x,y) \rvert_p \to \max (\lvert x\rvert,\lvert y\rvert)$ as $p\to \infty$. So the unit circles converge to the square which is the boundary of $[-1,1]\times [-1,1]$. This implies, by the monotone convergence theorem, that your integral converges to $1$, because it measures the first-quadrant quarter of the region, while the entire square has area $4$.</p>
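<p>As a side note not contained in the original answer, the area for each finite $n$ even has a closed form: the substitution $u=x^n$ turns the quarter-area integral into a Beta integral,</p>
<p>$$\int_0^1(1-x^n)^{1/n}\,dx=\frac1n B\!\left(\tfrac1n,1+\tfrac1n\right)=\frac{\Gamma\!\left(1+\frac1n\right)^2}{\Gamma\!\left(1+\frac2n\right)},$$</p>
<p>so the area enclosed by $x^4+y^4=1$ is $4\,\Gamma(5/4)^2/\Gamma(3/2)\approx 3.708$, and as $n\to\infty$ the quarter-area tends to $1$ (the full area to $4$), consistent with the limit discussed above.</p>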
|
246,817 | <p>We have the sequence and its sum formula:
$$
1^2+4^2+\cdots+ (3k-2)^2 = \dfrac{k(6k^2-3k-1)}{2}
$$</p>
<p>Now we need to apply it for $k+1$:
$$
1^2+4^2+\cdots+ (3k-2)^2 +(3(k+1)-2)^2 = \\
\dfrac{k(6k^2-3k-1)}{2} + (3(k+1)-2)^2
$$</p>
<p>I know that the result must be $\frac{1}{2}(k+1)(6(k+1)^2-3(k+1)-1)$ but I wasn't able to find the elegant solution during the test. I can't even see from where to factor out $k+1$.</p>
<p>Care to give a hint?</p>
| Bill Dubuque | 242 | <p><strong>Hint</strong> $\ $ Your question amounts to verifying the following polynomial equality</p>
<p>$$\rm (k\!+\!1)\, f(k\!+\!1) - k\, f(k) =\, 2\,(3k\!+\!1)^2\quad for\quad f(k) =\, 6k^2\! - 3k - 1$$</p>
<p>Since LHS and RHS are polynomials of degree $\color{#C00} 2,$ to prove that they are equal it suffices to check that they have equal values at $\,\color{#C00}3\,$ points. That's easy: $ $ LHS = RHS $ $ holds for $\rm\:k = -1,0,1,\:$ viz.
$$\begin{eqnarray}&&\rm k=-1:\ \ f(-1) = 8 \\
&&\rm k\ =\ 0:\ \ \ f(1) = 2\\
&&\rm k\ =\ 1:\ \ \ 2\, f(2)\!-\!f(1) = 32 \end{eqnarray}$$</p>
|
4,417,901 | <p>In the first chapter of "Differential Equations, Dynamical Systems and an Introduction to Chaos" by Hirsch, Smale and Devaney, the authors mention the first-order equation <span class="math-container">$x'(t)=ax(t)$</span> and assert that the only general solution to it is <span class="math-container">$x(t)=ke^{at}$</span>. The assertion is proven by differentiating <span class="math-container">$u(t)e^{-at}$</span> to show that it is the constant <span class="math-container">$k$</span> in the general solution mentioned.</p>
<p>My question is: how did the authors initially arrive at the asserted solution? They didn't explain it in the book.</p>
| Eugene | 726,796 | <p>Let us differentiate a function
<span class="math-container">$$
\begin{aligned}
I_n(\lambda) = \int_0^{+\infty}x^ne^{-\lambda x}dx
\end{aligned}
$$</span></p>
<p>with respect to <span class="math-container">$\lambda$</span>:</p>
<p><span class="math-container">$$
\begin{aligned}
\frac{d}{d\lambda}I_n(\lambda) &= \int_0^{+\infty}x^n\frac{d}{d\lambda}\left(e^{-\lambda x}\right)dx = \int_0^{+\infty}x^n \left(-xe^{-\lambda x}\right)dx = \\
&= \int_0^{+\infty}x^{n+1}d\left(\frac{1}{\lambda}e^{-\lambda x}\right) = \\
&= \underbrace{\left. x^{n+1}\left(\frac{1}{\lambda}e^{-\lambda x}\right)\right|_0^{+\infty}}_{=0}-\int_0^{+\infty}\left(\frac{1}{\lambda}e^{-\lambda x}\right)(n+1)x^ndx = \\
&= -\frac{n+1}{\lambda}\int_0^{+\infty}x^ne^{-\lambda x}dx = -\frac{n+1}{\lambda}I_n(\lambda).
\end{aligned}
$$</span></p>
<p>Thus, we have a differential equation for <span class="math-container">$I_n(\lambda)$</span>:
<span class="math-container">$$
\begin{aligned}
\frac{d}{d\lambda}I_n(\lambda) = -\frac{n+1}{\lambda}I_n(\lambda) &\Leftrightarrow \frac{dI_n}{I_n} = -(n+1)\frac{d\lambda}{\lambda} \Leftrightarrow \log(I_n) = -(n+1)\log(\lambda) + C \Leftrightarrow \\
&\Leftrightarrow \log(I_n) = \log\left(C\lambda^{-(n+1)}\right) \Leftrightarrow I_n(\lambda) = \frac{C}{\lambda^{n+1}}.
\end{aligned}
$$</span></p>
<p>So, <span class="math-container">$I_n(\lambda) = \frac{C}{\lambda^{n+1}}$</span>. To find the constant <span class="math-container">$C$</span>, one needs an initial condition.</p>
<p>Let us calculate <span class="math-container">$I_n(1)$</span>. Then, <span class="math-container">$C = I_n(1)$</span>.</p>
<p><span class="math-container">$$
\begin{aligned}
I_n(1) &= I_n = \int_0^{+\infty}x^ne^{-x}dx = \left|\text{integrating by parts}\right| = \\
&= nI_{n-1} = n(n-1)I_{n-2} = \ldots = n!I_0 = \\
&= n!\underbrace{\int_0^{+\infty}e^{-x}dx}_{=1} = n! = C.
\end{aligned}
$$</span></p>
<p>Finally, we have
<span class="math-container">$$
I_n(\lambda) = \int_0^{+\infty}x^ne^{-\lambda x}dx = \frac{n!}{\lambda^{n+1}}
$$</span></p>
|
70,728 | <p>I've started taking an <a href="http://www.ml-class.org/" rel="noreferrer">online machine learning class</a>, and the first learning algorithm that we are going to be using is a form of linear regression using gradient descent. I don't have much of a background in high level math, but here is what I understand so far.</p>
<p>Given <span class="math-container">$m$</span> number of items in our learning set, with <span class="math-container">$x$</span> and <span class="math-container">$y$</span> values, we must find the best fit line <span class="math-container">$h_\theta(x) = \theta_0+\theta_1x$</span> . The cost function for any guess of <span class="math-container">$\theta_0,\theta_1$</span> can be computed as:</p>
<p><span class="math-container">$$J(\theta_0,\theta_1) = \frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)}) - y^{(i)})^2$$</span></p>
<p>where <span class="math-container">$x^{(i)}$</span> and <span class="math-container">$y^{(i)}$</span> are the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> values for the <span class="math-container">$i^{th}$</span> component in the learning set. If we substitute for <span class="math-container">$h_\theta(x)$</span>,</p>
<p><span class="math-container">$$J(\theta_0,\theta_1) = \frac{1}{2m}\sum_{i=1}^m(\theta_0 + \theta_1x^{(i)} - y^{(i)})^2$$</span></p>
<p>Then, the goal of gradient descent can be expressed as</p>
<p><span class="math-container">$$\min_{\theta_0, \theta_1}\;J(\theta_0, \theta_1)$$</span></p>
<p>Finally, each step in the gradient descent can be described as:</p>
<p><span class="math-container">$$\theta_j := \theta_j - \alpha\frac{\partial}{\partial\theta_j} J(\theta_0,\theta_1)$$</span></p>
<p>for <span class="math-container">$j = 0$</span> and <span class="math-container">$j = 1$</span> with <span class="math-container">$\alpha$</span> being a constant representing the rate of step. </p>
<p>I have no idea how to do the partial derivative. I have never taken calculus, but conceptually I understand what a derivative represents. The instructor gives us the partial derivatives for both <span class="math-container">$\theta_0$</span> and <span class="math-container">$\theta_1$</span> and says not to worry if we don't know how it was derived. (I suppose, technically, it is a computer class, not a mathematics class) However, I would very much like to understand this if possible. Could someone show how the partial derivative could be taken, or link to some resource that I could use to learn more? I apologize if I haven't used the correct terminology in my question; I'm very new to this subject.</p>
| Did | 6,179 | <blockquote>
<p>conceptually I understand what a derivative represents. </p>
</blockquote>
<p>So let us start from that. Consider a function $\theta\mapsto F(\theta)$ of a parameter $\theta$, defined at least on an interval $(\theta_*-\varepsilon,\theta_*+\varepsilon)$ around the point $\theta_*$. Then the derivative of $F$ at $\theta_*$, when it exists, is the number
$$
F'(\theta_*)=\lim\limits_{\theta\to\theta_*}\frac{F(\theta)-F(\theta_*)}{\theta-\theta_*}.
$$
Less formally, you want $F(\theta)-F(\theta_*)-F'(\theta_*)(\theta-\theta_*)$ to be small with respect to $\theta-\theta_*$ when $\theta$ is close to $\theta_*$.</p>
<p>One can also do this with a function of several parameters, fixing every parameter except one. The result is called a <em>partial derivative</em>. In your setting, $J$ depends on two parameters, hence one can fix the second one to $\theta_1$ and consider the function $F:\theta\mapsto J(\theta,\theta_1)$. If $F$ has a derivative $F'(\theta_0)$ at a point $\theta_0$, its value is denoted by $\dfrac{\partial}{\partial \theta_0}J(\theta_0,\theta_1)$. </p>
<p>Or, one can fix the first parameter to $\theta_0$ and consider the function $G:\theta\mapsto J(\theta_0,\theta)$. If $G$ has a derivative $G'(\theta_1)$ at a point $\theta_1$, its value is denoted by $\dfrac{\partial}{\partial \theta_1}J(\theta_0,\theta_1)$.</p>
<p>You consider a function $J$ which is a linear combination of functions $K:(\theta_0,\theta_1)\mapsto(\theta_0+a\theta_1-b)^2$. Derivatives and partial derivatives being linear functionals of the function, one can consider each function $K$ separately. But, the derivative of $t\mapsto t^2$ being $t\mapsto2t$, one sees that $\dfrac{\partial}{\partial \theta_0}K(\theta_0,\theta_1)=2(\theta_0+a\theta_1-b)$ and $\dfrac{\partial}{\partial \theta_1}K(\theta_0,\theta_1)=2a(\theta_0+a\theta_1-b)$.</p>
|
70,728 | <p>I've started taking an <a href="http://www.ml-class.org/" rel="noreferrer">online machine learning class</a>, and the first learning algorithm that we are going to be using is a form of linear regression using gradient descent. I don't have much of a background in high level math, but here is what I understand so far.</p>
<p>Given <span class="math-container">$m$</span> number of items in our learning set, with <span class="math-container">$x$</span> and <span class="math-container">$y$</span> values, we must find the best fit line <span class="math-container">$h_\theta(x) = \theta_0+\theta_1x$</span> . The cost function for any guess of <span class="math-container">$\theta_0,\theta_1$</span> can be computed as:</p>
<p><span class="math-container">$$J(\theta_0,\theta_1) = \frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)}) - y^{(i)})^2$$</span></p>
<p>where <span class="math-container">$x^{(i)}$</span> and <span class="math-container">$y^{(i)}$</span> are the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> values for the <span class="math-container">$i^{th}$</span> component in the learning set. If we substitute for <span class="math-container">$h_\theta(x)$</span>,</p>
<p><span class="math-container">$$J(\theta_0,\theta_1) = \frac{1}{2m}\sum_{i=1}^m(\theta_0 + \theta_1x^{(i)} - y^{(i)})^2$$</span></p>
<p>Then, the goal of gradient descent can be expressed as</p>
<p><span class="math-container">$$\min_{\theta_0, \theta_1}\;J(\theta_0, \theta_1)$$</span></p>
<p>Finally, each step in the gradient descent can be described as:</p>
<p><span class="math-container">$$\theta_j := \theta_j - \alpha\frac{\partial}{\partial\theta_j} J(\theta_0,\theta_1)$$</span></p>
<p>for <span class="math-container">$j = 0$</span> and <span class="math-container">$j = 1$</span> with <span class="math-container">$\alpha$</span> being a constant representing the rate of step. </p>
<p>I have no idea how to do the partial derivative. I have never taken calculus, but conceptually I understand what a derivative represents. The instructor gives us the partial derivatives for both <span class="math-container">$\theta_0$</span> and <span class="math-container">$\theta_1$</span> and says not to worry if we don't know how it was derived. (I suppose, technically, it is a computer class, not a mathematics class) However, I would very much like to understand this if possible. Could someone show how the partial derivative could be taken, or link to some resource that I could use to learn more? I apologize if I haven't used the correct terminology in my question; I'm very new to this subject.</p>
| Hendy | 39,222 | <p>The answer above is a good one, but I thought I'd add in some more "layman's" terms that helped me better understand concepts of partial derivatives. The answers I've seen here and in the Coursera forums leave out talking about the chain rule, which is important to know if you're going to get what this is doing...</p>
<hr>
<p>It's helpful for me to think of partial derivatives this way: the variable you're
focusing on is treated as a variable, the other terms just numbers. Other key
concepts that are helpful:</p>
<ul>
<li>For "regular derivatives" of a simple form like $F(x) = cx^n$ , the derivative is simply $F'(x) = cn \times x^{n-1}$ </li>
<li>The derivative of a constant (a number) is 0.</li>
<li>Summations are just passed on in derivatives; they don't affect the derivative. Just copy them down in place as you derive.</li>
</ul>
<p>Also, it should be mentioned that the <a href="http://en.wikipedia.org/wiki/Chain_rule">chain
rule</a> is being used. The chain rule says
that (in clunky laymans terms), for $g(f(x))$, you take the derivative of $g(f(x))$,
treating $f(x)$ as the variable, and then multiply by the derivative of $f(x)$. For
our cost function, think of it this way:</p>
<p>$$ g(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^m \left(f(\theta_0,
\theta_1)^{(i)}\right)^2 \tag{1}$$</p>
<p>$$ f(\theta_0, \theta_1)^{(i)} = \theta_0 + \theta_{1}x^{(i)} -
y^{(i)} \tag{2}$$</p>
<p>To show I'm not pulling funny business, sub in the definition of $f(\theta_0,
\theta_1)^{(i)}$ into the definition of $g(\theta_0, \theta_1)$ and you get:</p>
<p>$$ g(f(\theta_0, \theta_1)^{(i)}) = \frac{1}{2m} \sum_{i=1}^m \left(\theta_0 +
\theta_{1}x^{(i)} - y^{(i)}\right)^2 \tag{3}$$</p>
<p>This is, indeed, our entire cost function.</p>
<p>Thus, the partial derivatives work like this:</p>
<p>$$ \frac{\partial}{\partial \theta_0} g(\theta_0, \theta_1) = \frac{\partial}{\partial
\theta_0} \frac{1}{2m} \sum_{i=1}^m \left(f(\theta_0, \theta_1)^{(i)}\right)^2 = 2
\times \frac{1}{2m} \sum_{i=1}^m \left(f(\theta_0, \theta_1)^{(i)}\right)^{2-1} = \tag{4}$$</p>
<p>$$\frac{1}{m}
\sum_{i=1}^m f(\theta_0, \theta_1)^{(i)}$$</p>
<p>In other words, just treat $f(\theta_0, \theta_1)^{(i)}$ like a variable and you have a
simple derivative of $\frac{1}{2m} x^2 = \frac{1}{m}x$</p>
<p>$$ \frac{\partial}{\partial \theta_0} f(\theta_0, \theta_1)^{(i)} = \frac{\partial}{\partial \theta_0} (\theta_0 + \theta_{1}x^{(i)} - y^{(i)}) \tag{5}$$</p>
<p>And $\theta_1, x$, and $y$ are just "a number" since we're taking the derivative with
respect to $\theta_0$, so the partial of $g(\theta_0, \theta_1)$ becomes:</p>
<p>$$ \frac{\partial}{\partial \theta_0} f(\theta_0, \theta_1) = \frac{\partial}{\partial \theta_0} (\theta_0 + [a \
number][a \ number]^{(i)} - [a \ number]^{(i)}) = \frac{\partial}{\partial \theta_0}
\theta_0 = 1 \tag{6}$$</p>
<p>So, using the chain rule, we have:</p>
<p>$$ \frac{\partial}{\partial \theta_0} g(f(\theta_0, \theta_1)^{(i)}) =
\frac{\partial}{\partial \theta_0} g(\theta_0, \theta_1) \frac{\partial}{\partial
\theta_0}f(\theta_0, \theta_1)^{(i)} \tag{7}$$</p>
<p>And subbing in the partials of $g(\theta_0, \theta_1)$ and $f(\theta_0, \theta_1)^{(i)}$
from above, we have:</p>
<p>$$ \frac{1}{m} \sum_{i=1}^m f(\theta_0, \theta_1)^{(i)} \frac{\partial}{\partial
\theta_0}f(\theta_0, \theta_1)^{(i)} = \frac{1}{m} \sum_{i=1}^m \left(\theta_0 +
\theta_{1}x^{(i)} - y^{(i)}\right) \times 1 = \tag{8}$$</p>
<p>$$ \frac{1}{m} \sum_{i=1}^m \left(\theta_0 + \theta_{1}x^{(i)} - y^{(i)}\right)$$</p>
<hr>
<p>What about the derivative with respect to $\theta_1$?</p>
<p>Our term $g(\theta_0, \theta_1)$ is identical, so we just need to take the derivative
of $f(\theta_0, \theta_1)^{(i)}$, this time treating $\theta_1$ as the variable and the
other terms as "just a number." That goes like this:</p>
<p>$$ \frac{\partial}{\partial \theta_1} f(\theta_0, \theta_1)^{(i)} = \frac{\partial}{\partial \theta_1} (\theta_0 + \theta_{1}x^{(i)} - y^{(i)}) \tag{9}$$</p>
<p>$$ \frac{\partial}{\partial
\theta_1} f(\theta_0, \theta_1)^{(i)} = \frac{\partial}{\partial \theta_1} ([a \ number] +
\theta_{1}[a \ number, x^{(i)}] - [a \ number]) \tag{10}$$</p>
<p>Note that the "just a number", $x^{(i)}$, <em>is</em> important in this case because the
derivative of $c \times x$ (where $c$ is some number) is $\frac{d}{dx}(c \times x^1) =
c \times 1 \times x^{(1-1=0)} = c \times 1 \times 1 = c$, so the number will carry
through. In this case that number is $x^{(i)}$ so we need to keep it. Thus, our
derivative is:</p>
<p>$$ \frac{\partial}{\partial \theta_1} f(\theta_0, \theta_1)^{(i)} = 0 + (\theta_{1})^1
x^{(i)} - 0 = 1 \times \theta_1^{(1-1=0)} x^{(i)} = 1 \times 1 \times x^{(i)} =
x^{(i)} \tag{11}$$</p>
<p>Thus, the entire answer becomes:</p>
<p>$$ \frac{\partial}{\partial \theta_1} g(f(\theta_0, \theta_1)^{(i)}) =
\frac{\partial}{\partial \theta_1} g(\theta_0, \theta_1) \frac{\partial}{\partial
\theta_1} f(\theta_0, \theta_1)^{(i)} = \tag{12}$$</p>
<p>$$\frac{1}{m} \sum_{i=1}^m f(\theta_0, \theta_1)^{(i)} \frac{\partial}{\partial
\theta_1}f(\theta_0, \theta_1)^{(i)} = \frac{1}{m} \sum_{i=1}^m \left(\theta_0 +
\theta_{1}x^{(i)} - y^{(i)}\right) x^{(i)}$$</p>
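<p>To tie this back to the update rule in the question (a summary, not part of the derivation above), substituting the two partial derivatives into $\theta_j := \theta_j - \alpha\frac{\partial}{\partial\theta_j} J(\theta_0,\theta_1)$ gives the gradient descent steps</p>
<p>$$\theta_0 := \theta_0 - \alpha\,\frac{1}{m}\sum_{i=1}^m\left(\theta_0+\theta_1 x^{(i)}-y^{(i)}\right),\qquad \theta_1 := \theta_1 - \alpha\,\frac{1}{m}\sum_{i=1}^m\left(\theta_0+\theta_1 x^{(i)}-y^{(i)}\right)x^{(i)},$$</p>
<p>where both right-hand sides are evaluated with the current values of $\theta_0,\theta_1$ before either parameter is overwritten.</p>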
<hr>
<p>A quick addition per @Hugo's comment below. Let's ignore the fact that we're dealing with vectors at all, which drops the summation and the $^{(i)}$ superscript bits. We can also more easily use real numbers this way.</p>
<p>$\require{cancel}$</p>
<p>Let's say $x = 2$ and $y = 4$.</p>
<p>So, for part 1 you have:</p>
<p>$$\frac{\partial}{\partial \theta_0} (\theta_0 + \theta_{1}x - y)$$</p>
<p>Filling in the values for $x$ and $y$, we have:</p>
<p>$$\frac{\partial}{\partial \theta_0} (\theta_0 + 2\theta_{1} - 4)$$</p>
<p>We only care about $\theta_0$, so $\theta_1$ is treated like a constant (any number, so let's just say it's 6).</p>
<p>$$\frac{\partial}{\partial \theta_0} (\theta_0 + (2 \times 6) - 4) = \frac{\partial}{\partial \theta_0} (\theta_0 + \cancel8) = 1$$</p>
<p>Using the same values, let's look at the $\theta_1$ case (same starting point with $x$ and $y$ values input):</p>
<p>$$\frac{\partial}{\partial \theta_1} (\theta_0 + 2\theta_{1} - 4)$$</p>
<p>In this case we <em>do</em> care about $\theta_1$, but $\theta_0$ is treated as a constant; we'll do the same as above and use 6 for it's value:</p>
<p>$$\frac{\partial}{\partial \theta_1} (6 + 2\theta_{1} - 4) = \frac{\partial}{\partial \theta_1} (2\theta_{1} + \cancel2) = 2 = x$$</p>
<p>The answer is 2 because we ended up with $2\theta_1$ and we had <em>that</em> because $x = 2$. </p>
<p>Hopefully this clarifies a bit why in the first instance (wrt $\theta_0$) I wrote "just a number," and in the second case (wrt $\theta_1$) I wrote "just a number, $x^{(i)}$". While it's true that $x^{(i)}$ is still "just a number", since it's attached to the variable of interest in the second case its value will carry through, which is why we end up at $x^{(i)}$ for the result.</p>
|
9,111 | <p>What function can I use to evaluate $(x+y)^2$ to $x^2 + 2xy + y^2$? </p>
<p>I want to evaluate it, and I've tried the most obvious way: simply typing and evaluating $(x+y)^2$, but it gives me only $(x+y)^2$ as output. I've been searching for the last few minutes but I still have no clue; can you help me?</p>
| Artes | 184 | <h2><code>Collect</code></h2>
<p>Since it hasn't been mentioned (and one can interpret the question in another way), I'd also recommend using <code>Collect</code> (it can be applied not only to polynomials):</p>
<pre><code>Collect[(x + y)^2, x]
</code></pre>
<blockquote>
<pre><code>x^2 + 2 x y + y^2
</code></pre>
</blockquote>
<p>In more general cases it would be handy to use the second argument in the form of <code>List</code>, e.g. <code>Collect[(x + y)^2, {x, y}]</code>.
Comparing it to <code>Expand</code>, let's try <code>Collect</code> with <code>PolynomialForm</code>:</p>
<pre><code>Collect[(x + y + z)^3, x] // PolynomialForm[ #, TraditionalOrder -> True] &
</code></pre>
<blockquote>
<pre><code>x^3 + (3 y + 3 z) x^2 + (3 y^2 + 6 z y + 3 z^2) x + y^3 + z^3 + 3 y z^2 + 3 y^2 z
</code></pre>
</blockquote>
<p>it collects terms with the various powers of <code>x</code> only, while this expands terms with positive integer powers in the expression:</p>
<pre><code>Expand[(x + y + z)^3]
</code></pre>
<blockquote>
<pre><code>x^3 + 3 x^2 y + 3 x y^2 + y^3 + 3 x^2 z + 6 x y z + 3 y^2 z + 3 x z^2 + 3 y z^2 + z^3
</code></pre>
</blockquote>
<h2><code>Expand</code></h2>
<p>It could be useful to take a look at the second argument of <code>Expand</code> e.g.</p>
<pre><code>Expand[(x + y)^2 + (y + z)^2, x]
</code></pre>
<blockquote>
<pre><code>x^2 + 2 x y + y^2 + (y + z)^2
</code></pre>
</blockquote>
<p>it leaves unexpanded terms free of <code>x</code>.</p>
<p><strong>Edit</strong></p>
<p>Let's add other functions which can also expand polynomials (they serve different purposes though), like:</p>
<h2><code>GroebnerBasis</code></h2>
<pre><code>GroebnerBasis[(x + y)^2, x][[1]]
</code></pre>
<blockquote>
<pre><code>x^2 + 2 x y + y^2
</code></pre>
</blockquote>
<pre><code>And @@ ( GroebnerBasis[(x + y + w + z)^#, x][[1]] == Expand[(x + y + w + z)^#] & /@ Range[2, 10])
</code></pre>
<blockquote>
<pre><code>True
</code></pre>
</blockquote>
<h2><code>PolynomialReduce</code></h2>
<pre><code>PolynomialReduce[(x + y)^2, 1, x][[1, 1]]
</code></pre>
<blockquote>
<pre><code>x^2 + 2 x y + y^2
</code></pre>
</blockquote>
<pre><code>And @@ ( PolynomialReduce[(x + y + w + z)^#, 1, {x, y, w, z}][[1,1]]
== Expand[(x + y + w + z)^# ] & /@ Range[2, 10])
</code></pre>
<blockquote>
<pre><code>True
</code></pre>
</blockquote>
|
626,958 | <p>I know that $E[X|Y]=E[X]$ if $X$ is independent of $Y$. I was recently made aware that it is true if we merely have $\text{Cov}(X,Y)=0$. Would someone kindly either give a hint if it's easy, show me a reference, or even a full proof if it's short? Either will work, I think :) </p>
<p>Thanks.</p>
<p>Edit: Thanks for the great answers! I accepted Alecos' simply for being first. I've made a follow-up question <a href="https://math.stackexchange.com/questions/627846/conditional-mean-on-uncorrelated-stochastic-variable-2">here</a>. (When I someday reach 15 reputation I will upvote.)</p>
| Did | 6,179 | <p>Assume that $X$ is symmetric Bernoulli, that is, such that $P[X=+1]=P[X=-1]=\frac12$, that $Z$ is symmetric Bernoulli independent of $X$, and that $Y=0$ on $[X=-1]$ while $Y=Z$ on $[X=+1]$. </p>
<p>In other words, the distribution of $(X,Y)$ is $\frac12\delta_{(-1,0)}+\frac14\delta_{(+1,+1)}+\frac14\delta_{(+1,-1)}$. Still in other words, $Y=\frac12(X+1)Z$. Thus, $E[X]=E[Y]=E[XY]=0$ but $X=2Y^2-1$ almost surely hence $E[X\mid Y]=2Y^2-1\ne0$ with full probability.</p>
<p>To sum up, $\mathrm{Cov}(X,Y)=0$ does not imply $E[X\mid Y]=E[X]$ (although the opposite implication holds).</p>
|
179,223 | <p>I have posted the same question on the community (<a href="http://community.wolfram.com/groups/-/m/t/1394441?p_p_auth=YV2a4wzw" rel="nofollow noreferrer">http://community.wolfram.com/groups/-/m/t/1394441?p_p_auth=YV2a4wzw</a>).</p>
<p>I tried to register the movie posted below (compressed version here) using Mathematica, but to no avail. However, I could manage to do the same very easily with another piece of software (FIJI: <a href="https://fiji.sc/" rel="nofollow noreferrer">https://fiji.sc/</a>) with the plugin "StackReg" (<a href="http://bradbusse.net/sciencedownloads.html" rel="nofollow noreferrer">http://bradbusse.net/sciencedownloads.html</a>) </p>
<p>Input Video:</p>
<p><a href="http://community.wolfram.com//c/portal/getImageAttachment?filename=ezgif.com-optimize.gif&userId=942204" rel="nofollow noreferrer">http://community.wolfram.com//c/portal/getImageAttachment?filename=ezgif.com-optimize.gif&userId=942204</a></p>
<p>The strategy that I used for the registration was as follows with both softwares:</p>
<ol>
<li><p>Inverting (ColorNegate) the image</p></li>
<li><p>Applying a Gaussian Blur of radius 10</p></li>
<li><p>Thresholding the image to obtain a mask for the object</p></li>
<li><p>Registering the binarized image (mask) and saving the transformation matrices</p></li>
<li><p>Using the transformation matrices to obtain a registered version of the image.</p></li>
</ol>
<p>for results obtained from FIJI/StackReg please see: <a href="http://community.wolfram.com//c/portal/getImageAttachment?filename=brightfield.gif&userId=942204" rel="nofollow noreferrer">http://community.wolfram.com//c/portal/getImageAttachment?filename=brightfield.gif&userId=942204</a></p>
<p>The code for Mathematica breaks when I do the same:</p>
<p><a href="https://i.stack.imgur.com/HhnEf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HhnEf.png" alt="enter image description here"></a></p>
<p>Can anyone please help me figure out why ImageAlign is breaking down?</p>
<p>This is a simple problem and I expect that the ImageAlign function should not break down on such a petty case. I checked my masks and they seem to be fine. I gave the masks that I generated from Mathematica to FIJI/StackReg, which can successfully align them and yield the transformation matrices in a .txt file. This brings me to the second question: is there a way to get the transformation matrix (alignment matrix) from ImageAlign? I need the transformation matrices after alignment in order to align the original movie.</p>
<p>Note: StackReg aligns all the images relative to the first frame.</p>
| kjosborne | 49,349 | <p><strong>Why is <code>ImageAlign</code> breaking?</strong></p>
<p>The message gives an at least partly helpful hint about why <code>ImageAlign</code> is failing here. The source of the failure is that <code>ImageAlign</code> is first using <code>ImageCorrespondingPoints</code> to find a list of points, and then doing something similar to <code>FindGeometricTransform</code> to figure out what behavior describes the transformation.</p>
<p>This message means that in at least one case, your masks stored in <code>binarized</code> share only one keypoint with the first mask you are aligning to. These keypoints are found by the default (SURF) method. Since that is not enough to establish a transformation, the function is informing you that it cannot proceed rather than using a transformation that may well be nonsense. </p>
<p>When we get down to brass tacks, we can find one such case where we lack corresponding points:</p>
<pre><code>images = (ColorNegate@*ImageAdjust) /@ Import["http://community.wolfram.com//c/portal/getImageAttachment?filename=ezgif.com-optimize.gif&userId=942204"];
binarized = Binarize@DeleteSmallComponents[FillingTransform@Binarize[GaussianFilter[#, 10], 0.48]] & /@ images;
</code></pre>
<p>See:</p>
<pre><code>ImageCorrespondingPoints[binarized[[1]], binarized[[12]], TransformationClass -> "Rigid"]
</code></pre>
<p>returns:</p>
<pre><code>{{{304.516, 334.914}}, {{288.808, 198.547}}}
</code></pre>
<p>The way to get around this would be to use another keypoint method that does find enough pairs to find a good transformation.</p>
<p><strong>Potential workarounds:</strong></p>
<p>If you insist on using a keypoint method, you could try:</p>
<pre><code>aligned = ImageAlign[First@binarized, Rest[binarized], Background -> 0, TransformationClass -> "Rigid", Method -> {"Keypoints", "KAZE"}];
</code></pre>
<p>I would look at <code>ref/ImageKeypoints</code> to find a method I liked. Some of these more detailed keypoint finders can be more computationally expensive.</p>
<p>Since the binarization is destroying a lot of keypoints, and it doesn't look like you have a good marker for rotation of the cell, you may as well just use a translation to align the centroids:</p>
<pre><code>imDim = ImageDimensions[images[[1]]];
centroids = 1 /. ComponentMeasurements[#, "Centroid"] & /@ binarized;
translations = TranslationTransform[(# - centroids[[1]])/imDim] & /@ centroids;
centroidAligned = MapThread[ImageTransformation, {images, translations}];
ListAnimate[centroidAligned]
</code></pre>
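<p>Regarding the second question in the post (recovering the transformations so they can be applied to the original movie): <code>ImageAlign</code> does not return them, but you can compute them yourself with <code>FindGeometricTransform</code>. The following is an untested sketch; depending on the direction convention you may need to invert the transforms:</p>
<pre><code>(* transformation functions aligning each mask to the first mask *)
tfs = Last[FindGeometricTransform[First[binarized], #, TransformationClass -> "Rigid"]] & /@ binarized;

(* the explicit matrices, e.g. for exporting them the way StackReg does *)
matrices = TransformationMatrix /@ tfs;

(* apply the same transforms to the original (non-binarized) frames *)
alignedOriginals = MapThread[ImagePerspectiveTransformation[#1, #2, DataRange -> Full] &, {images, matrices}];
</code></pre>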
|
2,929,804 | <p>My attempt</p>
<p>First I wanted to show <span class="math-container">$<3,x^2+1>$</span> is maximal.
So, I supposed another maximal ideal <span class="math-container">$A$</span> which contains <span class="math-container">$<3,x^2+1>$</span> properly, and chose an element <span class="math-container">$a$</span> in <span class="math-container">$A \setminus <3,x^2+1>$</span> to deduce that <span class="math-container">$A$</span> must contain <span class="math-container">$1$</span>. But this approach can be applied only in the case of <span class="math-container">$\Bbb R[x]$</span>.</p>
<p>Second, after the failure of the above approach, I want to show that <span class="math-container">$\Bbb Z[x]/<3,x^2+1>$</span> is isomorphic to <span class="math-container">$Z_5[x]$</span>, because I already know that <span class="math-container">$Z[i]/<2-i>$</span> is isomorphic to <span class="math-container">$Z_5[x]$</span>. So I first narrowed the given ring down to the case of <span class="math-container">$Z[x]/<x^2+1>$</span>. As a result of the investigation, all elements of that ring can be expressed as <span class="math-container">$a+bi+<x^2+1>$</span>, assuming <span class="math-container">$x=i$</span>. Thus I can conclude that <span class="math-container">$Z[x]/<x^2+1>$</span> is isomorphic to <span class="math-container">$Z_5[x]$</span>, which is a field. But I cannot connect this result with our given ring. </p>
<p>I would like some advice. I have not yet learned about polynomial rings in connection with irreducibility(?), so please give me a hint that uses only basic properties of fields.</p>
| Stahl | 62,500 | <p>In this case, it is possible to simply compute the quotient ring. Below are some hints to guide you, but first let me make a remark.</p>
<p>In <span class="math-container">$\Bbb Z[x]/(3,x^2 + 1),$</span> the class of <span class="math-container">$3$</span> (i.e., <span class="math-container">$3 + (3, x^2 + 1)\in\Bbb Z[x]/(3,x^2 + 1)$</span>) is equal to <span class="math-container">$0,$</span> because <span class="math-container">$$3 - 0 = 3\in (3,x^2 + 1).$$</span> Thus, <span class="math-container">$\Bbb Z[x]/(3,x^2 + 1)$</span> cannot be isomorphic to <span class="math-container">$\Bbb Z_5[x],$</span> as <span class="math-container">$3\neq 0$</span> in <span class="math-container">$\Bbb Z_5[x].$</span></p>
<p><strong>Hint 1:</strong></p>
<blockquote class="spoiler">
<p> Use the following fact: if <span class="math-container">$R$</span> is a ring and <span class="math-container">$a,b\in R,$</span> <span class="math-container">$$R/(a,b)\cong (R/(a))/(b)\cong (R/(b))/(a).$$</span> That is, if you have an ideal generated by multiple elements, you can quotient by those elements in any order to compute the quotient ring.</p>
</blockquote>
<p><strong>Hint 2:</strong></p>
<blockquote class="spoiler">
<p> Here's another useful fact: Let <span class="math-container">$R$</span> be a ring and let <span class="math-container">$a\in R$</span> be some element. If <span class="math-container">$S$</span> is the quotient ring <span class="math-container">$S = R/(a),$</span> then <span class="math-container">$R[x]/(a)\cong S[x].$</span></p>
</blockquote>
<p><strong>Hint 3:</strong></p>
<blockquote class="spoiler">
<p> Fact: if <span class="math-container">$k$</span> is a field, and <span class="math-container">$f\in k[x],$</span> then <span class="math-container">$k[x]/(f)$</span> is a field if and only if <span class="math-container">$f$</span> is irreducible.</p>
</blockquote>
<p><strong>Hint 4:</strong></p>
<blockquote class="spoiler">
<p> One last fact: if <span class="math-container">$k$</span> is a field and <span class="math-container">$f\in k[x]$</span> has degree less than or equal to <span class="math-container">$3,$</span> <span class="math-container">$f$</span> is reducible over <span class="math-container">$k$</span> if and only if <span class="math-container">$f$</span> has a root in <span class="math-container">$k. $</span> That is, if <span class="math-container">$\deg f\leq 3,$</span> then <span class="math-container">$f$</span> is reducible over <span class="math-container">$k$</span> if and only if there exists <span class="math-container">$a\in k$</span> such that <span class="math-container">$f(a) = 0.$</span></p>
</blockquote>
|
125,317 | <p>Consider a (locally trivial) fiber bundle $F\to E\overset{\pi}{\to} B$, where $F$ is the fiber, $E$ the total space and $B$ the base space. If $F$ and $B$ are compact, must $E$ be compact? </p>
<p>This certainly holds if the bundle is trivial (i.e. $E\cong B\times F$), as a consequence of Tychonoff's theorem. It also holds in all the cases I can think of, such as where $E$ is the Möbius strip, Klein bottle, a covering space and in the more complicated case of $O(n)\to O(n+1)\to \mathbb S^n$ which prompted me to consider this question. I am fairly certain it holds in the somewhat more general case where $F,B$ are closed manifolds. However, I can't seem to find a proof of the general statement. My chief difficulty lies in gluing together the local homeomorphisms to transfer finite covers of $B\times F$ to $E$. Any insight would be appreciated.</p>
| savick01 | 18,493 | <p>I don't get where the problem is. Am I missing something?</p>
<p>Each point $b\in B$ has a neighbourhood $N_b$ such that the bundle over $N_b$ is trivial. Choose a smaller closed (thus compact) neighbourhood $C_b$ (we need some weak assumption here like Hausdorffness of $B$). The bundle over $C_b$ is homeomorphic to $C_b\times F$, thus compact.</p>
<p>Note that $\{\mathrm{int}(C_b)\}_{b\in B}$ is an open cover of $B$ and by compactness has a finite subcover indexed by $(b_i)_{i=1}^n$. Consequently $E$ is a finite union of compact sets, thus compact:
$$E = \bigcup_{1\leq i \leq n} \pi^{-1}(C_{b_i}).$$</p>
|
2,483,611 | <p>I believe the answer is 13 * $13\choose4$ * $48\choose9$.</p>
<p>There are $13\choose4$ ways to draw 4 of the same cards, and we multiply by 13 for each possible rank (A, 2, 3, ..., K). Then there are $48\choose9$ ways to choose the remaining cards.</p>
<p>One thing I am not certain of is whether this accounts for the possibility of having two 4-of-a-kinds or three 4-of-a-kinds, but I believe it does, since having two or three means you have one.</p>
| Macavity | 58,320 | <p>For a calculus-free approach, we can complete your analysis. As you noted, the roots of $Q(y)$ are real if
$$3m^2+16m + 16 \geqslant 0 \implies m \in (-\infty, -4] \cup [-\tfrac43, \infty)$$</p>
<p>Then for them to be non-negative, we just need $m+2 \geqslant 0 \iff m \geqslant -2$ as then
$$2(m+2) \geqslant \sqrt{3m^2+16m+16} \iff (2(m+2))^2 \geqslant 3m^2+16m+16 \iff m^2 \geqslant 0$$</p>
<p>Thus all the roots are real and non-negative iff $m \geqslant -\tfrac43$, the lowest three integers possible then are $m=-1, 0, 1$...</p>
|
2,524,487 | <p>I cannot find a way to prove that the abelian group ($\mathbb{Q}_{>0}$,*) is a free abelian group with countable basis. Is it even true?</p>
| Dietrich Burde | 83,966 | <p>Yes, the positive rationals are the free abelian group whose basis consists of the primes:
$$
(\mathbb{Q}_{>0},\cdot) \cong \bigoplus_{p\in \mathbb{P}} \mathbb Z.
$$
This follows more or less directly from the Fundamental Theorem of Arithmetic.
Clearly the primes $\mathbb{P}$ are countable (and the whole group is countable anyway).</p>
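<p>To spell out what "follows from the Fundamental Theorem of Arithmetic" means here: every positive rational has a unique factorization</p>
<p>$$q=\prod_{p\in \mathbb{P}}p^{a_p(q)},\qquad a_p(q)\in\mathbb Z,\ \text{all but finitely many equal to } 0,$$</p>
<p>and the map $q\mapsto (a_p(q))_{p\in\mathbb P}$ is well defined by uniqueness of the factorization, is a homomorphism because exponents add under multiplication, and is bijective, which gives the stated isomorphism.</p>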
|
2,524,487 | <p>I cannot find a way to prove that the abelian group ($\mathbb{Q}_{>0}$,*) is a free abelian group with countable basis. Is it even true?</p>
| Shri | 442,962 | <p>I think it is a free abelian group which is generated by prime numbers (basis elements).</p>
|
1,903,717 | <p>This is actually from an Analysis text, but I feel it's a set theory question.</p>
<p>Proposition: for every rational number $\epsilon > 0$ there exists a non-negative rational number x s.t. $x^2 < 2 < (x+ \epsilon )^2 $</p>
<p>It provides a proof that I'm having trouble understanding.</p>
<p>Proof: let $ \epsilon >0$ be rational. Suppose for the sake of contradiction that there is no non-negative rational number x such that $x^2 < 2 < (x+ \epsilon )^2 $ holds.</p>
<p>i.e. whenever $ x^2 < 2$, the statement $(x+ \epsilon )^2 <2 $ holds.</p>
<p>It states by a previous proposition that $(x+ \epsilon )^2 $ cannot equal 2.</p>
<p><strong>Then it states "Since $0^2 < 2$ we thus have $ \epsilon ^2 < 2$ which then implies that $ (2\epsilon )^2 < 2$ and indeed a simple induction shows that $ (n\epsilon )^2 < 2$ for every natural number n." Which is what i cant understand.</strong></p>
<p>The rest of the proof is strange as well. I'm fine with the statement $ \epsilon ^2 < 2$, as it clearly follows that $ \epsilon ^2 < (x+ \epsilon )^2 $, since x is positive and $ \epsilon ^2$ appears on both sides of the expression.</p>
<p>If I were proving it, I would rewrite </p>
<p>$ \epsilon ^2 = n $ $ \epsilon' $ s.t. $n \in \mathbb {N} $ and $ \epsilon' \in \mathbb {Q} $ </p>
<p>I would then use the Archimedean property to show this is a contradiction. </p>
<p>If anyone can follow/explain what the bold text means, I would greatly appreciate it.</p>
| Kaligule | 182,303 | <p>We assumed that if $x^2<2$ then $(x+ϵ)^2<2$ (*).</p>
<p>Let's start with $x=0$: $x^2 = 0^2 = 0 < 2$, so we conclude that $(x+ϵ)^2 = ϵ^2<2$.</p>
<p>Now we have learned that $ϵ^2<2$, so we can use (*) again, with $x=ϵ$. We conclude that $(x+ϵ)^2=(ϵ+ϵ)^2=(2ϵ)^2<2$</p>
<p>Now we have learned that $(2ϵ)^2<2$, so we can use (*) again ... and again .. and again.</p>
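<p>To finish the argument the book is sketching (this goes beyond the bold text that was asked about): by the Archimedean property there is a natural number $n$ with $n\epsilon>2$, and then</p>
<p>$$(n\epsilon)^2>2^2=4>2,$$</p>
<p>contradicting $(n\epsilon)^2<2$. So the assumption fails, and some non-negative rational $x$ with $x^2<2<(x+\epsilon)^2$ must exist.</p>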
|
2,581,135 | <blockquote>
<p>Find: $\displaystyle\lim_{x\to\infty} \dfrac{\sqrt{x}}{\sqrt{x+\sqrt{x+\sqrt{x}}}}.$</p>
</blockquote>
<p>Question from a book on preparation for math contests. All the tricks I know to solve this limit are not working. Wolfram Alpha struggled to find $1$ as the solution, but the solution process presented is not understandable. The answer is $1$.</p>
<p>Hints and solutions are appreciated. Sorry if this is a duplicate.</p>
| omegadot | 128,913 | <p>If you factor out a $\sqrt{x}$ term from the denominator one has
\begin{align*}
\lim_{x \to \infty} \frac{\sqrt{x}}{\sqrt{x + \sqrt{x + \sqrt{x}}}} &= \lim_{x \to \infty} \frac{\sqrt{x}}{\sqrt{x} \sqrt{1 + \frac{1}{x} \sqrt{x + \sqrt{x}}}}\\
&= \lim_{x \to \infty} \frac{1}{\sqrt{1 + \sqrt{\frac{1}{x} + \frac{1}{x^{3/2}}}}}\\
&= 1.
\end{align*}</p>
|
3,842,653 | <p>I'm working on a problem for a class and I'm a bit confused about what exactly the question is asking. The question is as follows:</p>
<p>Suppose <span class="math-container">$(x_n)$</span> is a sequence in <span class="math-container">$\Bbb R$</span>. Prove that <span class="math-container">$\bigl\{a \in \Bbb R : \text{there is a subsequence }(x_{n_{k}}) \text{ with } (x_{n_{k}}) \to a \bigr\}= \bigcap^{\infty}_{n=1} \overline{\{ x_n,x_{n+1},x_{n+2},...\}}$</span></p>
<p>I know that this involves dealing with the closure of a set which we defined as,</p>
<p>Suppose <span class="math-container">$A\subseteq R$</span>, we define the closure of <span class="math-container">$A$</span> denoted by <span class="math-container">$\overline{A}$</span> by <span class="math-container">$\overline{A}=\{x \in R: \exists \ (a_n)$</span> in <span class="math-container">$A$</span> such that <span class="math-container">$(a_n) \to x \}$</span>.</p>
<p>Thanks for the help in advance as I'm confused on exactly what to show.</p>
<p>Edit note: Had to adjust as R is the set of the reals.</p>
| Oliver Díaz | 121,671 | <p>Here is a sketch of a solution. I leave some details (why?) for the OP.</p>
<ul>
<li><p>If <span class="math-container">$a\in \bigcap^{\infty}_{n=1} \overline{\{ x_n,x_{n+1},x_{n+2},...\}}$</span>, there is <span class="math-container">$n_1\geq1$</span> such that <span class="math-container">$|x_{n_1}-a|<\frac12$</span> (<strong>why?</strong>). Then, by induction, suppose we have chosen <span class="math-container">$x_{n_1},\ldots,x_{n_k}$</span> with <span class="math-container">$n_1\leq\ldots\leq n_k$</span> and <span class="math-container">$n_k\geq k$</span> such that
<span class="math-container">$$ |x_{n_j}-a|<\frac{1}{2^j},\qquad j=1,\ldots,k$$</span>
Then, for <span class="math-container">$k+1$</span>, one can choose <span class="math-container">$n_{k+1}\geq\max(k+1,n_k)$</span> (<strong>why?</strong>) such that
<span class="math-container">$$|x_{n_{k+1}}-a|<\frac{1}{2^{k+1}}$$</span><br />
We have constructed a subsequence <span class="math-container">$x_{n_k}\xrightarrow{k\rightarrow\infty}a$</span>.</p>
</li>
<li><p>On the other direction, suppose <span class="math-container">$x_{n_k}\xrightarrow{k\rightarrow\infty}a$</span>. Then, for any <span class="math-container">$\varepsilon>0$</span>, there is <span class="math-container">$K$</span> such that <span class="math-container">$k\geq K$</span> implies <span class="math-container">$|x_{n_k}-a|<\varepsilon$</span>. As <span class="math-container">$n_k\geq k$</span>
<span class="math-container">$$B(a;\varepsilon)\cap\{x_\ell:\ell\geq k\}\neq\emptyset,\qquad \forall k\tag{why?}$$</span>
As this holds for any <span class="math-container">$\varepsilon>0$</span>, this means that <span class="math-container">$a\in\overline{\{x_\ell:\ell\geq k\}}$</span> for all <span class="math-container">$k\in\mathbb{N}$</span></p>
</li>
</ul>
|
4,520,506 | <p>I know that <span class="math-container">$ x \gt 0 $</span> because of logarithm precondition, and I can see that <span class="math-container">$ x \neq 1 $</span> because otherwise it would lead to <span class="math-container">$ 0^0$</span> which is problematic, but when I checked the graph of the function I have discovered that it started from <span class="math-container">$ 1 $</span> on <span class="math-container">$ x $</span> axis and <span class="math-container">$ (0,1) $</span> interval is not considered.</p>
<p>In sum, I thought that the domain should have been <span class="math-container">$ (0, \infty) - \{1\}$</span> but it seems to be <span class="math-container">$(1,\infty)$</span> and I can't figure out where I am wrong.</p>
| Robert Israel | 8,508 | <p>It goes up to <span class="math-container">$n$</span>, not <span class="math-container">$k+n$</span>.
In this case the product has only one factor namely <span class="math-container">$k+1 = 3$</span>.</p>
|
636,730 | <p>Let $G$ be a group of infinite order . Does there exist an element $x$ belonging to $G$ such that $x$ is not equal to $e$ and the order of $x$ is finite?</p>
| preferred_anon | 27,150 | <p>I can do better: consider the following set under the operation of multiplication:
$$\lbrace x=e^{i\pi t},t \in \mathbb{Q} \rbrace$$
The set is infinite, but every element has finite order (namely, if $t=a/b$ in lowest terms, then $x^{2b}=e^{2\pi i a}=1$, so the order of $x$ divides $2b$).</p>
|
1,618,411 | <p>I'm learning the fundamentals of <em>discrete mathematics</em>, and I have been requested to solve this problem:</p>
<p>According to the set of natural numbers</p>
<p>$$
\mathbb{N} = \{0, 1, 2, 3, \ldots\}
$$</p>
<p>write a definition for the less than relation.</p>
<p>I wrote this:</p>
<p>$a < b$ if $a + 1 < b + 1$</p>
<p>Is it correct?</p>
| Zhanxiong | 192,408 | <p>For this particular set, you can define $<$ as: $a < b$ if $b - a \in \mathbb{N}$ and $b - a \neq 0$.</p>
|
316,699 | <p>If $A,B,C$ are sets, then we all know that $A\setminus (B\cap C)= (A\setminus B)\cup (A\setminus C)$. So by induction
$$A\setminus\bigcap_{i=1}^nB_i=\bigcup_{i=1}^n (A\setminus B_i)$$
for all $n\in\mathbb N$.</p>
<p>Now if $I$ is an uncountable set and $\{B_i\}_{i\in I}$ is a family of sets, is it true that:
$$A\setminus\bigcap_{i\in I}B_i=\bigcup_{i\in I} (A\setminus B_i)\,\,\,?$$</p>
<p>If the answer to the above question will be "NO", what can we say if $I$ is countable?</p>
| Trevor Wilson | 39,378 | <p>Yes. If $x$ is in the left hand side then it's in $A$ but not in $\bigcap_{i\in I} B_i$, so it's missing from one of the $B_i$'s. Therefore it's in $A \setminus B_i$ for some $B_i$, and it's in the right hand side. The other direction is similar.</p>
|
4,474,806 | <p>I use the following method to calculate <span class="math-container">$b$</span>, which is <span class="math-container">$a$</span> <strong>increased</strong> by <span class="math-container">$x$</span> percent:</p>
<p><span class="math-container">$\begin{align}
a = 200
\end{align}$</span></p>
<p><span class="math-container">$\begin{align}
x = 5\% \text{ (represented as } \frac{5}{100} = 0.05 \text{)}
\end{align}$</span></p>
<p><span class="math-container">$\begin{align}
b &= a \cdot (1 + x) \\
&= 200 \cdot (1 + 0.05) \\
&= 200 \cdot 1.05 \\
&= 210
\end{align}$</span></p>
<p>Now I want to calculate <span class="math-container">$c$</span>, which is also <span class="math-container">$a$</span> but <strong>decreased</strong> by <span class="math-container">$x$</span> percent.</p>
<p>My instinct is to preserve the method, but to use division instead of multiplication (being the inverse operation):</p>
<p><span class="math-container">$
\begin{align}
c &= \frac{a}{1 + x} \\
&= \frac{200}{1 + 0.05} \\
&= \frac{200}{1.05} \\
&= 190.476190476
\end{align}
$</span></p>
<p>The result looks a bit off? But also interesting as I can multiply it by the percent and I get back the initial value (<span class="math-container">$190.476190476 \cdot 1.05 = 200$</span>).</p>
<p>I think the correct result should be 190 (without any decimal), using:</p>
<p><span class="math-container">$
\begin{align}
c &= a \cdot (1 - x) \\
&= 200 \cdot (1 - 0.05) \\
&= 200 \cdot 0.95 \\
&= 190
\end{align}
$</span></p>
<p>What's the difference between them? What I'm actually calculating?</p>
| Ezra | 1,068,060 | <p>The difference between them is with respect to what variable the percentage is being taken. In your first calculation, when you divide, you are saying "<span class="math-container">$a$</span> is <span class="math-container">$x\%$</span> more than <span class="math-container">$c$</span>", while on the latter, you are saying "<span class="math-container">$c$</span> is <span class="math-container">$x\%$</span> less than <span class="math-container">$a$</span>". These are not the same! In the first case, the percentage is with respect to <span class="math-container">$c$</span> (hence why you multiply <span class="math-container">$c$</span> and <span class="math-container">$1+x$</span>). In the second, the percentage is with respect to <span class="math-container">$a$</span> (hence why you multiply <span class="math-container">$a$</span> and <span class="math-container">$1-x$</span>). Hope this is clear.</p>
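<p>To make the two readings concrete, here is a minimal Python sketch of the three computations from the question (variable names are mine):</p>
<pre><code>a, x = 200, 0.05

b  = a * (1 + x)   # "b is a increased by x percent" (percentage taken of a)   -> about 210
c1 = a / (1 + x)   # "a is c1 increased by x percent" (percentage taken of c1) -> about 190.48
c2 = a * (1 - x)   # "c2 is a decreased by x percent" (percentage taken of a)  -> about 190

print(b, c1, c2)
print(c1 * (1 + x))   # recovers a (about 200), as observed in the question
</code></pre>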
|
3,964,237 | <p>I am trying to understand the meaning of this exercise question in a logic textbook.</p>
<blockquote>
<p>For each of the following statement forms,find a statement form that is logically equivalent to its negation and in which negation signs apply only to statement letters.<br/>
i. <span class="math-container">$A \rightarrow ( B \leftrightarrow \lnot C )$</span> <br/>
ii. <span class="math-container">$\lnot A \lor ( B \rightarrow C )$</span> <br/>
iii. <span class="math-container">$A \land (B \lor \lnot C)$</span> <br/></p>
</blockquote>
<p>But I keep failing. I don't understand the part "<em>find a statement form that is logically equivalent to its negation and in which negation signs apply only to statement letters</em>". So I am unable to solve it.Can someone tell me what that means?</p>
| Brian M. Scott | 12,042 | <p>In each problem you have a statement form <span class="math-container">$\varphi$</span>; you are to start with its negation <span class="math-container">$\neg\varphi$</span> and use logical equivalences to transform that into a statement form in which no compound expression is negated: the only expressions that are preceded by a negation are individual statement letters. As <strong>Couchy</strong> said in the comments, you cannot have <span class="math-container">$\neg(A\land B)$</span>, because in it the compound expression <span class="math-container">$A\land B$</span> is negated, but you can apply De Morgan’s laws to convert that to <span class="math-container">$\neg A\lor\neg B$</span>, in which the only things negated are the statement letters <span class="math-container">$A$</span> and <span class="math-container">$B$</span>. I’ll work the first one in gory detail as an example.</p>
<p><span class="math-container">$$\begin{align*}
\neg\big(A\to(B\leftrightarrow\neg C)\big)&\equiv\neg\big(\neg A\lor(B\leftrightarrow\neg C)\big)\\
&\equiv\neg(\neg A)\land\neg(B\leftrightarrow\neg C)\\
&\equiv A\land\neg(B\leftrightarrow\neg C)\\
&\equiv A\land\neg\big((B\land\neg C)\lor(\neg B\land \neg(\neg C))\big)\\
&\equiv A\land\neg\big((B\land\neg C)\lor(\neg B\land C)\big)\\
&\equiv A\land\big(\neg(B\land\neg C)\land\neg(\neg B\land C)\big)\\
&\equiv A\land\neg(B\land\neg C)\land\neg(\neg B\land C)\\
&\equiv A\land\big(\neg B\lor\neg(\neg C)\big)\land\big(\neg(\neg B)\lor\neg C\big)\\
&\equiv A\land(\neg B\lor C)\land(B\lor\neg C)
\end{align*}$$</span></p>
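<p>If you want to double-check the equivalence mechanically, here is a quick truth-table check as a Python sketch (the lambda names are mine):</p>
<pre><code>from itertools import product

# the negated statement: not (A -> (B <-> not C))
lhs = lambda A, B, C: not ((not A) or (B == (not C)))
# the final form: A and (not B or C) and (B or not C)
rhs = lambda A, B, C: A and ((not B) or C) and (B or (not C))

print(all(lhs(*v) == rhs(*v) for v in product([False, True], repeat=3)))  # True
</code></pre>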
|
3,289,658 | <p>I was solving A-level Further Mathematics paper and I didn't quite understand how to solve the question.</p>
<blockquote>
<p>Question:<a href="https://i.stack.imgur.com/C3UUX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C3UUX.png" alt="Question"></a></p>
</blockquote>
<p>I know the formula <span class="math-container">$\sum _{r=1}^n(u_{r})=f\left(1\right)-f\left(n+1\right)$</span> but I can't seem to figure out how to use it.</p>
<blockquote>
<p>This was the answer provided. It isn't very clear on how to solve the problem!
Answer:
<a href="https://i.stack.imgur.com/rwjsy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rwjsy.png" alt="Answer Provided"></a></p>
</blockquote>
<p>Could you provide me an explanation on how to approach this type of problem. Thanks.</p>
| Bartek | 671,751 | <p>Just sum the given identity for all <span class="math-container">$n=1,2,...,N$</span>. LHS will telescope and on the RHS you will obtain six times your desired sum plus five times the sum of cubes (which is known) and <span class="math-container">$\frac{3}{8}$</span> of the sum of the first powers which is also known. So you can solve for the sum you are looking for.</p>
|
3,289,658 | <p>I was solving A-level Further Mathematics paper and I didn't quite understand how to solve the question.</p>
<blockquote>
<p>Question:<a href="https://i.stack.imgur.com/C3UUX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C3UUX.png" alt="Question"></a></p>
</blockquote>
<p>I know the formula <span class="math-container">$\sum _{r=1}^n(u_{r})=f\left(1\right)-f\left(n+1\right)$</span> but I can't seem to figure out how to use it.</p>
<blockquote>
<p>This was the answer provided. It isn't very clear on how to solve the problem!
Answer:
<a href="https://i.stack.imgur.com/rwjsy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rwjsy.png" alt="Answer Provided"></a></p>
</blockquote>
<p>Could you provide me an explanation on how to approach this type of problem. Thanks.</p>
| Cornman | 439,383 | <p>With the given identity </p>
<p><span class="math-container">$(n+\frac12)^6-(n-\frac12)^6=6n^5+5n^3+\frac38n\Leftrightarrow n^5=\frac16\left((n-\frac12)^6-(n+\frac12)^6+5n^3+\frac38n\right)$</span></p>
<p>We get:</p>
<p><span class="math-container">$\sum_{n=1}^N n^5=\sum_{n=1}^N \frac16\left((n-\frac12)^6-(n+\frac12)^6+5n^3+\frac38n\right)$</span></p>
<p>With basic sum manipulation we derive:</p>
<p><span class="math-container">$=\frac16\sum_{n=1}^N (n-\frac12)^6-(n+\frac12)^6+\frac56\sum_{n=1}^N n^3+\frac1{16}\sum_{n=1}^N n$</span></p>
<p>Note that the first sum is telescoping; it equals <span class="math-container">$(N+\frac12)^6-(\frac12)^6$</span>.
The last sum is the well known Gaussian sum <span class="math-container">$\sum_{n=1}^N n=\frac{N(N+1)}{2}$</span></p>
<p>Depending on what you know, it is also known, that</p>
<p><span class="math-container">$\sum_{n=1}^N n^3=\frac{N^2(N+1)^2}{4}$</span></p>
<p>If you know this result, we are kinda done: you just have to calculate the telescoping sum, which is not difficult, and substitute the other sums with the known formulas.
If you do not know these formulas, you might prove them first.</p>
<p>This can be done by induction fairly easily, but that would somewhat defeat the purpose of the question, in my opinion.</p>
<p>You can also use the same trick to calculate <span class="math-container">$\sum_{n=1}^N n^3$</span>, by writing it as a telescoping sum.
Then you just need to know <span class="math-container">$\sum_{n=1}^N n^2$</span> and <span class="math-container">$\sum_{n=1}^N n$</span>.</p>
<p>There are many ways, it depends on your background.</p>
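<p>As a verification sketch (not needed for the argument), here is an exact check in Python, using rational arithmetic, that the summed and telescoped identity reproduces <span class="math-container">$\sum_{n=1}^N n^5$</span>:</p>
<pre><code>from fractions import Fraction as F

def check(N):
    lhs = sum(n**5 for n in range(1, N + 1))
    tele  = (F(N) + F(1, 2))**6 - F(1, 2)**6   # telescoped sum of (n+1/2)^6 - (n-1/2)^6
    cubes = F(N**2 * (N + 1)**2, 4)            # sum of n^3
    lin   = F(N * (N + 1), 2)                  # sum of n
    rhs = F(1, 6) * tele - F(5, 6) * cubes - F(1, 16) * lin
    return lhs == rhs

print(all(check(N) for N in range(1, 30)))  # True
</code></pre>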
|
2,473,951 | <p>The definition I have of a convex function $f: \mathbb{R} \rightarrow \mathbb{R}$ is that for every $x, y \in \mathbb{R}$ and every $\lambda \in [0, 1]$,
$$
f(\lambda x + (1-\lambda )y) \leq \lambda f(x) + (1- \lambda )f(y).$$</p>
<p>By proving that slopes increase I mean that for $x \leq y \leq z$, we get $$\frac{f(y) - f(x)}{y-x} \leq \frac{f(z) - f(x)}{z-x} \leq \frac{f(z) - f(y)}{z-y}. $$</p>
<p>Is there a simple proof of this which doesn't assume that such a convex function has a non-negative second derivative? It's difficult to see how the definition gets us here.</p>
| zhw. | 228,045 | <p>I like to think of this geometrically. The definition of convexity implies $(y,f(y))$ lies on or below the line $L$ through $(x,f(x))$ and $(z,f(z)).$ Suppose it's below. Then the line through through $(x,f(x))$ and $(y,f(y))$ plainly has slope less than the slope of $L.$ And then to move back up to $(z,f(z)),$ the line through $(y,f(y))$ and $(z,f(z)$ plainly has slope greater than the slope of $L.$ Now interpret all of this with appropriate mind-numbing formulae and algebra.</p>
|
1,047,544 | <p>I'm doing some research and I'm trying to compute a closed form for $ \mathbb{E}[ X \mid X > Y] $ where $X$, $Y$ are independent normal (but not identical) random variables. Is this known?</p>
| user193702 | 193,702 | <p>Hint: you can directly do the double integral over the region $x>y$ for $xf(x)$; it is not so hard, as I remember.</p>
|
1,047,544 | <p>I'm doing some research and I'm trying to compute a closed form for $ \mathbb{E}[ X \mid X > Y] $ where $X$, $Y$ are independent normal (but not identical) random variables. Is this known?</p>
| Did | 6,179 | <p>One can reduce the problem to the computation of $E(U\mid U\gt aV-b)$ where $(U,V)$ are i.i.d. standard normal. Let $\varphi$ denote the standard normal PDF, then $u\varphi(u)=-\varphi'(u)$ hence $$E(U;U\gt aV-b)=\int_\mathbb R\varphi(v)\mathrm dv\int_{av-b}^\infty u\varphi(u)\mathrm du=\int_\mathbb R\varphi(v)\varphi(av-b)\mathrm dv.$$ Computing the product $\varphi(v)\varphi(av-b)$ and using the change of variable $w=cv$ with $$c=\sqrt{1+a^2},$$ one gets $$E(U;U\gt aV-b)=\varphi(b/c)/c.$$ On the other hand, $U-aV$ is centered normal with variance $c^2$ hence $$P(U\gt aV-b)=\Phi(b/c),$$ where $\Phi$ denotes the standard normal CDF. Thus, $$E(U\mid U\gt aV-b)=\frac{\varphi(b/c)}{c\Phi(b/c)}.$$ For general independent normal random variables $X$ and $Y$, note that $$E(X\mid X\gt Y)=\mu_X+\sigma_XE(U\mid U\gt aV-b),$$ with $$a=\sigma_Y/\sigma_X,\qquad b=(\mu_X-\mu_Y)/\sigma_X,$$ that is,
$$
E(X\mid X\gt Y)=\mu_X+\tau\sigma_X\frac{\varphi(\nu)}{\Phi\left(\nu\right)},\qquad\nu=\frac{\mu_X-\mu_Y}{\sqrt{\sigma_X^2+\sigma_Y^2}},\qquad\tau=\frac{\sigma_X}{\sqrt{\sigma_X^2+\sigma_Y^2}}.$$</p>
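<p>For a Monte Carlo sanity check of the closed form, here is a minimal Python sketch (it assumes <code>numpy</code> and <code>scipy</code> are available; the parameter values are arbitrary):</p>
<pre><code>import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mx, my, sx, sy = 1.0, 0.5, 2.0, 1.5

X = rng.normal(mx, sx, 2_000_000)
Y = rng.normal(my, sy, 2_000_000)
mc = X[X > Y].mean()                    # empirical E(X | X > Y)

s = np.hypot(sx, sy)                    # sqrt(sx^2 + sy^2)
nu, tau = (mx - my) / s, sx / s
closed = mx + tau * sx * norm.pdf(nu) / norm.cdf(nu)

print(mc, closed)   # the two values should agree to about two or three decimal places
</code></pre>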
|
4,269,414 | <p>In <a href="https://math.stackexchange.com/questions/550764/quotient-space-of-s1-is-homeomorphic-to-s1">this post</a> someone suggested:</p>
<p>"<span class="math-container">$z\mapsto z^2$</span>"</p>
<p>where both <span class="math-container">$z$</span> and <span class="math-container">$z^2$</span> are in <span class="math-container">$\mathbb{S^1}$</span> (unless I missed something).</p>
<p>Does this mean that they suggested: <span class="math-container">$z^2 = (z_1^2, z_2^2)$</span>?</p>
<p>If not, what is it?</p>
| ryang | 21,813 | <p><span class="math-container">$$z\in S^1 \implies z\in\mathbb C\implies z^2\in\mathbb C;$$</span><span class="math-container">$$\\z\in\mathbb C\kern.6em\not\kern-.6em\implies z^2\in\mathbb C^2.$$</span></p>
|
3,468,336 | <p>Consider the multiplication operator <span class="math-container">$A \colon D(A) \to L^2(\mathbb R)$</span> defined by
<span class="math-container">$$\forall f \in D(A): \quad(Af)(x) = (1+\lvert x \rvert^2)f(x),$$</span>
where <span class="math-container">$$D(A) := \left \{f\in L^2(\mathbb R): (1+\lvert x \rvert^2)f\in L^2(\mathbb R), \int_\mathbb R f(x) \, dx = 0\right \}.$$</span>
I want to show that <span class="math-container">$A$</span> is densely defined, symmetric and closed, but not self-adjoint. However I am struggling at some places.</p>
<p><em>Proof.</em> Since <span class="math-container">$x\mapsto (1+\lvert x \rvert^2)$</span> is real-valued, <span class="math-container">$A$</span> is symmetric. Now let <span class="math-container">$(g_n)_{n\in \mathbb N} \subseteq L^2(\mathbb R)$</span> converging in <span class="math-container">$\lVert \cdot \rVert_{L^2(\mathbb R)}$</span> to some <span class="math-container">$g \in L^2(\mathbb R)$</span> and <span class="math-container">$Ag_n \to h \in L^2(\mathbb R)$</span>. Passing to a subsequence we see that
<span class="math-container">$$g_n(x) \to g(x), \quad (1+\lvert x \vert^2)g_n(x) \to h(x)$$</span>
for almost every <span class="math-container">$x\in \mathbb R$</span>, from which we see that <span class="math-container">$$g(x) = \frac{h(x)}{1+\lvert x \rvert^2}$$</span>
for almost every <span class="math-container">$x\in \mathbb R$</span>. Also since <span class="math-container">$g_n \to g$</span> in <span class="math-container">$L^1(\mathbb R)$</span> we have <span class="math-container">$$\int_{\mathbb R} g(x) \, dx = \lim_{n\to \infty} \int_{\mathbb R} g_n(x) \, dx = 0,$$</span>
so <span class="math-container">$g\in D(A), Ag = h$</span> and <span class="math-container">$A$</span> is closed. Now consider the function <span class="math-container">$f(x) = e^{-x^2}$</span>. Then letting <span class="math-container">$\psi_f(x) = (1+\lvert x \rvert^2) f(x)$</span> we have <span class="math-container">$ \psi_f \in L^2(\mathbb R)$</span> and for each <span class="math-container">$g\in D(A):$</span>
<span class="math-container">$$\langle f, Ag \rangle_{L^2} = \langle \psi_f, g \rangle_{L^2},$$</span>
so <span class="math-container">$f\in D(A^*)$</span>. But since <span class="math-container">$\int_{\mathbb R} f(x) \, dx = \pi, f \notin D(A)$</span> and hence <span class="math-container">$A$</span> is not self-adjoint.</p>
<p>Is this correct so far? Also, I struggled showing <span class="math-container">$A$</span> is densely defined. It was hard for me to construct a approximating sequence under the constraint that the integral should vanish, also I could not compute the orthogonal complement. Any hints?</p>
| Henry | 6,460 | <p>The exercise is correct. As Clarinetest says in a comment, your error seems to be with <span class="math-container">$\frac{1}{n^2}\mathbb{E}_\theta\left[\left(\sum_{i=1}^n X_i\right)^2\right]$</span> i.e. with <span class="math-container">$\mathbb E[\bar{X}^2]$</span>. </p>
<p>You should have <span class="math-container">$\mathbb E[X_1 \mid \lambda]=\lambda$</span> and <span class="math-container">$\operatorname{Var}(X_1 \mid \lambda) = \lambda$</span> for a Poisson distribution </p>
<p>so <span class="math-container">$\mathbb E[\bar{X} \mid \lambda]=\lambda$</span> and <span class="math-container">$\operatorname{Var}(\bar{X} \mid \lambda) = \frac1n\lambda$</span> </p>
<p>leading to <span class="math-container">$\mathbb E[\bar{X}^2 \mid \lambda]=\lambda^2+ \frac1n\lambda$</span></p>
<p>and thus <span class="math-container">$\mathbb E[\bar{X}^2 - \frac1n\bar{X} \mid \lambda]=\lambda^2$</span></p>
|
11,353 | <p>Thinking about the counterintuitive <em>Monty Hall Problem</em> (stick or switch?),
revisited in <a href="https://matheducators.stackexchange.com/a/11346/511">this ME question</a>,
I thought I would issue a challenge:</p>
<blockquote>
<p>Give in one (perhaps long) sentence a convincing explanation of why <em>switching</em> is twice as likely to lead to winning as <em>sticking</em>.</p>
</blockquote>
<p>Assume the game assumptions
are pre-stated and clear.</p>
<p>The probabilities are not even close, so there should be a convincing explanation after all
<a href="https://en.wikipedia.org/wiki/Monty_Hall_problem" rel="nofollow noreferrer">the discussion of this topic</a>,
even though "1,000 Ph.D."s got it wrong (in 1990 when it first went viral).</p>
| Benoît Kloeckner | 187 | <p>The sticking strategy does not use the additional information revealed by the presenter, and thus cannot have more chance of winning than if the presenter would open no door, which is 1/3.</p>
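<p>If a numerical illustration helps, here is a minimal simulation sketch in Python under the stated game assumptions (names are mine; the host's choice between two goat doors is made deterministically for simplicity, which does not affect the win rates):</p>
<pre><code>import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car  = random.randrange(3)
        pick = random.randrange(3)
        # host opens a door that is neither the contestant's pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stick :", play(False))   # ~ 0.333
print("switch:", play(True))    # ~ 0.667
</code></pre>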
|
3,270,725 | <p>Hello everyone I read on my notes this proposition: </p>
<p>Given a field <span class="math-container">$K$</span> and <span class="math-container">$R=K[T]$</span>, let <span class="math-container">$M$</span> be a (left) finitely generated <span class="math-container">$R$</span>-module; then <span class="math-container">$M$</span> is a torsion module if and only if <span class="math-container">$\dim_K(M)<\infty$</span>.</p>
<p>Since it has already been stated that <span class="math-container">$M$</span> is finitely generated, <span class="math-container">$\dim_K(M)$</span> must be something different from the number of generators of <span class="math-container">$M$</span>, then my question is: what does <span class="math-container">$\dim_K(M)$</span> mean?</p>
| TeM | 247,735 | <p>Given the function <span class="math-container">$f : D_f \to \mathbb{R}$</span> of law:
<span class="math-container">$$f(x) := \frac{1}{\sqrt{|\tan x| - \tan x}}\,,$$</span>
its natural domain is thus determinable:
<span class="math-container">$$
|\tan x| > \tan x
\; \; \; \; \; \; \Leftrightarrow \; \; \; \; \; \;
\begin{cases}
\tan x < 0 \\
- \tan x > \tan x
\end{cases}
\; \; \; \cup \; \; \;
\begin{cases}
\tan x \ge 0 \\
\tan x > \tan x
\end{cases}\,.
$$</span>
To conclude you.</p>
|
2,009,557 | <p>I am pretty sure this question has something to do with the Least Common Multiple. </p>
<ul>
<li>I was thinking that the proof was that every number either is or isn't a multiple of $3, 5$, and $8\left(3 + 5\right)$.</li>
<li>If it isn't a multiple of $3,5$, or $8$, great. You have nothing to prove.</li>
<li>But if it is divisible by one of them, I couldn't find a general proof that showed that it wouldn't be divisible by another one. Say $15$, it is divisible by $3$ and $5$, but not $8$.</li>
</ul>
| gav | 858,954 | <p>proof by induction</p>
<p>p(8): 3.(1)+5.(1)=8 holds</p>
<p>Assume p(k) holds => 3.a + 5.b = k</p>
<p>we must prove that p(k+1) holds.</p>
<p>p(k+1): 3x+5y = k+1 =>
3x+5y = 3a+5b+1 =>
3(x-a) + 5(y-b) = 1</p>
<p>thus it suffices to prove that 3z+5ω = 1 has an integer solution.
That is true because gcd(3,5) = 1 | 1 (1 divides 1)</p>
<p>Calculating gcd(3,5) gives:
5 = 1*3 + 2</p>
<p>3 = 1*2 + 1</p>
<p>2 = 2*1 + 0</p>
<p>Then applying the Extended Euclidean Algorithm:</p>
<p>1 = (1 * 3) + (-1 * 2)
= (-1 * 5) + (2 * 3)=>
z0=2, ω0=-1 =>
3.2+5.(-1) = 1=>
p(k+1) holds as well</p>
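<p>The statement is usually asked with non-negative integers a and b; here is a brute-force Python check of that stronger claim for a range of n (the function name is mine):</p>
<pre><code>def representable(n):
    # is n = 3a + 5b with a, b non-negative integers?
    return any((n - 5 * b) >= 0 and (n - 5 * b) % 3 == 0 for b in range(n // 5 + 1))

print(all(representable(n) for n in range(8, 200)))       # True
print([n for n in range(1, 8) if not representable(n)])   # [1, 2, 4, 7]
</code></pre>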
|
4,613,471 | <p>I would like to figure out the power series expansion of <span class="math-container">$f(z)=\frac{1}{(z+1)^2}$</span> around <span class="math-container">$z_0=1$</span>. Somehow expanding this into a geometric series would be the way to go I suppose, however, I fail to see how this can be rearranged in terms of (z-1). Maybe somebody could point me in the right direction? Thanks!</p>
| Quanto | 686,284 | <p>Substitute <span class="math-container">$u+\frac12=\frac{\sqrt3}2\tan t$</span>
<span class="math-container">\begin{aligned}
&\int{\frac{1}{u\sqrt{u^{2}+u+1}}}du\\
=& -\int{\frac{1}{\cos(t+\frac\pi3)}}dt
= -\int\frac{d\left[\sin(t+\frac\pi3) \right]}{1-\sin^2(t+\frac\pi3) }\\
=&\ \frac12\ln\frac{1-\sin(t+\frac\pi3)}{1+\sin(t+\frac\pi3)}
= \frac12\ln \frac{\sqrt{u^{2}+u+1}-\frac12u-1}{\sqrt{u^{2}+u+1}+\frac12u+1} \\
\end{aligned}</span></p>
|
298,481 | <p>Find the general solution of </p>
<p>$y'' + \dfrac{7}{x} y' + \dfrac{8}{x^2} y = 1, x > 0$</p>
<p>I don't even know how to solve the homogeneous version because it involves variables...</p>
<p>Does anyone know how to solve it?</p>
| Mhenni Benghorbal | 35,472 | <p>It is of Euler differential equation type. Here is a <a href="https://math.stackexchange.com/questions/285274/solving-this-second-order-ode">related problem</a>. You should have the following solution </p>
<p>$$ y(x) ={\frac {{\it c_2}}{{x}^{4}}}+{\frac {{\it c_1}}{{x}^{2}}}+\frac{{x}^{2}}{24}.$$</p>
|
3,334,816 | <p>The question first requires me to prove the identity <span class="math-container">$$\sqrt{\frac{1- \sin x}{1+ \sin x}}=\sec x- \tan x, -90^\circ < x < 90^\circ$$</span> I am able to prove this. The second part says “Explain why <span class="math-container">$x$</span> must be acute for the identity to be true”. I don’t see why <span class="math-container">$x$</span> must be acute for the identity to hold true. Wouldn’t it suffice for it to lie in either the 1st or 4th quadrant? Eg <span class="math-container">$x=330^\circ$</span>. </p>
| azif00 | 680,927 | <p>The first quadrant is <span class="math-container">$0^\circ \leq x \leq 90^\circ$</span> and the fourth quadrant is <span class="math-container">$270^\circ \leq x \leq 360^\circ$</span>, which is the same as <span class="math-container">$-90^\circ \leq x \leq 0^\circ$</span>. That is,
<span class="math-container">$$-90^\circ \leq x\leq 90^\circ$$</span>
represents both the first and the fourth quadrant.</p>
|
3,334,816 | <p>The question first requires me to prove the identity <span class="math-container">$$\sqrt{\frac{1- \sin x}{1+ \sin x}}=\sec x- \tan x, -90^\circ < x < 90^\circ$$</span> I am able to prove this. The second part says “Explain why <span class="math-container">$x$</span> must be acute for the identity to be true”. I don’t see why <span class="math-container">$x$</span> must be acute for the identity to hold true. Wouldn’t it suffice for it to lie in either the 1st or 4th quadrant? Eg <span class="math-container">$x=330^\circ$</span>. </p>
| Mick | 42,351 | <p>The important point is that “identity” is true only when <span class="math-container">$-90^\circ < x < 90^\circ$</span>.</p>
<p>In other words, it is not true when the terminal arm falls in QII and QIII.</p>
<p>Note that an angle lying in <span class="math-container">$-90^\circ < x < 0^\circ$</span> is also an acute angle, because the minus sign only indicates that the terminal arm is rotated in the clockwise direction. The resulting angle is still acute.</p>
<p>Added: To move the terminal arm through <span class="math-container">$330^\circ$</span> in the anticlockwise direction is the same as moving it through <span class="math-container">$30^\circ$</span> in the clockwise direction. </p>
|
1,514,388 | <p>$$ \lim_{x\to \infty} \left(\frac{1}{(x^2+x)\left(\ln\frac{x+1}{x}\right)^2}\right) $$</p>
<p>I know the answer is 1, but why does it tend to 1?
Can you manipulate the function and the "$\ln$" to make it obvious? </p>
<p>Much appreciated. </p>
| SchrodingersCat | 278,967 | <p>$$ \lim_{x\to \infty} \left(\frac{1}{(x^2+x)\left(\ln\frac{x+1}{x}\right)^2}\right) $$</p>
<p>Use $u=\frac{1}{x}$</p>
<p>So we have $$\lim_{u\to 0} \left(\frac{u^2}{(1+u)\left(\ln\left[1+u\right]\right)^2}\right)$$
$$=\frac{1}{\lim_{u\to 0}(1+u)}\cdot \frac{1}{\lim_{u\to 0}\frac{\left(\ln\left[1+u\right]\right)^2}{u^2}} $$
$$=\frac{1}{\lim_{u\to 0}(1+u)}\cdot \frac{1}{\left(\lim_{u\to 0}\frac{\ln(1+u)}{u}\right)^2}$$
$$= \frac{1}{1+0}\cdot \frac{1}{1}$$
$$=1$$</p>
|
2,625,763 | <p>I am having trouble with factoring $2x^3 + 21x^2 +27x$. The answer is $x(x+9)(2x+3)$ but not sure how that was done. Obviously I factored out the $x$ to get $x(2x^2+21x+27)$ then from there I am lost. I tried the AC method and grouping. Can someone show the steps? Thanks! </p>
| random | 513,275 | <p>Find the roots $x_1$ and $x_2$ of $2x^2+21x+27$ with the standard abc method.</p>
<p>Then $2x^2+21x+27=c(x-x_1)(x-x_2)$ with an easily computable $c$</p>
|
2,625,763 | <p>I am having trouble with factoring $2x^3 + 21x^2 +27x$. The answer is $x(x+9)(2x+3)$ but not sure how that was done. Obviously I factored out the $x$ to get $x(2x^2+21x+27)$ then from there I am lost. I tried the AC method and grouping. Can someone show the steps? Thanks! </p>
| fleablood | 280,126 | <p>1) </p>
<p>$2x^3 + 21x^2 +27x = x(2x^2 + 21x + 27) = x(2x(x + 9) + 3x + 27) =x(2x(x+9) + 3(x + 9)) = x(2x + 3)(x+9)$.</p>
<p>2) </p>
<p>$2x^3 + 21 x^2 + 27x = x(2x^2 + 21x + 27) = x*2*(x - a)(x-b)$ where $a,b$ are solutions to $2x^2 + 21x + 27=0$. i.e. $x = \frac {-21 \pm \sqrt{21^2 - 8*27}}{4} = \frac {-21 \pm \sqrt {225}}4 = \frac {-21 \pm 15}4 = -9, -\frac 32$ so </p>
<p>$2x^3 + 21 x^2 + 27x = x*2*(x + \frac 32)(x + 9) = x(2x +3)(x+9)$.</p>
<p>3)</p>
<p>$2x^3 + 21x^2 + 27x = (ax + b)(cx + d)(ex + f)$ and </p>
<p>$ace = 2$. Wolog, $a = 2$ and $c, e = 1$.</p>
<p>So $2x^3 + 21x^2 + 27x = (2x + b)(x + d)(x + f)$</p>
<p>$2f + 2d + b = 21$</p>
<p>$2df + bf + bd = 27$</p>
<p>$bdf = 0$</p>
<p>$b$ is odd so $b\ne 0$. Wolog $d= 0$.</p>
<p>$2f + b = 21$ </p>
<p>$bf = 27$</p>
<p>So $b=3; f= 9$</p>
<p>So $x(2x + 3)(x + 9)$.</p>
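<p>If you want a quick machine check of the factorization, here is a two-line <code>sympy</code> sketch (assuming it is installed):</p>
<pre><code>from sympy import symbols, factor, expand

x = symbols('x')
print(factor(2*x**3 + 21*x**2 + 27*x))   # x*(x + 9)*(2*x + 3), up to the order of the factors
print(expand(x*(2*x + 3)*(x + 9)))       # 2*x**3 + 21*x**2 + 27*x
</code></pre>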
|
3,473,944 | <p>So i have an object that moves in a straight line with initial velocity <span class="math-container">$v_0$</span> and starting position <span class="math-container">$x_0$</span>. I can give it constant acceleration <span class="math-container">$a$</span> over a fixed time interval <span class="math-container">$t$</span>. Now what i need is that when the time interval ends this object should stop exactly at a point <span class="math-container">$x_1$</span> with it's velocity being equal to <span class="math-container">$0$</span>. I need to find acceleration <span class="math-container">$a$</span> that i can give it in order for that to happen. </p>
<p>The way i see it we've got a system of equations:
<span class="math-container">$$ 0 = v_0 + a t $$</span>
<span class="math-container">$$ x_1 = x_0 + v_0 t + \frac {a t^2} {2} $$</span> </p>
<p>I have only one unknown, which is <span class="math-container">$a$</span>. </p>
<p>Let's get <span class="math-container">$a$</span> from the first equation:
<span class="math-container">$$ a = \frac { - v_0 } { t } $$</span> </p>
<p>And put it into the second one:
<span class="math-container">$$ x_1 = x_0 + v_0 t + \frac { - v_0 t } {2} $$</span> </p>
<p>Now let's express initial velocity (<span class="math-container">$v_0$</span>) from that equation:
<span class="math-container">$$ x_1 - x_0 = v_0 t + \frac { - v_0 t } {2} $$</span>
<span class="math-container">$$ \frac { x_1 - x_0 } { t } = v_0 + \frac { - v_0 } {2} $$</span>
<span class="math-container">$$ \frac { 2 ( x_1 - x_0 ) } { t } = 2 v_0 - v_0 $$</span>
<span class="math-container">$$ v_0 = \frac { 2 ( x_1 - x_0 ) } { t } $$</span> </p>
<p>And put it back into equation for acceleration:
<span class="math-container">$$ a = \frac { - v_0 } { t } $$</span>
<span class="math-container">$$ a = \frac { - \frac { 2 ( x_1 - x_0 ) } { t } } { t } $$</span>
<span class="math-container">$$ a = - \frac { 2 ( x_1 - x_0 ) } { t^2 } $$</span> </p>
<p>So we got an acceleration that i need to apply to an object over a time interval <span class="math-container">$t$</span>, so that it would stop at <span class="math-container">$x_1$</span> with velocity <span class="math-container">$0$</span>, right? </p>
<p>But it doesn't work! </p>
<p>Because it doesn't depend on initial velocity at all! So if my object is flying at 2 m/s then I would need to apply the same acceleration as if it were flying 100 m/s, or 1000 m/s? How come? </p>
<p>Where am I going wrong? This all seems mathematically sound... Am I setting the wrong premises? Interpreting results in the wrong way? </p>
<p>I really need it for my project, and I've been trying to solve this for weeks, studying different aspects of maths that might help me, but I just can't do it :( </p>
<p>But this looks so simple! And yet I just can't do it. 11 years of school seem so useless right now... </p>
<p>Help please </p>
| NiveaNutella | 695,512 | <p>As one of the comments suggested, it is not possible for all time intervals <span class="math-container">$t$</span>. I ignored the fixing of the time interval and solved the problem without that constraint. Hope it helps:</p>
<p>EDIT: If you want to still have a fixed <span class="math-container">$t_1$</span>, you can view the obtained equation for <span class="math-container">$t_1$</span> as a constraint, that must hold, for the problem to have a solution.</p>
<hr>
<p>Equations of motion of the object:</p>
<p><span class="math-container">$$a(t) = a_0$$</span>
<span class="math-container">$$v(t) = v_0 + a_0t$$</span>
<span class="math-container">$$x(t) = x_0 + v_0t + \frac{a_0}{2}t^2$$</span></p>
<hr>
<p>Equations that have to be satisfied:
<span class="math-container">$$x_1 = x_0 + v_0 \cdot t_1 + \frac{a_0}{2}t_1^2$$</span>
<span class="math-container">$$v_1 = v_0 + a_0 \cdot t_1$$</span>
Where <span class="math-container">$t_1$</span> is the time, when the object is at <span class="math-container">$x_1$</span>, so <span class="math-container">$x(t_1) = x_1$</span></p>
<p>You have <em>two</em> unknowns: <span class="math-container">$a_0$</span> and <span class="math-container">$t_1$</span>.
Proceeding as you described, you get:
<span class="math-container">$$a_0 = -\frac{v_0}{t_1}$$</span>
Plugging this into the first equation, that must be satisfied, gives:
<span class="math-container">$$x_1 = x_0 + v_0 \cdot t_1 + \frac{-\frac{v_0}{t_1}}{2}t_1^2$$</span>
<span class="math-container">$$t_1 = \frac{2(x_1-x_0)}{v_0}$$</span>
Plug this into the obtained equation for <span class="math-container">$a_0$</span>:
<span class="math-container">$$a_0 = -v_0 \cdot \frac{v_0}{2(x_1-x_0)}$$</span>
<span class="math-container">$$a_0 = - \frac{v_0^2}{2(x_1-x_0)}$$</span></p>
|
3,488,226 | <p>We have <span class="math-container">$$\ln(v-1) - \ln(v+3) = \ln(x) + C$$</span></p>
<p>Multiplying through by e gives:</p>
<p><span class="math-container">$$(v-1)/(v+3) = x + e^C$$</span></p>
<p>But the answer in the textbook is:</p>
<p><span class="math-container">$$(v-1)/(v+3) = Dx$$</span></p>
<p>Where <span class="math-container">$$D = e^C$$</span></p>
<p>Question: why is the answer Dx and not x + D?</p>
<p>Thanks!</p>
| Severin Schraven | 331,816 | <p>Because
<span class="math-container">$$ e^{\ln(x)+C} = e^{\ln(x)} e^C = x e^C. $$</span></p>
|
3,488,226 | <p>We have <span class="math-container">$$\ln(v-1) - \ln(v+3) = \ln(x) + C$$</span></p>
<p>Multiplying through by e gives:</p>
<p><span class="math-container">$$(v-1)/(v+3) = x + e^C$$</span></p>
<p>But the answer in the textbook is:</p>
<p><span class="math-container">$$(v-1)/(v+3) = Dx$$</span></p>
<p>Where <span class="math-container">$$D = e^C$$</span></p>
<p>Question: why is the answer Dx and not x + D?</p>
<p>Thanks!</p>
| Robert Shore | 640,080 | <p>You're not multiplying through by <span class="math-container">$e$</span>. You're exponentiating both sides of the equation. So your second equation should be:</p>
<p><span class="math-container">$$\frac{v-1}{v+3} = e^Cx.$$</span></p>
|
302,790 | <p>The $j$-invariant has the following Fourier expansion
$$j(\tau)=\frac 1q +\sum_{n=0}^{\infty}a_nq^n=\frac{1}{q}+744+196884q+21493760q^2+\cdots.$$
Here $q=e^{2\pi i \tau}$. </p>
<p>Is there some simple <strong>effective</strong> bound on the coefficients $a_n$?</p>
<p><strong>Background.</strong></p>
<p>This question comes from <a href="https://www.sciencedirect.com/science/article/pii/0022314X69900237" rel="noreferrer">On the “gap” in a theorem of Heegner</a>. Let $D$ be a negative discriminant such that $h(D)=1$. We want to show that $J=j(\sqrt{D})$ generates a cubic extension of $\mathbf Q$. Since we have at our disposal a monic cubic polynomial with rational coefficients, the modular equation $\Phi_2(X,j)$, whose root is $J$, and the other two roots are non-real, it is sufficient to show that $J$ is not an integer.</p>
<p>In this case $j=j\left(\frac{-1+\sqrt D}{2} \right)$ is also an integer. Set</p>
<p>$$t=e^{2\pi i(-1+\sqrt D)/2}.$$</p>
<p>Then </p>
<p>$$J=\frac{1}{t^2}+744+196884t^2+O(t^4),$$</p>
<p>and </p>
<p>$$j^2-1488j+160512-J=42987520t+O(t^2).$$</p>
<p>On the left there is an integer. However the right side tends to zero as $|D|$ gets large. Stark asserts that $|D|>60$ is enough for the RHS to be less than 1. <strong>Why is it enough?</strong></p>
| Noam D. Elkies | 14,830 | <p>Once you know that the coefficients are all positive (see postscript),
it's easy to get an effective upper bound that grows as $\exp(4\pi \sqrt{n})$,
which is within a factor $O(\sqrt n)$ of the correct order of growth.
Start from the inequality
$$
a_n = q^{-n} (a_n q^n) < q^{-n} \sum_{k=-1}^\infty a_k q^k = q^{-n} j(\tau)
$$
for any purely imaginary $\tau = it$ (because $q = e^{-2\pi t} > 0$
so each term $a_n q^n$ is positive). If $t \geq 1$ then
$$
j(it) = e^{2\pi t} + \sum_{n=0}^\infty a_n e^{-2\pi n t}
\leq e^{2\pi t} + \sum_{n=0}^\infty a_n e^{-2\pi n}
= e^{2\pi t} + j(i) - e^{2\pi} < e^{2\pi t} + 1728.
$$
Since $j(i/t) = j(it)$ it follows that also
$$
j(it) < e^{2\pi/t} + 1728
$$
for $t \leq 1$. Thus our inequality on $a_n$ yields
$$
a_n < q^{-n} j(\tau) < e^{2\pi n t} (e^{2\pi/t} + 1728).
$$
The main term $\exp(2\pi (nt+1/t))$ is minimized at $t = 1/\sqrt{n}$
where it equals $\exp(4\pi \sqrt{n})$. Choosing this value of $t$ yields
$$
a_n < e^{4\pi \sqrt n} + 1728 e^{2\pi \sqrt n}
$$
which is an effective bound of the desired kind.</p>
<p>Postscript: one easy proof of $a_n>0$ starts from the formula
$j = E_4^3 / \Delta$: the coefficients of $E_4$ are all positive,
so the same is true for $E_4^3$; and
$1 / \Delta = q^{-1} \prod_{m=1}^\infty ((1-q^m)^{-1})^{24}$
where each factor has nonnegative coefficients because
$(1-q^m)^{-1} = \sum_{k=0}^\infty q^{km}$. So the product
$E_4^3 \cdot 1/\Delta$ also has positive coefficients.</p>
|
302,790 | <p>The $j$-invariant has the following Fourier expansion
$$j(\tau)=\frac 1q +\sum_{n=0}^{\infty}a_nq^n=\frac{1}{q}+744+196884q+21493760q^2+\cdots.$$
Here $q=e^{2\pi i \tau}$. </p>
<p>Is there some simple <strong>effective</strong> bound on the coefficients $a_n$?</p>
<p><strong>Background.</strong></p>
<p>This question comes from <a href="https://www.sciencedirect.com/science/article/pii/0022314X69900237" rel="noreferrer">On the “gap” in a theorem of Heegner</a>. Let $D$ be a negative discriminant such that $h(D)=1$. We want to show that $J=j(\sqrt{D})$ generates a cubic extension of $\mathbf Q$. Since we have at our disposal a monic cubic polynomial with rational coefficients, the modular equation $\Phi_2(X,j)$, whose root is $J$, and the other two roots are non-real, it is sufficient to show that $J$ is not an integer.</p>
<p>In this case $j=j\left(\frac{-1+\sqrt D}{2} \right)$ is also an integer. Set</p>
<p>$$t=e^{2\pi i(-1+\sqrt D)/2}.$$</p>
<p>Then </p>
<p>$$J=\frac{1}{t^2}+744+196884t^2+O(t^4),$$</p>
<p>and </p>
<p>$$j^2-1488j+160512-J=42987520t+O(t^2).$$</p>
<p>On the left there is an integer. However the right side tends to zero as $|D|$ gets large. Stark asserts that $|D|>60$ is enough for the RHS to be less than 1. <strong>Why is it enough?</strong></p>
| GH from MO | 11,919 | <p>By a variation of Elkies's answer we can even get $a_n<e^{4\pi\sqrt{n}}$ without using $j(i)=1728$. </p>
<p>For $n=1$ the claim is clear. Now let $0<t<1$ and use the identity $j(it)=j(i/t)$. After expanding and rearranging, we get
$$\sum_{n=1}^\infty a_n(e^{-2\pi nt}-e^{-2\pi n/t})=e^{2\pi/t}-e^{2\pi t}.$$ It follows that
$$a_n<\frac{e^{2\pi/t}-e^{2\pi t}}{e^{-2\pi nt}-e^{-2\pi n/t}},\qquad n\geq 1.$$
Putting $t:=1/\sqrt{n}$, we get
$$a_n<\frac{e^{2\pi\sqrt{n}}-e^{2\pi/\sqrt{n}}}{e^{-2\pi\sqrt{n}}-e^{-2\pi n\sqrt{n}}},\qquad n\geq 2.$$
so it suffices to show that the RHS is less than $e^{4\pi\sqrt{n}}$. Equivalently,
$$4\pi\sqrt{n}-2\pi n\sqrt{n}<2\pi/\sqrt{n},\qquad n\geq 2,$$
$$2<n+n^{-1},\qquad n\geq 2.$$
The last inequality is obvious, so we are done.</p>
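<p>For a quick numerical look at the bound <span class="math-container">$a_n<e^{4\pi\sqrt{n}}$</span>, here is a small Python sketch; note that only <span class="math-container">$a_1,a_2$</span> appear in the question, while the values used for <span class="math-container">$a_3,a_4$</span> are quoted from the standard <span class="math-container">$q$</span>-expansion of <span class="math-container">$j$</span>:</p>
<pre><code>from math import exp, pi, sqrt

# a_1, a_2 are quoted in the question; a_3, a_4 are the next coefficients of the q-expansion of j
a = {1: 196884, 2: 21493760, 3: 864299970, 4: 20245856256}

for n, an in a.items():
    print(n, an < exp(4 * pi * sqrt(n)))   # True for each n
</code></pre>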
|
4,424,668 | <p>Suppose <span class="math-container">$V$</span> is a complex vector space with <span class="math-container">$n=\dim V=10$</span> and <span class="math-container">$N∈L(V)$</span> is nilpotent. What are possible values for <span class="math-container">$\dim\ker(N^3)-\dim\ker(N)$</span>? The only two things I know that are helpful to this that <span class="math-container">$\ker(N)\subseteq\ker(N^2)\subseteq\ldots\subseteq\ker(N^n)$</span> and <span class="math-container">$N^n=0$</span> for <span class="math-container">$N$</span> nilpotent. But it seems that all values between <span class="math-container">$1$</span> and <span class="math-container">$9$</span> are possible just by looking at these two restrictions. Can anyone give a hint? Thanks.</p>
| Samuel Adrian Antz | 1,045,826 | <p>By the rank-nullity theorem we have
<span class="math-container">\begin{equation}
\dim\ker(N^3)-\dim\ker(N)
=\operatorname{rk}(N)-\operatorname{rk}(N^3),
\end{equation}</span>
and this difference can be computed block by block from the Jordan form of <span class="math-container">$N$</span>. For a single nilpotent Jordan block of size <span class="math-container">$m$</span> one has <span class="math-container">$\dim\ker(N^k)=\min(k,m)$</span>, so such a block contributes <span class="math-container">$\min(m,3)-\min(m,1)=\min(m-1,2)$</span> to the difference. With block sizes <span class="math-container">$m_1,\dots,m_r$</span> summing to <span class="math-container">$10$</span>, the difference is therefore <span class="math-container">$\sum_i \min(m_i-1,2)$</span>. It equals <span class="math-container">$0$</span> for <span class="math-container">$N=0$</span>, it equals <span class="math-container">$6$</span> for blocks of sizes <span class="math-container">$3,3,4$</span>, and every value in between is realized by adding blocks of size <span class="math-container">$2$</span> or <span class="math-container">$3$</span>. A value of <span class="math-container">$7$</span> or more is impossible: writing the difference as <span class="math-container">$2k+j$</span> with <span class="math-container">$k$</span> blocks of size at least <span class="math-container">$3$</span> and <span class="math-container">$j$</span> blocks of size exactly <span class="math-container">$2$</span>, a total of <span class="math-container">$7$</span> would force <span class="math-container">$3k+2j\geq 11>10$</span>. Therefore the possible values are exactly <span class="math-container">$0,1,2,3,4,5,6$</span>.</p>
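<p>As a brute-force confirmation, here is a Python sketch (assuming <code>numpy</code> is available; function names are mine) that runs over all Jordan block structures on a 10-dimensional space and prints exactly the values 0 through 6:</p>
<pre><code>import numpy as np

def jordan_nilpotent(sizes):
    # block-diagonal nilpotent matrix with the given Jordan block sizes
    N = np.zeros((sum(sizes), sum(sizes)))
    off = 0
    for m in sizes:
        for i in range(m - 1):
            N[off + i, off + i + 1] = 1.0
        off += m
    return N

def diff(sizes):
    N = jordan_nilpotent(sizes)
    n = N.shape[0]
    k1 = n - int(np.linalg.matrix_rank(N))
    k3 = n - int(np.linalg.matrix_rank(np.linalg.matrix_power(N, 3)))
    return k3 - k1

def partitions(n, largest=None):
    # all partitions of n into non-increasing positive parts
    largest = n if largest is None else largest
    if n == 0:
        yield []
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

print(sorted({diff(p) for p in partitions(10)}))   # [0, 1, 2, 3, 4, 5, 6]
</code></pre>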
|
3,009,543 | <p>I am having great problems in solving this:</p>
<p><span class="math-container">$$\lim\limits_{n\to\infty}\sqrt[3]{n+\sqrt{n}}-\sqrt[3]{n}$$</span></p>
<p>I am trying to solve this for hours, no solution in sight. I tried so many ways on my paper here, which all lead to nonsense or to nowhere. I concluded that I have to use the third binomial formula here, so my next step would be:</p>
<p><span class="math-container">$$a^3-b^3=(a-b)(a^2+ab+b^2)$$</span> so </p>
<p><span class="math-container">$$a-b=\frac{a^3-b^3}{a^2+ab+b^2}$$</span></p>
<p>I tried expanding it as well, which led to absolutely nothing. These are my writings to this:</p>
<p><a href="https://i.stack.imgur.com/FyJ8t.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FyJ8t.jpg" alt="enter image description here"></a></p>
| Robert Z | 299,698 | <p>Alternative approach where we do not use the identity <span class="math-container">$$a^3-b^3=(a-b)(a^2+ab+b^2).$$</span></p>
<p>We have that
<span class="math-container">$$0< \sqrt[3]{n+\sqrt{n}}-\sqrt[3]{n}=\frac{\sqrt[3]{n}}{\sqrt{n}}\cdot\frac{
\sqrt[3]{1+\frac{1}{\sqrt{n}}}-1}{\frac{1}{\sqrt{n}}}\leq \frac{1}{\sqrt[6]{n}}\cdot \frac{1}{3}.$$</span>
where we used the <a href="https://en.wikipedia.org/wiki/Bernoulli%27s_inequality" rel="nofollow noreferrer">Bernoulli's inequality</a>
<span class="math-container">$$(1+x)^r\leq 1+rx$$</span>
with <span class="math-container">$r=1/3\in(0,1)$</span> and <span class="math-container">$x=1/\sqrt{n}$</span>. </p>
<p>Can you take it from here?</p>
<p>P.S. By using this approach we are also able to find<br>
<span class="math-container">$$\lim\limits_{n\to\infty}\left((n+\sqrt{n})^r-n^r\right)$$</span>
for any <span class="math-container">$r<1/2$</span>. Note that if we replace <span class="math-container">$\sqrt[3]{\ }$</span> with say <span class="math-container">$\sqrt[5]{\ }$</span> then the "algebraic way" could be very annoying!</p>
|
555,955 | <p>Suppose I have two doors. One of them has a probability of $1/9$ to contain X, the other has a probability of $2/3$ to contain X. Then, supposing I pick randomly one of the two doors, what is the probability that it contains X?</p>
<p>(If one contains X, the other can also contain X. They are independent but not mutually exclusive.)</p>
<p>I'm not sure what the solution is - is it just the average of the probabilities? I need this as a stepping stone in a larger argument. Thanks.</p>
| Peter Halburt | 85,427 | <p>I bet you that the solution is just the average of the probabilities. If they are independent but not mutually exclusive, then if one contains X, the other can too... so you can't just add the probabilities. But you are not adding them: since you pick each door with probability $1/2$, the law of total probability gives $P(X) = \frac12\cdot\frac19 + \frac12\cdot\frac23 = \frac{1}{18}+\frac{6}{18} = \frac{7}{18}$, which is exactly the average of $1/9$ and $2/3$. Conditioning on which door you picked is what makes the plain average the right answer here.</p>
|
2,451,092 | <p>I want to solve a Lagrange multiplier problem,</p>
<p>$$f(x,y) = x^2+y^2+2x+1$$
$$g(x,y)=x^2+y^2-16 $$</p>
<p>Where function $g$ is my constraint.
$$f_x=2x+2, \ \ \ f_y=2y, \ \ \ g_x=2x\lambda, \ \ \ g_y=2y\lambda$$</p>
<p>$$
\begin{cases}
2x+2=2x\lambda \\
2y=2y\lambda \\
x^2+y^2-16=0
\end{cases}
$$</p>
<p>See, this is a very nasty system of equations.
At any rate, I get $\lambda = 1$ because in this case, $y=0$. So I cannot do anything with this as far as algebra is concerned? How do I resolve a problem like this?</p>
| Satish Ramanathan | 99,745 | <p>$2x(1-\lambda) = -2\tag 1$</p>
<p>$2y(1-\lambda) = 0\tag 2$</p>
<p>From (2) Either $y = 0$ or $(1-\lambda) = 0$</p>
<p>$(1-\lambda) \ne 0$, because if it were zero, (1) would not be true </p>
<p>Thus $y = 0$</p>
<p>Plug in the value of y in g(x,y) and find x.</p>
<p>and $x = \pm 4$</p>
<p>The points are $(4,0)$ and $(-4,0)$</p>
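<p>A quick numerical confirmation in Python that the extrema of $f$ on the constraint circle occur at these two points (names are mine):</p>
<pre><code>import numpy as np

def f(x, y):
    return x**2 + y**2 + 2*x + 1

print(f(4, 0), f(-4, 0))        # 25 and 9

# sample the whole circle x^2 + y^2 = 16 and compare
t = np.linspace(0, 2*np.pi, 100001)
vals = f(4*np.cos(t), 4*np.sin(t))
print(vals.min(), vals.max())   # ~9 and ~25
</code></pre>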
|
376,484 | <p>My questions are motivated by the following exercise:</p>
<blockquote>
<p>Consider the eigenvalue problem
$$
\int_{-\infty}^{+\infty}e^{-|x|-|y|}u(y)dy=\lambda u(x), x\in{\Bbb R}.\tag{*}
$$
Show that the spectrum consists purely of eigenvalues. </p>
</blockquote>
<p>Let $A:L^2({\Bbb R})\to L^2({\Bbb R})$ be a linear operator such that
$$(Au)(x)=\int_{\Bbb R}k(x,y)u(y)dy$$
where $k(x,y)=e^{-|x|-|y|}$. Then $A$ is self-adjoint, since $\overline{k(x,y)}=k(y,x)$. Thus $\sigma(A)\subset{\Bbb R}$.</p>
<blockquote>
<p>My first <strong>question</strong>: is $A$ invertible?</p>
</blockquote>
<p>$A$ is a Hilbert-Schmidt operator since $k\in L^2(\Bbb{R}^2)$ and thus $A$ is compact. Then the answer should be NO since $L^2({\Bbb R})$ is an infinite-dimensional Hilbert space. It follows that $\lambda=0$ must be an eigenvalue of $A$ according to the conclusion in the exercise. But $\operatorname{ker}(A)={0}$ which implies that $\lambda=0$ is not an eigenvalue of $A$. </p>
<blockquote>
<p>My second <strong>question</strong>: what mistake do I make above?</p>
</blockquote>
| Entosider | 972,378 | <p>What you need to prove is true. The problem with the refutations is that they are using functions where <span class="math-container">$\lim_{p^m\to\infty}f(p^m)\neq0$</span>: for some <span class="math-container">$\epsilon>0$</span> and for any <span class="math-container">$N$</span>, there will exist some prime <span class="math-container">$p$</span> bigger than that <span class="math-container">$N$</span> such that <span class="math-container">$f(p)\geq\epsilon$</span>, violating the definition of a limit.</p>
<p>Since <span class="math-container">$f(p^k)$</span> approaches <span class="math-container">$0$</span>, it must have a maximum value, call it <span class="math-container">$M$</span>. Assume that there are <span class="math-container">$a$</span> prime numbers that have a prime power <span class="math-container">$p^\alpha$</span> such that <span class="math-container">$f(p^\alpha)\geq1$</span>. We know that, since <span class="math-container">$f(p^k)$</span> approaches <span class="math-container">$0$</span>, <span class="math-container">$a$</span> must be finite. Note that in the factorization <span class="math-container">$f(n)=\prod_i f(p_i^{e_i})$</span> only these <span class="math-container">$a$</span> primes can contribute a factor <span class="math-container">$\geq1$</span>, each such factor is at most <span class="math-container">$M$</span>, and every other factor is less than <span class="math-container">$1$</span>. Thus, an upper bound for any <span class="math-container">$f(n)$</span> is <span class="math-container">$M^a$</span>.</p>
<p>Let <span class="math-container">$\epsilon>0$</span> be given. Let <span class="math-container">$N$</span> be such that if <span class="math-container">$p^\alpha>N$</span>, <span class="math-container">$f(p^\alpha)<\epsilon/M^a$</span>. Thus, if <span class="math-container">$n$</span> is divisible by a prime power greater than <span class="math-container">$N$</span> (call it <span class="math-container">$p^\alpha$</span>), then <span class="math-container">$f(n)<\epsilon$</span> since <span class="math-container">$M^a$</span> is the upper bound for any value of <span class="math-container">$f$</span>, so if <span class="math-container">$n=ap^\alpha$</span>, <span class="math-container">$f(ap^\alpha)=f(a)f(p^\alpha)\leq(M^a)(\epsilon/M^a)=\epsilon.$</span></p>
<p>We will now construct a number that must have at least one prime power greater than <span class="math-container">$N$</span>. Let <span class="math-container">$p_k$</span> denote the <span class="math-container">$k$</span>th prime, and let <span class="math-container">$\alpha_k$</span> be the lowest positive integer such that <span class="math-container">$p_k^{\alpha_k}>N.$</span> Then, we define
<span class="math-container">$$N_2=\prod_{\{k:p_k\leq N\}}p_k^{\alpha_k}.$$</span>
By construction, it is impossible for any <span class="math-container">$n\geq N_2$</span> to not be divisible by a prime power greater than <span class="math-container">$N$</span>. Thus, if <span class="math-container">$n\geq N_2$</span>, <span class="math-container">$f(n)<\epsilon.$</span> Thus, <span class="math-container">$\lim_{n\to\infty}f(n)=0$</span>.</p>
|
860,247 | <p>Simplify $$\frac{3x}{x+2} - \frac{4x}{2-x} - \frac{2x-1}{x^2-4}$$</p>
<ol>
<li><p>First I expanded $x²-4$ into $(x+2)(x-2)$. There are 3 denominators. </p></li>
<li><p>So I multiplied the numerators into: $$\frac{3x(x+2)(2-x)}{(x+2)(x-2)(2-x)} - \frac{4x(x+2)(x-2)}{(x+2)(x-2)(2-x)} - \frac{2x-1(2-x)}{(x+2)(x-2)(2-x)} $$</p></li>
</ol>
<p>I then tried 2 different approaches:</p>
<ol>
<li>Calculated it without eliminating the denominator into: $$\frac{-6x²-5x+2}{(x+2)(x-2)(2-x)}$$</li>
<li>Calculated it by multiplying it out to: $$\frac{-6x+2x²+2}{(x+2)(x-2)(2-x)}$$</li>
</ol>
<p>I can't seem to simplify them further and so they seem incorrect. Something I missed? Help! </p>
| lab bhattacharjee | 33,337 | <p>HINT:</p>
<p>As $\displaystyle0\le x\le1,\sqrt{x(x+1)}=\sqrt x\sqrt{x+1}$</p>
<p>So, $$\int_0^1x^2\sqrt{x(x+1)}=\int_0^1x^{\dfrac52}\sqrt{x+1}\ dx$$</p>
<p>Set $x=\tan^2\theta$</p>
<p>Or integrating by parts, $$\int x^{\dfrac52}\sqrt{x+1}\ dx=x^{\dfrac52}\int\sqrt{x+1}\ dx-\int\left(\frac{d x^{\dfrac52}}{dx}\cdot\int\sqrt{x+1}\ dx\right)dx$$</p>
<p>$$=x^{\dfrac52}\cdot\frac{(x+1)^{\dfrac32}}{\dfrac32}-\dfrac53\int (x^2+x)^{\dfrac32}\ dx$$</p>
<p>Now for $x^2+x,$ we can use the substitution used in my other answer</p>
|
1,378,536 | <p>Here is a question that naturally arose in the study of some specific integrals. I'm curious whether <em>nice real analysis tools</em> are known for calculating such integrals (<em>including all possible sources<br>
in the literature that are publicly available</em>). At some point I'll add my <em>real analysis</em> solution.<br>
The question is meant to be informative rather than a request for solutions; posting a solution is optional.</p>
<p>Prove that</p>
<p>$$\int_{-1}^1 \frac{1}{\pi^2+(2 \operatorname{arctanh}(x))^2} \, dx=\frac{1}{6}. $$</p>
<p><em>Here is a supplementary question</em></p>
<p>$$\int_{-1}^1 \frac{\log(1-x)}{\pi^2+(2 \operatorname{arctanh}(x))^2} \, dx=\frac{1}{4}+\frac{\gamma }{6}+\frac{\log (2)}{6}-2 \log (A) $$</p>
<p>where $A$ is <a href="https://en.wikipedia.org/wiki/Glaisher%E2%80%93Kinkelin_constant">Glaisher–Kinkelin constant</a>.</p>
<p>For those passionate about integrals, series and limits.</p>
| Ron Gordon | 53,268 | <p>Sub $x=\tanh{u}$, $dx = \operatorname{sech^2}{u} \, du$. Then the integral is</p>
<p>$$\int_{-\infty}^{\infty} du \, \frac{\operatorname{sech^2}{u}}{\pi^2+4 u^2} $$</p>
<p>Now, use Parseval. The Fourier transforms of the pieces of the integrand are</p>
<p>$$\int_{-\infty}^{\infty} du \, \frac{e^{i u k}}{\pi^2+4 u^2} = \frac14 \frac{\pi}{\pi/2} e^{-\pi |k|/2} $$</p>
<p>$$\int_{-\infty}^{\infty} du \, \operatorname{sech^2}{u} \, e^{i u k} = \pi k \operatorname{csch}{\left (\frac{\pi k}{2} \right )}$$</p>
<p>so by Parseval...</p>
<p>$$\int_{-\infty}^{\infty} du \, \frac{\operatorname{sech^2}{u}}{\pi^2+4 u^2} = \frac12 \frac{\pi}{2 \pi} \int_{-\infty}^{\infty} dk \, k \operatorname{csch}{\left (\frac{\pi k}{2} \right )} e^{-\pi |k|/2} = \int_0^{\infty} dk \frac{k \, e^{-\pi k/2}}{e^{\pi k/2}-e^{-\pi k/2}}$$</p>
<p>Expand the denominator:</p>
<p>$$\int_{-\infty}^{\infty} du \, \frac{\operatorname{sech^2}{u}}{\pi^2+4 u^2} = \sum_{m=0}^{\infty} \int_0^{\infty} dk \, k \, e^{-(1+m) \pi k} = \frac{1}{\pi^2} \sum_{m=0}^{\infty} \frac1{\left (1+m \right )^2} = \frac1{6}$$</p>
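<p>A direct numerical check of the stated value, as a short Python sketch (assuming <code>scipy</code> is available):</p>
<pre><code>import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0 / (np.pi**2 + (2.0 * np.arctanh(x))**2)
val, err = quad(f, -1, 1)
print(val, 1/6)   # both are approximately 0.1666666...
</code></pre>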
|
1,436,215 | <p>I'm using the following algorithm (in C) to find if a point lies within a given polygon</p>
<pre><code>#include <math.h>
/* constants and helpers assumed by the snippet (defined elsewhere in the original source) */
#define PI 3.14159265358979323846
#define TWOPI (2.0 * PI)
#define ABS(x) ((x) < 0 ? -(x) : (x))
#define TRUE 1
#define FALSE 0
double Angle2D(double x1, double y1, double x2, double y2);  /* prototype; definition below */

typedef struct {
int h,v;
} Point;
int InsidePolygon(Point *polygon,int n,Point p)
{
int i;
double angle=0;
Point p1,p2;
for (i=0;i<n;i++) {
p1.h = polygon[i].h - p.h;
p1.v = polygon[i].v - p.v;
p2.h = polygon[(i+1)%n].h - p.h;
p2.v = polygon[(i+1)%n].v - p.v;
angle += Angle2D(p1.h,p1.v,p2.h,p2.v);
}
if (ABS(angle) < PI)
return(FALSE);
else
return(TRUE);
}
/*
Return the angle between two vectors on a plane
The angle is from vector 1 to vector 2, positive anticlockwise
The result is between -pi -> pi
*/
double Angle2D(double x1, double y1, double x2, double y2)
{
double dtheta,theta1,theta2;
theta1 = atan2(y1,x1);
theta2 = atan2(y2,x2);
dtheta = theta2 - theta1;
while (dtheta > PI)
dtheta -= TWOPI;
while (dtheta < -PI)
dtheta += TWOPI;
return(dtheta);
}
</code></pre>
<p>I found the algorithm while searching online for such an algorithm, <a href="http://bbs.dartmouth.edu/~fangq/MATH/download/source/Determining%20if%20a%20point%20lies%20on%20the%20interior%20of%20a%20polygon.htm" rel="nofollow">Point in Polygon algorithm</a>, Solution 2 (2D).</p>
<ol>
<li>The explanation in the link says it checks if the sum of angles is $2\pi$ but the algorithm (which works) checks it with $\pi$.</li>
<li>How are the angles even calculated?</li>
</ol>
| Community | -1 | <p>Here is a counter-proposal for efficient computation, using only additions and multiplications.</p>
<p>The idea is to count the intersections of the edges of the polygon with the horizontal line through the test point that lie on its right (like in "Solution 1"). An even number means outside.</p>
<p><a href="https://i.stack.imgur.com/odk7y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/odk7y.png" alt="enter image description here"></a></p>
<p>For the test point $P$ and the edge $QR$, such an intersection is detected by the condition</p>
<p>$$(Q_y\le P_y\land R_y\ge P_y\land\Delta_{PQR}\ge0)\lor(Q_y\ge P_y\land R_y\le P_y\land\Delta_{PQR}\le0),$$ where $\Delta_{PQR}$ is the signed area of the triangle $PQR$, which tells on what side of $QR$ the point $P$ lies.
$$\Delta_{PQR}=(Q_x-P_x)(R_y-P_y)-(Q_y-P_y)(R_x-P_x).$$ Shortcut evaluation is recommended.</p>
<pre><code>def delta(P, Q, R):
    # (Q-P) x (R-P), i.e. twice the signed area of triangle PQR;
    # its sign says on which side of the line QR the point P lies
    return (Q.X - P.X)*(R.Y - P.Y) - (Q.Y - P.Y)*(R.X - P.X)

def inside(P, V):
    # V is the list of the N polygon vertices, in order (either orientation works)
    N = len(V)
    Inside = False
    q = N - 1
    for r in range(N):
        if V[q].Y <= P.Y:
            if V[r].Y > P.Y and delta(P, V[q], V[r]) >= 0:
                Inside = not Inside
        else:
            if V[r].Y <= P.Y and delta(P, V[q], V[r]) <= 0:
                Inside = not Inside
        q = r
    return Inside
</code></pre>
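<p>For example, a quick test of my own (the <code>Pt</code> type and the square are illustrative values, not part of the original answer):</p>

<pre><code>from collections import namedtuple

Pt = namedtuple("Pt", ["X", "Y"])
square = [Pt(0, 0), Pt(2, 0), Pt(2, 2), Pt(0, 2)]

print(inside(Pt(1, 1), square))   # True: the centre of the square
print(inside(Pt(3, 1), square))   # False: a point to the right of the square
</code></pre>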
<p>The total cost is $2N$ ordinate comparisons and $M$ tests on the sign of $\Delta$, where $M$ is the number of edges straddling the ordinate of $P$, plus loop overhead.</p>
|
2,827,970 | <p>I'm aware this result (and the standard/obvious proof) is considered basic, and while I've accepted and used it numerous times in the past, I'm starting to question its validity, or rather to wonder whether said proof subtly requires a form of the AC. (Disclaimer: it's been some time since I've looked at set theory.)</p>
<p>I found the same question answered here using the same method I'd used in the past. (<a href="https://math.stackexchange.com/questions/305597/why-are-infinite-cardinals-limit-ordinals">Why are infinite cardinals limit ordinals?</a>) My posting here is more concerned with the apparent assumptions required to use this argument.</p>
<p>The (apparent) issue here is that it presupposes that $\omega$ is the least infinite ordinal, i.e. a subset of any infinite ordinal $\alpha$. Now a countable choice argument could easily prove this, but this shouldn't be necessary, one would think.</p>
<p>I've considered this: a simple (natural) induction argument showing that every natural number is an element of any infinite ordinal. This would show $\omega\subseteq \alpha$, or that $\omega$ is the least infinite ordinal. Does this seem right?</p>
<p>Thank you </p>
<p>To clarify: I'm trying to show that $\omega$ is the least infinite ordinal. Evidently some define it this way, (there is clearly some least infinite ordinal) but the definition of $\omega$ I'm working with involves defining inductive classes and taking their intersection, which is then axiomatized as a set. (These two definitions should be equivalent)</p>
| Asaf Karagila | 622 | <p>The definition of $\omega$ is "the least infinite ordinal". So there are no hidden assumptions.</p>
<p>But even ignoring that, the proof I gave in the answer you linked doesn't use any "presupposed assumption". It only uses the fact that if $\alpha$ is an infinite ordinal, then for any finite ordinal $n$, $n<\alpha$.</p>
<p>In any case, an ordinal is by definition a well-ordered set. So we can exploit that to define a choice function on its subsets. So usually when choice is needed to prove things in general, it can be eliminated in the case of ordinals (since we already have a choice function).</p>
<hr>
<p>You are correct, however, that induction is needed to prove that no infinite ordinal is a predecessor of a finite ordinal. For example, by proving first that a subset of a finite set is finite (which is where you'd use induction), and then noting that if $\alpha\in\beta$, then $\alpha\subseteq\beta$, and so if $\beta$ is finite, $\alpha$ must be too.</p>
|
2,827,970 | <p>I'm aware this result (and the standard/obvious proof) is considered basic, and while I've accepted and used it numerous times in the past, I'm starting to question its validity, or rather to wonder whether said proof subtly requires a form of the AC. (Disclaimer: it's been some time since I've looked at set theory.)</p>
<p>I found the same question answered here using the same method I'd used in the past. (<a href="https://math.stackexchange.com/questions/305597/why-are-infinite-cardinals-limit-ordinals">Why are infinite cardinals limit ordinals?</a>) My posting here is more concerned with the apparent assumptions required to use this argument.</p>
<p>The (apparent) issue here is that it presupposes that $\omega$ is the least infinite ordinal, i.e. a subset of any infinite ordinal $\alpha$. Now a countable choice argument could easily prove this, but this shouldn't be necessary, one would think.</p>
<p>I've considered this: a simple (natural) induction argument showing that every natural number is an element of any infinite ordinal. This would show $\omega\subseteq \alpha$, or that $\omega$ is the least infinite ordinal. Does this seem right?</p>
<p>Thank you </p>
<p>To clarify: I'm trying to show that $\omega$ is the least infinite ordinal. Evidently some define it this way, (there is clearly some least infinite ordinal) but the definition of $\omega$ I'm working with involves defining inductive classes and taking their intersection, which is then axiomatized as a set. (These two definitions should be equivalent)</p>
| DanielWainfleet | 254,665 | <p>(1). In the absence of Infinity, Choice, and Foundation (Regularity):</p>
<p>Let $On$ be the class of ordinals. Define the class $FOn$ of finite ordinals as $\{x\in On: \forall y\in x\cup \{x\}\;(y=0\lor y\ne \cup y)\}.$ </p>
<p>Prove that $\forall x\in FOn \;\forall y\in x\;(y\in FOn).$ </p>
<p>(1-i). That is, $FOn$ is a transitive class. </p>
<p>(1-ii). Any set or definable class of ordinals is well-ordered by $\in.$</p>
<p>(2). Introducing Infinity. Suppose $\exists z\in On\backslash FOn.$ For any $x\in FOn$ we have $(z=x\lor z\in x\lor x\in z).$ But $z=x\in FOn$ cannot hold, and $z\in x\implies z\in FOn$ by def'n of $FOn.$ So every $x\in FOn$ belongs to $ z .$</p>
<p>So by Comprehension $\exists O \;\forall x\;( x\in O \iff (x\in z\land x\in FOn)).$ That is, the set $O$ is equal to $FOn.$ </p>
<p>So by (1-i) and (1-ii), and by the def'n of $On,$ we have $O\in On.$ Now for any $z'\in On\backslash FOn$ we have $z'\geq O$ because $z'<O\implies z'\in O\implies z'\in FOn,$ a contradiction. So $O=FOn$ is the least infinite ordinal. Usually denoted by $\omega.$ </p>
<p>And now prove that $0\in FOn$ (i.e. $FOn$ is not empty) and that $\forall x \in FOn\;(x+1\in FOn).$ </p>
|
1,176,435 | <p>Consider $G(4,p)$ - the random graph on 4 vertices. What is the probability that vertex 1 and 2 lie in the same connected component?</p>
<p>So far, I have considered the event where 1 and 2 do not lie in the same component. Then vertex 1 must lie in a component of order 1, 2 or 3 that doesn't contain vertex 2. However, I am unsure about how to compute these probabilities. For 1 to be in a component of order 1, I think this has probability $(1-p)^3$. </p>
| Laars Helenius | 112,790 | <p>For $p=1/2$, since there are a total of six edges in $K_4$, that means there are only $2^6=64$ possible labeled graphs, each of which is equally likely. You could write out the sample space and count.</p>
<p>When $p\ne 1/2$, you could determine a binomial probability for each possible graph and sum the probabilities of each of the graphs that satisfy your condition (as opposed to straight up counting them in the previous paragraph).</p>
<p>Admittedly, this is a naive approach, but it might give some insight on the general process.</p>
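<p>Here is a sketch of that brute-force count in Python (my own illustration; the vertex and edge labels are the obvious ones). It enumerates all $2^6$ edge subsets of $K_4$ and adds up the probability of each graph in which vertices 1 and 2 end up in the same component:</p>

<pre><code>from itertools import product
from fractions import Fraction

edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]   # the 6 edges of K_4

def same_component(present):
    # depth-first search from vertex 1 over the chosen edges
    adj = {v: set() for v in (1, 2, 3, 4)}
    for (u, v), keep in zip(edges, present):
        if keep:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = {1}, [1]
    while stack:
        u = stack.pop()
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return 2 in seen

def prob_same_component(p):
    total = 0
    for present in product((0, 1), repeat=len(edges)):
        if same_component(present):
            k = sum(present)
            total += p**k * (1 - p)**(len(edges) - k)
    return total

print(prob_same_component(Fraction(1, 2)))   # exact value for p = 1/2
</code></pre>

<p>For a general symbolic $p$ one can pass a <code>sympy</code> symbol instead of a <code>Fraction</code>; the same loop then returns the probability as a polynomial in $p$.</p>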
|
362,895 | <p>I have been having a lot of trouble teaching myself rings, so much so that even "simple" proofs are really difficult for me. I think I am finally starting to get it, but just to be sure could some one please check this proof that $\mathbb Z[i]/\langle 1 - i \rangle$ is a field. Thank you.</p>
<p>Proof: Notice that $$\langle 1 - i \rangle\\
\Rightarrow 1 = i\\
\Rightarrow 2 = 0.$$
Thus all elements of the form $a+ bi + \langle 1 - i \rangle$ can be rewritten as $a+ b + \langle 1 - i \rangle$. But since $2=0$ this implies that the elements that are left can be written as $1 + \langle 1 - i \rangle$ or $0 + \langle 1 - i \rangle$. Thus
$$
\mathbb Z[i]/ \langle 1 - i \rangle = \{ 0+ \langle 1 - i \rangle , 1 + \langle 1 - i \rangle\}.
$$</p>
<p>This is obviously a commutative ring with unity and no zero-divisors, thus it is a finite integral domain, and hence is a field. $\square$</p>
| Martin Brandenburg | 1,650 | <p>Your proof only shows that there are at most two elements. So you also have to check that these two elements differ, i.e. that $1-i$ is not a unit. But instead, you can also do it directly, without any elements at all:</p>
<p>$\mathbb{Z}[i]/(i-1)=\bigl(\mathbb{Z}[x]/(x^2+1)\bigr)/(x-1)=\mathbb{Z}/(1^2+1)=\mathbb{F}_2$, since modding out by $x-1$ amounts to setting $x=1$.</p>
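<p>A quick computational sanity check of the two-element claim (my own addition, using ordinary complex arithmetic): every $a+bi$ should be congruent to $(a+b) \bmod 2$ modulo $1-i$, i.e. the difference should be divisible by $1-i$ in $\mathbb{Z}[i]$.</p>

<pre><code>import random

def divisible_by_one_minus_i(w):
    # w/(1-i) = w*(1+i)/2 is a Gaussian integer iff both parts of w*(1+i) are even
    t = w * (1 + 1j)
    return t.real % 2 == 0 and t.imag % 2 == 0

for _ in range(10_000):
    a, b = random.randint(-100, 100), random.randint(-100, 100)
    rep = (a + b) % 2                              # candidate representative: 0 or 1
    assert divisible_by_one_minus_i(complex(a, b) - rep)

print("every sampled a+bi is congruent to 0 or 1 mod (1-i)")
</code></pre>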
|
1,791,673 | <p>I was wondering about this, just now, because I was trying to write something like:<br>
$880$ is not greater than $950$. <br>
I am wondering this because there is a 'not equal to': $\not=$ <br>
Not equal to is an accepted mathematical symbol - so would this be acceptable: $\not>$? <br>
I was searching around but I couldn't find any qualified sites that would point me in that direction.</p>
<p><br>
So, I would like to know whether there are symbols for 'not greater than', 'not less than', 'not less than or equal to', and 'not greater than or equal to'. </p>
<p>Thanks for your help and time! </p>
| Community | -1 | <p>Equality is special in that there are two ways that two real numbers $a$ and $b$ can be not equal:</p>
<p>$$a>b,b>a$$</p>
<p>So, instead of saying $a>b \;\textrm {or}\;b>a$, we write $b\neq a$.</p>
<p>For the others, each negation has an existing symbol, so:</p>
<p>$$a\not>b \iff a\leq b,\;\,a\nleq b\iff a>b$$</p>
<p>etc.</p>
<p>But like the comments say, either is OK.</p>
|
177,515 | <p>From <a href="http://mitpress.mit.edu/algorithms/" rel="nofollow">Cormen et al.</a>:</p>
<blockquote>
<p>The elements of a matrix or vectors are numbers from a number system, such as the real numbers , the complex numbers , or integers modulo a prime .</p>
</blockquote>
<p>What do they mean by <strong>integers modulo a prime</strong>? I thought real numbers and complex numbers together make up all the elements of a matrix. Why did they include this additional one?</p>
| Calvin McPhail-Snyder | 10,104 | <p>In the elementary case, matrices are defined to only contain complex numbers, of which real numbers are treated as a special case. In this sense, matrices only contain complex numbers (since every real number is complex).</p>
<p>However, there are more general notions of number than just complex numbers, so it makes sense to talk about matrices whose entries are drawn from those generalized numbers. Depending on your taste and the applications, it would be reasonable to let a "number system" be either a ring or a field, or maybe some other structure.</p>
<p>When we discuss matrices in general we want to have the most broad notion of number that still allows a rich theory. In this case the authors are choosing fields, of which the real numbers, the complex numbers, and the integers modulo a prime are all examples. One reason for choosing fields is that the determinant of a matrix over a field has familiar properties. For example, even in a general field, a matrix has an inverse iff its determinant is nonzero.</p>
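<p>For instance, here is a small illustration (my own, not from the text) of that last point over the field of integers modulo the prime $5$: a $2\times 2$ matrix whose determinant is nonzero mod $5$ is inverted using the modular inverse of its determinant.</p>

<pre><code>p = 5                                   # a prime, so the integers mod p form a field
A = [[2, 3],
     [1, 2]]

det = (A[0][0]*A[1][1] - A[0][1]*A[1][0]) % p   # = 1, nonzero in Z/5Z
det_inv = pow(det, -1, p)                       # modular inverse (Python 3.8+)

# adjugate formula for a 2x2 inverse, with all arithmetic done mod p
A_inv = [[( A[1][1]*det_inv) % p, (-A[0][1]*det_inv) % p],
         [(-A[1][0]*det_inv) % p, ( A[0][0]*det_inv) % p]]

print(A_inv)    # [[2, 2], [4, 2]]; multiplying back against A gives the identity mod 5
</code></pre>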
|
874,946 | <p>What is the remainder when the below number is divided by $100$?
$$
1^{1} + 111^{111}+11111^{11111}+1111111^{1111111}+111111111^{111111111}\\+5^{1}+555^{111}+55555^{11111}+5555555^{1111111}+55555555^{111111111}
$$
How should I approach this type of question? I tried brute force using Python, but it took a very long time.</p>
| Darth Geek | 163,930 | <p><strong>Hint</strong></p>
<p>If $a \equiv b \pmod{n}$ then $a^k \equiv b^k \pmod{n}$.
So, for instance, $111 \equiv 11 \pmod{100}$, and hence $111^{111} \equiv 11^{111} \pmod{100}$.</p>
<p>Also note that $11^2 = 121 \equiv 21$, so $11^{111} = 11^{2\cdot 55 + 1} \equiv 11\cdot 21^{55}$. But $21^2 = 441 \equiv 41$, and so forth.</p>
<p>Continue simplifying and repeat for the rest of the numbers.</p>
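<p>As a side note on the brute force the asker attempted: Python's three-argument <code>pow</code> reduces mod $100$ while exponentiating, so the whole sum evaluates instantly (a sketch of my own, not part of the hint):</p>

<pre><code># (base, exponent) pairs exactly as they appear in the question
terms = [
    (1, 1), (111, 111), (11111, 11111), (1111111, 1111111), (111111111, 111111111),
    (5, 1), (555, 111), (55555, 11111), (5555555, 1111111), (55555555, 111111111),
]

remainder = sum(pow(base, exp, 100) for base, exp in terms) % 100
print(remainder)   # the remainder on division by 100
</code></pre>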
|
902,522 | <p>How would I simplify a fraction that has a radical in it? For example:</p>
<p>$$\frac{\sqrt{2a^7b^2}}{\sqrt{32b^3}}$$</p>
| Kyle Gannon | 152,976 | <p>This paper, called <a href="http://arxiv.org/abs/0903.0340">Physics, Topology, Logic and Computation: A Rosetta Stone</a> does just that in section 3.2. If you have time and interest, I would suggest reading the entire paper (since the whole thing is pretty cool). </p>
|