qid | question | author | author_id | answer
---|---|---|---|---|
843,763 | <blockquote>
<p>Let $x\in \mathbb{R}$ be an irrational number. Define $X=\{nx-\lfloor nx\rfloor: n\in \mathbb{N}\}$. Prove that $X$ is dense in $[0,1)$.</p>
</blockquote>
<p>Can anyone give a hint for solving this problem? I tried contradiction but could not reach a proof.</p>
<p>I spent part of the day studying this question <a href="https://math.stackexchange.com/questions/450493/multiples-of-an-irrational-mod-1-are-dense">Positive integer multiples of an irrational mod 1 are dense</a>
and its answers. Only one answer, the first one, is clear and gives clues toward solving the problem. However, that answer does not address the question directly, nor does the proof follow from it. </p>
<p>That answer also has some mistakes: it uses that $[(k_1-k_2)\alpha]=[k_1\alpha]-[k_2\alpha]$, which is not true. For $k_1=3, k_2=1, \alpha=\sqrt{2}$ we have $[(k_1-k_2)\alpha]=2\not= 3=[k_1\alpha]-[k_2\alpha] $. We can only assert that $[k_2\alpha]-[k_1\alpha]-1\leq [(k_2-k_1)\alpha]\leq[k_2\alpha]-[k_1\alpha]$. </p>
<p>The answerer said something interesting about additive subgroups of $\mathbb{R}$, but unfortunately the set $X=\{nx-[nx] : n\in \mathbb{N} \}$ is not a subgroup. Considering the additive subgroup $G=\langle X \rangle$, if we prove part (a) of the link, we get that $G$ is indeed dense in $\mathbb{R}$, but we cannot conclude that $X$ is dense in $[0,1)$.</p>
<p>I think this problem has not been solved.</p>
<p>Thanks!</p>
| mm-aops | 81,587 | <p>Ok, since you've asked and it doesn't fit into a comment, there you go. I'll do it on a circle, since it's slightly easier to explain, and I'll leave it to you to complete it in the case of an interval. Let's say you have a circle of length $1$. You take 'steps' along the circle of an irrational length, let's say counter-clockwise. You'll never hit the same spot twice, so for any fixed $\epsilon > 0$ you'll eventually find two 'steps' $a_n$ and $a_m$ such that $0 < |a_n - a_m| < \epsilon$. The distance from $a_n$ to $a_m$ is the same as between $a_{n-m}$ and $a_0 = 0$, and so on. Therefore if you let $k:= n-m$ and you only consider each $k$-th step, you'll be going around the circle travelling a distance smaller than $\epsilon$. Hence if you divide your circle into arcs of equal lengths greater than $\epsilon$ (but just slightly, say smaller than $2 \epsilon$), you'll have to land in each one of those in order to make your way all around the circle (because your steps are too small to jump over them). Every point of the circle is in at least one of those arcs, which means that for each point of the circle you can find a number $a_j$ in your sequence that is closer than $2 \epsilon$ to it. Now conclude by taking smaller and smaller $\epsilon$'s. </p>
<p>edit: oh, just note that I'm taking the distance along the circle, not the euclidean one</p>
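<p>A quick numerical illustration of this argument (a sketch in Python; the step length $\sqrt2$ and the cut-offs are arbitrary choices, not part of the original answer): the largest empty arc left by the first $n$ steps shrinks towards $0$, which is exactly the density being claimed.</p>
<pre><code>import numpy as np

x = np.sqrt(2)                      # any irrational step length
for n_steps in [100, 1_000, 10_000]:
    pts = np.sort((np.arange(1, n_steps + 1) * x) % 1.0)
    gaps = np.diff(np.concatenate(([0.0], pts, [1.0])))
    print(n_steps, gaps.max())      # the largest empty arc keeps shrinking
</code></pre>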
|
2,213,528 | <p>Combination formula is defined as:</p>
<p>$$\binom{n}{r}=\frac{n!}{(n-r)!\cdot r!}$$</p>
<p>We divide by $(n-r)!$ because we want the combinations of only $r$ objects out of $n$. We also divide by $r!$ because in combinations order does not matter, so doing this gets rid of repeated results. But why? Why does dividing by $r!$ get rid of anything?</p>
<p>Any visual help is appreciated.</p>
<p>thanks</p>
| spaceisdarkgreen | 397,125 | <p>If you have continuous RVs $X_1,X_2$ with joint density function $ f_{X_1X_2}(x_1,x_2)$ then you can define the conditional joint density given $X_2 = x_2^0$ as $$ f_{X_1|X_2}(x_1|x_2^0) = \frac{f_{X_1X_2}(x_1,x_2^0)}{f_{X_2}(x_2^0)} $$ where $f_{X_2}(x_2)$ is the marginal density of $X_2,$ given by $\int_{-\infty}^\infty f_{X_1X_2}(x_1,x_2)dx_1.$</p>
<p>This way everything is in terms of densities. </p>
<p>You are right that there's little sense in writing $f_2(X_2=x_2^0).$ In general I'm not a fan of the notation you're using because it confuses random variables with the variables corresponding to them in density functions. </p>
<p>One notation I've seen that is like what your book is talking about is to write the joint density function as $P(X_1\in dx_1,X_2\in dx_2).$ In this notation $dx_1$ refers to an infinitesimal interval around $x_1$, and similarly for $x_2.$ This describes pretty well what a density function is.</p>
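<p>As a sanity check on the defining formula above, here is a small numerical sketch in Python (the joint density $f(x_1,x_2)=x_1+x_2$ on the unit square is a made-up example): the conditional density obtained by dividing by the marginal integrates to $1$ in $x_1$, as a density should.</p>
<pre><code>import numpy as np

f = lambda x1, x2: x1 + x2            # toy joint density on the unit square
x1 = np.linspace(0.0, 1.0, 100_001)
dx = x1[1] - x1[0]
x2_0 = 0.5

marginal = np.sum(f(x1, x2_0)) * dx   # f_{X2}(x2_0), integrating out x1
conditional = f(x1, x2_0) / marginal  # f_{X1|X2}(x1 | x2_0)
print(np.sum(conditional) * dx)       # approximately 1
</code></pre>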
|
3,220,135 | <p>How many integers <span class="math-container">$x$</span> are such that the following</p>
<p><span class="math-container">\begin{equation*}
\frac{x^3+2x^2+9}{x^2+4x+5}
\end{equation*}</span></p>
<p>is an integer?</p>
<p>I managed to do:</p>
<p><span class="math-container">\begin{equation*}
\frac{x^3+2x^2+9}{x^2+4x+5} = x-2 + \frac{3x+19}{x^2+4x+5}
\end{equation*}</span></p>
<p>but I cannot go forward.</p>
| Servaes | 30,382 | <p>You're off to a good start. Now note that the denominator <span class="math-container">$x^2+4x+5$</span> is quickly larger than the numerator <span class="math-container">$3x+19$</span>; you can quickly reduce the problem to only finitely many values for <span class="math-container">$x$</span> to check.</p>
<p><strong>More details:</strong> (Hover to show)</p>
<blockquote class="spoiler">
<p> The fraction is certainly not an integer if the denominator is greater than the numerator, i.e. if <span class="math-container">$$x^2+4x+5>3x+19,$$</span> unless perhaps the numerator is zero, but that is not possible in this case. The quadratic formula shows that the inequality above holds if <span class="math-container">$x\leq-5$</span> or <span class="math-container">$x\geq4$</span>. Then it remains to check whether the fraction is an integer for <span class="math-container">$x$</span> in the range <span class="math-container">$-4\leq x\leq3$</span>.</p>
</blockquote>
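<p>A brute-force check of that finite range, sketched in Python:</p>
<pre><code># the denominator x**2 + 4*x + 5 is always positive, so the fraction is an
# integer exactly when the numerator is divisible by it
hits = [x for x in range(-4, 4) if (x**3 + 2*x**2 + 9) % (x**2 + 4*x + 5) == 0]
print(hits)   # should print [-3, -2, -1]
</code></pre>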
|
3,343,550 | <p>Show that if <span class="math-container">$(a,15)=1$</span>, then <span class="math-container">$a^4\equiv1 \mod 15$</span>, so that we do not have primitive roots of <span class="math-container">$15$</span>.
Please help me with this problem.</p>
| J. W. Tanner | 615,567 | <p>If <span class="math-container">$\gcd(a,15)=1,$</span> then <span class="math-container">$a\equiv\pm1, \pm2, \pm4,$</span> or <span class="math-container">$\pm7\mod15$</span>, so <span class="math-container">$a^2\equiv1$</span> or <span class="math-container">$ 4$</span>, so <span class="math-container">$a^4=(a^2)^2\equiv1$</span>.</p>
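<p>A one-line check of the full claim, sketched in Python:</p>
<pre><code>from math import gcd

# a^4 is 1 mod 15 for every a coprime to 15
print(all(pow(a, 4, 15) == 1 for a in range(1, 15) if gcd(a, 15) == 1))  # True
</code></pre>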
|
4,350,781 | <p><a href="https://i.stack.imgur.com/57qXm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/57qXm.jpg" alt="enter image description here" /></a></p>
<p>I was able to follow most of the proof but I don’t understand how the author concludes that <span class="math-container">$r=0$</span> at the final part. I would appreciate it if someone could clarify. Thanks</p>
| Toby Bartels | 63,003 | <p>For the sloping surface, this is the plane through the points <span class="math-container">$ ( 0 , 0 , 1 ) $</span>, <span class="math-container">$ ( 0 , 1 , 0 ) $</span>, and <span class="math-container">$ ( 1 , 0 , 0 ) $</span>. Since none of these is the origin, this plane has an equation of the form <span class="math-container">$ A x + B y + C z = 1 $</span>, so you can put in the coordinates of these three points to get a system of linear equations to solve for <span class="math-container">$ A $</span>, <span class="math-container">$ B $</span>, and <span class="math-container">$ C $</span>. (But you might also be able to see, once you realize the form that the equation needs to take, that all three of these constants have to be <span class="math-container">$ 1 $</span>.) So that's how you get <span class="math-container">$ 1 x + 1 y + 1 z = 1 $</span>, or <span class="math-container">$ x + y + z = 1 $</span> for short. You should also realize that the other surfaces are the coordinate planes, with equations <span class="math-container">$ x = 0 $</span>, <span class="math-container">$ y = 0 $</span>, and <span class="math-container">$ z = 0 $</span>.</p>
<p>Now you have to decide which order you want to do the variables. This is arbitrary, and while sometimes some orders are more convenient than others, it makes no difference at all this time. Choosing to do <span class="math-container">$ x $</span> first gives the result in your question. You simply solve <span class="math-container">$ x + y + z = 1 $</span> for <span class="math-container">$ x $</span> to get <span class="math-container">$ x = 1 - y - z $</span>, and you also have <span class="math-container">$ x = 0 $</span> for one of the coordinate planes, and there you go, <span class="math-container">$ x $</span> runs from <span class="math-container">$ 0 $</span> to <span class="math-container">$ 1 - y - z $</span>. (Or maybe from <span class="math-container">$ 1 - y - z $</span> to <span class="math-container">$ 0 $</span>, but it should be clear from the picture that the sloping plane is in <em>front</em> of the plane <span class="math-container">$ x = 0 $</span>. I'll mention later how you can tell when you <em>don't</em> have a nice picture.)</p>
<p>We've done <span class="math-container">$ x $</span>, so let's say that we want to do <span class="math-container">$ y $</span> next. We can't solve <span class="math-container">$ x + y + z = 1 $</span> for <span class="math-container">$ y $</span>; we've already used that equation, and we're not allowed to use <span class="math-container">$ x $</span> anymore anyway. So we only have <span class="math-container">$ y = 0 $</span>. What's the other bound on <span class="math-container">$ y $</span>? We can find that by setting the two bounds on <span class="math-container">$ x $</span> equal: <span class="math-container">$ 0 = 1 - y - z $</span>. (Visually, <span class="math-container">$ 0 = 1 - y - z $</span> is the line in the <span class="math-container">$ ( y , z ) $</span>-plane where the sloping surface <span class="math-container">$ x = 1 - y - z $</span> meets the back plane <span class="math-container">$ x = 0 $</span>.) Solve that for <span class="math-container">$ y $</span>, and you get <span class="math-container">$ y = 1 - z $</span>. So <span class="math-container">$ y $</span> runs from <span class="math-container">$ 0 $</span> to <span class="math-container">$ 1 - z $</span>.</p>
<p>Now only <span class="math-container">$ z $</span> is left. And there's only one equation left, <span class="math-container">$ z = 0 $</span>. But again we can get another equation by setting the bounds on <span class="math-container">$ y $</span> equal: <span class="math-container">$ 0 = 1 - z $</span>. (Visually, this is the point on the <span class="math-container">$ z $</span>-axis within the <span class="math-container">$ ( y , z ) $</span>-plane where the line <span class="math-container">$ y = 1 - z $</span> meets the line <span class="math-container">$ y = 0 $</span>.) Solve this for <span class="math-container">$ z $</span> to get <span class="math-container">$ z = 1 $</span>. So <span class="math-container">$ z $</span> runs from <span class="math-container">$ 0 $</span> to <span class="math-container">$ 1 $</span>. (You can often get the last one easily from just the picture; you can see that <span class="math-container">$ 0 $</span> is the smallest value that <span class="math-container">$ z $</span> takes while <span class="math-container">$ 1 $</span> is the largest value. But if you don't have a good picture, then these can still be calculated as I did here.)</p>
<p>We're done now, but even if you were unsure whether <span class="math-container">$ x $</span> runs from <span class="math-container">$ 0 $</span> to <span class="math-container">$ 1 - y - z $</span> or the reverse, it's clear that <span class="math-container">$ z $</span> must run from <span class="math-container">$ 0 $</span> to <span class="math-container">$ 1 $</span>, since <span class="math-container">$ 0 < 1 $</span>. To be sure of the order for <span class="math-container">$ y $</span>, pick a value of <span class="math-container">$ z $</span> in between, such as <span class="math-container">$ z = 1 / 2 $</span>. (Actually <span class="math-container">$ z = 0 $</span> would work too, but not <span class="math-container">$ z = 1 $</span>, since this is the value that you got by setting the bounds on <span class="math-container">$ y $</span> equal.) Then use this value of <span class="math-container">$ z $</span> to compare <span class="math-container">$ 0 $</span> and <span class="math-container">$ 1 - z $</span>, the bounds on <span class="math-container">$ y $</span>. You get <span class="math-container">$ 0 $</span> and <span class="math-container">$ 1 - ( 1 / 2 ) = 1 / 2 $</span>, and <span class="math-container">$ 0 < 1 / 2 $</span>, so you know that <span class="math-container">$ 0 < 1 - z $</span> in general. (Technically it's <span class="math-container">$ 0 \leq 1 - z $</span> since they may be equal at the bounds on <span class="math-container">$ z $</span>, and indeed they are equal when <span class="math-container">$ z = 1 $</span>, but that's not important here.) Then keeping <span class="math-container">$ z = 1 / 2 $</span>, pick a value of <span class="math-container">$ y $</span> between <span class="math-container">$ 0 $</span> and <span class="math-container">$ 1 / 2 $</span>, say <span class="math-container">$ y = 1 / 4 $</span>, to check the order of the bounds on <span class="math-container">$ x $</span>. You get <span class="math-container">$ 0 $</span> and <span class="math-container">$ 1 - ( 1 / 2 ) - ( 1 / 4 ) = 1 / 4 $</span>. Since <span class="math-container">$ 0 < 1 / 4 $</span>, you know that <span class="math-container">$ 0 < 1 - y - z $</span> in general (or technically, <span class="math-container">$ 0 \leq 1 - y - z $</span>). So now the orders of the bounds are clear even without using the picture to help.</p>
<p>And now you can set up volume integrals as <span class="math-container">$$ \int _ { z = 0 } ^ 1 \int _ { y = 0 } ^ { 1 - z } \int _ { x = 0 } ^ { 1 - y - z } f ( x , y , z ) \, \mathrm d x \, \mathrm d y \, \mathrm d z \text . $$</span></p>
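<p>As a sketch of how one might double-check these bounds, the iterated integral of <span class="math-container">$f\equiv1$</span> should give the volume of this tetrahedron, namely <span class="math-container">$1/6$</span> (Python with sympy assumed available):</p>
<pre><code>import sympy as sp

x, y, z = sp.symbols('x y z')
vol = sp.integrate(1, (x, 0, 1 - y - z), (y, 0, 1 - z), (z, 0, 1))
print(vol)   # 1/6
</code></pre>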
|
2,043,690 | <p>I'm in the process of studying for an exam and this just popped into my head. Sorry if it's a dumb question</p>
| Noah Schweber | 28,111 | <p>Yes, they are the same. Note that $(-1)+13=12$, so their difference is exactly the thing that is ignored by mod $13$!</p>
|
1,507,519 | <p>How do I solve this question?</p>
<p><a href="https://i.stack.imgur.com/drfiA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/drfiA.jpg" alt="Question"></a></p>
<p>I tried using the quadratic formula on the question equation and got</p>
<p>$x_1 = 0.25 +1.089724..i = \ln r$</p>
<p>$x_2 = 0.25 -1.089724..i = \ln s$</p>
<p>I know $\ln x = \log_ex$, but how do I complete the questions with the imaginary numbers?</p>
| Aditya Agarwal | 217,555 | <p>For the $a)$ part, use Vieta's Formula: Sum of roots of a quadratic equation is $\frac{-b}a$.
So answer would be: $\frac12$. <br>
For $b)$ $\frac54$, using Vieta's Formulas for product of roots: $\frac ca$. <br>
For $c)$ the expression becomes $$\frac{\ln r+\ln s}{\ln r\ln s}=\frac{\frac12}{\frac54}=\frac25$$
For $d)$ use the properties of logarithms that $\log_rs=\frac{\log s}{\log r}$ and $a^2+b^2=(a+b)^2-2ab$.
<br> So the given expression becomes $$\frac{(\log r+\log s)^2-2\log r\log s}{\log r\cdot\log s}=\frac{\frac14-\frac{10}4}{\frac54}=\frac{-9}{5}$$</p>
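<p>A quick numerical check of these values, sketched in Python; the quadratic $4x^2-2x+5=0$ is an assumption here, chosen to be consistent with the roots, the sum $\frac12$, and the product $\frac54$ quoted above:</p>
<pre><code>import numpy as np

ln_r, ln_s = np.roots([4, -2, 5])   # 0.25 +/- 1.0897...i
print(ln_r + ln_s, ln_r * ln_s)                          # 0.5 and 1.25
print((ln_r + ln_s) / (ln_r * ln_s))                     # part (c): 0.4 = 2/5
print(((ln_r + ln_s)**2 - 2*ln_r*ln_s) / (ln_r*ln_s))    # part (d): -1.8 = -9/5
</code></pre>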
|
1,250,459 | <p>Let $A\in M_{m \times n}(\mathbb{R})$, $x\in \mathbb{R}^n$ and $b,y\in \mathbb{R}^m$. Show that if $Ax=b$ and $A^ty=0_{\mathbb{R}^m}$, then $\langle b,y\rangle=0$. Also make a geometric interpretation.</p>
<p>I think it may have something to do with overdetermined / underdetermined systems, but I do not know how to prove it.</p>
<p>I'll take this opportunity to ask for a recommendation of a linear algebra and matrix reference; I'm mostly able to do the numerical exercises, but those involving proving statements are a big problem.</p>
| Ian | 83,396 | <p>A set is open if, from any point in the set, you can wiggle in any direction a little bit and stay inside the set. What "wiggle" means depends on the context at hand. </p>
<p>In metric spaces, "wiggle" means what you might expect: "move a small distance". That is, for each point in an open subset of a metric space, there is a ball around the point which is contained in the open set.</p>
<p>In one important extreme, the trivial topology $\{ \emptyset,X \}$, there is no wiggle room anywhere: everything is somehow collapsed together, and by wiggling at all you "bump into everything". Formally, any sequence in the trivial topological space converges to every point in the space.</p>
<p>In the other important extreme, the discrete topology, all sets are open, including singletons. In view of how the subspace topology works, a nice way of viewing this is by thinking of a discrete space as a set of isolated points in a larger space. For instance $\mathbb{Z}$ is a discrete subset of $\mathbb{R}$. (I am not sure how to give an intuitive explanation of the subspace topology, however.)</p>
<p>Other topological spaces are in between these two extremes. </p>
<p>The axioms of topology make some intuitive sense in this framework:</p>
<ul>
<li><p>The empty set is vacuously open: my intuitive definition says "from any point in the set" and there are no such points. </p></li>
<li><p>The entire set is also certainly open, provided we assume that "wiggling" does not take you out of the set. </p></li>
<li><p>If $U$ is a union of open sets and $x \in U$, then there is a particular open set $O$ in the union with $x \in O$. This $O$ gives $x$ some wiggle room, and this wiggle room is also contained in $U$. </p></li>
<li><p>If $U$ is a finite intersection of open sets and $x \in U$, then we can take the "wiggle room" of $x$ from each of the sets in the intersection and intersect all of them. This will still leave some room to move around $x$. This is easier to interpret in metric spaces, where the wiggle room consists of balls: the finite intersection of balls of a fixed center is just a ball whose radius is the minimum of all the radii. From this perspective, infinite intersections of open sets needn't be open, because this "minimum" (really, infimum) radius might actually be zero, which would leave us with no room to move.</p></li>
</ul>
<p>Some (maybe most) general topology books prove that the definition of a topology with open sets is equivalent to the definition of a topology with a <em>neighborhood system</em>. This latter definition is more closely tied to the intuitive picture I've been trying to draw here.</p>
<p>Note that this idea is not really universal. For example, I don't think there is a way to apply this idea to the Sierpinski space $(\{ 0,1 \},\{\emptyset,\{ 0,1 \},\{ 1 \} \})$.</p>
|
2,985,256 | <blockquote>
<p>Let there are three points <span class="math-container">$(2,5,-3),(5,3,-3),(-2,-3,5)$</span> through which a plane passes. What is the equation of the plane in Cartesian form?</p>
</blockquote>
<p>I know how to find it in using vector form by computing the cross product to get the normal vector and passing through any one of the given points. But I want to do it a bit differently. </p>
<p>We know, the equation of any plane passing through the first point is <span class="math-container">$$a(x-2)+b(y-5)+c(z+3)=0$$</span></p>
<p>This equation must be satisfied by the other two points. However, this gives me two equations in three unknowns <span class="math-container">$a,b,c$</span>. So can I not solve by this method?</p>
| ilovebulbasaur | 586,948 | <p>Arrange the two equations into an augmented matrix to get:</p>
<p>M=
<span class="math-container">\begin{bmatrix}
3 & -2 & 0 & 0 \\
-4 & -8 & 8 & 0
\end{bmatrix}</span></p>
<p>The general solution of this linear system is any scalar multiple of</p>
<p>x =
<span class="math-container">\begin{bmatrix}
2\\
3\\
4\\
\end{bmatrix}</span></p>
<p>This reflects the fact that the normal vector of a plane is only determined up to a scalar multiple.</p>
<p>You can check that if you take the coordinates of the normal as your $a, b, c$, then you will get the equation of the plane regardless of which scalar multiple <span class="math-container">$c\vec{x}$</span> you choose.</p>
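<p>A short numerical check, sketched in Python: with the normal <span class="math-container">$(2,3,4)$</span> found above, all three given points satisfy the same equation <span class="math-container">$2x+3y+4z=7$</span>.</p>
<pre><code>import numpy as np

points = np.array([[2, 5, -3], [5, 3, -3], [-2, -3, 5]])
normal = np.array([2, 3, 4])    # the solution above, up to scale
print(points @ normal)          # [7 7 7], so the plane is 2x + 3y + 4z = 7
</code></pre>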
|
1,108,918 | <p>Given two functions $f$ and $g$ whose derivatives $f'$ and $g'$ satisfy : $f'(x)=g(x), g'(x)=-f(x),f(0)=0, g(0)=1$ for all $x$ in an interval $J$. $~~~\cdots(A)$</p>
<p>(a) Prove that $f^2(x)+g^2(x)=1 ~\forall~x \in J $</p>
<p>(b) Let $F$ and $G$ be another pair of functions in $J$ which satisfy the given conditions $(A)$. Prove that $f(x)=F(x)$ and $g(x)=G(x)$.</p>
<p><strong>Attempt:</strong> $(a)$</p>
<p>$f^2(x)+g^2(x)+c= 2[ ~\int f(x) ~d ~(f(x)) + ~\int g(x) ~d ~(g(x))~]$</p>
<p>$=2 [ ~\int f(x) ~ g(x)~ dx - ~\int g(x) ~f(x)~ dx~]$</p>
<p>$ = 0$</p>
<p>Applying the initial conditions, we get :$f^2(x)+g^2(x)=1$</p>
<p>$(b)$</p>
<p>Consider : $h(x)=[f(x)-F(x)]^2+[g(x)-G(x)]^2$</p>
<p>If we prove that $h(x)=0~\forall~x \in J$, then $f(x)=F(x), g(x)=G(x)$</p>
<p>Expanding : $h(x)=f^2(x)+F^2(x)-2f(x)F(x)+g^2(x)+G^2(x)-2g(x)G(x)$</p>
<p>$=2[1-f(x)F(x)-g(x)G(x)]$</p>
<blockquote>
<p>How do I prove that $f(x)F(x)+g(x)G(x) =1$?</p>
</blockquote>
<p>Thank you for your help.</p>
| math110 | 58,742 | <p>Hint</p>
<p>Let</p>
<p>$$H(x)=f(x)F(x)+g(x)G(x)$$
since
$$F'=G,G'=-F,f'=g,g'=-f$$
so</p>
<p>\begin{align*}H'(x)&=f'(x)F(x)+f(x)F'(x)+g'(x)G(x)+g(x)G'(x)\\
&=g(x)F(x)+f(x)G(x)+g'(x)G(x)+g(x)G'(x)\\
&=g(x)[F(x)+G'(x)]+G(x)[f(x)+g'(x)]\\
&=0
\end{align*}
so
$$H(x)=H(0)=f(0)F(0)+g(0)G(0)=1$$</p>
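<p>A symbolic sketch of the computation above (Python with sympy assumed available); substituting the four given derivative relations into <span class="math-container">$H'$</span> collapses it to zero:</p>
<pre><code>import sympy as sp

x = sp.symbols('x')
f, g, F, G = (sp.Function(name)(x) for name in ('f', 'g', 'F', 'G'))

H = f * F + g * G
rules = {sp.Derivative(f, x): g, sp.Derivative(g, x): -f,
         sp.Derivative(F, x): G, sp.Derivative(G, x): -F}
print(sp.expand(sp.diff(H, x).subs(rules)))   # 0, so H is constant
</code></pre>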
|
3,913,005 | <p>So say there are 26 possible characters and you've got an 1800-character random string. Now how would you go about finding the chance of a specific 5-letter word appearing within it?</p>
<p>I already found a related question to this but the given formula in the answer doesn't seem to make sense: <a href="https://math.stackexchange.com/questions/815741/probability-of-a-four-letter-word-from-a-sequence-of-n-random-letters">Probability of a four letter word from a sequence of n random letters</a></p>
<p>According to the formula in the accepted answer, the denominator gets really big ($26^{1800}$ for an 1800-character string), which would seem to mean a very small probability. But shouldn't the probability increase, rather than decrease, as the random string gets longer?</p>
| user2661923 | 464,411 | <p>My previous answer involved Inclusion-Exclusion. <br><br />
My intuition of the proper way of using Inclusion-Exclusion has changed. <br>
Given a set <span class="math-container">$S$</span> with a finite number of elements, let <span class="math-container">$|S|$</span> denote the
number of elements of set <span class="math-container">$S$</span>.</p>
<p>For simplicity, assume that the desired string is "abcde" (i.e. that the string consists of <span class="math-container">$5$</span> different characters). My approach will again be</p>
<p><span class="math-container">$$\frac{N\text{(umerator)}}{D\text{(enominator)}}$$</span></p>
<p>where</p>
<p><span class="math-container">$$D = (26)^{(1800)}.$$</span></p>
<p>For <span class="math-container">$k \in \{1,2,3,\cdots, 1796\}$</span> <br>
let <span class="math-container">$A_k$</span> denote the set of all of the <span class="math-container">$1800$</span> character strings that <br>
contain the string "abcde", starting in position <span class="math-container">$k$</span>.</p>
<p>Then, to illustrate the idea, <br>
<span class="math-container">$A_1$</span> consists of the set of all strings that look like <br>
"abcde***...***", where "*" can be any character. <br>
<span class="math-container">$A_2$</span> consists of the set of all strings that look like <br>
"*abcde***...***".</p>
<p>Then <span class="math-container">$|A_1| = (26)^{(1795)}.$</span></p>
<hr />
<p>Let <span class="math-container">$T_1$</span> denote <span class="math-container">$\sum_{i=1}^{1796} |A_i|$</span>.</p>
<p>Let <span class="math-container">$T_2$</span> denote
<span class="math-container">$\displaystyle\sum_{1 \leq i_1 < i_2 \leq 1796}
\left|A_{i_1} \cap A_{i_2}\right|$</span>. <br>
That is <span class="math-container">$T_2$</span> is computed by considering <span class="math-container">$\binom{1796}{2}$</span> terms. <br></p>
<p>For <span class="math-container">$k \in \{3,4,\cdots,1796\}$</span>, let <span class="math-container">$T_k$</span> denote <br>
<span class="math-container">$\displaystyle \sum_{1 \leq i_1 < i_2 < \cdots < i_k \leq 1796}
\left|A_{i_1} \cap A_{i_2} \cap \cdots \cap A_{i_k}\right|$</span>. <br>
That is <span class="math-container">$T_k$</span> is computed by considering <span class="math-container">$\binom{1796}{k}$</span> terms. <br></p>
<p>Then
<span class="math-container">$$N = \left|A_1 \cup A_2 \cup \cdots \cup A_{1796}\right|
= \sum_{k=1}^{1796} (-1)^{(k+1)}T_k.$$</span></p>
<p>Therefore, the problem has reduced to computing each of
<span class="math-container">$T_1, T_2, \cdots, T_{1796}$</span>.</p>
<hr />
<p>You have that <span class="math-container">$|A_1| = (26)^{(1795)}.$</span> <br>
Then, by symmetry, <br>
<span class="math-container">$T_1 = (1796) \times |A_1| = (1796) \times (26)^{(1795)}.$</span></p>
<hr />
<p>To compute <span class="math-container">$T_2$</span>, note that each term will equal either <span class="math-container">$0$</span> or
<span class="math-container">$\displaystyle (26)^{(1800 - [2 \times 5])}.$</span></p>
<p>Here, <span class="math-container">$\displaystyle \left|A_{i_1} \cap A_{i_2}\right|$</span> will not equal <span class="math-container">$0$</span> <br>
if and only if <span class="math-container">$i_2 \geq (i_1 + 5).$</span></p>
<p>What is needed is a way to enumerate <span class="math-container">$T_2$</span> that will generalize well,
when considering <span class="math-container">$T_3, T_4, \cdots T_{1796}.$</span></p>
<p>Let <span class="math-container">$c_1 = i_1 - 1.$</span> <br>
Let <span class="math-container">$c_2 = (i_2 - i_1) - 5.$</span> <br>
Let <span class="math-container">$c_3 = 1796 - i_2.$</span></p>
<p>Here, <span class="math-container">$c_1, c_2, c_3$</span> represent the gaps before and after <span class="math-container">$i_1, i_2$</span>.</p>
<p>The constraints on <span class="math-container">$c_1, c_2, c_3$</span> are as follows:</p>
<p>(1) : <span class="math-container">$c_1, c_2, c_3$</span> are all non-negative integers.</p>
<p>(2) : <span class="math-container">$c_1 + c_2 + c_3 = 1800 - (2 \times 5).$</span></p>
<p>As discussed in
<a href="https://brilliant.org/wiki/integer-equations-star-and-bars/" rel="nofollow noreferrer">this Stars and Bars article</a>,
the number of solutions to the above constraints is</p>
<p><span class="math-container">$$S_2 = \binom{1800 - [2\times 5] + [3-1]}{3-1} = \binom{1792}{2}.$$</span></p>
<p>Therefore,</p>
<p><span class="math-container">$$T_2 = S_2 \times (26)^{(1800 - [2 \times 5])}.$$</span></p>
<hr />
<p>Note that the string "abcde" consists of <span class="math-container">$5$</span> distinct letters. Therefore,
you can not have more than <span class="math-container">$360$</span> occurrences of this string in <span class="math-container">$1800$</span> characters.
Therefore, for <span class="math-container">$k > 360, T_k = 0.$</span></p>
<p>The approach taken to compute <span class="math-container">$T_2$</span> generalizes well when computing <span class="math-container">$T_k$</span>,
for <span class="math-container">$k \in \{3,4,\cdots, 360\}.$</span></p>
<p>When computing <span class="math-container">$T_k$</span>, of the <span class="math-container">$\binom{1796}{k}$</span> terms, <br>
assume that there are <span class="math-container">$S_k$</span> non-zero terms. <br>
Each of these non-zero terms will equal <span class="math-container">$(26)^{(1800 - [k \times 5])}.$</span></p>
<p>Let <span class="math-container">$c_1 = i_1 - 1.$</span> <br>
Let <span class="math-container">$c_2 = (i_2 - i_1) - 5.$</span> <br>
Let <span class="math-container">$c_3 = (i_3 - i_2) - 5.$</span> <br>
Let <span class="math-container">$c_4 = (i_4 - i_3) - 5.$</span> <br>
<span class="math-container">$\cdots$</span> <br>
Let <span class="math-container">$c_k = (i_k - i_{k-1}) - 5.$</span> <br>
Let <span class="math-container">$c_{k+1} = 1796 - i_k.$</span></p>
<p>Here, <span class="math-container">$c_1, c_2, \cdots, c_k, c_{k+1}$</span> represent the gaps
before and after <span class="math-container">$i_1, i_2, \cdots, i_k.$</span></p>
<p>The constraints on <span class="math-container">$c_1, c_2, \cdots, c_k, c_{k+1}$</span> are as follows:</p>
<p>(1) : <span class="math-container">$c_1, c_2, \cdots, c_k, c_{k+1}$</span> are all non-negative integers.</p>
<p>(2) : <span class="math-container">$c_1 + c_2 + \cdots + c_k + c_{k+1} = 1800 - (k \times 5).$</span></p>
<p>Therefore,</p>
<p><span class="math-container">$$S_k = \binom{1800 - [k\times 5] + [k]}{k} = \binom{1800 - [4 \times k]}{k}.$$</span></p>
<p>Therefore,</p>
<p><span class="math-container">$$T_k = S_k \times (26)^{(1800 - [k \times 5])}.$$</span></p>
<hr />
<p>In summary,</p>
<p><span class="math-container">$$N = \sum_{k=1}^{360} \left[(-1)^{(k+1)} \times \binom{1800 - [4 \times k]}{k} \times (26)^{(1800 - [k \times 5])}
\right].$$</span></p>
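<p>As a sketch, the closed form above can be evaluated exactly with integer arithmetic in Python; the resulting probability is roughly <span class="math-container">$1.5\times10^{-4}$</span>, close to the naive estimate <span class="math-container">$1796/26^5$</span>.</p>
<pre><code>from math import comb

N = sum((-1) ** (k + 1) * comb(1800 - 4 * k, k) * 26 ** (1800 - 5 * k)
        for k in range(1, 361))
D = 26 ** 1800

# report the probability to 12 decimal places using exact integer arithmetic
print((N * 10 ** 12) // D / 1e12)
</code></pre>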
|
3,390,407 | <p>If <span class="math-container">$t>0$</span> and <span class="math-container">$t^2,\ t+\frac{1}{t},\ t+t^2,\ \frac{1}{t}+\frac{1}{t^2}$</span> are all irrational, and
<span class="math-container">$$a_n=n+\left \lfloor \frac{n}{t} \right \rfloor+\left \lfloor \frac{n}{t^2} \right \rfloor,\\
b_n=n+\left \lfloor \frac{n}{t} \right \rfloor +\left \lfloor nt \right \rfloor,\\
c_n=n+\left \lfloor nt \right \rfloor+\left \lfloor nt^2 \right \rfloor, $$</span>
then every positive integer appears exactly once. In other words, the sequences <span class="math-container">$a_1,b_1,c_1,a_2,b_2,c_2,\cdots$</span> together contain all the positive integers without repetition.
I have checked every integer from <span class="math-container">$1$</span> to <span class="math-container">$10^6$</span> for <span class="math-container">$t=2^\frac{1}{4}$</span>:
<span class="math-container">$$a_n=1, 4, 7, 9, 12, 15, 16, 19, 22, 25, 27, 30, 32, 34, 37, 40, 43, 45, 47, 50,\dots \\
b_n=2, 5, 8, 11, 14, 18, 20, 23, 26, 29, 33, 36, 38, 41, 44, 48, 51, 54, 56, 59,\dots \\
c_n=3, 6, 10, 13, 17, 21, 24, 28, 31, 35, 39, 42, 46, 49, 53, 57, 61, 64, 67, 71,\dots $$</span></p>
<p>PS: This is a special case of following statement:</p>
<blockquote>
<p>If <span class="math-container">$t_1,t_2,\cdots,t_k>0$</span>, and <span class="math-container">$\forall i \not =j,\frac{t_j}{t_i}$</span> is irrational, and
<span class="math-container">$$a_i(n)=\sum_{j=1}^k{\left \lfloor \frac{t_j}{t_i}n \right \rfloor},i=1,2,\cdots,k,$$</span></p>
<p>then every positive integer appears exactly once among <span class="math-container">$a_1(n),\cdots,a_k(n)$</span>.</p>
</blockquote>
| Hw Chu | 507,264 | <p><strong>Edited 10/24.</strong> Now this is largely expanded, probably longer than what it needs to be. TL;DR: Construct a function <span class="math-container">$f$</span>, record where it ticks. The range of <span class="math-container">$f$</span> is exactly <span class="math-container">$\mathbb N$</span> and covers <span class="math-container">$\{a_n\}, \{b_n\}$</span> and <span class="math-container">$\{c_n\}$</span> without repetition.</p>
<p>That is true.</p>
<p>By substituting <span class="math-container">$t$</span> by <span class="math-container">$1/t$</span> if necessary, assume <span class="math-container">$t>1$</span>. By adding dummy <span class="math-container">$\lfloor \cdot \rfloor$</span>, we rewrite the sequences as
<span class="math-container">$$
\begin{aligned}
a_n &= \Big\lfloor\frac{n}{t^2}\Big\rfloor + \Big\lfloor\frac{n}{t}\Big\rfloor + \lfloor n\rfloor,\\
b_n &= \Big\lfloor\frac{n}{t}\Big\rfloor + \lfloor n \rfloor + \lfloor nt\rfloor,\\
c_n &= \lfloor n\rfloor + \lfloor nt\rfloor + \lfloor nt^2\rfloor.
\end{aligned}
$$</span></p>
<p>This suggests us the following process. We consider a function <span class="math-container">$$f(\delta) := \lfloor\delta\rfloor + \lfloor\delta t\rfloor + \lfloor\delta t^2\rfloor,$$</span> and start with <span class="math-container">$f(1/t^2) = a_1 = 1$</span>. For convenience denote <span class="math-container">$\delta_1 = 1/t^2$</span>.</p>
<p>Given <span class="math-container">$\delta_{i-1}$</span>, we will obtain <span class="math-container">$\delta_i$</span> by the following process. We increase <span class="math-container">$\delta$</span> continuously from <span class="math-container">$\delta_{i-1}$</span>, until we are in the situation that one of <span class="math-container">$\delta$</span>, <span class="math-container">$\delta t$</span> or <span class="math-container">$\delta t^2$</span> hits an integer. Then we will call the new <span class="math-container">$\delta$</span> value <span class="math-container">$\delta_i$</span>. Therefore we get a sequence <span class="math-container">$$\left\{\delta_1 = \frac1{t^2}, \delta_2, \cdots\right\}.$$</span></p>
<p>What do the function <span class="math-container">$f$</span> and the sequence <span class="math-container">$\{\delta_i\}$</span> tell us? Well, let's look at it.</p>
<ol>
<li><span class="math-container">$f$</span> only take values at integers, by the definition, and it is non-decreasing, with <span class="math-container">$f(\delta_1) = 1$</span>. Therefore by restricting the domain, the range <span class="math-container">$$f\left(\left[\delta_1, \infty\right)\right) \subseteq \mathbb N.$$</span></li>
<li>In the interval <span class="math-container">$\delta \in (\delta_{i-1}, \delta_i)$</span>, by the construction of the sequence <span class="math-container">$\{\delta_i\}$</span>, <span class="math-container">$\delta, \delta t, \delta t^2$</span> have the same integral part as <span class="math-container">$\delta_{i-1}, \delta_{i-1}t, \delta_{i-1}t^2$</span>, respectively. Hence <span class="math-container">$f(\delta) = f(\delta_{i-1})$</span>. In other words, <span class="math-container">$\{\delta_i\}$</span> are the places when <span class="math-container">$f$</span> "jump in value". Written in math, <span class="math-container">$$f(\{\delta_i\}) = f\left(\left[\delta_1, \infty\right)\right) \subseteq \mathbb N.$$</span></li>
<li>For every <span class="math-container">$\delta_i$</span> in the sequence, <span class="math-container">$f(\delta_i)$</span> is in <span class="math-container">$\{a_n\}, \{b_n\}$</span> or <span class="math-container">$\{c_n\}$</span>, depending on which of <span class="math-container">$\delta_i, \delta_it$</span> or <span class="math-container">$\delta_it^2$</span> is an integer. For instance, if <span class="math-container">$\delta_it = n$</span> is an integer, then <span class="math-container">$f(\delta_i) = \lfloor n/t\rfloor + \lfloor n\rfloor + \lfloor nt \rfloor = b_n$</span>. So <span class="math-container">$$f(\{\delta_i\}) \subseteq \{a_n\} \cup \{b_n\} \cup \{c_n\}.$$</span></li>
<li>Converse to 3., whenever <span class="math-container">$\delta, \delta t$</span> or <span class="math-container">$\delta t^2$</span> is an integer, <span class="math-container">$\delta \in \{\delta_i\}$</span>. In other words, the sequence <span class="math-container">$\{\delta_i\}$</span> can be obtained by merging and sorting the three sequences <span class="math-container">$\{n\}_{n=1}^\infty$</span>, <span class="math-container">$\{n/t\}_{n=1}^\infty$</span> and <span class="math-container">$\{n/t^2\}_{n=1}^\infty$</span>, or <span class="math-container">$$f(\{\delta_i\} \supseteq \{a_n\}) \cup \{b_n\} \cup \{c_n\}.$$</span></li>
<li>For an integer <span class="math-container">$i$</span>, <span class="math-container">$f(\delta_i) = f(\delta_{i-1})+1$</span>. Since <span class="math-container">$t$</span> and <span class="math-container">$t^2$</span> are irrational, only one of <span class="math-container">$\delta_{i-1}, \delta_{i-1}t, \delta_{i-1}t^2$</span> can be an integer. Same when <span class="math-container">$i-1$</span> is changed to <span class="math-container">$i$</span>. Therefore, comparing <span class="math-container">$f(\delta_{i-1})$</span> and <span class="math-container">$f(\delta_i)$</span>, two of the three terms are same (have the same integral part) and the third term jumps to the next integer. Therefore <span class="math-container">$f(\delta_i) = f(\delta_{i-1})+1$</span>. Together with the fact <span class="math-container">$f(\delta_1) = 1$</span>, we have <span class="math-container">$$f(\{\delta_i\}) = \mathbb N.$$</span> Putting 3., 4., and 5. together, we know that <span class="math-container">$$\{a_n\}\cup\{b_n\}\cup \{c_n\} = f(\{\delta_i\}) = \mathbb N.$$</span></li>
<li>If <span class="math-container">$a_n = b_{n'}$</span>, from 4., there exist <span class="math-container">$i, i' \in \mathbb N$</span> such that <span class="math-container">$a_n = f(\delta_i)$</span> and <span class="math-container">$b_{n'} = f(\delta_{i'})$</span>. From 5., it is enforced that <span class="math-container">$i = i'$</span>. From the construction, this means that both <span class="math-container">$\delta_i$</span> and <span class="math-container">$\delta_i t$</span> are integers, hence <span class="math-container">$t \in \mathbb Q$</span>, but this is impossible. Hence, <span class="math-container">$\{a_n\} \cap \{b_n\} = \varnothing$</span>. Similarly, we have <span class="math-container">$$\{a_n\} \cap \{b_n\} = \{b_n\} \cap \{c_n\} = \{a_n\} \cap \{c_n\} = \varnothing.$$</span></li>
</ol>
<p>This ends the proof of your conjecture.</p>
<p><strong>Remarks, specializations and generalizations:</strong></p>
<p>Playing with the "construct a floor function <span class="math-container">$f$</span> and record points where it jumped" trick as above by changing the function <span class="math-container">$f$</span>, there are other things you can say:</p>
<ul>
<li>(From @Jyrki Lahtonen) Classic Beatty's theorem says, if <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> are irrationals satisfying <span class="math-container">$$\frac1\alpha + \frac1\beta = 1$$</span>, then <span class="math-container">$\lfloor n\alpha\rfloor$</span> and <span class="math-container">$\lfloor n\beta\rfloor$</span> forms a partition of <span class="math-container">$\mathbb N$</span>. A little computation indicates <span class="math-container">$\alpha = 1 + \tau$</span> and <span class="math-container">$\beta = 1+\tau^{-1}$</span> for some irrational <span class="math-container">$\tau$</span>. Rewrite <span class="math-container">$\lfloor n\alpha \rfloor = \lfloor n\rfloor+\lfloor n\tau\rfloor$</span> and <span class="math-container">$\lfloor n\beta \rfloor = \lfloor n\rfloor+\lfloor n\tau^{-1}\rfloor$</span>, and consider the function <span class="math-container">$$f(\delta) = \lfloor\delta\rfloor + \lfloor\delta\tau\rfloor,$$</span> one can prove the classic Beatty theorem (probably this is not the proof of Beatty theorem most people know, at least I did not know at the beginning).</li>
<li>Consider the function <span class="math-container">$$f(\delta) = \left\lfloor \delta\frac{t_1}{t_1}\right\rfloor + \left\lfloor \delta\frac{t_2}{t_1}\right\rfloor + \cdots + \left\lfloor \delta\frac{t_k}{t_1}\right\rfloor,$$</span> your last statement can be proved:</li>
</ul>
<blockquote>
<p><span class="math-container">$\{a_i(n)\}$</span>, <span class="math-container">$i = 1, \cdots, k$</span> forms a partition of <span class="math-container">$\mathbb N$</span>, where <span class="math-container">$$a_i(n) = \sum_{j=1}^k \left\lfloor n\frac{t_j}{t_i}\right\rfloor.$$</span></p>
</blockquote>
<ul>
<li>Pass to infinity. By considering the function <span class="math-container">$$f(\delta) = \sum_{i=1}^\infty \left\lfloor \delta \tau^{-i}\right\rfloor,$$</span> we can prove the following:</li>
</ul>
<blockquote>
<p>Let <span class="math-container">$\tau > 1$</span> be transcendental (like <span class="math-container">$\pi$</span>; this is sufficient but not likely to be necessary). Then <span class="math-container">$$\mathbb N = \bigcup_{j=0}^\infty A_j, \quad A_{j} \cap A_{j'} = \varnothing \text{ if $j \neq j'$},$$</span> where <span class="math-container">$A_j = \{a_j^n\}_{n=1}^\infty$</span>, and <span class="math-container">$$a_j^n = \sum_{i=0}^\infty \lfloor n\tau^{j-i}\rfloor.$$</span></p>
</blockquote>
<p>This gives another proof that you can accommodate (countably) infinitely many travelers in infinitely many hotels, each hotel having infinitely many rooms, so that all rooms are occupied (i.e. the <span class="math-container">$\mathbb N = \mathbb N\times \mathbb N$</span> problem). This proof is neater than the one I knew (the square grid trick), in my opinion.</p>
<p>You can further generalize by changing the sequence <span class="math-container">$\{\tau^i\}$</span> into other infinite sequence, but the descriptions get uglier and I will omit it.</p>
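<p>For what it is worth, the partition in the boxed statement is easy to check numerically; here is a sketch in Python for the case <span class="math-container">$t=2^{1/4}$</span> mentioned in the question (the cut-off <span class="math-container">$N$</span> is an arbitrary choice):</p>
<pre><code>import math

t = 2 ** 0.25
N = 100_000

def terms(f):
    out, n = set(), 1
    while True:
        v = f(n)
        if v > N:
            return out
        out.add(v)
        n += 1

a = terms(lambda n: n + math.floor(n / t) + math.floor(n / t ** 2))
b = terms(lambda n: n + math.floor(n / t) + math.floor(n * t))
c = terms(lambda n: n + math.floor(n * t) + math.floor(n * t ** 2))

# disjoint and jointly covering 1..N, i.e. a partition
print(len(a) + len(b) + len(c) == N and a | b | c == set(range(1, N + 1)))
</code></pre>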
|
500,579 | <p>Is there a formula for the coefficients of $x^n$ for </p>
<p>$$
\prod_{i=1}^N(x+x_i)
$$</p>
<p>in terms of $x_i$?</p>
| Community | -1 | <p>The coefficient of $x$ is
$$\sum_{k=1}^N\prod_{\substack{i=1\\i\ne k}}^N x_i$$</p>
<p>and to see how we find this formula, just expand by choosing the $x$ from one factor and the $x_i$ from the others:
$$(\color{red}{x}+x_1)(x+\color{red}{x_2})\cdots(x+\color{red}{x_N})$$
and repeat the same thing for all the factors.</p>
<p><strong>Added</strong> By the same method we have the coefficient of $x^n$
$$\sum_{1\leq i_1<i_2<\cdots<i_{N-n}\le N}x_{i_1}x_{i_2}\cdots x_{i_{N-n}}$$</p>
|
500,579 | <p>Is there a formula for the coefficients of $x^n$ for </p>
<p>$$
\prod_{i=1}^N(x+x_i)
$$</p>
<p>in terms of $x_i$?</p>
| Michael Hardy | 11,667 | <p>$$
\begin{align}
& \phantom{{}={}} (x+x_1)(x+x_2)(x+x_3)(x+x_4) \\[12pt]
& = x^4 & \text{(all size-0 subsets)} \\
& \phantom{=} {} + (x_1+x_2+x_3+x_4) x^3 & \text{(all size-1 subsets)} \\
& \phantom{=} {} + (x_1 x_2 + x_1 x_3 + x_1 x_4 + x_2 x_3 + x_2 x_4 + x_3 x_4) x^2
& \text{(all size-2 subsets)} \\
& \phantom{=} {} + (x_1 x_2 x_3 + x_1 x_2 x_4 + x_1 x_3 x_4 + x_2 x_3 x_4) x
& \text{(all size-3 subsets)} \\
& \phantom{=} {} + (x_1 x_2 x_3 x_ 4) & \text{(all size-4 subsets)}
\end{align}
$$
The "subsets" referred to are subset of the set $\{x_1,x_2,x_3,x_4\}$.</p>
<p>A similar thing works when the number of members of this last-mentioned set is something other than four.</p>
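<p>A small symbolic check of this pattern, sketched in Python with sympy (here <span class="math-container">$N=5$</span> and the coefficient of <span class="math-container">$x^2$</span> are arbitrary choices):</p>
<pre><code>import sympy as sp
from itertools import combinations
from math import prod

x = sp.symbols('x')
xs = sp.symbols('x1:6')                       # x1, ..., x5, so N = 5
poly = sp.Poly(prod((x + xi) for xi in xs), x)

n = 2                                         # coefficient of x**2
claimed = sum(prod(c) for c in combinations(xs, len(xs) - n))
print(sp.expand(poly.coeff_monomial(x**n) - claimed))   # 0
</code></pre>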
|
2,834,219 | <p>I'd like to numerically evaluate the following integral using computer software but it has a singularity at $x=1$:</p>
<p>\begin{equation}
\int_1^{\infty} \frac{x}{1-x^4} dx
\end{equation}</p>
<p>I was thinking of a variable transformation, rewriting the expression, or something of the kind. One of my attempts was to use differentiation under the integral sign, but I only got this far:</p>
<p>\begin{equation}
I(b) = \int_1^{\infty} \frac{x}{1-x^4}e^{-bx} dx
\end{equation}</p>
<p>where setting $b=0$ gives the original expression. Differentiating w.r.t. $b$ gives</p>
<p>\begin{equation}
I'(b) = -\int_1^{\infty} \frac{x^2}{1-x^4}e^{-bx} dx = \int_1^{\infty} \frac{1}{1+x^2}e^{-bx} dx - \int_1^{\infty} \frac{1}{1-x^4}e^{-bx} dx
\end{equation}</p>
<p>It remains to integrate the two terms w.r.t. $x$ and then $b$, but they are not standard integrals.</p>
<p>Is there a better, faster, or easier way that I'm not thinking of?</p>
| Mark Viola | 218,419 | <p>Using partial fraction expansion reveals</p>
<p>$$\frac{x}{1-x^4}=\frac{x}{2(x^2+1)}-\frac{1}{4(x-1)}-\frac{1}{4(x+1)}$$</p>
<p>Hence, we see that for $t>1$</p>
<p>$$\begin{align}
\int_t^\infty \frac{x}{1-x^4}\,dx&=\frac14 \log\left(\frac{t^2-1}{t^2+1}\right)\tag1
\end{align}$$</p>
<p>Inasmuch as $\displaystyle \lim_{t\to1^+} \log\left(\frac{t^2-1}{t^2+1}\right)=-\infty$, the integral of interest diverges.</p>
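<p>A numerical sketch of this divergence in Python (scipy assumed available): the tail integral from <span class="math-container">$1+\epsilon$</span> to <span class="math-container">$\infty$</span> tracks <span class="math-container">$\frac14\log\frac{t^2-1}{t^2+1}$</span> and blows up as <span class="math-container">$\epsilon\to0$</span>.</p>
<pre><code>import numpy as np
from scipy.integrate import quad

for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    val, _ = quad(lambda x: x / (1 - x**4), 1 + eps, np.inf)
    t = 1 + eps
    print(eps, val, 0.25 * np.log((t**2 - 1) / (t**2 + 1)))
</code></pre>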
|
112,147 | <p>I have a long vector and some of the values (19 out of 64) are complex. I got them using the Mathematica Rationalize function, so the complex ones are written in the a+bi form. Is there a function I can apply to the entire vector, that would change my complex numbers to the form A*Exp[I*phi]? </p>
| Alexei Boulbitch | 788 | <p>Try this:</p>
<pre><code>{1, 1 + 2 I, 3 - 5 I, 7, 9 + I} /. x_ /; Head[x] == Complex ->
 Sqrt[Re[x]^2 + Im[x]^2]*Exp[I*ArcTan[Re[x], Im[x]]]
</code></pre>
<p>yielding</p>
<pre><code>(* {1, Sqrt[5] E^(I ArcTan[2]), Sqrt[34] E^(-I ArcTan[5/3]), 7,
 Sqrt[82] E^(I ArcTan[1/9])} *)
</code></pre>
<p><strong>Edit</strong>: to address your question.
If you want to have the argument shown as a fraction of Pi, say for angles of 45 or 60 degrees, this is achieved automatically. If you need to express each argument as a fraction of Pi, you might try this:</p>
<pre><code>{1, 1 + 2 I, 3 - 5 I, 7, 9 + I, 1 + I, 3 + I} /.
x_ /; Head[x] == Complex ->Sqrt[Re[x]^2 + Im[x]^2]*
Exp[NumberForm[Rationalize[N[ArcTan[Re[x], Im[x]]/\[Pi]]], {3, 2}]*I*\[Pi]]
</code></pre>
<p>But this will not be a form you can operate with further, because of the NumberForm function, and the negative arguments also look ugly. </p>
<p>The way to transform it depends upon the answer to the question: what do you want it in such a form for?</p>
<p>Have fun!</p>
|
112,147 | <p>I have a long vector and some of the values (19 out of 64) are complex. I got them using the Mathematica Rationalize function, so the complex ones are written in the a+bi form. Is there a function I can apply to the entire vector, that would change my complex numbers to the form A*Exp[I*phi]? </p>
| Jason B. | 9,490 | <p>The trouble with many methods is that they only work on integer inputs. Trying Alexei's answer with approximate numbers</p>
<pre><code>{1.0, 1.0 + 2 I, 3.0 - 5 I, 7, 9.0 + I} /.
x_ /; Head[x] == Complex ->
Sqrt[Re[x]^2 + Im[x]^2]*Exp[I ArcTan[Re[x], Im[x]]]
(* {1., 1. + 2. I, 3. - 5. I, 7, 9. + 1. I} *)
</code></pre>
<p>just spits back out the original answer. Also, a simpler way to do this would be</p>
<pre><code>argForm[n_] := (#1 E^(I #2)) & @@ AbsArg@n
</code></pre>
<p>But again, if the numbers are decimals this won't work, because <em>Mathematica</em> automatically parses numbers like these back into the original form,</p>
<pre><code>1. Exp[2. I]//FullForm
(* Complex[-0.4161468365471424`,0.9092974268256817`] *)
</code></pre>
<p>If you want the numbers to be in exponential form for display purposes, then this is the way to go,</p>
<pre><code>argForm[n_] := (Row@{Abs[n],
Superscript["\[ExponentialE]",
"\[ImaginaryI]" <> ToString[Arg[n]]]})
</code></pre>
<p>This will work for exact and approximate numbers,</p>
<pre><code>argForm[1 + 2. I]
argForm[1 + 2 I]
</code></pre>
<p><a href="https://i.stack.imgur.com/jEHfU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jEHfU.png" alt="enter image description here"></a></p>
|
482,596 | <p>How do I evaluate $\sin(20^\circ)$ exactly?</p>
<p>I derived the relationship between $\sin(\theta)$ and $\sin(3\theta)$, writing $x = \sin(\theta)$ and $y = \sin(3\theta)$:</p>
<p><a href="http://www.wolframalpha.com/input/?i=-4x%5E3+%2B+3x+%3D+y%2C+solve+for+x" rel="nofollow">http://www.wolframalpha.com/input/?i=-4x%5E3+%2B+3x+%3D+y%2C+solve+for+x</a></p>
<p>Now I am interested in substituting $\sin(60^\circ)$ and moving along, but I am not sure which formula will result in me getting a real solution.</p>
<p><a href="http://www.wolframalpha.com/input/?i=-4x%5E3+%2B+3x+%3D+%283%29%5E%281%2F2%29%2F2%2C+solve+for+x" rel="nofollow">http://www.wolframalpha.com/input/?i=-4x%5E3+%2B+3x+%3D+%283%29%5E%281%2F2%29%2F2%2C+solve+for+x</a></p>
<p>I am curious how to de-nest this mess into something cleaner. I would really like to get rid of all the imaginary numbers but if that is not possible I still feel that this can definitely be de-nested into a simpler looking form even if the radical count does not go down. </p>
<p>I have made a couple of attempts at substituting stuff back in, but my answer seems to change whenever I move things in and out of the cube root.</p>
| Nick Peterson | 81,839 | <p><strong>Hint:</strong> To show that it is not unique, all you need to do is find two distinct pairs $(s_1,t_1)$ and $(s_2,t_2)$ such that $1=7s_1+11t_1=7s_2+11t_2$.</p>
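<p>For instance (a sketch in Python of two such pairs, chosen by hand):</p>
<pre><code># two different integer pairs (s, t) with 7*s + 11*t = 1
for s, t in [(-3, 2), (8, -5)]:
    print(s, t, 7 * s + 11 * t)   # both lines end in 1
</code></pre>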
|
3,774,400 | <p>Just like the title says, I don't know how to write an example matrix here so that it looks like a matrix. If this makes sense, <span class="math-container">$A$</span> can be <span class="math-container">$[1\ 0\ 0;\ 0\ 1\ 0;\ 0\ 1\ 0]$</span> (in MATLAB syntax); then when I compute the determinant of <span class="math-container">$A-\lambda I$</span> I get <span class="math-container">$0$</span> for every <span class="math-container">$\lambda$</span>. In MATLAB it says the eigenvalues are <span class="math-container">$0,1,1$</span>.</p>
| Fred | 380,717 | <p>If <span class="math-container">$A$</span> is your matrix above, then</p>
<p><span class="math-container">$$ \det(A- \lambda E)=- \lambda(\lambda-1)^2.$$</span></p>
|
324,219 | <p>Given an urn with $M$ unique balls, how many times do I need to draw with replacement before the probability that I have seen each ball at least once is greater than $\epsilon$?</p>
| wece | 65,630 | <p>You can see this as a combinatorial problem.</p>
<p>There are $M^k$ possible runs of $k$ draws, each with the same probability (the draws are independent).</p>
<p>A run has seen each ball at least once exactly when the map sending each of the $k$ draws to the ball it produced is a surjection onto the $M$ balls. The number of surjections from a set of $k$ draws onto a set of $M$ balls is $M!\,S(k,M)$, where $S(k,M)$ denotes a Stirling number of the second kind.</p>
<p>Hence the number of runs of length $k$ that have seen all the balls at least once is $$M!\,S(k,M),$$
and it follows that the probability of such a run is ($0$ for $k<M$, and for $k\geq M$):
$$P_k=\frac{M!\,S(k,M)}{M^k}$$
So if you want $P_k\geq \epsilon$ you have to solve $$\frac{M!\,S(k,M)}{M^k}\geq\epsilon$$
It should be doable :)</p>
<p>I hope it helped.</p>
|
4,389,997 | <p>In Enderton's <em>A Mathematical Introduction to Logic</em>, he defines <span class="math-container">$n$</span>-tuples recursively using ordered pairs, i.e. <span class="math-container">$\langle x_1,\dots,x_{n+1}\rangle=\langle\langle x_1,\dots,x_n\rangle, x_{n+1}\rangle$</span>. But he also notes,</p>
<blockquote>
<p>Finite sequences are often defined to be certain finite functions, but
the above definition is slightly more convenient for us.</p>
</blockquote>
<p>I believe he's referring to the other definition (which I prefer) of <a href="https://en.wikipedia.org/wiki/Tuple#Tuples_as_functions" rel="nofollow noreferrer"><span class="math-container">$n$</span>-tuples as functions</a>.</p>
<p><a href="https://math.stackexchange.com/questions/2122856/endertons-a-mathematical-introduction-to-logic-question-about-n-tuples-in-th">Here</a> is a related question that provides some more background, but I'm not confused about the mathematics of Enderton's definition. I'm curious about why he feels it's slightly more convenient when it feels the opposite to me.</p>
<p><strong>In what way might Enderton's choice be slightly more convenient, at least in the context of his logic book?</strong>
If you have access to the book, the relevant pages are page 4 and remark 5 on page 15.</p>
| Tankut Beygu | 754,923 | <p>The author suggests that the definition of <em>n</em>-tuples as nested ordered pairs,
<span class="math-container">$$\langle x_1,\dots,x_{n+1}\rangle=\langle\langle x_1,\dots,x_n\rangle, x_{n+1}\rangle ,$$</span></p>
<p>has several cognitive and operational advantages, which might not be so significant for others. We realise from his scattered remarks that he pays heed to such features. For example, he writes on p. 209:</p>
<blockquote>
<p>Recall that any function <span class="math-container">$f:\mathbb{N}^k\rightarrow\mathbb{N}$</span> is also
a <span class="math-container">$(k + 1)$</span>-ary relation on <span class="math-container">$\mathbb{N}$</span>:</p>
<p><span class="math-container">$$\langle a_1,\ldots, a_k, b\rangle\in f\Longleftrightarrow f(a_1,\ldots,
> a_k) = b.$$</span></p>
<p>At one time it was popular to distinguish between the function and the
relation (which was called the <em>graph</em> of the function). Current
set-theoretic usage takes a function to be the same thing as its
graph. But we still have the two ways of looking at the function.</p>
</blockquote>
<p>For one thing, this definition is a <em>straightforward</em> case of effective (computable) enumerability; see the section 'Recursive Enumerability' (p. 238 ff.)</p>
<p>Another is that <span class="math-container">$\langle x_1,\dots,x_{n+1}\rangle$</span> is an expression of a relation that describes a predicate in a domain of discourse. By this type of definition, we can partially interpret a predicate and obtain a meaningful expression. For example, suppose</p>
<p><span class="math-container">$$\langle x_1, x_2, x_3\rangle=\langle\langle x_1, x_2\rangle, x_3\rangle$$</span></p>
<p>We can employ this expression meaningfully as describing a dyadic predicate <span class="math-container">$P(a, x_3)$</span>, obtained from a triadic predicate <span class="math-container">$P(x_1, x_2, x_3)$</span>, that we can operate on. Hence, we can form sentences <span class="math-container">$\forall x_3 P(a, x_3)$</span> or <span class="math-container">$\exists x_3 P(a, x_3)$</span>, which may be useful, particularly in studies with a philosophical streak.</p>
<p>An interesting point to pursue (understandably not mentioned in the book) is the intrinsic connection the definition bears to <a href="https://encyclopediaofmath.org/wiki/Exponential_law_for_sets" rel="nofollow noreferrer">Currying</a> (I'd like to say "Schönfinkelisation" to be historically correct):</p>
<p><span class="math-container">$$f(x_1,\dots,x_{n})\rightarrow (f'(x_1,\dots,x_{n-1}))(x_{n})\rightarrow (f''(x_1,\dots,x_{n-2})(x_{n-1}))(x_{n})\rightarrow\ldots\rightarrow((\ldots ((f^{n-1}(x_1))(x_{2})\ldots(x_{n-1}))(x_{n})$$</span></p>
<p>which is an important technique also in such other fields as computer programming and linguistics.</p>
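<p>A small sketch of the idea in Python (the function names here are made up for illustration):</p>
<pre><code>def curry3(f):
    # turn f(x1, x2, x3) into f'(x1)(x2)(x3), one argument at a time
    return lambda x1: lambda x2: lambda x3: f(x1, x2, x3)

def triple(x1, x2, x3):
    return (x1, x2, x3)

print(triple(1, 2, 3))           # (1, 2, 3)
print(curry3(triple)(1)(2)(3))   # (1, 2, 3), built up one argument at a time
</code></pre>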
|
2,906,797 | <p>I want to express this polynomial as a product of linear factors:</p>
<p>$x^5 + x^3 + 8x^2 + 8$</p>
<p>I noticed that $\pm i$ were roots just by looking at it, so two factors must be $(x- i)$ and $(x + i)$, but I'm not sure how I would find the remaining polynomial. For real roots, I would usually just use long division, but it turns out a little messy in this instance (for me at least), and I was wondering if there was a simpler method of finding the remaining polynomial. </p>
<p>Apologies for the basic question!</p>
| Mark Bennet | 2,906 | <p>If you spotted this by looking at it you have good intuition. When roots come in complex pairs you can always combine them to find a quadratic factor with real coefficients. Here $(x+i)(x-i)=x^2+1$</p>
<p>If you could do the first bit with intuition, I am sure you can do that division and complete the factorisation.</p>
|
662,744 | <h2>Question Statement</h2>
<blockquote>
<p>Let $K$ be a finite field of $q$ elements. Let $U$, $V$ be vector spaces over $K$ with
$\dim(U) = k$, $\dim(V) = l$. How many linear maps $U \rightarrow V$ are there?</p>
</blockquote>
<p>I'm struggling to answer this question. Given that a linear map is uniquely determined by its action on a basis we may as well consider the mappings of the standard basis. Even with this restriction, each of the $k$ basis vectors can map to not only $q$ distinct elements but also into $l$ different positions (or none, or into multiple positions).</p>
<p>What easy thing am I missing?</p>
| voldemort | 118,052 | <p>All such linear maps are of the form $c_1y_1+\cdots+c_ky_k$, where $y_1,\ldots,y_k$ are the images of the basis vectors of $U$ and the $c_i$ are the coordinates of the input vector. Note that you can have $q^l$ different choices for each $y_i$, since $y_i$ can be any vector of $V$.</p>
<p>Hope you can finish the problem now.</p>
|
662,744 | <h2>Question Statement</h2>
<blockquote>
<p>Let $K$ be a finite field of $q$ elements. Let $U$, $V$ be vector spaces over $K$ with
$\dim(U) = k$, $\dim(V) = l$. How many linear maps $U \rightarrow V$ are there?</p>
</blockquote>
<p>I'm struggling to answer this question. Given that a linear map is uniquely determined by its action on a basis we may as well consider the mappings of the standard basis. Even with this restriction, each of the $k$ basis vectors can map to not only $q$ distinct elements but also into $l$ different positions (or none, or into multiple positions).</p>
<p>What easy thing am I missing?</p>
| Asa Cremin | 77,802 | <p>The answer (if I actually got it right), for anyone still wondering:</p>
<p>Let $\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_k$ be a basis of $U$, $\mathbf{f}_1,\ldots,\mathbf{f}_l$ be a basis of $V$ and $T:U\rightarrow V$ be a linear map.</p>
<p>Then
$$T(\mathbf{e}_1) = \alpha_{11}\mathbf{f}_1 + \alpha_{12}\mathbf{f}_2 + \cdots + \alpha_{1l}\mathbf{f}_{l}$$
$$T(\mathbf{e}_2) = \alpha_{21}\mathbf{f}_1 + \alpha_{22}\mathbf{f}_2 + \cdots + \alpha_{2l}\mathbf{f}_l$$
$$\vdots $$
$$
T(\mathbf{e}_k) = \alpha_{k1}\mathbf{f}_1 + \alpha_{k2}\mathbf{f}_2 + \cdots + \alpha_{kl}\mathbf{f}_l$$</p>
<p>defines $T$. The number of different linear transformations is then just the number of different combinations of $\alpha_{ij}$'s. There are a finite number of combinations as there are only $q$ choices for each one.</p>
<p>I found two similar methods, each with a slightly different focus, that lead to the answer:</p>
<h2>More algebraic</h2>
<p>As there are $q$ different choices for each $\alpha_{ij}$ and they are chosen independently, so on each row there are $q^l$ different choices. Now over $k$ independently chosen rows we have
$$\underbrace{q^l \cdot q^l \cdot \cdots \cdot q^l}_{k\text{ times}}=(q^l)^k=q^{kl}$$
different choices</p>
<h2>More combinatorial</h2>
<p>we may write the linear transformation as a matrix of the coefficients ($\alpha_{ij}$'s) in the following way</p>
<p>$$
\pmatrix{
\alpha_{11} & \cdots & \alpha_{1l}\\
\\
\vdots & \ddots & \vdots\\
\\
\alpha_{k1} & \cdots & \alpha_{kl}
}
$$</p>
<p>You can now look at the number of ways of choosing combinations of non-zero $\alpha_{ij}$'s, which is every possible matrix and therefore every possible linear transformation (note that choosing no $\alpha_{ij}$ non-zero gives us the zero matrix).</p>
<p>There are </p>
<ul>
<li><p>$\binom{kl}{0} \cdot (q-1)^0 = 1$ ways of choosing no non-zero coefficients. We are choosing zero of the $k \times l$ coefficients.</p></li>
<li><p>$\binom{kl}{1} \cdot (q-1)^1 = (q-1)$ ways of choosing one $\alpha_{ij}$ non-zero. 1 of the $k\times l$ coefficients and $q-1$ different choices for it</p></li>
<li><p>$\binom{kl}{2} \cdot (q-1)^2$ ways of choosing two $\alpha_{ij}$ non-zero. There are $q-1$ non-zero choices for each of the 2 coefficients, so $(q-1)^2$ different combinations</p></li>
</ul>
<p>In general there are $\binom{kl}{i} \cdot (q-1)^i$ ways of choosing $i\ \alpha_{ij}$ non-zero. The total number of linear transformations is the number of different matrices, which is the sum of all the numbers above:</p>
<p>$$\sum_{i=0}^{kl} \binom{kl}{i} \cdot (q-1)^i = \sum_{i=0}^{kl} \binom{kl}{i} \cdot (q-1)^i \cdot 1^{kl-i}= (q-1+1)^{kl}=q^{kl}$$</p>
|
411,875 | <p>This is an exam question I encountered while studying for my exam for our topology course:</p>
<blockquote>
<p>Give two continuous maps from $S^1$ to $S^1$ which are not homotopic. (Of course, provide a proof as well.)</p>
</blockquote>
<p>The only continuous maps from $S^1$ to $S^1$ I can think of are rotations, and I thought rotations on a circle can be continuously morphed into one another. </p>
| nonlinearism | 59,567 | <p>$F_1:S^1\to S^1 $</p>
<p>$F_1(s)=(\cos(2\pi s),\sin(2\pi s))$ and</p>
<p>$F_2:S^1\to S^1$ </p>
<p>$F_2(s)=(\cos(4\pi s),\sin(4\pi s))$ where $s$ goes from $0$ to $1$, are not homotopic: $F_1$ has winding number (degree) $1$ while $F_2$ has winding number $2$, and the degree is a homotopy invariant because $\pi_1(S^1)\cong\mathbb{Z}$.</p>
|
4,594,043 | <p>For <span class="math-container">$|x|<1,$</span> we have
<span class="math-container">$$
\begin{aligned}
& \frac{1}{1-x}=\sum_{k=0}^{\infty} x^k \quad \Rightarrow \quad \ln (1-x)=-\sum_{k=0}^{\infty} \frac{x^{k+1}}{k+1}
\end{aligned}
$$</span></p>
<hr />
<p><span class="math-container">$$
\begin{aligned}
\int_0^1 \frac{\ln (1-x)}{x} d x & =-\sum_{k=0}^{\infty} \frac{1}{k+1} \int_0^1 x^k dx \\
& =-\sum_{k=0}^{\infty} \frac{1}{(k+1)^2} \\
& =- \zeta(2) \\
& =-\frac{\pi^2}{6}
\end{aligned}
$$</span></p>
<p><span class="math-container">$$
\begin{aligned}
\int_0^1 \frac{\ln (1-x) \ln x}{x} d x & =-\sum_{k=0}^{\infty} \frac{1}{k+1} \int_0^1 x^k \ln xdx \\
& =\sum_{k=0}^{\infty} \frac{1}{k+1}\cdot\frac{1}{(k+1)^2} \\
& =\zeta(3) \\
\end{aligned}
$$</span></p>
<hr />
<p><span class="math-container">$$
\begin{aligned}
\int_0^1 \frac{\ln (1-x) \ln ^2 x}{x} d x & =-\sum_{k=0}^{\infty} \frac{1}{k+1} \int_0^1 x^k \ln ^2 xdx \\
& =-\sum_{k=0}^{\infty} \frac{1}{k+1} \cdot \frac{2}{(k+1)^3} \\
& =-2 \zeta(4) \\
& =-\frac{\pi^4}{45}
\end{aligned}
$$</span></p>
<hr />
<p>In a similar way, I dare guess that</p>
<p><span class="math-container">$$\int_0^1 \frac{\ln (1-x) \ln ^n x}{x} d x =(-1)^{n+1}\Gamma(n)\zeta(n+2),$$</span></p>
<p>where <span class="math-container">$n$</span> is a non-negative <strong>real</strong> number.</p>
<p>Proof:
<span class="math-container">$$
\begin{aligned}
\int_0^1 \frac{\ln (1-x) \ln ^n x}{x} d x & =-\sum_{k=0}^{\infty} \frac{1}{k+1} \int_0^1 x^k \ln ^n xdx \\
\end{aligned}
$$</span>
Letting <span class="math-container">$y=-(k+1)\ln x $</span> transforms the last integral into a Gamma function as</p>
<p><span class="math-container">$$
\begin{aligned}
\int_0^1 x^k \ln ^n x d x & =\int_{\infty}^0 e^{-\frac{ky}{k+1}}\left(-\frac{y}{k+1}\right)^n\left(-\frac{1}{k+1} e^{-\frac{y}{k+1}} d y\right) \\
& =\frac{(-1)^n}{(k+1)^{n+1}} \int_0^{\infty} e^{-y} y^n d y \\
& =\frac{(-1)^n \Gamma(n+1)}{(k+1)^{n+1}}
\end{aligned}
$$</span></p>
<p>Now we can conclude that
<span class="math-container">$$
\begin{aligned}
\int_0^1 \frac{\ln (1-x) \ln ^n x}{x} d x & =(-1)^{n+1} \Gamma(n+1) \sum_{k=0}^{\infty} \frac{1}{(k+1)^{n+2}} \\
& =(-1)^{n+1} \Gamma(n+1)\zeta(n+2)
\end{aligned}
$$</span></p>
<p><strong>Can we</strong> evaluate <span class="math-container">$\int_0^1 \frac{\ln (1-x) \ln ^n x}{x} d x$</span> without expanding <span class="math-container">$\ln (1-x)$</span>?</p>
<p>Your comments and alternative methods are highly appreciated?</p>
| user170231 | 170,231 | <p>Without expanding using series at all (taking for granted any properties of special functions whose derivations require series manipulation):</p>
<p><span class="math-container">$$\begin{align*}
I &= \int_0^1 \frac{\log(1-x) \log^n(x)}x \, dx \\[1ex]
&= \int_0^1 \frac{\log(x) \log^n(1-x)}{1-x} \, dx \tag{1} \\[1ex]
&= (-1)^n \int_0^\infty x^n \log\left(1-e^{-x}\right) \, dx \tag{2} \\[1ex]
&= (-1)^{n+1} n \int_0^\infty x^{n-1} \operatorname{Li}_2(e^{-x}) \, dx \tag{3} \\[1ex]
&= (-1)^{n+1} n (n-1) \int_0^\infty x^{n-2} \operatorname{Li}_3(e^{-x}) \, dx = \cdots \tag{4} \\
&\;\vdots \\
&= (-1)^{n+1} n! \int_0^\infty \operatorname{Li}_{n+1}(e^{-x}) \, dx \\[1ex]
&= (-1)^{n+1} n! \operatorname{Li}_{n+2}(1) \\[1ex]
&= \boxed{(-1)^{n+1} \Gamma(n+1) \zeta(n+2)} \tag{5}
\end{align*}$$</span></p>
<hr />
<ul>
<li><span class="math-container">$(1)$</span> : substitute <span class="math-container">$x\mapsto1-x$</span></li>
<li><span class="math-container">$(2)$</span> : substitute <span class="math-container">$x\mapsto 1-e^{-x}$</span></li>
<li><span class="math-container">$(3)$</span> : integrate by parts, recalling <span class="math-container">$\displaystyle \frac d{dx}\operatorname{Li}_2(x) = -\frac{\log(1-x)}x$</span> where <span class="math-container">$\operatorname{Li}_2$</span> is the <a href="https://en.wikipedia.org/wiki/Polylogarithm#Dilogarithm" rel="nofollow noreferrer">dilogarithm</a></li>
<li><span class="math-container">$(4)$</span> : integrate by parts ad nauseam, using the recurrence <span class="math-container">$\displaystyle\frac d{dx}\operatorname{Li}_n(x)=\frac{\operatorname{Li}_{n-1}(x)}x$</span></li>
<li><span class="math-container">$(5)$</span> : <a href="https://en.wikipedia.org/wiki/Gamma_function" rel="nofollow noreferrer"><span class="math-container">$n!=\Gamma(n+1)$</span></a> and <a href="https://en.wikipedia.org/wiki/Polylogarithm#Relationship_to_other_functions" rel="nofollow noreferrer"><span class="math-container">$\operatorname{Li}_n(1)=\zeta(n)$</span></a></li>
</ul>
|
378,228 | <p>This is probably known, but I have not located a reference.</p>
<p>Let <span class="math-container">$P$</span> be the convex hull of <span class="math-container">$k$</span> points in <span class="math-container">$\mathbb R^n$</span> with rational coordinates. Consider the Euclidean square norm function <span class="math-container">$F:P\to\mathbb R$</span>, <span class="math-container">$F(x)=||x||^2$</span>.</p>
<p>Is it true that <span class="math-container">$\mathrm{min}_{x\in P}F(x)$</span> is rational?</p>
| Claudio Gorodski | 15,155 | <p>I think I may have answered my own question under the assumption that the points are linearly independent as vectors in <span class="math-container">$\mathbb R^n$</span>.</p>
<p>Denote the <span class="math-container">$k$</span> points by <span class="math-container">$v_1,\ldots, v_k\in\mathbb R^n$</span>. Let <span class="math-container">$x=\sum_{i=1}^k x_i v_i\in P $</span> be the point of minimum of <span class="math-container">$F$</span> on <span class="math-container">$P$</span>, so that <span class="math-container">$\sum_{i=1}^k x_i = 1$</span> and <span class="math-container">$x_i\geq0$</span> for
<span class="math-container">$i=1,\ldots, k$</span>.</p>
<p>By throwing away the <span class="math-container">$v_i$</span> with <span class="math-container">$x_i=0$</span>, we may assume all <span class="math-container">$x_i>0$</span>.
Namely, let <span class="math-container">$I=\{i\in\{1,\ldots,k\}:x_i\neq0\}$</span>. Then <span class="math-container">$x=\sum_{i\in I} x_iv_i$</span> and <span class="math-container">$\sum_{i\in I}x_i=1$</span> with <span class="math-container">$x_i>0$</span> for all <span class="math-container">$i\in I$</span>. Note that <span class="math-container">$x$</span> is the point of minimum of <span class="math-container">$F$</span> over <span class="math-container">$Q$</span>, where <span class="math-container">$Q$</span> is the convex hull of <span class="math-container">$\{v_i\}_{i\in I}$</span>.</p>
<p>Now <span class="math-container">$x$</span> is the point of minimum of <span class="math-container">$F$</span> over the whole affine plane <span class="math-container">$\sum_{i \in I} x_i=1$</span> (indeed, since all <span class="math-container">$x_i>0$</span>, the point <span class="math-container">$x$</span> lies in the relative interior of <span class="math-container">$Q$</span>, so it is a local and hence, by convexity of <span class="math-container">$F$</span>, a global minimum on that plane), and we can apply the Lagrange multiplier method. We want to minimize
<span class="math-container">$$F((t_i)_{i\in I})=\sum_{i,j\in I}g_{ij}t_it_j,$$</span>
where <span class="math-container">$g_{ij}=\langle v_i,v_j\rangle$</span>, over <span class="math-container">$\sum_{i\in I}t_i=1$</span>. The equations
are
<span class="math-container">$$\frac{\partial F}{\partial t_i}=2\sum_{j\in I} t_jg_{ij} =\lambda$$</span>
and
<span class="math-container">$$\sum_{i\in I}t_i=1.$$</span>
By the assumption, the matrix <span class="math-container">$(g_{ij})$</span> is invertible, so we can solve the
first set of equations for the <span class="math-container">$t_i$</span> in terms of <span class="math-container">$\lambda$</span>, and then
substitute in the last one to determine the value of <span class="math-container">$\lambda$</span>. This shows that
even the coordinates of the point of minimum are rational, so in particular, the sum of their squares.</p>
<p>Perhaps there is a way of reducing the general case to this one.</p>
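<p>To illustrate, the linear system above can be solved exactly with sympy; the two points below are an arbitrary rational example (and one must still check separately that the critical point of the affine problem lies in the simplex; here it does, since both coefficients come out positive):</p>

<pre><code>from sympy import Matrix, symbols, solve

# Two linearly independent points with rational coordinates (arbitrary example).
v = [Matrix([3, 2]), Matrix([-1, 4])]
k = len(v)
G = Matrix(k, k, lambda i, j: (v[i].T * v[j])[0])   # Gram matrix g_ij = <v_i, v_j>

t = list(symbols('t0:%d' % k))
lam = symbols('lam')
eqs = [2*sum(G[i, j]*t[j] for j in range(k)) - lam for i in range(k)]
eqs.append(sum(t) - 1)

sol = solve(eqs, t + [lam], dict=True)[0]
x = Matrix([0, 0])
for i in range(k):
    x += sol[t[i]] * v[i]

print(sol)            # all coefficients t_i (and lambda) come out rational
print((x.T * x)[0])   # 49/5 -- the value of ||x||^2 is rational, as expected
</code></pre>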
|
757,702 | <p>I am in my pre-academic year. We recently studied the Remainder sentence (at least that's what I think it translates) which states that any polynomial can be written as <span class="math-container">$P = Q\cdot L + R$</span></p>
<p>I am unable to solve the following:</p>
<blockquote>
<p>Show that <span class="math-container">$(x + 1)^{(2n + 1)} + x^{(n + 2)}$</span> can be divided by <span class="math-container">$x^2 + x + 1$</span> without remainder.</p>
</blockquote>
| Andrei Kulunchakov | 140,670 | <p>Hint: try to check whether the complex roots of $x^2+x+1$ are also roots of $(x+1)^{2n+1}+x^{n+2}$.</p>
<p>Also, there is another way to prove it. Note that $((x+1)^2-x)\cdot (x+1)^{2n-1}$ is divisible by $x^2+x+1$, because $(x+1)^2-x=x^2+x+1$. So we just need to prove that $x\cdot (x+1)^{2n-1}+x^{n+2}$ is divisible by $x^2+x+1$. Continuing in this way, we are reduced to proving that $(x^k\cdot (x+1)^{2n+1-2\cdot k}+x^{n+2}), \dots ,$ $(x^n(x+1)+x^{n+2})$ are all divisible by $x^2+x+1$. But the last one clearly is, because $x^n(x+1)+x^{n+2} = x^n(x^2+x+1)$.</p>
|
757,702 | <p>I am in my pre-academic year. We recently studied the Remainder sentence (at least that's what I think it translates) which states that any polynomial can be written as <span class="math-container">$P = Q\cdot L + R$</span></p>
<p>I am unable to solve the following:</p>
<blockquote>
<p>Show that <span class="math-container">$(x + 1)^{(2n + 1)} + x^{(n + 2)}$</span> can be divided by <span class="math-container">$x^2 + x + 1$</span> without remainder.</p>
</blockquote>
| Mark Bennet | 2,906 | <p>Suppose $a$ is a root of $x^2+x+1=0$, then we have both $$a+1=-a^2$$ and $$a^3=1$$</p>
<p>Let $f(x)=(x+1)^{2n+1}+x^{n+2}$ then $$f(a)=(-a^2)^{2n+1}+a^{n+2}=-a^{4n+2}+a^{n+2}=-a^{n+2}+a^{n+2}=0$$</p>
<p>Since the two distinct roots of the quadratic are also roots of $f(x)$ we can use the remainder theorem to conclude that the remainder is zero.</p>
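<p>(A quick symbolic check with sympy, for a few values of $n$, just to confirm the divisibility:)</p>

<pre><code>from sympy import symbols, rem, expand

x = symbols('x')
for n in range(1, 8):
    r = rem(expand((x + 1)**(2*n + 1) + x**(n + 2)), x**2 + x + 1, x)
    print(n, r)   # the remainder is 0 for every n tried
</code></pre>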
|
2,187,571 | <p>Consider $e^{it}$ with $t$ real. I can rewrite $t$ as $t=2\pi \tau$:</p>
<p>$ e^{i2\pi\tau} = (e^{i2\pi})^\tau = (1)^\tau = 1 $ for arbitrary $\tau$ or $t$. Where is the mistake in this calculation ? My guess that $(e^x)^y = e^{x\cdot y}$ only applies for real $x$ and $y$, but I don't see why this should be the case despite the example I presented. </p>
| skyking | 265,767 | <p>The problem lies in the meaning of $a^z$: it is only for the base $e$ that it is unambiguously defined for complex exponents.</p>
<p>For other $a$ we will fall into the "rabbit hole" of multi-functions, but we can do it nevertheless. You define $a^z$ otherwise as:</p>
<p>$$a^z = \exp(z \ln a)$$</p>
<p>For clarity I use $\exp$ for the exponential function, ie $\exp z = \sum z^k/k!$ and with $e^z$ I'll use the above definition.</p>
<p>But here we have to realize that $\ln$ is to be considered a multi-function: for each given solution $\omega$ of $e^\omega = a$, every $\omega + 2n\pi j$ is a solution as well. </p>
<p>Keeping this in mind the formula still holds, but we have to interpret $1^\tau$ accordingly, that is $\exp(\tau\ln 1)$ where $\ln 1 = 2n\pi j$. This means that $1^\tau = \exp(\tau 2n\pi j)$.</p>
<p>Still the identity $a^{wz} = (a^w)^z$ isn't without complications. By calculation we see that:</p>
<p>$$a^{wz} = \exp{wz \ln a}$$
$$(a^w)^z = \exp(z \ln (a^w)) = \exp(z \ln\exp(w \ln a))$$</p>
<p>Now they would be equal if $\ln\exp(\phi) = \phi$, but it's not that simple. We have instead $\ln\exp\phi = \phi + 2n\pi j$. </p>
|
2,495,176 | <p>For how many positive values of $n$ are both $\frac n3$ and $3n$ four-digit integers?</p>
<p>Any help is greatly appreciated. I think the smallest n value is 3000 and the largest n value is 3333. Does this make sense?</p>
| Lisa | 160,525 | <p>To be absolutely clear: $n$ is a positive integer, right?</p>
<p>Then for $\frac{n}{3}$ to be a $4$-digit number, you need $2999 < n < 29998$. But you also have to consider that not all those numbers are multiples of $3$.</p>
<p>But for $3n$ to be a $4$-digit number, you need $333 < n < 3334$.</p>
<p>These two ranges overlap, so the answer is simply to count up the multiples of $3$ in the overlap of $2999 < n < 3334$.</p>
|
101,078 | <p>While reading through several articles concerned with mathematical constants, I kept on finding things like this:</p>
<blockquote>
<p>The continued fraction for $\mu$(Soldner's Constant) is given by $\left[1, 2, 4, 1, 1, 1, 3, 1, 1, 1, 2, 47, 2, ...\right]$. </p>
<p>The <strong>high-water marks</strong> are 1, 2, 4, 47, 99, 294, 527, 616, 1152, ... ,
which occur at positions 1, 2, 3, 12, 70, 126, 202, 585, 1592, ... . </p>
</blockquote>
<p>(copied from <a href="http://mathworld.wolfram.com/SoldnersConstant.html" rel="nofollow">here</a>)</p>
<p>I didn't find a definition of <strong>high-water marks</strong> in the web, so I assume that it's a listing of increasing largest integers, while going through the continued fraction expansion.</p>
<p>Is this correct and is there special meaing behind them?</p>
| Michael Lugo | 173 | <p>Your definition seems correct to me -- at least, it agrees with the data you've provided and with my intuition, as a native speaker of English, of how this phrase is used. I don't know of a specific significance of these high-water marks, but it's well-known that cutting off a continued fraction expansion just before a particularly large coefficient gives a good rational approximation to the number being expanded. For example the <a href="http://oeis.org/A001203">continued fraction expansion of π</a> begins 3, 7, 15, 1, 292; truncating to [3; 7, 15, 1] gives the well-known, surprisingly good approximation 355/113.</p>
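<p>In code the definition amounts to a running-maximum ("record") scan. A short Python sketch, applied only to the coefficients quoted above (so it can reproduce the records only up to 47):</p>

<pre><code>def high_water_marks(cf):
    """Return (record value, 1-based position) for each new maximum coefficient."""
    records, best = [], 0
    for pos, a in enumerate(cf, start=1):
        if a > best:
            best = a
            records.append((a, pos))
    return records

soldner_cf = [1, 2, 4, 1, 1, 1, 3, 1, 1, 1, 2, 47, 2]   # partial expansion quoted above
print(high_water_marks(soldner_cf))
# [(1, 1), (2, 2), (4, 3), (47, 12)] -- matching the listed marks and positions
</code></pre>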
|
1,431,969 | <p>Just like in the title:</p>
<blockquote>
<p>How to compute the CDF of $X\cdot Y$ if $X,Y$ are independent random variables, uniformly distributed over $(-1,1)$?</p>
</blockquote>
<p>I tried using the following formula: the density of $X\cdot Y$ is the integral of $f(u/v)\cdot g(v)$, where $f$ is the density of $X$ and $g$ is the density of $Y$, but I could not get it to work.</p>
| Jack D'Aurizio | 44,121 | <p>Let $Z=X\cdot Y$ and $f_Z,f_Y,f_X$ be the pdfs of $X,Y,Z$. </p>
<p>$f_Z$ is quite trivially an even function, supported on $[-1,1]$. </p>
<p>So, let we compute, for any $p\in(0,1)$:</p>
<p>$$\mathbb{P}[0\leq Z\leq p]=2\int_{0}^{1}\int_{0}^{\min\left(1,\frac{p}{x}\right)}\frac{1}{2}\cdot\frac{1}{2}\,dy\,dx=\frac{1}{2}\int_{0}^{1}\min\left(1,\frac{p}{x}\right)\,dx\tag{1}$$
that gives:
$$\mathbb{P}[0\leq Z\leq p]=\frac{1}{2}\left(\int_{0}^{p}1\,dx+\int_{p}^{1}\frac{p}{x}\,dx\right)=\frac{p-p\log p}{2}\tag{2}$$
hence:
$$ \int_{0}^{p}f_Z(t)\,dt = \frac{p-p\log p}{2}\tag{3} $$
gives, through differentiation, $f_Z(p)=-\frac{\log p}{2}$. At last,
$$ f_Z(z) = -\frac{\log|z|}{2}\cdot\mathbb{1}_{(-1,1)}(z),\\ F_Z(z)=\frac{1}{2}+\frac{|z|-|z|\log |z|}{2}\,\text{Sign}(z)\quad\text{for }z\in(-1,1)\tag{4}$$
are the pdf and cdf of $Z$ (with $F_Z(z)=0$ for $z\le -1$ and $F_Z(z)=1$ for $z\ge 1$).</p>
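<p>A quick Monte Carlo sanity check of these formulas (a numpy sketch, not part of the derivation):</p>

<pre><code>import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 10**6)
y = rng.uniform(-1, 1, 10**6)
z = x * y

def F(t):   # the cdf derived above, for t in (-1, 1)
    return 0.5 + np.sign(t) * (abs(t) - abs(t) * np.log(abs(t))) / 2

for t in (-0.5, -0.1, 0.1, 0.5):
    print(t, (z <= t).mean(), F(t))   # empirical CDF vs. closed form
</code></pre>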
|
1,898,810 | <p>How do I integrate $\frac{1}{1-x^2}$ without using trigonometric identities or partial fractions? Thanks!</p>
| Community | -1 | <p>Lookup a table of derivatives and spot</p>
<p>$$(\text{artanh } x)'=\frac1{1-x^2}.$$</p>
|
3,552,695 | <p>The question is to find all real numbers solutions to the system of equations: </p>
<ul>
<li><span class="math-container">$y=\Large\frac{4x^2}{4x^2+1}$</span>,</li>
<li><span class="math-container">$z=\Large\frac{4y^2}{4y^2+1}$</span>,</li>
<li><span class="math-container">$x=\Large\frac{4z^2}{4z^2+1}$</span>, </li>
</ul>
<p>This seems simple enough, so I tried substituting the values of x, y and z into the different equations but I only ended up with a huge degree 8 equation which clearly doesn't seem like the right approach. I really have no idea on how to go about solving this if substitution is not the answer.<br>
Any help would be greatly appreciated :)</p>
| Dietrich Burde | 83,966 | <p>I had no difficulties with the equations. Substituting I obtain two linear equations with solutions
<span class="math-container">$$
(x,y,z)=(0,0,0), \quad (x,y,z)=\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)
$$</span>
and a polynomial of degree <span class="math-container">$6$</span>, namely
<span class="math-container">$$
f(x)=5696x^6 + 1600x^5 + 496x^4 + 96x^3 + 28x^2 + 4x + 1=0,
$$</span>
which has no real solution.</p>
<p>More precisely, substituting <span class="math-container">$y$</span> and <span class="math-container">$z$</span> by the first two equations, the third one is
<span class="math-container">$$
f(x)(2x-1)^2x=0.
$$</span></p>
|
622,883 | <p>I'm finding maximum and minimum of a function $f(x,y,z)=x^2+y^2+z^2$ subject to $g(x,y,z)=x^3+y^3-z^3=3$.</p>
<p>By the method of Lagrange multiplier, $\bigtriangledown f=\lambda \bigtriangledown g$ and $g=3$ give critical points. So I tried to solve these equalities, i.e.</p>
<p>$\quad 2x=3\lambda x^2,\quad 2y=3\lambda y^2,\quad 2z=-3\lambda z^2,\quad x^3+y^3-z^3=3$. </p>
<p>But it is too hard for me. Can anybody solve these?</p>
| mathlove | 78,967 | <p>Let us consider a set $\{a,a+1,\cdots,b-1,b\}$ where $1\lt a\lt b\in\mathbb N$.</p>
<p>Case 1 : If $a,b$ are even, the answer is $1+(b-a)/2$.</p>
<p>Case 2 : If $a,b$ are odd, the answer is $(b-a)/2$.</p>
<p>Case 3 : If $a$ is even and $b$ is odd, then the answer is $1+(b-a-1)/2.$</p>
<p>Case 4 : If $a$ is odd and $b$ is even, then the answer is the same as case 3.</p>
<p>Yours is case 4, so the answer is $1+(456-45-1)/2$.</p>
<p><strong>P.S.</strong> You can use these formulas in the case of $a=1$.</p>
|
1,248,329 | <p>Let $R$ be an integral domain, and let $r \in R$ be a non-zero non-unit. Prove that $r$ is irreducible if and only if every divisor of $r$ is either a unit or an associate of $r$.</p>
<p>Proof. ($\leftarrow$) Suppose $r$ is reducible then $r$ can be expressed as $r = ab$ where $a$, $b$ are not units. This contradicts the fact that every divisor of $r$ is either a unit or an associate of $r$.</p>
<p>($\rightarrow$) Suppose every divisor of $r$ is neither a unit nor an associate of $r$. Then $r$ is reducible.</p>
<p>Can someone verify if my proof is correct. However, I prefer a direct proof. This proof doesn't look nice. Can someone show me how I can do this directly?</p>
| Ronnie Brown | 28,586 | <p>Since the question asks only for hints I refer to the following paper (available <a href="http://pages.bangor.ac.uk/~mas010/pdffiles/bhk-gpds-mv.pdf" rel="nofollow">here</a>) </p>
<p>R. Brown, P/R. Heath, and K.H. Kamps, ``Groupoids and the
Mayer-Vietoris sequence'', <em>J. Pure Appl. Alg</em>. 30 (1983)
109-129.</p>
<p>which deals with the more general question of components and vertex groups of the pullback by a morphism $f: A \to C$ of a groupoid fibration or covering of a groupoid $C$. </p>
<p>The relevance to coverings of spaces is explained in <a href="http://pages.bangor.ac.uk/~mas010/topgpds.html" rel="nofollow">Topology and Groupoids</a> Chapter 10. </p>
|
2,727,237 | <p>$$
\begin{matrix}
1 & 0 & -2 \\
0 & 1 & 1 \\
0 & 0 & 0 \\
\end{matrix}
$$</p>
<p>I am told that the span of vectors equal $R^m$ where $m$ is the rows which has a pivot in it. So when describing the span of the above vectors, is it correct it saying that they don't span $R^3$ but only span $R^2$?.</p>
<p>Thanks</p>
| Theo Bendit | 248,286 | <p>A basis being orthonormal is dependent on the inner product used. Have a think: why are the coordinate vectors $(1, 0, 0, \ldots, 0)$ and $(0, 1, 0 ,\ldots, 0)$ orthogonal? Traditionally, if they were just considered vectors in $\mathbb{R}^n$, then <strong>under the dot product</strong>, they are orthogonal because their dot product is $0$. But, does taking the dot product mean anything here? Dot products on coordinate vectors won't necessarily correspond to the inner product on the space.</p>
<p>However, and this is what I think you might be getting at, it is true that any basis can be made orthonormal with an appropriate inner product! If you take a basis $(e_1, \ldots, e_n)$ on a real vector space (without an inner product), and define an inner product by
$$\langle \lbrace \xi_1, \ldots, \xi_n \rbrace, \lbrace \mu_1, \ldots, \mu_n \rbrace\rangle = \xi_1\mu_1 + \ldots + \xi_n\mu_n,$$
i.e. doing the dot product to the coordinate vectors, then we form an inner product under which $(e_1, \ldots, e_n)$ is orthonormal. (It's a nice little exercise to show that such a bilinear map is indeed an inner product, and that $(e_1, \ldots, e_n)$ is orthonormal.)</p>
<p>So basically, I wouldn't say that non-orthonormal bases are impossible, as given any inner product, most of our bases are not orthonormal. However, I would say that every basis can be made orthonormal by equipping the space with the right inner product.</p>
|
1,299,032 | <p>The following problem is from Taylor's PDE I. I do not get why $D \det(I)B=\frac{d}{d t}\det(I+t B)|_{t=0}$ hold. Since $\det(I)=1$ and $D$ is $n\times n$ matrix, the left side seems to be a matrix, while the right side is clearly a scalar. Could anyone tell me how I should correctly interpret $\det(I)$ in this case?</p>
<p><img src="https://i.stack.imgur.com/pIVUt.png" alt="enter image description here"></p>
| abel | 9,252 | <p>suppose the eigenvalues of $B$ are $\lambda_1, \lambda_2, \cdots, \lambda_n.$ then the eigenvalues of $I + tB$ are $1 + t\lambda_1, 1 + t\lambda_2, \cdots, 1 + t\lambda_n.$ we will use the fact that the determinant of a matrix is the product of its eigenvalues.</p>
<p>now we have $$det(I + tB) = (1 + t\lambda_1) (1 + t\lambda_2) \cdots (1 + t\lambda_n) = det(I)+ t(\lambda_1 + \lambda_2 \cdots +\lambda_n) + \cdots $$</p>
<p>therefore $$\frac{d}{dt}det(I + tB)\big|_{t = 0} = trace(B).$$</p>
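<p>A small numerical illustration of the identity, with a randomly chosen $B$ (numpy sketch):</p>

<pre><code>import numpy as np

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n))
I = np.eye(n)

h = 1e-7
numerical = (np.linalg.det(I + h * B) - np.linalg.det(I)) / h
print(numerical, np.trace(B))   # the two numbers agree closely
</code></pre>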
|
4,361,626 | <p><span class="math-container">$4$</span> balls are randomly distributed into <span class="math-container">$3$</span> cells (<span class="math-container">$3^4=81$</span> possibilities of equal probability).</p>
<p>What is the probability that there is a cell that contains exactly <span class="math-container">$2$</span> balls?</p>
<p>The correct answer is: <span class="math-container">$\frac{2}{3}$</span>, but i know don't where was i mistaken.</p>
<p>Here was my idea:</p>
<p>Let's define: <span class="math-container">$\forall _{i=1,2,3}:A_i$</span> = The event that cell #<span class="math-container">$i$</span> contains exactly <span class="math-container">$2$</span> balls.</p>
<p>Then, according to the <em>Inclusion–exclusion principle</em>, the answer should be:</p>
<p><span class="math-container">$P_{solution} = P(A_1 \cup A_2 \cup A_3) = P(A_1)+P(A_2)+P(A_3)-P(A_1\cap A_2)-P(A_1\cap A_3)-P(A_2\cap A_3)+P(A_1\cap A_2 \cap A_3)$</span></p>
<p>Where:
<span class="math-container">$$
\forall _{i=1,2,3}: \quad P(A_i)=\frac{{4 \choose 2}*2^2}{3^4}=\frac{8}{27}
$$</span>
<span class="math-container">$$
\forall_{i \neq j}: \quad P(A_i \cap A_j)= \frac{{4 \choose 2}*2}{3^4}=\frac{4}{27}
$$</span>
<span class="math-container">$$
P(A_1\cap A_2 \cap A_3)=0
$$</span></p>
<p>and so:</p>
<p><span class="math-container">$$P_{solution}=3*\frac{8}{27}-3*\frac{4}{27} = \frac{4}{9}$$</span></p>
<p>I can see that IF my calculation of <span class="math-container">$P(A_i \cap A_j)$</span> was <span class="math-container">$\frac{{4 \choose 2}}{3^4}$</span> (without multiplying by <span class="math-container">$2$</span>), then that would be correct, but i can't seem not to wonder why. I have to multiply by <span class="math-container">$2$</span>. Suppose we look at cell #1 and cell #2: I need to choose <span class="math-container">$2$</span> balls out of <span class="math-container">$4$</span>, that's <span class="math-container">$4 \choose 2$</span>. Let's say i chose the the balls <span class="math-container">$\{1,3\}$</span> and <span class="math-container">$\{2,4\}$</span>, then i must decide which cell will get the <span class="math-container">$\{1,3\}$</span> set and which will the <span class="math-container">$\{2,4\}$</span>. That's <span class="math-container">$2$</span> options, so we multiply by <span class="math-container">$2$</span>.</p>
<p>Any idea? Where was i mistaken? Can you show me your solutions?</p>
| Wuestenfux | 417,848 | <p>Here <span class="math-container">$J=\langle p(x)\rangle$</span> is the ideal generated by the underlying irreducible polynomial.</p>
<p>The result basically says that <span class="math-container">$\bar x = x+J$</span> is a zero of <span class="math-container">$p(x)$</span> which lies in the extension field <span class="math-container">$F/J$</span>.</p>
<p>Indeed, the computation shows that <span class="math-container">$p(\bar x)= \bar 0$</span>, with <span class="math-container">$\bar 0 =J$</span>.</p>
|
510,488 | <p>Using the fact that $x^2 + 2xy + y^2 = (x + y)^2 \ge 0$, show that the assumption $x^2 + xy + y^2 < 0$ leads to a contradiction...
So do I start off with...</p>
<p>"Assume that $x^2 + xy + y^2 <0$, then blah blah blah"?</p>
<p>It seems true...because then I go $(x^2 + 2xy + y^2) - (x^2 + xy + y^2) \ge 0$.
It becomes $2xy - xy \ge 0$, then $xy \ge 0$.
How is this a contradiction?
I think I'm missing some key point. </p>
| Ted Shifrin | 71,348 | <p>Assuming everything in the picture is a real number, you've arrived at $xy\ge 0$, and so $x^2+xy+y^2\ge 0$. This contradicts our assumption.</p>
|
3,861,820 | <p>Given the following <span class="math-container">$\frac{1}{2^2}+
\frac{1}{2^3}+\frac{1}{2^4}+...+\frac{1}{3^2}+\frac{1}{3^3}+...+$</span></p>
<p>Can this be symbolized as:
<span class="math-container">$$\sum_{n=2}^{\infty}2^{-n}+(n+1)^{-n}$$</span></p>
<p>and if so, are the following values for <span class="math-container">$S_{n}$</span> correct?</p>
<p><span class="math-container">$$S_n= \begin{cases}
\sqrt[\leftroot{-2}\uproot{2}n]{n},\\
-\frac{1}{4}\\
\end{cases}$$</span></p>
<p>The formula for <span class="math-container">$S_n$</span> is <span class="math-container">$$S_n=\frac{n}{2}(a_1+a_n)$$</span></p>
| marwalix | 441 | <p>Let me follow up on my comment.</p>
<p>We start with the inner sum, a geometric series</p>
<p><span class="math-container">$$\sum_{n=2}^\infty{1\over m^n}={1\over m^2}{1\over 1-{1\over m}}={1\over m^2-m}$$</span></p>
<p>Now we decompose the rational fraction</p>
<p><span class="math-container">$${1\over m(m-1)}={1\over m-1}-{1\over m}$$</span></p>
<p>And so the double sum is just telescoping</p>
<p><span class="math-container">$$\sum_{m=2}^\infty\sum_{n=2}^\infty{1\over m^n}=\sum_{m=2}^\infty \left({1\over m-1}-{1\over m}\right)=1$$</span></p>
|
3,815,891 | <p>Let <span class="math-container">$x_{n}$</span> be the positive real root of equation
<span class="math-container">$$(x^{n-1}+2^n)^{n+1} = (x^{n} + 2^{n+1})^{n}$$</span></p>
<p>How to prove that <span class="math-container">$x_{n} > x_{n + 1}$</span>?</p>
<p>Actually, <span class="math-container">$x_{n} > 2$</span> and I get that <span class="math-container">$x_{1} = 5, x_{2} \approx 3.5973, x_{3} \approx 3.1033$</span></p>
| Claude Leibovici | 82,404 | <p><em>This is not a proof.</em></p>
<p>Consider that you look for the non-trivial positive zero of function
<span class="math-container">$$f_n(x)=(x^{n-1}+2^n)^{n+1} - (x^{n} + 2^{n+1})^{n}$$</span> If you compute the series of <span class="math-container">$\frac{f_n(x)}{x^n}$</span> to <span class="math-container">$O(x)$</span>, you have
<span class="math-container">$$\frac{f_n(x)}{x^n}=\frac {a_n}x-b_n+O(x)\implies x_{(n)}=\frac{b_n}{a_n}$$</span> This gives as an <em>approximation</em>
<span class="math-container">$$x_{(n)}=2+\frac 2n$$</span> which is not fantastic but "almost" decent (I hope).</p>
<p><span class="math-container">$$\left(
\begin{array}{ccc}
n & \text{approximation} & \text{exact} \\
10 & 2.20000 & 2.35954 \\
20 & 2.10000 & 2.18495 \\
30 & 2.06667 & 2.12467 \\
40 & 2.05000 & 2.09405 \\
50 & 2.04000 & 2.07552 \\
60 & 2.03333 & 2.06309 \\
70 & 2.02857 & 2.05417 \\
80 & 2.02500 & 2.04747 \\
90 & 2.02222 & 2.04224 \\
100 & 2.02000 & 2.03805
\end{array}
\right)$$</span></p>
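<p>(For completeness, the "exact" column can be reproduced numerically. A scipy sketch; the bracket <span class="math-container">$[2,5]$</span> is an assumption that happens to work here because the logarithmic form of the equation is positive at <span class="math-container">$x=2$</span> and negative at <span class="math-container">$x=5$</span>.)</p>

<pre><code>import numpy as np
from scipy.optimize import brentq

def x_n(n):
    # log form of (x^(n-1) + 2^n)^(n+1) = (x^n + 2^(n+1))^n
    g = lambda x: (n + 1)*np.log(x**(n - 1) + 2.0**n) - n*np.log(x**n + 2.0**(n + 1))
    return brentq(g, 2.0, 5.0)

for n in (10, 20, 50, 100):
    print(n, round(x_n(n), 5))   # compare with the "exact" column above
</code></pre>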
|
3,815,891 | <p>Let <span class="math-container">$x_{n}$</span> be the positive real root of equation
<span class="math-container">$$(x^{n-1}+2^n)^{n+1} = (x^{n} + 2^{n+1})^{n}$$</span></p>
<p>How to prove that <span class="math-container">$x_{n} > x_{n + 1}$</span>?</p>
<p>Actually, <span class="math-container">$x_{n} > 2$</span> and I get that <span class="math-container">$x_{1} = 5, x_{2} \approx 3.5973, x_{3} \approx 3.1033$</span></p>
| yisishoujo | 339,918 | <p>Replacing <span class="math-container">$x$</span> by <span class="math-container">$2x$</span>, we have
<span class="math-container">\begin{equation}
1+\frac{x^n}{2} = \Big(1+\frac{x^{n-1}}{2}\Big)^{1+\frac{1}{n}} > 1 + \big(1+\frac{1}{n}\big)\frac{x^{n-1}}{2}
\end{equation}</span>
hence <span class="math-container">$x> 1+1/n$</span>. Let
<span class="math-container">\begin{equation}
f(y) = 1+\frac{y^{n+1}}{2} - \Big(1+\frac{y^n}{2}\Big)^{1+\frac{1}{n+1}}
\end{equation}</span>
then
<span class="math-container">\begin{equation}
f'(y) = \frac{y^n}{2}\Big(n+1-\frac{n(n+2)\big(1+\frac{y^n}{2}\big)^{\frac{1}{n+1}}}{(n+1)y}\Big)
\end{equation}</span>
<span class="math-container">$f'(y)>0$</span> if <span class="math-container">$y>1+1/n$</span>. It is easy to check <span class="math-container">$f(x)>0$</span>, hence <span class="math-container">$x>y_0$</span>(root of <span class="math-container">$f$</span>).</p>
|
1,518,258 | <p>I haven't been able to come up with a counterexample so far. </p>
| Dylan | 146,990 | <p>In general, this does not hold for any $n$ except for $n \leq 2$.</p>
<p>You can check that the result holds for $n \leq 2$ by just considering all of the possible values of $n$, $a$ and $b$. (There aren't that many)</p>
<p>If $n \geq 3$, then by Bertrand's Postulate, there is a prime $p$ strictly between $\frac{n}{2}$ and $n$.</p>
<p>Take $a=b=p$. Then since $p \leq n-1$, we have that $a \mid (n-1)!$ and $b \mid n!$.</p>
<p>Further, we have that $ab \leq (n-1)^2 < n!$. (Since $n! \geq n(n-1) > (n-1)^2$)</p>
<p>Thus $ab < n!$. But $p$ only divides $n!$ once (i.e. $p^2$ does not divide $n!$), and so $ab$ does not divide $n!$.</p>
<p>As a concrete example, lets take $n=42$. Then $23$ is a prime strictly between $21$ and $42$. Then my claim is that $23 \mid 41!$ and $23 \mid 42!$, and $23^2 \not\mid 42!$ even though $23^2 < 42!$.</p>
|
2,928,806 | <p><span class="math-container">$m(t)=\frac{t^2}{1-t^2},t<1$</span></p>
<p>The suggested answer provided by our teacher states that m(0)=0, so there's no real value with this function as mgf; hence mgf doesn't exist.</p>
<p>Intuitively I understand that <span class="math-container">$e^{xt}>0$</span> should always be true, so the expectation should always be positive under this transformation. But I still don't quite get what does "m(0)=0 so there's no real value to this function" and how this statement backed the conclusion "mgf does not exist".</p>
| drhab | 75,923 | <p><span class="math-container">$$m_X(t)=\mathbb Ee^{tX}\text{ so that }m_X(0)=\mathbb Ee^{0\cdot X}=\mathbb E1=1$$</span></p>
<p>showing that for <em>every</em> moment generating function we have <span class="math-container">$m(0)=1\neq0$</span>.</p>
|
2,168,554 | <p>Assuming you are playing roulette.</p>
<p>The probabilities to win or to lose are:</p>
<p>\begin{align}
P(X=\mathrm{win})&=\frac{18}{37}\\
P(X=\mathrm{lose})&=\frac{19}{37}
\end{align}</p>
<p>Initially 1$ is used. Everytime you lose, you double up the stake. If you win once, you stop playing. If required you play forever.</p>
<p>We can calculate two expectations:</p>
<p>Win somewhen:</p>
<p>$E[X_{win}]=\lim_{n\to\infty}1-(\frac{19}{37})^n=1$</p>
<p>The expected payoff:</p>
<p>$E[X_{payoff}]=\lim_{n\to\infty}\left(p^n(-(2^n-1))+(1-p^n)\right)=1-(\frac{38}{37})^n=-\infty$</p>
<p>This result confuses me: We have the probability of 1 to win eventually, but the expected payoff is $-\infty$. Whats wrong here?</p>
<p>Thank you</p>
| spaceisdarkgreen | 397,125 | <p>It is absolutely the case that you will win eventually with probability one. However this means your expected winnings is one, since when you inevitably win and walk away you win a single bet. I don't understand your calculation of expected winnings but rest assured it's wrong.</p>
<p>The usual caveat to mention with this strategy is that it actually requires infinitely deep pockets and is thus not realistic. If you have any loss limit where you have to walk away, no matter how large, your expected value is negative because of the remote chance of losing a huge amount. This is a consequence of the optional stopping theorem.</p>
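<p>To put numbers on that caveat: with a cap of $k$ consecutive losses (after which you must walk away), the expected profit can be computed exactly, and it is negative for every cap. A small Python sketch, taking $19/37$ as the probability of losing a single even-money bet:</p>

<pre><code>from fractions import Fraction

p_lose = Fraction(19, 37)
for k in (1, 5, 10, 20, 30):
    p_bust = p_lose**k          # probability of k losses in a row
    loss_if_bust = 2**k - 1     # total amount staked and lost: 1 + 2 + ... + 2**(k-1)
    ev = (1 - p_bust)*1 - p_bust*loss_if_bust
    print(k, float(ev))         # negative for every cap k
</code></pre>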
|
33,743 | <p>I have a lot of sum questions right now ... could someone give me the convergence of, and/or formula for, $\sum_{n=2}^{\infty} \frac{1}{n^k}$ when $k$ is a fixed integer greater than or equal to 2? Thanks!!</p>
<p>P.S. If there's a good way to google or look up answers to these kinds of simple questions ... I'd love to know it...</p>
<p>Edit: Can I solve it by integrating $\frac{1}{x^k}$ ? I can show it converges, but to find the formula? Is my question just the Riemann Zeta function?</p>
<p>(edit)</p>
<p>Thanks guys! This got me the following result:</p>
<p>$\sum_{p} \frac{1}{p} \log \frac{p}{p-1} ~ ~ \leq ~ \zeta(2)$</p>
<p>summing over all primes $p$. (And RHS is Riemann zeta function.)</p>
<p>First, sum over all integers $p$ instead of primes. Then transform the log into $\sum_{m=1}^{\infty} \frac{1}{m p^{m}}$ (<a href="http://en.wikipedia.org/wiki/Natural_logarithm#Derivative.2C_Taylor_series" rel="nofollow">reference: wikipedia</a>. I know.). Now we have (with rearranging):</p>
<p>$\leq ~ \sum_{m=1}^{\infty} \frac{1}{m} \sum_{p=2}^{\infty} \frac{1}{p^{m+1}}$</p>
<p>By the result of this question (Arturo's answer), this inner sum, which equals $\zeta(m+1)-1$, is at most $\frac{1}{m+1-1} = \frac{1}{m}$. So we have</p>
<p>$\leq ~ ~ \sum_{m=1}^{\infty} \frac{1}{m} \frac{1}{m} = \zeta(2)$</p>
<p>I think this is a very pretty little proof. Thanks again, hope you math people enjoyed reading this....</p>
| Arturo Magidin | 742 | <p>The fact that these series converge follows from the integral test; their values are somewhat more complicated, as indicated by the comments: they correspond to evaluating Riemann's zeta function at $k$, and this is highly nontrivial, even for positive integral $k$ (called the <a href="http://en.wikipedia.org/wiki/Zeta_constant" rel="nofollow">zeta constants</a>). There are formulas for even integer values of $k$, though in terms of Bernoulli numbers. Note that your series is actually one less than the zeta constants, since
$$\zeta(k) = \sum_{n=1}^{\infty}\frac{1}{n^k}$$
and your sum starts with $n=2$, not $n=1$.</p>
<p>As mentioned, the Integral Test shows these series converge. But to do it explicitly:</p>
<p>Consider the sequence of partial sums
$$s_n = \sum_{r=2}^n \frac{1}{r^k}.$$
Since all summands are positive, this is an increasing sequence:
$$s_2\leq s_3\leq s_4\leq\cdots$$
so the convergence of the sequence of partial sums is equivalent to showing that the sequence is bounded. </p>
<p>To show the sequence is bounded, consider the function $\frac{1}{x^k}$. This function is decreasing, so if you approximate
$$\int_1^b\frac{1}{x^k}\,dx$$
using a right hand sum, you will get an underestimate for the integral. </p>
<p>Consider $$\int_1^{n}\frac{1}{x^k}\,dx,$$
and approximate it using a right hand sum with the interval $[1,n]$ divided into $n-1$ equal parts (so we break up the integral into $[1,2]$, $[2,3],\ldots,[n-1,n]$). The right hand sum with that partition is given by:
$$\mathrm{Right Sum} = \frac{1}{2^k} + \frac{1}{3^k} + \cdots + \frac{1}{n^k} = s_{n}.$$
Since right hand sums are underestimates, then for every positive integer $n$ we have that:
$$s_n \leq \int_1^n\frac{1}{x^k}\,dx = \frac{1-n^{1-k}}{k-1}.$$
Since the bounds get larger for larger $n$, we have:
$$s_n\leq \frac{1-n^{1-k}}{k-1} \leq \lim_{n\to\infty}\frac{1 - n^{1-k}}{k-1} = \frac{1}{k-1}.$$
So the sequence of $s_n$ is bounded above. Since it is increasing, the sequence of partial sums converges.</p>
<p>Since the sequence of partial sums converges, the series converges as well.</p>
<p>(This is the essence of the Integral Test: if you have a series
$$\sum a_n$$
such that there is a positive, continuous, decreasing function $f(x)$ such that $f(n)=a_n$ for all $n$, then the convergence of the series is equivalent to the convergence of the improper integral
$$\int_1^{\infty}f(x)\,dx$$
in the sense that if the integral converges, then the series converges; and if the integral diverges, then the series diverges. Of course, you can change the lower limit of the integral if necessary.)</p>
|
854,671 | <p>So I'm a bit confused with calculating a double integral when a circle isn't centered on $(0,0)$. </p>
<p>For example: Calculating $\iint(x+4y)\,dx\,dy$ of the area $D: x^2-6x+y^2-4y\le12$.
So I kind of understand how to center the circle and solve this with polar coordinates. Since the circle equation is $(x-3)^2+(y-2)^2=25$, I can translate it to $(u+3)^2+(v+2)^2=25$ and go on from there.</p>
<p>However I would like to know if I could solve this without translating the circle to the origin. I thought I could, so I simply tried solving $\iint(x+4y)\,dx\,dy$ by doing this:
$\int_0^{2\pi}\,d\phi\int_0^5(r\cos\phi + 4r\sin\phi)r\,dr$ but this doesn't work. I'm sure I'm missing something, but why should it be different? the radius is between 0 and 5 in the original circle as well, etc.</p>
<p>So my questions are:</p>
<ol>
<li><p>How can I calculate something like the above integral without translating the circle to the origin? What am I doing wrong?</p></li>
<li><p>I would appreciate a good explanation of what are the steps exactly when translating the circle. I kind of "winged it" with just saying "OK, I have to move the $X$ back by 3, so I'll call it $X+3$, the same with the $Y$ etc. If someone could give a clear breakdown of the steps that would be very nice :)</p></li>
</ol>
<p>Thanks!</p>
| Martin Sleziak | 8,297 | <p>Let us ignore leap years for a moment.</p>
<p>And let us assume that some date (e.g., 13th) falls on Monday in January. </p>
<p>What happens in February? January has $31=4\cdot 7+3$ days, so the days move by 3 to Thursday.</p>
<p>So if we denote the days in week by numbers $\{0,1,2,3,4,5,6\}$, we only have to consider the number of days in a month, compute modulo 7 and see what happens:</p>
<pre><code>0 January (31 days)
3 February (28 days)
3 March (31 days)
6 April (30 days)
1 May (31 days)
4 June (30 days)
6 July (31 days)
2 August (31 days)
5 September (30 days)
0 October (31 days)
3 November (30 days)
5 December (31 days)
1 January next year
</code></pre>
<p>(Computing day in January in the next year is irrelevant for this question; but comparing this with an actual calendar is a good sanity check.)</p>
<p>Notice, that all numbers 0,1,2,3,4,5,6 appear in the above table.</p>
<hr>
<p>Now what happens if a year starts not on Monday (=0) but on Tuesday. You simply have to add $+1$ (computing modulo 7) in each row of the table. But since every number appeared at least one, we will get $6+1=0$ in some row. And the same is true for any other day.</p>
<p>So now you only have to create similar table for a leap year, check whether all numbers from 0 to 6 appear there, and you are done.</p>
<hr>
<p>Somewhat related: <a href="http://en.wikipedia.org/wiki/Friday_the_13th#Occurrence">Occurrence of Friday the 13th</a> on Wikipedia.</p>
|
611,198 | <p>A corollary to the Intermediate Value Theorem is that if $f(x)$ is a continuous real-valued function on an interval $I$, then the set $f(I)$ is also an interval or a single point.</p>
<p>Is the converse true? Suppose $f(x)$ is defined on an interval $I$ and that $f(I)$ is an interval. Is $f(x)$ continuous on $I$? </p>
<p>Would the answer change if $f(x)$ is one-to-one?</p>
| Andrés E. Caicedo | 462 | <p>Here is a bad counterexample: <a href="http://en.wikipedia.org/wiki/Conway_base_13_function" rel="nofollow">Conway's base-$13$ function</a> takes <em>all values</em> on any (non-degenerate) interval. (This is stronger than just asking that $f(I)$ is an interval: You get that $f(J)$ is an interval for all intervals $J$. Note that requiring this removes the counterexamples offered in the other answers.) </p>
|
2,990,642 | <p><span class="math-container">$\lim_{n\to \infty}(0.9999+\frac{1}{n})^n$</span></p>
<p>Using Binomial theorem:</p>
<p><span class="math-container">$(0.9999+\frac{1}{n})^n={n \choose 0}*0.9999^n+{n \choose 1}*0.9999^{n-1}*\frac{1}{n}+{n \choose 2}*0.9999^{n-2}*(\frac{1}{n})^2+...+{n \choose n-1}*0.9999*(\frac{1}{n})^{n-1}+{n \choose n}*(\frac{1}{n})^n=0.9999^n+0.9999^{n-1}+\frac{n-1}{2n}*0.9999^{n-2}+...+n*0.9999*(\frac{1}{n})^{n-1}+(\frac{1}{n})^n$</span></p>
<p>A limit of each element presented above is 0. How should I prove that limit of "invisible" elements (I mean elements in "+..+") is also 0?</p>
| Travis Willse | 155,629 | <p><strong>Hint</strong> Instead of using the binomial expansion, observe that for <span class="math-container">$n > 10^5$</span> the quantity in parentheses is at most <span class="math-container">$1 - \frac{1}{10^5 (10^5 + 1)}$</span>. Now apply the Squeeze Theorem.</p>
|
1,651,922 | <p>I'm solving this equation:</p>
<p>$$\sin(3x) = 0$$</p>
<p>The angle is equal to 0, therefore:</p>
<p>$$3x=0+2k\pi \space\vee\space3x= (\pi-0)+2k\pi$$ </p>
<p>$$x = \frac {2}{3}k\pi \space \vee \space x = \frac {\pi + 2k\pi}{3}$$</p>
<p>Though, the answer is </p>
<p>$$x = k\frac {\pi}{3}$$</p>
<p>It looks like the two trigonometric equations have been combined into one. I must have made a mistake. Any hints?</p>
| mathmandan | 198,422 | <p>Your solutions are:
$$
x = \frac{\pi}{3}(2k),\phantom{NNNNNNNN}
x = \frac{\pi}{3}(2k + 1).
$$
So, you've shown that $x$ is either $\frac{\pi}{3}$ times an even integer, or else $\frac{\pi}{3}$ times an odd integer. In other words, $x$ is $\frac{\pi}{3}$ times an integer.</p>
|
932,430 | <p>I am studying about how a real number is defined by its properties. The three type of properties that make the real numbers what they are.</p>
<ol>
<li><p>Algebraic properties i.e, the axioms of addition, subtraction multiplication and division.</p></li>
<li><p>Order properties i.e., the inequality properties</p></li>
<li><p>Completeness property </p></li>
</ol>
<p>Here is the question : I am not able to understand the completeness property and please explain it to me in detail(it says upper bound lower bound ......)as I am a self learner.</p>
| MJD | 25,554 | <p>The crucial problem with rational numbers is that they are <em>incomplete</em>. It was discovered about 2300 years ago that there is no rational number whose square is 2. But why should there be a square root of 2? One reason is that it's easy to find a sequence of rational numbers that appears to be getting closer and closer to the square root of 2:</p>
<p>$$\frac11, \frac 32, \frac 75, \frac{17}{12}, \frac{41}{29},\ldots, \frac ab, \frac {a+2b}{a+b},\ldots$$ </p>
<p>and one can show that although the terms in this sequence get closer and closer together, there isn't anything they get closer <em>to</em>, because if there was, its square would be $2$, and there is no rational number whose square is 2. Or one can consider the sequence of rational numbers $$1, 1.4, 1.41, 1.414, 1.4142, \ldots$$ which is similar: the terms get closer and closer together as you look farther along the sequence, but they do not get close to any rational number, again because if they did that rational number would be a square root of 2, and there is no such rational.</p>
<p>So the rational numbers are literally incomplete; there are
“missing” numbers.</p>
<p>The real numbers solve this problem: they are a system of numbers that contains the rationals, but has the property that if $S$ is any sequence whose members eventually get closer and closer together, like the examples above, then there is some real number that the elements of $S$ approach as closely as desired; $S$ converges to some real number. This is the “completeness” property you are looking for.</p>
|
29,443 | <p>I'm looking at a <a href="http://en.wikipedia.org/wiki/Partition_function_%28statistical_mechanics%29#Relation_to_thermodynamic_variables" rel="nofollow noreferrer">specific</a> derivation on wikipedia relevant to statistical mechanics and I don't understand a step.</p>
<p>$$ Z = \sum_s{e^{-\beta E_s}} $$</p>
<p>$Z$ (the partition function) encodes information about a physical system. $E_s$ is the energy of a particular system state. $Z$ is found by summing over all possible system states.</p>
<p>The expected value of $E$ is found to be:</p>
<p>$$ \langle E \rangle = -\frac{\partial \ln Z}{\partial \beta} $$</p>
<p>Why is the variance of $E$ simply defined as:</p>
<p>$$ \langle(E - \langle E\rangle)^2\rangle = \frac{\partial^2 \ln Z}{\partial \beta^2} $$</p>
<p>just a partial derivative of the mean. </p>
<p>What about this problem links the variance and mean in this way?</p>
| Fabian | 7,266 | <p>The answer is valid for the partition sum $Z$ (which is closely related to the <a href="http://en.wikipedia.org/wiki/Moment-generating_function" rel="nofollow">moment generating function</a>). The reason is the special structure of the partition sum $$Z = \sum_s e^{-\beta E_s}.$$
The system attains a state $s$ with energy $E_s$ with <em>probability</em> $$P_s=\frac{e^{-\beta E_s}}{Z}.$$</p>
<p>Given this definition it is easy to see that
$$-\partial_\beta \ln Z = -\frac{\partial_\beta Z}{Z}
= \sum_s E_s \frac{e^{-\beta E_s}}{Z}= \sum_s P_s E_s =\langle E \rangle .$$</p>
<p>Similarly, one can easily convince oneself that
$$
\begin{align*}
\partial_\beta^2 \ln Z &= -\partial_\beta \left[ \sum_s E_s \frac{e^{-\beta E_s}}{Z} \right]
=\sum_s E_s^2 \frac{e^{-\beta E_s}}{Z} - \left[ \sum_s E_s \frac{e^{-\beta E_s}}{Z}\right] \left[\sum_{s'} E_{s'} \frac{e^{-\beta E_{s'}}}{Z}\right]\\
&= \langle E^2\rangle -\langle E\rangle^2 = \langle (E- \langle E\rangle)^2\rangle,
\end{align*}$$
i.e., the variance is given by the second derivative of $\ln Z$.</p>
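<p>For a concrete check, take a small, arbitrary set of energy levels and compare a numerical second derivative of $\ln Z$ with the variance computed directly (a numpy sketch; the energies and $\beta$ are made up for the example):</p>

<pre><code>import numpy as np

E = np.array([0.0, 0.3, 1.1, 2.5])   # arbitrary energy levels
beta = 0.7

def lnZ(b):
    return np.log(np.sum(np.exp(-b * E)))

P = np.exp(-beta * E) / np.sum(np.exp(-beta * E))   # Boltzmann probabilities
mean = np.sum(P * E)
var  = np.sum(P * E**2) - mean**2

h = 1e-5
d1 = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)
d2 = (lnZ(beta + h) - 2*lnZ(beta) + lnZ(beta - h)) / h**2

print(mean, d1)   # <E>      vs  -d lnZ/d beta
print(var, d2)    # variance vs  d^2 lnZ/d beta^2
</code></pre>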
|
851,959 | <p>In Halmos's Naive Set Theory about well-ordering set, it states that if a collecton <span class="math-container">$\mathbb{C}$</span> of well-ordered set is a chain w.r.t continuation, then the union of these sets is a well-ordered set. However, I cannot see why it is well-ordered. That is, I cannot see why each of its subset has a smallest element.</p>
| Asaf Karagila | 622 | <p>Note that in general, the union of a $\subseteq$-chain of well-ordered sets is not necessarily a well-ordered set. For example, write $\Bbb Q$ as $\{q_n\mid n\in\Bbb N\}$, then it is the increasing union of $Q_n=\{q_i\mid i<n\}$, with the linear order induced by $\Bbb Q$. Then $\{Q_n\mid n\in\Bbb N\}$ is a chain of well-ordered sets whose union is the furthest thing from a well-ordered set.</p>
<p>But here we require more. We require that this is not a chain in the inclusion relation, but rather in the continuation relation. So as you progress along the chain, you don't add new information "below" information that you already had.</p>
<p>What do I mean by that? Note that in the case of the $Q_n$'s at some point you have added an element which lies below $q_0$ in the order of the rational numbers; later on you have added an element which lies between $q_0$ and $q_1$, and so on and so forth.</p>
<p>If that would have been a continuation relation, then whenever $m<n$ and $a,b\in Q_m$, and $c\in Q_n$ such that $a<c<b$ then it is necessarily the case that $c\in Q_m$.</p>
<p>So why is the union of such a chain well-ordered? Well, if $U$ is a non-empty set, then it is necessarily the case that it meets one of the ordered sets in the chain on a non-empty set. That non-empty intersection has a least element $u$. Here we use the continuation of the orders, and we conclude that it was impossible that $U$ has elements below $u$.</p>
<p>(I have given you an outline, but you should sit down and work out the details yourself.)</p>
|
159,510 | <p>I have data points of the form <code>{x,y,z}</code>:</p>
<pre><code>s = {{0, 1, 2}, {3, 4, 5}, {6, 7, 8}}
</code></pre>
<p>And I need to remove <em>entire data points</em> based on a condition. For example, let's say that I want to remove any data points where <code>z > 6</code>. The result should be this:</p>
<pre><code>s2 = {{0, 1, 2}, {3, 4, 5}}
</code></pre>
<p>How do I do this? I think <code>DeleteCases</code> might be the way to go, but I'm still fairly inexperienced with Mathematica and am not sure how to use this function to make this work.</p>
| Nasser | 70 | <p>This works, but may be there is easier way</p>
<pre><code>SetDirectory[NotebookDirectory[]]
data = Flatten@Import["input.txt","CSV"];
data2 = If[StringQ[#],ToExpression[StringDelete[#,{"[","]"}]],#]&/@data;
data2 = ArrayReshape[data2,{Length@data2/3,3}]
</code></pre>
<p>now</p>
<pre><code>MatrixForm[data2]
</code></pre>
<p><img src="https://i.stack.imgur.com/hebEg.png" alt="Mathematica graphics"></p>
<p>Plot it</p>
<pre><code>ListPointPlot3D[data2]
</code></pre>
<p><img src="https://i.stack.imgur.com/oTTx5.png" alt="Mathematica graphics"></p>
<p>The input file is what was shown in OP </p>
<p><img src="https://i.stack.imgur.com/SyHPP.png" alt="Mathematica graphics"></p>
|
2,467,095 | <p>I'm considering the original coupon collector problem with a small modification. For the sake of completeness I shall state the original problem again first, where <strong>my question is at the end</strong>. </p>
<p>say there is a coupon inside every packet of wafers, for the moment let's assume there are only two distinct coupons $C_1$ and $C_2$ that can be collected. How many times do you need to buy the wafers on average to collect both coupons? </p>
<p>The solution to this problem as a classical coupon collector problem is 3. See for example <a href="https://en.wikipedia.org/wiki/Coupon_collector%27s_problem" rel="nofollow noreferrer">Wikipedia</a>.</p>
<p><strong>Now my question:</strong> </p>
<blockquote>
<p>How many times on average, should I buy the wafers if I want at least one $C_2$ to be collected before one $C_1$?</p>
</blockquote>
| Deep | 386,936 | <p>I am not sure I correctly understand your question. Are you looking for sequences: $C_2C_1,C_2C_2C_1,C_2C_2C_2C_1,...$. If $p$ is the probability of obtaining coupon $C_2$ then probability of a sequence of the sort mentioned which contains $n~C_2$ coupons and $1~C_1$ coupon is $P(n)=p^n(1-p)$. Expected number of wafers to be bought is $\sum_{n=1}^\infty (n+1)P(n)=p(2-p)/(1-p)$ for $0<p<1$. The result makes no sense for $p=0$ because there are no $C_2$ coupons to be had.</p>
|
1,129,052 | <p>How many 3-digits numbers possess the following property: </p>
<blockquote>
<p>After subtracting $297$ from such a number, we get a $3$-digit number consisting of the same digits in the reverse order.</p>
</blockquote>
| Mufasa | 49,003 | <p>You have:$$\text{ }abc$$$$-297$$$$cba$$The 10's column has $b-9=b$. This implies there first must have been a borrow from the 10's column to the units column, and then a borrow from the 100's column into the 10's column. This information tells us that:$$a=c+3$$and $b$ can be any digit from $0$ to $9$.</p>
<p>Now you just need to calculate the number of combinations you can get with these restrictions.</p>
|
1,129,052 | <p>How many 3-digits numbers possess the following property: </p>
<blockquote>
<p>After subtracting $297$ from such a number, we get a $3$-digit number consisting of the same digits in the reverse order.</p>
</blockquote>
| Arian | 172,588 | <p>Let one of these three digit number be $\overline{abc}$ where $a,b$ and $c$ are the digits. Then the problem can be expressed as
$$\overline{abc}-297=\overline{cba}$$
or writing it in powers of $10$ as follows
$$100a+10b+c-297=100c+10b+a\Rightarrow 99(a-c)=297\Rightarrow a-c=3$$
So we have any triple $(a,b,c)=(c+3,b,c)$ would satisfy the condition where $b\in\{0,1,2,3,4,5,6,7,8,9\}$ and $c\in\{0,1,2,3,4,5,6\}$. So in total we have $$10\times7=70$$
possible numbers if you allow for $c=0$ otherwise you would have only
$$10\times6=60$$
three digit numbers satisfying the condition.</p>
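<p>A brute-force count in Python confirms both tallies (60 if the reversed number must itself be a three-digit number, 70 if a leading zero is allowed):</p>

<pre><code>strict = loose = 0
for n in range(100, 1000):
    a, b, c = n // 100, (n // 10) % 10, n % 10
    if n - 297 == 100*c + 10*b + a:
        loose += 1          # digits reversed, leading zero allowed
        if c != 0:
            strict += 1     # reversed number is a genuine 3-digit number
print(strict, loose)        # prints: 60 70
</code></pre>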
|
3,097,816 | <p>Show that the product of four consecutive odd integers is 16 less than a square.</p>
<p>For the first part I first did
<span class="math-container">$n=p(p+2)(p+4)(p+6)
=(p^2+6p)(p^2+6p+8).$</span>
I know that you are supposed to rearrange this to give an equation in the form <span class="math-container">$(ap^2+bp+c)^2$</span>, but I'm not sure how to. Also, once we get to that point, how do we prove it has to be odd numbers? </p>
<p>NOTE: I do not know how to edit Math LaTeX so I was hoping someone could edit my post for me. Thanks </p>
| Wolfgang Kais | 640,973 | <p>The middle number is <span class="math-container">$p+3 =: m_1$</span>.</p>
<p><span class="math-container">$$n = p(p+2)(p+4)(p+6) = p(p+6)(p+2)(p+4)
= (m_1-3)(m_1+3)(m_1-1)(m_1+1)
= (m_1^2-3^2)(m_1^2-1^2)
= (m_1^2-9)(m_1^2-1)$$</span></p>
<p>Now, with a new "middle number" <span class="math-container">$m_2 := m_1^2-5$</span>, we get</p>
<p><span class="math-container">$$n = (m_2-4)(m_2+4) = m_2^2-16
= (m_1^2-5)^2-16
= ((p+3)^2-5)^2-16$$</span></p>
<p>So indeed <span class="math-container">$n+16$</span> is a square number, no matter whether <span class="math-container">$p$</span> is odd or not.</p>
|
1,536,904 | <blockquote>
<p>A father send his $3$ sons to sell watermelons.The first son took $10$
watermelons,the second $20$ and the third $30$ watermelons.The father
gave order to sell all in the same price and collect and the same
number of money.How is this possible?</p>
</blockquote>
<p>Any ideas for this puzzle?</p>
| Empy2 | 81,790 | <p>Son 3 sells 30 watermelons, then buys 20 watermelons from Son 2.
Son 2 buys 10 watermelons from Son 1.</p>
|
1,536,904 | <blockquote>
<p>A father send his $3$ sons to sell watermelons.The first son took $10$
watermelons,the second $20$ and the third $30$ watermelons.The father
gave order to sell all in the same price and collect and the same
number of money.How is this possible?</p>
</blockquote>
<p>Any ideas for this puzzle?</p>
| mvw | 86,776 | <p>Son 3 sells 20 watermelons then gives 10 watermelons to son 1. Son 2 and son 1 then sell 20 watermelons each.</p>
|
1,874,159 | <blockquote>
<p>Given real numbers $a_0, a_1, ..., a_n$ such that $\dfrac {a_0}{1} + \dfrac {a_1}{2} + \cdots + \dfrac {a_n}{n+1}=0,$ prove that $a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n=0$ has at least one real solution.</p>
</blockquote>
<p>My solution:</p>
<hr>
<p>Let $$f(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$$</p>
<p>$$\int f(x) = \dfrac {a_0}{1} x + \dfrac {a_1}{2}x^2 + \cdots + \dfrac {a_n}{n+1} x^{n+1} + C$$</p>
<p>$$\int_0^1 f(x) = \left[ \dfrac {a_0}{1} + \dfrac {a_1}{2} + \cdots + \dfrac {a_n}{n+1} \right]-0$$</p>
<p>$$\int_0^1 f(x) = 0$$</p>
<p>Since $f$ is continuous, by the area interpretation of integration, it must have at least one zero.</p>
<hr>
<p>My question is, is this rigorous enough? Do I need to prove the last statement, perhaps by contradiction using Riemann sums? Is this a theorem I can/should quote?</p>
| Hagen von Eitzen | 39,174 | <p>Why not write it the other way round?</p>
<p>The polynomial function
$$F(x)=\sum_{k=0}^n\frac{a_k}{k+1}x^{k+1} $$
is a differentiable function $\Bbb R\to\Bbb R$ with derivative $$F'(x)=\sum_{k=0}^na_kx^k.$$
We are given that $F(1)=0$, and clearly $F(0)=0$. Hence by <a href="https://en.wikipedia.org/wiki/Rolle%27s_theorem"><strong>Rolle's theorem</strong></a>, there exists $x\in(0,1)$ such that $F'(x)=0$, as was to be shown.</p>
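<p>For illustration (the degree and the random coefficients below are arbitrary choices, not from the posts above), one can enforce the hypothesis numerically and watch $\sum_k a_kx^k$ change sign on $(0,1)$:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    a = rng.normal(size=4)                       # a_1, ..., a_4 (n = 4, just an example)
    a0 = -sum(a[k] / (k + 2) for k in range(4))  # force a0/1 + a1/2 + ... + a4/5 = 0
    coeffs = [a0] + list(a)
    xs = np.linspace(0, 1, 10001)
    vals = sum(c * xs**k for k, c in enumerate(coeffs))
    print(np.any(np.diff(np.sign(vals)) != 0))   # True: the polynomial changes sign in (0, 1)
</code></pre>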
|
2,465,705 | <p>In differential geometry, you are often asked to reparameterize a curve using arc-length. I understand the process of how to do this, but I don't understand what we are reparameterizing from.</p>
<p>What is the curve originally parameterized by (before we REparameterize it by arc-length)?</p>
| user7530 | 7,530 | <p>It's completely arbitrary.</p>
<p>For example the "standard" parameterization of the unit circle is $\gamma(t) = [\cos(t),\sin(t)]$. But you could also sweep out a circle using the curves</p>
<ul>
<li>$\gamma(t) = [\cos(t^2), \sin(t^2)]$, which "speeds up" as you draw it;</li>
<li>$\gamma(t) = [\cos(\sqrt{t}), \sin(\sqrt{t})]$, which "slows down" as you draw it;</li>
<li>$\gamma(t) = [t, \sqrt{1-t^2}]$, which moves at constant speed in the $x$ direction while speeding up and slowing down in the $y$, etc etc etc.</li>
</ul>
<p>If you have an arbitrary curve $\gamma(t):\mathbb{R}\to\mathbb{R}^n$, nothing is generally known in advance about the parameterization; the "speed" at which changing $t$ sweeps out the curve, measured by $\|\gamma'(t)\|$, could be arbitrarily fast in some places and arbitrarily slow in others. The only assumption that's relatively common is to assume that the curve is regular: $\|\gamma'(t)\|>0$, which rules out some pathologies like cusps in the curve.</p>
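<p>For a concrete illustration of how one passes to the arc-length parameterization in practice, here is a rough numerical sketch in Python, using the "speeding up" circle above as the example curve (the sample sizes are arbitrary):</p>
<pre><code>import numpy as np

t = np.linspace(0.0, np.sqrt(2 * np.pi), 20001)            # gamma(t) = [cos(t^2), sin(t^2)]
gamma = np.column_stack([np.cos(t**2), np.sin(t**2)])

seg = np.linalg.norm(np.diff(gamma, axis=0), axis=1)        # lengths of tiny chords
s = np.concatenate([[0.0], np.cumsum(seg)])                 # cumulative arc length s(t)

s_uniform = np.linspace(0.0, s[-1], 200)                    # equally spaced arc lengths
t_of_s = np.interp(s_uniform, s, t)                         # numerically invert s(t)
resampled = np.column_stack([np.cos(t_of_s**2), np.sin(t_of_s**2)])

# consecutive resampled points are now (approximately) equally spaced along the curve
print(np.round(np.linalg.norm(np.diff(resampled, axis=0), axis=1), 4))
</code></pre>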
|
402,970 | <p>I had an exercise in my text book:</p>
<blockquote>
<p>Find $\frac{\mathrm d}{\mathrm dx}(6)$ by first principles.</p>
</blockquote>
<p>The answer that they gave was as follows:</p>
<p>$$\lim_{h\to 0} \frac{6-6}{h} = 0$$</p>
<p>However surely that answer would be undefined if we let $h$ tend towards $0$, as for finding all other limits like that we substitute $0$ in the place of $h$? </p>
| Michael Hardy | 11,667 | <p>In all cases in which the $\lim\limits_{h\to0}\dfrac{f(x+h)-f(x)}{h}$ exists, <b>both</b> the numerator and the denominator approach $0$. And even if the limit does not exist the numerator and denominator both become $0$ when $h=0$.</p>
<p>But $\lim\limits_{h\to0}$ of <em>anything</em> depends only on the behavior of that function of $h$ when $h$ is <b>not</b> $0$.</p>
<p>These are the most important things you can learn about limits before getting into these problems about derivatives. If these things were not as they are, the reason for learning about limits before going on to derivatives just wouldn't be there at all.</p>
|
1,859,652 | <p>I've just been studying cyclic quads in geometry at school and I'm thinking see seems pretty interesting, but where would I actually find these in the real world? They seem pretty useless to me...</p>
| Michael Hardy | 11,667 | <p>Ptolemy's theorem says that if $a,b,c,d$ are the lengths of the sides of a cyclic quadrilateral, with $a$ opposite $c$ and $b$ opposite $d$, and $e,f$ are the diagonals, then $ac+ bd = ef$. In the second century AD, Ptolemy used that to prove identities that today we would express as
\begin{align}
\sin(a+b) & = \sin a \cos b + \cos a \sin b, \\
\cos(a+b) & = \cos a \cos b - \sin a \sin b.
\end{align}
As to where these come up in Reality, you can start with this: <a href="https://en.wikipedia.org/wiki/Uses_of_trigonometry" rel="nofollow">https://en.wikipedia.org/wiki/Uses_of_trigonometry</a></p>
<p><b>PS:</b> A bit more on what Ptolemy did: <a href="https://en.wikipedia.org/wiki/Ptolemy%27s_table_of_chords" rel="nofollow">https://en.wikipedia.org/wiki/Ptolemy%27s_table_of_chords</a></p>
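<p>As a quick numerical illustration of $ac+bd=ef$ (the four angles below are arbitrary):</p>
<pre><code>import numpy as np

angles = np.array([0.3, 1.1, 2.5, 4.0])        # four points on the unit circle, in order
P = np.column_stack([np.cos(angles), np.sin(angles)])
d = lambda i, j: np.linalg.norm(P[i] - P[j])

a, c = d(0, 1), d(2, 3)                         # one pair of opposite sides
b, dd = d(1, 2), d(3, 0)                        # the other pair
e, f = d(0, 2), d(1, 3)                         # the diagonals
print(a * c + b * dd, e * f)                    # the two numbers agree
</code></pre>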
|
4,502,150 | <p>There is a "direct" way to prove this equality.</p>
<p><span class="math-container">$\displaystyle \binom{1/2}{n}=\frac{2(-1)^{n-1}}{n4^n}\binom{2n-2}{n-1}$</span></p>
<p>I am trying to skip the induction. Maybe there is a rule or formula that will help me.</p>
<p>Thank you</p>
| runway44 | 681,431 | <p>Just move the numbers around in the formula:</p>
<p><span class="math-container">$$\begin{array}{cl}
\displaystyle\binom{1/2}{n} & \displaystyle = \frac{(\frac{1}{2})(\frac{1}{2}-1)\cdots(\frac{1}{2}-(n-1))}{n!} \\[5pt]
& \displaystyle = \frac{(1)(1-2)(1-4)\cdots(1-2(n-1))}{2^nn!} \\[5pt]
& \displaystyle = (-1)^{n-1}\frac{(1)(3)\cdots(2n-3)}{2^nn!} \\[5pt]
& \displaystyle = (-1)^{n-1}\frac{(1)(2)(3)(4)\cdots(2n-3)(2n-2)}{2^nn!\cdot(2)(4)\cdots(2n-2)} \\[5pt]
& \displaystyle = (-1)^{n-1}\frac{(2n-2)!}{2^nn!\cdot2^{n-1}(n-1)!} \\[5pt]
& \displaystyle = (-1)^{n-1}\frac{2\,(2n-2)!}{4^nn\cdot(n-1)!^2} \\[5pt]
& \displaystyle = \frac{2(-1)^{n-1}}{4^nn}\binom{2n-2}{n-1}
\end{array} $$</span></p>
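<p>As a quick check of the resulting identity, here is a short exact computation with rational arithmetic in Python:</p>
<pre><code>from fractions import Fraction
from math import comb

def binom_half(n):
    num = Fraction(1)
    for j in range(n):                 # (1/2)(1/2 - 1)...(1/2 - n + 1)
        num *= Fraction(1, 2) - j
    den = 1
    for j in range(1, n + 1):          # n!
        den *= j
    return num / den

for n in range(1, 9):
    rhs = Fraction(2 * (-1) ** (n - 1), n * 4 ** n) * comb(2 * n - 2, n - 1)
    print(n, binom_half(n) == rhs)     # True for every n tested
</code></pre>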
|
1,290,316 | <p>Let $F_i$ be a family of closed sets, then we know that $\bigcup_{i=1}^nF_i$ is closed.</p>
<p>Proving that statement is equivalent to proving:</p>
<blockquote>
<p>If $p$ is a limit point of $\bigcup_{i=1}^nF_i$ then $p\in\bigcup_{i=1}^nF_i$</p>
</blockquote>
<p>It is easy to prove the contrapositive: if $p\notin\bigcup_{i=1}^nF_i$ then $p$ is not a limit point of $\bigcup_{i=1}^nF_i$</p>
<p>However, I tried the following direct proof, which I am sure is wrong because it does not use the finite nature of the union. I want to know where I am making a mistake in the following chain of reasoning:</p>
<p>If $p$ is a limit point of $\bigcup_{i=1}^nF_i$ then in every neighbourhood there is a point $q\neq p$, such that $q\in \bigcup_{i=1}^nF_i$. Since $q\in \bigcup_{i=1}^nF_i$ then $q$ belongs to at least one $F_i$, then (and this is what I suspect is false) $p$ is a limit point of $F_i$. Since $F_i$ is closed, then $p\in F_i$, then $p\in\bigcup_{i=1}^nF_i$.</p>
<p>Therefore: If $p$ is a limit point of $\bigcup_{i=1}^nF_i$ then $p\in\bigcup_{i=1}^nF_i$.</p>
<p>Thank you very much in advance.</p>
| KyleW | 252,356 | <p>I know from your other questions you like to see rigorous answers; so I thought I'd do the same on this one too. I did this a slightly longer way since I do not know which definition of closed you are using (so I used a standard definition of open)</p>
<p>def. $U$ is open $\Leftrightarrow \forall x \in U\; \exists \varepsilon >0 : B(x, \varepsilon) \subset U$</p>
<p>def. $x$ is a limit point of $U \Leftrightarrow \forall \varepsilon >0 \, (B(x, \varepsilon)\setminus \{x\}) \cap U \neq \emptyset$</p>
<ol>
<li>$\forall i\, F_i$ is closed</li>
<li>$p$ is a limit point of $\cup_{i=1}^n F_i$</li>
<li>$\forall \varepsilon >0 \, (B(p, \varepsilon)\setminus \{p\}) \cap (\cup_{i=1}^n F_i) \neq \emptyset$</li>
<li>$\forall \varepsilon >0 \, B(p, \varepsilon) \cap (\cup_{i=1}^n F_i) \neq \emptyset$</li>
<li>Suppose $p \not \in \cup_{i=1}^n F_i$ </li>
<li>$p \in (\cup_{i=1}^n F_i)^c$</li>
<li>$p \in \cap_{i=1}^n F_i^c $</li>
<li>$\forall i \leq n : p \in F_i^c$</li>
<li>$p \in F_k^c$</li>
<li>$F_k^c$ is open</li>
<li>$\forall x \in F_k^c\; \exists \varepsilon >0 : B(x, \varepsilon)\subset F_k^c$</li>
<li>$\exists \varepsilon >0 : B(p, \varepsilon)\subset F_k^c$</li>
<li>$B(p, \varepsilon)\subset F_k^c$</li>
<li>$\forall i \, B(p, \varepsilon)\subset F_i^c$</li>
<li>$B(p, \varepsilon) \subset \cap_{i=1}^n F_i^c $</li>
<li>$B(p, \varepsilon) \cap (\cap_{i=1}^n F_i^c)^c = \emptyset$</li>
<li>$B(p, \varepsilon) \cap (\cup_{i=1}^n F_i) = \emptyset$</li>
<li>$ \exists \varepsilon >0: B(p, \varepsilon) \cap (\cup_{i=1}^n F_i) = \emptyset$</li>
<li>$\lnot \forall \varepsilon >0 \, B(p, \varepsilon) \cap (\cup_{i=1}^n F_i) \neq \emptyset$</li>
<li>Contradition</li>
<li>$p \in \cup_{i=1}^n F_i$</li>
</ol>
<p>Notice that I did not explicitly use the fact that $n$ is finite. This is a very subtle point in the proof: it enters implicitly in the passage from step 13 to step 14, since each $F_i^c$ supplies its own radius $\varepsilon_i$, and only because there are finitely many sets can we take $\varepsilon = \min_i \varepsilon_i > 0$ so that a single ball works for every $i$. As a counterexample in the case $n \rightarrow \infty$, let $(X, d)$ be a metric space, and $\lim_{n\rightarrow \infty} \cup_{i=1}^{n} F_i = X$ where $F_i \subset F_{i+1}$. Can you see how this would nullify this proof?</p>
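<p>For instance, a standard family of closed sets to keep in mind here is</p>
<p>$$F_i=\left[\tfrac1i,\,1\right],\qquad \bigcup_{i=1}^{\infty}F_i=(0,1].$$</p>
<p>Taking $p=0$: for each $i$ the ball $B(0,\varepsilon_i)$ with $\varepsilon_i=\tfrac1i$ lies in $F_i^c$, but $\inf_i\varepsilon_i=0$, so no single $\varepsilon$ works for all $i$; and indeed $(0,1]$ is not closed, since its limit point $0$ is missing.</p>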
|
388,950 | <p>I've been trying to get my head around this for days. I understand what is going on with the calculation of a linear recurrence and I also understand how the characteristic is obtained.</p>
<p>What is confusion me is the general solution.</p>
<p>The general solution to the recurrence $at(n) + bt(n-1) ct(n-2) = 0$</p>
<p>For a unique root $r$, the general solution is: $t(n) = (A + Bn) r^n$. For two distinct roots $r_1$ and $r_2$, the general solution is: $t(n) = Ar_1^n + Br_2^n$</p>
<p>What are $A$ and $B$? What do they represent in the linear recurrence and in the general solution</p>
<p>I just can't see it and I need to get it so I can start my discrete math assignment ...</p>
<p>Thanks heaps in advance :)</p>
| amWhy | 9,003 | <p>Your steps in using the two point formula to find a relation between $F$ and $C$ is correct, and done well.</p>
<p>The resulting relation (equation) is your solution: $$F = \frac{9}{5}C+32$$</p>
<p>Note that given this equation, you can also represent $C$ as a function of $F$:</p>
<p>$$F = \frac{9}{5}C+32 \iff F - 32 = \frac 95 C \iff \frac 59(F - 32) = C$$</p>
<p>So $$F = \frac{9}{5}C+32 \quad \text{and} \quad C = \frac 59(F - 32)$$ represent <em>equivalent equations</em>, where the first expresses $F$ as a function of $C$, and the second expresses $C$ as a function of $F$.</p>
|
2,867,718 | <p>How do I integrate $\sin\theta + \sin\theta \tan^2\theta$ ?</p>
<p>First thing,
I have been studying maths for business for approximately 3 months now. Since then, I studied algebra and then I started studying calculus. Yet, my friend stopped me there, and asked me to study Fourier series as we'll need it for our incoming projects. So I feel that I am missing a lot of things as I haven't studied integrals yet. </p>
<p>Today, I encountered this solution that I couldn't understand at all. </p>
<p>Apparently, $\int(\sin\theta + \sin\theta \tan^2\theta)d\theta = \int(\sin\theta (1 + \tan^2\theta))d\theta$. </p>
<p>Below, a link to the solution at 5:08.
<a href="https://youtu.be/aw_VM_ZDeIo" rel="nofollow noreferrer">https://youtu.be/aw_VM_ZDeIo</a></p>
<p>He stated that we have to learn the integral identities. So, I started searching the whole internet looking for them. But, I think I couldn't find them. The only thing that I found was something called the Magic Hexagon. I thought of reading about $\theta$ as it might mean something. But, after all, I learned that it is just a regular Greek letter used as a variable.</p>
| Bernard | 202,857 | <p><em>Bioche's rules</em> say we can make the substitution $u=\cos\theta$, $\mathrm d u=-\sin\theta\,\mathrm d\theta$. Indeed
$$ \int(1+\tan^2\theta)\sin\theta\,\mathrm d\theta=\int \frac1{\cos^2\theta}\,\sin\theta\,\mathrm d\theta=\int -\frac{\mathrm du}{u^2}=\frac1u=\frac1{\cos\theta}. $$</p>
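<p>As a quick verification (added for illustration), differentiating the antiderivative with SymPy recovers the integrand:</p>
<pre><code>import sympy as sp

theta = sp.symbols('theta')
integrand = sp.sin(theta) * (1 + sp.tan(theta)**2)
antiderivative = 1 / sp.cos(theta)
print(sp.simplify(sp.diff(antiderivative, theta) - integrand))   # 0
</code></pre>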
|
1,485,463 | <p>At a school function, the ratio of teachers to students is $5:18$. The ratio of female students to male students is $7:2$. If the ratio of the female teachers to female students is $1:7$, find the ratio of the male teachers to male students. </p>
| tony0858 | 281,122 | <p>Let $A$ = number of female teachers, $B$ = number of male teachers, $X$ = number of female students, and $Y$ = number of male students.</p>
<p>The statement "the ratio of teachers to students is $5:18$" can be represented by the following equation, EQ1:</p>
<p>EQ1: $\dfrac{A+B}{X+Y}=\dfrac{5}{18}$, i.e. $18A+18B=5X+5Y$.</p>
<p>The statement "the ratio of female students to male students is $7:2$" can be represented by EQ2:</p>
<p>EQ2: $\dfrac{X}{Y}=\dfrac{7}{2}$, i.e. $X=\dfrac{7}{2}Y$.</p>
<p>The statement "the ratio of female teachers to female students is $1:7$" can be represented by EQ3:</p>
<p>EQ3: $\dfrac{A}{X}=\dfrac{1}{7}$, i.e. $A=\dfrac{X}{7}$.</p>
<p>We want to solve for the ratio of male teachers ($B$) to male students ($Y$). To do that, first substitute the value of $X$ from EQ2 into EQ3:</p>
<p>$A=\dfrac{X}{7}=\dfrac{(7/2)Y}{7}=\dfrac{Y}{2}$. (We will call this EQ4.)</p>
<p>Now substitute $X$ from EQ2 and $A$ from EQ4 into EQ1, and solve for $B/Y$:</p>
<p>$18\cdot\dfrac{Y}{2}+18B=5\cdot\dfrac{7}{2}Y+5Y$</p>
<p>$9Y+18B=17.5Y+5Y$</p>
<p>$18B=13.5Y$</p>
<p>$\dfrac{B}{Y}=\dfrac{13.5}{18}=\dfrac{3}{4}$</p>
<p>Therefore, the ratio of male teachers to male students is $B:Y = 3:4$.</p>
|
141,101 | <p>I have given a high dimensional input $x \in \mathbb{R}^m$ where $m$ is a big number. Linear regression can be applied, but in generel it is expected, that a lot of these dimensions are actually irrelevant.</p>
<p>I ought to find a method to model the function $y = f(x)$ and at the same time uncover which dimensions contribute to the output. The overall hint is to apply the <strong>$L_1$-norm Lasso regularization</strong>.</p>
<p>$$L^{\text lasso}(\beta) = \sum_{i=1}^n (y_i - \phi(x_i)^T \beta)^2 + \lambda \sum_{j = 1}^k | \beta_j |$$</p>
<p>Minimizing $L^{\text lasso}$ is in general hard, for that reason I should apply gradient descent. My approach so far is the following:</p>
<p>In order to minimize the term, I chose to compute the gradient and set it $0$, i.e.</p>
<p>$$\frac{\partial}{\partial \beta} L^{\text lasso}(\beta) = -2 \sum_{i = 1}^n \phi(x_i)(y_i - \beta)$$</p>
<p>Since this cannot be computed simply, I want to apply the gradient descent here. The gradient descent takes a function $f(x)$ as input, so in place of that will be put the $L(\beta)$ function.</p>
<p>My problems are:</p>
<ul>
<li><p>is the gradient I computed correct? I left out the regularizationt erm in the end, which I guess is a problem, but I also have problems to compute the gradient for this one</p>
<p>In addition I may replace $| \beta_j |$ by the function $l(x)$, which is $l(x) = x - \varepsilon/2$ if $|x| \geq \epsilon$, else $x^2/(2 \varepsilon)$</p></li>
<li><p>one step in the gradient descent is: $\beta \leftarrow \beta - \alpha \cdot \frac{g}{|g|}$, where $g = \frac{\partial}{\partial \beta}f(x))^T$</p>
<p>I don't get this one, when I transpose the gradient I have computed the term cannot be resolved due to dimension mismatch</p></li>
</ul>
| Xiaorui Zhu | 372,354 | <p>The subgradient should be like this: </p>
<p>$$\frac{\partial}{\partial \beta} L^{\text{lasso}}(\beta) = - 2\sum_{i = 1}^n \phi(x_i)\,(y_i - \phi(x_i)^T \beta) + \lambda \,\operatorname{sign}(\beta),$$ where $\operatorname{sign}(\beta)$ is taken componentwise (and any value in $[-1,1]$ may be used in a component where $\beta_j = 0$).</p>
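<p>For illustration, a minimal subgradient-descent sketch built on this expression might look as follows in Python; here <code>Phi</code> is a hypothetical $n\times k$ design matrix whose rows are $\phi(x_i)$, and the step size, iteration count and gradient normalization are arbitrary choices (the normalized step mirrors the update quoted in the question):</p>
<pre><code>import numpy as np

def lasso_subgradient_descent(Phi, y, lam, step=1e-3, iters=5000):
    n, k = Phi.shape
    beta = np.zeros(k)
    for _ in range(iters):
        residual = y - Phi @ beta
        g = -2 * Phi.T @ residual + lam * np.sign(beta)            # the subgradient above
        beta = beta - step * g / max(np.linalg.norm(g), 1e-12)     # normalized step
    return beta
</code></pre>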
|
1,412,091 | <p>The typewriter sequence is an example of a sequence which converges to zero in measure but does not converge to zero a.e.</p>
<p>Could someone explain why it does not converge to zero a.e.?</p>
<blockquote>
<p><span class="math-container">$f_n(x) = \mathbb 1_{\left[\frac{n-2^k}{2^k}, \frac{n-2^k+1}{2^k}\right]} \text{, where } 2^k \leqslant n < 2^{k+1}.$</span></p>
</blockquote>
<p>Note: the <a href="http://terrytao.wordpress.com/2010/10/02/245a-notes-4-modes-of-convergence/" rel="nofollow noreferrer">typewriter sequence</a> (Example 7).</p>
| grand_chat | 215,011 | <p>Draw a picture of the generic function $f_n$ in the typewriter sequence. It's a rectangle of height 1 over an interval of width $1/2^k$, with value zero elsewhere. As the sequence progresses, the rectangles slide across the unit interval, the way a typewriter moves across the page. At each 'carriage return' of the typewriter, a new row of rectangles starts, each rectangle having half the width as before. You can see that for every point $x$ in the unit interval, the sequence $f_n(x)$ takes values zero and one infinitely often, so $f_n(x)$ cannot converge to any number.</p>
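<p>As a small illustration, the values $f_n(x)$ at a fixed point are easy to tabulate (the point $x=0.3$ is arbitrary):</p>
<pre><code>def f(n, x):
    # indicator of the n-th "typewriter" interval, evaluated at x
    k = n.bit_length() - 1         # 2^k is the largest power of two not exceeding n
    lo = (n - 2**k) / 2**k
    hi = (n - 2**k + 1) / 2**k
    return 1 if (x >= lo and hi >= x) else 0

x = 0.3
vals = [f(n, x) for n in range(1, 65)]
print(vals)        # keeps returning to both 0 and 1, so f_n(x) has no limit
print(sum(vals))   # yet only about one 1 per "row", so the 1's become ever rarer
</code></pre>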
|
3,839,244 | <p>Is <span class="math-container">$\{3\}$</span> a subset of <span class="math-container">$\{\{1\},\{1,2\},\{1,2,3\}\}$</span>?</p>
<p>If the set contained <span class="math-container">$\{3\}$</span> plain and simply I would know but does the element <span class="math-container">$\{1,2,3\}$</span> include <span class="math-container">$\{3\}$</span> such that it would be a subset?</p>
| the_candyman | 51,370 | <p>As you can see, there are plenty of good answers here.</p>
<p>I think that the most important thing to understand is:</p>
<p><span class="math-container">$$3 \neq \{ 3 \}.$$</span></p>
<p>These are two different objects.</p>
<p>Moreover the followings hold:</p>
<p><span class="math-container">$$ 3 \in \{3\}$$</span>
<span class="math-container">$$ 3 \in \{\ldots, 3, \ldots \}$$</span>
<span class="math-container">$$\{3\} \in \{\ldots, \{3\}, \ldots\}$$</span>
and</p>
<p><span class="math-container">$$ \{3\} \not\in \{3\}$$</span>
<span class="math-container">$$ \{3\} \not\in \{\ldots, 3, \ldots \}$$</span></p>
<p>In particular, since the number $3$ is not an element of $\{\{1\},\{1,2\},\{1,2,3\}\}$, the set $\{3\}$ is <em>not</em> a subset of it.</p>
|
159,438 | <p>Can be easily proved that the following series onverges/diverges?</p>
<p>$$\sum_{k=1}^{\infty} \frac{\tan(k)}{k}$$</p>
<p>I'd really appreciate your support on this problem. I'm looking for some easy proof here. Thanks.</p>
| leonbloy | 312 | <p>A proof that the sequence $\frac{\tan(n)}{n}$ does not have a limit for $n\to \infty$ is given in <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.5431&rep=rep1&type=pdf" rel="nofollow">this article</a> <sub>(Sequential tangents, Sam Coskey)</sub>. This, of course, implies that the series does not converge. </p>
<p>The proof, based on <a href="http://goo.gl/zEfj0" rel="nofollow">this paper by Rosenholtz</a> (*), uses the continued fraction of $\pi/2$, and, essentially, it shows that it's possible to find a subsequence such that $\tan(n_k)$ is "big enough", by taking numerators of the truncated continued fraction ("convergents").</p>
<p>(*) <em>"Tangent Sequences, World Records, π, and the Meaning of Life: Some Applications of Number Theory to Calculus", Ira Rosenholtz - Mathematics Magazine Vol. 72, No. 5 (Dec., 1999), pp. 367-376</em></p>
|
765,738 | <p>I am trying to prove the following inequality:</p>
<p>$$(\sqrt{a} - \sqrt{b})^2 \leq \frac{1}{4}(a-b)(\ln(a)-\ln(b))$$</p>
<p>for all $a>0, b>0$.</p>
<p>Does anyone know how to prove it?</p>
<p>Thanks a lot in advance!</p>
| user136725 | 136,725 | <p>Wlog $a>b$; then by Cauchy–Schwarz $$\left(\int_b^a \ 1\cdot\frac{1}{\sqrt{x}}\ dx\right)^2\leq\int_b^a \ 1\ dx \cdot\int^a_b \frac{1}{x}\ dx.$$ The left-hand side equals $\left(2\sqrt{a}-2\sqrt{b}\right)^2=4\left(\sqrt{a}-\sqrt{b}\right)^2$ and the right-hand side equals $(a-b)(\ln a-\ln b)$, so dividing by $4$ gives the inequality.</p>
|
7,072 | <p>I want to replace $x^{i+1}$ with $z_i$.</p>
<p>EDIT:
I have some latex equations, and wish to import them to MMa.</p>
<p>How to replace x_i (NOT $x_i$) with $x[i]$</p>
<p>But it seems the underscore "_" has a special meaning (function), can we do such substitution in MMa?</p>
<p><strong>EDIT2:</strong></p>
<p>For example, I want to convert the following two latex equations:</p>
<pre><code> `{la_3^2=0, la_1la_3-6la_3ps=0}`
</code></pre>
<p>into</p>
<pre><code> `{la[3]^2==0, la[1]*la[3]-6*la[3]*ps==0}`
</code></pre>
<p>so that I could apply Simplify or Solve to these equations later.</p>
<p>Great thanks.</p>
| Jens | 245 | <p>To put in the offset by <code>1</code> that was desired in the question, one could do this:</p>
<pre><code>x^(i+1) /. x^exponent_ :> Subscript[z, exponent - 1]
</code></pre>
<p>The rule works independently of whether <code>i</code> is an integer or not. I didn't take care of the special case <code>i=0</code>, but that would be doable just like in Artes' answer. </p>
<p>The difference in my answer is that I use <code>RuleDelayed</code> (<code>:></code>) which allows me to do the subtraction of <code>1</code> on the <em>right-hand</em> side of the rule., instead of looking for the <code>+1</code> in the pattern on the left-hand side (which could equally be a valid approach).</p>
<p><strong>Edit</strong></p>
<p>If your actual application is to import a $\LaTeX$ string, then the usual approach would be something like this:</p>
<pre><code>latexString = "x_i + x^{i+1}";
ToExpression[latexString, TeXForm]
</code></pre>
<blockquote>
<p>$x_i + x^{i+1}$</p>
</blockquote>
<p>The output is of course an expression that <em>no longer contains</em> the <code>Blank</code> symbol <code>_</code>. So your substitution would miss the target if it were now to look for a <code>_</code>. Instead, it has to look for the translated expression pattern involving <code>Subscript</code> $x_i$.</p>
<p>The general rule when you're not sure what pattern to look for in performing substitutions on Mathematica expressions is: take an example expression, and wrap it in <code>InputForm[expression]</code> to see how it is represented internally. For more complicated cases, you'll have to inspect <code>FullForm[expression]</code>. </p>
<p>On the other hand, when importing $\LaTeX$ one can also run into a different kind of trouble: the input may not correspond to a valid expression, which will cause an error when doing <code>ToExpression</code>. In such cases it's sometimes necessary to do <code>StringReplace</code>.</p>
<p>That would also be something you could do in your example. But I'm not sure if that's what you want, so I'll leave it out for now.</p>
<p><strong>Edit 2</strong></p>
<p>In Mathematica, $\LaTeX$ code has to be put into strings by surrounding it with quotation marks. Otherwise you're asking for trouble (syntax errors). </p>
<p>When $\LaTeX$ code is inside a string, you furthermore have to escape the backslash character so that <code>\sin</code> becomes <code>\\sin</code> etc. Even with these precautions, it isn't guaranteed that Mathematica will understand your $\LaTeX$ code, as I mention on <a href="http://pages.uoregon.edu/noeckel/computernotes/Mathematica/EquationEditing.html" rel="nofollow">this web page</a>. </p>
<p>One useful trick that you can always try, though, is to take an example Mathematica expression that you would like to produce from $\LaTeX$, and find out what the correct $\LaTeX$ source for it would be by doing this:</p>
<pre><code>mmaCode = {la[3]^2==0, la[1]*la[3]-6*la[3]*ps==0}
ToString[TeXForm[mmaCode]]
</code></pre>
<p>This will tell you what the <em>input</em> string should be, to get the expression in <code>mmaCode</code>. </p>
<p>Now you can <em>copy</em> the output of this command, and when you paste it again it will have the escaped backslashes appear automatically.</p>
<p>This is what I'm doing now, by pasting the last result back into a <code>ToExpression</code>:</p>
<pre><code>ToExpression[
"\\left\\{\\text{la}(3)^2=0,\\text{la}(1)
\\text{la}(3)-6 \\text{la}(3) \\text{ps}=0\\right\\}",
TeXForm
]
(* ==> {9 la == 0, -6 ps la[3] + la[1] la[3] == 0} *)
</code></pre>
<p>Notice how equals signs are treated differently in $\LaTeX$ and Mathematica (<code>=</code> versus <code>==</code>), and how variable names with more than one character <strong>have to</strong> be wrapped in <code>\text</code> in order to be correctly recognized as a single symbol in Mathematica. If you don't do that, every character is interpreted as a separate variable name.</p>
<p>So what does this mean for your example <em>input</em>? First we have to write it in quotation marks because of what I just said, and then we have to <em>escape</em> the curly brackets as you see in the previous output of <code>TeXForm</code>. Lastly, the variable names have to be wrapped in <code>\\text</code>:</p>
<pre><code>inputString =
"\\{\\text{la}_3^2=0, \\text{la}_1\\text{la}_3-6\\text{la}_3\\text{ps}=0\\}"
</code></pre>
<p>With this, you can finish the conversion to Mathematica:</p>
<pre><code>mmaCode2 = ToExpression[inputString, TeXForm]
</code></pre>
<p>On the result, we can now do the pattern replacements that started this post.</p>
<pre><code>mmaCode2 /. Subscript[la, i_] -> la[i]
(* ==> {la[3]^2 == 0, -6 ps la[3] + la[1] la[3] == 0} *)
</code></pre>
|
870,174 | <p>If I have $6$ children and $4$ bedrooms, how many ways can I arrange the children if I want a maximum of $2$ kids per room?</p>
<p>The problem is that there are two empty slots, and these empty slots are not unique.</p>
<p>So, I assumed there are $8$ objects, $6$ kids and $2$ empties.</p>
<p>$$C_2^8 \cdot C_2^6 \cdot C_2^4 \cdot C_2^2 = 2520.$$</p>
<p>Subtract off combinations where empties are together:</p>
<p>$$2520 - 4 \cdot C_2^6 \cdot C_2^4 \cdot C_2^2 = 2160$$</p>
<p>Divide by $2!$ to get rid of identical combinations due to identical empties and I get $1080$.</p>
<p>Is this right? </p>
| Juanito | 153,015 | <p>The only possible room-occupancy patterns are $(2,2,2,0)$ and $(2,2,1,1)$.
So the total number of ways is $$\binom41\binom62\binom42+\binom42\binom62\binom42\cdot 2=360+1080=1440.$$
If, as an added requirement, you do not want to leave any room empty, the only pattern possible is $(2,2,1,1)$, so the answer is $1080$. Hence, the OP is right!</p>
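<p>A brute-force confirmation of both counts (a short Python sketch, added for illustration):</p>
<pre><code>from itertools import product

total = 0
no_empty = 0
for assign in product(range(4), repeat=6):     # each of the 6 distinct children picks one of 4 rooms
    counts = [assign.count(r) for r in range(4)]
    if max(counts) > 2:
        continue                               # more than 2 kids in some room
    total += 1
    if min(counts) > 0:
        no_empty += 1
print(total, no_empty)                         # prints 1440 1080
</code></pre>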
|
599,656 | <p>If the objects of a category are algebraic structures in their own right, this often places additional structure on the homsets. Is there somewhere I can learn more about this general idea?</p>
<hr>
<p><strong>Example 1.</strong> Let $\cal M$ denote the category of magmas and <strong>functions</strong>, not necessarily structure-preserving. Then given objects $X$ and $Y$ of $\cal M$, the hom"set" $\mathrm{Hom}(X,Y)$ is probably best viewed as a <em>magma</em> as opposed to a set, with the pointwise operation inherited from $Y$. Furthermore we have a right (but not left) distributivity law. In particular, given objects $X, Y$ and $Z$ and arrows $f:X \rightarrow Y$ and $g,g' : Y \rightarrow Z,$</p>
<p>$$(g+g')\circ f = (g \circ f) + (g' \circ f).$$</p>
<p><strong>Example 2.</strong> Now let $\cal N$ denote the category of <a href="http://en.wikipedia.org/wiki/Medial_magma" rel="nofollow">medial</a> magmas and magma <strong>homomorphisms</strong>. Then given objects $X$ and $Y$ of $\cal N$, the hom"set" $\mathrm{Hom}(X,Y)$ can be viewed as a medial magma in its own right. Furthermore, we have both left and right distributivity laws. In the sense that given objects $X, Y$ and $Z$ and arrows $f,f':X \rightarrow Y$ and $g,g' : Y \rightarrow Z,$</p>
<p>$$(g+g')\circ f = (g \circ f) + (g' \circ f)$$</p>
<p>$$g \circ (f + f') = (g \circ f) + (g \circ f').$$</p>
| zrbecker | 19,536 | <p>The quadratic formula is $$x=\frac{-b\pm\sqrt{b^2 - 4ac}}{2a}.$$
Thus in your problem, $$x = \frac{-104\pm\sqrt{104^2 - 4(-896)}}{2}
=\frac{-104 \pm\sqrt{14400}}{2} = \frac{-104 \pm 120}{2}.$$
This gives $x = 8$ or $-112$.</p>
<p>It seems the problem in your computation was you forgot the negative on $896$ when you plugged it into the quadratic formula.</p>
|
1,035,091 | <p>I try to get the limit of the following expression (should be valid for $a\geq 0$):</p>
<p>$\lim_{n\to\infty}\sum_{k=0}^n\frac{a}{ak+n}$</p>
<p>Unfortunately nothing worked. I tried to rewrite the expression and to use l'Hospital's rule, but it didn't work.</p>
<p>Thank you for any help:)</p>
<p>regards</p>
<p>Kevin</p>
| Community | -1 | <p>Rewrite the given sum in Riemann-sum form:</p>
<p>$$\frac1n\sum_{k=0}^n\frac{a}{a\frac kn+1}\xrightarrow{n\to\infty}\int_0^1\frac{a}{ax+1}dx=\ln(ax+1)\Bigg|_0^1=\ln(a+1)$$</p>
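<p>As a quick numerical check (the values of $a$ below are arbitrary):</p>
<pre><code>from math import log

def partial_sum(a, n):
    return sum(a / (a * k + n) for k in range(n + 1))

for a in (0.5, 1.0, 3.0):
    print(a, partial_sum(a, 10**6), log(a + 1))   # the two values agree to several digits
</code></pre>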
|
548,563 | <p>Calculate $$\sum \limits_{k=0}^{\infty}\frac{1}{{2k \choose k}}$$</p>
<p>Using software, I found that the series equals $\frac{2}{27} \left(18+\sqrt{3} \pi \right)$.</p>
<p>I have no idea about it. :|</p>
| Pedro | 23,350 | <p><a href="http://www.emis.de/journals/INTEGERS/papers/g27/g27.pdf" rel="noreferrer">This</a> paper is very relevant to your question. In particular, $\bf Theorems \;\;3.4-5$ and $\bf Theorem \;\;3.7$</p>
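<p>As a quick numerical check of the stated closed form (added for illustration; the series converges fast since the terms decay roughly like $4^{-k}$):</p>
<pre><code>from math import comb, sqrt, pi

s = sum(1 / comb(2 * k, k) for k in range(60))
print(s)
print((2 / 27) * (18 + sqrt(3) * pi))   # both print approximately 1.7364
</code></pre>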
|
1,865,868 | <p>the problem goes like that "in urn $A$ white balls, $B$ black balls. we take out without returning 5 balls. (we assume $A,B\gt4$) what would be the probability that at the 5th ball removal, there was a white ball while we know that at the 3rd was a black ball".</p>
<p>What I did is I build a conditional probability tree. as it seemed, it gets really ugly and parameters won't reduce, so the equation is huge, therefor probably it's not the correct path of solution.</p>
<p>I've got this intuition that the probability of the <strong>p</strong>th ball removal being black while <strong>q</strong>th ball removal being white is the same as the probability of the first ball being black, and the second ball being white - $\frac{A}{A+B-1}$ but this is only intuition and I can't explain it.</p>
<p>would appreciate your advising,</p>
| Felix Marin | 85,343 | <p>$\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\Li}[1]{\,\mathrm{Li}_{#1}}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p>
<blockquote>
<p>Lets
$$
\vec{r} \equiv \pars{x,y,z}\,,\quad
\vec{a} \equiv \pars{1,1,1}\,,\quad
\vec{b} \equiv \pars{4,1,1}\,,\quad
\mbox{Note that}\ \vec{r}\cdot\vec{a} = 2\ \mbox{and}\ \vec{r}\cdot\vec{b} = 4
$$</p>
</blockquote>
<p>'Lagrange': $\ds{\half\,\vec{r}\cdot\vec{r} - \mu\vec{r}\cdot\vec{a} - \nu\vec{r}\cdot\vec{b}}$:</p>
<p>\begin{align}
&\vec{r} - \mu\vec{a} - \nu\vec{b} = 0\quad\imp\quad
\mu\vec{a} + \nu\vec{b} = \vec{r}\quad\imp\quad
\left\lbrace\begin{array}{rcrcl}
\ds{a^{2}\,\mu} & \ds{+} & \ds{\vec{a}\cdot\vec{b}\,\nu} & \ds{=} & \ds{2}
\\
\ds{\vec{a}\cdot\vec{b}\,\mu} & \ds{+} & \ds{b^{2}\,\nu} & \ds{=} & \ds{4}
\end{array}\right.
\\[4mm] &\
\imp\quad\left.\begin{array}{rcrcl}
\ds{3\mu} & \ds{+} & \ds{6\nu} & \ds{=} & \ds{2}
\\
\ds{3\mu} & \ds{+} & \ds{9\nu} & \ds{=} & \ds{2}
\end{array}\right\rbrace\quad\imp\quad \mu = {2 \over 3}\,,\quad\nu = 0
\end{align}
<hr>
$$
\vec{r} = \mu\vec{a} = {2 \over 3}\pars{1,1,1} = \pars{{2 \over 3},{2 \over 3},{2 \over 3}}\quad\imp
\color{#f00}{x} = \color{#f00}{y} = \color{#f00}{z} = \color{#f00}{2 \over 3}
$$</p>
|
191,307 | <p>How can I calculate the perimeter of an ellipse? What is the general method of finding out the perimeter of any closed curve?</p>
| Aang | 33,989 | <p>For a general closed curve given in polar coordinates $(r,\theta)$, perimeter $=\int_0^{2\pi}\sqrt{r^2+\left(\frac{dr}{d\theta}\right)^2}\,d\theta$; for a curve given parametrically as $(x(\theta),y(\theta))$, perimeter $=\int\sqrt{x'(\theta)^2+y'(\theta)^2}\,d\theta$.</p>
<p>For the ellipse, take $x=a\cos\theta$, $y=b\sin\theta$, so the arc-length element is $\sqrt{a^2\sin^2\theta+b^2\cos^2\theta}\,d\theta$.</p>
<p>So, perimeter of ellipse = $\int_0^{2\pi}\sqrt {a^2\sin^2\theta+b^2\cos^2\theta}\,d\theta = \int_0^{2\pi}\sqrt {a^2\cos^2\theta+b^2\sin^2\theta}\,d\theta$ (the two integrals are equal by symmetry). </p>
<p>I don't know if closed form for the above integral exists or not, but even if it doesn't have a closed form , you can use numerical methods to compute this definite integral.</p>
<p>Generally, people use an approximate formula for arc length of ellipse = $2\pi\sqrt{\frac{a^2+b^2}{2}}$</p>
<p>you can also visit this link : <a href="http://pages.pacificcoast.net/~cazelais/250a/ellipse-length.pdf" rel="noreferrer">http://pages.pacificcoast.net/~cazelais/250a/ellipse-length.pdf</a></p>
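<p>As a numerical illustration (the values of $a$ and $b$ are arbitrary), the integral is easy to evaluate and to compare with the rough approximation above:</p>
<pre><code>import numpy as np

a, b = 3.0, 1.0
n = 1_000_000
theta = (np.arange(n) + 0.5) * (2.0 * np.pi / n)             # midpoint rule
speed = np.sqrt(a**2 * np.sin(theta)**2 + b**2 * np.cos(theta)**2)
print(speed.sum() * (2.0 * np.pi / n))                        # perimeter, about 13.36
print(2.0 * np.pi * np.sqrt((a**2 + b**2) / 2.0))             # rough approximation, about 14.05
</code></pre>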
|
191,307 | <p>How can I calculate the perimeter of an ellipse? What is the general method of finding out the perimeter of any closed curve?</p>
| NinjaDarth | 181,639 | <p>If the semi-major axis has length <span class="math-container">$a$</span> and the eccentricity is <span class="math-container">$e$</span>, then the perimeter is
<span class="math-container">$$
2πa \left(\frac{1}{1} \left(\frac{(-1)!!}{0!!} e^0\right)^2 + \frac{1}{-1} \left(\frac{1!!}{2!!} e^1\right)^2 + \frac{1}{-3} \left(\frac{3!!}{4!!} e^2\right)^2 + \frac{1}{-5} \left(\frac{5!!}{6!!} e^3\right)^2 + ⋯\right)
$$</span>
where <span class="math-container">$(-1)!! = 1 = 0!!$</span>, and for all <span class="math-container">$n ∈ \{⋯, -7, -5, -3, -1, 0, 1, 2, 3, 4, 5, ⋯ \}$</span>: <span class="math-container">$(n+2)!! = (n+2) n!!$</span>. The sum extends to negative powers of <span class="math-container">$e$</span>, if you take <span class="math-container">$n!! = ∞$</span> for <span class="math-container">$n ∈ \{⋯, -8, -6, -4, -2\}$</span>; and if you work out the double-factorials, using:</p>
<ul>
<li><span class="math-container">$(2n)!! = 2^n n!$</span>, for <span class="math-container">$n ≥ 0$</span>;</li>
<li><span class="math-container">$(2n-1)!! = \frac{(2n)!}{2^n n!}$</span>, for <span class="math-container">$n ≥ 0$</span>;</li>
<li><span class="math-container">$(2n-1)!! = \frac{(-1)^n}{(-2n - 1)!!} = (-2)^{-n} \frac{(-n)!}{(-2n)!}$</span>, for <span class="math-container">$n ≤ 0$</span>;
</ul>
then it comes out to the following:
<span class="math-container">$$
2πa \left(1 - \left(\frac{1}{2} e\right)^2 - \frac{1}{3} \left(\frac{1}{2} \frac{3}{4} e^2\right)^2 - \frac{1}{5} \left(\frac{1}{2} \frac{3}{4} \frac{5}{6} e^3\right)^2 + ⋯\right).
$$</span>
<p>Im my previous life oops I said that out loud I offered the following estimate:
<span class="math-container">$$\mbox{Ramanujan's Formula}:
π(3(a+b) - \sqrt{(3a+b)(a+3b)}) = π(a+b) (3 - \sqrt{4 - h}),$$</span>
where <span class="math-container">$b = a \sqrt{1 - e^2}$</span> is the length of the semi-minor axis and <span class="math-container">$h = ((a - b)/(a + b))^2$</span>, while secretly holding onto the other, much better, estimate:
<span class="math-container">$$π(a+b) \left(\frac{12 + h}{8} - \sqrt{\frac{2 - h}{8}}\right).$$</span></p>
|
1,993,034 | <p>if I have say $x_4$ = free, what value goes in the $x_4$ position of the parametric form, is it $1$ or $0$ or can it be any value since it's free?</p>
| Matthew Leingang | 2,785 | <p>"Free" is not a value and "x4=free" is an abuse of the equal sign. You mean $x_4$ <em>is</em> free. <em>Is</em> can denote equality ($=$), but here it denotes membership ($\in$). I had originally put this as a postscript, but your comments on my first drafts indicate this might be a stumbling block for you.</p>
<p>If $x_4$ is a free variable, it should be left as a parameter since it can have, as you say, any value.</p>
<p>You can express the other variables in terms of the free variables, or you can give names like $s$ and $t$ to the parameters and express all $x_i$ variables in terms of $s$ and $t$.</p>
<p>It sounds like your specific general solution is:
\begin{align*}
x_4 &\text{is free} \\
x_3 &= x_4 \\
x_2 &= 0 \\
x_1 &= 3x_4
\end{align*}
"$x_4$ is free" means that you can't say anything more specific than $x_4 = x_4$. So (in vector form) $[x_1,x_2,x_3,x_4] = [3x_4,0,x_4,x_4] = x_4[3,0,1,1]$. Alternatively, you could say $[x_1,x_2,x_3,x_4] = [3x_4,0,x_4,x_4] = t[3,0,1,1]$, where $t\in\mathbb{R}$.</p>
|
3,450,692 | <p>How to solve this equation?</p>
<p><span class="math-container">$$
\frac{dy}{dx}=\frac{x^{2}+y^{2}}{2xy}
$$</span></p>
| IrbidMath | 255,977 | <p>Try the following let <span class="math-container">$u = \frac{y}{x} $</span> </p>
|
3,450,692 | <p>How to solve this equation?</p>
<p><span class="math-container">$$
\frac{dy}{dx}=\frac{x^{2}+y^{2}}{2xy}
$$</span></p>
| Z Ahmed | 671,540 | <p><span class="math-container">$$\frac{dy}{dx}=\frac{x^2+y^2}{2xy} ~~~~(1)$$</span>
Let <span class="math-container">$y=vx \implies \frac{dy}{dx}=v+x \frac{dv}{dx}.$</span>
Then
<span class="math-container">$$v+xv'=\frac{1+v^2}{2v} \implies xv'=\frac{1-v^2}{2v} \implies \int \frac{2v\,dv}{1-v^2}= \int\frac{dx}{x}
\implies -\ln (1-v^2)= \ln Cx \implies Cx(1-v^2)=1.$$</span>
Finally, the solution of the ODE (1) is <span class="math-container">$$x^2-y^2=Dx,$$</span>
where <span class="math-container">$D$</span> is a constant.</p>
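<p>As a quick symbolic verification (added for illustration), one can check one branch of $x^2-y^2=Dx$ against the ODE with SymPy:</p>
<pre><code>import sympy as sp

x, D = sp.symbols('x D')
y = sp.sqrt(x**2 - D*x)          # one branch of x^2 - y^2 = D x
lhs = sp.diff(y, x)
rhs = (x**2 + y**2) / (2*x*y)
print(sp.simplify(lhs - rhs))    # simplifies to 0, so the family solves the ODE
</code></pre>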
|
102,402 | <p>I wanted to make a test bank of graphs of linear equations for my algebra classes. I want the $y$-intercept of each graph to be an integer no less than $-10$ and no greater than $10$. Generally, you want these graphs to be small, so i've decided on a $20 \times 20$ grid (10 units from the origin). Additionally, i would like students to be able to see a second integral point on this grid, so they can find the slope. How many possible graphs would be in this test bank? How did you get the solution?</p>
| André Nicolas | 6,312 | <p>As a start, we compute exactly the number of such lines with $y$-intercept $0$. </p>
<p>There are $2$ (the axes) plus twice the number with positive slope. How many positive slopes are possible? There is the special case slope $1$. Then there are the proper (reduced) fractions such as $7/9$ and $1/3$, <em>plus</em> their reciprocals $9/7$ and $3$. So we first count the reduced proper fractions $a/b$ where $1\le a<b\le 10$. </p>
<p>For any $b \ge 2$, the number of such fractions is the number of positive integers $<b$ which are relatively prime to $b$. This is $\varphi(b)$, where $\varphi$ is the <a href="http://en.wikipedia.org/wiki/Euler%27s_totient_function" rel="nofollow">Euler $\varphi$-function.</a>. There is a formula for $\varphi(n)$ in terms of the prime factorization of $n$, but for small numbers we can easily determine $\varphi(b)$ directly from the definition.</p>
<p>Thus the number of reduced proper fractions as $b$ goes from $2$ to $10$ is
$$\varphi(2)+\varphi(3)+\varphi(4)+\cdots+\varphi(9)+\varphi(10).$$
This is $31$. Multiply by $2$ to include the reciprocals. We are at $62$. Add $1$ for the line with slope $1$. Double to include negative slopes. Add $2$ for the axes. We get $128$. Nice number! Often, when we get a nice number in a complicated way, the number is trying to tell us that we are overlooking a much better solution. I believe that is not true in this case.</p>
<p>This is just a small start. We need to count the lines with intercepts $-1$, $-2$, and so on up to $-10$, and double to take care of the positive intercepts. For each of $-1$, $-2$, up to $-10$ we can take a limited advantage of symmetry, since there are just as many lines with positive slope as with negative slope. </p>
<p>The $7/9$, $9/7$ symmetry breaks down, however, and the work becomes more unpleasant. For intercept $-3$, for instance, the numerators of the slopes can go to $13$, while the denominators stay at $10$ or below. </p>
<p>Unless one is very motivated, it is excessively tedious to do the work by hand. But it would be easy to write a computer program that does the job.<br>
If you happen to count (by hand) the number with intercept say $-10$, I can verify that you are right. </p>
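<p>A sketch of such a program (added here for illustration): for a given intercept $b$ it simply counts the distinct lines through $(0,b)$ that pass through at least one other lattice point of the grid.</p>
<pre><code>from fractions import Fraction

def count_lines(b, half=10):
    slopes = set()
    vertical = False
    for x in range(-half, half + 1):
        for y in range(-half, half + 1):
            if (x, y) == (0, b):
                continue
            if x == 0:
                vertical = True                  # the y-axis
            else:
                slopes.add(Fraction(y - b, x))   # one slope = one line through (0, b)
    return len(slopes) + (1 if vertical else 0)

print(count_lines(0))                            # 128, matching the count above
print([count_lines(b) for b in range(0, 11)])    # intercepts 0, 1, ..., 10 (negatives by symmetry)
</code></pre>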
|
1,910,085 | <p>For all integers $n \ge 0$, prove that the value $4^n + 1$ is not divisible by 3.</p>
<p>I need to use Proof by Induction to solve this problem. The base case is obviously 0, so I solved $4^0 + 1 = 2$. 2 is not divisible by 3.</p>
<p>I just need help proving the inductive step. I was trying to use proof by contradiction by saying that $4^n + 1 = 4m - 1$ for some integer $m$ and then disproving it. But I'd rather use proof by induction to solve this question. Thanks so much.</p>
| Soham | 242,402 | <p>$$4\equiv1\pmod3$$</p>
<p>$$\implies4^n\equiv1^n\pmod3$$</p>
<p>Also,$1\equiv1\pmod3$</p>
<p>Adding,$$4^n+1\equiv2\pmod3$$</p>
|
245,623 | <p><em>For the following vectors $v_1 = (3,2,0)$ and $v_2 = (3,2,1)$, find a third vector $v_3 = (x,y,z)$ which together build a base for $\mathbb{R}^3$.</em></p>
<p>My thoughts:</p>
<p>So the following must hold:</p>
<p>$$\left(\begin{matrix}
3 & 3 & x \\
2 & 2 & y \\
0 & 1 & z
\end{matrix}\right)
\left(\begin{matrix}
{\lambda}_1 \\
{\lambda}_2 \\
{\lambda}_3
\end{matrix}\right) =
\left(\begin{matrix}
0 \\
0 \\
0
\end{matrix}\right)
$$</p>
<p>The gauss reduction gives</p>
<p>$$
\left(\begin{matrix}
3 & 3 & x \\
0 & 1 & z \\
0 & 0 & -\frac{2}{3}x+y
\end{matrix}\right)
$$</p>
<p>(but here I'm not sure if I'm allowed to swap the $y$ and $z$ axes)</p>
<p>For ${\lambda}_1 = {\lambda}_2 = {\lambda}_3 = 0$, this gives me</p>
<p>$$
x = 0 \\
y = 0 \\
z = 0
$$</p>
<p>Is this third vector $v_3$ building a base of $\mathbb{R}^3$ together with the other two vectors? If not, where are my mistakes?</p>
| Lubin | 17,760 | <p>But we’re talking about vector spaces over $\mathbb R$ here. If the dimension of the vector space is $n$, then any set of fewer than $n$ vectors spans a lower-dimensional subspace, whose complement is <em>open and dense</em> in the whole. You should think of this as telling you that one more vector has almost no chance of being a wrong choice. So in the case at hand, any randomly-chosen third vector should complete a basis. Like $(5,-11,17/3)$, for example.</p>
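<p>A quick numerical confirmation (added for illustration) that this particular choice works:</p>
<pre><code>import numpy as np

M = np.array([[3.0, 2.0, 0.0],
              [3.0, 2.0, 1.0],
              [5.0, -11.0, 17.0 / 3.0]])
print(np.linalg.det(M))   # 43 (nonzero), so the three rows form a basis of R^3
</code></pre>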
|
1,791,990 | <p>I have to prove that integral</p>
<p>$I = \int_{0}^{+\infty}\sin(t^2)dt$ is convergent. Could you tell me if it's ok?</p>
<p>Let $t^2=u$ then $dt=\frac{du}{2\sqrt{u}}$</p>
<p>Now $$I = \int_{0}^{+\infty}\frac{\sin(u)du}{2\sqrt{u}}$$</p>
<p>Which is equal to $$\int_{0}^{1}\frac{\sin(u)du}{2\sqrt{u}} + \int_{1}^{+\infty}\frac{\sin(u)du}{2\sqrt{u}}$$</p>
<p>First of these is convergent because of the limit</p>
<p>$$\lim_{u\to 0}\frac{sin(u)}{2\sqrt{u}} = 0$$</p>
<p>Second is convergent from Dirichlet test.</p>
<p>Is it correct?
Also how to find the value of this integral ($\sqrt{\frac{\pi}{8}}$) ?</p>
| Aritra Das | 229,480 | <p>Putting $\sqrt{u}=t$ and $\dfrac{du}{2\sqrt{u}}=dt$, </p>
<p>$$I = \int_{0}^{+\infty}\frac{\sin(u)du}{2\sqrt{u}}\\=\int_0^{+\infty} \sin(t^2) dt\\=\int_0^{+\infty}\Im(e^{it^2})dt\\=\Im\left(\int_0^{+\infty}e^{it^2}dt\right)$$</p>
<p>Putting $it^2=-z$ and $t=\sqrt{-\frac1iz}=\sqrt{i}\sqrt{z}$, $$dt = \sqrt{i}\frac{1}{2\sqrt{z}}dz$$</p>
<p>Hence, </p>
<p>\begin{align*}
I &=\Im\left(\int_0^{+\infty}e^{-z}\sqrt{i}\frac{1}{2\sqrt{z}}dz\right)\\
&=\Im\left(\frac{\sqrt{i}}{2}\int_0^{+\infty} z^{-\frac12}e^{-z} dz\right)\\
&=\Im\left(\frac{\sqrt{i}}{2} \Gamma\left(\frac12\right)\right)\\
&=\Im\left(\sqrt{i}\frac{\sqrt{\pi}}{2}\right)\\
&=\Im\left(\frac{1}{2\sqrt{2}}(1+i)\sqrt{\pi}\right) \tag{!!}\\
&=\fbox{$\sqrt{\frac{\pi}{8}}$}
\end{align*}</p>
<p>Now I'm concerned about step $(!!)$ because I don't see why I should take that root of $i$ and not the other one. However since it yields the right answer, I think there must be some reason. Could anyone help me out?</p>
|
4,465,150 | <p>Let <span class="math-container">$A_1,A_2,…,A_n$</span> be events in a probability space <span class="math-container">$(\Omega,\Sigma,P)$</span>.</p>
<p>If <span class="math-container">$A_1,A_2,…,A_n$</span> are independent then <span class="math-container">$A_1^c,A_2^c,…,A_n^c$</span> are also independent, (where <span class="math-container">$A^c = \Omega \setminus A$</span>).</p>
<p>I have found a proof by induction for this exercise, however, I have not been able to understand the conclusion of the proof, which I have marked in red. That is, why can it be immediately concluded that <span class="math-container">$A_1^c , A_2^c ,..., A_{k+1}^c$</span> are independent? I would really appreciate if someone can give me a clear explanation of what happens in that conclusion.</p>
<p><strong>Proof by induction.</strong></p>
<p><em>Basis for the Induction</em>.</p>
<p>If <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span> are independent then <span class="math-container">$A_1^c$</span> and <span class="math-container">$A_2^c$</span> are independent.</p>
<p>Assume <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span> are independent. Then
<span class="math-container">\begin{align*}
P(A_1^c \cap A_2^c)
&= 1 - P(A_1 \cup A_2) \\
&= 1 - P(A_1) - P(A_2) + P(A_1 \cap A_2) \\
&= 1 - P(A_1) - P(A_2) + P(A_1)P(A_2) \\
&= (1-P(A_1))(1-P(A_2)) \\
&= P(A_1^c)P(A_2^c).
\end{align*}</span></p>
<p><em>Induction Hypothesis.</em></p>
<p>This is our induction hypothesis:</p>
<p>If <span class="math-container">$A_1,A_2,…,A_k$</span> are independent then <span class="math-container">$A_1^c,A_2^c,…,A_k^c$</span> are independent.</p>
<p>Then we need to show:</p>
<p>If <span class="math-container">$A_1,A_2,…,A_{k+1}$</span> are independent then <span class="math-container">$A_1^c,A_2^c,…,A_{k+1}^c$</span> are independent.</p>
<p><em>Induction Step</em>.</p>
<p>This is our induction step.</p>
<p>Suppose <span class="math-container">$A_1,A_2,…,A_{k+1}$</span> are independent.</p>
<p>Then:
<span class="math-container">\begin{align}
P\left( {\bigcap_{i = 1}^{k + 1} A_i}\right) &= P\left( \bigcap_{i=1}^{k}A_i \cap A_{k+1} \right) \\
&= \prod_{i=1}^{k}P(A_i) \cdot P(A_{k+1})\\
&= P\left(\bigcap_{i=1}^{k}A_i\right) \cdot P(A_{k+1})
\end{align}</span></p>
<p>So we see that <span class="math-container">$\bigcap_{i=1}^{k}A_i$</span> and <span class="math-container">$A_{k+1}$</span> are independent.</p>
<p>So <span class="math-container">$\bigcap_{i=1}^{k}A_i$</span> and <span class="math-container">$A_{k+1}^c$</span> are independent.</p>
<p><span class="math-container">$\color{red}{\text{So, from the above results, we can see that} A_1^c,A_2^c,…,A_{k+1}^c \text{are independent}}.$</span></p>
| mark leeds | 49,818 | <p>Hi Inquirer: My apologies for the delay. I've had a lot going on so I just got back to this today.</p>
<p>I figured I'd put it in an answer since you get more space that way.</p>
<p>So, you agree that you proved the statement for <span class="math-container">$i = 2$</span> case, right. You showed that if <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span> are independent, then <span class="math-container">$A^{c}_{1}$</span> and <span class="math-container">$A^{c}_{2}$</span> are independent. So, what induction says is that, if we have proven the statement for <span class="math-container">$i = 2$</span> and then we can show that, it being true for <span class="math-container">$i = 2$</span>, implies that it is also true for the <span class="math-container">$i = 3$</span> case, then we are done with the proof by an induction argument.</p>
<p>So, let us assume that <span class="math-container">$A_{1}, A_{2}$</span> and <span class="math-container">$A_3$</span> are independent. We want to show that, given that the statement is true for <span class="math-container">$A_{1}$</span> and <span class="math-container">$A_{2}$</span>, then this implies that <span class="math-container">$A^{c}_{1}, A^{c}_{2}$</span> and <span class="math-container">$A^{c}_{3}$</span> are independent.</p>
<p>So, what we can do for clarity is let <span class="math-container">$k = 2$</span> and then use the same proof that you used in your question.</p>
<p><span class="math-container">\begin{align}
P\left( {\bigcap_{i = 1}^{2 + 1} A_i}\right) &= P\left( \bigcap_{i=1}^{2}A_i \cap A_{2+1} \right) \\
&= \prod_{i=1}^{2}P(A_i) \cdot P(A_{2+1})\\
&= P\left(\bigcap_{i=1}^{2}A_i\right) \cdot P(A_{2+1}) \\
&= P\left(\bigcap_{i=1}^{2}A_i\right) \cdot P(A_{3})
\end{align}</span></p>
<p>Next we can use the trick of letting <span class="math-container">$A_{1}^{*} = \bigcap_{i=1}^{2}A_i$</span> and letting <span class="math-container">$A_{2}^{*} = A_{3}$</span>.</p>
<p>So, what we have shown above is that that <span class="math-container">$A^{*}_1$</span> and <span class="math-container">$A^{*}_2$</span> are independent. But we then know that (because it's already been proven for <span class="math-container">$i = 2$</span>) that <span class="math-container">$A^{c*}_{1}$</span> and <span class="math-container">$A^{c*}_{2}$</span> are independent. But <span class="math-container">$A^{c*}_{1} = \bigcap_{i=1}^{2}A^{c}_i$</span> and <span class="math-container">$A^{*c}_{2} = A^{c}_{3}$</span> so this
means that <span class="math-container">$A^{c}_{1}, A^{c}_{2}$</span> and <span class="math-container">$A^{c}_{3}$</span> are independent. This is because <span class="math-container">$\left(\bigcap_{i=1}^{2}A^{c}_i\right) \bigcap A^{c}_{3} = \bigcap_{i=1}^{3}A^{c}_i$</span>.</p>
<p>So, what we have shown is that the <span class="math-container">$i = 2$</span> case being true implies that the <span class="math-container">$i = 3$</span> case is also true so, since this argument holds for <span class="math-container">$k = 2$</span>, it implies that it also holds for any value of <span class="math-container">$k$</span>. So we have proved the statement using induction. Does that make sense ? If not, let me know what part doesn't and I'll try to explain it more clearly.</p>
|
185,020 | <p>In literature, there are many proofs of the well-known result $$\zeta(2) = \frac{\pi^2}{6}.$$ </p>
<p>However, as far as I know, they do not offer an <em>intuitive</em> explanation of why this result <em>should</em> be true. </p>
<p>So my question is: </p>
<blockquote>
<p>What is the key intuition -- that is, the picture -- behind the result? Are there any <em>visual</em> of anyway <em>intuitive</em> proofs of the statement?</p>
</blockquote>
| Alex R. | 934 | <p>The word intuitive by definition means that something "feels correct", and is very myopic. The fact that there are so many different proofs is in itself a gift because you can pick your favorite and intuit all you want. </p>
<p>I like Euler's proof because it shows how it's related to the Taylor series of $\sin(x)$, and so the appearance of $\pi$ and $6$ becomes less mysterious. </p>
<p>On the other hand, someone with lots of experience in Fourier analysis will see it as the Fourier transform of something, and again the appearance of $\pi$, in particular $\pi^2$, becomes apparent from Parseval's identity. </p>
|
185,020 | <p>In literature, there are many proofs of the well-known result $$\zeta(2) = \frac{\pi^2}{6}.$$ </p>
<p>However, as far as I know, they do not offer an <em>intuitive</em> explanation of why this result <em>should</em> be true. </p>
<p>So my question is: </p>
<blockquote>
<p>What is the key intuition -- that is, the picture -- behind the result? Are there any <em>visual</em> of anyway <em>intuitive</em> proofs of the statement?</p>
</blockquote>
| user60745 | 60,745 | <p>Here is a short note by Robert E. Greene (UCLA):</p>
<p><a href="http://www.math.ucla.edu/~greene/How%20Geometry.pdf" rel="nofollow">How Geometry implies $\sum \frac{1}{k^2} = \pi^2/6$</a></p>
|
577,163 | <p>Let $A=(a_{ij})_{n\times n}$ be such that
$a_{ij}>0$ and $\det(A)>0$.</p>
<p>Defining the matrix $B:=(a_{ij}^{\frac{1}{n}})$, show that $\det(B)>0?$.</p>
<p>This problem is from my friend; I have thought about it for a while, but I cannot solve it. Thank you </p>
| Marc van Leeuwen | 18,880 | <p>This is wrong.
$$
\begin{pmatrix}27&8&1\\1&1&\epsilon\\8&\epsilon&1\end{pmatrix}
$$
where $\epsilon>0$ is very small.</p>
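<p>A numerical confirmation of this counterexample with a concrete small $\epsilon$ (added for illustration):</p>
<pre><code>import numpy as np

eps = 1e-6
A = np.array([[27.0, 8.0, 1.0],
              [ 1.0, 1.0, eps],
              [ 8.0, eps, 1.0]])
B = A ** (1.0 / 3.0)                       # entrywise cube roots (n = 3)
print(np.linalg.det(A), np.linalg.det(B))  # det(A) is about 11, det(B) is about -0.95: opposite signs
</code></pre>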
|
577,163 | <p>Let $A=(a_{ij})_{n\times n}$ be such that
$a_{ij}>0$ and $\det(A)>0$.</p>
<p>Defining the matrix $B:=(a_{ij}^{\frac{1}{n}})$, show that $\det(B)>0?$.</p>
<p>This problem is from my friend; I have thought about it for a while, but I cannot solve it. Thank you </p>
| Hu Zhengtang | 53,845 | <p>It is <strong>false</strong> when $n\ge 3$. Counter-example: for $x>0$ small, let</p>
<p>$$A(x)=\begin{pmatrix}8 & (1+3x)^3 & 1\\ 1 & (1+x)^3 & 1 \\ x^3 & 1& 1\end{pmatrix}.$$
Then</p>
<p>$$B(x)=\begin{pmatrix}2 & 1+3x & 1\\ 1 & 1+x & 1 \\ x & 1& 1\end{pmatrix}.$$
Direct calculation shows that when $x>0$ is small enough,
$$\det A(x)=15x+O(x^2)>0,\quad \det B(x)=-x+O(x^2)<0.$$</p>
|
3,614,875 | <p><strong>Prop:</strong> Every sequence has a monotone subsequence.</p>
<p><strong>Pf:</strong> Suppose <span class="math-container">$\{a_n\}_{n\in \mathbb{N}}$</span> is a sequence. Choose <span class="math-container">$a_{n_1} \in \{a_1,a_2,...\}$</span>. Further choose smallest possible <span class="math-container">$a_{n_2} \in \{a_1,a_2,...\}$</span> such that <span class="math-container">$a_{n_1}\leq a_{n_2}$</span>. Denote the subsequence of <span class="math-container">$\{a_n\}$</span> starting with <span class="math-container">$a_{n_2}$</span> by <span class="math-container">$\{a^{(2)}_{n}\}$</span>. Choose smallest possible <span class="math-container">$a_{n_3}\in \{a^{(2)}_n\}$</span> such that <span class="math-container">$a_{n_2}\leq a_{n_3}$</span>. Denote the subsequence of <span class="math-container">$\{a^{(2)}_n\}$</span> starting with <span class="math-container">$a_{n_3}$</span> by <span class="math-container">$\{a^{(3)}_{n}\}$</span>. Choose smallest possible <span class="math-container">$a_{n_4} \in \{a^{(3)}_n\}$</span> such that <span class="math-container">$a_{n_3}\leq a_{n_4}$</span>. Denote the subsequence of <span class="math-container">$\{a^{(3)}_n\}$</span> starting with <span class="math-container">$a_{n_4}$</span> by <span class="math-container">$\{a^{(4)}_{n}\}$</span>. Choose smallest possible <span class="math-container">$a_{n_5} \in \{a^{(3)}_n\}$</span> such that <span class="math-container">$a_{n_4}\leq a_{n_5}$</span>. Then, <span class="math-container">$a_{n_1}\leq a_{n_2}\leq ... \leq a_{k}\leq a_{k+1}$</span>. Conversely, if there exists an index <span class="math-container">$n_2$</span> such that <span class="math-container">$a_{n_1}\geq a_{n_2}$</span>, denote the subsequence of <span class="math-container">$\{a_n\}$</span> starting with <span class="math-container">$a_{n_2}$</span> by <span class="math-container">$\{b^{(2)}_n\}$</span>. Choose largest possible <span class="math-container">$a_{n_3} \in \{b^{(2)}_n\}$</span> such that <span class="math-container">$a_{n_2}\geq a_{n_3}$</span>. Repeating the same process, we inductively conclude that for all <span class="math-container">$k \in \mathbb{N}$</span>, <span class="math-container">$a_k \geq a_{k+1}$</span>. Thus, <span class="math-container">$a_1\geq a_2 \geq ... \geq a_k \geq a_{k+1}$</span>. </p>
<p>Does my proof look correct? I wrote my arguments more understandable.</p>
| DonAntonio | 31,254 | <p>Your proof will work, with the natural changes, if you argue as follows: let <span class="math-container">$\;a_n\;$</span> be an infinite sequence. If for some <span class="math-container">$\;N\in\Bbb N\;$</span> we have that the sequence is monotonic (whatever: ascending or descending) for <span class="math-container">$\;n>N\;$</span>, then we're clearly done, otherwise...and now you do what you wrote, but you will prove something stronger: that the sequence has a monotonic <strong>ascending</strong> subsequence:</p>
<p>Let <span class="math-container">$\;a_{n_1}=a_1\;$</span>, and now let <span class="math-container">$\;n_2\;$</span> be the first index greater that <span class="math-container">$\;n_1=1\;$</span> with <span class="math-container">$\;a_{n_1}\le a_{n_2}\;$</span> (this </p>
<p>index <span class="math-container">$\;n_2\;$</span> must exist otherwise the sequence is monotonic descending for <span class="math-container">$\;n>1$</span>..!). Next, let <span class="math-container">$\;n_3\;$</span> </p>
<p>be the first index greater than <span class="math-container">$\;n_2\;$</span> such that <span class="math-container">$\;a_{n_2}\le a_{n_3}\;$</span> (again argue as before to show it exists...) </p>
<p>and etc. Inductively we've defined a subsequence <span class="math-container">$\;\{a_{n_k}\}_{k=1}^\infty\subset \{a_n\}_{n=1}^\infty\;$</span> which is monotonic </p>
<p>ascending.</p>
<p>Pollish now the above and there you go...</p>
<p><strong>Note</strong>: as you can read in the comment below, a correction must be made in the above. Instead of taking $\;a_{n_1}=a_1\;$, which could cause the problem described in the comment, choose $\;n_1\;$ to be the first index for which there exists at least one index $\;m>n_1\;$ s.t. $a_{n_1}\le a_m\;$. The consecutive subindices chosen will be equally required to fulfill this.</p>
<p>Or it is possible to write the proof as follows: if $\;a_1\;$ is bigger than any other element in the sequence, then we'll show a monotonic descending subsequence... and you proceed as before, choosing indices that make sure the subsequence is monotonic descending.</p>
|
1,668,487 | <p>$$2^{-1} \equiv 6\mod{11}$$</p>
<p>Sorry for very strange question. I want to understand on which algorithm there is a computation of this expression. Similarly interested in why this expression is equal to two?</p>
<p>$$6^{-1} \equiv 2\mod11$$</p>
| vrugtehagel | 304,329 | <p>First, we need to understand what $2^{-1}\mod 11$ stands for. Raising to the power $-1$ is usually used to denote the <em>multiplicative inverse</em>. That is, $b$ is a multiplicative inverse of $a$ if and only if $a\cdot b=1$. Thus, $2^{-1}\mod 11$ denotes the number such that $$2^{-1}\cdot2\equiv1\mod11$$ Since $2\cdot 6\equiv 12\equiv1\mod11$, we see $2^{-1}\equiv 6\mod11$. We can do the same to show $6^{-1}\equiv 2\mod 11$.
<hr>
For a more reliable approach (that is, one that doesn't rely on simply testing $2\cdot 1,2\cdot 2,\cdots,2\cdot 10$ and seeing which product is $1$), we can use the fact that, since $\gcd(2,11)=1$, $$2^{\phi(11)}\equiv 1\mod 11$$ by Euler's theorem (where $\phi$ is <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function" rel="nofollow">Euler's totient function</a>; alternatively, use <a href="https://en.wikipedia.org/wiki/Fermat%27s_little_theorem" rel="nofollow">Fermat's Little Theorem</a>, since $11$ is prime). Since $\phi(11)=10$, we have \begin{align}
2^{-1}&\equiv 2^{-1}\cdot 1\\
&\equiv 2^{-1}\cdot2^{\phi(11)}\\
&\equiv 2^{-1}\cdot2^{10}\\
&\equiv 2^{-1+10}\\
&\equiv 2^9\\
&\equiv 512\\
&\equiv 6\mod 11
\end{align}</p>
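<p>As a sanity check, the computation above is easy to reproduce in a few lines of Python. This is only a sketch assuming a prime modulus, so Fermat's little theorem applies; the function name is my own:</p>

<pre><code>def mod_inverse_prime(a, p):
    """Inverse of a modulo a prime p, assuming gcd(a, p) = 1.

    By Fermat's little theorem a^(p-1) = 1 (mod p), so a^(p-2) is the
    inverse -- the same 2^9 = 512 = 6 (mod 11) computation as above.
    """
    return pow(a, p - 2, p)

print(mod_inverse_prime(2, 11))  # 6
print(mod_inverse_prime(6, 11))  # 2
</code></pre>

<p>Python 3.8 and later also accept <code>pow(2, -1, 11)</code> directly, which returns the modular inverse whenever the base is coprime to the modulus.</p>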
|
847,672 | <p>I repeat the Peano Axioms:</p>
<ol>
<li><p>Zero is a number.</p></li>
<li><p>If a is a number, the successor of a is a number.</p></li>
<li><p>Zero is not the successor of any number.</p></li>
<li><p>Two numbers of which the successors are equal are themselves equal.</p></li>
<li><p>If a set S of numbers contains zero and also the successor of every number in S, then every number is in S.</p></li>
</ol>
<p>Suppose we have two isomorphic "copies" of the natural numbers, $\mathbb{N}':=\{0',1',2',...\}$ and $\mathbb{N}'':=\{0'',1'',2'',...\}$.
Then the set $NUMBERS:=\mathbb{N}'\cup \mathbb{N}''$, with "Zero"$:=0'$ and the "natural" successor for each element of either of the two sets, seems to satisfy the axioms.</p>
<p>Yes, P5 looks very strange now, because it says that a set which I only know to contain $0'$ and the successor of every number in it must automatically contain $0''$, even though $0''$ is not the successor of any number.</p>
<p>If this way of reasoning is allowed, we could also use a family of copies of $\mathbb{N}$ indexed by a continuous parameter, so there would be two <em>non-isomorphic</em> Peano sets.</p>
<p>Because this sounds very strange to me, it's possible that there is a problem in my argument. What do you think?</p>
| David Z | 1,190 | <p>I'm not fully sure I understand your question, but perhaps you will find this argument enlightening: consider the set</p>
<p>$$\mathbb{N}' = \{0', s(0'), s(s(0')), \ldots\}$$</p>
<p>which contains $0'$ and all its successors. (This is the same $\mathbb{N}'$ you defined in the question.) Hopefully it is clear that $\mathbb{N}'$ contains the successor of every element in $\mathbb{N}'$:</p>
<p>$$\forall n\in\mathbb{N}'\ s(n)\in\mathbb{N}'$$</p>
<p>So axiom 5 applies to $\mathbb{N}'$, which tells us that $\mathbb{N}'$ contains all numbers, or in other words, axiom 5 tells us that if $n$ is a number, $n\in\mathbb{N}'$. That in turn means anything not in $\mathbb{N}'$ is not a number.</p>
<p>Now, if you want, you can postulate the existence of another object, like $0''$, and even a set $\mathbb{N}''$ of that object and its successors. But you cannot claim that $0''$ (or any of its successors) is a <em>number</em> (as defined in the Peano axioms) without contradicting the result of the preceding argument, and thus contradicting axiom 5.</p>
<p>You can construct sets which contain numbers and other things, like $\mathbb{N}'\cup\mathbb{N}''$. This set contains every number, because every number is in $\mathbb{N}'$ and by the definition of the union operation, anything in $\mathbb{N}'$ is in the union of $\mathbb{N}'$ and any other set. It also contains a bunch of other things which are not numbers, namely the objects in the set $\mathbb{N}''$. Nothing says that every object in the set $S$ referenced in axiom 5 must be a number.</p>
|
847,672 | <p>I repeat the Peano Axioms:</p>
<ol>
<li><p>Zero is a number.</p></li>
<li><p>If a is a number, the successor of a is a number.</p></li>
<li><p>Zero is not the successor of any number.</p></li>
<li><p>Two numbers of which the successors are equal are themselves equal.</p></li>
<li><p>If a set S of numbers contains zero and also the successor of every number in S, then every number is in S.</p></li>
</ol>
<p>Suppose we have two isomorphic "copies" of the natural numbers, $\mathbb{N}':=\{0',1',2',...\}$ and $\mathbb{N}'':=\{0'',1'',2'',...\}$.
Then the set $NUMBERS:=\mathbb{N}'\cup \mathbb{N}''$, with "Zero"$:=0'$ and the "natural" successor for each element of either of the two sets, seems to satisfy the axioms.</p>
<p>Yes, P5 looks very strange now, because it says that a set which I only know to contain $0'$ and the successor of every number in it must automatically contain $0''$, even though $0''$ is not the successor of any number.</p>
<p>If this way of reasoning is allowed, we could also use a family of copies of $\mathbb{N}$ indexed by a continuous parameter, so there would be two <em>non-isomorphic</em> Peano sets.</p>
<p>Because this sounds very strange to me, it's possible that there is a problem in my argument. What do you think?</p>
| Dan Christensen | 3,515 | <p><strong>Theorem</strong></p>
<p>Suppose we have zeroes $0$ and $0'$ in a <em>Peano set</em> such that:</p>
<ol>
<li><p>$0$ is a number.</p></li>
<li><p>$0'$ is a number</p></li>
<li><p>If <em>x</em> is a number, the successor of <em>x</em> is a number.</p></li>
<li><p>$0$ is not the successor of any number.</p></li>
<li><p>$0'$ is not the successor of any number.</p></li>
<li><p>Two numbers of which the successors are equal are themselves equal.</p></li>
<li><p>If a set S of numbers contains $0$ and also the successor of every number in S, then every number is in S.</p></li>
<li><p>If a set S of numbers contains $0'$ and also the successor of every number in S, then every number is in S.</p></li>
</ol>
<p>Therefore $0=0'$, and there is a unique <em>zero</em> in every <em>Peano set</em> (defined in the obvious way).</p>
<p><strong>Proof</strong></p>
<p>Using the induction axiom (7), we can easily prove that every non-$0$ number has a predecessor, i.e. is the successor of some number.</p>
<p>Suppose $0\neq 0'$. Therefore $0'$ must have a predecessor. But this contradicts (5). Therefore, we must have $0=0'$. </p>
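<p>For what it's worth, the key lemma above (every non-$0$ number has a predecessor) can be checked mechanically. Here is a minimal Lean 4 sketch, stated for Lean's built-in naturals rather than for the abstract axioms (1)-(8); that restriction is my own simplification, and for Lean's inductively defined <code>Nat</code> the lemma follows by case analysis rather than by induction:</p>

<pre><code>-- Every non-zero natural number is the successor of some number.
theorem nonzero_has_pred : ∀ n : Nat, n ≠ 0 → ∃ m, n = Nat.succ m
  | 0,          h => absurd rfl h   -- the hypothesis 0 ≠ 0 is contradictory
  | Nat.succ m, _ => ⟨m, rfl⟩       -- a successor is its own witness
</code></pre>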
|