Dataset schema (recovered from the viewer's column statistics):
qid — int64, values 1 to 4.65M
question — large_string, lengths 27 to 36.3k characters
author — large_string, lengths 3 to 36 characters
author_id — int64, values -1 to 1.16M
answer — large_string, lengths 18 to 63k characters
4,140,881
<p>I want to prove the statement in the title.</p> <blockquote> <p>If <span class="math-container">$M$</span> is a finitely generated module over a local ring <span class="math-container">$A$</span>, then there is a free <span class="math-container">$A$</span>-module <span class="math-container">$L$</span> such that <span class="math-container">$L/mL\simeq M/mM$</span> where <span class="math-container">$m$</span> is the unique maximal ideal of <span class="math-container">$A$</span>.</p> </blockquote> <p>My attempt: Let <span class="math-container">$M$</span> be a finitely generated <span class="math-container">$A$</span>-module where <span class="math-container">$A$</span> is a local ring with maximal ideal <span class="math-container">$m$</span>. Then <span class="math-container">$M/mM$</span> is an <span class="math-container">$A/m$</span>-vector space, so there is a basis <span class="math-container">$x_1+mM,...,x_n+mM$</span> of <span class="math-container">$M/mM$</span>. Now let <span class="math-container">$L$</span> be a free <span class="math-container">$A$</span>-module generated by <span class="math-container">$x_1,...,x_n$</span>. Define <span class="math-container">$\phi:L\to M/mM$</span> by <span class="math-container">$x_i\mapsto x_i+mM$</span>. Then I want to show the kernel <span class="math-container">$\{\sum_{i=1}^na_ix_i| \sum_{i=1}^na_ix_i \in mM\}=mL$</span>. <span class="math-container">$\supset$</span> is clear, but how can I prove the reverse inclusion?</p>
Math Lover
801,574
<p>If the set of passwords that are missing lowercase letters is <span class="math-container">$N_L$</span>, the set of passwords that are missing uppercase letters is <span class="math-container">$N_U$</span>, and the set of passwords that are missing digits is <span class="math-container">$N_D$</span>, then</p> <p><span class="math-container">$|N_L| = 39^{10} \:, \: |N_U| = 39^{10} \: , \: |N_D| = 58^{10}$</span></p> <p><span class="math-container">$ |N_L \cap N_U|= 10^{10} \:, \: |N_L \cap N_D|= 29^{10} \: , \: |N_U \cap N_D|= 29^{10}$</span></p> <p>And clearly <span class="math-container">$|N_L \cap N_U \cap N_D| = 0$</span></p> <p>Therefore the number of passwords that are missing lowercase characters, uppercase characters, or digits is given by</p> <p><span class="math-container">$|N_L \cup N_U \cup N_D| = |N_L| + |N_U| + |N_D| - |N_L \cap N_U| - |N_L \cap N_D| - |N_U \cap N_D| + |N_L \cap N_U \cap N_D|$</span></p> <p><span class="math-container">$ = 39^{10} + 39^{10} + 58^{10} - 10^{10} - 29^{10} - 29^{10}$</span></p> <p>Finally, the number of passwords that have at least one lowercase letter, one uppercase letter, <em>and</em> one digit is given by</p> <p><span class="math-container">$ \ 68^{10} - |N_L \cup N_U \cup N_D|$</span></p>
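<p>As a quick sanity check, here is a minimal Wolfram Language sketch that simply plugs the counts quoted above into the inclusion–exclusion formula (the alphabet size of <span class="math-container">$68$</span> and the per-class counts are taken from the answer as stated, not re-derived):</p> <pre><code>(* passwords missing a lowercase letter, an uppercase letter, or a digit *)
missing = 39^10 + 39^10 + 58^10 - 10^10 - 29^10 - 29^10;

(* passwords containing at least one character of each required class *)
68^10 - missing
</code></pre>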
10,174
<p>I was reading about realizations of the "Fibonacci" fusion ring $X \otimes X = X \oplus 1$ in <a href="http://arxiv.org/abs/math.qa/0203255" rel="noreferrer">Fusion Categories of Rank 2</a> by Victor Ostrik. Apparently, there are two of them and they arise in various ways:</p> <ul> <li>integer-spin representations of integrable $\widehat{sl}_2$-modules of level 3</li> <li>the minimal model $\mathcal{M}(2,5)$ of the Virasoro algebra (central charge c = -22/5)</li> <li>representations of $U_q(sl_2)$ with $q = e^{\pi i /5}, e^{3\pi i / 5}$.</li> </ul> <p><del>In any of these cases, how is the Fibonacci category realized?</del></p> <p>I would like to understand each of these specific categorifications of the Fibonacci fusion ring. Can someone here explain the basics of integrable $\widehat{sl}_2$-modules or about the $\mathcal{M}(2,5)$ minimal model from Conformal Field Theory? I would also like to learn about $U_q(sl_2)$, though it's probably written in many texts.</p>
José Figueroa-O'Farrill
394
<p>The Virasoro minimal model $\mathcal{M}(2,5)$ (or in some conventions also $\mathcal{M}(5,2)$ is the conformal field theory which describes the critical behaviour of the <em>Lee-Yang edge singularity</em>. It is described, for example, in <a href="http://books.google.com/books?id=keUrdME5rhIC" rel="nofollow">Conformal Field Theory</a>, by di Francesco, Mathieu and Sénéchal; albeit the description of the Lee-Yang singularity itself is perhaps a little too physicsy. Still their treatment of minimal models should be amenable to mathematicians without prior exposure to physics.</p> <p>At any rate, googling <em>Lee-Yang edge singularity</em> might reveal other sources easier to digest. In general it is the Verlinde formula which relates the fusion ring and the Virasoro characters, and at least for the case of the Lee-Yang singularity, these can be related in turn to Temperley-Lieb algebras and Ocneanu path algebras on a suitable graph. Some details appear in <a href="http://arxiv.org/abs/hep-th/9501005" rel="nofollow">this paper</a>.</p> <p>The relation between the Virasoro minimal models and the representations of $\widehat{sl}_2$ goes by the name of the <em>coset construction</em> in the Physics conformal field theory literature or also <em>Drinfeld-Sokolov reduction</em>. This procedure gives a cohomology theory (a version of semi-infinite cohomology for a nilpotent subalgebra) which produces Virasoro modules from $\widehat{sl}_2$ modules. Relevant words to google are <em>W- algebras</em>, <em>Casimir algebras</em>,... Of course here we are dealing with the simplest case of $\widehat{sl}_2$ and Virasoro, which is the tip of a very large iceberg. The case of the Lee-Yang edge singularity is simple enough that it appears in many papers as an example from which to understand more general constructions.</p> <p>I know less about the quantum group story, but <a href="http://arxiv.org/abs/hep-th/9407186" rel="nofollow">this paper of Gaberdiel</a> might be a good starting point.</p>
10,174
<p>I was reading about realizations of the "Fibonacci" fusion ring $X \otimes X = X \oplus 1$ in <a href="http://arxiv.org/abs/math.qa/0203255" rel="noreferrer">Fusion Categories of Rank 2</a> by Victor Ostrik. Apparently, there are two of them and they arise in various ways:</p> <ul> <li>integer-spin representations of integrable $\widehat{sl}_2$-modules of level 3</li> <li>the minimal model $\mathcal{M}(2,5)$ of the Virasoro algebra (central charge c = -22/5)</li> <li>representations of $U_q(sl_2)$ with $q = e^{\pi i /5}, e^{3\pi i / 5}$.</li> </ul> <p><del>In any of these cases, how is the Fibonacci category realized?</del></p> <p>I would like to understand each of these specific categorifications of the Fibonacci fusion ring. Can someone here explain the basics of integrable $\widehat{sl}_2$-modules or about the $\mathcal{M}(2,5)$ minimal model from Conformal Field Theory? I would also like to learn about $U_q(sl_2)$, though it's probably written in many texts.</p>
Noah Snyder
22
<p>Unfortunately all three of those realizations are the sort of thing you need to read a book about not a MO post. I agree with Greg that Kassel's book is a great place to start for the quantum group construction (I don't know the other two constructions well, presumably for the affine algebra construction you'd want to start with Kac's book?).</p> <p>On the other hand there is an easier to explain elementary diagrammatic description. As usual with diagram categories you only construct a full subcategory and then you'll need to take the additive and idempotent completions to get an abelian category.</p> <p>Consider the Temperley-Lieb subcategory, whose objects are indexed by integers and whose morphisms m->n are given by linear combinations of planar diagrams of nonintersecting arcs with m boundary points at the bottom and n boundary points at the top modulo a single relation that a circle can be removed for a multiplicative factor of either the golden ratio or its conjugate. Composition is stacking, tensor product is disjoint union. There's an explicit 4-strand projection (called a Jones-Wenzl idempotent) here that has the property that any way you close it off you get zero. Kill that idempotent. Now look at the "even part" i.e. the full subcategory whose objects are even integers. This is your category. Its simple objects are the 0 and 2-strand Jones-Wenzl idempotents.</p> <p>There's another way to think of this example. First checkboard shade the regions of all your even diagrams so that they're unshaded on the outside. Then collapse all the dark regions to lines. What you end up with now has half as many boundary points and is allowed to have internal 3-valent and 1-valent vertices. It's easy to see that they satisfy an I=H relation and a relation allowing absorbing vertices. This gives a construction of the Fibonacci category using the Yamada polynomial relations (I think to get the usual Yamada polynomial on the nose here you want to actually throw in a bunch of JW2s everywhere but its six of one half dozen the other).</p> <p>Finally there's a slightly different diagram description given in the appendix of one of my <a href="http://arxiv.org/abs/0808.0764" rel="noreferrer">papers with Emily Peters and Scott Morrison</a>. In our notation there the Fibonacci category is (the additive and idempotent completion of) the tadpole planar algebra T_2.</p>
4,104,364
<p>I got the following exercise:<br /> Let <span class="math-container">$W$</span> be a finite-dimensional <span class="math-container">$\Bbb{R}$</span>-vector space. Let <span class="math-container">$\Bbb{R}_W=\Bbb{R}\times W$</span>. Define addition and multiplication by <span class="math-container">$(r,w)+(s,v)=(r+s,w+v)$</span>, <span class="math-container">$(r,w)*(s,v)=(rs,sw+rv)$</span>, for <span class="math-container">$r,s\in \Bbb{R}$</span> and <span class="math-container">$w,v\in W$</span>.<br /> It is easy to show that <span class="math-container">$\Bbb{R}_W$</span> is a commutative unitary ring. Now how do I show it is Noetherian? I think I need to find all of its ideals, but I do not know how to do it. Thanks for your help!</p>
Muses_China
307,348
<p>I think finding all ideals of <span class="math-container">$\mathbb{R}_W$</span> is a good idea. Do not think it too complicated!</p> <p>Let <span class="math-container">$I$</span> be an ideal of <span class="math-container">$\mathbb{R}_W$</span>. We divide it into two cases.</p> <p>In the first case, <span class="math-container">$I$</span> contains an element <span class="math-container">$(r, w)$</span> with <span class="math-container">$r \neq 0$</span>. Then, for every element <span class="math-container">$(s,x)$</span> in <span class="math-container">$\mathbb{R}_W$</span>, we have:</p> <p><span class="math-container">$(r, w) * (\frac{s}{r}, \frac{x}{r} - \frac{s}{r^2}w) = (s, x)$</span>.</p> <p>Therefore, in the first case, <span class="math-container">$I = \mathbb{R}_W$</span> is generated by the identity <span class="math-container">$(1, 0)$</span>.</p> <p>In the second case, every element of <span class="math-container">$I$</span> has the form <span class="math-container">$(0, w)$</span>. Here the set <span class="math-container">$V=\{w\in W \mid (0,w)\in I\}$</span> is a linear subspace of <span class="math-container">$W$</span> (it is closed under addition, and <span class="math-container">$(s,0)*(0,w)=(0,sw)$</span> gives closure under scalar multiplication), hence finite-dimensional because <span class="math-container">$W$</span> is. Choosing a basis <span class="math-container">$\{w_i\}$</span> of <span class="math-container">$V$</span>, the finite set <span class="math-container">$\{(0, w_i)\}$</span> generates <span class="math-container">$I$</span>.</p>
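<p>For completeness, the displayed identity in the first case is a one-line check against the given multiplication rule <span class="math-container">$(r,w)*(s,v)=(rs,sw+rv)$</span>:</p> <p><span class="math-container">$$(r,w)*\Big(\frac{s}{r},\,\frac{x}{r}-\frac{s}{r^2}w\Big)=\Big(r\cdot\frac{s}{r},\ \frac{s}{r}\,w+r\Big(\frac{x}{r}-\frac{s}{r^2}w\Big)\Big)=(s,\,x).$$</span></p>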
4,184,196
<blockquote> <p>Let <span class="math-container">$\hat{f} : \Bbb S^1 \to \Bbb R^2$</span>, <span class="math-container">$\hat{f}(x,y) = (x,y)$</span> and <span class="math-container">$\hat{g} : \Bbb S^1 \to \Bbb R^2$</span>, <span class="math-container">$\hat{g}(x,y) = -(x,y)$</span>. Show that there exists a Homotopy <span class="math-container">$\hat{H} : \Bbb S^1 \times [0,1] \to \Bbb R^2$</span> from <span class="math-container">$\hat{f}$</span> to <span class="math-container">$\hat{g}$</span>.</p> </blockquote> <p>So both <span class="math-container">$\hat{f}$</span> and <span class="math-container">$\hat{g}$</span> are mapping points from the unit disk to the plane. Isn't the image of both maps just the disk back itself? I'm confused on how to get an intuition for the problem here. The definition of Homotopy is that I would need to construct <span class="math-container">$\hat{H}$</span> such that <span class="math-container">$$\hat{H}(x,y,0) = \hat{f}$$</span> and that <span class="math-container">$$\hat{H}(x,y,1) = \hat{g}.$$</span> Certainly <span class="math-container">$$\hat{H}(x,y,t) = (tx, ty)$$</span> doesn't work since <span class="math-container">$\hat{H}(x,y,0) = (0,0) \ne (-x,-y).$</span> What can I do here?</p>
jasnee
916,067
<p>Here is an approach that doesn't use any parametrisation of <span class="math-container">$S^1$</span>: Define <span class="math-container">$$ \hat{H} : S^1 \times [0,1] \to \mathbb{R}^2, \qquad ((x,y),t) \mapsto ((-2t+1)x,(-2t+1)y). $$</span> This map is continuous and you can check that it indeed defines the desired homotopy between <span class="math-container">$\hat{f}$</span> and <span class="math-container">$\hat{g}$</span>. Hope this helps!</p>
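<p>For the record, here is the endpoint check that the answer leaves to the reader (continuity is clear since each component of <span class="math-container">$\hat{H}$</span> is a polynomial in <span class="math-container">$x$</span>, <span class="math-container">$y$</span>, <span class="math-container">$t$</span>):</p> <p><span class="math-container">$$\hat{H}((x,y),0)=(x,y)=\hat{f}(x,y),\qquad \hat{H}((x,y),1)=(-x,-y)=\hat{g}(x,y).$$</span></p>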
2,007,584
<p>Let $M$ be a smooth manifold and $f:M\rightarrow \mathbb{R}$ be a smooth function such that $f(M)=[0,1]$. Let $1/2$ be a regular value and suppose we consider the open and non-empty set $U:=f^{-1}(\frac{1}{2},\infty)\subset M$. I would like to show that $f^{-1}(\frac{1}{2})$ must coincide with the topological boundary of $U$, i.e. $\partial U=f^{-1}(\frac{1}{2})$.</p> <p>I could prove that $\partial U\subset f^{-1}(\frac{1}{2})$. But I have problems to show the opposite inclusion. How can one prove that $ f^{-1}(\frac{1}{2})\subset\partial U$?</p> <p>Best wishes</p>
Jack Lee
1,421
<p>By the Rank Theorem (Theorem 4.12 in my <em>Introduction to Smooth Manifolds</em>, 2nd ed.), each point $p\in f^{-1}(\frac 1 2)$ is contained in the domain of a coordinate chart on which $f$ has a coordinate representation of the form $f(x^1,\dots,x^n) = x^n$. Thus any sufficiently small neighborhood of $p$ contains both points where $f&gt;\frac 1 2$ and points where $f&lt;\frac 1 2$.</p>
334,351
<p>I am a student of mathematics, and have some background in </p> <ul> <li>Algebraic Topology (Hatcher, Bott-Tu, Milnor-Stasheff), </li> <li>Differential Geometry (Lee, Kobayashi-Nomizu), </li> <li>Riemannian Geometry (Do Carmo), </li> <li>Symplectic Geometry (Ana Cannas da Silva) and </li> <li>Differential Topology (Hirsch, Milnor (Morse Theory), Milnor (h-cobordism theorem)). </li> </ul> <p>I would like to learn Floer homology and Khovanov homology. </p> <p>Q1. Is my background sufficient to learn these topics right now? What are the standard first-level and second-level sources for these topics? </p> <p>Q2. At the research level, will I need to know any other areas to work on these topics? In particular, would I need to know algebraic geometry and to what extent? To give you an idea of my present knowledge, I currently know absolutely no algebraic geometry, and even my commutative algebra background has several gaps, especially in parts about DVRs.</p> <p>Q3. Once I finish the books recommended in Q1, what are some good papers to start reading in these areas? </p> <p>I would greatly appreciate reasoning behind any comments that aren't strictly factual and are opinions of the writer.</p> <p>Thank you in advance! :)</p>
David White
11,540
<p>There have been several questions previously in this vein, but yours is more general. My present answer is adapted from an answer to a question asking for a <a href="https://mathoverflow.net/a/149041/11540">"Road Map" to Homotopy Theory</a>. Your question is a bit different, so I'll write some different things. First, we need to define what we mean by "Algebraic Topology" so I'll take the subfields listed on the <a href="https://en.wikipedia.org/wiki/Algebraic_topology" rel="nofollow noreferrer">wikipedia article</a>. I think the books you have suggested for <strong>manifolds</strong> (smooth, Riemannian, etc) are already sufficient to get a working grasp.</p> <p>In general, you have a good list of "first level" sources. The background from the linked answer will give you great second level sources (Q1), plus papers (Q3), for <strong>homotopy theory</strong>:</p> <ul> <li><p><a href="https://mathoverflow.net/questions/136077/an-advanced-beginners-book-on-algebraic-topology">Here is a question</a> asks for an advanced beginners book (and <a href="https://mathoverflow.net/questions/326697/regarding-learning-algebraic-topology">here</a> is another, that was closed as a duplicate). The consensus seemed to be that it was difficult to find a one-size-fits-all text because people come in with such diverse backgrounds. Peter May's textbook <a href="http://www.math.uchicago.edu/~may/CONCISE/ConciseRevised.pdf" rel="nofollow noreferrer">A Concise Course in Algebraic Topology</a> is probably the closest thing we've got. If you like that, then you can also read <a href="http://www.math.uchicago.edu/~may/TEAK/KateBookFinal.pdf" rel="nofollow noreferrer">More concise algebraic topology</a> by May and Ponto. I also recommend <a href="http://indiana.edu/~lniat/m621notessecondedition.pdf" rel="nofollow noreferrer">Davis and Kirk</a>'s Lecture Notes in Algebraic Topology. I think these would be a very reasonable place for a beginning grad student to start (assuming they'd already studied Allen Hatcher's book or something equivalent). I'll add that nice books for simplicial things include <a href="https://www.sciencedirect.com/science/article/pii/0001870871900156" rel="nofollow noreferrer">Curtis</a>, and <a href="https://www.springer.com/gp/book/9783034601887" rel="nofollow noreferrer">Goerss-Jardine</a>.</p></li> <li><p>Another question asked for <a href="https://mathoverflow.net/questions/18041/algebraic-topology-beyond-the-basicsany-texts-bridging-the-gap">textbooks bridging the gap</a> and got similar answers. Finally, there was a more specific question about a <a href="https://mathoverflow.net/questions/81740/modern-source-for-spectra-including-ring-spectra?rq=1">modern source for spectra</a> and this has a host of useful answers. Again, Peter May and coauthors have written quite a bit on the subject, notably EKMM for S-modules, Mandell-May for Orthogonal Spectra, and MMSS for diagram spectra in general. Another great reference is Hovey-Shipley-Smith Symmetric Spectra. On the more modern side, there's Stefan Schwede's Symmetric Spectra Book Project. All these references contain phrasing in terms of model categories, which seem indispensible to modern homotopy theory. Good references are Hovey's book and Hirschhorn's book.</p></li> </ul> <p>Now for Q1 and Q3 for <strong>knot theory</strong>. 
A book that was standard reading for graduate students at Wesleyan learning knot theory is <a href="https://www.barnesandnoble.com/p/knot-theory-manturov/1120989622/2679570029131?st=PLA&amp;sid=BNB_ADL%20Marketplace%20Good%20New%20Books%20-%20Desktop%20Low&amp;sourceId=PLAGoNA&amp;dpid=tdtve346c&amp;2sid=Google_c&amp;gclid=EAIaIQobChMIgK63ho724gIVGLjACh0gdARQEAQYBCABEgJxb_D_BwE" rel="nofollow noreferrer">here</a>. If the start is a bit rough, I can highly recommend <a href="https://rads.stackoverflow.com/amzn/click/com/0821836781" rel="nofollow noreferrer" rel="nofollow noreferrer">this</a> book for an exposition aimed at undergraduates. After those books, <a href="https://math.stackexchange.com/questions/355126/an-introduction-to-khovanov-homology-heegaard-floer-homology">Here</a> is a question that gave recommendations for papers to learn Khovanov homology. </p> <p>As for Q2, if you take the knot theory route, you'll need to know about the Alexander Polynomial and the Jones Polynomial. A good book to start with would be <a href="https://rads.stackoverflow.com/amzn/click/com/0471433349" rel="nofollow noreferrer" rel="nofollow noreferrer">Dummit and Foote</a>. For any of the above subfields of algebraic topology, it would be good to know some commutative algebra, e.g. <a href="https://rads.stackoverflow.com/amzn/click/com/0521367646" rel="nofollow noreferrer" rel="nofollow noreferrer">Matsumura</a> or (the classic, but harder) <a href="https://rads.stackoverflow.com/amzn/click/com/0201407515" rel="nofollow noreferrer" rel="nofollow noreferrer">Atiyah-MacDonald</a>. For both fields, you also need <strong>homological algebra</strong>, and a great book would be <a href="https://rads.stackoverflow.com/amzn/click/com/0387948236" rel="nofollow noreferrer" rel="nofollow noreferrer">Hilton-Stammbach</a>. A second level book, more suited for homotopy theory, would be <a href="https://rads.stackoverflow.com/amzn/click/com/0521559871" rel="nofollow noreferrer" rel="nofollow noreferrer">Weibel</a>. </p> <p>As for algebraic geometry, I have not seen much used in knot theory. If you go the homotopy theory route, you will need to know about sheaves, and eventually about schemes and stacks. A reasonable book would be <a href="https://rads.stackoverflow.com/amzn/click/com/0387902449" rel="nofollow noreferrer" rel="nofollow noreferrer">Hartshorne</a> (but only after the algebraic background above). In my opinion there's no reason to rush into trying to teach yourself algebraic geometry, but if you can take classes on it, do. Much of the algebraic geometry needed for homotopy theory has been reformulated beautifully by <a href="http://www.math.harvard.edu/~lurie/" rel="nofollow noreferrer">Jacob Lurie</a> over the last 10 years, and his writings are also great if you intend to do algebraic topology (after learning homological algebra and learning about simplicial things from the first block of links above).</p> <p>Lastly, there is the relatively new field of <strong>topological data analysis</strong>, and applied algebraic topology. Many excellent sources to learn in that field (from the basics all the way up, including what's needed from homological algebra) are at <a href="https://people.clas.ufl.edu/peterbubenik/intro-to-tda/" rel="nofollow noreferrer">Peter Bubenik's page</a>. In particular, I highly recommend <a href="https://rads.stackoverflow.com/amzn/click/com/1502880857" rel="nofollow noreferrer" rel="nofollow noreferrer">Ghrist's book</a>, and the surveys Bubenik links to.</p> <p>Good luck!</p>
3,781,968
<p>Let <span class="math-container">$d\in\mathbb N$</span>, <span class="math-container">$U\subseteq\mathbb R^d$</span> be open and <span class="math-container">$M\subseteq U$</span> be a <span class="math-container">$k$</span>-dimensional embedded <span class="math-container">$C^1$</span>-submanifold of <span class="math-container">$\mathbb R^d$</span></p> <blockquote> <p>Let <span class="math-container">$f\in\mathcal L^1(U)$</span> and <span class="math-container">$\sigma_M$</span> denote the surface measure on <span class="math-container">$\mathcal B(M)$</span>. Are we able to show that <span class="math-container">$\left.f\right|_M\in\mathcal L^1(\sigma_M)$</span>?</p> </blockquote> <p>Let <span class="math-container">$\lambda$</span> denote the Lebesgue measure on <span class="math-container">$\mathcal B(\mathbb R)$</span>. Maybe we can show <span class="math-container">$$\sigma_M(B)\le\lambda^{\otimes d}(B)\;\;\;\text{for all }B\in\mathcal B(M)\tag1$$</span> and use this to conclude the desired claim.</p> <p>In this regard, we may note that, trivially, <span class="math-container">$U$</span> is a <span class="math-container">$d$</span>-dimensional embedded <span class="math-container">$C^1$</span>-submanifold of <span class="math-container">$\mathbb R^d$</span> and <span class="math-container">$$\sigma_U=\left.\lambda^{\otimes d}\right|_U\tag2.$$</span></p> <p><em>Remark</em>: It might be useful to note that there is the following characterization of the surface measure: <span class="math-container">$\sigma_M$</span> is the unique measure on <span class="math-container">$\mathcal B(M)$</span> with <span class="math-container">$$\left.\sigma_M\right|_\Omega=\sigma_\Omega\tag3$$</span> for every open subset (in the subspace topology) <span class="math-container">$\Omega$</span> of <span class="math-container">$M$</span>.</p>
H. H. Rugh
355,946
<p>If <span class="math-container">$k&lt;d$</span> the answer is no: <span class="math-container">$M$</span> has full surface measure but zero Lebesgue measure, while for <span class="math-container">$U\setminus M$</span> it is the other way around. So the two measures are mutually singular. In particular, your inequality (1) does not hold. When <span class="math-container">$M$</span> is compact and <span class="math-container">$C^1$</span>, it has finite surface measure, since this is the case locally around each point of <span class="math-container">$M$</span> (then use compactness).</p>
3,266,367
<p>Find the solution set of <span class="math-container">$\frac{3\sqrt{2-x}}{x-1}&lt;2$</span></p> <p>Start by squaring both sides: <span class="math-container">$$\frac{-4x^2-x+14}{(x-1)^2}&lt;0$$</span> Factoring and multiplying both sides by -1: <span class="math-container">$$\frac{(4x-7)(x+2)}{(x-1)^2}&gt;0$$</span> I got <span class="math-container">$$(-\infty,-2)\cup \left(\frac{7}{4},\infty\right)$$</span> Since <span class="math-container">$x\leq2$</span>, this becomes <span class="math-container">$$(-\infty,-2)\cup \left(\frac{7}{4},2\right]$$</span></p> <p>But the answer should be <span class="math-container">$(-\infty,1)\cup \left(\frac{7}{4},2\right]$</span>. Did I miss something?</p>
Michael Hoppe
93,935
<p>Define <span class="math-container">$f(x)=\frac{3\sqrt{2-x}}{x-1}-2$</span>. Being continuous on its domain <span class="math-container">$(-\infty,2]\setminus\{1\}$</span>, the function may change its sign only at its zero <span class="math-container">$7/4$</span> or at its singularity, namely at <span class="math-container">$1$</span>. Now check the sign of <span class="math-container">$f$</span> in the corresponding intervals <span class="math-container">$(-\infty,1)$</span>, <span class="math-container">$(1,7/4)$</span> and <span class="math-container">$(7/4,2]$</span>; you want <span class="math-container">$f(x)&lt;0$</span>.</p>
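<p>If a CAS is available, the sign analysis can be cross-checked directly; a minimal Wolfram Language sketch (treating the inequality over the reals) is:</p> <pre><code>Reduce[3 Sqrt[2 - x]/(x - 1) &lt; 2, x, Reals]
</code></pre> <p>which should return the solution set <span class="math-container">$(-\infty,1)\cup\left(\frac{7}{4},2\right]$</span>, matching the interval analysis above.</p>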
629,996
<p>At <a href="https://math.stackexchange.com/questions/629950/why-i-left-px-in-mathbbz-leftx-right2-mid-p0-right-is-not-a-prin#629954">this question</a> I asked about a specific one...</p> <p>But I think that I don't understand the basics: </p> <blockquote> <p>If I have an ideal $I$, what prevents it from being principal? </p> </blockquote> <p>I think that I really need a good example of some ideal $I$ that is <strong>not</strong>(!!) principal. </p> <p>In all the books and examples I saw that $\left&lt;m\right&gt;=m\cdot \mathbb{Z}$ is a principal ideal, but I am looking for the opposite: an example of an ideal that is <strong>not</strong> principal, together with an explanation of why it is not principal... </p> <p>I am opening another question because I really want to understand this, but not via my example...</p> <p>Thank you! </p>
Community
-1
<p>Hint: you can see that $y(x)=c_1 x$ is a solution. Use that solution to construct the second solution by variation of parameters. The second solution is $y(x)=c_2 x e^{-1/x}$.</p>
183,100
<p>Let $\phi: M_2(\mathbb{C})\times M_2(\mathbb{C})\rightarrow M_2(\mathbb{C})$ be the map $$(B_1,B_2)\mapsto [B_1,B_2]$$ which takes two $2\times 2$ matrices to its Lie bracket. </p> <p>Then why does $d\phi_{(B_1,B_2)}:M_2(\mathbb{C})\times M_2(\mathbb{C})\rightarrow M_2(\mathbb{C})$ send $$(D_1,D_2)\mapsto [B_1,D_2]+[D_1,B_2]?$$ </p> <p>$$ $$ Taking $B_1=(g_{ij})$ and $B_2=(h_{ij})$, I do not think taking the partials of the Lie bracket $[B_1,B_2]=$ $$ \left[ \begin{array}{cc} g_{12} h_{21} - g_{21} h_{12} &amp; -g_{12} h_{11} + g_{11} h_{12} - g_{22} h_{12} + g_{12} h_{22} \\ g_{21} h_{11} - g_{11} h_{21} + g_{22} h_{21} - g_{21} h_{22} &amp; g_{21} h_{12} - g_{12} h_{21} \\ \end{array}\right] $$ is a clever way to figure out the map. </p>
Qiaochu Yuan
232
<p>Taking differentials is all about looking at what happens to your map upon a very small perturbation. So compute the bracket $$[B_1 + \epsilon D_1, B_2 + \epsilon D_2]$$</p> <p>and look at the coefficient of $\epsilon$. </p>
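<p>Spelled out, the first-order computation suggested here goes as follows, using bilinearity of the bracket and collecting powers of $\epsilon$:</p> <p>$$[B_1 + \epsilon D_1, B_2 + \epsilon D_2] = [B_1,B_2] + \epsilon\big([B_1,D_2]+[D_1,B_2]\big) + \epsilon^2 [D_1,D_2],$$</p> <p>and the coefficient of $\epsilon$ is exactly the claimed differential $d\phi_{(B_1,B_2)}(D_1,D_2) = [B_1,D_2]+[D_1,B_2]$.</p>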
29,016
<p>Suppose I want to compute $f(1)\vee f(2) \vee \ldots \vee f(10^{10})$, but I know <em>a priori</em> that $f(n)$ is <code>True</code> for some $n \ll 10^{10}$ with high probability. For example, <code>f = PrimeQ</code>.</p> <p>One way to do this is to write: <code>Or[f/@Range[1,10^10]]</code>, but that would involve allocating memory for $10^{10}$ elements, as well as computing <code>f</code> unnecessarily. (I overcame the latter problem with using <code>Hold</code>, but the memory problem still stands).</p> <p><strong>Question:</strong> Is there a way to compute <code>Or[f[1], ..., f[10^10]]</code> without allocating memory for $10^{10}$ booleans?</p> <p>I've done some research: it seems like <a href="https://mathematica.stackexchange.com/questions/838/functional-style-using-lazy-lists/885#885">Functional style using lazy lists?</a> might work, but I'm wondering if there is a shorter solution -- one that does not involve defining streams?</p>
Jens
245
<p>It looks like you only need a looping construct that terminates when the first <code>True</code> is encountered with the <code>Or</code> operation. So how about this:</p> <pre><code>f = PrimeQ
(* ==&gt; PrimeQ *)

(* loop until the first n with f[n] == True; no list is ever built *)
found = False;
n = 0;
While[! found,
 n += 1;
 found = found || f[n]
];
n
(* ==&gt; 2 *)
</code></pre>
539,457
<p>Suppose we have a natural number $N$ with decimal representation $A_kA_{k-1}\ldots A_0$. How do I prove that if the $\sum\limits_{i=0}^kA_i$ is divisible by $9$ then $N$ is divisible by $9$ too?</p>
njguliyev
90,209
<p><em>Hint:</em> $\overline{A_kA_{k-1}\ldots A_1A_0} = 10^kA_k + 10^{k-1}A_{k-1} + \ldots + 10A_1 + A_0$.</p> <blockquote class="spoiler"> <p> $(10^kA_k + 10^{k-1}A_{k-1} + \ldots + 10A_1 + A_0) - (A_k + A_{k-1} + \ldots + A_1 + A_0) = (10^k-1)A_k + (10^{k-1}-1)A_{k-1} + \ldots + (10-1)A_1$ is divisible by $9$.</p> </blockquote>
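<p>The divisibility fact used in the spoiler comes down to a single congruence, recorded here for completeness:</p> <p>$$10\equiv 1 \pmod 9 \implies 10^k\equiv 1^k = 1 \pmod 9 \implies 9\mid 10^k-1 \text{ for every } k\ge 1,$$</p> <p>so each term $(10^k-1)A_k$ in the spoiler is divisible by $9$, and hence $N \equiv A_k + A_{k-1} + \ldots + A_0 \pmod 9$.</p>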
3,644,823
<p>Let <span class="math-container">$f$</span> be a non constant holomorphic function on and inside of the unit circle <span class="math-container">$C:=\{z\in \mathbb C~:~|z|=1\}$</span>. Suppose <span class="math-container">$|f(z)|=1$</span> on <span class="math-container">$C$</span>, then for <span class="math-container">$D=\{z\in \mathbb C~:~|z|\leq 1\}$</span>, prove that <span class="math-container">$f: \bar{D}\rightarrow \bar{D}$</span> is onto.</p> <p>Since <span class="math-container">$f$</span> is non constant, maximum modulus gives that <span class="math-container">$|f(z)| &lt;1$</span> on <span class="math-container">$D$</span>. But it is not enough to say that <span class="math-container">$f$</span> is onto.</p>
Community
-1
<p><span class="math-container">$f$</span> has at least one zero: this follows applying Rouche's theorem to <span class="math-container">$f(z);f(z)-f(z_0)$</span>, where <span class="math-container">$z_0\in \mathbb{D}$</span> is such that :<span class="math-container">$f(z_0)\neq 0$</span>. Indeed, on <span class="math-container">$\partial \mathbb{D}$</span> we have <span class="math-container">$$1=|f(z)|&gt;|f(z_0)|=|f(z)-(f(z)-f(z_0))|$$</span></p> <p>Since <span class="math-container">$f(z)-f(z_0)$</span> has at least one zero (in <span class="math-container">$z_0$</span>), by R.T. <span class="math-container">$f$</span> has at least one zero.</p> <p>For every <span class="math-container">$w:|w|&lt;1$</span>, <span class="math-container">$f(z)-w$</span> has at least one zero, as you can prove applying Rouche's theorem to <span class="math-container">$f(z);f(z)-w$</span>. Indeed, on <span class="math-container">$\partial \mathbb{D}$</span> we have</p> <p><span class="math-container">$$1=|f(z)|&gt;|w|=|f(z)-(f(z)-w)|$$</span></p> <p>Since <span class="math-container">$f$</span> has at least one zero, <span class="math-container">$f-w$</span> has at least one zero, i.e. <span class="math-container">$f^{-1}(w)\neq \emptyset$</span></p> <p>Now, for the last part, we need to prove that <span class="math-container">$f(\partial \mathbb{D})=\partial \mathbb{D}$</span>. Let <span class="math-container">$z_n$</span> be a sequence of points in the interior of the unit disc converging to a general point <span class="math-container">$z\in \partial \mathbb{D}$</span>, and let <span class="math-container">$w_n:w_n\in f^{-1}(z_n)\in \mathbb{D}$</span>. Since the closed unit disc is compact, we can extract a convergent subsequence <span class="math-container">$\hat{w}_n\to w\in \partial \mathbb{D}$</span>. By continuity <span class="math-container">$f(w)=z$</span>. Since <span class="math-container">$z$</span> was general we have proven the assertion</p>
17,270
<p>I just joined MathSE and it's beautiful here, except for the fact that some unregistered users ask a question and never come back. Most of the time these questions are trivial, though they still consume answerers' (valuable) time which never gets rewarded. I thought it was okay until I saw someone's profile with the following statistics: active $1$ year $7$ Months, $0$ Answers , $72$ Questions, $0$ accept votes. Yes, I agree that the answers are up-voted in this case but is it really okay to never accept any answers?</p>
Gerry Myerson
8,269
<p>No, it's not OK to post 72 questions, get useful answers, and not accept any of them. I would hope some moderator would take this user aside and explain a few things about how this site works best. </p>
225,351
<p>Consider a sine wave having <span class="math-container">$4$</span> cycles wrapped around a circle of radius 1 unit.</p> <p><span class="math-container">$$ y = \sin(4x) $$</span></p> <p>To find the equation of the sine wave with the circle acting as its axis, one approach is to consider the sine wave along a rotated line, but that does not suffice for a circular path. </p>
Phira
9,325
<p>Do it first for the circle centered at the origin in polar coordinates.</p> <p>Then switch to Cartesian coordinates, then shift to the actual center of the circle.</p>
225,351
<p>Consider a sine wave having <span class="math-container">$4$</span> cycles wrapped around a circle of radius 1 unit.</p> <p><span class="math-container">$$ y = \sin(4x) $$</span></p> <p>To find the equation of the sine wave with the circle acting as its axis, one approach is to consider the sine wave along a rotated line, but that does not suffice for a circular path. </p>
Max
164,373
<p>It should be, in Cartesian coordinates,</p> <p><span class="math-container">$$x(\theta) = (R + a \sin(n\theta))\cos\theta + x_c$$</span></p> <p><span class="math-container">$$y(\theta) = (R + a \sin(n\theta))\sin\theta + y_c$$</span></p> <p>where</p> <p><span class="math-container">$R$</span> is the circle's radius,</p> <p><span class="math-container">$a$</span> is the sinusoid's amplitude,</p> <p><span class="math-container">$\theta$</span> is the parameter (angle), running from <span class="math-container">$0$</span> to <span class="math-container">$2\pi$</span>,</p> <p><span class="math-container">$(x_c,y_c)$</span> is the circle's center point, and</p> <p><span class="math-container">$n$</span> is the number of sinusoid cycles on the circle.</p> <p>You can also get a pure Cartesian (non-parametric) equation in <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, but only for half of the circle, by solving the second equation for <span class="math-container">$\sin\theta$</span> and substituting it into the first.</p>
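<p>As a quick illustration, here is a minimal Wolfram Language sketch of these parametric equations for the case in the question (<span class="math-container">$R=1$</span>, <span class="math-container">$n=4$</span>, center at the origin); the amplitude <span class="math-container">$a=0.2$</span> is an arbitrary choice made here for display purposes:</p> <pre><code>(* 4 sine cycles of amplitude 0.2 wrapped around the unit circle *)
With[{R = 1, a = 0.2, n = 4, xc = 0, yc = 0},
 ParametricPlot[{(R + a Sin[n t]) Cos[t] + xc, (R + a Sin[n t]) Sin[t] + yc},
  {t, 0, 2 Pi}]]
</code></pre>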
164,746
<p>For example</p> <p><a href="https://i.stack.imgur.com/bFDdw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bFDdw.png" alt="enter image description here"></a></p> <p>But I want to strip those box edges except the bottom ones, like this</p> <p><a href="https://i.stack.imgur.com/DpzG1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DpzG1.png" alt="enter image description here"></a></p> <p>Note that there is still one redundant edge left, because erasing it would be quite ugly with my image-editing tool. Also, I need the "y" axis labeled as in the 2D plot case.</p> <p>I searched the documentation but didn't find a way to control the box edges in detail.</p>
José Antonio Díaz Navas
1,309
<p>Something like this (playing with <code>AxesEdge</code>)?:</p> <pre><code>Plot3D[Exp[-x^2 - y^2], {x, -2, 2}, {y, -2, 2}, Axes -&gt; {True, True, False}, Boxed -&gt; False, AxesLabel -&gt; {Style["x", 12, Bold], Style["y", 12, Bold], None}, ViewPoint -&gt; {0, -2, 2}, AxesEdge -&gt; {Automatic, {-1, -1}, None}] </code></pre> <p><a href="https://i.stack.imgur.com/bIfUb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bIfUb.png" alt="enter image description here"></a></p>
2,559,350
<p>Consider a circle centered at the origin with radius $r$. Let $C$ be a point on the circle, and let $A,B$ be its projections onto the $x$-axis and $y$-axis, respectively. What is the length of $AB$?</p> <p>I tried applying the laws of sines and cosines in triangle $ABC$, but I only got tautologies....</p>
Anders Beta
464,504
<p>If $|z-i|^4=1$ then $|z-i|=1$, which implies $z-i=e^{i\theta}$ for some angle $\theta$. This gives:</p> <p>$$ z=i+e^{i\theta} = \cos\theta +i(1+\sin\theta)$$</p>
132,879
<p>This question is motivated by the physical description of magnetic monopoles. I will give the motivation, but you can also jump to the last section.</p> <p>Let us recall Maxwell’s equations: Given a semi-riemannian 4-manifold and a 3-form $j$. We describe the field-strength differential form $F$ as a solution of the equations</p> <p>$\mathrm{d}F=0$</p> <p>$\mathrm{d}\star F=j$ (where $\star$ denotes the Hodge star).</p> <p>If the second de-Rham-cohomology vanishes (for example in Minkowski space), $F$ is exact and we can write it as $F=\mathrm{d}A$, where $A$ denotes a 1-form.</p> <p>Now let us consider monopoles: We use two 3-forms $j_m$ (magnetic current) and $j_e$ (electric current) and consider the equations</p> <p>$\mathrm{d}F=j_m$</p> <p>$\mathrm{d}\star F=j_e$.</p> <p>Essentially, it is described in <a href="http://arxiv.org/pdf/math-ph/0203043v1.pdf">this paper</a>, but the author Frédéric Moulin (a physicist) uses coordinates. Now he assumes that (in Minkowski space) $F$ can be decomposed using two potentials — into an exact (in the image of the derivative) and a coexact (in the image of the coderivative) form: $F=\mathrm{d}A-\star\mathrm{d}C$. Is there a mathematical justification for this assumption (maybe it is just very pragmatic)?</p> <h3>The actual question</h3> <p>Given a 2-form $F$ on 4-dimensional Minkowski space (more generally: semi-riemannian manifolds)—are there any known conditions such that $F$ decomposes into an exact and a coexact form: $F=\mathrm{d}A+\star\mathrm{d}C$)?</p> <p>For <em>compact riemannian</em> manifolds there is the well-known Hodge decomposition: There is always a decomposition into an exact, a coexact and a harmonic form. In the non-compact case you might be able to get rid of the harmonic form by only considering “rapidly decaying” forms (<a href="http://en.wikipedia.org/wiki/Helmholtz_decomposition#Differential_forms">Wikipedia suggests that</a>, but I do not have a good reference, in euclidean space there is the Helmholtz decomposition, and non-trivial (smooth) harmonic 1-forms do not vanish at infinity).</p> <p>That is why I also ask: Are there “rapidly decaying” harmonic 2-forms in Minkowski space? Any references where I could see what is known about harmonic forms and Hodge theory in the semi-riemannian case are also welcome.</p>
Willie Wong
3,948
<ol> <li><p>There are no "rapid decaying harmonic 2-forms" in Minkowski space. </p> <p>Consider the expression $$ 0 = \mathrm{d} \star \mathrm{d} F = \partial^i \partial_{[i}F_{jk]} = \frac13 (\Box F_{jk} + \partial_j \partial^i F_{ki} + \partial_k \partial^i F_{ij})$$ The second and third terms in the parentheses vanish by $\mathrm{d}\star F = 0$, so we see that over Minkowski space, Maxwell's equation implies that the components of the Faraday tensor must solve a linear wave equation. </p> <p>(In the case you have source, you also see that the equations are equivalent to $\Box F_{jk} = J_{jk}$ where $J_{jk}$ is built out of the magnetic and electric currents in the appropriate way.)</p> <p>Now, the linear decay estimates for the wave equation (that $|F| \sim \frac{1}{1+|t|}$) is sharp. In fact, by the finite speed of propagation + strong Huygens principle, if $F$ decays too rapidly in time, conservation of energy immediately implies that $F$ must vanish identically! </p> <p>(If you don't want to go through the wave equation, you can also argue through the Maxwell equation by considering the energy integrals.)</p></li> <li><p>Conservation of energy also implies that over Minkowski space $\mathbb{R}^{1,3}$ there cannot exist a non-vanishing "harmonic" form with finite space-time $L^2$ norm. So harmonic forms cannot exist, even in a class that is not necessary "rapidly" decaying. </p></li> <li><p>Now, under the influence of external sources $j_e$ and $j_m$, you can pose the ansatz (by linearity of the equations) $F = G + H$, where $$ \begin{align} \mathrm{d} G &amp; = j_e &amp; \mathrm{d} H &amp; = 0 \newline \mathrm{d} \star G &amp; = 0 &amp; \mathrm{d} \star H &amp;= j_m \end{align} $$ Then the cohomology result you mentioned implies that $H = \mathrm{d}A$ and $\star G = \mathrm{d}C$ for $A,C$. The difference of the true solution with your ansatz is $\tilde{F} - F$ a harmonic two form, which can be completely absorbed into the electric potential $A$ if you want...</p> <p>So the claimed decomposition is available whenever it is possible to solve the equations (possibly nonuniquely) for $G$ and $H$. For compatible source terms (you need $\mathrm{d} j_e =0 = \mathrm{d} j_m$ as a necessary condition) which are $L^1$ in space-time, the existence of a solution follows from the well-posedness of the corresponding Cauchy problem, for example. </p></li> <li><p>Much of "Hodge theory" is false for "compact semi-Riemannian manifolds" in general. For example, there is no good connection between the cohomology and the number of harmonic forms. (Simplest example: on any compact Riemannian manifold the only harmonic functions are constants. But take, for example, the torus $\mathbb{T}^2 = \mathbb{S}^1\times\mathbb{S}^1$ with the Lorentzian metric $\mathrm{d}t^2 - \mathrm{d}x^2$. Then for any $k$ the function $\sin (kx) \sin(kt)$ is "harmonic", and they are linearly independent over $L^2$. )</p></li> <li><p>Number 4 above also implies that the Hodge decomposition is not true for general compact semi-Riemannian manifolds. Consider the exact same $\mathbb{T}^2$ as above. Let $\alpha$ be the one form $\sin(t) \cos(x) \mathrm{d}t$. 
We have that $$ \mathrm{d}\alpha = \sin(t) \sin(x) \mathrm{d}t\wedge \mathrm{d}x \qquad \mathrm{d}\star \alpha = \cos(t) \cos(x) \mathrm{d}t \wedge \mathrm{d}x$$ To allow a Hodge decomposition requires that there exists $A$ and $C$, scalars, such that $$\mathrm{d}\star\mathrm{d}C = \mathrm{d}\alpha \implies \Box C = \pm \sin(t)\sin(x) $$ and $$ \mathrm{d}\star \mathrm{d}A = \mathrm{d}\star\alpha \implies \Box A = \pm \cos(t) \cos(x) $$ where $\Box$ is the wave operator $\partial_t^2 - \partial_x^2$. </p> <p>Now taking Fourier transform (since we would desire $C$ and $A$ be in $L^2$ at least), we see that we have a problem. If $f\in L^2(\mathbb{T}^2)$ we have $$ \widehat{\Box f} = -(\tau^2 - \xi^2)\widehat{f} = 0 \qquad\text{ whenever } |\tau| = |\xi| $$ But the Fourier support of $\sin(t)\sin(x)$ and $\cos(t)\cos(x)$ are precisely when $\tau = \xi = 1$. So there in fact does <em>not</em> exist a Hodge-like decomposition for the $C^\infty$ form $\alpha$. </p></li> </ol> <p>The above just goes to say that</p> <ol> <li>You are not going to find anything in the literature concerning Hodge decomposition for general compact semi-Riemannian manifolds, as such as theory does not exist</li> <li>You are also unlikely to find literature that studies the Hodge decomposition in the non-compact case in general: on non-compact manifolds causality violations (closed time-like curves, for example) can, in principle, also cause problems with solving the wave equation (both homogeneous and inhomogeneous), which will prevent an analogue for Hodge decomposition to hold. </li> <li>Your best bet, in so far as two-forms are concerned, are precisely the mathematical-physics literature concerning Maxwell equations over globally hyperbolic Lorentzian manifolds. For these manifolds, at least in principle, some version of the Hodge decomposition can be had by solving the appropriate initial value problems (something like $\Box C = j_e, \Box A = j_m, \mathrm{d}F_0 = 0 = \mathrm{d}\star F_0$ where $F_0$ is the harmonic part, with initial data prescribed so that the data for $C$ and $A$ vanishes on the initial surface and $F_0$ is equal to $F$ on the initial surface). </li> </ol>
2,938,372
<p>Seems to me like it is. There are only finitely many distinct powers of <span class="math-container">$x$</span> modulo <span class="math-container">$p$</span>, by Fermat's Little Theorem (they are <span class="math-container">$\{1, x, x^2, ..., x^{p-2}\}$</span>), and the coefficient that I choose for each of these powers can only be taken from <span class="math-container">$\{0,1,2,..., p-1\}$</span>. So essentially I'm choosing amongst <span class="math-container">$p$</span> things <span class="math-container">$p-1$</span> many times, resulting in at most <span class="math-container">$p^{p-1}$</span> distinct polynomials. </p> <p>Yet an assignment claims that <span class="math-container">$\mathbb{Z}_p[x]$</span> is infinite. </p>
Robert Lewis
67,071
<p><span class="math-container">$\Bbb Z_p[x]$</span> is indeed infinite, since it contains polynomials, with coefficients in <span class="math-container">$\Bbb Z_p$</span>, of arbitrarily high degree, e.g. <span class="math-container">$x^n \in \Bbb Z_p[x]$</span>, where <span class="math-container">$n \in \Bbb N$</span>; also, <span class="math-container">$x^n \ne x^m$</span> if <span class="math-container">$m \ne n$</span>.</p> <p>Fermat's Little Theorem, that <span class="math-container">$a^p = a$</span> for <span class="math-container">$a \in \Bbb Z_p$</span>, does <em>not</em> apply to the indeterminate <span class="math-container">$x \in \Bbb Z_p[x]$</span>. <span class="math-container">$x \in \Bbb Z_p[x]$</span> and <span class="math-container">$a \in \Bbb Z_p$</span> are definitely "birds of a different feather" in this regard.</p> <p>So though <span class="math-container">$\Bbb Z_p$</span> is finite, <span class="math-container">$\Bbb Z_p[x]$</span> is not.</p>
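<p>A standard way to make this polynomial-versus-function distinction concrete is the polynomial</p> <p><span class="math-container">$$x^p - x \in \Bbb Z_p[x],$$</span></p> <p>which is a nonzero polynomial of degree <span class="math-container">$p$</span>, even though by Fermat's Little Theorem it evaluates to <span class="math-container">$0$</span> at every element of <span class="math-container">$\Bbb Z_p$</span>: distinct polynomials can define the same function on a finite ring.</p>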
2,938,372
<p>Seems to me like it is. There are only finitely many distinct powers of <span class="math-container">$x$</span> modulo <span class="math-container">$p$</span>, by Fermat's Little Theorem (they are <span class="math-container">$\{1, x, x^2, ..., x^{p-2}\}$</span>), and the coefficient that I choose for each of these powers can only be taken from <span class="math-container">$\{0,1,2,..., p-1\}$</span>. So essentially I'm choosing amongst <span class="math-container">$p$</span> things <span class="math-container">$p-1$</span> many times, resulting in at most <span class="math-container">$p^{p-1}$</span> distinct polynomials. </p> <p>Yet an assignment claims that <span class="math-container">$\mathbb{Z}_p[x]$</span> is infinite. </p>
Mark Bennet
2,906
<p>You might have <span class="math-container">$x^2\equiv x \bmod 2$</span> for the two elements <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. So this holds within <span class="math-container">$\mathbb Z_2$</span>.</p> <p>Now note that <span class="math-container">$x^2+x+1$</span> has no roots in <span class="math-container">$\mathbb Z_2$</span>, so let's invent one and call it <span class="math-container">$\alpha$</span>, and there will be another root <span class="math-container">$\alpha+1$</span> because the sum of the roots is then <span class="math-container">$\alpha+(\alpha+1)\equiv 1\equiv -1 \bmod 2$</span>.</p> <p>This is the same process by which we invent/construct/discover/adjoin a root of the real polynomial <span class="math-container">$x^2+1$</span> and call it <span class="math-container">$i$</span>.</p> <p>We then find that <span class="math-container">$\alpha^2=\alpha+1$</span> (modulo <span class="math-container">$2$</span>) so that the two polynomials <span class="math-container">$x^2$</span> and <span class="math-container">$x$</span> are no longer the same function when we add the element <span class="math-container">$\alpha$</span> to the mix, even if we are still working modulo <span class="math-container">$2$</span>. This is a small example of something which becomes rather common when you are trying to analyse polynomials and their roots.</p> <p>So no, $x$ and $x^2$ are not formally the same modulo <span class="math-container">$2$</span>, as others have said, but importantly they behave differently in important algebraic contexts and it matters that we should distinguish them.</p>
2,822,126
<blockquote> <p>If $0⩽x⩽y⩽z⩽w⩽u$ and $x+y+z+w+u=1$, prove$$ xw+wz+zy+yu+ux⩽\frac15. $$</p> </blockquote> <p>I have tried using AM-GM, rearrangement, and Cauchy-Schwarz inequalities, but I always end up with squared terms. For example, applying AM-GM to each pair directly gives$$ x^2+y^2+z^2+w^2+u^2 ⩾ xw + wz + zy + yu + ux, $$ but I cannot seem to continue from here or use $x + y + z + w + u = 1$. Other inequalities like Chebyshev's rely on the multiplied pairs to be in order from least to greatest or vice versa, so I am stuck here.</p>
Rhys Hughes
487,658
<p>The case where $xw+wz+zy+yu+ux$ is maximal will be where $x=y=z=w=u$. Here, they must all be $\frac15$, and plugging into the equation gives $5(\frac15)^2=\frac15$. So the expression is $\le\frac15$.</p>
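<p>As a numerical cross-check of the claimed maximum (a heuristic search, not a proof), here is a minimal Wolfram Language sketch of the constrained optimization:</p> <pre><code>NMaximize[{x w + w z + z y + y u + u x,
  0 &lt;= x &lt;= y &lt;= z &lt;= w &lt;= u &amp;&amp; x + y + z + w + u == 1},
 {x, y, z, w, u}]
</code></pre> <p>which should report a maximum value of $0.2=\frac15$, attained at the all-equal point $x=y=z=w=u=\frac15$.</p>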
256,138
<p>I need to generate four positive random values in the range [.1, .6] with (at most) two significant digits to the right of the decimal, and which sum to exactly 1. Here are three attempts that do not work.</p> <pre><code>x = {.15, .35, .1, .4}; While[Total[x] != 1, x = Table[Round[RandomReal[{.1, .6}], .010], 4]]; x = {.25, .25, .25, .25}; While[Total[x] == 1, x = Table[Round[RandomReal[{.1, .6}], .010], 4]]; NestWhileList[Total[x], x = Table[Round[RandomReal[{.1, .6}], .010], 4], Plus @@ x == 1][[1]] </code></pre>
bobthechemist
7,167
<p>I don't think anyone has yet tried <code>FindInstance</code>:</p> <pre><code>FindInstance[{a + b + c + d == 100 &amp;&amp; 10 &lt;= a &lt;= 60 &amp;&amp; 10 &lt;= b &lt;= 60 &amp;&amp; 10 &lt;= c &lt;= 60 &amp;&amp; 10 &lt;= d &lt;= 60}, {a, b, c, d}, Integers, 2, RandomSeeding -&gt; Round@(10^6 RandomReal[])] </code></pre> <p>If only 1 result is requested, then the result seems to always be {60,20,10,10}. The RandomSeeding option is necessary to avoid repeats. The result then gets divided by 100 to get the sig figs, I think.</p> <p>Here we have 100 results, 97 of which were unique when I ran it:</p> <pre><code>results = 0.01 {a, b, c, d} /. FindInstance[{a + b + c + d == 100 &amp;&amp; 10 &lt;= a &lt;= 60 &amp;&amp; 10 &lt;= b &lt;= 60 &amp;&amp; 10 &lt;= c &lt;= 60 &amp;&amp; 10 &lt;= d &lt;= 60}, {a, b, c, d}, Integers, 100, RandomSeeding -&gt; Round@(10^6 RandomReal[])] Short[results] (* {{0.5, 0.21, 0.19, 0.1}, {0.18, 0.13, 0.11, 0.58}, {0.11, 0.54, 0.18, \ 0.17}, &lt;&lt;95&gt;&gt;, {0.17, 0.17, 0.11, 0.55}, {0.3, 0.13, 0.47, 0.1} *) Sort /@ results // Union // Length (* 97 *) Total /@ results // Union (* {1., 1., 1.} *) </code></pre> <p>The last result suggests to me that there might be some machine precision roundoff errors.</p>
3,441,438
<p>I was wondering if anyone could get me started on solving the following: <span class="math-container">$$(u')^2-u\cdot u''=0$$</span></p> <p>I have tried letting <span class="math-container">$v=u'$</span>, but I don't seem to make progress with such a substitution.</p>
user247327
247,327
<p>First, since a power of a number is <span class="math-container">$0$</span> exactly when the number itself is <span class="math-container">$0$</span>, any <span class="math-container">$x$</span> that is a root of <span class="math-container">$(x^4+ 7x^3+ 22x^2+ 31x+ 9)^2= 0$</span> is a root of <span class="math-container">$x^4+ 7x^3+ 22x^2+ 31x+ 9= 0$</span>. Further, since all coefficients are real numbers, given that <span class="math-container">$-2+ i\sqrt{5}$</span> is a root, its conjugate <span class="math-container">$-2- i\sqrt{5}$</span> is also a root. So <span class="math-container">$(x+ 2- i\sqrt{5})(x+ 2+ i\sqrt{5})= x^2+ 4x+ 4+ 5= x^2+ 4x+ 9$</span> is a factor. Dividing <span class="math-container">$x^4+ 7x^3+ 22x^2+ 31x+ 9$</span> by <span class="math-container">$x^2+ 4x+ 9$</span> gives you a quadratic equation you can solve for the other two roots.</p>
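<p>If you want to check the division step (and see the remaining quadratic) with a CAS, a minimal Wolfram Language sketch is:</p> <pre><code>PolynomialQuotientRemainder[x^4 + 7 x^3 + 22 x^2 + 31 x + 9, x^2 + 4 x + 9, x]
(* ==&gt; {1 + 3 x + x^2, 0} *)
</code></pre> <p>so the remaining factor is <span class="math-container">$x^2+3x+1$</span>, whose roots <span class="math-container">$x=\frac{-3\pm\sqrt{5}}{2}$</span> are the other two roots.</p>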
554,825
<p>Here's a question from my homework. First 2 questions I solved (but would appreciate any input you can give on my solution) and the last question I'm just completely stumped. It's quite complicated.</p> <p>On the shelf there are 5 math books, 3 science fiction books and 2 thrillers (<strong>all of the books are different</strong>)</p> <p>1) In how many different combinations can you organize the books on the shelf, without any limitation? **My answer: $10!$ since all the books are different, it's just organizing 10 books.</p> <p>2) In how many different combinations can you organize the books so that books of the same kind are next to each other? **My answer: $3!*5!*3!*2!$ - first imagine all the math books are just 1 block, all the sci fi books are 1 block, and that the 2 thrillers are 1 block. I have $3!$ ways to organize these blocks on the shelf. The $5!*3!*2!$ is due to the order of the books inside the block.</p> <p>3) How many different combinations are there to organize the books such that the sci fi books are together, and there is at least 1 book inbetween the thrillers? **My answer: I don't know. It's too complex. I thought maybe simplying it by saying at least 1 book in between = combinations with 1 book in between + combinations with 2 books etc but even that doesn't make it any simpler...</p> <p>Help? :)</p>
Pavel Čoupek
82,867
<p>The statement is not true, i.e. a non-irreducible element can be mapped to an irreducible element.</p> <p><strong>Hint:</strong> Think of an embedding $f$ of $R$ into some localization of $R$ (at some multiplicative set) and the fact that some element $y$ of the localization is usually a unit despite the fact that the elements of $f^{-1}(\{y\})$ are not. Such a $y$ can then be used to find a counterexample.</p> <p><strong>More detailed hint:</strong></p> <p>Ok, so I would still use the localization example, but in a concrete situation to make it clear. Let $f$ be the embedding of the ring $\mathbb{Z}$ into the following subring of $\mathbb{Q}$: $$ S=\{ \frac{a}{2^k} \; | \; a\in \mathbb{Z}, k \in \mathbb{N}_0 \}.$$</p> <p>Try to find the example in this situation. That is, try to find an integer which is a product of at least two primes, but is irreducible as an element of the ring $S$.</p>
239,387
<p>Could someone explain how to build the smallest field containing $\sqrt[3]{2}$?</p>
i. m. soloveichik
32,940
<p>If the base field is $F=Z_2$ then we already have the cube root of 2, which is 0. If $F=Z_3$ then $F$ already contains one cube root of 2, i.e. cube root of -1, namely -1; if we want the other cube roots of 2, then the field $F(2^{1/3})=F[2^{1/3}]=\{a+b\cdot 2^{1/3}\}$ has 9 elements.</p>
112,394
<p>Could someone clarify why the first of these <code>MatchQ</code> finds a match whereas the second does not? (I'm using version 10.0, in case that matters.)</p> <pre><code>MatchQ[Hold[x + 2 y], Hold[x + 2 _]] (*True*) MatchQ[Hold[x + 2 y + 0], Hold[x + 2 _ + 0]] (*False*) </code></pre> <p>EDIT: The conclusion below is that when encountering an <code>Orderless</code> function (here <code>Plus</code>), the pattern matcher does not test for all possible orderings of arguments but rather sorts the pattern's constant arguments and allows blanks <code>_</code> to appear in arbitrary places within that list. This reordering disregards <code>Hold</code>. On the other hand <code>Hold</code> does keep the order of the arguments of the expression fixed, so <code>Hold[x+2y+0]</code> is compared (with fixed argument order) to <code>Hold[0+x+2_]</code>, <code>Hold[0+2_+x]</code> and <code>Hold[2_+0+x]</code>, none of which match.</p>
Alexey Popkov
280
<h2>UPDATE</h2> <p><a href="https://mathematica.stackexchange.com/q/94432/280">That question</a> is in essence an exact duplicate of this one. The <a href="https://mathematica.stackexchange.com/a/94569/280">explanation</a> given by Mr.Wizard means that the pattern-matcher is NOT capable of handling situations where an unevaluated function with the <code>Orderless</code> attribute is wrapped in <code>Hold</code>. So this is indeed a gedanken functionality.</p> <p>The pattern-matcher works on the basis of the assumption that the <code>Orderless</code> attribute is already applied and the arguments are sorted in the canonical order:</p> <pre><code>ClearAll[o] SetAttributes[o, Orderless] MatchQ[Hold[o[y, x, a]], Hold[o[_, x, a]]] MatchQ[Hold[Evaluate@o[y, x, a]], Hold[o[_, x, a]]] </code></pre> <blockquote> <pre><code>False True </code></pre> </blockquote> <p>In light of this, the Documentation <a href="http://reference.wolfram.com/language/ref/Orderless.html" rel="nofollow noreferrer">statement</a></p> <blockquote> <p>In matching patterns with <code>Orderless</code> functions, all possible orders of arguments are tried.</p> </blockquote> <p>means that actually only all possible <em>positions</em> of the <code>Blank</code>s in the pattern are tried, while the other arguments <em>in the pattern</em> are first sorted into the fixed canonical order. This is evident from the following:</p> <pre><code>MatchQ[Hold[o[y, a, x]], Hold[o[_, x, a]]] MatchQ[Hold[o[y, x, a]], Hold[o[_, x, a]]] MatchQ[Hold[o[y, a, x]], Hold[o[x, _, a]]] MatchQ[Hold[o[y, a, x]], Hold[o[x, a, _]]] </code></pre> <blockquote> <pre><code>True False True True </code></pre> </blockquote> <p>In other words, <code>Hold</code> prevents applying the <code>Orderless</code> attribute in the sense that it prevents sorting of the arguments into the canonical order, but it does not prevent the action of this attribute in the pattern (i.e. all possible positions of the <code>Blank</code>s are indeed tried).</p> <hr> <h2>Original answer</h2> <p>This behavior definitely comes from the <code>Orderless</code> attribute of <code>Plus</code> and reproduces for any function having this attribute. As can be seen from the following, without <code>Hold</code> the pattern matches regardless of the order of the arguments, due to the <code>Orderless</code> attribute of <code>o</code>:</p> <pre><code>ClearAll[o] SetAttributes[o, Orderless] MatchQ[o[y, x, a], o[_, x, a]] MatchQ[o[y, a, x], o[_, a, x]] MatchQ[o[y, x, a], o[_, a, x]] </code></pre> <blockquote> <pre><code>True True True </code></pre> </blockquote> <p>But when we wrap the expressions in <code>Hold</code>, something breaks: now it does not match even when the order of the arguments is identical:</p> <pre><code>MatchQ[Hold[o[y, x, a]], Hold[o[_, x, a]]] </code></pre> <blockquote> <pre><code>False </code></pre> </blockquote> <p>Surprisingly, it does match if we reorder the arguments in both expressions:</p> <pre><code>MatchQ[Hold[o[y, a, x]], Hold[o[_, a, x]]] </code></pre> <blockquote> <pre><code>True </code></pre> </blockquote> <p>I suspect that we have a bug in the pattern matcher because this behavior cannot be explained by the documented meaning of the <code>Orderless</code> attribute.</p>
4,014,756
<p>I was reading the book &quot;Quantum Computing Since Democritus&quot;.</p> <blockquote> <p>&quot;The set of ordinal numbers has the important property of being well ordered, which means that every subset has a minimum element. This is unlike the integers or the positive real numbers, where any element has another that comes before it.&quot;</p> </blockquote> <p>Unlike integers? Let's consider the set <span class="math-container">$\{1,2,3\}$</span>. This has a minimum element.</p> <p>Do you get what the author wants to say here?</p>
Brian M. Scott
12,042
<p>For each <span class="math-container">$n\in\Bbb N$</span> let <span class="math-container">$C_n=[0,4]$</span>, and let <span class="math-container">$C=\prod_{n\in\Bbb N}C_n$</span>. Then</p> <p><span class="math-container">$$\Bbb R^\infty\setminus C\ne\prod_{n\in\Bbb N}(\Bbb R\setminus C_n)\,,$$</span></p> <p>so the fact that <span class="math-container">$\prod_{n\in\Bbb N}(\Bbb R\setminus C_n)$</span> is not open tells you nothing about whether <span class="math-container">$C$</span> is closed or not.</p> <p>Suppose that <span class="math-container">$x=\langle x_n:n\in\Bbb N\rangle\in\Bbb R^\infty$</span>; when is <span class="math-container">$x$</span> <strong>not</strong> in <span class="math-container">$C$</span>? In order for <span class="math-container">$x$</span> to be in <span class="math-container">$C$</span>, we must have <span class="math-container">$x_n\in[0,4]$</span> for <strong>every</strong> <span class="math-container">$n\in\Bbb N$</span>. Thus, <span class="math-container">$x\in\Bbb R^\infty\setminus C$</span> exactly when there is <strong>at least one</strong> <span class="math-container">$n\in\Bbb N$</span> such that <span class="math-container">$x_n\notin[0,4]$</span>. If for each <span class="math-container">$k\in\Bbb N$</span> we set</p> <p><span class="math-container">$$U_k=\left\{\langle x_n:n\in\Bbb N\rangle\in\Bbb R^\infty:x_k\notin[0,4]\right\}\,,$$</span></p> <p>then <span class="math-container">$\Bbb R^\infty\setminus C=\bigcup_{k\in\Bbb N}U_k$</span>. If each of the sets <span class="math-container">$U_k$</span> is open, then so is their union, and therefore <span class="math-container">$C$</span> is closed.</p> <p>And in fact each <span class="math-container">$U_k$</span> is even a <strong>basic</strong> open set in the product topology on <span class="math-container">$\Bbb R^\infty$</span>; can you see how to write <span class="math-container">$U_k$</span> in the form <span class="math-container">$\prod_{n\in\Bbb N}V_n$</span>, where each <span class="math-container">$V_n$</span> is open in <span class="math-container">$\Bbb R$</span>, and all but finitely many of the <span class="math-container">$V_n$</span> are equal to <span class="math-container">$\Bbb R$</span>?</p>
2,602,410
<p>$$\int_{0}^{\pi /2} \frac{\sin^{m}(x)}{\sin^{m}(x)+\cos^{m}(x)}\, dx$$</p> <p>I've tried dividing by $\cos^{m}(x) $, and subbing out the $\ 1+\cot^{m}(x) $ with $\csc^{n}(x) $ for some $n$, but to no avail. I've also tried adding and subtracting $\cos^{m}(x)$ to the numerator, and substituting $x$ by $\pi-y$, but these techniques haven't helped either.</p>
user284331
284,331
<p>Let $I=\displaystyle\int_{0}^{\pi/2}\dfrac{\sin^{m}x}{\sin^{m}x+\cos^{m}x}dx$, by letting $y=\pi/2-x$, then $I=\displaystyle\int_{0}^{\pi/2}\dfrac{\cos^{m}y}{\sin^{m}y+\cos^{m}y}dy$, so $2I=\displaystyle\int_{0}^{\pi/2}\dfrac{\sin^{m}x+\cos^{m}x}{\sin^{m}x+\cos^{m}x}dx=\pi/2$, so $I=\pi/4$.</p>
2,830,969
<p>I did not find an example where the denominator $x$ approaches $0$.</p> <p>$f(0) + f'(0)x$ does not work because $f(0)$ would be $+\infty$. </p>
Community
-1
<p>There is no answer to your question as the limit does not exist. This can be proven as follows:</p> <ol> <li>First take the left-hand limit; in this case it is equal to negative infinity.</li> <li>Then take the right-hand limit, which equals positive infinity.</li> <li>Since the left-hand limit is not equal to the right-hand limit, the limit does not exist.</li> </ol> <p>I hope you got your answer. </p>
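<p>As an illustration with the prototypical example $f(x)=1/x$ (the question does not pin down the exact function): $$\lim_{x\to 0^-}\frac1x=-\infty,\qquad \lim_{x\to 0^+}\frac1x=+\infty,$$ so the two one-sided limits disagree and the two-sided limit does not exist.</p>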
382,603
<p>Let $F$ be a finite field. Prove that the following are equivalent:</p> <p>i) $A \subset B$ or $B \subset A$ for each two subgroups $A,B$ of $F^*$.</p> <p>ii) $\#F^*$ equals 2, 9, a Fermat-prime or $\#F^* -1$ equals a Mersenne prime.</p> <p>Any ideas for i => ii ? I don't know where to start, except for remarking that $\#F^*$ is cyclic. Thanks.</p>
Jyrki Lahtonen
11,619
<p>Hints/suggestions:</p> <p>Let $F=GF(q)$ with $q=p^m$. For $F^*$ to have property i) it is necessary and sufficient that $q-1=\ell^n$ for some prime $\ell$ and non-negative integer $n$.</p> <ol> <li>If $p&gt;2$, then $q-1$ is an even integer, so we must have $\ell=2$. Also, all factors of $q-1$ must be powers of two. In particular, $p-1$ and $p^{m-1}+p^{m-2}+\cdots+p+1$ must both be powers of two. The first implies that $p$ is a Fermat prime, and the second implies that $m$ must be an even integer or equal to one. The case $m=1$ is already covered. The case $m&gt;1, 2\mid m$ implies that $(p^2-1)\mid (p^m-1)$. As $p^2-1=(p-1)(p+1)$ both $p-1$ and $p+1$ must be powers of two. The only Fermat prime for which that happens is $p=3$. A higher power of three can be eliminated by repeating this argument, and showing that we must have $4\mid m$ and consequently also $p^2+1$ should be a factor of $p^m-1$ and hence also a power of two.</li> <li>If $p=2$, then the Mersenne prime cases obviously work, and the goal is to show that nothing else works. Well, we have $$ 2^m-1=\ell^n, $$ so $2^m=\ell^n+1$. If $n$ is even, then $\ell^n+1\equiv 2\pmod4$, which cannot be a power of two. Therefore $n$ is odd, and $\ell+1$ is a factor of the r.h.s. and hence also a power of two. So $\ell$ is a Mersenne prime, say $\ell=2^r-1$ for some prime $r$. If $n=2k+1$ is odd, then $$ \ell^n=(2^r-1)^n=(2^r-1)((2^r-1)^2)^k\equiv(2^r-1)\pmod{2^{r+1}}, $$ so $\ell^n+1\equiv 2^r\pmod{2^{r+1}}$ cannot be a power of two unless $n=1$.</li> </ol> <p>A lot of details are left for you to check. </p>
1,873,648
<p>Let $A=\{1,2,3,...,2^n\}$. Consider the greatest odd factor (not necessarily prime) of each element of A and add them. What does this sum equal? </p>
robjohn
13,854
<p>The sum of the first $m$ odd numbers is $1+3+5+7+\cdots+(2m-1)=m^2$.</p> <p>Excluding $2^n$, we sum the first $2^{n-1}$ odd numbers, then the first $2^{n-2}$ odd numbers under the guise of their doubles, then the first $2^{n-3}$ odd numbers under the guise of their quadruples, etc.</p> <blockquote> <p>For example, when $n=3$ $$ \color{#0000F0}{1}+\color{#00A000}{2}+\color{#0000F0}{3}+\color{#C00000}{4}+\color{#0000F0}{5}+\color{#00A000}{6}+\color{#0000F0}{7}+8\\ \color{#0000F0}{16}+\color{#00A000}{4}+\color{#C00000}{1}+1=22 $$ The first $2^{n-1}$ odd numbers are in blue, the doubles of the first $2^{n-2}$ odd numbers are in green, the quadruples of the first $2^{n-3}$ odd numbers are in red. This coloring excludes $2^n$.</p> <p>For example, when $n=4$ $$ \color{#0000F0}{1}+\color{#00A000}{2}+\color{#0000F0}{3}+\color{#C00000}{4}+\color{#0000F0}{5}+\color{#00A000}{6}+\color{#0000F0}{7}+\color{#F08030}{8}+\color{#0000F0}{9}+\color{#00A000}{10}+\color{#0000F0}{11}+\color{#C00000}{12}+\color{#0000F0}{13}+\color{#00A000}{14}+\color{#0000F0}{15}+16\\ \color{#0000F0}{64}+\color{#00A000}{16}+\color{#C00000}{4}+\color{#F08030}{1}+1=86 $$ The first $2^{n-1}$ odd numbers are in blue, the doubles of the first $2^{n-2}$ odd numbers are in green, the quadruples of the first $2^{n-3}$ odd numbers are in red, the octuples of the first $2^{n-4}$ odd numbers are in orange. This coloring excludes $2^n$.</p> </blockquote> <p>Excluding the term for $2^n$, we get $$ \left(2^{n-1}\right)^2+\left(2^{n-2}\right)^2+\left(2^{n-3}\right)^2+\dots+1=\frac{4^n-1}{4-1} $$ Including the term for $2^n$, which adds $1$, the final sum is $$ \frac{4^n-1}{3}+1=\frac{4^n+2}{3} $$</p>
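<p>As a quick sanity check against the worked examples above: $$\frac{4^3+2}{3}=\frac{66}{3}=22,\qquad\frac{4^4+2}{3}=\frac{258}{3}=86,$$ which agree with the sums computed for $n=3$ and $n=4$.</p>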
3,363,810
<p>I'm not sure how to go about proving this. I just started learning about it and would appreciate some help.</p>
fleablood
280,126
<p>Claim 1: <span class="math-container">$X\cap Y \subset X$</span>.</p> <p>Proof: If <span class="math-container">$x \in X\cap Y$</span> then <span class="math-container">$x \in X$</span> and <span class="math-container">$x \in Y$</span>. So <span class="math-container">$x \in X$</span>.</p> <p>Claim 2: If <span class="math-container">$A \subset X$</span> then <span class="math-container">$X = X\cup A$</span>.</p> <p>Pf: a: if <span class="math-container">$x \in X$</span> then <span class="math-container">$x\in X$</span>... so <span class="math-container">$x \in X$</span> or <span class="math-container">$x \in A$</span> is true. So <span class="math-container">$x \in X\cup A$</span>.</p> <p>And so <span class="math-container">$X\subset X\cup A$</span>.</p> <p>If <span class="math-container">$x \in X\cup A$</span> then either <span class="math-container">$x \in X$</span> or <span class="math-container">$x \in A$</span>. If <span class="math-container">$x \in A$</span> then <span class="math-container">$x \in X$</span> because <span class="math-container">$A\subset X$</span>. On the other hand if <span class="math-container">$x \in X$</span> then <span class="math-container">$x\in X$</span>. So either way <span class="math-container">$x \in X$</span>.</p> <p>So <span class="math-container">$X\cup A \subset X$</span>.</p> <p>So <span class="math-container">$X= X\cup A$</span>.</p> <p>So Claims 1 and 2 together prove your result:</p> <p>....</p> <p>Or directly. If <span class="math-container">$x \in X$</span> then <span class="math-container">$x\in X$</span> so <span class="math-container">$x \in X$</span> or <span class="math-container">$x \in X\cap Y$</span> is true. So <span class="math-container">$X \subset X\cup (X\cap Y)$</span>.</p> <p>If <span class="math-container">$x \in X\cup (X\cap Y)$</span> then either <span class="math-container">$x \in X$</span> or <span class="math-container">$x \in (X\cap Y)$</span>. If <span class="math-container">$x \in X$</span> ... then <span class="math-container">$x \in X$</span>. If however <span class="math-container">$x \in X\cap Y$</span> then <span class="math-container">$x \in X$</span> and <span class="math-container">$x \in Y$</span>. So <span class="math-container">$x \in X$</span>. Either way <span class="math-container">$x \in X$</span> so <span class="math-container">$X\cup(X\cap Y) \subset X$</span>.</p> <p>....</p>
852,404
<p>How to prove that there exist an infinite number of primes $n$ for which $n^2=p+8$ for some prime $p$?</p> <p>Verification of the form $n^2=p+8$, where $n$ and $p$ are both prime:</p> <p>$$\begin{array}{|c|c|} \hline n &amp; n^2 = 8 + p \\ \hline 11 &amp; 121 = 8 + 113 \\ 23 &amp; 529 = 8 + 521 \\ 31 &amp; 961 = 8 + 953 \\ 37 &amp; 1369= 8 + 1361 \\ \hline \end{array} $$</p>
miket
34,309
<p>Your question is equivalent to: there are infinitely many $m$ such that </p> <p>$2n^2 + 2(n - 2) = m,\ $where $2n + 1,\ 2m+1$ are prime. </p> <p>For example: $n=3, \ 2n^2 + 2(n - 2) = 20,\ (2 \cdot 3+1)^2=7^2=49=2 \cdot 20+1+8=41+8,\ $ where $7,41$ are prime. It seems this is an elementary question.</p>
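<p>Spelling out the rewriting behind this equivalence: $$(2n+1)^2=4n^2+4n+1=2\bigl(2n^2+2(n-2)\bigr)+1+8=2m+1+8,$$ so $(2n+1)^2=p+8$ holds with $p=2m+1$, where $m=2n^2+2(n-2)$.</p>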
1,905,186
<blockquote> <p>Let <span class="math-container">$R$</span> be a commutative Noetherian ring (with unity), and let <span class="math-container">$I$</span> be an ideal of <span class="math-container">$R$</span> such that <span class="math-container">$R/I \cong R$</span>. Then is it true that <span class="math-container">$I=(0)$</span> ?</p> </blockquote> <p>I know that a surjective ring endomorphism of a Noetherian ring is also injective, and since there is a natural surjection from <span class="math-container">$R$</span> onto <span class="math-container">$R/I$</span> we get a surjection from <span class="math-container">$R$</span> onto <span class="math-container">$R$</span>, but the problem is I can not determine the map explicitly and I am not sure about the statement. Please help. Thanks in advance.</p>
Oliver Kayende
704,766
<p>Assume <span class="math-container">$A\approx A/a$</span>. Of those <span class="math-container">$A$</span> ideals <span class="math-container">$a'$</span> satisfying <span class="math-container">$A\approx A/a'$</span> choose <span class="math-container">$b$</span> maximal with respect to set inclusion. Consequently we may choose a non-trivial proper ideal <span class="math-container">$C$</span> of the quotient ring <span class="math-container">$A/b$</span> such that <span class="math-container">$A\approx (A/b)/C$</span> but then <span class="math-container">$$A\approx (A/b)/C\approx A/(b+c)$$</span> where <span class="math-container">$c:=\{x\in A:x+b\in C\}$</span> is the pullback <span class="math-container">$A$</span> ideal of <span class="math-container">$C$</span>, which then violates the maximality of <span class="math-container">$b$</span> because <span class="math-container">$C$</span> is non-trivial and thus <span class="math-container">$b+c$</span> would properly contain <span class="math-container">$b$</span>. </p>
496,178
<p>Let $t$ be a positive real number. Differentiate the function</p> <blockquote> <p>$$g(x)=t^x x^t.$$</p> </blockquote> <p>Your answer should be an expression in $x$ and $t$.</p> <p>I came up with the answer </p> <blockquote> <p>$$(x/t)+(t/x)\ln(t^x)(x^t)=\ln(t^x)+\ln(x^t)=x\ln t+t\ln x .$$</p> </blockquote> <p>and the derivative of that is $(x/t)+(t/x)$. I'm not sure if I've done it right.</p>
triple_sec
87,778
<p>Using the product rule for differentiation, $$g'(x)=(t^x)'(x^t)+(t^x)(x^t)'=(t^x)(\ln t)(x^t)+(t^x)(t)(x^{t-1})=(t^x)(\ln t)(x^t)+t^{x+1}x^{t-1}.$$</p>
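<p>A quick sanity check at $t=1$: then $g(x)=1^x\,x^1=x$, so $g'(x)=1$, and the formula indeed gives $(1^x)(\ln 1)(x^1)+1^{x+1}x^{0}=0+1=1$.</p>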
3,740,647
<p>I want to load an external Magma file within another Magma file. (Both files are saved in the same directory.) I want to be able to quickly change which external file is being loaded, ideally at the beginning of the file making the load call, so that I can easily run the same code with various inputs.</p> <p>(The external file contains computations, whose ultimate result is used by the file making the load call. These computations vary depending on the object being analyzed.)</p> <p>I tried creating a string-type variable that stores the external file's name, then using Magma's <code>load</code> command with this variable. For example,</p> <pre><code>fileName := &quot;externalMagmaFile.txt&quot;; load fileName; </code></pre> <p>However, this results in the error</p> <blockquote> <p>User error: Could not open file &quot;fileName&quot; (No such file or directory)</p> </blockquote> <p>The same error results when I include double quotes around the external file name:</p> <pre><code>fileName := &quot;\&quot;externalMagmaFile.txt\&quot;&quot;; load fileName; </code></pre> <p>It seems that, for the <code>load</code> command, Magma interprets the variable name as the string specifying the file name, instead of first evaluating the variable, then executing <code>load</code>.</p> <p>(I am using Magma V2.23-1 on MacOS Version 10.15.5.)</p> <p>Can I use a variable with the <code>load</code> command in Magma? If yes, how?</p>
kera
871,352
<p>Another solution (which is also a bit of a work around) is to set the Magma path based on which file you want to load using the SetPath() function. The Magma path indicates which directories are searched for loading Magma files.</p> <p>To clarify, if you have 2 different files by the same name in separate directories (or can set it up so that you do) and want to choose which one to load, you can set filepath := &quot;directory1&quot; or filepath := &quot;directory2&quot; and then SetPath(filepath) before loading the file.</p> <p>If you are concerned about changing the path, you can use GetPath() to save off the previous one and restore afterwards. Or add the new path to the end of the current path before calling SetPath().</p> <p>Side note: I would have added this as a comment rather than an answer since it is a workaround and it is not the ideal solution for every scenario. But adding comments requires reputation, and I came upon this question because I was in a similar situation and this answer would have helped me. In my case I wanted to load several files from the same path (where the path is a variable determined at runtime) so this worked better for me than trying to do a dummyloadfile for each. Also, sadly, the true answer to your question (afaik) is that there is no direct way to load a file using a variable name.</p>
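<p>A minimal sketch of how this could look (the directory names below are just placeholders for your own two input directories; the file name after <code>load</code> remains a literal string):</p> <pre><code>/* Switch which copy of the file gets loaded by switching the search path. */
filepath := "/path/to/inputsA";      // placeholder: directory with one version
// filepath := "/path/to/inputsB";   // placeholder: uncomment to use the other version
oldpath := GetPath();                // remember the current search path
SetPath(filepath);                   // search the chosen directory when loading
load "externalMagmaFile.txt";        // literal file name, as load requires
SetPath(oldpath);                    // restore the original search path
</code></pre>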
2,861,362
<p>I have the following question: - We proved that if $T$ is 1-1 and $\{v_1...v_n\}$ is linearly independent then $\{T(v_1)...T(v_n)\}$ is linearly independent! I understood the proof! But can’t $\{T(v_1)...T(v_n)\}$ be linearly independent without having $T$ 1-1? The image I uploaded shows my work on proving that the set of the images is linearly independent without having $T$ 1-1. Can someone help me? <a href="https://i.stack.imgur.com/IRqZV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IRqZV.jpg" alt="enter image description here"></a></p>
Community
-1
<p>If you are allowed to use the definition of the Euler constant,</p> <p>$$a_n=H_{2n}-\frac12H_n-\log2n+\frac12\log n+\log2.$$</p> <p>The first difference cancels out all even harmonic terms and the remaining terms account for $\log\sqrt n$.</p> <p>Then</p> <p>$$\lim_{n\to\infty}a_n=\gamma-\frac12\gamma+\log2.$$</p>
3,460,749
<p>I'm an undergraduate student currently studying mathematical analysis. </p> <p>Our professor uses Zorich's Mathematical Analysis, but I found the text too difficult to understand. </p> <p>After exploring some textbooks, I found that Abbott was easier to follow, so I studied Abbott until I realized that there's a significant amount of content in Zorich that Abbott doesn't cover.</p> <p>So I was wondering if there's a book out there that covers as much content as Zorich but is more readable?</p> <p>Thank you for any help.</p>
dxb
669,798
<p>You may find Rudin's analysis texts <em>Principles of Mathematical Analysis</em> and <em>Real and Complex Analysis</em> to be useful, although various analysis textbooks will cover slightly different material.</p> <p>A discussion related to Zorich/Rudin/Abbott can be found <a href="https://math.stackexchange.com/q/1003791/669798">here</a>.</p>
885,478
<p>Let $f$ be a function defined on an interval $I$ differentiable at a point $x_o$ in the interior of $I$.</p> <p>Prove that if $\exists a&gt;0$ $ \ [x_o -a, x_o+a] \subset I$ and $ \ \forall x \in [x_o -a, x_o+a] \ \ f(x) \leq f(x_o)$, then $f'(x_o)=0$.</p> <p>I did it as follows:</p> <p>Let b>0.</p> <p>Since $f$ is differentiable at $x_o$, $$ \exists a_o&gt;0 \ \ \text{s.t} \ \ \forall x \in I \ \ \ \ \ 0&lt;|x-x_o|&lt;a_o \implies \left| \frac{f(x)-f(x_o)}{x-x_0} - f'(x_o)\right| &lt;b$$ Let $x_1 \in (x_o,x_o+a) \forall x \in I; f(x_1) \leq f(x_o)$ $$ \left| \frac{f(x_1)-f(x_o)}{x_1-x_0} - f'(x_o)\right| &lt;b \\ -b &lt; f'(x_o)-\frac{f(x_1)-f(x_o)}{x_1-x_0} &lt;b \\ f'(x_o) &lt; b+ \frac{f(x_1)-f(x_o)}{x_1-x_0} &lt; b$$ $$f'(x_o) &lt; b \tag{1} $$ Similarly Let $x_2 \in (x_o-a,x_o) \forall x \in I; f(x_2) \leq f(x_o)$</p> <p>$$ \left| \frac{f(x_2)-f(x_o)}{x_2-x_0} - f'(x_o)\right| &lt;b \\ -b &lt; \frac{f(x_2)-f(x_o)}{x_2-x_0} - f'(x_o) &lt;b \\ -b&lt; -b + \frac{f(x_2)-f(x_o)}{x_2-x_0} &lt; f'(x_o)$$ $$-b&lt;f'(x_o) \tag{2} $$</p> <p>From $(1)$ and $(2)$, $$ -b &lt; f'(x_o) &lt;b \\ |f'(x_o)|&lt;b $$</p> <p>I'm stuck here, how can I go to $f'(x_o)=0$ from here?</p> <p>Any help?</p>
idm
167,226
<p>You proved that $$\forall b&gt;0, |f'(x_0)|&lt;b$$ and this means that $f'(x_0)=0$: indeed, if $f'(x_0)\neq 0$, then taking $b=|f'(x_0)|$ would give the contradiction $|f'(x_0)|&lt;|f'(x_0)|$. So your proof is already finished.</p>
885,478
<p>Let $f$ be a function defined on an interval $I$ differentiable at a point $x_o$ in the interior of $I$.</p> <p>Prove that if $\exists a&gt;0$ $ \ [x_o -a, x_o+a] \subset I$ and $ \ \forall x \in [x_o -a, x_o+a] \ \ f(x) \leq f(x_o)$, then $f'(x_o)=0$.</p> <p>I did it as follows:</p> <p>Let b>0.</p> <p>Since $f$ is differentiable at $x_o$, $$ \exists a_o&gt;0 \ \ \text{s.t} \ \ \forall x \in I \ \ \ \ \ 0&lt;|x-x_o|&lt;a_o \implies \left| \frac{f(x)-f(x_o)}{x-x_0} - f'(x_o)\right| &lt;b$$ Let $x_1 \in (x_o,x_o+a) \forall x \in I; f(x_1) \leq f(x_o)$ $$ \left| \frac{f(x_1)-f(x_o)}{x_1-x_0} - f'(x_o)\right| &lt;b \\ -b &lt; f'(x_o)-\frac{f(x_1)-f(x_o)}{x_1-x_0} &lt;b \\ f'(x_o) &lt; b+ \frac{f(x_1)-f(x_o)}{x_1-x_0} &lt; b$$ $$f'(x_o) &lt; b \tag{1} $$ Similarly Let $x_2 \in (x_o-a,x_o) \forall x \in I; f(x_2) \leq f(x_o)$</p> <p>$$ \left| \frac{f(x_2)-f(x_o)}{x_2-x_0} - f'(x_o)\right| &lt;b \\ -b &lt; \frac{f(x_2)-f(x_o)}{x_2-x_0} - f'(x_o) &lt;b \\ -b&lt; -b + \frac{f(x_2)-f(x_o)}{x_2-x_0} &lt; f'(x_o)$$ $$-b&lt;f'(x_o) \tag{2} $$</p> <p>From $(1)$ and $(2)$, $$ -b &lt; f'(x_o) &lt;b \\ |f'(x_o)|&lt;b $$</p> <p>I'm stuck here, how can I go to $f'(x_o)=0$ from here?</p> <p>Any help?</p>
drhab
75,923
<p>As an answer to your question, see my comment.</p> <p>A more elegant way of working is:</p> <p>Define $g\left(x\right):=f\left(x+x_{0}\right)-f\left(x_{0}\right)$ and $J:=I-x_{0}$.</p> <p>Then $g$ is a function defined on the interval $J$, differentiable at $0\in\left[-a,a\right]\subset J$, with $g\left(0\right)=0$ and $g\left(x\right)\leq0$ for each $x\in\left[-a,a\right]$.</p> <p>Since $g$ is differentiable at $0$ and $g\left(0\right)=0$, both limits $\lim_{x\rightarrow0+}\frac{g\left(x\right)}{x}$ and $\lim_{x\rightarrow0-}\frac{g\left(x\right)}{x}$ exist and coincide.</p> <p>Here $\frac{g\left(x\right)}{x}\geq0$ for $x\in\left[-a,0\right)$ implying that $\lim_{x\rightarrow0-}\frac{g\left(x\right)}{x}\geq0$ and $\frac{g\left(x\right)}{x}\leq0$ for $x\in\left(0,a\right]$ implying that $\lim_{x\rightarrow0+}\frac{g\left(x\right)}{x}\leq0$. </p> <p>So the limits can only coincide if both equal $0$. </p> <p>This proves that $g'\left(0\right)=0$ or equivalently that $f'\left(x_{0}\right)=0$.</p>
4,000,459
<p>How can we mathematically precisely argue that <span class="math-container">$$\lim_{n \to \infty} \left(1-\frac xn+\frac{x}{n^2}\right)^n = e^{-x}$$</span> holds?</p> <p>So how can we bring</p> <p><span class="math-container">$$1-\frac xn+\frac{x}{n^2} = 1- \frac{(n+1)x}{n^2} \approx 1 - \frac xn $$</span> and <span class="math-container">$$\lim_{n \to \infty} \left(1-\frac xn\right)^n=e^{-x}$$</span> together?</p> <p>Or can we use this form somehow: <span class="math-container">$$\left(1-\frac xn+\frac{x}{n^2}\right)^n = \left(\left( 1 - \frac{x}{\frac{n^2}{n+1}} \right)^{\frac{n^2}{n+1} }\right)^{\frac{n+1}{n}}$$</span></p>
lab bhattacharjee
33,337
<p>Hint:</p> <p><span class="math-container">$$\lim_{n\to\infty}\left(1-\dfrac xn+\dfrac x{n^2}\right)^n =\left(\lim_{n\to\infty}\left(1+\dfrac{x(1-n)}{n^2}\right)^{\dfrac{n^2}{x(1-n)}}\right)^{\lim_{n\to\infty}\dfrac{nx(1-n)}{n^2}}$$</span></p> <p>Now in the inner limit set <span class="math-container">$\dfrac{x(1-n)}{n^2}=y\implies y\to0$</span></p> <p>and <span class="math-container">$\lim_{n\to\infty}\dfrac{nx(1-n)}{n^2}=x\lim_{n\to\infty}\left(\dfrac1n-1\right)=?$</span></p>
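<p>Finishing the hint: with $y=\dfrac{x(1-n)}{n^2}\to0$ the inner limit is $\lim_{y\to0}\left(1+y\right)^{1/y}=e$, and the exponent is $x\lim_{n\to\infty}\left(\dfrac1n-1\right)=-x$, so the whole expression tends to $e^{-x}$.</p>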
1,511,518
<p>Let <span class="math-container">$GL_n^+$</span> be the group of <span class="math-container">$n \times n$</span> real invertible matrices with positive determinant</p> <p>Does there exist a left invariant isotropic Riemannian metric on <span class="math-container">$GL_n^+$</span>?</p> <p>(By "isotropic" I mean that the sectional curvature <span class="math-container">$k(p,\sigma)$</span> for <span class="math-container">$\sigma \subseteq T_pM$</span> does not deped on <span class="math-container">$\sigma$</span>. By left invariance it does not depend on <span class="math-container">$p$</span> as well).</p> <hr> <p>I present a proof below, but I suspect that there are simpler ones:</p> <p>Suppose such a metric <span class="math-container">$g$</span> exists. Then the left invariance implies <span class="math-container">$(GL_n^+,g)$</span> is complete, hence it's universal covering space is complete, simply connected, and with constant curvature, so it must be diffeomorphic to <span class="math-container">$\mathbb{R}^{n^2}$</span> or <span class="math-container">$\, \mathbb{S}^{n^2}$</span>. </p> <p>Since compact spaces can cover only compact spaces, this rules out <span class="math-container">$\mathbb{S}^{n^2}$</span>, so the universal cover of <span class="math-container">$GL_n^+$</span> must be <span class="math-container">$\mathbb{R}^{n^2}$</span>.</p> <p>However, <a href="http://www.sciencedirect.com/science/article/pii/S0001870876800023" rel="nofollow noreferrer">Milnor states here</a> that the universal cover of a Lie group <span class="math-container">$G$</span> is <strong>not</strong> homeomorphic to Euclidean space iff <span class="math-container">$G$</span> contains a compact non-abelian subgroup*. Since <span class="math-container">$GL_n^+$</span> contains <span class="math-container">$SO(n)$</span> we have a contradiction, at least for <span class="math-container">$n \ge 3$</span>.</p> <hr> <p>*I actually don't know how to prove this. Any hints or references would be welcome. Milnor states this in theorem 3.4.</p>
Yury Ustinovskiy
289,282
<p>Here is a topological argument ruling out covering by $\mathbb R^{n^2}$. However, it might be an overkill for this statement.</p> <p>Assume $n&gt;2$. If $\mathbb R^{n^2}$ were the universal cover of $GL^+_n$, we have $GL^+_n=K(\pi, 1)$, where $\pi=\pi_1(GL_n^+)$. At the same time, $\pi_1(GL_n^+)=\pi_1(SO(n))=\mathbb Z_2$, so $K(\pi, 1)=\mathbb R P^{\infty}$, which is clearly not homotopy equivalent to a finite dimensional space.</p>
4,508,840
<p>I need to find the summation <span class="math-container">$$S=\sum_{r=0}^{1010} \binom{1010}r \sum_{k=2r+1}^{2021}\binom{2021}k$$</span></p> <p>I tried various things like replacing <span class="math-container">$k$</span> by <span class="math-container">$2021-k$</span> and trying to add the 2 summations to a pattern but was unable to find a solution. For more insights into the question, this was essential to solve a probability question wherein there were 2 players, A and B. A rolls a dice <span class="math-container">$2021$</span> times, and B rolls it <span class="math-container">$1010$</span> times. We had to find the probability of A having number of odd numbers more than twice of B. So if B had <span class="math-container">$r$</span> odd numbers, A could have odd numbers from <span class="math-container">$2r+1$</span> to <span class="math-container">$2021$</span>, hence the summation. I can get the required probability by dividing this by <span class="math-container">$2^{2021+1010}$</span>.</p>
epi163sqrt
132,007
<p>A variation. We obtain <span class="math-container">\begin{align*} \color{blue}{S_n}&amp;=\sum_{r=0}^{n}\binom{n}{r}\sum_{k=2r+1}^{2n+1}\binom{2n+1}{k}\\ &amp;\,\,\color{blue}{=\sum_{r=0}^n\binom{n}{r}\sum_{k=2n-2r+1}^{2n+1}\binom{2n+1}{k}}\tag{$r\to\ n-r$, (1)}\\ \\ \color{blue}{S_n}&amp;=\sum_{r=0}^{n}\binom{n}{r}\sum_{k=2r+1}^{2n+1}\binom{2n+1}{k}\\ &amp;=\sum_{r=0}^n\binom{n}{r}\sum_{k=0}^{2n-2r}\binom{2n+1}{2r+1+k}\tag{index $k$ starts with $0$}\\ &amp;=\sum_{r=0}^n\binom{n}{r}\sum_{k=0}^{2n-2r}\binom{2n+1}{2n+1-k}\tag{$k\to 2n-2r-k$}\\ &amp;\,\,\color{blue}{=\sum_{r=0}^n\binom{n}{r}\sum_{k=0}^{2n-2r}\binom{2n+1}{k}}\tag{2}\\ \end{align*}</span></p> <blockquote> <p>Adding (1) and (2) and division by two gives <span class="math-container">\begin{align*} \color{blue}{S_n}&amp;=\frac{1}{2}\sum_{r=0}^n\binom{n}{r}\sum_{k=0}^{2n+1}\binom{2n+1}{k}\\ &amp;=\frac{1}{2}\cdot 2^{n}\cdot 2^{2n+1}\\ &amp;\,\,\color{blue}{=2^{3n}} \end{align*}</span></p> </blockquote>
46,076
<p>I'm developing a larger package which includes several subpackages. My problem is, that I can't introduce the symbols in the subpackages to the autocompletion by loading the main package, but by calling a subpackage.</p> <p>Let's explain this with an example: I have a main package, called <code>main</code> which loads the subpackages <code>sub1</code>, <code>sub2</code> and <code>sub3</code>. The directory should look like</p> <pre><code>main ├── Kernel │ └── init.m ├── main.m ├── sub1.m ├── sub2.m └── sub3.m </code></pre> <p>The package <code>main</code> loads all three subpackages in its <code>BeginPackage</code> statement, while the package <code>sub1</code> loads <code>sub2</code> and <code>sub3</code>. The respective <code>init.m</code> looks like:</p> <pre><code>Get["main`sub3`"] Get["main`sub2`"] Get["main`sub1`"] Get["main`main`"] </code></pre> <p>The big problem I have is, that by calling</p> <pre><code>Needs["main`"] </code></pre> <p>I can see all symbols in <code>main</code>, but not those in the subpackages (though, they are usable).</p> <p>By calling</p> <pre><code>Needs["main`sub1`"] </code></pre> <p>I can see all symbols in the subpackages without any problems but can't load <code>main</code> anymore, because then some definitions are redone, which leads to error messages about protected symbols. The most interesting thing is, that the <code>$ContextPath</code> includes the subpackages in both cases.</p> <p>Is there a nice possibility to get the symbols of the subpackages into the autocompletion?</p> <hr> <p><em>Update</em>: By prepending the path of a subpackage to the <code>$ContextPath</code>, I can load the corresponding symbols of that package into the autocompletion.</p> <pre><code>$ContextPath = Prepend[$ContextPath, "main`sub1`"] </code></pre> <p>Unfortunately, this is not very nice and yields duplicates in the <code>$ContextPath</code>.</p> <p>Is there a better, automatic way to achieve this?</p>
kglr
125
<p>In versions 10.2+ there is <code>BlockMap</code>:</p> <pre><code>a = {q, r, s, t, u, v, w, x, y}; BlockMap[Mean, a, 3] </code></pre> <blockquote> <p>{1/3 (q + r + s), 1/3 (t + u + v), 1/3 (w + x + y)}</p> </blockquote> <p>Although this is much slower than the alternatives in Mr.Wizard's answer, its elegance may be of value since OP says</p> <blockquote> <p><em>I have a table of 200 elements</em></p> </blockquote> <p>Also, an undocumented 6-argument form of <code>Partition</code>:</p> <pre><code>Partition[a, 3, 3, None, {}, Mean[{##}] &amp;] </code></pre> <blockquote> <p>{1/3 (q + r + s), 1/3 (t + u + v), 1/3 (w + x + y)}</p> </blockquote>
598,838
<p><span class="math-container">$11$</span> out of <span class="math-container">$36$</span>? I got this by writing down the number of possible outcomes (<span class="math-container">$36$</span>) and then counting how many of the pairs had a <span class="math-container">$6$</span> in them: <span class="math-container">$(1,6)$</span>, <span class="math-container">$(2,6)$</span>, <span class="math-container">$(3,6)$</span>, <span class="math-container">$(4,6)$</span>, <span class="math-container">$(5,6)$</span>, <span class="math-container">$(6,6)$</span>, <span class="math-container">$(6,5)$</span>, <span class="math-container">$(6,4)$</span>, <span class="math-container">$(6,3)$</span>, <span class="math-container">$(6,2)$</span>, <span class="math-container">$(6,1)$</span>. Is this correct?</p>
shaun
260,837
<p>The chance of getting a $6$ with the first dice is $1/6$ and the chance with the second dice is $1/6$ therfore the chance of getting a $6$ with either dice is $1/6 + 1/6 = 1/3$....not $11/36$!!!</p>
2,316,514
<p>I want to calculate $\lim_{x \to 1}\frac{\sqrt{|x^2 - x|}}{x^2 - 1}$ . I tried to compute limit when $x \to 1^{+}$ and $x \to 1^{-}$ but didn't get any result . </p> <p>Please help .</p> <p>Note : I think it doesn't have limit but I can't prove it .</p>
szw1710
130,298
<p>First we should define what a 3-dimensional spline is. Spline functions are inherently connected with a plane. Maybe you should try Bezier curves, which are independent of dimension?</p>
369,435
<p>My task is as in the title: I'm given the function $$f(x)=\frac{1}{1+x+x^2+x^3}$$ My solution is the following (when $|x|&lt;1$):$$\frac{1}{1+x+x^2+x^3}=\frac{1}{(x+1)(x^2+1)}=\frac{1}{1-(-x)}\cdot\frac{1}{1-(-x^2)}=$$$$=\sum_{k=0}^{\infty}(-x)^k\cdot \sum_{k=0}^{\infty}(-x^2)^k$$ Now I try to calculate it the following way:</p> <p>\begin{align} &amp; {}\qquad \sum_{k=0}^{\infty}(-x)^k\cdot \sum_{k=0}^{\infty}(-x^2)^k \\[8pt] &amp; =(-x+x^2-x^3+x^4-x^5+x^6-x^7+x^8-x^9+\cdots)\cdot(-x^2+x^4-x^6+x^8-x^{10}+\cdots) \\[8pt] &amp; =x^3-x^4+0 \cdot x^5+0 \cdot x^6 +x^7-x^8+0 \cdot x^9 +0 \cdot x^{10} +x^{11}+\cdots \end{align}</p> <p>And now I conclude that it is equal to $\sum_{k=0}^{\infty}(x^{3+4 \cdot k}-x^{4+4 \cdot k})$ ($|x|&lt;1$). Is it correct? Are there any faster ways to solve these types of tasks? Any hints will be appreciated, thanks in advance.</p>
Dimitris
37,229
<p>Use the Cauchy product: $$\sum_{k=0}^\infty a_kx^k\cdot \sum_{k=0}^\infty b_kx^k=\sum_{k=0}^\infty c_kx^k$$ where $$c_k=\sum_{n=0}^k a_n\cdot b_{k-n}$$</p> <p>In your case: $a_k=(-1)^k$ and $$b_k=\begin{cases}0 &amp; ,k =2l+1 \\(-1)^{l}&amp;,k=2l\end{cases}$$</p>
1,632,677
<p>How can I arrive at a series expansion for $$\frac{1}{\sqrt{x^3-1}}$$ at $x \to 1^{+}$? Experimentation with WolframAlpha shows that all expansions of things like $$\frac{1}{\sqrt{x^y - 1}}$$ have $$\frac{1}{\sqrt{y}\sqrt{x-1}}$$ as the first term, which I don’t know how to obtain.</p>
Olivier Oloa
118,798
<p>Set $x:=1+\epsilon$, with $\epsilon \to 0^+.$ Then, by the binomial theorem, $$ x^3=(1+\epsilon)^3=1+3\epsilon+3\epsilon^2+\epsilon^3 $$ giving $$ \sqrt{x^3-1}=\sqrt{3\epsilon+3\epsilon^2+\epsilon^3}=\sqrt{3}\:\sqrt{\epsilon}\:\sqrt{1+\epsilon+O(\epsilon^2)} \tag1 $$ Observe that, by the Taylor expansion, as $\epsilon \to 0^+$, $$ \sqrt{1+\epsilon+O(\epsilon^2)}=1+O(\epsilon). \tag2 $$ From $(1)$ and $(2)$, one gets $$ \frac1{\sqrt{x^3-1}}=\frac1{\sqrt{3}\:\sqrt{\epsilon}}\frac1{\sqrt{1+\epsilon+O(\epsilon^2)}}=\frac1{\sqrt{3}\:\sqrt{\epsilon}}\frac1{\left(1+O(\epsilon)\right)}=\frac1{\sqrt{3}\:\sqrt{\epsilon}}\left(1+O(\epsilon) \right) $$ or, using $\epsilon=x-1$,</p> <blockquote> <p>$$ \frac1{\sqrt{x^3-1}}=\frac1{\sqrt{3}\:\sqrt{x-1}}+O(\sqrt{x-1}). $$ </p> </blockquote> <p>Similarly, one obtains, for $y&gt;0$, as $x \to 1^+$,</p> <blockquote> <p>$$ \frac1{\sqrt{x^y-1}}=\frac1{\sqrt{y}\:\sqrt{x-1}}+O(\sqrt{x-1}). $$</p> </blockquote>
37,380
<p>I do have noisy data and want to smooth them by a Savitzky-Golay filter because I want to keep the magnitude of the signal. </p> <p>a) Is there a ready-to-use Filter available for that? </p> <p>b) what are appropriate values for m (the half width) and for the coefficients for 3000-4000 data points?</p>
Alexey Popkov
280
<p>Several years ago in <a href="https://groups.google.com/d/msg/comp.soft-sys.math.mathematica/5k6rHhIfUXY/62WGusZFAiMJ">related MathGroups thread</a> Virgil P. Stokes suggested: </p> <blockquote> <p>A few years back I wrote a <em>Mathematica</em> notebook that shows how one can obtain the SG smoother from Gram polynomials. The code is not very elegant; but, it is a rather general implementation that should be easy to understand. Contact me if you are interested and I will be glad to forward the notebook to you.</p> </blockquote> <p>I contacted him and received the notebook. I find his implementation of the Savitzky-Golay filter quite stable and working pretty well. Here I publish it with his permission:</p> <pre><code>Clear[m, i]; (* m, i are global variables !! *) Clear[GramPolys, LSCoeffs, SGSmooth]; GramPolys[mm_, nmax_] := Module[{k, m = mm}, (* equations (1a), (1b) *) (* Define recursive equation for Gram polynomials as a function of m,i for degrees 0,1,...,nmax *) p[m, 0, i] = 1; p[m, -1, i] = 0; p[m_, k_, i_] := p[m, k, i] = 2*(2*k - 1)/(k*(2*m - k + 1))*i*p[m, k - 1, i] - (k - 1)*(2*m + k)/(k*(2*m - k + 1))*p[m, k - 2, i]; (* Return coefficients for degrees 0,1,...,nmax in a list *) Table[p[mm, k, i] // FullSimplify, {k, 0, nmax}] ]; LSCoeffs[m_, n_, d_] := Module[{k, j, sum, clist, polynomial, cclist}, polynomial = GramPolys[m, n]; clist = {}; Do[(* points in each sliding window *) sum = 0; Do[ (* degree loop *) num = (2 k + 1) FactorialPower[2 m, k]; den = FactorialPower[2 m + k + 1, k + 1]; t1 = polynomial[[k + 1]] /. {i -&gt; j}; t2 = polynomial[[k + 1]]; sum = sum + (num/den)*t1*t2 // FullSimplify; (*Print["k,polynomial[[k+1]]: ",k,", ",polynomial[[k+1]]];*) , {k, 0, n}]; clist = Append[clist, sum]; , {j, -m, m}]; Table[D[clist, {i, d}] /. {i -&gt; j}, {j, -m, m}] ]; SGSmooth[cc_, data_] := Module[{m, y, datal, datar, k, kk, n, yy}, n = Length[data]; m = (Length[cc] - 1)/2; (* Left end --- first 2*m+1 points used *) datal = Take[data, 2*m + 1]; (* Smooth first m points (1,2,...,m-1,m) *) kk = 0; Table[(kk = kk + 1; y[k] = ListConvolve[Reverse[cc[[kk]]], datal][[1]]), {k, -m, -1}]; (* Smooth central points (m+1,m+2,...n-m-1) *) y[0] = ListConvolve[Reverse[cc[[m + 1]]], data]; (* Right end --- last 2*m+1 points used *) datar = Take[data, {n - (2*m + 1) + 1, n}]; (* Smooth last m points (n-m,n-m+1,...,n) *) kk = m + 1; Table[(kk = kk + 1; y[k] = ListConvolve[Reverse[cc[[kk]]], datar][[1]]), {k, 1, m}]; (* And now we concatenate the front-end, central, and back- end estimated data values *) yy = Join[Table[y[k], {k, -m, -1}], y[0], Table[y[k], {k, 1, m}]] ]; </code></pre> <blockquote> <pre><code>Usage: SGOutput = SGSmooth[LSCoeffs[m,n,d], data] Inputs: m = half-width of smoothing window; i.e., 2m+1 points in smoothing kernel n = degree of LS polynomial (n &lt; 2m+1) d = order of derivative (d =0, smoother; d = 1, 1st derivative; ...) data = list of uniformly sampled (spaced) data values to be smoothed (length(data) &gt;=2m+1) Outputs: SGOutput = list of smoothed data values </code></pre> </blockquote>
3,120,090
<blockquote> <p>Find all the values of <span class="math-container">$\theta$</span> that satisfy the equation <span class="math-container">$$\cos(x \theta ) + \cos( (x+2) \theta ) = \cos( \theta )$$</span></p> </blockquote> <p>I've tried simplifying with factor formulae and a combo of compound angle formulae, and I'm still stuck. I get to <span class="math-container">$\theta = 180^\circ$</span> and <span class="math-container">$\theta = \frac{60^\circ}{x+1}$</span>, but I'm unsure if that's correct. </p> <p>It seems to work for <span class="math-container">$\theta=180^\circ$</span>, but I can't verify the other solution. I feel as though it should be a numerical solution, but I'm unsure. </p>
Michael Rozenberg
190,319
<p>it's <span class="math-container">$$2\cos(x+1)\theta\cos\theta=\cos\theta$$</span> or <span class="math-container">$$(2\cos(x+1)\theta-1)\cos\theta=0.$$</span> Can you end it now?</p> <p>Actually, <span class="math-container">$\cos\theta=0$</span> gives <span class="math-container">$$\theta=\frac{\pi}{2}+\pi k,$$</span> where <span class="math-container">$k\in\mathbb Z$</span>.</p> <p>Also, there is a mistake in your second sequence. </p> <p>I used <span class="math-container">$$\cos\alpha+\cos\beta=2\cos\frac{\alpha+\beta}{2}\cos\frac{\alpha-\beta}{2}$$</span> and <span class="math-container">$$\pi=180^{\circ}.$$</span></p>
96,952
<p>How would I go about solving the following for <code>c</code>?</p> <pre><code>Solve[0 == Sum[(t[i]*m[i] - c*t[i]^2)/s[i]^2, {i, 1, n}], c, Reals] </code></pre> <p>I get the error </p> <blockquote> <p>Solve::nsmet : This system cannot be solved with the methods available to Solve</p> </blockquote> <p>but it is fairly straight forward to get that </p> <pre><code>c = Sum[m[i]*t[i]/s[i]^2, {i, 1, n}] / Sum[t[i]^2/s[i]^2, {i, 1, n}] </code></pre> <p>I need to repeat this for a similar equation that is not quite so straight forward to do on paper. I intended to use <em>Mathematica</em> to check my result but since I cannot verify the one I am confident in, I may be out of luck. I am new to <em>Mathematica</em> so I would not be surprised if the problem is my ignorance.</p> <p><strong>Edit:</strong> A friend pointed out that <code>N</code> was a defined function in <em>Mathematica</em> (also mentioned in a comment below). I replaced that with <code>n</code> with the same overall outcome. </p>
bbgodfrey
1,063
<p>If the variable to be solved for appears in the <code>Sum</code> in polynomial form, then the following works.</p> <pre><code>solvsum[s_, z_] := Module[{coef, in = s[[2]], cf = CoefficientList[s[[1]], z]}, Solve[0 == Sum[coef[i] z^(i - 1), {i, Length[cf]}], z] /. Table[coef[j] -&gt; Sum[cf[[j]], Evaluate[in]], {j, Length[cf]}]] </code></pre> <p>For the <code>Sum</code> in the question,</p> <pre><code>s1 = Sum[(t[i]*m[i] - c*t[i]^2)/s[i]^2, {i, 1, n}]; solvsum[s1, c] (* {{c -&gt; -(Sum[(m[i]*t[i])/s[i]^2, {i, 1, n}]/Sum[-(t[i]^2/s[i]^2), {i, 1, n}])}} *) </code></pre>
3,003,982
<p>By integrating by parts twice, show that <span class="math-container">$I_n$</span>, as defined below for integers <span class="math-container">$n &gt; 1$</span>, has the value shown.</p> <blockquote> <p><span class="math-container">$$I_n = \int_0^{\pi / 2} \sin n \theta \cos \theta \,d\theta = \frac{n-\sin(\frac{\pi n}{2})}{n^2 -1}$$</span></p> </blockquote> <p>I can do this using the formula <span class="math-container">$$\sin A \cos B = \frac{1}{2}[\sin(A-B)+\sin(A+B)] ,$$</span> but when I try using integration by parts I get stuck in a loop of integrating the same thing over and over.</p>
Travis Willse
155,629
<p><strong>Hint</strong> Following what you've done already, integrating by parts with <span class="math-container">$u = \sin n \theta$</span>, <span class="math-container">$dv = \cos \theta \,d\theta$</span> gives <span class="math-container">\begin{multline}\color{#df0000}{I_n} = \underbrace{\sin n \theta}_u \, \underbrace{\sin \theta}_v \vert_0^{\pi / 2} - \int_0^{\pi / 2} \underbrace{\sin \theta}_v \, \underbrace{\cos n\theta \, d\theta}_{du} = \sin \frac{\pi n}{2} - n \color{#1f1fff}{J_n}, \\ \color{#1f1fff}{J_n := \int_0^{\pi / 2} \cos n \theta \sin \theta \, d\theta} .\end{multline}</span></p> <p>We now apply integration by parts to the integral <span class="math-container">$\color{#3f3fff}{J_n}$</span> with <span class="math-container">$p = \cos n \theta$</span>, <span class="math-container">$dq = \sin \theta \,d\theta$</span>: <span class="math-container">$$\color{#3f3fff}{J_n} = \cos n \theta (-\cos \theta)\vert_0^{\pi / 2} - \int_0^{\pi / 2} \underbrace{-\cos \theta}_q \cdot \underbrace{- n \sin n \theta \,d\theta}_{dp} = 1 - n \color{#df0000}{I_n} .$$</span></p> <blockquote class="spoiler"> <p>Substituting to eliminate <span class="math-container">$\color{#3f3fff}{J_n}$</span> gives <span class="math-container">$\color{#df0000}{I_n} = \sin \frac{\pi n}{2} - n (1 - n \color{#df0000}{I_n})$</span>, and rearranging to solve for <span class="math-container">$\color{#df0000}{I_n}$</span> gives the claimed identity: <span class="math-container">$$\color{#df0000}{\boxed{I_n = \frac{n - \sin \frac{\pi n}{2}}{n^2 - 1}}} .$$</span> Notice that for <span class="math-container">$n \equiv 0, 2 \pmod 4$</span> this simplifies to <span class="math-container">$\frac{n}{n^2 - 1}$</span>, for <span class="math-container">$n \equiv 1 \pmod 4$</span> to <span class="math-container">$\frac{1}{n + 1}$</span>, and for <span class="math-container">$n \equiv 3 \pmod 4$</span> to <span class="math-container">$\frac{1}{n - 1}$</span>.</p> </blockquote>
3,190,601
<p>Suppose <span class="math-container">$f$</span> is continuous on <span class="math-container">$[-1,1]$</span> and differentiable on <span class="math-container">$(-1,1)$</span>. Find:</p> <p><span class="math-container">$$\lim_{x\rightarrow{0^{+}}}{\bigg(x\int_x^1{\frac{f(t)}{\sin^2(t)}}\: dt\bigg)}$$</span></p> <p>I am trying to use l'hopital's rule with:</p> <p><span class="math-container">$$\lim_{x\rightarrow{0^{+}}}{\bigg(\frac{\int_x^1{\frac{f(t)}{\sin^2(t)}}\: dt}{\frac{1}{x}}\bigg)}$$</span></p> <p>However this only works if the integral evaluates to <span class="math-container">$\pm$</span>infinity. when <span class="math-container">$x=0$</span>. Since <span class="math-container">$f(t)$</span> is general I'm not really sure what to do next. </p> <p>Help would be appreciated :)</p>
user662984
662,984
<p>Define <span class="math-container">$m(x)=\inf_{t\in[0,x]}t^2\frac{f(t)}{\sin^2(t)}$</span> and <span class="math-container">$M(x)=\sup_{t\in[0,x]}t^2\frac{f(t)}{\sin^2(t)}$</span>. Since <span class="math-container">$f$</span> is continuous, <span class="math-container">$m(x)\to f(0)$</span> and <span class="math-container">$M(x)\to f(0)$</span> as <span class="math-container">$x\to0^+$</span>.</p> <p>For <span class="math-container">$x,y\in(0,1)$</span>, with <span class="math-container">$y&lt;x$</span>, there is <span class="math-container">$c\in(y,x)$</span> such that <span class="math-container">$$\frac{\int_{x}^{y}\frac{f(t)}{\sin^2(t)}}{\frac{1}{x}-\frac{1}{y}}=\frac{\int_{x}^{1}\frac{f(t)}{\sin^2(t)}-\int_{y}^{1}\frac{f(t)}{\sin^2(t)}}{\frac{1}{x}-\frac{1}{y}}=\frac{\frac{f(c)}{\sin^2(c)}}{\frac{1}{c^2}}$$</span></p> <p>Therefore, <span class="math-container">$$m(x)\leq\frac{y\int_{x}^{1}\frac{f(x)}{\sin^2(x)}-y\int_{y}^{1}\frac{f(y)}{\sin^2(y)}}{\frac{y}{x}-1}\leq M(x)$$</span></p> <p>Taking <span class="math-container">$\liminf$</span> and <span class="math-container">$\limsup$</span> as <span class="math-container">$y\to0^+$</span> we get <span class="math-container">$$m(x)\leq\liminf_{y\to0^+}(\text{ or }\limsup_{y\to0^+}) y\int_{y}^{1}\frac{f(y)}{\sin^2(y)}\leq M(x)$$</span></p> <p>Therefore, taking <span class="math-container">$x\to0^+$</span> we get that <span class="math-container">$\liminf(\text{ or }\limsup) y\int_{y}^{1}\frac{f(y)}{\sin^2(y)}=f(0)$</span>.</p>
3,190,601
<p>Suppose <span class="math-container">$f$</span> is continuous on <span class="math-container">$[-1,1]$</span> and differentiable on <span class="math-container">$(-1,1)$</span>. Find:</p> <p><span class="math-container">$$\lim_{x\rightarrow{0^{+}}}{\bigg(x\int_x^1{\frac{f(t)}{\sin^2(t)}}\: dt\bigg)}$$</span></p> <p>I am trying to use l'hopital's rule with:</p> <p><span class="math-container">$$\lim_{x\rightarrow{0^{+}}}{\bigg(\frac{\int_x^1{\frac{f(t)}{\sin^2(t)}}\: dt}{\frac{1}{x}}\bigg)}$$</span></p> <p>However this only works if the integral evaluates to <span class="math-container">$\pm$</span>infinity. when <span class="math-container">$x=0$</span>. Since <span class="math-container">$f(t)$</span> is general I'm not really sure what to do next. </p> <p>Help would be appreciated :)</p>
Paramanand Singh
72,031
<p>If one assumes that <span class="math-container">$f$</span> is constant, then the integral evaluates to <span class="math-container">$f(0)(\cot x-\cot 1)$</span> and thus the desired limit is <span class="math-container">$f(0)$</span>.</p> <p>This motivates us to prove that the limit is <span class="math-container">$f(0)$</span> even when <span class="math-container">$f$</span> is not constant. We need only continuity of <span class="math-container">$f$</span> at <span class="math-container">$0$</span>. Let <span class="math-container">$\epsilon&gt;0$</span> then there is a <span class="math-container">$\delta &gt;0$</span> such that <span class="math-container">$$|f(x) - f(0)|&lt;\epsilon$$</span> whenever <span class="math-container">$|x|&lt;\delta$</span>.</p> <p>We deal with the case when <span class="math-container">$x\to 0^+$</span> (the case <span class="math-container">$x\to 0^-$</span> being similar). We have <span class="math-container">$$x\int_{x}^{1}\frac{f(t)} {\sin^2t}\, dt=x\int_{x} ^{\delta} \frac{f(t)} {\sin^2t}\,dt+x\int_{\delta} ^{1}\frac{f(t)}{\sin^2t}\,dt$$</span> As noted in the start of the answer we have <span class="math-container">$$x\int_{x} ^{\delta} \frac{f(0)}{\sin^2t}\,dt=f(0)(x\cot x-x\cot\delta) $$</span> and hence <span class="math-container">$$\left|x\int_{x} ^{1}\frac{f(t)}{\sin^2t}\,dt-f(0)(x\cot x - x\cot\delta)\right|\leq x\int_{x} ^{\delta}\frac{|f(t)-f(0)|}{\sin^2t}\,dt+x\int_{\delta}^{1}\frac{|f(t)|}{\sin^2t}\,dt$$</span> and the RHS does not exceed <span class="math-container">$$\epsilon (x\cot x-x\cot \delta) +x\int_{\delta} ^{1}\frac{|f(t)|}{\sin^2t}\,dt$$</span> Taking limits as <span class="math-container">$x\to 0^+$</span> we see that <span class="math-container">$$f(0)-\epsilon\leq \liminf_{x\to 0^{+}} x\int_{x} ^{1}\frac{f(t)}{\sin^2t}\,dt\leq\limsup_{x\to 0^{+}} x\int_{x} ^{1}\frac{f(t)}{\sin^2t}\,dt\leq f(0)+ \epsilon $$</span> Since <span class="math-container">$\epsilon $</span> is an arbitrary positive number the desired limit is <span class="math-container">$f(0)$</span>.</p>
2,523,570
<p>$15x^2-4x-4$, I factored it out to this: $$5x(3x-2)+2(3x+2).$$ But I don’t know what to do next since the twos in the brackets have opposite signs, or is it still possible to factor them out?</p>
Michael Rozenberg
190,319
<p>$$15x^2-4x-4=15x^2-10x+6x-4=5x(3x-2)+2(3x-2)=(5x+2)(3x-2).$$</p>
1,266,674
<p>There are $2^{10} =1024$ possible $10$ -letters strings in which each letter is either an $A$ or a $B$. Find the number of such strings that do not have more than $3$ adjacent letters that are identical.</p>
André Nicolas
6,312
<p>Let $f(n)$ be the number of strings of length $n$ that begin with A and do not have $4$ or more consecutive occurrences of the same letter (good strings). Then the answer to our problem is $2f(10)$.</p> <p>The second letter could be a B. There are $f(n-1)$ such good strings.</p> <p>The second letter could be an A, and the next a B. There are $f(n-2)$ such good strings.</p> <p>The second and third letter could be an A, and the next a B. There are $f(n-3)$ such good strings. </p> <p>So for $n\gt 3$ we have $f(n)=f(n-1)+f(n-2)+f(n-3)$. It is easy to see that $f(1)=1$, $f(2)=2$, and $f(3)=4$. Now we can use the "tribonacci" recurrence to climb to $10$.</p>
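<p>Carrying out the climb: $$f(4)=7,\quad f(5)=13,\quad f(6)=24,\quad f(7)=44,\quad f(8)=81,\quad f(9)=149,\quad f(10)=274,$$ so the number of such strings is $2f(10)=548$.</p>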
1,965,040
<p>Find the cyclic subgroups $&lt;\rho_1&gt;, &lt;\rho_2&gt;, and &lt;\mu_1&gt;$ of $S_3$.<a href="https://i.stack.imgur.com/AOnTC.png" rel="nofollow noreferrer"> Elements of $S_3$.</a></p> <p>I know the answer is supposed to be $&lt;\rho_1&gt; = &lt;\rho_2&gt; = \{\rho_0, \rho_1, \rho_2 \}$ and $&lt;\mu_1&gt; = \{ \rho_0, \rho_1 \}$. I'm not sure if my work shows this. For $&lt;\rho_1&gt;, &lt;\rho_2&gt;$ is it sufficient to say that</p> <p>$(231)(123)= (231)$</p> <p>$(231)(231)= (312)$ &amp; $(312)(231)=(123)$ &amp; $(123)(231)=(231)$</p> <p>We can say $&lt;\rho_1&gt; = &lt;\rho_2&gt; = \{\rho_0, \rho_1, \rho_2 \}$ </p>
Marc Bogaerts
118,955
<p>I suppose you mean that you are considering cosets of an ideal $I$ in a ring $R$, then $$(a+I)(b+I) = ab +aI+bI +I*I = ab + I + I + I= ab + I$$. Because $aI = I$, $I*I = I$ and $I + I=I$ as consequences of the definition of an ideal.</p>
1,965,040
<p>Find the cyclic subgroups $&lt;\rho_1&gt;, &lt;\rho_2&gt;, and &lt;\mu_1&gt;$ of $S_3$.<a href="https://i.stack.imgur.com/AOnTC.png" rel="nofollow noreferrer"> Elements of $S_3$.</a></p> <p>I know the answer is supposed to be $&lt;\rho_1&gt; = &lt;\rho_2&gt; = \{\rho_0, \rho_1, \rho_2 \}$ and $&lt;\mu_1&gt; = \{ \rho_0, \rho_1 \}$. I'm not sure if my work shows this. For $&lt;\rho_1&gt;, &lt;\rho_2&gt;$ is it sufficient to say that</p> <p>$(231)(123)= (231)$</p> <p>$(231)(231)= (312)$ &amp; $(312)(231)=(123)$ &amp; $(123)(231)=(231)$</p> <p>We can say $&lt;\rho_1&gt; = &lt;\rho_2&gt; = \{\rho_0, \rho_1, \rho_2 \}$ </p>
Yanyu Wang
743,565
<p>An excellent question! I have also been troubled by this question for a long time. The key point here is to remember why we define the product on the quotient ring this way. You may think this is a set equation; however, what really matters is that we can verify the product is well-defined. In other words, it is independent of the choice of representative. If you check the definition of a quotient group, you will find that the definition there is actually a set equation (an unfortunate coincidence, which may make you think the set equation is the crucial thing). But if you check a textbook instead of just the definition, you will find that the reason we define a normal subgroup is to keep the product well-defined!</p>
2,965,993
<p>Suppose you have the surface <span class="math-container">$\xi$</span> defined in <span class="math-container">$\mathbb{R}^3$</span> by the equation: <span class="math-container">$$ \xi :\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1 $$</span> For <span class="math-container">$ x \geq 0$</span> , <span class="math-container">$ y \geq 0$</span> and <span class="math-container">$ z \geq 0$</span>. Now take any point <span class="math-container">$P \in \xi$</span> and consider the tangent plane (<span class="math-container">$\pi_t)$</span> to <span class="math-container">$\xi$</span> at <span class="math-container">$P$</span>. Calculate the minimum volume of the region determined by the <span class="math-container">$xy$</span>, <span class="math-container">$yz$</span>, <span class="math-container">$xz$</span> planes and <span class="math-container">$\pi_t$</span>.</p> <p><img src="https://i.stack.imgur.com/6zwo0.png" alt="Ellipsoid and tetrahedron."></p>
dan_fulea
550,003
<p>Consider a point <span class="math-container">$P(x_0,y_0,z_0)\in \Bbb R_{&gt;0}^3$</span> on the given ellipsoid <span class="math-container">$(E)$</span> with equation <span class="math-container">$$ \frac {x^2}{a^2} + \frac {y^2}{b^2} + \frac {z^2}{c^2} =1\ . $$</span> Then the plane tangent in <span class="math-container">$P$</span> to <span class="math-container">$(E)$</span> is given by the linear part of the Taylor polynomial around <span class="math-container">$(x_0,y_0,z_0)$</span> of first order of the polynomial of second degree in the above equation, equated to zero. To isolate this polynomial of degree one, write <span class="math-container">$$ \begin{aligned} &amp;\frac {x^2}{a^2} + \frac {y^2}{b^2} + \frac {z^2}{c^2} -1 \\ &amp;\qquad= \frac 1{a^2} ((x-x_0)+x_0)^2+ \frac 1{b^2} ((y-y_0)+y_0)^2+ \frac 1{c^2} ((z-z_0)+z_0)^2 -1 \\ &amp;\qquad= \underbrace{ \left( \frac 1{a^2} x_0^2+ \frac 1{b^2} y_0^2+ \frac 1{c^2} z_0^2 -1 \right)}_{=0} \\ &amp;\qquad\qquad+ \frac 2{a^2} x_0(x-x_0)+ \frac 2{b^2} y_0(y-y_0)+ \frac 2{c^2} z_0(z-z_0) \\ &amp;\qquad\qquad\qquad\qquad+ \text{higher order terms} \\ &amp;\qquad\qquad\qquad\qquad \text{ containing factors $(x-x_0)^2$, and $(y-y_0)^2$, and $(z-z_0)^2$ .} \end{aligned} $$</span> So the equation of the tangent plane in <span class="math-container">$P$</span> to <span class="math-container">$(E)$</span> is (given by "dedoubling"): <span class="math-container">$$ \frac 1{a^2} x_0(x-x_0)+ \frac 1{b^2} y_0(y-y_0)+ \frac 1{c^2} z_0(z-z_0)=0\ . $$</span> Simpler maybe: <span class="math-container">$$ \frac 1{a^2} x_0x+ \frac 1{b^2} y_0y+ \frac 1{c^2} z_0z =1\ . $$</span> This plane hits the <span class="math-container">$Ox$</span> axis in the point with <span class="math-container">$y=z=0$</span> and <span class="math-container">$x=a^2/x_0$</span>. Similar formulas for the intersections with the other axes.</p> <p>So we have to minimize the volume: <span class="math-container">$$ V =\frac 16\frac{a^2\;b^2\;c^2}{x_0\;y_0\;z_0}\ . $$</span> For this we have to maximize the product <span class="math-container">$x_0y_0z_0$</span>. The inequality between the arithmetic and geometric mean solves the problem. <span class="math-container">$$ 1 = \frac {x_0^2}{a^2} + \frac {y_0^2}{b^2} + \frac {z_0^2}{c^2} \ge 3\left(\frac {x_0^2y_0^2z_0^2}{a^2b^2c^2}\right)^{1/3} = 3\left(\frac {x_0y_0z_0}{abc}\right)^{2/3}\ , $$</span> so <span class="math-container">$$ x_0y_0z_0\le 3^{-3/2}abc\ . $$</span> The equality holds for the point <span class="math-container">$P^*\left(\frac a{\sqrt 3},\frac b{\sqrt 3},\frac c{\sqrt 3}\right)$</span>. We get the minimal volume: <span class="math-container">$$ V^* = \frac 16\frac{a^2b^2c^2}{3^{-3/2}abc}= \frac {\sqrt 3}2\; abc\ . $$</span></p>
699,933
<p>There is a proof of the real case of Cauchy-Schwarz inequality that expands $\|\lambda v - w\|^2 \geq 0 $, gets a quadratic in $\lambda$, and takes the discriminant to get the Cauchy-Schwarz inequality. In trying to do the same thing in the complex case, I ran into some trouble. First, there are proofs <a href="http://www.artofproblemsolving.com/Wiki/index.php/Cauchy-Schwarz_Inequality#Complex_Form" rel="nofollow noreferrer">here</a>, <a href="https://math.stackexchange.com/questions/446148/cauchy-schwarz-for-complex-numbers">here</a>, <a href="https://math.stackexchange.com/questions/202406/proof-of-cauchy-schwarz-inequality">here</a>, and <a href="http://ckrao.wordpress.com/2011/02/18/two-interesting-proofs-of-the-cauchy-schwarz-inequality-complex-case/" rel="nofollow noreferrer">here</a>, but none of them do it the way I'm thinking of.</p> <p>If I similarly expand $\|\lambda v - w\|^2_{\mathbb{C}},$ I get $|\lambda|^2\|v\|^2 - 2 \text{Re}(\lambda \langle v,w\rangle) + \|w\|^2$. How can I manipulate this to get Cauchy-Schwarz using the discriminant? My problem is that $$|\lambda|^2\|v\|^2 - 2 \text{Re}(\lambda \langle v,w\rangle) + \|w\|^2 \geq |\lambda|^2\|v\|^2 - 2 | \lambda ||\langle v,w\rangle | + \|w\|^2,$$ so I can't be sure that the latter term is $\geq 0$.</p>
mookid
131,738
<blockquote> <p>If I similarly expand $\|\lambda v - w\|^2_{\mathbb{C}},$ I get $|\lambda|^2\|v\|^2 - 2 \text{Re}(\lambda \langle v,w\rangle) + \|w\|^2$.</p> </blockquote> <p>Your idea can work almost the same way.</p> <p>Consider values $\lambda(r) = r \exp i\theta$, taking $\theta$ such that $\lambda(r) \langle v,w\rangle \in \mathbb R$. You get $$ 0\le|\lambda|^2\|v\|^2 - 2 \text{Re}(\lambda \langle v,w\rangle) + \|w\|^2 = r^2\|v\|^2 \pm 2 r |\langle v,w\rangle| + \|w\|^2 $$</p> <p>The $\pm$ depends on the sign of the real part. Choosing $\theta$ so that $e^{i\theta}\langle v,w\rangle = |\langle v,w\rangle|$ gives, for every real $r$, $$0\le r^2\|v\|^2 - 2 r |\langle v,w\rangle| + \|w\|^2,$$ and now the discriminant argument runs exactly as in the real case: $|\langle v,w\rangle|^2\le \|v\|^2\|w\|^2$.</p>
2,830,926
<p>I am implementing a program in C which requires that given 4 points should be arranged such that they form a quadrilateral.(assume no three are collinear)<br> Currently , I am ordering the points in the order of their slope with respect to origin.<br> See <a href="https://ibb.co/cfDHeo" rel="nofollow noreferrer">https://ibb.co/cfDHeo</a> .<br> In this case a,b,c,d are in descending order of slope but on joining a to b , b to c , c to d , d to a - I don't get a quadrilateral .<br> So my method fails in such cases.<br></p> <p>I need suggestion.</p> <p>Thanks.</p>
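<p>One approach that may work here, assuming the goal is a simple (non-self-intersecting) quadrilateral: sort the four points by angle around their <em>centroid</em> instead of around the origin. A rough Python sketch of the idea (it translates directly to C using <code>atan2</code> and <code>qsort</code>); the sample coordinates are made up for illustration:</p>
<pre><code>import math

def order_quadrilateral(points):
    # Sort the points by angle around their centroid; for four points in
    # general position this typically yields a non-crossing ordering.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

# Made-up example where ordering by slope from the origin produces crossing
# edges, but the centroid-angle ordering traces a proper quadrilateral.
pts = [(1.0, 5.0), (2.0, 1.0), (6.0, 2.0), (5.0, 6.0)]
print(order_quadrilateral(pts))
</code></pre>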
David M.
398,989
<p>Sounds like the issue is with calculating the second derivative, so here are some tips. When I have to do a tedious derivative, I like to fold up the notation a little to make things easier. So in this case, I would define</p> <p>$$ f(u)\equiv 1-(1-p)e^u, $$</p> <p>then the MGF is</p> <p>$$ m_X(u)=\bigg(\frac{p}{f(u)}\bigg)^r. $$</p> <p>Using this notation, we have that</p> <p>$$ m'_X(u)=-\frac{rp^rf'(u)}{\big[f(u)\big]^{r+1}}. $$</p> <p>This is the tedious part. Using the quotient rule, the second derivative comes out to</p> <p>$$ m''_X(u)=\frac{rp^r\big[(r+1)[f'(u)]^2-f(u)f''(u)\big]}{\big[f(u)\big]^{r+2}} $$</p> <p>You can then easily evaluate $f(0)$, $f'(0)$, <em>etc.</em> to get the desired result.</p>
2,830,926
<p>I am implementing a program in C which requires that given 4 points should be arranged such that they form a quadrilateral.(assume no three are collinear)<br> Currently , I am ordering the points in the order of their slope with respect to origin.<br> See <a href="https://ibb.co/cfDHeo" rel="nofollow noreferrer">https://ibb.co/cfDHeo</a> .<br> In this case a,b,c,d are in descending order of slope but on joining a to b , b to c , c to d , d to a - I don't get a quadrilateral .<br> So my method fails in such cases.<br></p> <p>I need suggestion.</p> <p>Thanks.</p>
Clarinetist
81,560
<p>I wanted to state that finding the variance of $X$, given the MGF of $X$, is usually much easier done with the <strong>cumulant generating function</strong> (CGF).</p> <p>Suppose $M_X$ is the MGF of $X$. Then the CGF is given by $\varphi_X = \ln M_X$.</p> <p>One of the very nice things about $\varphi_X$ is that $\left. \varphi^{\prime\prime}_X(t) \right|_{t = 0} = \text{Var}(X)$, thus eliminating the need to find $\mathbb{E}[X]$ and $\mathbb{E}[X^2]$ separately.</p> <p>Given $$M_X(u) = \left[ \dfrac{p}{1-(1-p)e^u}\right]^r \implies\varphi_X(u)=\ln M_X(u)=r\ln\left[ \dfrac{p}{1-(1-p)e^u}\right]$$ where we have used the property $\ln(a^b) = b\ln(a)$.</p> <p>The first derivative of $\varphi_X$ is given by $$\varphi_X^{\prime}(u) = r\dfrac{1}{p/[1-(1-p)e^u]} \cdot p \cdot [1-(1-p)e^u]^{-2}[(1-p)e^{u}] =\dfrac{r(1-p)e^u}{1-(1-p)e^u}$$ [It's worth noting that $\left.\varphi^{\prime}_X(u)\right|_{u = 0} = \mathbb{E}[X]$.]</p> <p>To make finding the second derivative easier, divide both the numerator and denominator by $e^u$, obtaining $$\varphi^{\prime}_X(u)=\dfrac{r(1-p)}{e^{-u}-(1-p)}$$ The second derivative is then $$\varphi^{\prime\prime}_X(u)=r(1-p)(-1)[e^{-u}-(1-p)]^{-2}(-e^{-u})=\dfrac{re^{-u}(1-p)}{[e^{-u}-(1-p)]^2}$$ and evaluating this at $u = 0$ gives $$\left.\varphi^{\prime\prime}_X(u)\right|_{u=0}=\dfrac{re^{-0}(1-p)}{[e^{-0}-(1-p)]^2} = \dfrac{r(1-p)}{p^2} = \text{Var}(X)$$ as desired.</p>
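<p>If it helps to double-check the algebra, here is a small sympy computation of the same mean and variance directly from the given MGF (a verification sketch only):</p>
<pre><code>import sympy as sp

u, p, r = sp.symbols('u p r', positive=True)

M = (p / (1 - (1 - p) * sp.exp(u)))**r      # the given MGF
phi = sp.log(M)                             # cumulant generating function

mean = sp.simplify(sp.diff(phi, u).subs(u, 0))
var = sp.simplify(sp.diff(phi, u, 2).subs(u, 0))

print(mean)   # expected: r*(1 - p)/p
print(var)    # expected: r*(1 - p)/p**2
</code></pre>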
2,029,538
<blockquote> <p>A function $f(x)$, where $x$ is a real number , is defined implicitly by the following formula: $$f(x)=x-\int^{\frac{\pi}{2}}_0f(x)\sin(x)dx$$ Find the explicit function for $f(x)$ in its simplest form.</p> </blockquote> <p>This question appeared in the recent New Zealand Qualifications Authority 2016 Scholarship Calculus examination.</p> <hr> <p>What I have done</p> <p>Let $f(x)=y$</p> <p>$$f(x)=x-\int^{\frac{\pi}{2}}_0f(x)\sin(x)dx \Rightarrow y=x-\int^{\frac{\pi}{2}}_0y\sin(x)dx$$</p> <p>$$ y=x-\int^{\frac{\pi}{2}}_0y\sin(x)dx \Leftrightarrow \int^{\frac{\pi}{2}}_0y\sin(x)dx=x-y$$</p> <p>Consider the integral</p> <p>$$ \int^{\frac{\pi}{2}}_0y\sin(x)dx$$</p> <p>Let $u=y\Rightarrow du=dy$ and $dv= \sin(x) \Rightarrow v=-\cos(x)$</p> <p>$$ \int^{\frac{\pi}{2}}_0y\sin(x)dx = \left[y\sin(x) \right]^{\frac{\pi}{2}}_0+\int^{\frac{\pi}{2}}_0\cos(x) dydx$$ </p> <p>How can I continue?</p>
msm
350,875
<p>$$f(x)=x-\int^{\frac{\pi}{2}}_0f(x')\sin(x')dx'$$ Multiply by $\sin(x)$: $$f(x)\sin(x)=x\sin(x)-\left(\int^{\frac{\pi}{2}}_0f(x')\sin(x')dx'\right)\sin(x)$$ and integrate from $0$ to $\frac{\pi}{2}$: $$\int^{\frac{\pi}{2}}_0f(x)\sin(x)dx=\int^{\frac{\pi}{2}}_0x\sin(x)dx-\left(\int^{\frac{\pi}{2}}_0f(x')\sin(x')dx'\right)\int^{\frac{\pi}{2}}_0\sin(x)dx$$ The original equation says that $x-f(x)$ equals the common integral $\int^{\frac{\pi}{2}}_0f(x')\sin(x')dx'$, and both $\int^{\frac{\pi}{2}}_0x\sin(x)dx$ and $\int^{\frac{\pi}{2}}_0\sin(x)dx$ equal $1$, so $$x-f(x)=1-(x-f(x))(1)$$ So $$2(x-f(x))=1$$ which means $$f(x)=x-\frac{1}{2}$$</p>
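<p>A quick sympy check that $f(x)=x-\frac{1}{2}$ really satisfies the original equation (purely a sanity check):</p>
<pre><code>import sympy as sp

x, t = sp.symbols('x t')
f = lambda s: s - sp.Rational(1, 2)

# Right-hand side of the original equation with f(x) = x - 1/2 plugged in.
rhs = x - sp.integrate(f(t) * sp.sin(t), (t, 0, sp.pi / 2))
print(sp.simplify(rhs - f(x)))   # 0
</code></pre>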
2,969,363
<p>I have 3 points in space A, B, and C all with (x,y,z) coordinates, therefore I know the distances between all these points. I wish to find point D(x,y,z) and I know the distances BD and CD, I do NOT know AD.</p> <p>The method I have attempted to solve this using is first saying that there are two spheres known on points B and C with radius r (distance to point D). The third sphere is found by setting the law of cosines equal to the formula for distance between two vectors ((V1*V2)/(|V1||V2|)) = ((a^2+b^2-c^2)/2ab). </p> <p>Now point D should be the intersection of these three spheres, but I have not been able to calculate this or find a way to. I either need help finding point D and I can give numeric points for an example, or I need to know if I need more information to solve (another point with a distance to D known).</p> <p>Ok, now assuming that distance AD is known, how do I calculate point D? <a href="https://i.stack.imgur.com/vzmbz.png" rel="nofollow noreferrer">This is what it looks like when I graph the two spheres and the one I calculated, as you can see it intersects on point F. (D in this case)</a></p>
quasi
400,434
<p>Suppose the known distances are <span class="math-container">$$d(B,C)=d(A,C)=d(C,D)=1$$</span> and <span class="math-container">$$d(A,B)=d(B,D)=\sqrt{2}$$</span> For concreteness, we can place <span class="math-container">$A,B,C$</span> as <span class="math-container">$$C=(0,0,0),\;\;B=(1,0,0),\;\;A=(0,1,0)$$</span> Then if <span class="math-container">$D$</span> is any point on the circle in the <span class="math-container">$yz$</span>-plane, centered at the origin, with radius <span class="math-container">$1$</span>, all of the distance specifications are satisfied. <p> But since <span class="math-container">$D$</span> can be <em>any</em> point on that circle, it follows <span class="math-container">$D$</span> is not uniquely determined. <p> Note that as <span class="math-container">$D$</span> traverses the circle, <span class="math-container">$d(A,D)$</span> varies from a minimum of <span class="math-container">$0$</span> (when <span class="math-container">$D=A$</span>), to a maximum of <span class="math-container">$2$</span> (when <span class="math-container">$D=(0,-1,0)$</span>), so <span class="math-container">$d(A,D)$</span> is also not uniquely determined. <p> If <span class="math-container">$d(A,D)$</span> is also given, say <span class="math-container">$d(A,D)=t$</span>, where <span class="math-container">$0\le t\le 2$</span>, then <span class="math-container">$D=(x,y,z)$</span> is determined by the system <span class="math-container">$$ \begin{cases} x=0\\[4pt] y^2+z^2=1\\[4pt] x^2+(y-1)^2+z^2=t^2\\ \end{cases} $$</span> which yields <span class="math-container">$$D=\left(0,\,1-{\small{\frac{t^2}{2}}},\,\pm{\small{\frac{t}{2}}}\sqrt{4-t^2}\right)$$</span> hence,</p> <ul> <li>If <span class="math-container">$t=0$</span>, we get <span class="math-container">$D=(0,1,0)$</span>.<span class="math-container">$\\[4pt]$</span> <li>If <span class="math-container">$t=2$</span>, we get <span class="math-container">$D=(0,-1,0)$</span>.<span class="math-container">$\\[4pt]$</span> <li>If <span class="math-container">$0 &lt; t &lt; 2$</span>, there are two choices for <span class="math-container">$D$</span>, as specified above. </ul>
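<p>For the computational side of the question, here is a small numpy sketch that solves for $D$ given $A,B,C$ and the three distances $d(A,D)$, $d(B,D)$, $d(C,D)$: subtracting the sphere equations pairwise gives two linear equations, whose solution line is then intersected with one of the spheres. It reproduces the parametrization above for a sample value of $t$; this is a sketch rather than production code.</p>
<pre><code>import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    # Return the (up to two) points at distances r1, r2, r3 from p1, p2, p3.
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    # Subtracting sphere 1 from spheres 2 and 3 gives two planes N x = d.
    N = 2 * np.array([p2 - p1, p3 - p1])
    d = np.array([r1**2 - r2**2 + p2 @ p2 - p1 @ p1,
                  r1**2 - r3**2 + p3 @ p3 - p1 @ p1])
    # A point on the line of intersection of the two planes, plus its direction.
    x0 = np.linalg.lstsq(N, d, rcond=None)[0]
    v = np.cross(p2 - p1, p3 - p1)
    v = v / np.linalg.norm(v)
    # Intersect the line x0 + s v with the first sphere: a quadratic in s.
    w = x0 - p1
    roots = np.roots([v @ v, 2 * (w @ v), w @ w - r1**2])
    return [x0 + float(s.real) * v for s in roots if np.isreal(s)]

# Check against the answer: C=(0,0,0), B=(1,0,0), A=(0,1,0),
# d(C,D)=1, d(B,D)=sqrt(2), and the sample value t = d(A,D) = 1.
for D in trilaterate((0, 1, 0), (1, 0, 0), (0, 0, 0), 1.0, np.sqrt(2), 1.0):
    print(np.round(D, 6))   # expect (0, 0.5, +/-0.866025)
</code></pre>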
2,596,457
<ul> <li>If $\lim _{n\rightarrow \infty }a_{n}=a$ then $\left\{a_{n}:n\in\mathbb{N}\right\} \cup \left\{ a\right\}$ is compact.</li> </ul> <p><strong>I couldn't do anything. Can you give a hint?</strong></p> <p>Note: in the question $a_n\in\mathbb{R}$.</p>
Kurtland Chua
249,134
<p>Hint: Compactness is equivalent to sequential compactness in metric spaces. Given any sequence $(b_n)$ with terms from $S = \{a_n : n \in \mathbb{N}\} \cup \{a\}$, there are two cases - either the sequence only uses finitely many values from $S$ or infinitely many of them. Can you find a convergent subsequence in either case? </p>
1,213,663
<p>If you measure a task &amp; it takes 3 seconds, then the next time you do the same task, it takes you 1 second, is the difference 200% or 67%? </p> <p>Or would you say the difference is 200% because 3-1=2 or 200% better -- but the percentage of difference is 2/3 or 67%? I'm pretty sure I'm confusing something if not someone. Be that as it may, I need to explain this clearly so that the analysis is clear &amp; credible. The example I would cite would be: Let's say you are measuring system transaction response times &amp; on two separate tests find the response time improvement noted. (Thanks PH)</p>
John Hughes
114,036
<p>Changing rows is the same as multiplying by a permutation (which is a rotation, and perhaps a reflection) in the codomain. That means that the SVD before and after look like $$ M = U D V^t \\ M' = P U D V^t $$ In the case where there's no reflection ($\det P &gt; 0$), using $PU$ as $U'$, you get an SVD for $M'$. The singular values are evidently the same. I'll let you handle the case with the negative determinant. </p>
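<p>A short numpy check of the claim on a random matrix (illustrative only):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 3))
Mp = M[[3, 0, 4, 1, 2], :]   # M with its rows permuted

print(np.linalg.svd(M, compute_uv=False))
print(np.linalg.svd(Mp, compute_uv=False))   # same singular values
</code></pre>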
4,233,619
<p>Consider all natural numbers whose decimal expansion has only the even digits <span class="math-container">$0,2,4,6,8$</span>. Suppose these are arranged in increasing order. If <span class="math-container">$a_n$</span> denotes the <span class="math-container">$n$</span>-th number in this sequence then the value of the limit: <span class="math-container">$\displaystyle \lim_{n\to \infty}\frac{\log a_n}{\log n}=$</span></p> <p>(a) <span class="math-container">$0$</span>.</p> <p>(b) <span class="math-container">$\log_5 10$</span></p> <p>(c) <span class="math-container">$\log_210$</span></p> <p>(d) <span class="math-container">$2$</span></p> <p>I observe that the sequence <span class="math-container">$\{a_n\}=\{2,4,6,8,20,22,24,26,28,40,42,44,46,48,60,62,64,68,80,82,84,86,88,200,202,204,206,208,\cdots\}$</span>.</p> <p>I am not getting the explicit formula for <span class="math-container">$a_n$</span>, but it shows that the growth is exponential. How to find the exact value of the limit ?</p> <p>Also I found that <span class="math-container">$a_n \ge n$</span> always, from which we get the required limit is <span class="math-container">$\ge 1$</span>. So option (a) is incorrect.</p> <p>Any hint. please</p>
José Carlos Santos
446,262
<p>The correct option is the first one. Assuming that <span class="math-container">$x&gt;0$</span> and that <span class="math-container">$a\in\Bbb R$</span>, you always have <span class="math-container">$\log\left(x^a\right)=a\log(x)$</span>. So, when <span class="math-container">$a=y^z$</span>, you have<span class="math-container">\begin{align}\log\left(x^{y^z}\right)&amp;=\log\left(x^a\right)\\&amp;=a\log(x)\\&amp;=y^z\log(x).\end{align}</span>The “bring out every exponent” rule is not really a rule, at least not in the sense that you apply it to literally <em>every</em> level of the expression that you are dealing with.</p>
195,556
<p>My math background is very narrow. I've mostly read logic, recursive function theory, and set theory.</p> <p>In recursive function theory one studies <a href="http://en.wikipedia.org/wiki/Partial_functions">partial functions</a> on the set of natural numbers. </p> <p>Are there other areas of mathematics in which (non-total) partial functions are important? If so, would someone please supply some references?</p> <p>Thanks!</p>
Peter Smith
35,151
<p>In one sense, surely, it is deeply important that the square root function with domain and co-domain the positive rationals $\mathbb{Q}$ is partial (as are the cube root function, fourth root function, etc. etc.). That's the non-trivial ancient discovery that leads us to introduce the concept of irrational numbers.</p> <p>Likewise, it is important the square root function etc. with domain and co-domain the reals $\mathbb{R}$ is partial. Wanting an algebraically complete field we introduce the concept of complex numbers. </p> <p>Of course those cases are very familiar, but that surely does not make them unimportant! (And note that <em>these</em> cases are not met by <em>restricting</em> the domain to avoid partiality, but -- in the first place -- by <em>augmenting</em> the codomain.)</p>
195,556
<p>My math background is very narrow. I've mostly read logic, recursive function theory, and set theory.</p> <p>In recursive function theory one studies <a href="http://en.wikipedia.org/wiki/Partial_functions">partial functions</a> on the set of natural numbers. </p> <p>Are there other areas of mathematics in which (non-total) partial functions are important? If so, would someone please supply some references?</p> <p>Thanks!</p>
Carl Mummert
630
<p>In functional analysis, the concept of an <a href="http://en.wikipedia.org/wiki/Unbounded_operator" rel="noreferrer">unbounded operator</a> is closely connected with partial functions. The natural examples of unbounded operators are linear operators that are defined only on a dense proper subspace of a Banach space. For example, the "derivative" operator is an unbounded linear operator on the space $L_2[0,1]$, but it is far from being total. The use of partial functions turns out to be vital; for example the <a href="http://en.wikipedia.org/wiki/Hellinger%E2%80%93Toeplitz_theorem" rel="noreferrer">Hellinger–Toeplitz theorem</a> is often interpreted as saying that it is necessary to consider partial operators in order to formalize quantum mechanics. </p>
2,049,777
<p>$2.$ Find the dimensions of </p> <p>(a) the space of all vectors in $R^n$ whose components add to zero;</p> <p>(c) the space of all solutions to $\frac{d^2y(t)}{dt^2} −3 \frac{dy(t)}{dt} +2y(t) = 0$. </p> <p>For (a) I'm pretty sure that the dimension is $n-1$, but people seem to differ. I thought it is $n-1$ since there is one condition (the components adding to zero), but I can't come up with a basis; could someone help? And for (c), it's a differential equation. I've dealt with polynomial matrices, but this is my first time encountering one that involves a differential operator, so I am curious about a basis for (c), since the dimension will equal the number of vectors in a basis, right?</p>
Doug M
317,162
<p>a) The dimension is $n-1$.</p> <p>Any vector in the space has the form $(x_1,x_2,x_3,\cdots, x_{n-1}, -\sum\limits_{i=1}^{n-1} x_i)$, so the first $n-1$ components can be chosen independently. An explicit basis is $e_1-e_n,\ e_2-e_n,\ \dots,\ e_{n-1}-e_n$, where the $e_i$ are the standard basis vectors.</p> <p>c) That differential equation has general solution $y = C_1 e^{t}+ C_2 e^{2t}$.</p> <p>$e^{t}, e^{2t}$ form a basis of a $2$-dimensional vector space.</p>
2,348,811
<p>Whenever I go through the big pile of socks that just went through the laundry, and have to find the matching pairs, I usually do this like I am a simple automaton:</p> <p>I randomly pick a sock, and see if it matches any of the single socks I picked out earlier and that haven't found a match yet. If there is a match, I will fold the two socks together and put them in the 'done' pile, otherwise I will add the single sock to the 'no match yet' pile of single socks, and pick out another random sock.</p> <p>So, as I was doing this last night, I started thinking about this, and figured that the following would be true: The 'no match yet' pile can be expected to slowly grow, up to some point somewhere in the 'middle' of the process, after which the pile will gradually shrink, and eventually go down back to $0$. In fact, my intuition is that the expected number of loose socks as a function of the number of socks picked so far, is a symmetric function, with the maximum being when I have picked half of the socks.</p> <p>So, my questions are:</p> <p>With $n$ pairs of socks, what is the expected number of loose socks that are in my 'no match yet' pile after having picked $k$ socks?</p> <p>Is it true that this function is a symmetric function, and that the maximum is for $k=n$? (if so, I figure there must be a conceptual way of looking at the problem that makes this <em>immediately</em> clear, without using any formulas ... what is that way? Is it just that I can think of reversing the process?)</p> <p>Of course, this is all assuming there are $n$ pairs of socks total, and that there are no single socks in the original pile, and while this is something that <em>never</em> seems to apply to the pile of socks coming through my actual laundry, let's assume for the sake of mathematical simplicity that there really just are $n$ pairs of socks.</p>
Marko Riedel
44,883
<p>We can verify the accepted answer using the methodology from this <a href="https://math.stackexchange.com/questions/2172876/">MSE link</a> where we see that the problem is very similar to a coupon collector without replacement and two instances of $n$ types of coupons. Suppose we have $j$ instances. Start by asking about the probability of getting the following distribution of coupons:</p> <p>$$\prod_{q=1}^n C_q^{\alpha_q}$$</p> <p>where $\alpha_q$ says we have that many instances of type $q$ and is at most $j.$ We get from first principles the probability</p> <p>$$\frac{(nj-\sum_{q=1}^n \alpha_q)!}{(nj)!} \prod_{q=1}^n \frac{j!}{(j-\alpha_q)!}.$$</p> <p>Now when we multiply a probability by the total number of events we get the favorable events. Therefore the EGF for a given coupon type is</p> <p>$$\sum_{k=0}^j \frac{j!}{(j-k)!} \frac{z^k}{k!} = \sum_{k=0}^j {j\choose k} z^k = (1+z)^j.$$</p> <p>With $j=2$ and $n$ types of coupons we get</p> <p>$$m! [z^m] (1+z)^{2n}$$</p> <p>and asking for the total count after $m$ coupons have been drawn yields</p> <p>$$m! \times {2n\choose m}.$$</p> <p>Placing a marker on the singletons we find</p> <p>$$m! [z^m] \left.\frac{\partial}{\partial u} (1+2uz+z^2)^n\right|_{u=1} \\ = m! [z^m] \; \left. n \times (1+2uz+z^2)^{n-1} \times 2z \right|_{u=1} \\ = m! [z^m ] 2nz (1+z)^{2n-2} \\ = m! \times 2n {2n-2\choose m-1}.$$</p> <p>Divide to get the expectation</p> <p>$$ {2n\choose m}^{-1} 2n {2n-2\choose m-1} = 2n \frac{m! \times (2n-m)!}{(2n)!} \frac{(2n-2)!}{(m-1)! \times (2n-m-1)!} \\ = 2n \times m \times (2n-m) \frac{1}{(2n)(2n-1)} \\ = \frac{m\times (2n-m)}{2n-1}.$$</p>
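<p>The closed form $\frac{m(2n-m)}{2n-1}$ is easy to test by simulation; here is a short Monte Carlo sketch (the values of $n$, $m$ and the number of trials are arbitrary choices):</p>
<pre><code>import random

def singles_after(n, m):
    # Draw m socks from n pairs; return the size of the 'no match yet' pile.
    socks = list(range(n)) * 2            # two socks per pair label
    random.shuffle(socks)
    loose = set()
    for s in socks[:m]:
        loose.symmetric_difference_update({s})   # add if new, remove if matched
    return len(loose)

n, m, trials = 10, 7, 200000
avg = sum(singles_after(n, m) for _ in range(trials)) / trials
print(avg)                              # simulated expectation
print(m * (2 * n - m) / (2 * n - 1))    # formula: about 4.79 for n=10, m=7
</code></pre>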
2,033,790
<p>How do you prove the sequence $x_n = (\frac{n}{2})^n$ diverges?</p> <p>Here is my attempt: </p> <p>Suppose $x_n \to L$. This means $(\forall \epsilon &gt; 0) (\exists N \in \mathbb{N}) (\forall n&gt;N)|x_n-L| &lt; \epsilon$</p> <p>Assume $n &gt; N$</p> <p>Then $|x_n-L| &lt; \epsilon$</p> <p>$|(\frac{n}{2})^n-L| &lt; \epsilon$</p> <p>$|(\frac{n}{2})^n-\frac{2L}{2}| &lt; \epsilon$</p> <p>At this point I am stuck. I'm trying to arrive to a contradiction, but I'm unable to.</p>
Ethan Alwaise
221,420
<p>Convergent sequences must be bounded. So simply show that your sequence is unbounded. For this you can use $$\left(\frac{n}{2}\right)^n \geq \frac{n}{2}.$$</p>
934,353
<p>I am a high school student in Calculus, and we are finishing learning basic limits. I am reviewing for a big test tomorrow, and I could do all of the problems correctly except this one.</p> <p>I have no idea how to solve the problem this problem correctly. I looked up the answer online, but I can't figure out how they got their answer. All of the online tools show the steps using L'Hospital's rule or derivation, but I haven't learned either yet.</p> <p>This is the problem:</p> <p>$$\large\lim_{x\rightarrow 0}{\left(\frac{\frac{1}{\sqrt{1+x}}-1}{x}\right)}$$</p> <p>This is the problem that I did incorrectly. I converted the $-1$ to $\frac{\sqrt{1+x}}{\sqrt{1+x}}$, then subtracted the fraction, and multiplied the result by $\frac{1}{x}$ to remove the double division.</p> <p>$$\large\frac{1-\sqrt{1+x}}{x\sqrt{1+x}}$$</p> <p>When I substitute $x$, I get $0$, but the answer is $-\frac{1}{2}$. I am doing something simple incorrectly, but I really cannot figure it out.</p>
Adi Dani
12,848
<p>$$\lim_{x\to 0}\frac{\frac{1}{\sqrt{1+x}}-1}{x}=\lim_{x\to 0}\frac{1}{x\sqrt{1+x}}-\frac{1}{x}=$$ $$=\lim_{x\to 0}\frac{1-\sqrt{1+x}}{x\sqrt{1+x}}=\lim_{x\to 0}\frac{1-\sqrt{1+x}}{x\sqrt{1+x}}\frac{1+\sqrt{1+x}}{1+\sqrt{1+x}}=$$ $$=\lim_{x\to 0}\frac{1-(1+x)}{x\sqrt{1+x}(1+\sqrt{1+x})}=\lim_{x\to 0}\frac{-x}{x\sqrt{1+x}(1+\sqrt{1+x})}$$ $$=\lim_{x\to 0}\frac{-1}{\sqrt{1+x}(1+\sqrt{1+x})}=-\frac{1}{2}$$</p>
732,334
<p>How can we solve this equation? $x^4-8x^3+24x^2-32x+16=0.$ </p>
lab bhattacharjee
33,337
<p>Since $x=0$ is not a root, we may divide both sides by $x^2$: </p> <p>$$x^2+\left(\frac4x\right)^2-8\left(x+\frac4x\right)+24=0$$</p> <p>Now as $\displaystyle x^2+\left(\frac4x\right)^2=\left(x+\frac4x\right)^2-2\cdot x\cdot\frac4x$</p> <p>Setting $x+\dfrac4x=y,$ we get $\displaystyle y^2-8-8y+24=0\implies(y-4)^2=0\iff y=4$</p> <p>So, we have $\displaystyle x+\frac4x=4\iff(x-2)^2=0$</p>
732,334
<p>How can we solve this equation? $x^4-8x^3+24x^2-32x+16=0.$ </p>
Felix Marin
85,343
<p>$\newcommand{\+}{^{\dagger}} \newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,} \newcommand{\dd}{{\rm d}} \newcommand{\down}{\downarrow} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,{\rm e}^{#1}\,} \newcommand{\fermi}{\,{\rm f}} \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{{\rm i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\isdiv}{\,\left.\right\vert\,} \newcommand{\ket}[1]{\left\vert #1\right\rangle} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}} \newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,} \newcommand{\sech}{\,{\rm sech}} \newcommand{\sgn}{\,{\rm sgn}} \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert} \newcommand{\wt}[1]{\widetilde{#1}}$ \begin{align} \color{#c00000}{\Large 0}&amp;=x^{4} - 8x^{3} + 24x^{2} - 32x + 16= 16\bracks{\pars{x \over 2}^{4} - 4\pars{x \over 2}^{3} + 6\pars{x \over 2}^{2} - 4\,{x \over 2} + 1} \\[3mm]&amp;=16\left\lbrack{4 \choose 0}\pars{x \over 2}^{4}\pars{-1}^{0} +{4 \choose 1}\pars{x \over 2}^{3}\pars{-1}^{1} +{4 \choose 2}\pars{x \over 2}^{2}\pars{-1}^{2} +{4 \choose 3}\,\pars{x \over 2}^{1}\pars{-1}^{3}\right. \\[3mm]&amp;\left.\phantom{16\bracks{}}\mbox{} + {4 \choose 4}\pars{x \over 2}^{0}\pars{-1}^{4}\right\rbrack =\color{#c00000}{16\bracks{{x \over 2} + \pars{-1}}^{4}} \quad\imp\quad\color{#00f}{\Large x = 2} \end{align}</p>
3,714,995
<blockquote> <p>Using the shell method, find the volume of the solid generated by revolving the region bounded by <span class="math-container">$$y=\sqrt{x},y=\frac{x-3}{2},y=0$$</span> about the <span class="math-container">$x$</span>-axis.</p> </blockquote> <p>What I try:</p> <p><a href="https://i.stack.imgur.com/4D4Jw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4D4Jw.jpg" alt="enter image description here"></a></p> <p>Solving the two given curves: <span class="math-container">$$\sqrt{x}=\frac{x-3}{2}\Longrightarrow x^2-10x+9=0$$</span></p> <p>We have <span class="math-container">$x=1$</span> (invalid) and <span class="math-container">$x=9$</span> (valid).</p> <p>Putting <span class="math-container">$x=9$</span> in <span class="math-container">$y=\sqrt{x}$</span> we have <span class="math-container">$y=3$</span>.</p> <p>Now the volume of the solid formed by rotation about the <span class="math-container">$x$</span>-axis is </p> <p><span class="math-container">$$=\int^{9}_{0}2\pi y\bigg(y^2-2y-3\bigg)dy$$</span></p> <p>Is my volume integral right? If not, then how do I solve it? Help me please.</p>
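<p>For what it is worth, here is one way the shell integral can be set up, under the assumption that the intended region is bounded on the left by $x=y^2$, on the right by $x=2y+3$, and below by $y=0$, for $0\le y\le 3$: each shell has radius $y$ and height $(2y+3)-y^2$, so the volume is $\int_0^3 2\pi y\,\big((2y+3)-y^2\big)\,dy$. A sympy evaluation of that integral:</p>
<pre><code>import sympy as sp

y = sp.symbols('y', nonnegative=True)
V = sp.integrate(2 * sp.pi * y * ((2 * y + 3) - y**2), (y, 0, 3))
print(V)   # 45*pi/2
</code></pre>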
Chrystomath
84,081
<p>The spectrum of <span class="math-container">$A$</span> must be a subset of <span class="math-container">$\{-1,1\}$</span> by the spectral mapping theorem and the fact that <span class="math-container">$A$</span> is self-adjoint. Hence <span class="math-container">$A$</span> is unitary and <span class="math-container">$A=A^*=A^{-1}$</span>, so <span class="math-container">$A^2=I$</span>.</p>
3,714,995
<blockquote> <p>Using the shell method, find the volume of the solid generated by revolving the region bounded by <span class="math-container">$$y=\sqrt{x},y=\frac{x-3}{2},y=0$$</span> about the <span class="math-container">$x$</span>-axis.</p> </blockquote> <p>What I try:</p> <p><a href="https://i.stack.imgur.com/4D4Jw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4D4Jw.jpg" alt="enter image description here"></a></p> <p>Solving the two given curves: <span class="math-container">$$\sqrt{x}=\frac{x-3}{2}\Longrightarrow x^2-10x+9=0$$</span></p> <p>We have <span class="math-container">$x=1$</span> (invalid) and <span class="math-container">$x=9$</span> (valid).</p> <p>Putting <span class="math-container">$x=9$</span> in <span class="math-container">$y=\sqrt{x}$</span> we have <span class="math-container">$y=3$</span>.</p> <p>Now the volume of the solid formed by rotation about the <span class="math-container">$x$</span>-axis is </p> <p><span class="math-container">$$=\int^{9}_{0}2\pi y\bigg(y^2-2y-3\bigg)dy$$</span></p> <p>Is my volume integral right? If not, then how do I solve it? Help me please.</p>
user1551
1,551
<p>Let <span class="math-container">$P=A^2=A^\ast A$</span> and <span class="math-container">$S=P^{k-1}+P^{k-2}+\cdots+P+I$</span>. By the given conditions, <span class="math-container">$P$</span> is a positive operator, <span class="math-container">$S$</span> is strictly positive and <span class="math-container">$S(P-I)=P^k-I=A^{2k}-I=0$</span>. Hence <span class="math-container">$\langle S(P-I)x,(P-I)x\rangle=0$</span> for every vector <span class="math-container">$x$</span>. As <span class="math-container">$S$</span> is strictly positive, this implies that <span class="math-container">$(P-I)x=0$</span> for every <span class="math-container">$x$</span>. Therefore <span class="math-container">$P=I$</span>.</p>
2,082,815
<p>Find the $100^{th}$ power of the matrix $\left( \begin{matrix} 1&amp; 1\\ -2&amp; 4\end{matrix} \right)$.</p> <p>Can you give a hint/method?</p>
seeker
267,945
<p>The characteristic polynomial of the given matrix, say $A$, is $x^2-5x+6$, whose zeroes are $2,3$. Thus there exists an invertible matrix $P$ such that $A=P\begin{pmatrix} 2 &amp; 0\\ 0 &amp; 3\end{pmatrix}P^{-1}$. Hence $A^{100}=P\begin{pmatrix} 2^{100} &amp; 0\\ 0 &amp; 3^{100}\end{pmatrix}P^{-1}$, where $P = \begin{pmatrix}1 &amp;1\\ 1 &amp; 2 \end{pmatrix}$ (its columns are eigenvectors for $2$ and $3$, respectively).</p>
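<p>A quick sympy check of the diagonalization, in exact integer arithmetic (verification only):</p>
<pre><code>import sympy as sp

A = sp.Matrix([[1, 1], [-2, 4]])
P = sp.Matrix([[1, 1], [1, 2]])   # columns: eigenvectors for 2 and 3
D = sp.diag(2, 3)

print(sp.simplify(P * D * P.inv() - A))                  # zero matrix
print(P * sp.diag(2**100, 3**100) * P.inv() - A**100)    # zero matrix
</code></pre>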
47,561
<p>The Hilbert matrix is the square matrix given by</p> <p>$$H_{ij}=\frac{1}{i+j-1}$$</p> <p>Wikipedia states that its inverse is given by</p> <p>$$(H^{-1})_{ij} = (-1)^{i+j}(i+j-1) {{n+i-1}\choose{n-j}}{{n+j-1}\choose{n-i}}{{i+j-2}\choose{i-1}}^2$$</p> <p>It follows that the entries in the inverse matrix are all integers.</p> <p>I was wondering if there is a way to prove that its inverse is an integer matrix without using the formula above.</p> <p>Also, how would one go about proving the explicit formula for the inverse? Wikipedia refers me to a paper by Choi, but it only includes a brief sketch of the proof.</p>
L.Z. Wong
5,768
<p>Thanks everyone for the answers offered! After chasing down the various links, I came across a very similar <a href="http://groups.google.com/group/sci.math.symbolic/browse_frm/thread/c059a14bb8b824f3?pli=1">comment</a> by Deane Yang in 1991 (!), that offered an elegant outline of a proof. I felt it would be nice to flesh out the details of the proof. This proof doesn't use "Cholesky machinery", and it is possible to deduce that the entries of the inverse are integers without knowing the entries explicitly.</p> <p>First, we note that $$H_{ij} = \frac{1}{i+j-1}= \int_{0}^{1}x^{i+j-2}dx= \int_{0}^{1}x^{i-1}x^{j-1}dx$$ We can treat $\int_{0}^{1}p(x)q(x)dx$ as an inner product, $ \langle p, q \rangle $, on the vector space $P_{n-1}(x)$ of polynomials of degree $ &lt; n $.</p> <p>$H_n$, the $n \times n$ Hilbert matrix, corresponds to the <a href="http://en.wikipedia.org/wiki/Gramian_matrix">Gramian matrix</a> of the set of vectors $\{ 1,x,x^2,...,x^{n-1} \} $.</p> <p>Next, the <a href="http://en.wikipedia.org/wiki/Legendre_polynomials#Shifted_Legendre_polynomials">Shifted Legendre Polynomials</a> are given by $$\tilde{P_n}(x) = (-1)^n \sum_{k=0}^n {n \choose k} {n+k \choose k} (-x)^k$$</p> <p>We note that these are polynomials with integer coefficients. They also have the property that $$\int_{0}^{1} \tilde{P_m}(x) \tilde{P_n}(x)dx = {1 \over {2n + 1}} \delta_{mn}$$</p> <p>Also, $\tilde{P_m}(x)$ is a polynomial of degree $m$, so $\{ \tilde{P_0}(x), \tilde{P_1}(x), ... \tilde{P_{n-1}}(x) \}$ form an alternative basis of $P_{n-1}(x)$.</p> <p>The change of basis matrix, $P$, (from the standard basis to this new basis) can be obtained by choosing the coefficient of the appropriate power of $x$ in the above explicit formula for the Legendre polynomials. We have $$ P_{ij} = (-1)^{i+j} {j-1 \choose i-1} {i+j-2 \choose i-1}$$ (i.e. replace $n$ by $j-1$ and $k$ by $i-1$ in the formula for $\tilde{P_n}$).</p> <p>The <a href="http://en.wikipedia.org/wiki/Gramian_matrix#Change_of_basis">Gramian matrix under this change of basis</a> is $P^T H P$. But since the $\tilde{P_i}$'s are orthogonal, we get $$ (P^T H P)_{ii} = {1 \over 2i-1}$$.</p> <p>Let this diagonal matrix be $D$. Since it is diagonal, its inverse is given by $$(D^{-1})_{ii} = \frac{1}{D_{ii}} = 2i -1 $$</p> <p>Then $$ P^T H P = D $$. So $$ H = (P^T)^{-1} D P^{-1} $$ and $$ H^{-1} = P D^{-1} P^T $$.</p> <p>Since $P$, $P^T$ and $D^{-1}$ are all integer matrices, $H^{-1}$ is an integer matrix. $ \blacksquare $</p>
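<p>For small $n$ the whole argument is easy to check symbolically: build $P$ from the coefficients of the shifted Legendre polynomials and confirm that $P^THP$ is the diagonal matrix $D$ and that $PD^{-1}P^T$ equals $H^{-1}$ (verification sketch only):</p>
<pre><code>import sympy as sp

n = 5
x = sp.symbols('x')

# Hilbert matrix (0-based indices: entry 1/(i+j+1)).
H = sp.Matrix(n, n, lambda i, j: sp.Rational(1, i + j + 1))

# Column j holds the coefficients of the shifted Legendre polynomial
# legendre(j, 2x-1) in the basis 1, x, ..., x^(n-1).
shifted = [sp.expand(sp.legendre(j, 2 * x - 1)) for j in range(n)]
P = sp.Matrix(n, n, lambda i, j: shifted[j].coeff(x, i))
Dinv = sp.diag(*[2 * j + 1 for j in range(n)])

print(P.T * H * P)                # diag(1, 1/3, 1/5, 1/7, 1/9)
print(H.inv() - P * Dinv * P.T)   # zero matrix (and both sides are integral)
</code></pre>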
47,561
<p>The Hilbert matrix is the square matrix given by</p> <p>$$H_{ij}=\frac{1}{i+j-1}$$</p> <p>Wikipedia states that its inverse is given by</p> <p>$$(H^{-1})_{ij} = (-1)^{i+j}(i+j-1) {{n+i-1}\choose{n-j}}{{n+j-1}\choose{n-i}}{{i+j-2}\choose{i-1}}^2$$</p> <p>It follows that the entries in the inverse matrix are all integers.</p> <p>I was wondering if there is a way to prove that its inverse is an integer matrix without using the formula above.</p> <p>Also, how would one go about proving the explicit formula for the inverse? Wikipedia refers me to a paper by Choi, but it only includes a brief sketch of the proof.</p>
Scot Adams
103,568
<p>I don't have any new content to add, but I did notice some context:</p> <p>A &quot;lattice&quot; in a real vector space <span class="math-container">$V$</span> is the <span class="math-container">${\mathbb Z}$</span>-span of an <span class="math-container">${\mathbb R}$</span>-basis of <span class="math-container">$V$</span>. Any ordered <span class="math-container">${\mathbb R}$</span>-basis whose <span class="math-container">${\mathbb Z}$</span>-span is the lattice <span class="math-container">$L$</span> is said to be an &quot;ordered base&quot; of <span class="math-container">$L$</span>.</p> <p>Let <span class="math-container">$L$</span> be a lattice in a real inner product space <span class="math-container">$V$</span>. (The inner product being, by definition, symmetric, bilinear and positive definite.) The &quot;dual lattice&quot; to <span class="math-container">$L$</span> in <span class="math-container">$V$</span> consists of vectors <span class="math-container">$v\in V$</span> such that, for all <span class="math-container">$w\in L$</span>, we have: <span class="math-container">$\langle v,w\rangle\in{\mathbb Z}$</span>. The dual of <span class="math-container">$L$</span> will be denoted <span class="math-container">$L^*$</span>.</p> <p>Let <span class="math-container">$B$</span> be an ordered basis in a real inner product space <span class="math-container">$V$</span>. Then the &quot;Gramian matrix&quot; of <span class="math-container">$B$</span> is the matrix whose <span class="math-container">$i,j$</span>-entry is <span class="math-container">$\langle B_i,B_j\rangle$</span>.</p> <p>A matrix is &quot;integral&quot; if all of its entries are integers.</p> <p>Lemma: Let <span class="math-container">$B$</span> be an ordered base of a lattice <span class="math-container">$L$</span> in a real inner product space. Then: <span class="math-container">$L^*\subseteq L$</span> iff the inverse of the Gramian matrix of <span class="math-container">$B$</span> is integral.</p> <p>The proof of this lemma is not hard.</p> <p>Now fix a positive integer <span class="math-container">$d$</span> and let <span class="math-container">$V$</span> be the real inner product space consisting of real polynomials <span class="math-container">${\mathbb R}\to{\mathbb R}$</span> of degree <span class="math-container">$&lt;d$</span>, with inner product given by: <span class="math-container">$\langle P,Q\rangle=\int_0^1 PQ$</span>.</p> <p>Let <span class="math-container">$M$</span> be the lattice in <span class="math-container">$V$</span> consisting of all polynomials in <span class="math-container">$V$</span> that have integer coefficients.</p> <p>Let <span class="math-container">${\bf1}$</span> denote the constant function <span class="math-container">${\mathbb R}\to{\mathbb R}$</span> with value <span class="math-container">$1$</span>. Let <span class="math-container">${\bf x}:{\mathbb R}\to{\mathbb R}$</span> denote the identity function. 
Let <span class="math-container">$B:=({\bf1},{\bf x},{\bf x}^2,...,{\bf x}^{d-1})$</span>, an ordered base of <span class="math-container">$M$</span>.</p> <p>The Gramian matrix of <span class="math-container">$B$</span> is the Hilbert <span class="math-container">$d\times d$</span> matrix, so, by the lemma, we wish to show that <span class="math-container">$M^*\subseteq M$</span>.</p> <p>Let <span class="math-container">$A:=(\tilde P_0,...\tilde P_{d-1})$</span>, the ordered list of the <span class="math-container">$0$</span>th through <span class="math-container">$(d-1)$</span>st shifted Legendre polynomials. This is an ordered basis of <span class="math-container">$V$</span>. Let <span class="math-container">$L$</span> be the <span class="math-container">${\mathbb Z}$</span>-span of <span class="math-container">$A$</span>. The <span class="math-container">$L$</span> is a lattice in <span class="math-container">$V$</span>.</p> <p>Since the shifted Legendre polynomials have integer coefficients, <span class="math-container">$L\subseteq M$</span>. It follows that <span class="math-container">$L^*\supseteq M^*$</span>.</p> <p>The Gramian matrix of <span class="math-container">$A$</span> is diagonal, with each diagonal entry in the set <span class="math-container">$\{1,1/2,1/3,1/4,\dots\}$</span>. So, by the lemma, <span class="math-container">$L^*\subseteq L$</span>.</p> <p>Then <span class="math-container">$M^*\subseteq L^*\subseteq L\subseteq M$</span>. QED</p>
1,964,139
<p>$D_{2n}$ is not abelian. However, the group of rotations, denoted $R$, is. I've already shown that $R$ is a normal subgroup of $D_{2n}$; however I'm stuck at showing the quotient group is abelian.</p> <p>I know if it is abelian, $xRyR=yRxR$ but I get stuck at $xRyR=(xy)R$. But $x$ and $y$ do not necessarily commute. How do I continue? </p>
Jason DeVito
331
<p>Dietrich's proof is fine, but I wanted to show you how you could finish off your own proof.</p> <p>First note that $R$ has order $n$ and $D_{2n}$ has order $2n$, so $D_{2n}/R$ consists of two elements. Of course, one is the coset containing the identity, $1R$, and the other is given by taking any reflection $y$ and creating the coset $yR$.</p> <p>We need to check that $(1R)(1R) = (1R)(1R)$, that $(yR)(yR) = (yR)(yR)$, and that $(1R)(yR) = (yR)(1R)$. The first two follow automatically.</p> <p>The last follows because the identity commutes with everything: we have $$(1R)(yR) = (1y)R = (y1)R = (yR)(1R).$$</p> <p>One last point: we never explicitly used the fact that $y$ is a reflection, just that $y\notin R$. So this proof applies whenever we have an index two subgroup (which is then automatically normal). This gives another way of looking at Dietrich's proof, which only needed the fact that $|D_{2n}/R| = 2$.</p>
914,440
<p>By the definition of a topology, I feel topology is just a principle for defining "open sets" on a space (in other words, just a tool to extend the concept of open sets so that we can get some new kinds of open sets). But in practical cases we mostly consider Euclidean space, and the traditional open sets of Euclidean space work pretty well, while these new kinds of open sets seem to have no common use. So why do we need to extend the concept of open sets? What's the motivation behind it? Thanks.</p>
Lolman
160,018
<p>We need topology so we can work with limits in spaces more general than $\mathbb{R}^n$. And limits are important.</p>
1,038,152
<p>Let $B \subseteq \mathbb{R}_{+}$ such that B is non-empty. consider $B^{-1} = \left \{b^{-1} : b\in B \right \}$.<br> Show that if $B^{-1}$ is unbounded from above, then $\inf\left(B\right)=0$</p> <p>How can i prove that? tnx!</p>
John Hughes
114,036
<p>Sometimes, you actually have to look at the topology. </p> <p>A chain $c$ in $H_1(X, A)$ is a collection of edges (with coefficients, but those will turn out to be irrelevant); the boundary is a collection of pairs-of-points in $A$. Since $A$ is path connected, for each such pair we can find a path in $A$ that connects them. The sum of all these paths, with the appropriate coefficients, has boundary equal to $\partial c$. Hence $\partial c = 0$ as an element of the homology group $H_0(X, A)$. Since the map before it in the exact sequence is a zero map, $(i)_0$ must be injective. </p> <p>Small hint: nowhere in your reasoning did you use the path-connectedness of $A$, so it was clear to me that it had to be part of the story. (Esp. when you consider the case where $X$ is the unit interval and $A = \{0, 1\}$ is its boundary: in this case, the map is <em>not</em> injective, but that's because $A$ is not connected.)</p>
306,744
<p>So if the definition of continuity is: $\forall$ $\epsilon \gt 0$ $\exists$ $\delta \gt 0:|x-t|\lt \delta \implies |f(x)-f(t)|\lt \epsilon$. However, I get confused when I think of it this way because it's first talking about the $\epsilon$ and then it talks of the $\delta$ condition. Would it be equivalent to say: $\forall$ $\delta \gt 0$ $\exists$ $\epsilon \gt0$ $:|x-t|\lt \delta \implies|f(x)-f(t)|\lt \epsilon$. I guess what I'm asking is whether there is a certain order proofs or more formal statements need to follow. I know I only changed the place where I said there is a $\delta$ but is that permissable in a "formal" way of writing?</p>
Julien
38,053
<p>What you gave first is the definition of uniform continuity. For continuity at a point $x$, you have to fix $x$ before starting the $\forall \epsilon$ part.</p> <p>Now to answer your question: no, it is not legal to swap $\epsilon$ and $\delta$ like you did.</p> <p>The funny condition you obtain with this swap is satisfied by lots of non-continuous functions. For instance, any bounded function satisfies it.</p>
4,272,214
<h2>The Equation</h2> <p>How can I analytically show that there are <strong>no real solutions</strong> for <span class="math-container">$\sqrt[3]{x-3}+\sqrt[3]{1-x}=1$</span>?</p> <h2>My attempt</h2> <p>With <span class="math-container">$u = -x+2$</span></p> <p><span class="math-container">$\sqrt[3]{u-1}-\sqrt[3]{u+1}=1$</span></p> <p>Raising to the power of <span class="math-container">$3$</span></p> <p><span class="math-container">$$(u+1)^{2/3}(u-1)^{1/3} - (u+1)^{1/3}(u-1)^{2/3}=1\\(u+1)^{1/3}(u^2-1)^{1/3} - (u-1)^{1/3}(u^2-1)^{1/3}=1\\(u^2-1)^{1/3}\cdot\boxed{\left[(u+1)^{1/3}-(u-1)^{1/3}\right]}=1$$</span></p> <p>Raising to the power of <span class="math-container">$3$</span>:</p> <p><span class="math-container">$$(u^2-1)\cdot\left[3(u+1)^{1/3}(u-1)^{2/3}-3(u+1)^{2/3}(u-1)^{1/3}+2\right]=1\\(u^2-1)\cdot\left[3(u^2-1)^{1/3}(u-1)^{1/3}-3(u+1)^{1/3}(u^2-1)^{1/3}+2\right]=1$$</span></p> <p>Thus: <span class="math-container">$(u^2-1)\cdot\left[3(u^2-1)^{1/3}\boxed{\left[(u-1)^{1/3}-(u+1)^{1/3}\right]}+2\right]=1$</span></p> <p>And with <span class="math-container">$y = (u-1)^{1/3}-(u+1)^{1/3}$</span>, we can say that:</p> <p><span class="math-container">$y^3=-3y(u^2-1)^{1/3}+2$</span></p> <p>I am stuck... Any tips for this radical equation?</p>
greenturtle3141
372,663
<p>If you're looking for real solutions, I think you can look more closely at <span class="math-container">$\sqrt[3]{u-1} - \sqrt[3]{u+1} = 1$</span>. It seems difficult for the left side to be positive, and indeed we would be done if it is the case that it is non-positive for all real <span class="math-container">$u$</span>. Indeed, we have that <span class="math-container">$1 &gt; -1$</span> and <span class="math-container">$u+1 &gt; u-1$</span>, hence <span class="math-container">$\sqrt[3]{u+1} \geq \sqrt[3]{u-1}$</span> because <span class="math-container">$\sqrt[3]{x}$</span> is an increasing function. Now if <span class="math-container">$u$</span> were a real solution then <span class="math-container">$1 = \sqrt[3]{u-1} - \sqrt[3]{u+1} \leq 0$</span>, contradiction.</p>
2,971,980
<p>Show that if <span class="math-container">$0&lt;b&lt;1$</span> it follows that <span class="math-container">$$\lim_{n\to\infty}b^n=0$$</span> I have no idea how to express <span class="math-container">$N$</span> in terms of <span class="math-container">$\varepsilon$</span>. I tried using logarithms but I don't see how to find <span class="math-container">$N$</span> from this.</p>
Gibbs
498,844
<p>By definition, you must show that for every <span class="math-container">$\varepsilon &gt; 0$</span> there is a natural number <span class="math-container">$N$</span> such that <span class="math-container">$\lvert b^n\rvert &lt; \varepsilon$</span> when <span class="math-container">$n &gt; N$</span>. Since <span class="math-container">$0&lt;b&lt;1$</span>, we have <span class="math-container">$\lvert b^n\rvert = b^n$</span>, and <span class="math-container">$b^n &lt; \varepsilon$</span> holds exactly when <span class="math-container">$n &gt; \log_b \varepsilon$</span>, because <span class="math-container">$\log_b$</span> is a decreasing function. Therefore any natural number <span class="math-container">$N$</span> greater than <span class="math-container">$\log_b \varepsilon$</span> works, e.g. <span class="math-container">$N = \lfloor{\log_b \varepsilon \rfloor}+1.$</span></p>
774,332
<p>I have a radial Schrödinger equation for a particle in Coulomb potential:</p> <p>$$i\partial_t f(r,t)=-\frac1{r^2}\partial_r\left(r^2\partial_r f(r,t)\right)-\frac2rf(r,t)$$</p> <p>with initial condition</p> <p>$$f(r,0)=e^{-r^2}$$</p> <p>and boundary conditions</p> <p>$$\begin{cases} |f(0,t)|&lt;\infty\\ |f(\infty,t)|&lt;\infty. \end{cases}$$</p> <p>Trying to solve it, I couldn't come up to any analytical solution, so I tried to solve it numerically.</p> <p>Simplest what I thought of was finite differences method. But while I can limit domain to make it finite, imposing zero Dirichlet condition at $r=A$ for some $A$, I would need to somehow impose the condition of regularity at $r=0$, where $f$ doesn't have to be neither zero, nor non-zero in general. I don't know how to do this with finite differences.</p> <p>Another approach I tried was to expand the initial condition in eigenfunctions of the Hamiltonian operator (the RHS of the PDE) and then approximate the solution taking finite number of eigenstates into account. But the expansion appeared to converge extremely slowly, and after taking 200 bound eigenstates (simple because of closed-form solution in terms of Laguerre polynomials) and 70 eigenstates of continuous spectrum (taking those of them which vanish at $r=A$, so as to impose zero boundary condition at $A$; had $A=10$), I still got nothing similar to initial function.</p> <p>So, the question is: is there any explicit solution for this IBVP (be it closed-form one or in some sort of series, but necessarily explicit and fast convergent)? If no, how can it be solved numerically?</p>
rich
224,652
<p>This may be a wild goose chase, and at the very least it will be a mess, but here are my thoughts. The time-independent Schrodinger equation (TISE) with Coulomb potential can be solved exactly -- the negative-energy solutions are in any quantum mechanics book and the positive-energy solutions are confluent hypergeometric functions of some sort (I believe they are given in Schiff, for instance). So you can write a general solution of the time-dependent equation (TDSE) as a linear combination of all solutions of the TISE each multiplied by exp(-iEt); the coefficients are determined by the initial condition. (You only need the l=0 solutions, of course, since your initial condition is rotationally invariant.) I have not mentioned boundary conditions, and indeed I can only say that I think the above solutions of the TISE are well-behaved at the origin. The positive-energy solutions surely don't go to zero as r->infinity but I think the sum must do so.</p>
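<p>If a direct numerical treatment is still wanted: a standard way to handle the regularity condition at $r=0$ is the substitution $u(r,t)=r\,f(r,t)$, which turns the radial operator into $-\partial_r^2 u-\frac{2}{r}u$ and the regularity requirement into the plain Dirichlet condition $u(0,t)=0$. Below is a minimal Crank-Nicolson sketch along those lines; the cutoff radius, grid size and time step are arbitrary choices, and the code is meant only to illustrate the scheme.</p>
<pre><code>import numpy as np
from scipy.linalg import solve_banded

# Grid on (0, A] with u(0) = u(A) = 0, where u = r * f.
A_cut, N, dt, steps = 20.0, 2000, 0.002, 500
r = np.linspace(0.0, A_cut, N + 2)[1:-1]    # interior grid points
dr = r[1] - r[0]

# H acting on u:  H u = -u'' - (2/r) u   (second-order central differences).
main = 2.0 / dr**2 - 2.0 / r
off = -np.ones(N - 1) / dr**2

def apply_H(u):
    Hu = main * u
    Hu[:-1] += off * u[1:]
    Hu[1:] += off * u[:-1]
    return Hu

# Crank-Nicolson: (1 + i dt/2 H) u_new = (1 - i dt/2 H) u_old.
ab = np.zeros((3, N), dtype=complex)        # banded storage of 1 + i dt/2 H
ab[0, 1:] = 1j * dt / 2 * off
ab[1, :] = 1.0 + 1j * dt / 2 * main
ab[2, :-1] = 1j * dt / 2 * off

u = (r * np.exp(-r**2)).astype(complex)     # initial condition f(r,0) = exp(-r^2)
norm0 = np.sum(np.abs(u)**2) * dr           # proportional to the norm of f
for _ in range(steps):
    rhs = u - 1j * dt / 2 * apply_H(u)
    u = solve_banded((1, 1), ab, rhs)

print(norm0, np.sum(np.abs(u)**2) * dr)     # norm should be (nearly) conserved
</code></pre>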
1,104,163
<p>I have come up with the equation in the form $${{dy}\over dx} = axe^{by}$$, where a and b are arbitrary real numbers, for a project I am working on. I want to be able to find its integral and differentiation if possible. Does anyone know of a possible solution for $y$ and/or ${d^2 y}\over {dx^2}$?</p>
Chinny84
92,628
<p>$$ \frac{d}{dx}\mathrm{e}^{-by} = -b\mathrm{e}^{-by}y' $$ Thus we can rewrite your equation as $$ -\frac{1}{b}\left(\mathrm{e}^{-by}\right)' = ax $$</p>
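<p>Carrying the hint one step further: integrating both sides gives $-\frac{1}{b}\mathrm{e}^{-by} = \frac{a}{2}x^2 + C_0$, i.e. $y=-\frac{1}{b}\ln\left(C-\frac{ab}{2}x^2\right)$ for a constant $C$. A small sympy check that this really solves the ODE (verification only):</p>
<pre><code>import sympy as sp

x, a, b, C = sp.symbols('x a b C')
y = -sp.log(C - a * b * x**2 / 2) / b

residual = sp.diff(y, x) - a * x * sp.exp(b * y)
print(sp.simplify(residual))   # 0
</code></pre>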
140,615
<p>A rectangular page is to have a printed area of 62 square inches. If the border is to be 1 inch wide on top and bottom and only 1/2 inch wide on each side find the dimensions of the page that will use the least amount of paper</p> <p>Can someone explain how to do this?</p> <p>I started with:</p> <p>$$A = (x + 2)(y + 1) $$</p> <p>Then I isolate y and come up with my new equation:</p> <p>$$A = (x+2)\left(\frac{62}{x + 2}{-1}\right)$$ </p> <p>Then I think my next step is to create my derivative, but wouldn't it come out to -1?</p> <p>Anyways, I would appreciate if someone could give me a nudge in the right direction.</p> <p><strong>EDIT</strong> </p> <p>How does this look for a derivative?</p> <p>$$A = \left(\frac{x^2-124}{x^2}\right)$$ </p> <p>Then to solve: $$ {x} = 11.1 $$ </p> <p>$$ y = 98 / 11.1 $$</p> <p>Does that seem about right?</p> <p>If not, the only thing I would have left is setting it to 0 and solving.</p>
Ross Millikan
1,827
<p>Hint: How did you get the term $\left(\frac {62}{x+2}-1\right)$? You should have $62=xy$ to give the desired printable area, so $A=(x+2)(\frac{62}x+1)$. Then, you are right, you should take $\frac {dA}{dx}$ and set it to $0$ to find $x$.</p>
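<p>Carrying the hint through with sympy, just to check the numbers (the printed values are approximations):</p>
<pre><code>import sympy as sp

x = sp.symbols('x', positive=True)
A = (x + 2) * (sp.Integer(62) / x + 1)   # total paper area as a function of x

xstar = sp.solve(sp.diff(A, x), x)[0]
print(xstar, sp.N(xstar))        # 2*sqrt(31), approx 11.14
print(sp.N(62 / xstar))          # the other printed dimension, approx 5.57
print(sp.N(A.subs(x, xstar)))    # minimal total paper area, approx 86.27
</code></pre>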
2,119,971
<p>If $a\mid c$ and $b\mid c$, must $ab$ divide $c$? Justify your answer.</p> <p>$a\mid c$, $c=ak$ for some integer $k$</p> <p>$b\mid c$, $c=bu$ for some integer $u$</p> <p>From here I wanted to try to check if there were counter examples I could use,</p> <p>$c\ne(ab)w$ for some integer $w$</p> <p>From here I got stuck because there is nothing I can plug into that equation so I know that I am probably missing something.</p>
Michael Hardy
11,667
<p>$4\mid 12$ and $6\mid12$ but $4\times6\nmid12. \qquad$</p> <p>The proposition is true when $\gcd(a,b)=1.$</p>
1,526,882
<p>Let $$r(x,y)=\begin{cases} y &amp;\mbox{ if } 0\leq y\leq x \\ x &amp;\mbox{ if } x\leq y\leq 1\end{cases}$$</p> <p>Show that $v(x)=\int_0^1r(x,y)f(y) \ dy$ satisfies $-v''(x)=f(x)$, where $0\leq x \leq 1$ and $f$ is continuous. </p> <p>How can I take the second derivative of this? When I try to do it I feel like differentiating under the integral sign will yield $v''(r)=0$ due to $r_{xx}(x,y)=0$.</p>
copper.hat
27,978
<p>Note that $r=\min$, hence Lipschitz with rank one, and ${\partial r(x,y) \over \partial x} $ is defined for all $x\neq y$. We see that ${\partial r(x,y) \over \partial x} = \begin{cases} 0, &amp; y&lt;x \\ 1, &amp; x&lt;y\end{cases}$. Hence ${r(x+h,y)-r(x,y) \over h} \to 1_{[x,1]}(y)$, which is integrable (and uniformly bounded by one), hence $v'(x) = \int_x^1 f(y) dy$ and so differentiating again we have $v''(x) = -f(x)$.</p>
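<p>A quick symbolic check of this for a concrete choice of $f$ (say $f(y)=\cos y$), writing $v$ explicitly as $\int_0^x y f(y)\,dy + x\int_x^1 f(y)\,dy$:</p>
<pre><code>import sympy as sp

x, y = sp.symbols('x y')
f = sp.cos(y)                    # any continuous f would do here

v = sp.integrate(y * f, (y, 0, x)) + x * sp.integrate(f, (y, x, 1))
print(sp.simplify(sp.diff(v, x) - sp.integrate(f, (y, x, 1))))   # 0
print(sp.simplify(sp.diff(v, x, 2) + sp.cos(x)))                 # 0, i.e. v'' = -f
</code></pre>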
2,319,766
<p>Lets say ,I have 100 numbers(1 to 100).I have to create various combinations of 10 numbers out of these 100 numbers such that no two combinations have more than 5 numbers in common given a particular number can be used max three times. E.g.</p> <ol> <li>Combination 1: 1,2,3,4,5,6,7,8,9,10</li> <li>Combination 2: 1,2,3,4,5,11,12,13,14,15</li> <li>Combination 3: 1,2,3,4,5,16,17,18,19,20</li> <li>Combination 4: 6,7,8,9,10,11,12,13,14,15</li> </ol> <p>Here Combination 1,2,3 have numbers 1 to 5 in common whereas combination 1 and 4 have numbers 6 to 10 in common. I am finding it difficult to understand how to approach this problem. What would be the starting point if I have to apply this logic on N numbers.</p>
epi163sqrt
132,007
<p>We can also solve the problem with a little dose of algebra.</p> <blockquote> <p>Zero or more cats give $$1+x+x^2+\cdots=\frac{1}{1-x}$$</p> <p>The same holds for dogs as well as for Guinea pigs. Putting it all together gives zero or more cats, dogs and Guinea pigs: $$\left(\frac{1}{1-x}\right)^3$$</p> <p>Since we want all combinations of $14$ cats, dogs and Guinea pigs, we calculate</p> <p>\begin{align*} \color{blue}{[x^{14}]\left(\frac{1}{1-x}\right)^3}&amp;=[x^{14}]\sum_{n=0}^\infty \binom{n+2}{2}x^n=\binom{16}{2}\color{blue}{=120} \end{align*}</p> </blockquote>
496,479
<p>These symbols are used to describe left recursion: </p> <blockquote> <p>$A\to B\,\alpha\,|\,C$<br>$B\to A\,\beta\,|\,D,$</p> </blockquote> <p>It is taken from: <a href="http://en.wikipedia.org/wiki/Left_recursion" rel="nofollow">http://en.wikipedia.org/wiki/Left_recursion</a></p> <p>How can these symbols be explained to a non-mathematician? Can you provide an explanation for the statements in the image?</p>
mau
89
<p>First of all, uppercase letters mean something which must still be worked out, while Greek letters mean actual symbols for the language. The arrow $\rightarrow$ means "if you find something like the symbols on my left, you may substitute them with the symbols on my right", and the vertical bar | means "or".</p> <p>Now the first line says that if you find an $A$ you may either substitute it with $B\alpha$ or with $C$. Suppose to choose the first one; now $\alpha$ is fixed, while $B$ may be substituted with $A\beta$ (or $D$, but we won't do it).</p> <p>This means that if you start with $A$ you <strong>may</strong> - note, not <strong>must</strong> - transform it to $B\alpha$ first and $A\beta\alpha$ then. It's like you have left something on your right ($\beta\alpha$) and now you are ready to start it over like if nothing happened. It's <em>left</em> recursion because you are going "left", and is <em>indirect</em> because you need more than one step.</p>
4,350,781
<p><a href="https://i.stack.imgur.com/57qXm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/57qXm.jpg" alt="enter image description here" /></a></p> <p>I was able to follow most of the proof but I don’t understand how the author concludes that <span class="math-container">$r=0$</span> at the final part. I would appreciate it if someone could clarify. Thanks</p>
user170231
170,231
<p>This is an attempt to capture how I tend to visualize a region when setting up a volume integral. Consider the following plots:</p> <p><a href="https://i.stack.imgur.com/WXpRj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WXpRj.png" alt="enter image description here" /></a></p> <p>On the left, the forest of arrows join the <span class="math-container">$y,z$</span>-plane (i.e. the plane <span class="math-container">$x=0$</span>) and the given plane <span class="math-container">$x+y+z=1$</span>. They represent the range that the variable <span class="math-container">$x$</span> can take on, <span class="math-container">$0 \le x \le 1 - y - z$</span>.</p> <p>On the right, there's a similar array of arrows that join the <span class="math-container">$z$</span>-axis (i.e. the line <span class="math-container">$x=y=0$</span>) to the line where the plane <span class="math-container">$x+y+z=1$</span> intersects <span class="math-container">$x=0$</span>. This line has equation <span class="math-container">$y + z = 1$</span>, or <span class="math-container">$y = 1 - z$</span>. These arrows represent the range that <span class="math-container">$y$</span> can take on, <span class="math-container">$0 \le y \le 1 - z$</span>.</p> <p>It's not pictured, but the last step is recognizing that <span class="math-container">$z$</span> must range between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>.</p> <p>Then the volume of the region is given by the triple integral</p> <p><span class="math-container">$$\int_0^1 \int_0^{1-z} \int_0^{1-y-z} dx \, dy \, dz$$</span></p>
3,029,585
<p>I have been stumped for a few days on this...It would be great if anyone can point me to enlightenment :)</p> <p>Here's what I have tried. Let <span class="math-container">$Y = X^3$</span>, where X is a standard normal distribution with mean 0 and variance 1. Then</p> <p><span class="math-container">$P(Y \leq y) = P(X^3 \leq y) = P(X \leq y^{1/3})$</span>, for both <span class="math-container">$y$</span> positive or negative.</p> <p>The above is the CDF of <span class="math-container">$X^3$</span>; differentiating w.r.t. to <span class="math-container">$y$</span> should give me the PDF, which is</p> <p><span class="math-container">$f_X(y^{1/3})\frac{1}{3}y^{-2/3}$</span>, where <span class="math-container">$f_X$</span> is the PDF of the standard normal. </p> <hr> <p>But THAT cannot be the answer, because it would take on negative values for <span class="math-container">$y &lt; 0$</span>. Whereas a PDF should always be non-negative. I have seen a "correct" answer in a math paper, where it changes <span class="math-container">$y$</span> to <span class="math-container">$|y|$</span> and results in an integral that converges (in Mathematica). </p> <p>But where can I introduce the absolute terms!? Much thanks :D</p>
Sarvesh Ravichandran Iyer
316,409
<p>Let <span class="math-container">$L = \mathbb Q\left(3^{\frac 1{2^n}} : n \in \mathbb N\right)$</span>, the field generated over <span class="math-container">$\mathbb Q$</span> by all of the elements <span class="math-container">$3^{\frac 1{2^n}}$</span>. </p> <p>Why is <span class="math-container">$L$</span> algebraic over <span class="math-container">$\mathbb Q$</span>? It is because the generating set of <span class="math-container">$L$</span> is <span class="math-container">$\mathbb Q \cup \{3^{\frac 1{2^n}} : n \in \mathbb N\}$</span>, whose elements are all contained in <span class="math-container">$\bar{\mathbb Q}$</span>, the algebraic closure of <span class="math-container">$\mathbb Q$</span>. Therefore, <span class="math-container">$L \subset \bar {\mathbb Q}$</span>, which by definition of the algebraic closure implies it is algebraic over <span class="math-container">$\mathbb Q$</span>.</p> <p>However, <span class="math-container">$L$</span> contains subfields of the form <span class="math-container">$K_n$</span>, where <span class="math-container">$K_n = \mathbb Q(3^{\frac 1{2^n}})$</span>. One can conclude that <span class="math-container">$[K_n : \mathbb Q] = 2^n$</span>, since <span class="math-container">$x^{2^n} - 3$</span> is irreducible via the Eisenstein criterion (shifted).</p> <p>Then, suppose <span class="math-container">$L$</span> were a finite extension; then <span class="math-container">$[L : \mathbb Q] &lt; \infty $</span>, but by the tower property it is equal to <span class="math-container">$[L : K_n][K_n : \mathbb Q]$</span> for each <span class="math-container">$n$</span>. In particular, <span class="math-container">$[L : \mathbb Q]$</span> is a multiple of <span class="math-container">$2^n$</span> for all <span class="math-container">$n$</span>, clearly impossible if it is finite.</p>
1,059,989
<p>I mean are there examples of problems that have been proven to be undecidable, in the sense that it would not be possible to devise a deterministic computer program that outputs a solution for an instance of the problem. And yet human mathematicians have come up with such a solution.</p>
user2566092
87,313
<p>In a certain sense, made rigorous elsewhere in mathematical logic (a conjecture whose origin I can't remember), every mathematician's proof is expressible in terms of basic semantics that a computer can understand, given basic axioms and deduction rules. If you accept that, then no: there is no computer-undecidable problem for which a mathematician can figure out a deterministic decision procedure.</p>
1,250,459
<p>Let $A\in M_{m \times n}(\mathbb{R})$, $x\in \mathbb{R}^n$ and $b,y\in \mathbb{R}^m$. Show that if $Ax=b$ and $A^ty=0_{\mathbb{R}^n}$, then $\langle b,y\rangle=0$. Also give a geometric interpretation.</p> <p>I think it may have something to do with an overdetermined / underdetermined system, but I do not know how to prove it.</p> <p>I'll also take this opportunity to ask for a recommendation of a linear algebra and matrix reference; I'm mostly able to do the numerical exercises, but those involving statements (proofs) are a big problem for me.</p>
Community
-1
<p>Topology is literally the study of open sets. <strong>A topology</strong> is a collection of open sets.</p> <p>In $\mathbb{R}$, open sets are arbitrary unions of open intervals $(a,b)$, and closed sets are the complements of open sets. These are important because they define limits, continuity, etc. </p> <p>In topology, you simply <em>define</em> what "open" means for your space. With that, you define limits, continuity, etc.</p>
2,553,610
<p>I came across this result:</p> <blockquote> <p>Any power series is conditionally convergent for at most two values of $x$, the endpoints of its interval of convergence.</p> </blockquote> <p>If this is so, then why?</p>
José Carlos Santos
446,262
<p>Suppose you have a power series<span class="math-container">$$\tag{1}\sum_{n=0}^\infty a_n(x-a)^n$$</span>which converges at some point other than <span class="math-container">$a$</span>, but does not converge everywhere. So, it converges at some <span class="math-container">$x_0$</span> with <span class="math-container">$x_0\neq a$</span>. Therefore, <span class="math-container">$\lim_{n\to\infty}a_n(x_0-a)^n=0$</span> and it can be deduced from this that if <span class="math-container">$|x-a|&lt;|x_0-a|$</span> then the series <span class="math-container">$(1)$</span> converges absolutely. Consequently, if the series converges at <span class="math-container">$x$</span> only conditionally, then <span class="math-container">$|x-a|$</span> cannot be smaller than <span class="math-container">$|x_0-a|$</span> for any point <span class="math-container">$x_0$</span> at which the series converges; hence <span class="math-container">$|x-a|$</span> equals the radius of convergence, and <span class="math-container">$x$</span> is an endpoint of the interval of convergence. The statement that you mentioned follows from this.</p>