http://mathhelpforum.com/pre-calculus/4355-domain-graphing.html | # Math Help - Domain +Graphing
1. ## Domain +Graphing
I need to determine the domain, sketch the graph of each function, and show/explain the connection between the graph and the domain:
g(x) = x^2 / (x^2 + 2x - 3)
x^2 + 2x - 3 should be greater than 0, since (any number)/0 is not defined.
(x+3)(x-1) > 0
So either
x > -3 OR x > 1
Thus the domain becomes
(-3,1) u (1, Infinity)
Right?
Now how would I graph this and explain the connection??
The values of x such that $x^2+2x-3=0$ are forbidden. NOT where $x^2+2x-3$ is negative. There are three parts to the domain.
Wherever the denominator goes to zero will be a vertical asymptote (unless the numerator also goes to zero at this value of x, which does not happen here).
-Dan
3. Originally Posted by topsquark
The values of x such that $x^2+2x-3=0$ are forbidden. NOT where $x^2+2x-3$ is negative. There are three parts to the domain.
Wherever the denominator goes to zero will be a vertical asymptote (unless the numerator also goes to zero at this value of x, which does not happen here).
-Dan
Umm, So the domain becomes....??
And how would I be graphing this?
The domain is all numbers "x" EXCEPT $-3$ and $1$.
You can write it in three ways,
$x\in \mathbb{R}, x\not = -3,1$
Or,
$\left\{ \begin{array}{c} x<-3 \\ -3<x<1 \\ x>1 \end{array} \right.$
Or,
$x\in (-\infty,-3)\cup (-3,1) \cup (1,+\infty)$
Step 1: draw the vertical lines that are asymptotes at the undefined values. That means draw $x=-3$ and $x=1$.
https://en.m.wikipedia.org/wiki/Additive_basis | In additive number theory, an additive basis is a set $S$ of natural numbers with the property that, for some finite number $k$, every natural number can be expressed as a sum of $k$ or fewer elements of $S$. That is, the sumset of $k$ copies of $S$ consists of all natural numbers. The order or degree of an additive basis is the number $k$. When the context of additive number theory is clear, an additive basis may simply be called a basis. An asymptotic additive basis is a set $S$ for which all but finitely many natural numbers can be expressed as a sum of $k$ or fewer elements of $S$.[1]
For example, by Lagrange's four-square theorem, the set of square numbers is an additive basis of order four, and more generally by the Fermat polygonal number theorem the polygonal numbers for $k$-sided polygons form an additive basis of order $k$. Similarly, the solutions to Waring's problem imply that the $k$th powers are an additive basis, although their order is more than $k$. By Vinogradov's theorem, the prime numbers are an asymptotic additive basis of order at most four, and Goldbach's conjecture would imply that their order is three.[1]
The unproven Erdős–Turán conjecture on additive bases states that, for any additive basis of order $k$, the number of representations of the number $n$ as a sum of $k$ elements of the basis tends to infinity in the limit as $n$ goes to infinity. (More precisely, the number of representations has no finite supremum.)[2] The related Erdős–Fuchs theorem states that the number of representations cannot be close to a linear function.[3] The Erdős–Tetali theorem states that, for every $k$, there exists an additive basis of order $k$ whose number of representations of each $n$ is $\Theta(\log n)$.[4]
A theorem of Lev Schnirelmann states that any sequence with positive Schnirelmann density is an additive basis. This follows from a stronger theorem of Henry Mann according to which the Schnirelmann density of a sum of two sequences is at least the sum of their Schnirelmann densities, unless their sum consists of all natural numbers. Thus, any sequence of Schnirelmann density $\varepsilon > 0$ is an additive basis of order at most $\lceil 1/\varepsilon \rceil$.[5]
## References
1. ^ a b Bell, Jason; Hare, Kathryn; Shallit, Jeffrey (2018), "When is an automatic set an additive basis?", Proceedings of the American Mathematical Society, Series B, 5: 50–63, arXiv:1710.08353, doi:10.1090/bproc/37, MR 3835513
2. ^ Erdős, Paul; Turán, Pál (1941), "On a problem of Sidon in additive number theory, and on some related problems", Journal of the London Mathematical Society, 16 (4): 212–216, doi:10.1112/jlms/s1-16.4.212
3. ^ Erdős, P.; Fuchs, W. H. J. (1956), "On a problem of additive number theory", Journal of the London Mathematical Society, 31 (1): 67–73, doi:10.1112/jlms/s1-31.1.67, hdl:2027/mdp.39015095244037
4. ^ Erdős, Paul; Tetali, Prasad (1990), "Representations of integers as the sum of $k$ terms", Random Structures & Algorithms, 1 (3): 245–261, doi:10.1002/rsa.3240010302, MR 1099791
5. ^ Mann, Henry B. (1942), "A proof of the fundamental theorem on the density of sums of sets of positive integers", Annals of Mathematics, Second Series, 43 (3): 523–527, doi:10.2307/1968807, JSTOR 1968807, MR 0006748, Zbl 0061.07406 |
https://www.mersenneforum.org/showthread.php?s=7993ac59947b6e963b883f7545931283&p=270195 | mersenneforum.org mfakto: an OpenCL program for Mersenne prefactoring
2011-08-19, 13:44 #78
Bdot
Nov 2010
Germany
255_16 Posts
Quote:
Originally Posted by KingKurly I found that if I plug in a monitor and keyboard to that computer and then log in to the computer locally, the video card is found and can be used just fine. It would be a bit of a burden to have to always log in locally, but I guess I can do that until a better solution is determined.
Well, it appears the dependency on the running X-Server is not yet dropped (or some additional work is necessary). You need to be logged in in order to start the X-Server. But then you can lock the screen and run mfakto remotely.
I'll check if we can get rid of that.
Quote:
Originally Posted by KingKurly That said, I do have a new problem to report: ERROR: THREADS_PER_BLOCK (256) > deviceinfo.maxThreadsPerBlock
I'll check what implications that has and if we could drop this check altogether as OpenCL calculates the threads a little differently.
Quote:
Originally Posted by KingKurly it fails selftest 1-5 and 9-11. See below:
Now that is odd! The 72-bit kernel fails, but the vectored versions of the same kernel succeed! I just compared the kernels, but there are no code differences.
Plus, I can reproduce it now on my Linux box: I still had the LD_LIBRARY_PATH point to 2.4, and that runs fine. When pointing it to 2.5, the problem appears. Looks like an AMD APP issue, I'll check what I can do about it. Running 2.5 on the CPU also works fine ...
I already wanted to drop the single kernel because it is so much slower ...
As you built your own binary anyway, go to mfaktc.c and comment out line 487 (removing the _71BIT_MUL24 kernel). Don't submit results with that to primenet, just use it to check what your GPU can do. You can then run the full selftest (-st) if you want the GPU to work for a while. There you also see the speed of the different kernels for different tasks.
2011-08-20, 03:00 #79
KingKurly
Sep 2010
Annapolis, MD, USA
3^3·7 Posts

I rebuilt the program with the change you recommended. All the tests pass, including the large selftest. The card seems to do about 5-10M/s in the "lower" ranges (like below 75M) and is about 10% of that in the 332M+ range.
Code:
Selftest statistics
  number of tests       3637
  successfull tests     3637
selftest PASSED!
I look forward to future versions, and I will not use the program to submit any "no factor" results until you have indicated that it is safe for me to do so. If I happen to find factors, I might submit those, but I do not expect to use the program for much production work until things have stabilized a bit more. Thanks again!
2011-08-20, 17:18 #80
KingKurly
Sep 2010
Annapolis, MD, USA
3^3×7 Posts

The very first test I ran saved me an LL test, and of course saved someone else the LL-D down the road.
Code:
class | candidates | time    | avg. rate | SievePrimes | ETA    | avg. wait
3760/4620 | 159.38M | 16.721s | 9.53M/s | 50000 | 49m53s | 90889us
3765/4620 | 159.38M | 16.696s | 9.55M/s | 50000 | 49m32s | 90729us
Result[00]: M40660811 has a factor: 490782599517282826471
found 1 factor(s) for M40660811 from 2^68 to 2^69 (partially tested) [mfakto 0.07 mfakto_cl_barrett79]
tf(): total time spent: 3h 53m 44.575s
I had the exponent queued up for a first-time LL test, but I've since removed it from my worktodo because it's not necessary!
2011-08-21, 14:49 #81
Bdot
Nov 2010
Germany
3·199 Posts
Quote:
Originally Posted by KingKurly
The very first test I ran saved me an LL test, and of course saved someone else the LL-D down the road.
Code:
class | candidates | time    | avg. rate | SievePrimes | ETA    | avg. wait
3760/4620 | 159.38M | 16.721s | 9.53M/s | 50000 | 49m53s | 90889us
3765/4620 | 159.38M | 16.696s | 9.55M/s | 50000 | 49m32s | 90729us
Result[00]: M40660811 has a factor: 490782599517282826471
found 1 factor(s) for M40660811 from 2^68 to 2^69 (partially tested) [mfakto 0.07 mfakto_cl_barrett79]
tf(): total time spent: 3h 53m 44.575s
I had the exponent queued up for a first-time LL test, but I've since removed it from my worktodo because it's not necessary!
What a start! While I have already found a lot of factors with mfakto, almost all of them were known before.
BTW, at the expense of a little more CPU you can speed up the tests a little: Set SievePrimes to 200000 and the siever will eliminate some more candidates so the GPU will not test them. What's mfakto's CPU-load right now and with SievePrimes at 200k?
9.5 M/s is also not bad for an entry-level GPU - I guess it is at least twice as fast as one of your CPU cores.
Grats also on the successful selftest. The speed of the tests does not depend much on the size of the exponent but mainly on the kernel being used. The selftest will run each test with all kernels that can handle the required factor length. If you still have the output of the selftest you should see that mfakto_cl_barrett79 is always close to 10 M/s, most others a bit below that, and mfakto_cl_95 slowly crawling along.
2011-08-24, 20:55 #82
Bdot
Nov 2010
Germany
3·199 Posts

Did anyone else give mfakto a try? Any experiences to share (anything strange happening, suggestions you'd like to get included or excluded for the next versions, performance figures for other GPUs, ...)?
I'm running this version on a SuSE 11.4 box with AMD APP SDK 2.4, and when multiple instances are running I occasionally see one instance hanging. It will completely occupy one CPU core but no GPU resources. It is looping inside some kernel code, being immune to kill, kill -9 or attempts to attach a debugger or gcore. So far, reboot was the only way I know to get rid of it. How can I find out where that hang occurs? And what else could I try to kick such a process without a reboot?
2011-08-25, 17:29 #83
apsen
Jun 2011
131 Posts
Quote:
Originally Posted by Bdot Did anyone else give mfakto a try?
I had the same experience as another poster: I had to recompile to reduce the number of threads per block and disable one kernel. Apart from that, AMD APP refused to install on Win2008, so I had to swap the graphics cards between two machines so the AMD one would be on Windows 7. The performance is about 20% of what I get out of a GeForce 8800 GTS (around 6 M/s compared to 29 M/s). I haven't played with the sieve parameter much - I just had to disable auto adjust, as it will raise the setting to the limit, slowing the testing to a crawl. If I lower it below the default I would probably get better overall performance.
Last fiddled with by apsen on 2011-08-25 at 17:31
2011-08-25, 20:19 #84
Bdot
Nov 2010
Germany
3×199 Posts
Quote:
Originally Posted by apsen The performance is about 20% of what I get out of GeForce 8800 GTS (around 6 M/s comparing to 29 M/s). I haven't played with sieve parameter much - just had to disable auto adjust as it will raise the setting to the limit slowing the testing to a crawl. If I'll lower it to below default I would probably get better overall performance.
That is a bit slower than I had expected. Which kernel and bitlevel was that? But if raising SievePrimes slows down the tests, then the tests are CPU-limited and the GPU is not running at full load. If you want, build a test binary with CL_PERFORMANCE_INFO defined (params.h) - this will tell you the memory transfer rate and pure kernel speed, without accounting for the siever.
According to hwcompare, the 8800 GTS should be 3-4 times faster, so 8-10 M/s would be expected if OpenCL and my port were as efficient as Oliver's CUDA implementation.
Last fiddled with by Bdot on 2011-08-25 at 20:29 Reason: added hwcompare
2011-08-26, 11:55 #85
Chaichontat
Aug 2011
216 Posts

Hi, I'm running mfakto on my HD6950 @912MHz, Catalyst 11.8, SDK 2.5. One thing that I have seen is that it uses approx. 30 percent of my GPU utilization and gives about 50M/s. Does anyone know how to make it fully use the GPU? Thanks.
2011-08-26, 14:30 #86
apsen
Jun 2011
131 Posts
Quote:
Originally Posted by Bdot That is a bit slower than I had expected. Which kernel and bitlevel was that? But if raising SievePrimes slows down the tests, then the tests are CPU-limited, and the GPU not running at full load. If you want, build a test binary with CL_PERFORMANCE_INFO defined (params.h) - this will tell you the memory transfer rate and pure kernel speed, without accounting for the siever. According to hwcompare, the 8800 GTS should be 3-4 times faster, so 8-10 M/s would be expected if OpenCL and my port were as efficient as Oliver's CUDA implementation.
I did some more testing and it looks like the problem is in getting enough CPU. When I run it alone I'm getting about 7.3 M/s and CPU usage is 50-56% (!) on a two-core machine. When I start prime95 the CPU usage drops to about 10% on average and I'm getting about 6.5 M/s, even though prime95 runs at default priority and mfakto at normal. Also the average wait is always in the teens of milliseconds (12000-15000 microseconds). It is lower without prime95 running.
2011-08-27, 12:44 #87
MrHappy
Dec 2003
Paisley Park & Neverland
5×37 Posts

I get ~28M/s on my HD5670 / Phenom II 4-core 925, with 2 cores on P-1 tests, 1 core on LL-D, and 1 core busy video editing. I'll look again when the video job is done.
2011-08-28, 14:18 #88
Christenson
Dec 2010
Monticello
703_16 Posts
Quote:
Originally Posted by Chaichontat Hi, I'm running mfakto on my HD6950 @912MHz, Catalyst 11.8 SDK 2.5, one thing that I seen is that it uses approx. 30 percent of my GPU utilization and gives about 50M/s. Does anyone knows how to make it fully use the GPU? Thanks.
At the current stage of development, mfaktc/mfakto sieves for probable primes on the CPU side before passing them to the GPU for checking. Make sure that SievePrimes on your machine has gone down to 10,000. Beyond that, at the moment, you have to throw more CPU at it, in the form of running a second copy of mfaktc on a different core.
50M/s is doing a bit better than my GTX440 under mfaktc, incidentally.
Setting up both mfaktc and mfakto to sieve on the GPU is at least a dream for the developers.
http://www.chegg.com/homework-help/questions-and-answers/push-particle-mass-m-inthe-direction-already-moving-expect-theparticle-s-speed-increase-pu-q113430 | If you push a particle of mass M in the direction in which it is already moving, you expect the particle's speed to increase. If you push with a constant force F, then the particle will accelerate with acceleration a = F/M (from Newton's 2nd law).
A. Enter a one- or two-word answer that correctly completes thefollowing statement.
If the force is applied for a fixed interval of time t, then the _____ of the particle will increase by an amount a·t.
B. Enter a one- or two-word answer that correctly completes thefollowing statement.
If the force is applied over a given distance D along the path of the particle, then the _____ of the particle will increase by FD.
C. If the initial kinetic energy of the particle is K1, and its final kinetic energy is Kf, express Kf in terms of K1 and the work W done on the particle.
D. Now, consider whether the following statements are true orfalse:
• The dot product assures that the integrand is always nonnegative.
• The dot product indicates that only the component of the force perpendicular to the path contributes to the integral.
• The dot product indicates that only the component of the force parallel to the path contributes to the integral.
E. Assume that the particle has initial speed v1. Find its final kinetic energy Kf in terms of v1, M, F, and D.
F. What is the final speed of the particle? |
https://groups.google.com/g/rec.games.computer.doom.editing/c/4BcekQPLZEQ | # Thermodynamics An Engineering Approach , By Yunus A. Cengel & Michael A. Boles & McGraw-Hill, 2011 , 6th ed
### [email protected]
Aug 21, 2018, 1:44:52 AM
to solutions book team
g e t . s o l u t i o n s b o o k @ h o t m a i l . c o m
get.solutionsbook(at)hotmail(dot)com
[email protected]
We're a team for providing solution manuals to help students in their
study.
We sell the books in a soft copy, PDF format.
We will find any book or solution manual for you.
Just email us:
g e t . s o l u t i o n s b o o k @ h o t m a i l . c o m
List of some books we have
=============================
A Course in Modern Mathematical Physics by Peter Szekeres
A First Course in Abstract Algebra By John B. Fraleigh
A First Course in Complex Analysis with Applications , By Dennis G. Zill , 1st ed
A first course in differential equations , D. Zill & Cullen's , 5th ed
A First Course in Differential Equations with modeling applications , By Dennis G. Zill , 9th ed
A First Course in General Relativity , Cambridge University Press , (2016)
A First Course in Probability , By Sheldon Ross , 7th Edition (Solutions Manual) 2007
A first course in probability 6th edition by Ross
A First Course in Statistics , James T. McClave and Terry Sincich , 8th ed
A First Course in String Theory by Barton Zwiebach
A Practical Introduction to Data Structures and Algorithm Analysis Second Edition by Clifford A. Shaffer
A Quantum Approach to Condensed Matter Physics by Philip L. Taylor
A Short Introduction to Quantum Information and Quantum Computation by Michel Le Bellac
A Transition to Advanced Mathematics , By D. Smith & M. Eggen , R. Andre , 5th ed
Accompany Digital Systems Principles and Applications, 10th Edition By Ronald J. Tocci, Neal S. Widmer, Gregory L. Moss
Accompany Electric Machinery and Power System Fundamentals, First Edition by Stephen J. Chapman
Accompany Electronic Devices and Circuit Theory, 8Ed By Robert L. Boylestad; Louis Nashelsky; Franz J. Monseen
Accompany Elementary Statistics Ninth Edition by MILTON LOYER
Accompany Engineering circuit analysis, 6th edition By Hayt
Accompany Foundations of Electromagnetic Theory 2nd Ed. by John R. Reitz, Frederick J. Milford
Accompany Fundamentals of Fluid Mechanics, 5th Edition by Bruce R. Munson, Donald F. Young, Theodore H. Okiishi
Accompany Introduction to algorithms By Sussman J
Accompany Millman microElectronics Digital and Analog Circuits and Systems , By Thomas V. Papathomas
Accompany Organic Chemistry , Atkins & Carey ,4th ed .. 2000
Accompany Physics for Poets Second Edition By Robert H. March
Accompany Principles of geotechnical engineering, sixth edition by braja M. DAS
Accounting , By Carl S. Warren, James M. Reeve & Jonathan Duchac , 25th ed
Accounting Principles , Donald E. Kieso , 9th ed
adaptive control , Karl Astrom , 2nd ed
Adaptive Control, 2nd Edition, By Karl Johan Astrom,Bjorn Wittenmark
Adaptive filter thoery 4th edition By Simon Haykin
Advanced Accounting , Debra C. Jeter & Paul K. chaney , 4th ed
Advanced Accounting , Fischer and Cheng and Taylor , 10th ed
Advanced Accounting ,By Beams and Anthony and Clement and Lowensohn , 8th ed
Advanced Digital Design with the Verilog HDL by Michael D. Ciletti (Selected problems)
Advanced engineering electromagnetics by Constantine A. Balanis
Advanced Engineering Mathematics , 10th ed. , Kreyszig (Wiley;2014;9781118266700 ;eng)
advanced engineering mathematics , By Erwin Kreyszig , 3rd ed Vol.2
Advanced Engineering Mathematics , By Peter V. O'Neil , 6th ed
Advanced Engineering Mathematics 8 Edition By Erwin Kreyszig
Advanced Engineering Mathematics 9 Edition By Erwin Kreyszig
advanced macroeconomics , By David Romer , 3rd ed
Advanced Mathematics for Engineering , By Dennis G. Zill , 2nd ed
Advanced Mathematics for Engineering , By Dennis G. Zill , 3rd ed , Vol.2
Advanced Mathematics for Engineering , By Dennis G. Zill , 4th ed
Advanced Mechanics of Materials , By A. Boresi and R. Schmidt , 6th ed
advanced modern engineering mathematics , By Glyn jamess , 4th ed
Advanced Modern Engineering Mathematics 3rd Edition by Glyn James
Advanced Accounting , By Fischer Taylor , 8th ed
Aircraft Structures for Engineering Students Fourth Edition by T. H. G. Megson (2007)
Algebra and Trigonometry , 4th ed , MILLER & Blitzer, 2010
Algebra and Trigonometry and Precalculus, 3rd Edition by Penna & Bittinger Beecher
An introduction to database systems 8th edition By C J Date
An Introduction to Drug Synthesis ,Graham L. Patrick , 2015
An Introduction to Medicinal Chemistry , Patrick , 5th ed
An Introduction to Ordinary Differential Equations By James C. Robinson (with Matlab files)
An Introduction to Queueing Systems ,By Kluwer Academic Publishers and Sanjay K. Bose , 1st ed
An Introduction to Signals and Noise in Electrical Communication , 4th Ed , by A. Bruce Carlson , Paul B. Crilly & Janet C. Rutledge
An Introduction to Signals and Systems By John Stuller
An Introduction to The Finite Element Method (Third Edition) By J. N. REDDY
An Introduction to Thermodynamics and Statistical Mechanics and Keith Stowe , 2nd ed
Análisis de Redes (Network Analysis) , By M. E. Valkenburg , 3rd ed
Análisis de Sistemas Eléctricos de Potencia (Power System Analysis) , John J. Grainger & William D. Stevenson Jr. , 3rd ed
Análisis Matemático (Mathematical Analysis: A Modern Introduction to Advanced Calculus) , Tom Apostol , 2nd ed
Analog Integrated Circuit Design , David Johns & Ken Martin
Analysis and Design of Linear Circuits , By Thomas. Rosa. Toussaint , 6th ed
Analysis and design of analog integrated circuits 4th edition by Srikanth Vaidianathan and Haoyuee Wang
Analytical Chemistry ,Higson ,2005
Analytical Mechanics, 7th Edition By Fowles & Cassiday
Antenas , By John Kraus , 3rd ed
Antenna Theory Analysis and Design, 2nd Edition Balanis
Antennas for all Applications 3rd edition by John D. Kraus & Ronald J. Marhefka
Anton Calculus 8th edition, Exercises solutions
Applied Calculus for the Managerial, Life, and Social Sciences , By Soo T. Tan , 7th ed
Applied Calculus for the Managerial, Life, and Social Sciences , soo T. Tan , 6th Ed
Applied Electromagnetism , By Liang Shen. and Frank Huang , 2ed
applied fluid mechanics , By Robert L. Mott , 6th ed
Applied Linear Statistical Models , By M. Kutner and C. Nachtsheim and J. Neter and W. Li , 5th ed
Applied Numerical Analysis 7th Edition By Curtis F. Gerald,Patrick O. Wheatley
Applied Numerical Methods MATLAB Engineers Scientists , By Chapra , 3rd ed
Applied Numerical Methods With MATLAB for Engineers and Scientists , by Steven C. Chapra , 2nd ed
Applied Numerical Methods With MATLAB for Engineers and Scientists by Steven C. Chapra , 1st ed
Applied Partial Differential Equations with Fourier Series and Boundary Value Problems 4th Edition by Richard Haberman
applied physics , By Paul E. Tippens , 6th ed
Applied Probability Models with Optimization , By Sheldon M. Ross , 2nd ed
Applied Quantum Mechanics by A. F. J. Levi
Applied Statistics And Probability For Engineers , 2nd ed , By Montgomery, Runger
Applied Statistics And Probability For Engineers 3rd edition By Montgomery,Runger
Applied Statistics And Probability For Engineers 4th edition By Montgomery,Runger
Applied Strength of Materials 4th Edition By Robert L. Mott
Artificial Intelligence A Modern Approach , By Russell and Norvig , 2nd ed
Artificial Intelligence A Modern Approach , By S. Russell, P. Norvig , 3rd ed
Artificial Intelligence A Modern Approach 2nd edition by Stuart J. Russell and Peter Norvig
Assembly Language for Intel-Based Computers,3ed, by Kip R. Irvine
Astronomy Today , McMillan & Chaisson , 5th ed
Atkins' Physical Chemistry ,Peter Atkins and Julio de Paula ,10th ed
Auditing & assurance services , By Louwers & Sinason & Straeser & Ramsy , 2nd ed
Auditing and Assurance Services , By Arens. Elder and Beasley and Randal J. Elder , 13th ed
Automatic control systems , By Benjamin C. Kuo , 7th ed
automatic control systems , By Kuo & Golnaraghi , 9th ed
Automatic Control Systems 8th edition By Kuo and Golnaraghi
Advanced Dynamics , By Greenwood , 1st ed
Balance de Materia y Energía (Balances of Matter and Energy) , By Gintaras V. Reklaitis , 1st ed
Basic and Applied Thermodynamics , P. K. Nag , 2nd ed
Basic Econometrics , By Damodar N. Gujarati , 4th ed
Basic Electrical Engineering By Nagrath, D P Kothari, Nagrath D P Kothari I J Nagrath, I J Nagrath, 2002
Basic Engineering Circuit Analysis 7th Ed. by J. David Irwin (Selected Problems)
Basic Engineering Circuit Analysis 8th Edition By J. David Irwin
Biological Sequence Analysis , Durbin and Eddy and Krogh and Mitchison , 1st ed
Bioprocess Engineering Principles ,By Pauline M. Doran , 2nd ed
C++ How to Program, 3rd Ed by Harvey M. Deitel, Paul J. Deitel
C++ How to Program, 3rd edition By Deitel & Nieto
Calculus , By (Narberg , Purcell , Rigdon) , 8th Edition (Solutions Manual and Test Bank) , Prentice Hall 2000
Calculus , By James Stewart , 4th ed
Calculus 5th Edition By James Stewart
Calculus 8th Ed by Larson, Hostetler, Edwards
Calculus A Complete Course 6th Edition by R.A. Adams
calculus An intuitive And physical approach , 2nd ed , By Morris Kline
Calculus Early Transcendentals ,By Stewart , 6th ed
Calculus Early Transcendentals ,By Stewart ,5th ed
Calculus Early Transcendental - Edwards, Penney - 6ed
Calculus Early Transcendental Functions , By R. Smith, R. Minton , 3rd ed
Calculus Early Transcendentals , By Howard Anton , 7th ed
Calculus Early Transcendentals 5th Edition By Stewart
Calculus early transcendentals 7th edition By Anton Bivens Davis
calculus Late Transcendentals combined , Anton & Bivens & Davis , 8th ed
Calculus Multivariable , By Jon Rogawski , 2nd ed
calculus multivariable 4th edition Deborah Hughes-Hallett, Andrew M. Gleason, et al
Calculus of a Single Variable , Ron Larson , 7th ed
calculus of variations , By I. B. Russak , 1st ed
Calculus one and several variables Instructor , 8th ed , By Bradley E. garner & Carrie J. Garner
Calculus Single Variable , By Stewart , 7th ed
Calculus Single Variable , Jon Rogawski , 2nd ed
Calculus Single Variable Hughes-Hallett, Gleason, McCallum, et al
CALCULUS WITH APPLICATIONS , Lial & Greenwell & Ritchey , 8th ed
Chemical and Engineering Thermodynamics 3rd Ed. by Stanley I. Sandler
Chemical Engineering (Vol.1 & Vol.2 & Vol.3) Coulson & Richardson's , 1st ed
Chemical Engineering Design , By Sinnot , Vol6
Chemical Engineering Design Vol.6 , By R. K. Sinnott , 4th ed
Chemical Engineering Science , By O. Levenspiel , ch 5-10 , 3rd ed
Chemical Engineering Vol 6 4th edition by Coulson and Richardson
Chemical Principles , 4th ed., 2007 , By John Krenos , Joserh Potenza
Chemical Structure and Reactivity ,Keeler & Wothers , 2ed
chemistry , By Raymond chang , 10th ed
chemistry the central science , 9th ed , By Brown & Bursten & LeMay
Chemistry³ introducing inorganic and physical chemistry , Burrows, Holman, Parsons, Pilling, Price , 3rd ed
Circuitos Integrados Digitales , By Jan M. Rabaey , 2nd ed
Classical Dynamics A Contemporary Approach by Jorge V. Jose, Eugene J. Saletan
Classical Dynamics of Particles and Systems 5th edition by Stephen T. Thornton, Jerry B. Marion
Classical Electrodynamics , By John David Jackson , 3rd ed
Classical Electrodynamics 2nd Edition by John David Jackson by Kasper van Wijk
Classical Mechanics - An Undergraduate Text by R. Douglas Gregory
Classical Mechanics 2nd edition By Goldstein & Safko
classical thermodynamics of Non-Electrolyte , By H. C. Van ness 1st ed
CMOS Digital Integrated Circuits 3rd edition By Sung-Mo Kang,Yusuf Leblebici
CMOS VLSI Design 3e by anonymous
College Physics , By Faughn and Serway and vuille , 6th ed
COLLEGE PHYSICS , by SERWAY AND VUILLE’S , 9th ed
Communication Networks Fundamental Concepts and Key Architectures Alberto Leon-Garcia
Communication Systems 4th ed by bruce carlson
Communication Systems 4th edition by Simon Haykin
Communication Systems Engineering - Second Edition John G. Proakis Masoud Salehi
Complex Variables and Applications , JW Brown , RV Churchill , 8th ed(2009)
complex Variables and applications ,James Ward Brown ,7th ed
complex variables with applications ,A. David wunsch ,3th ed
Computational Techniques for Fluid Dynamics (Scientific Computation) by Karkenahalli Srinivas, Clive A. J. Fletcher
Computer Networking A Top-Down Approach 3rd Edition by James F.Kurose,Keith W. Ross
Computer Networks - 4th Edition by Andrew S. Tanenbaum
Computer Organization 5th edition by Hamacher,Vranesic and Zaky
Computer Organization and Design The HardwareSoftware Interface, 3rd edition by David A. Patterson, John L. Hennessy,
Computer-Controlled Systems 3rd edition by Karl J. Astrom
Comunicacion Satelital , Timothy Pratt & Charles Bostian , 2nd ed
concepts and applications of finite element analysis , Robert Cook and David s. Malkus and Michael E. Plesha , 4th ed
Concepts of Programming Languages 7th edition Solutions Manual by Robert Sebesta
Contemporary Engineering Economics , By Chan S. Park , 4t ed
Contemporary Engineering Economy , By William G. Sullivan and Elin M. Wicks and C. Patrick Koelling , 5th ed
Contemporary Linear Algebra , Howard Anton and Robert C. Busby , 1st ed
Control systems Principles and Design 2nd Edition by Madan Gopal
Control systems engineering , By Norman Nise , 6th ed
Control Systems Engineering 4th edition by Norman S. Nise
Convection Heat Transfer ,By Adrian Bejan , 3rd ed
Corporate Finance solution manual 6th Edition by Ross
Cost Accounting , By Horngren , 12th ed
Cost Accounting , By Horngren , 13th ed
Cost Accounting , By William K. Carter , 14th ed
Craig's Soil Mechanics 7th Edition
Cryptography and network security-principles and practice 4th ed. By William Stallings
Data and computer communications 7th edition William Stallings
Data Communications and Networking 4th edition by Behroz Forouzan
Database Management Systems 3rd edition Raghu Ramakrishnan Johannes Gehrke
Database System Concepts , A. Silberschatz and H. Korth and S. Sudarshan , 4th ed
Design of Analog CMOS Integrated Circuits Behzad Razavi
Design of Concrete Structures , By Arthur H. Nilson , 14th ed
Design of Nonlinear Control Systems with the Highest Derivative in Feedback 1st Edition by Valery D. Yurkevich [student solution manual]
Design with Operational Amplifiers and Analog Integrated Circuits, 3rd edition by Franco, Sergio
Device Electronics for Integrated Circuits 3rd edition by Muller Kamins
Differential Equation , by Richard Bronson 3rd ed
Differential Equations and Boundary Value Problems , Edwards & & Penney , 2nd ed
differential equations and boundary value problems computing and modeling , By Edwards and Penney , 4th ed
differential equations and linear algebra , Jerry Farlow & Beverly H. West & james-e-hall & jean-marie-mcdill , 2nd ed
Differential Equations with Boundary Value Problems 2nd Edition by JOHNPOLKING and DAVID ARNOLD
Differential Equations with Boundary Value Problems, 2nd edition by John Polking
Digital and Analog Communication Systems 7th Edition by Leon W. Couch
Digital Communication 4th edition by Proakis
Digital Communications 5th edition by John Proakis
Digital Communications Fundamentals and Applications, 2nd Edition by Bernard sklar
Digital Control and state variable methods - M.Gopal
Digital Design 2nd Edition by M. Morris Mano,Michael D. Ciletti
Digital Design 3rd Edition by M. Morris Mano,Michael D. Ciletti
Digital Design 4th edition Morris Mano
Digital Design-Principles and Practices 3rd Edition by John F. Wakerly [selected problems]
Digital Fundamentals 9th edition by Thomas L. Floyd
Digital Image Processing 2nd edition by Rafael C. Gonzalez
Digital Integrated Circuits 2nd edition by Rabaey
Digital Integrated Circuits by Thomas A. DeMassa & Zack Ciccone
Digital Logic Design 2nd edition by M. Morris Mano
Digital Signal Processing - A Modern Introduction, 1st Edition Cengage learning Ashok Ambardar
Digital Signal Processing , Proakis and Manolakis , 1st ed
Digital Signal Processing ; A Computer-Based Approach 1st edition By sanjit K. Mitra
Digital Signal Processing 2nd Edition by Mitra
Digital Signal Processing 3nd Edition by Mitra
Digital Signal Processing 4th edition by John G. Proakis and Dimitri s G. Manolakis
Digital Signal Processing: A Computer-Based Approach , By Sanjit K. Mitra
Digital Signal Processing by Thomas J. Cavicchi
Digital signal processing proakis manolakis
Digital Signal Processing Signals, Systems, and Filters Andreas Antoniou
Digital Signal Processing Using Matlab 2nd edition by Vinay K Ingle Proakis
Digital Systems-Principles and Applications 10th Ed. by Ronald Tocci, Neal S. Widmer & Gregory L. Moss
Discrete mathematics , By Richard Johnsonbaugh , 6th ed
Discrete and Combinatorial Mathematics , By R. Grimaldi , 5ed Part 1
Discrete Mathematics with Applications Third Edition By Susanna S. Epp
Discrete Time Control Systems (Sistemas de Control en Tiempo Discreto) , 2nd ed , Katsuhiko Ogata
Discrete Time Signal Processing 2nd Edition, by Alan V. Oppenheim
Discrete time signal processing 3rd edition by Oppenheim
Diseño con Amplificadores Operacionales y Circuitos Integrados Analógicos (Design with Operational Amplifiers and Analog Integrated Circuits) , By Sergio Franco , 3rd ed
Dynamics of Mechanical Systems , By Carl T. F. Ross , 7th ed
Dynamics of Structures: theory and applications to earthquake engineering , By Anil K. Chopra , 3rd ed
Econometric Analysis 5th Edition by William H. Greene
Economic engineering , L. Blank and A. Tarkin , 6th ed
Electric Circuits 7th edition by Nilsson
Electric Circuits 8th edition by Nilsson
Electric Machinery 6th Edition by Fitzgerald Kingsley
Electric Machinery and Power System Fundamentals 1st edition by Stephen Chapman
Electric Machinery Fundamentals 4th edition by Stephen J. Chapman
Electric machines , By Jesus Fraile Mora , 5th ed
Electric Machines Analysis and Design Applying MATLAB by Jim Cathey
Electrical Engineering Principles and Applications 3rd edition by Allan R. Hambley
Electrical Machines, Drives and Power Systems 6th edition By Theodore Wildi
Electrical Properties of Materials ,Solymar & Walsh , 7th ed
Electricity and magnetism (Electricidad y Magnetismo) , By Raymond A. Serway , 6th ed
Electricity and magnetism , By Raymond A. Serway , 3rd ed
electricity and magnetism Vol.II , Edward M. Purcell , 2nd ed
Electromagnetic Fields and Energy 1st Ed. by Haus and Melcher
Electromagnetics for Engineers by Ulaby
Electromagnetism Major American Universities Ph.D. Qualifying Questions and Solutions by Lim Yung-Kuo
Electron Paramagnetic Resonance ,Victor Chechik, Emma Carter, and Damien Murphy ,2016
Electronic Circuit Analysis and Design 2nd edition by Donald A. Neamen
Electronic devices - electron flow version 4th edition by thomas l.floyd
Electronic Devices and Circuit Theory 8th Ed. with Lab Solutions, and Test Item File by Robert Boylestad
Electronic Devices , By Thomas L. Floyd , 6th ed
Electronic Devices-6th Edition by Thomas L. Floyd
Electronic Physics by Strabman
Elementary Applied Partial Differential Equations with Fourier Series and Boundary Value Problems , PrenticeHall , R.Haberman (1987)
Elementary Differential Equations , 8th ed. , By Werner Kohler, Lee Johnson
Elementary Differential Equations , Penny , 5th ed
Elementary Differential Equations , Werner Kohler & Lee Johnson , 1st ed
Elementary Differential Equations 8th edition by Boyce
Elementary Differential Equations And Boundary Value Problems, 7Th Edition by Boyce And Diprima
Elementary Differential Equations and Elementary Differential Equations with Boundary Value Problems , William F. Trench , 2000
Elementary Linear Algebra with Applications 9th by Howard Anton, Chris Rorres
Elementary Linear Algebra With Applications 10E , Howard Anton, Chris Rorres
Elementary Mechanics and Thermodynamics by Jhon W. Norbury , 1st ed
Elementary Number Theory and Its Applications, 5th edition by Kenneth H. Rosen
Elementary Number Theory and Its Applications, 6th Ed. By Kenneth H. Rosen
Elementary Principles of Chemical Processes 3rd edition by Richard M. Felder,Ronald W. Rousseau
Elementary statistics Using the Graphing Calculator , Mario F. Triola 2005
Elements of Chemical Reaction Engineering, 3rd Edition by H. Scott Fogler
Elements of Deductive Inference , By Joseph Bessie and Stuart Glennan , 1st ed
Elements of electromagnetics 2nd edition by sadiku
Elements of electromagnetics 3rd edition by sadiku
Elements of Power System Analysis 4th edition by William D. Stevenson
Embedded Microcomputer Systems Real Time Interfacing 2nd Edition by Jonathan W. Valvano
Energy Science ,Principles, Technologies, and Impacts ,Andrews & Jelley ,3rd ed
Energy Systems Engineering evaluation and implementation , Francis M Vanek and Louis D Albright , 1st ed
Engineering Circuit Analysis 6th edition by Hayt
Engineering Circuit Analysis 7th edition by Hayt
Engineering Electromagnetics - 7th Ed. - Hayt
Engineering Electromagnetics 2d Edition by Nathan Ida
Engineering Electromagnetics 6th Edition by William H. Hayt Jr. and Hohn A. Buck
Engineering Fluid Mechanics 7th edition by Clayton T. Crowe, Donald F. Elger & John A. Roberson
engineering materials science , By milton ohring , 1st ed
Engineering Mathematics 4th edition by John Bird
Engineering Mathematics 4th Edition by NEWNES
Engineering Mechanic STATICS 10th Ed. R.C. Hibbeler
Engineering Mechanics - Dynamics 2 Edition by Riley and Sturges
Engineering Mechanics - Dynamics 11th edition by R. C. Hibbeler
Engineering Mechanics - STATICS 4th E - Bedford and Fowler
Engineering mechanics - statics 10th edition by R. C. Hibbeler
engineering mechanics dynamics , By Boresi and schmidt , 1st ed
engineering mechanics Dynamics , By Meriam & Kraige & palm , 3rd ed
engineering mechanics Dynamics , By Meriam & Kraige & palm , 5th ed
engineering mechanics dynamics , By Meriam and kraige , 6th ed
Engineering mechanics Dynamics 4th Ed. by Bedford and Fowler
Engineering Mechanics Dynamics 5th J.L Meriam
Engineering Mechanics of Solids , By Egor P. Popov , 2nd ed
engineering mechanics statics , By Bedford and fowler , 5th ed
engineering mechanics statics , By R. C. Hibbeler , 8th ed
engineering mechanics statics , By R. C. Hibbeler , 10th ed
Engineering Mechanics Statics , By R.C.Hibbeler , 12th ed
Engineering Mechanics Statics 6th edition by J.L Meriam
Engineering Mechanics Statics 11th Edition By R.C.Hibbeler
engineering mechanics statics , By Meriam & Kraige & palm , 4th ed
engineering mechanics statics , By Meriam & Kraige & palm , 6th ed
Engineering Probability and Statistics for Engineers and Scientists
Engineering Statistics , By Montgomery , 4th ed
Engineering Vibration , 3rd ed , Daniel J. Inman
English Grammar Understanding the Basics , By Cambridge , 1st ed
Environmental Chemistry , vanLoon & Duffy , 3th ed
Experiments with Economic Principles , By Theodore Bergstrom And J. Miller , 1st ed
Feedback Control of Dynamic Systems 4th edition by G. F. Franklin, J. D. Powell, A. Emami
Field and Wave Electromagnetics 2nd Edition by Wesley Cheng
Field and Wave Electromagnetics International Edition by David K
Financial Accounting , By Harrison and Horngren , 8th ed
Financial Accounting information for decisions , John J. Wild , 4th ed
Financial Instruments , John Hull , 4ed
FINITE MATHEMATICS , Lial , Greenwell & Ritchey , 8th ed
Fluid Mechanics , By Frank M. White , 5th ed
Fluid mechanics , By Merle C. Potter and David C. Wiggert , 3rd ed
Fluid Mechanics , By Russell C. Hibbeler , 1st ed
Fluid Mechanics , Munson , 7th ed
Fluid Mechanics 1st edition by CENGEL
Fluid Mechanics 5th Edition by White
Fluid Mechanics and Thermodynamics of Turbomachinery , By Dixon and Hall , 5th ed
Fluid Mechanics Fundamentals and Applications , By Cengel and Cimbala , 1st ed
Fluid Mechanics With Engineering Applications 10th edition by E. John Finnemore, Joseph B Franzini
Foundations of Colloid Science ,Hunter ,2nd ed
Foundations of International Macroeconomics , By Obstfeld and Rogoff , 1st ed
Foundations of Molecular Structure Determination ,Simon Duckett, Bruce Gilbert, and Martin Cockett ,2nd ed
Fracture mechanics fundamentals and applications 2nd edition by Northam Anderson
Fund of Corporate Finance , by Richard A. Brealey , 4th ed
Fundamental of Electric Circuits 3rd editoin by C. K. Alexander M. N. O. Sadiku
Fundamental of engineering electromagnetics by David Cheng
Fundamentals of Financial Management , James Van Horne and John Wachowicz , 12th ed
Fundamentals of Aerodynamics , By John D. Anderson , 3rd ed
Fundamentals of Analytical Chemistry , By Holler and Crouch , 9th ed
Fundamentals of Applied Electromagnetics , By Faeeaz T. Ulaby
Fundamentals of corporate finance , By Ross and Jordan and Westerfield , 8th ed
Fundamentals of Differential Equations , By Nagle and Saff and Snider , 6th ed
Fundamentals of differential equations , 7ed.-Pearson (2008) , By R. Kent Nagle , Edward B. Saff , A. David Snider
Fundamentals of differential equations , R. Kent Nagle & Edward B. Saff & A. David Snider , 7th ed
Fundamentals of Digital Logic with VHDL Design , By S. Brown and Z. Vranesic , 1st ed
Fundamentals of Digital Logic with Verilog Design 1st edition by S. Brown Z. Vranesic
Fundamentals of Digital Logic with VHDL Design, 1st edt. by S. Brown, Z. Vranesic
Fundamentals of Digital Signal Processing using MATLAB , By Sandra L. Harris and Robert J. Schilling , 2nd Ed
Fundamentals of Electric Circuits , 5th ed
fundamentals of electric circuits , By Alexander and Sadiku , 4th ed
Fundamentals of Electric Circuits 2nd edition by C. K. Alexander M. N. O. Sadiku
Fundamentals of Electric Circuits, 3rd edition by C. K. Alexander M. N. O. Sadiku
fundamentals of engineering thermodynamics , By Moran & Shapiro , 5th ed
fundamentals of engineering thermodynamics , By Moran & Shapiro , 6th ed
Fundamentals of engineering thermodynamics by m. j. moran h. n. shapiro
Fundamentals of Financial Management , and E. Brigham, J. Houston , 12th ed
Fundamentals of Fluid mechanics 4th edition by Munson
Fundamentals of Fluid Mechanics Student Solutions Manual, 3rd Edition [Student solution manual]
Fundamentals of heat and mass transfer , By Incropera & Lavine & Dewitt & Bergman , 5th ed
Fundamentals of heat and mass transfer , By Incropera & Lavine & Dewitt & Bergman , 6th ed
Fundamentals of Heat and Mass Transfer 4th edition by Incropera & Dewitt
Fundamentals of logic design 5th edition by Charles Roth
Fundamentals of Machine Component Design - 3rd edition by Robert C. Juvinall and Kurt M. Marshek
Fundamentals of Machine Component Design 4th edition by Robert C. Juvinall, Kurt M. Marshek
Fundamentals of Machine Component Design , By R. Juvinall. K. Marshek , 1st ed
Fundamentals of Machine Elements , By Steven Schmid and Bernard Hamrock and Bo. Jacobson , 2nd ed
fundamentals of manufacturing , By philip D. Rufe , 2nd ed
FUNDAMENTALS OF MODERN MANUFACTURING , (MATERIALS, PROCESSES, AND SYSTEMS) , By MIKELL P. GROOVER , 2nd ed
FUNDAMENTALS OF MODERN MANUFACTURING , (MATERIALS, PROCESSES, AND SYSTEMS) , By MIKELL P. GROOVER , 3rd ed
FUNDAMENTALS OF MODERN MANUFACTURING , (MATERIALS, PROCESSES, AND SYSTEMS) , By MIKELL P. GROOVER , 4th ed
Fundamentals of Momentum, Heat, and Mass Transfer , By Welty and Wicks and Wilson and Rorrer , 5th ed
Fundamentals of Organic Chemistry , By Solomon , 5th ed
Fundamentals of Physics (Extended) , by Halliday , Resnick & J. Walker , 9th ed , pp.1643, (Wiley, 2011)
Fundamentals of Physics 7th edition by Halliday, Resnick and Walker
Fundamentals of physics 8th edition by Halliday, Resnick and Walker
Fundamentals of Physics Extended , By Halliday and Resnick , 8th ed
fundamentals of physics Vol.1 vol.2 , By Halliday and Resnick , 6th ed
Fundamentals of Power Electronics 2nd edition by R.W. Erickson
Fundamentals of Power Semiconductor Devices 1st Ed. by B. Jayant Baliga
Fundamentals of Quantum Mechanics for solid state electronics and optics , By C.L. Tang , 1st ed
Fundamentals of signals and systems , By Michael J. Roberts . 1st ed
Fundamentals of Signals and systems using web and matlab third edition by Edward W. Kamen, Bonnie S Heck
Fundamentals of Solid-State Electronics by Chih-Tang Sah
Fundamentals of Thermal Fluid Sciences by Yunus A. Cengel, Robert H. Turner, Yunus Cengel, Robert Turner
Fundamentals of Thermodynamics by Richard Sonntag Claus Borgnakke Gordon Van Wylen
Fundamentals of Wireless Communication by Tse and Viswanath
General Chemistry , By Ebbing and Gammon , 10th ed
General Chemistry, Principles and Modern Applications , By Petrucci and Harwood and Herring , 8th ed
General, Organic, and Biological Chemistry: Structures of Life , By Karen C. Timberlake , 2nd ed
Guide for Microprocessors and Interfacing , By Douglas Hall , 2nd ed
Heat and Mass transfer A practical Approach , Yunus A. Cengel , 3rd ed
Heat Transfer A Practical Approach 2nd edition by Yunus A. Cengel, Yunus Cengel
How English Works A Grammar Handbook with Readings Instructor's Manual by Ann Raimes
Hydraulics in Civil and Environmental Engineering , 4th ed , by Chadwick & Morfett
Heating ventilating and air conditioning Analysis and Design , By McQuiston and Parker and Spitler , 6th ed
Inorganic Chemistry ,Almond, Spillman & Page
Interfacial Science An Introduction ,Geoffrey Barnes and Ian Gentle ,2nd ed
Intermediate Accounting , By Kieso , 13th ed
International Trade , By Robert Feenstra and Alan Taylor , 2nd ed
Introduction to Abstract Algebra, Solutions Manual , By W. Keith Nicholson , 4th ed , 2012
Introduction to Algorithms 2nd edition by Philip Bille
Introduction to Algorithms 2nd Edition by Thomas H. Cormen
INTRODUCTION TO chemical engineering thermodynamics , By J.R. Elliot and C.T. Lira , 1st ed
INTRODUCTION TO chemical engineering thermodynamics , Smith & Van Ness , 7th ed
Introduction to chemical engineering thermodynamics 6th edition by j. m. smith
Introduction to Communication Systems , 2nd ed , By Ferrel G. Stremler
Introduction to Communication Systems 3rd Edition by Stremler
Introduction to Computing and Programming with JAVA-A Multimedia Approach 1st Edition by Mark Guzdial and Barbara Ericson
Introduction to Econometrics , By Stock and Watson , 1st ed
Introduction to electric circuits 6th edition by Dorf Svaboda
Introduction to Electric Circuits 7th edition by Richard C. Dorf & James A. Svoboda
Introduction to elementary particles by D.Griffiths
Introduction to Eletrodynamics 3rd ed By David J. Griffiths
Introduction to Environmental Engineering and Science 3rd Edition
Introduction to Ergonomics By Robert Bridger
introduction to fluid mechanics , by Fox and McDonald , 7th ed
introduction to fluid mechanics , By Munson and young and Huebsch , 5th ed
Introduction to fluid mechanics 5th edition by fox and mcdonald
Introduction to fluid mechanics 6th edition by fox and mcdonald
Introduction To Fourier Optics , Joseph W. Goodman , 3th ed , 2005
INTRODUCTION TO GRAPH THEORY , Douglas B. West , 2nd ed
Introduction to Java Programming 7th edition by Y. Daniel Liang
Introduction to Linear Algebra 3rd Edition By Gilbert Strang
Introduction to Linear Programming 1st Edition by L. N. Vaserstein [student solution manual]
Introduction to Management Accounting , By Charles T. Horngren and Gary L. Sundem and William O. Stratton and David Burgstahler and Jeff Schatzberg , 14th ed
Introduction to Managerial Accounting , By Garrison and Noreen and Brewer , 5th ed
Introduction to Probabilit , By Dimitri P. Bertsekas and John N. Tsitsiklis , 1st ed
Introduction to Probability and Statistics , By Barbara M. Beaver , 12th Ed
Introduction to Probability Models , (10th Ed) , By Sheldon M. Ross
Introduction to Quantum Mechanics (1995) by David J. Griffiths , 2nd
Introduction to Solid State Physics by Charles Kittel
Introduction to Statics and Dynamics , By Ruina & Pratap , 1st ed
Introduction to the Theory of Computation , By Michael Sipser , 1st ed
Introduction to Thermal Systems Engineering , M. Moran. H. Shapiro , 1st ed
Introduction to Thermodynamics and Heat Transfer , yunus A. cengel , 2nd ed
Introduction to VLSI Circuits and Systems John P Uyemura
Introduction to Wireless Systems by P.M. Shankar
Introductory Circuit Analysis , By Robert L. Boylestad , 11th ed
Introductory Econometrics A Modern Approach , Jeffrey M. Wooldridge , 2ed
introductory elements of the chemical process , By Felder & Rousseau
Investment Analysis and Portfolio Management , By Reilly and Brown , 7th ed
Investments Analysis and Management , By Charles P. Jones , 11th ed
IP Telephony Solution guide
IT Networking Labs by Tom Cavaiani
Java How to Program, 5th Edition By Harvey M. Deitel, Paul J. Deitel
Java Programming 10-Minute , By Mark Watson , 1st ed
Journey into Mathematics An Introduction to Proofs (Book and solution manual) by Joseph J. Rotman
KC's Problems and Solutions for Microelectronic Circuits, Fourth Edition by Adel S. Sedra, K. C. Smith, Kenneth C. Smith
Labview for engineers 1st edition by R.W. Larsen
Linear Algebra and Its Applications by David C. Lay
Linear Algebra by Otto Bretscher
Linear Algebra with Applications 6th edition by Leon
Linear circuit analysis 2nd edition by R. A. DeCarlo and P. Lin
Linear circuit analysis Time Domain. phasor. and laplace transform approaches , By DeCarlo and Pen-Min-Lin , 2nd ed
Linear dynamic systems and signals by Zoran Gajic with matlab experiments and power point slides
Linear Systems And Signals 1st edition by B P Lathi
Logic and Computer Design Fundamentals 3rd Edition by Morris Mano & Charles Kime Solutions
Logic and Computer Design Fundamentals 4th Edition by Morris Mano
Logic Computer Desing Fundamentals , Mano and Kime , 2nd ed
machine design an integrated approach , Robert L. Norton , 3rd ed
Machine Elements , By Bernard Hamrock , 1st ed
Macroeconomics -N.G. Mankiw , 5th ed
Managerial Accounting , By Hansen and Mowen , 8th ed
managerial Accounting , Garrison and Noreen and Brewer , 11th ed
managerial Accounting , Garrison and Noreen and Brewer , 13th ed
Managerial Accounting 11th edition by Eric W. Noreen, Peter C. Brewer, Ray H. Garrison
Manufacturing Engineering and Technology , By Serope Kalpakjian and Steven Schmid , 5th ed
Matemáticas para Administración y Economía (Mathematics for Administration and Economics) , By Ernest Haeussler, Richard Paul , 12th ed
Materials and Processes in Manufacturing 9th edition by E. Paul DeGarmo, Solutions Manual by Barney E. Klamecki
Materials Science and Engineering 6th edition by Callister
Materials Science and Engineering 7th edition by Callister
materials science and engineering an introduction By William D. Callister , 6th ed
materials science and engineering an introduction By William D. Callister , 7th ed
Materials Science by Milton Ohring
Mathematical Methods for Physicists Answers to Miscellaneous Problems , By George B. Arfken , 5th ed
Mathematical Methods for Physicists Answers to Miscellaneous Problems , By George B. Arfken , 7th ed
Mathematical Methods for Physics and Engineering 3rd Edition by K. F. Riley, M. P. Hobson
Mathematical Methods in the Physical Sciences , 3rd ed , By Mary L. Boas
Mathematical Models in Biology An Introduction by Elizabeth S. Allman, John A. Rhodes
Mathematical Olympiad in China Problems and Solutions
Mathematical Proofs A Transition to Advanced Mathematics. 2nd Ed By Gary Chartrand, Albert D. Polimeni, Ping Zhang
Mathematical Statistics with Applications , Dennis Wackerly , 7th ed
Mathematical Techniques ,Dominic Jordan and Peter Smith ,4th ed
Mathematics for Administration and Economics , By Ernest Haeussler, Richard Paul , 12th ed
Mathematics for Economists by Carl P. Simon Lawrence Blume
mathematics for physicists , By Susan Lea , 1st ed
Mathematics for Physicists , Lea , 2nd ed
Maths for Chemistry ,Paul Monk and Lindsey J. Munro ,2nd ed
Maths for Science ,Sally Jordan, Shelagh Ross, and Pat Murphy ,2012
MATLAB Programming for Engineers by Stephen J. Chapman, Cengage Learning (m files)
Matrix Analysis and Applied Linear Algebra By Carl D. Meyer [Book and solution manual]
Mechanical Behavior of Materials , By Norman E. Dowling , 3rd ed
Mechanical Design of Machine Elements and Machines 1st Edition by Collins
Mechanical Engineering Design 7th Edition by Shigley
Mechanical Engineering Design 8th edition by Shigley
Mechanical Engineering Design , Shigley , 7th ed
Mechanical Vibrations , By Singiresu S. Rao , 5th ed
Mechanical Vibrations , Singiresu Rao , 3th ed
Mechanical Vibrations , Singiresu Rao , 4th ed
Mechanical Vibrations 3rd edition by Singiresu Rao
Mechanics for engineers dynamics , By russell C. Hibbeler , 13th ed
Mechanics of Fluids , By Victor Streeter , 9th ed
mechanics of fluids , by Irving H. Shames , 4th ed
Mechanics of Fluids 5th Edition by Frank White
Mechanics of Fluids 8th edition by Massey
Mechanics of Materials , an integrated learning , Timothy A. Philpot , 2nd ed
mechanics of materials , By Beer and Johnston and Dewolf , 3rd ed
mechanics of materials , By ferdinand P. Beer & E. Russell & Dewolf , 4th ed
mechanics of materials , By ferdinand P. Beer & E. Russell & Dewolf , 5th ed
mechanics of materials , By ferdinand P. Beer & E. Russell & Dewolf , 6th ed
mechanics of materials , By Hibbeler , 5th ed
Mechanics of Materials , By Hibbeler , 8th ed
Mechanics of Materials , By James Gere and Barry Goodno , 7th ed
Mechanics of Materials , By James M. Gere & Stephen Timoshenko , 5th ed
Mechanics of Materials , By R. C. Hibbeler , 4th ed
Mechanics of Materials , By R. C. Hibbeler , 9th ed
mechanics of materials , By Riley & Sturges , 6th ed
Mechanics of Materials 3rd Edition by Beer
Mechanics of Materials 4th edition By Hibbeler Chapter 12
mechanics of materials 6th edition by James Gere
Mechanics of Materials 6th edition by R. C. Hibbeler
Mechanics of Materials 7th edition by R. C. Hibbeler
mechanics of materials james gere 5th edition
mechanics of solids , by Carl T. F. Ross , 1st ed
Microcomputers Systems Real Time Interfacing , By Jonathan W. Valvano , 2nd ed
Microelectronic Circuit Design , By Richard C. Jaeger and Travis N. Blalock , 4th ed
Microelectronic Circuit Design 2nd Ed. - Richard C. Jaeger and Travis N. Blalock
Microelectronic Circuit Design 3rd Ed. - Richard C. Jaeger and Travis N. Blalock
Microelectronic Circuit Design 3rd edition by R. Jaeger
Microelectronic Circuits , By Adel S. Sedra, Kenneth C. Smith , 7th ed
Microelectronic circuits 5th edition by Adel S. Sedra, Kenneth Smith
Microelectronic Circuits and Devices , Mark N. Horenstein , 2nd ed
Microelectronic Circuits , By Sedra and Smith , 4th ed
Microelectronics 1 & 2 by Dr. Wen Ching Chang
Microelectronics Circuit Analysis and Design , Donald A. Neamen , 3rd ed
Microelectronics Circuit Analysis and Design , Donald A. Neamen , 4th ed
Microprocessors and Interfacing-Programming and Hardware 2nd Edition by Douglas V. Hall
Microwave and RF design of wireless systems by Pozar
Microwave Engineering 2nd edition by David M Pozar
Microwave Engineering 3rd Ed. by David M Pozar
Microwave transistor amplifiers analysis and design 2nd edition by Guillermo Gonzalez
Millman - Microelectronics digital and analog circuits and systems by Thomas V. Papathomas
Mobile Communications 2nd Ed. by Jochen H. Schiller
Modeling and Analysis of Dynamic Systems , By C. Close, D. Frederick, J. Newell , 3rd ed
modern control engineering , By Katsuhiko Ogata , 4th ed
modern control engineering , By Katsuhiko Ogata , 5th ed
Modern Control Engineering 3rd edition by K. OGATA
Modern Control Systems 11th edition by Richard C. Dorf Robert H Bishop
Modern Control Systems, 12th Edition By Richard C. Dorf, Robert H. Bishop
Modern Digital and Analog Communications Systems 3rd edition by B P Lathi
Modern Digital Signal Processing by Roberto Cristi
Modern physics , By Forsci , 2nd ed
Modern Physics , By Serway , 3rd ed
Modern physics , By thornton and rex , 3rd ed
Modern physics By Randy Harris
Molecular Quantum Mechanics ,Peter W. Atkins and Ronald S. Friedman ,5th ed
Multivariable Calculus , by Dan clegg & Barbara Frank & James Stewart , 6th ed
Multivariable Calculus , Dan Clegg & Barbara Frank , 5th ed
Multivariable Calculus 4th edition by Stewart Dan Clegg Barbara Frank
Musculoskeletal Function An Anatomy and Kinesiology Laboratory Manual by Dortha Esch Esch
Nanoengineering of Structural, Functional and Smart Materials
Network Flows Theory, Algorithms, And Applications by Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin
Network Simulation Experiments Manual (The Morgan Kaufmann Series in Networking) by Emad Aboelela
networks flows theory algorithms and applications , By Ahuja and Magnant and Orlin , 1st ed
Neural networks and learning machines 3rd edition by Simon S. Haykin
NMR The Toolkit (How Pulse Sequences Work) ,Peter Hore, Jonathan Jones, and Stephen Wimperis ,2nd ed
Nonlinear Programming 2nd Edition by Dimitri P. Bertsekas
Nuclear Magnetic Resonance ,Peter Hore ,2nd ed
Numerical Analysis 8th ed. By Richard L. Burden, J Douglas Faires
Numerical Methods for Engineers , By Chapra and canale , 5th ed
Numerical Methods for Engineers , by Steven C. Chapra & Raymond P.Canale , 6th ed
Numerical Methods For Engineers 4th edition by Chapra
Numerical Solution of Partial Differential Equations An Introduction by K. W. Morton, D. F. Mayers
Operating Systems 4th Edition by Stallings
operations research , By Hamdy A. Taha , 9th ed
Optimal Control Theory An Introduction By Donald E. Kirk
Optimization of chemical processes by Edgar himmelblau
Options, Futures and Other Derivatives 5th Edition by John Hull, John C. Hull
Options, Futures and Other Derivatives, 4th Edition by John Hull, John C. Hull
Organic Chemistry , 6th Ed (2006) , L.G. Wade
Organic Chemistry , By John McMurry , 8th ed
Organic Chemistry , By Jonathan Clayden , Nick Greeves , Stuart Warren and Peter Wothers , 1st ed
Organic Chemistry , By Paula Yurkanis , 5th ed
Organic Chemistry , David Klein , 1st ed
Organic Chemistry ,Cook & Cranwell
Organic Chemistry ,Tadashi Okuyama and Howard Maskill ,2013
Organic chemistry 4th edition by Robert C. Athkins and Francis Carey
Organic chemistry 5th edition by Robert C. Athkins and Francis Carey
Organic Chemistry 7th Edition by Susan McMurry
Organic Chemistry 8th ed. , L.Wade , J. Simek
Partial Differential Equations With Fourier Series And Boundary Value Problems 2nd Edition By Nakhle H.Asmar
Partial Differential Equations , Nakhlé H. Asmar , 2nd ed
Physical Chemistry , By Peter Atkins & Julio de Paula , 7th ed
physical chemistry , peter Atkins , 9th ed
Physical Chemistry ,Elliott & Page
Physical Chemistry ,Quanta, Matter, and Change ,Peter Atkins, Julio de Paula, and Ronald Friedman ,2nd ed
Physical Chemistry 7th edition by Peter Atkins and Julio de Paula
Physical Chemistry 8th edition by Peter Atkins and Julio de Paula
Physical Chemistry by Prem Dhawan
Physics , Cutnell & Johnson , 9th ed
physics , James s.Walker , 2nd ed
Physics 5th Edition by Halliday , Resnick , Krane
Physics for Engineering and Science , By Hans Ohanian , 3rd ed
Physics for Science and Engineering , By Raymond A. Serway , 6th ed
Physics for Science and Engineering , By Raymond A. Serway , 7th ed
Physics for Science and Tecnology , By Paul A. Tipler , 4th ed
Physics for Scientist and Engineers 1st edition by Knight
physics for scientists & engineers with modern physics (vol.1 , vol.2 , vol.3) , Giancoli , 4th ed
Physics for Scientists and Engineers 5th edition by Paul A. Tipler, Gene Mosca
Physics for Scientists and Engineers 5th edition serway
Physics For Scientists And Engineers 6th Edition By Serway And Jewett
physics for scientists and engineers a strategic approach with modern physics , By Randall D. Knight , 1st ed
Physics for Scientists and Engineers with Modern Physics , By Serway & Jewett , 9th ed
Physics for Scientists and Engineers with Modern Physics 3rd Edition
physics for scients and engineers , By Tipler and Mosca , 5th ed
Physics Principles with Applications , By D. Giancoli , 3rd ed
Physics Principles with Applications , By D. Giancoli , 4th ed
Physics Principles with Applications , By Giancoli , 6th ed
physics Volume 1 , By Resnick and Halliday and Krane , 5th ed
PIC Microcontroller and Embedded Systems 1st edition by Mazidi [Book and solution manual]
Piping and Pipeline Calculations Manual Construction, Design Fabrication and Examination by Phillip Ellenberger
power electronics , circuits, devices and applications , By Muhammad H. Rashid , 3rd ed
Power Electronics Handbook , By Muhammad H. Rashid , 2nd ed
Power Electronics-Converters, Applications and Design 2nd edition by Ned Mohan, Tore M. Undeland and William P. Robbins
Power Electronics-Converters, Applications and Design 3rd edition by Ned Mohan, Tore M. Undeland and William P. Robbins
power system analysis and design , glover and sarma , 3rd ed
Power System Analysis by John Grainger and William Stevenson
Power Systems Analysis and Design 4th Edition by Glover J. Duncan, Sarma Mulkutla .S
Precalculus Essentials , Michael Sullivan , 7th Edition
principles and applications of electrical engineering , By Giorgio Rizzoni , 1st ed or 3rd ed
Principles and Applications of Electrical Engineering 2nd Ed. by Giorgio Rizzoni
Principles and Applications of Electrical Engineering 4th edition by Giorgio Rizzoni
Principles and Practices of Automatic Process Control , By Smith & corripio , 3rd ed
Principles Heat Transfer , By Frank kreith & Raj M. Manglik & Mark S. bohn , 7th ed
Principles of Communications Systems, Modulation and Noise 5th Edition by William H. Tranter and Rodger E. Ziemer
Principles of Computer Hardware ,Alan Clements ,4th ed
Principles of Corporate Finance , Brealey , 7ed
Principles of Digital Communication and Coding 1st edition by Andrew J. Viterbi and Jim K. Omura
principles of electrical engineering materials and devices , By S.O. Kasap , 2nd ed
Principles of Electronic Materials and Devices 3rd edition By Safa O. Kasap
Principles of Geotechnical Engineering , By Braja M. Das , 7th ed
Principles of Managerial Finance , By Lawrence J. Gitman , 1st ed
Principles of Neurocomputing for Science and Engineering 1st Edition Fredric M. Ham and Ivica Kostanic
Principles of Physics 4th edition by Serway and Jewett
Pro SQL Server 2012 BI Solutions , By Randal Root. Caryn Mason , 1st ed
Probability and Random Processes for Electrical Engineering by Alberto Leon-Garcia
Probability and Statistical Inference, Seventh Edition By Robert Hogg, Elliot A. Tanis
probability and statistics for engineering and the sciences , By Jay L. Devore , 6th ed
Probability and Statistics for Engineering and the Sciences by Jay L. Devore
Probability and Statistics for Engineers , Miller and Freund's , 7th ed
Probability and Statistics for Engineers and Scientists , 8th Edition by Sharon Myers , Keying Ye, Walpole
Probability and Statistics for Engineers and Scientists 3rd Edition by Anthony Hayter
Probability and Statistics for Engineers and Scientists Manual by HAYLER
Probability and Stochastic Processes 2nd edition by David J. Goodman
Probability Random Variables and Random Signal Principles , 4th Edition , by Peyton Peebles Jr
Probability Random Variables And Stochastic Processes 4th edition by Papoulis
probability random varianles and random signal , By Peyton Peebles , 4th ed
Probability statistics and random processes for eectrical engineering , By Alberto Leon , 1st ed
Process Control Instrumentation Technology, 8th edition by Johnson
Process Dynamics and Control 2nd edition by Dale E. Seborg
Process systems analysis and control - Donald r. Coughanowr
Programmable logic controllers 1st edition by Rehg & Sartori
Project Management Casebook by Karen M. Bursic, A. Yaroslav Vlasak
Public Finance, 7th Edition, by Harvey S. Rosen
Quantitative Analysis for Management , Barry Render & Stair & Hanna , 10th ed
Quantum Chemistry , By Ira N, Levine , 7th ed
Quantum Field Theory Problem Solutions 2007 by Mark Srednick
Quantum Mechanics , B. Brinne , 1st ed
Quantum Physics 3rd Edition by Stephen Gasiorowicz
RF circuit Design Theory and Application by Ludwig bretchkol
RF Circuit Design Theory And Applications , R. Ludwig & P. Bretchko , 1st ed
Inorganic Chemistry , Weller, Overton, Rourke & Armstrong , 6th ed
Scientific Computing with Case Studies 1st Edition by Dianne P. O’Leary
Sears and Zemansky's University Physics with Modern Physics , 12th Edition
Semiconductor Device Fundamentals by Robert Pierret
Semiconductor Devices Physics and Technology , 2nd ed , By S. M. SZE
Semiconductor Manufacturing Technology 1st Edition by Michael Quirk and Julian Serda
Semiconductor Physics and Devices Basic Principles , By Donald A. Neamen , 3rd edition
Separation Process Principles , By J. D. Seader, Ernest J. Henley, D. Keith Roper , 3rd ed
Shigleys Mechanical Engineering Design , R. Budynas. J. Nisbett , 9th ed
Signal Processing and Linear Systems by B P Lathi
Signal Processing First - Mclellan , Schafer and Yoder
Signals and Systems 2nd edition Oppenheim Willsky
Signals and Systems 2003 by M.J. Roberts
Signals and Systems Analysis of Signals Through Linear Systems by M.J. Roberts
Signals and Systems, Second Edition by Simon Haykin, Barry Van Veen
Signals, Systems and Transforms 4th edition by Phillips, Parr & Riskin
Simply C# ( An Application-Driven Tutorial approach ) , By Deitel Deitel & Hoey & Yaeger , 1st ed
Single Variable Calculus , 8th ed , Jeffery Cole, Daniel Drucker, Daniel Anderson , 2016
Sipser's Introduction to the Theory of Computation By Ching Law
Soil Mechanics concepts and applications , William Powrie , 2nd ed
Solid State Electronic Device 6th edition by Ben Streetman
solid state electronic devices , By Streetman and Banerjee , 5th ed
Solution to Skill - Assessment Exercises to Accompany Control Systems Engineering 3rd edt. by Norman S. Nise
Starting Out with Java 5 Lab Manual to Accompany Starting out with Java 5 by Diane Christen
Static Mechanical Engineering , By Arthur P. Boresi and Richard J. Schmidt , 1st ed
Statistical digital signal processing and modeling by monson hayes
Statistical Inference , 2ed (2001) , By Casella & Berger
Statistical Physics of Fields by Mehran Kardar
statistical physics of particles by Mehran Kardar
Statistics for Business and Economics , By David R. Anderson & Dennis J. Sweeney & Thomas A. Williams , 8th ed
Statistics for Business and Economics , By David R. Anderson & Dennis J. Sweeney & Thomas A. Williams , 9th ed
Statistics for Engineering and the Sciences , by William M. Mendenhall, Sincich, Boudreau , 6th ed 2017
Statistics for Engineers and Scientists , By William Navidi , 1st ed
statistics for engineers and scientists ,William Navidi ,4th ed
Steel Design , By William T. Segui , 5th ed
Strength Of Materials , 4Th Ed , Ferdinand L Singer , Andrew Pytel
Strength of Materials , By Dr. R. K. Bansal , 4th ed
Strength of Materials , By G. H. Ryder , 3rd ed
Structural Analysis , Aslam Kassimali , 4th ed
Structural analysis , By Asslam Kassimali , 3rd ed
structural analysis , By Hibbeler , 3rd ed
structural analysis , By Hibbeler , 5th ed
structural analysis , By Hibbeler , 8th ed
Structural Analysis , R. C. Hibbeler ,8th ed
Structural analysis 5th edition by Hibbeler
Structural and Stress Analysis , Dr. T.H.G. Megson , 2nd ed
Student Solution Manual for Essential Mathematical Methods for the Physical Sciences by K. F. Riley, M. P. Hobson
SUPPLEMENTARY PROBLEMS FOR BASIC PRINCIPLES AND CALCULATIONS IN CHEMICAL ENGINEERING , David M. Himmeblau , 6TH ED
System Dynamics , By William J. PalmIII , 2nd ed
System Dynamics , By William Palm III , 1st ed
System Dynamics 3rd Ed. by Katsuhiko Ogata
System Dynamics and Response , 1st Edition by S. Graham Kelly
Telecommunications Demystified , by Carl Nassar , 1st ed
The 8051 Microcontroller 4th Ed. by I. Scott MacKenzie and Raphael C.-W. Phan
The 8088 and 8086 Microprocessors Programming, Interfacing, Software, Hardware, and Applications (4th Edition) By Walter A. Triebel, Avtar Singh
The ARRL Instructor's Manual for Technician and General License Courses By American Radio Relay League
The Art of Electronics by Thomas C. Hayes & Paul Horowitz [student solution manual]
The C++ Programming Language , Special 3rd Edition , by Bjarne Stroustrup
The Calculus 7 by Louis Leithold
The Chemistry Maths Book ,Erich Steiner ,2nd ed
The Cosmic Perspective , Bennett & Donohue & Schneider & Voit , 3rd ed
The Dawn Quantum Chemistry , By Donald Allan McQuarrie , 1st ed
The Econometrics of Financial Markets , By P. Adamek , 1st ed
The Language of Machines, An Introduction to Computability and Formal Languages by Robert W. Floyd, Richard Beigel
The Science and Engineering of Materials 4th edition by Donald R. Askeland Frank Haddleton
The Structure and Interpretation of Signals and Systems 1st Edition by Edward A. Lee and Pravin Varaiya
Theory of Machines and Mechanisms , Joseph E. Shigley , 3rd ed
theory of vibration with applications , william Thomson , 3rd ed
Thermodynamics An Engineering Approach , By Yunus A. Cengel & Michael A. Boles & McGraw-Hill, 2011 , 5th ed
Thermodynamics An Engineering Approach , By Yunus A. Cengel & Michael A. Boles & McGraw-Hill, 2011 , 6th ed
Thermodynamics An Engineering Approach , By Yunus A. Cengel & Michael A. Boles & McGraw-Hill, 2011 , 7th ed
Thermodynamics , By Sandler , 1st ed
Thermodynamics An Engineering Approach 5th edition by Yunus A Cengel and Michael A Boles
Thermodynamics An Engineering Approach 6th edition by Yunus A Cengel and Michael A Boles
Thomas Calculus 11th edition by George B.Thomas
Thomas' Calculus, Eleventh Edition (Thomas Series) By George B. Thomas, Maurice D. Weir, Joel D. Hass, Frank R. Giordano
Transport Phenomena 2nd edition by Bird, Stewart and Lightfoot
Transport Phenomena in Biological Systems 2nd Edition By George A. Truskey, Fan Yuan, David F. Katz
Transport Phenomena , By Bird and Stewart and Lightfoot , 2nd ed
Transport Processes of Unit Operations , By C. J. Geankopolis , 4th ed
TRIGONOMETRY , By Margaret L. Lial & John Hornsby & David I. Schneider , 8th ed
understanding basic statistics , By charles henry brase , 4th ed
Unit operations of chemical engineering 7th edition by Warren l. Mccabe
University physics 11th edition by Young and Freedman
University Physics Bauer , Westfall , 1st ed
University Physics with Modern Physics , 13th ed 2011 , by Hugh D. Young, Roger A. Freedman, A. Lewis Ford (ISBN 9780321697066)
University Physics with Modern Physics , Young , 13th ed
university physics with modern physics Vol.1 , By Sears and ford and freedman , 11th ed
university physics with modern physics Vol.2 , By Sears and ford and freedman , 11th ed
Vector Calculus , Linear Algebra and Differential Forms 2nd Edition by Hubbard and Burke
Vector Mechanics For Engineers Statics , By R.C.Hibbeler , 12th ed
Vector Mechanics for Engineers , Dynamics 6th edition by Beer
vector mechanics for engineers Dynamics , By Beer & Johnston , 7th ed
Vector Mechanics for Engineers Dynamics , By Beer & Johnston , 10th ed
Vector Mechanics for Engineers Dynamics 7th Edition by Beer
vector mechanics for engineers statics , by Hibbeler , 6th ed
Vector Mechanics for Engineers STATICS , By Beer & Johnston , 10th ed
Vector Mechanics for Engineers Statics and Dynamics 8th edition by Beer
vector mechanics for engineers statics , By Beer & Johnston , 7th ed
Vector Mechanics for Engineers Statics , By Beer & Johnston , 9th ed
Vector Mechanics for Engineers Statics , By Beer & Johnston , 10th ed
Vector Mechanics for Engineers Statics , By Hibbeler , 12th ed
Vector Mechanics Statics 7th Edition by Beer and Johnston
VHDL for Engineers International Edition by Kenneth L. Short
Walter Rudin's Principles of Mathematical Analysis ,McGraw Hill Science Engineering Math (1976)
Wireless Communications 1st Ed. by A. F. Molisch
Wireless Communications 1st Ed. by Andrea Goldsmith
Wireless Communications Principles and Practice 2nd Edition - Theodore S. Rappaport
X-Ray Crystallography ,William Clegg ,2nd ed
Zill's a First Course in Differential Equations with Modeling Applications (7th ed.) and Zill & Cullen's Differential Equations with Boundary-Value Problems (5th ed.)
====================================
If your request isn't in the list, we will find it for you, just …
https://math.stackexchange.com/questions/1460922/what-is-the-probability-that-a-will-win-atleast-three-of-the-next-four-games | # What is the probability that A will win at least three of the next four games?
In a set of five games in tennis between two players A and B, the probability that a player wins a game is $\frac{2}{3}$ if he has won the earlier game. A wins the first game. What is the probability that A will win at least three of the next four games?
Since the probability of $A$ winning at least three of the next four games is asked, this is possible if $A$ wins the 2nd, 3rd, 4th games; or the 2nd, 3rd, 5th games; or the 3rd, 4th, 5th games; or the 2nd, 4th, 5th games; or the 2nd, 3rd, 4th, 5th games.
Let the probability of a player winning a game be $p$ if he has not won the earlier game. If he has won the earlier game, then the probability of winning is $\frac{2}{3}$, as given in the question.
So the required probability is $\frac{2}{3}\times\frac{2}{3}\times\frac{2}{3}+\frac{2}{3}\times\frac{2}{3}\times p+p\times\frac{2}{3}\times\frac{2}{3}+\frac{2}{3}\times p \times \frac{2}{3}+\frac{2}{3}\times\frac{2}{3}\times \frac{2}{3}\times\frac{2}{3}$
But I don't know the value of $p$, since it is not given in the question. I put $p=\frac{1}{2}$ but it gives the wrong answer.
The correct answer is $\frac{4}{9}$. I don't know where I have made the mistake, or whether my logic is incorrect. Please help me. Thanks.
$p$ is just the probability of winning a game after losing the previous one: the opponent won the earlier game, so he wins the next with probability $\frac{2}{3}$, leaving $p = 1-\frac{2}{3} = \frac{1}{3}$. One more correction: each three-win case must also be multiplied by the probability of losing the remaining game, which is $\frac{1}{3}$ (in every case, the lost game follows a win). With both fixes the sum becomes $\frac{1}{3}\left(\frac{8}{27}+\frac{4}{27}+\frac{4}{27}+\frac{4}{27}\right)+\frac{16}{81}=\frac{36}{81}=\frac{4}{9}$.
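As a quick sanity check, here is a short brute-force enumeration of games 2-5 (a sketch added for verification, not part of the original answer; it just walks the Markov chain over A's results):

```python
from itertools import product
from fractions import Fraction

WIN_AFTER_WIN = Fraction(2, 3)   # P(A wins a game | A won the previous game)
WIN_AFTER_LOSS = Fraction(1, 3)  # p = P(A wins a game | A lost the previous game)

total = Fraction(0)
for outcome in product([True, False], repeat=4):  # A's results in games 2-5
    prob, prev_won = Fraction(1), True            # A won game 1
    for won in outcome:
        p_win = WIN_AFTER_WIN if prev_won else WIN_AFTER_LOSS
        prob *= p_win if won else 1 - p_win
        prev_won = won
    if sum(outcome) >= 3:                         # at least three wins
        total += prob

print(total)  # 4/9
```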
https://www.alignmentforum.org/posts/Zzzjviz5FshbQa28f/proofs-theorems-6-8-propositions-2-3
Theorem 6: Causal IPOMDP's: Any infra-POMDP with transition kernel which fulfills the niceness conditions and always produces crisp infradistributions, and starting infradistribution , produces a causal belief function via .
So, first up, we can appeal to Theorem 4 on Pseudocausal IPOMDP's, to conclude that all the are well-defined, and that this we defined fulfills all belief function conditions except maybe normalization.
The only things to check are that the resulting is normalized, and that the resulting is causal. The second is far more difficult and makes up the bulk of the proof, so we'll just dispose of the first one.
T6.1 First, , since it's crisp by assumption, always has for all constants . The same property extends to all the (they're all crisp too) and then from there to by induction. Then we can just use the following argument, where is arbitrary.
The last two inequalities were by mapping constants to the same constant regardless of , and by being normalized since it's an infradistribution. This same proof works if you swap out the 1 with a 0, so we have normalization for
T6.2 All that remains is checking the causality condition. We'll be using the alternate rephrasing of causality. We have defined as
and want to derive our rephrasing of causality, that for any and and and family , we have
Accordingly, we get to fix a , and family of functions , and assume
Our proof target is
Applying our reexpression of in terms of and and projection, we get
Undoing the projections on both sides, we get
Unpacking the semidirect product, we get
Now, we can notice something interesting here. By concavity for infradistributions, we have
So, our new proof target becomes
Because if we hit that, we can stitch the inequalities together. The obvious move to show this new result is to apply monotonicity of infradistributions. If the inner function on the left hand side was always above the inner function on the right hand side, we'd have our result by infradistribution monotonicity. So, our new proof target becomes
At this point, now that we've gotten rid of , we can use the full power of crisp infradistributions. Crisp infradistributions are just a set of probability distributions! Remember that the semidirect product, , can be viewed as taking a probability distribution from , and for each , picking a conditional probability distribution from , and all the probability distributions of that form make up .
In particular, the infinitely iterated semidirect product here can be written as a repeated process where, for each history of the form Murphy gets to select a probability distribution over the next observation and state from (as a set of probability distributions) to fill in the next part of the action-observation-state tree. Accordingly every single probability distribution in can be written as interacting with an expanded probabilistic environment of type (which gives Murphy's choice wherever it needs to pick the next observation and state), subject to the constraints that
and
We'll abbreviate this property that always picks stuff from the appropriate set and starts off picking from as
Our old proof goal of
can be rephrased as
and then rephrased as
And now, by our earlier discussion, we can rephrase this statement in terms of picking expanded probabilistic environments.
Now, on the right side, since the set we're selecting the expanded environments from is the same each time and doesn't depend on , we can freely move the inf outside of the expectation. It is unconditionally true that
So now, our proof goal shifts to
because if that were true, we could pair it up with the always-true inequality to hit our old proof goal. And we'd be able to show this proof goal if we showed the following, more ambitious result.
Note that we're not restricting to compatibility with and , we're trying to show it for any expanded probabilistic environment of type . We can rephrase this new proof goal slightly, viewing ourselves as selecting an action and observation history.
Now, for any expanded probabilistic environment , there is a unique corresponding probabilistic environment of type where, for any policy, . This is just the classical result that any POMDP can be viewed as a MDP with a state space consisting of finite histories, it still behaves the same. Using this swap, our new proof goal is:
Where is an arbitrary probabilistic environment. Now, you can take any probabilistic environment and view it as a mixture of deterministic environments . So we can rephrase as, at the start of time, picking a from for some particular . This is our mixture of deterministic environments that makes up . This mixture isn't necessarily unique, but there's always some mixture of deterministic enivironments that works to make where is arbitrary. So, our new proof goal becomes:
Since is just a dirac-delta distribution on a particular history, we can substitute that in to get the equivalent
Now we just swap the two expectations.
And then we remember that our starting assumption was
So we've managed to hit our proof target and we're done, this propagates back to show causality for .
Theorem 7: Pseudocausal to Causal Translation: The following diagram commutes when the bottom belief function is pseudocausal. The upwards arrows are the causal translation procedures, and the top arrows are the usual morphisms between the type signatures. The new belief function is causal. Also, if , and is defined as for any history containing Nirvana, and for any nirvana-free history, then .
T7.0 Preliminaries. Let's restate what the translations are. The notation is that "defends" defends if it responds to any prefix of which ends in an action with either the next observation in , or Nirvana, and responds to any non-prefixes of with Nirvana.
Also, we'll write if is "compatible" with and . The criterion for this is that , and that any prefix of which ends in an action and contains no Nirvana observations must either be a prefix of , or only deviate from in the last action. An alternate way to state that is that is capable of being produced by a copolicy which defends , interacting with .
With these two notes, we introduce the infrakernels , and .
is total uncertainty over such that defends
And is total uncertainty over such that is compatible with and
Now, to briefly recap the three translations. For first-person static translation, we have
Where is the original belief function.
For third-person static translation, we have
Where is the original infradistribution over .
For first-person dynamic translation, the new infrakernel has type signature
Ie, the state space is destinies plus a single Nirvana state, and the observation state is the old space of observations plus a Nirvana observation. is used for the Nirvana observation, and is used for the Nirvana state. The transition dynamics are:
and, if , then
and otherwise,
The initial infradistribution is , the obvious injection of from to .
The obvious way to show this theorem is that we can assume the bottom part of the diagram commutes and are all pseudocausal, and then show that all three paths to produce the same result, and it's causal. Then we just wrap up that last part about the utility functions.
Fortunately, assuming all three paths produce the same belief function, it's trivial to show that it's causal. is an infradistribution since is, and is a crisp infrakernel, so we can just invoke Theorem 6 that crisp infrakernels produce causal belief functions to show causality for . So we only have to check that all three paths produce the same result, and show the result about the utility functions. Rephrasing one of the proofs is one phase; then there are two more phases showing that the other two paths hit the same target, and a last phase showing our result about the utility functions.
We want to show that
T7.1 First, let's try to rephrase into a more handy form for future use. Invoking our definition of , we have:
Then, we undo the projection
And unpack the semidirect product
And unpack what means
And then apply the definition of
This will be our final rephrasing of . Our job is to derive this result for the other two paths.
T7.2 First up, the third-person static translation. We can start with definitions.
And then, we can recall that since and are assumed to commute, we can define in terms of via . Making this substitution, we get
We undo the first pushforward
And then the projection
And unpack the semidirect product
And remove the projection
and unpack the semidirect product and what is
Now, we recall that is all the that defend , to get
At this point we can realize that for any which defends , when interacts with , it makes a that is compatible with and . Further, all compatible with and have an which produces them. If has its nirvana-free prefix not being a prefix of , then the defending which doesn't end early, just sends deviations to Nirvana, is capable of producing it. If looks like but ended early with Nirvana, you can just have a defending which ends with Nirvana at the same time. So we can rewrite this as
And this is exactly the form we got for .
T7.3 Time for the third and final translation. We want to show that
is what we want. Because the pseudocausal square commutes, we have that , so we can make that substitution to yield
Remove the projection, to get
For this notation, we're using to refer to an element of , the state space. is an element of , a full unrolling. is the projection of this to .
We unpack the semidirect product to yield
and rewrite the injection pushforward
and remove the projection
And unpack the semidirect product along with .
Now we rewrite this as a projection to yield
And do a little bit of reindexing for convenience, swapping out the symbol for , getting
And now, we can ask what is. Basically, the starting state is the destiny , and then interacts with it through the kernel . Due to how is defined, it can do one of three things. First, it could just unroll all the way to completion, if is the sort of history that's compatible with . Second, it could partially unroll and deviate to Nirvana early, even if is compatible with more, because of how is defined. Finally, could deviate from and then would have to respond with Nirvana. And of course, once you're in a Nirvana state, you're in there for good. Hm, you have total uncertainty over histories compatible with and , any of them at all could be made by . So, this projection of the infrakernel unrolling is just . We get
And then we use what does to functions, to get
Which is exactly the form we need. So, all three translation procedures produce identical belief functions.
T7.4 The only thing which remains to wrap this result up is guaranteeing that, if is 1 if the history contains Nirvana, (or infinity for the type signature), and otherwise, then .
We rephrased as
We can notice something interesting. If is the sort of history that is capable of making, ie , then all of the are going to end up in Nirvana except itself, because all the are like "follow , any deviations of from go to Nirvana, and also Nirvana can show up early". In this case, since any history ending with Nirvana is maximum utility, and otherwise is copied, we have . However, if is incompatible with , then no matter what, a history which follows is going to hit Nirvana and be assigned maximum utility. So, we can rewrite
as
Which can be written as an update, yielding
And then, we remember that regardless of and , for pseudocausal belief functions (which this is), we have that , so the update assigns higher expectation values than does. Also, . Thus, the infimum is attained by , and we get
and we're done.
Theorem 8: Acausal to Pseudocausal Translation: The following diagram commutes when the bottom belief function is acausal. The upwards arrows are the pseudocausal translation procedures, and the top arrows are the usual morphisms between the type signatures. The new belief function will be pseudocausal.
T8.0 First, preliminaries. The two translation procedures are, to go from an infradistribution over policy-tagged destinies , to an infradistribution over destinies , we just project.
And, to go from an acausal belief function to a pseudocausal one , we do the translation
We assume that and commute, and try to show that these translation procedures result in a and which are pseudocausal and commute with each other. So, our proof goals are that
and
Once we've done that, it's easy to get that is pseudocausal. This occurs because the projection of any infradistribution is an infradistribution, so we can conclude that is an infradistribution over . Also, back in the Pseudocausal Commutative Square theorem, we were able to derive that any infradistribution over , when translated to a , makes a pseudocausal belief function, no special properties were used in that except being an infradistribution.
T8.1 Let's address the first one, trying to show that
We'll take the left side, plug in some function, and keep unpacking and rearranging until we get the right side with some function plugged in. First up is applying definitions.
by how was defined. Now, we start unpacking. First, the projection.
Then unpack the semidirect product and , for
Then unpack the update, to get
Then pack up the
And pack up the semidirect product
And since for the original acausal belief function, we can rewrite this as
And pack this up in a projection.
And then, since we defined , we can pack this up as
And then pack up the update
So, that's one direction done.
T8.2 Now for the second result, showing that
Again, we'll take the more complicated form, and write it as the first one, for arbitrary functions. We begin with
and then unpack what is to get
Undo the projection
Unpack the semidirect product and .
Remove the projection
Unpack the semidirect product and again.
Let's fold the two infs into one inf, this doesn't change anything, just makes things a bit more concise.
and unpack the update.
Now, we can note something interesting here. If is selected to be equal to , that inner function is just , because is only supported over histories compatible with . If is unequal to , that inner function is with 1's for some inputs, possibly. So, our choice of is optimal for minimizing when it's equal to , so we can rewrite as
and write this as
And pack up as a semidirect product
and then because , we substitute to get
and then write as a projection.
Now, this is just , yielding
and we're done.
Proposition 2: The two formulations of x-pseudocausality are equivalent for crisp acausal belief functions.
One formulation of x-pseudocausality is that
where
The second formulation is,
This proof will be slightly informal, it can be fully formalized without too many conceptual difficulties. We need to establish some basic probability theory results to work on this. The proof outline is we establish/recap four probability-theory/inframeasure-theory results, and then show the two implication directions of the iff statement.
P2.1 Our first claim is that, given any two probability distributions and , you can shatter them into measures , and such that and , and . What this decomposition is basically doing is finding the largest chunk of measure that and agree on (that's ), and then subtracting that out from and yields you your and . These measures have no overlap, because if there was overlap, you could put that overlap into the chunk of measure. And, the mass of the "unique to " piece must equal the mass of the "unique to " piece because they must add up to probability distributions, and this quantity is exactly the total variation distance of .
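In one standard notation (chosen here for concreteness), the claim reads: given probability distributions $\mu$ and $\nu$,

$$\mu = m + \tilde{\mu}, \qquad \nu = m + \tilde{\nu}, \qquad \tilde{\mu} \perp \tilde{\nu}, \qquad \tilde{\mu}(X) = \tilde{\nu}(X) = d_{TV}(\mu, \nu),$$

where $m$ is the largest measure lying below both $\mu$ and $\nu$ (the shared chunk), and $\tilde{\mu}, \tilde{\nu}$ are the leftover pieces, which have disjoint supports.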
P2.2 Our second claim is that
Why is this? Well, there's an alternate formulation of total variation distance: $d_{TV}(\mu,\nu)=\sup_{U:X\rightarrow[0,1]}\left|\mathbb{E}_{\mu}[U]-\mathbb{E}_{\nu}[U]\right|$
As intuition for this, we can pick a function $U$ which is 1 on the support of , and 0 elsewhere. In such a case, we'd get
https://par.nsf.gov/biblio/10328326-search-heavy-resonance-decaying-top-quark-boson-lepton+jets-final-state-sqrt-tev
Search for a heavy resonance decaying into a top quark and a W boson in the lepton+jets final state at $\sqrt{s} = 13$ TeV
Abstract: A search for a heavy resonance decaying into a top quark and a W boson in proton-proton collisions at $\sqrt{s} = 13$ TeV is presented. The data analyzed were recorded with the CMS detector at the LHC and correspond to an integrated luminosity of 138 fb$^{-1}$. The top quark is reconstructed as a single jet and the W boson, from its decay into an electron or muon and the corresponding neutrino. A top quark tagging technique based on jet clustering with a variable distance parameter and simultaneous jet grooming is used to identify jets from the collimated top quark decay. The results are interpreted in the context of two benchmark models, where the heavy resonance is either an excited bottom quark $b^*$ or a vector-like quark B. A statistical combination with an earlier search by the CMS Collaboration in the all-hadronic final state is performed to place upper cross section limits on these two models. The new analysis extends the lower range of resonance mass probed from 1.4 down to 0.7 TeV. For left-handed, right-handed, and vector-like couplings, $b^*$ masses up to 3.0, 3.0, and 3.2 TeV are excluded at …
NSF-PAR ID: 10328326. Journal: Journal of High Energy Physics, Volume 2022, Issue 4, ISSN 1029-8479.
https://nbviewer.org/github/KrishnaswamyLab/MAGIC/blob/master/python/tutorial_notebooks/bonemarrow_tutorial.ipynb | # Python MAGIC bone marrow tutorial
## MAGIC (Markov Affinity-Based Graph Imputation of Cells)
• MAGIC imputes missing data values on sparse data sets, restoring the structure of the data
• It also provides dimensionality reduction and gene expression visualizations
• MAGIC can be performed on a variety of datasets
• Here, we show the effectiveness of MAGIC on myeloid and erythroid cells in mouse bone marrow
Markov Affinity-based Graph Imputation of Cells (MAGIC) is an algorithm for denoising and transcript recovery of single cells, applied to single-cell RNA sequencing data, as described in Van Dijk D et al. (2018), Recovering Gene Interactions from Single-Cell Data Using Data Diffusion, Cell https://www.cell.com/cell/abstract/S0092-8674(18)30724-4.
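At its core, the data-diffusion step can be sketched in a few lines of NumPy (a conceptual toy only, not the library's implementation; real MAGIC uses an adaptive alpha-decay kernel, PCA compression, and a rescaling step):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def magic_sketch(X, knn=5, t=3):
    """Toy data-diffusion imputation: smooth each cell's expression
    over a k-nearest-neighbor graph for t diffusion steps."""
    A = kneighbors_graph(X, n_neighbors=knn, mode='connectivity').toarray()
    A = (A + A.T) / 2 + np.eye(len(X))       # symmetrize, keep self-affinity
    M = A / A.sum(axis=1, keepdims=True)     # row-normalize -> Markov matrix
    return np.linalg.matrix_power(M, t) @ X  # diffuse, then read off smoothed values
```

Raising the Markov matrix to the power t shares information across progressively larger neighborhoods; that t is the same smoothing knob exposed by the library below.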
This tutorial shows loading, preprocessing, MAGIC imputation and visualization of myeloid and erythroid cells in mouse bone marrow, as described by Paul et al., 2015. You can edit it yourself at https://colab.research.google.com/github/KrishnaswamyLab/MAGIC/blob/master/python/tutorial_notebooks/bonemarrow_tutorial.ipynb
### Installation
If you haven't yet installed MAGIC, we can install it directly from this Jupyter Notebook.
In [ ]:
!pip install --user magic-impute
### Importing MAGIC
Here, we'll import MAGIC along with other popular packages that will come in handy.
In [1]:
import magic
import scprep
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
# Matplotlib command for Jupyter notebooks only
%matplotlib inline
Load your data using one of the following scprep.io methods: load_csv, load_tsv, load_fcs, load_mtx, load_10x. You can read about how to use them with help(scprep.io.load_csv) or on https://scprep.readthedocs.io/.
In [2]:
bmmsc_data = scprep.io.load_csv('https://github.com/KrishnaswamyLab/PHATE/raw/master/data/BMMC_myeloid.csv.gz')
Out[2]:
0610007C21Rik;Apr3 0610007L01Rik 0610007P08Rik;Rad26l 0610007P14Rik 0610007P22Rik 0610008F07Rik 0610009B22Rik 0610009D07Rik 0610009O20Rik 0610010B08Rik;Gm14434;Gm14308 ... mTPK1;Tpk1 mimp3;Igf2bp3;AK045244 mszf84;Gm14288;Gm14435;Gm8898 mt-Nd4 mt3-mmp;Mmp16 rp9 scmh1;Scmh1 slc43a2;Slc43a2 tsec-1;Tex9 tspan-3;Tspan3
W31105 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 2 0 0 0 0
W31106 0 0 0 1 0 0 0 0 0 0 ... 0 0 0 0 0 1 1 0 0 0
W31107 0 1 0 2 0 0 0 0 0 0 ... 0 0 0 0 0 3 1 0 0 2
W31108 0 1 0 1 0 0 0 0 0 0 ... 0 0 0 0 0 3 1 0 0 0
W31109 0 0 1 0 0 0 0 1 3 0 ... 0 0 0 0 0 5 0 0 0 0
5 rows × 27297 columns
### Data Preprocessing
After loading your data, you're going to want to determine the molecule per cell and molecule per gene cutoffs with which to filter the data, in order to remove lowly expressed genes and cells with a small library size.
In [3]:
scprep.plot.plot_library_size(bmmsc_data, cutoff=1000)
Out[3]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f33a8f60a58>
In [4]:
bmmsc_data = scprep.filter.filter_library_size(bmmsc_data, cutoff=1000)
Out[4]:
0610007C21Rik;Apr3 0610007L01Rik 0610007P08Rik;Rad26l 0610007P14Rik 0610007P22Rik 0610008F07Rik 0610009B22Rik 0610009D07Rik 0610009O20Rik 0610010B08Rik;Gm14434;Gm14308 ... mTPK1;Tpk1 mimp3;Igf2bp3;AK045244 mszf84;Gm14288;Gm14435;Gm8898 mt-Nd4 mt3-mmp;Mmp16 rp9 scmh1;Scmh1 slc43a2;Slc43a2 tsec-1;Tex9 tspan-3;Tspan3
W31106 0 0 0 1 0 0 0 0 0 0 ... 0 0 0 0 0 1 1 0 0 0
W31107 0 1 0 2 0 0 0 0 0 0 ... 0 0 0 0 0 3 1 0 0 2
W31108 0 1 0 1 0 0 0 0 0 0 ... 0 0 0 0 0 3 1 0 0 0
W31109 0 0 1 0 0 0 0 1 3 0 ... 0 0 0 0 0 5 0 0 0 0
W31110 0 1 0 0 0 0 0 0 1 0 ... 0 0 0 0 0 3 0 0 0 1
5 rows × 27297 columns
We should also remove genes that are not expressed above a certain threshold, since they are not adding anything valuable to our analysis.
In [5]:
bmmsc_data = scprep.filter.filter_rare_genes(bmmsc_data, min_cells=10)
Out[5]:
0610007C21Rik;Apr3 0610007L01Rik 0610007P08Rik;Rad26l 0610007P14Rik 0610007P22Rik 0610009B22Rik 0610009D07Rik 0610009O20Rik 0610010F05Rik;mKIAA1841;Kiaa1841 0610010K14Rik;Rnasek ... mKIAA1632;5430411K18Rik mKIAA1994;Tsc22d1 mSox5L;Sox5 mTPK1;Tpk1 mimp3;Igf2bp3;AK045244 rp9 scmh1;Scmh1 slc43a2;Slc43a2 tsec-1;Tex9 tspan-3;Tspan3
W31106 0 0 0 1 0 0 0 0 0 1 ... 0 0 0 0 0 1 1 0 0 0
W31107 0 1 0 2 0 0 0 0 0 3 ... 0 2 0 0 0 3 1 0 0 2
W31108 0 1 0 1 0 0 0 0 0 3 ... 0 0 0 0 0 3 1 0 0 0
W31109 0 0 1 0 0 0 1 3 0 8 ... 0 5 0 0 0 5 0 0 0 0
W31110 0 1 0 0 0 0 0 1 0 1 ... 0 0 0 0 0 3 0 0 0 1
5 rows × 10782 columns
After filtering, the next steps are to perform library size normalization and transformation. Log transformation is frequently used for single-cell RNA-seq, however, this requires the addition of a pseudocount to avoid infinite values at zero. We instead use a square root transform, which has similar properties to the log transform but has no problem with zeroes.
In [6]:
bmmsc_data = scprep.normalize.library_size_normalize(bmmsc_data)
bmmsc_data = scprep.transform.sqrt(bmmsc_data)
Out[6]:
0610007C21Rik;Apr3 0610007L01Rik 0610007P08Rik;Rad26l 0610007P14Rik 0610007P22Rik 0610009B22Rik 0610009D07Rik 0610009O20Rik 0610010F05Rik;mKIAA1841;Kiaa1841 0610010K14Rik;Rnasek ... mKIAA1632;5430411K18Rik mKIAA1994;Tsc22d1 mSox5L;Sox5 mTPK1;Tpk1 mimp3;Igf2bp3;AK045244 rp9 scmh1;Scmh1 slc43a2;Slc43a2 tsec-1;Tex9 tspan-3;Tspan3
W31106 0.0 0.000000 0.0000 1.575047 0.0 0.0 0.0000 0.000000 0.0 1.575047 ... 0.0 0.000000 0.0 0.0 0.0 1.575047 1.575047 0.0 0.0 0.000000
W31107 0.0 1.136584 0.0000 1.607372 0.0 0.0 0.0000 0.000000 0.0 1.968621 ... 0.0 1.607372 0.0 0.0 0.0 1.968621 1.136584 0.0 0.0 1.607372
W31108 0.0 1.189802 0.0000 1.189802 0.0 0.0 0.0000 0.000000 0.0 2.060797 ... 0.0 0.000000 0.0 0.0 0.0 2.060797 1.189802 0.0 0.0 0.000000
W31109 0.0 0.000000 1.0744 0.000000 0.0 0.0 1.0744 1.860915 0.0 3.038861 ... 0.0 2.402431 0.0 0.0 0.0 2.402431 0.000000 0.0 0.0 0.000000
W31110 0.0 2.058031 0.0000 0.000000 0.0 0.0 0.0000 2.058031 0.0 2.058031 ... 0.0 0.000000 0.0 0.0 0.0 3.564615 0.000000 0.0 0.0 2.058031
5 rows × 10782 columns
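The zero-handling point can be seen directly (illustrative only):

```python
import numpy as np

x = np.array([0.0, 1.0, 4.0, 100.0])
print(np.sqrt(x))      # [ 0.  1.  2. 10.] -- zero maps cleanly to zero
print(np.log(x + 1))   # the log transform needs the +1 pseudocount, since log(0) = -inf
```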
### Running MAGIC
Now that your data has been preprocessed, you are ready to run MAGIC.
#### Creating the MAGIC operator
If you don't specify any parameters, the following line creates an operator with the following default values: knn=5, decay=1, t=3.
In [7]:
magic_op = magic.MAGIC()
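For clarity, that one-liner is equivalent to spelling the defaults out (parameter names as they appear in the operator's repr later in this notebook):

```python
magic_op = magic.MAGIC(knn=5, decay=1, t=3)
```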
#### Running MAGIC with gene selection
The magic_op.fit_transform function takes the normalized data and an array of selected genes as its arguments. If no genes are provided, MAGIC will return a matrix of all genes. The same can be achieved by substituting the array of gene names with genes='all_genes'.
In [8]:
bmmsc_magic = magic_op.fit_transform(bmmsc_data, genes=["Mpo", "Klf1", "Ifitm1"])
Calculating MAGIC...
Running MAGIC on 2416 cells and 10782 genes.
Calculating graph and diffusion operator...
Calculating PCA...
Calculated PCA in 5.74 seconds.
Calculating KNN search...
Calculated KNN search in 0.72 seconds.
Calculating affinities...
Calculated affinities in 0.72 seconds.
Calculated graph and diffusion operator in 7.34 seconds.
Calculating imputation...
Calculated MAGIC in 7.94 seconds.
Out[8]:
Ifitm1 Klf1 Mpo
W31106 0.494151 0.222772 12.653059
W31107 0.041061 3.255028 3.048861
W31108 0.479306 0.343226 12.553071
W31109 0.033479 3.283794 2.841015
W31110 0.908004 0.267707 11.953947
### Visualizing gene-gene relationships
We can see gene-gene relationships much more clearly after applying MAGIC. Note that the change in absolute values of gene expression is not meaningful - the relative difference is all that matters.
In [9]:
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16, 6))
scprep.plot.scatter(x=bmmsc_data['Mpo'], y=bmmsc_data['Klf1'], c=bmmsc_data['Ifitm1'], ax=ax1,
xlabel='Mpo', ylabel='Klf1', legend_title="Ifitm1", title='Before MAGIC')
scprep.plot.scatter(x=bmmsc_magic['Mpo'], y=bmmsc_magic['Klf1'], c=bmmsc_magic['Ifitm1'], ax=ax2,
xlabel='Mpo', ylabel='Klf1', legend_title="Ifitm1", title='After MAGIC')
plt.tight_layout()
plt.show()
The original data suffers from dropout to the point that we cannot infer anything about the gene-gene relationships. As you can see, the gene-gene relationships are much clearer after MAGIC. These relationships also match the biological progression we expect to see - Ifitm1 is a stem cell marker, Klf1 is an erythroid marker, and Mpo is a myeloid marker.
#### Setting the MAGIC operator parameters
If you wish to modify any parameters for your MAGIC operator, you can do so without having to recompute intermediate values using the magic_op.set_params method. Since our gene-gene relationship here appears a little too noisy, we can increase t a little from the default value of 3 up to a larger value like 5.
In [10]:
magic_op.set_params(t=5)
Out[10]:
MAGIC(a=None, decay=1, k=None, knn=5, knn_dist='euclidean', knn_max=15,
n_jobs=1, n_pca=100, random_state=None, solver='exact', t=5, verbose=1)
We can now run MAGIC on the data again with the new parameters. Given that we have already fitted our MAGIC operator to the data, we should run the magic_op.transform method.
In [11]:
bmmsc_magic = magic_op.transform(genes=["Mpo", "Klf1", "Ifitm1"])
Calculating imputation...
Out[11]:
Ifitm1 Klf1 Mpo
W31106 0.571219 0.225925 12.462381
W31107 0.048972 3.234080 3.032480
W31108 0.488668 0.324273 12.546968
W31109 0.044142 3.250161 2.882082
W31110 0.809720 0.317691 11.736192
In [12]:
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16, 6))
scprep.plot.scatter(x=bmmsc_data['Mpo'], y=bmmsc_data['Klf1'], c=bmmsc_data['Ifitm1'], ax=ax1,
xlabel='Mpo', ylabel='Klf1', legend_title="Ifitm1", title='Before MAGIC')
scprep.plot.scatter(x=bmmsc_magic['Mpo'], y=bmmsc_magic['Klf1'], c=bmmsc_magic['Ifitm1'], ax=ax2,
xlabel='Mpo', ylabel='Klf1', legend_title="Ifitm1", title='After MAGIC')
plt.tight_layout()
plt.show()
That looks better. The gene-gene relationships are restored without smoothing so far as to remove structure.
### Visualizing cell trajectories with PCA on MAGIC
We can extract the principal components of the smoothed data by passing the keyword genes='pca_only' and use this for visualizing the data.
In [13]:
bmmsc_magic_pca = magic_op.transform(genes="pca_only")
Calculating imputation...
Calculated imputation in 0.04 seconds.
Out[13]:
PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8 PC9 PC10 ... PC91 PC92 PC93 PC94 PC95 PC96 PC97 PC98 PC99 PC100
W31106 16.575495 -2.205293 -3.173609 0.026191 3.379290 -0.328708 0.674083 -1.337519 -0.374932 0.245162 ... 0.009891 -0.003240 -0.003957 -0.004786 0.004371 -0.003623 -0.004191 0.009760 -0.007117 -0.000909
W31107 -22.333830 -4.913148 -5.701725 0.174784 -1.623465 0.637437 3.213656 0.733184 2.069290 0.922711 ... 0.027747 -0.005885 0.013527 0.015552 0.019255 -0.001578 0.001951 -0.024956 0.001129 0.012197
W31108 15.390584 -5.668019 -5.522961 0.227663 -2.733054 -1.363631 1.213853 -1.134904 -0.622052 2.129573 ... -0.009539 -0.013745 0.000618 0.004466 -0.000973 -0.006953 -0.000089 -0.004067 0.002934 -0.008200
W31109 -21.978137 -3.982899 -4.416052 -0.505503 -0.360550 1.144629 4.950204 0.519777 2.048395 1.303673 ... 0.028121 -0.008229 0.000845 0.020816 0.016570 -0.001894 0.005415 -0.023680 0.003067 0.013028
W31110 14.850199 0.861441 -1.256983 -0.908274 -0.386235 -0.746003 0.790120 0.760150 -0.285287 0.124876 ... 0.020111 0.005340 -0.026550 -0.000802 -0.016007 -0.018792 -0.010221 0.002298 -0.004076 -0.002220
5 rows × 100 columns
We'll also perform PCA on the raw data for comparison.
In [14]:
from sklearn.decomposition import PCA
bmmsc_pca = PCA(n_components=3).fit_transform(np.array(bmmsc_data))
In [15]:
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16, 6))
scprep.plot.scatter2d(bmmsc_pca, c=bmmsc_data['Ifitm1'],
label_prefix="PC", title='PCA without MAGIC',
legend_title="Ifitm1", ax=ax1, ticks=False)
scprep.plot.scatter2d(bmmsc_magic_pca, c=bmmsc_magic['Ifitm1'],
label_prefix="PC", title='PCA with MAGIC',
legend_title="Ifitm1", ax=ax2, ticks=False)
plt.tight_layout()
plt.show()
We can also plot this in 3D.
In [16]:
from mpl_toolkits.mplot3d import Axes3D
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16, 6), subplot_kw={'projection':'3d'})
scprep.plot.scatter3d(bmmsc_pca, c=bmmsc_data['Ifitm1'],
label_prefix="PC", title='PCA without MAGIC',
legend_title="Ifitm1", ax=ax1, ticks=False)
scprep.plot.scatter3d(bmmsc_magic_pca, c=bmmsc_magic['Ifitm1'],
label_prefix="PC", title='PCA with MAGIC',
legend_title="Ifitm1", ax=ax2, ticks=False)
plt.tight_layout()
plt.show()
### Visualizing MAGIC values with PHATE
In complex systems, two dimensions of PCA are not sufficient to view the entire space. For this, PHATE is a suitable visualization tool which works hand in hand with MAGIC to view how gene expression evolves along a trajectory. For this, you will need to have installed PHATE. For help using PHATE, visit https://phate.readthedocs.io/.
In [ ]:
!pip install --user phate
In [17]:
import phate
In [18]:
data_phate = phate.PHATE().fit_transform(bmmsc_data)
Calculating PHATE...
Running PHATE on 2416 cells and 10782 genes.
Calculating graph and diffusion operator...
Calculating PCA...
Calculated PCA in 5.65 seconds.
Calculating KNN search...
Calculated KNN search in 0.81 seconds.
Calculating affinities...
Calculated affinities in 0.03 seconds.
Calculated graph and diffusion operator in 6.66 seconds.
Calculating landmark operator...
Calculating SVD...
Calculated SVD in 0.30 seconds.
Calculating KMeans...
Calculated KMeans in 24.72 seconds.
Calculated landmark operator in 26.40 seconds.
Calculating optimal t...
Calculated optimal t in 6.88 seconds.
Calculating diffusion potential...
Calculated diffusion potential in 2.86 seconds.
Calculating metric MDS...
Calculated metric MDS in 37.48 seconds.
Calculated PHATE in 80.29 seconds.
In [19]:
scprep.plot.scatter2d(data_phate, c=bmmsc_magic['Ifitm1'], figsize=(12,9),
ticks=False, label_prefix="PHATE", legend_title="Ifitm1")
Out[19]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f339e8c8978>
Note that the structure of the data that we see here is much more subtle than in PCA. We see multiple branches at both ends of the trajectory. To learn more about PHATE, visit https://phate.readthedocs.io/.
### Exact vs approximate MAGIC
If we are imputing many genes at once, we can speed this process up with the argument solver='approximate', which applies denoising in the PCA space and then projects these denoised principal components back onto the genes of interest. Note that this may return some small negative values. You will see below, however, that the results are largely similar to exact MAGIC.
In [21]:
approx_magic_op = magic.MAGIC(solver="approximate")
approx_bmmsc_magic = approx_magic_op.fit_transform(bmmsc_data, genes='all_genes')
Calculating MAGIC...
Running MAGIC on 2416 cells and 10782 genes.
Calculating graph and diffusion operator...
Calculating PCA...
Calculated PCA in 5.97 seconds.
Calculating KNN search...
Calculated KNN search in 0.72 seconds.
Calculating affinities...
Calculated affinities in 0.73 seconds.
Calculated graph and diffusion operator in 7.58 seconds.
Calculating imputation...
Calculated imputation in 0.03 seconds.
Calculated MAGIC in 8.77 seconds.
In [22]:
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16, 6))
scprep.plot.scatter(x=bmmsc_magic['Mpo'], y=bmmsc_magic['Klf1'], c=bmmsc_magic['Ifitm1'], ax=ax1,
xlabel='Mpo', ylabel='Klf1', legend_title="Ifitm1", title='Exact MAGIC')
scprep.plot.scatter(x=approx_bmmsc_magic['Mpo'], y=approx_bmmsc_magic['Klf1'], c=approx_bmmsc_magic['Ifitm1'], ax=ax2,
xlabel='Mpo', ylabel='Klf1', legend_title="Ifitm1", title='Approximate MAGIC')
plt.tight_layout()
plt.show()
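Conceptually, the approximate solver does something like the following (a rough sketch based on the description above, not the package's actual code):

```python
import numpy as np
from sklearn.decomposition import PCA

def approximate_impute(X, diffusion_op, n_pca=100):
    """Sketch: smooth the top principal components, then project back.
    `diffusion_op` stands in for MAGIC's (n_cells x n_cells) Markov
    smoothing matrix; the name is ours, not the package's."""
    pca = PCA(n_components=n_pca)
    pcs = pca.fit_transform(X)                 # compress genes -> principal components
    pcs_smooth = diffusion_op @ pcs            # denoise in the small PCA space
    return pca.inverse_transform(pcs_smooth)   # map back to genes

# The final inverse_transform is why approximate MAGIC can return
# small negative values: gene space is only approximately recovered.
```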
### Animating the MAGIC smoothing process
To visualize what it means to set t in MAGIC, we can plot an animation of the smoothing process, from raw to imputed values. Below, we show an animation of Mpo, Klf1 and Ifitm1 with increasingly more smoothing.
In [22]:
magic.plot.animate_magic(bmmsc_data, gene_x="Mpo", gene_y="Klf1", gene_color="Ifitm1",
operator=magic_op, t_max=10)
Out[22]:
(Animation of the MAGIC smoothing process, from the raw values at t = 0 to the imputed values at t = 10, displayed here.)
https://gateoverflow.in/tag/time-complexity | # Recent questions tagged time-complexity
1
Let $H$ be a binary min-heap consisting of $n$ elements implemented as an array. What is the worst case time complexity of an optimal algorithm to find the maximum element in $H$? $\Theta (1)$ $\Theta (\log n)$ $\Theta (n)$ $\Theta (n \log n)$
2
What is the worst-case number of arithmetic operations performed by recursive binary search on a sorted array of size $n$? $\Theta ( \sqrt{n})$ $\Theta (\log _2(n))$ $\Theta(n^2)$ $\Theta(n)$
3
A binary search tree $T$ contains $n$ distinct elements. What is the time complexity of picking an element in $T$ that is smaller than the maximum element in $T$? $\Theta(n\log n)$ $\Theta(n)$ $\Theta(\log n)$ $\Theta (1)$
4
Consider the following recurrence relation. $T(n)=\begin{cases} T(n/2)+T(2n/5)+7n & \text{if } n>0 \\ 1 & \text{if } n=0 \end{cases}$ Which one of the following options is correct? $T(n)=\Theta (n^{5/2})$ $T(n)=\Theta (n\log n)$ $T(n)=\Theta (n)$ $T(n)=\Theta ((\log n)^{5/2})$
5
What is the time complexity of the following recursive function?
int ComputFun(int n) {
    if (n <= 2) return 1;
    else return ComputFun(floor(sqrt(n))) + n;
}
$\Theta(n)$ $\Theta(\log n)$ $\Theta(n\log n)$ $\Theta(\log \log n)$
6
If algorithm $A$ and another algorithm $B$ take $\log_2 (n)$ and $\sqrt{n}$ microseconds, respectively, to solve a problem, then the largest size $n$ of a problem these algorithms can solve, respectively, in one second are ______ and ______. $2^{10^6}$ and $10^6$ $2^{10^6}$ and $10^{12}$ $2^{10^6}$ and $6.10^6$ $2^{10^6}$ and $6.10^{12}$
7
The running time of an algorithm is $O(g(n))$ if and only if its worst-case running time is $O(g(n))$ and its best-case running time is $\Omega(g(n))$ ($O = \textit{big } O$) its worst-case running time is $\Omega (g(n))$ ... , ($o = \textit{small } o$) Choose the correct answer from the options given below: $(a)$ only $(b)$ only $(c)$ only $(d)$ only
8
9
The most efficient algorithm for finding the number of connected components in an undirected graph on $n$ vertices and $m$ edges has time complexity $\Theta (n)$ $\Theta (m)$ $\Theta (m+n)$ $\Theta (mn)$
Consider the process of inserting an element into a $Max\ Heap$, where the $Max\ Heap$ is represented by an $array$. Suppose we perform a binary search on the path from the new leaf to the root to find the position for the newly inserted element. The number of $comparisons$ performed is $\Theta(\log _{2}n)$ $\Theta(n\log _{2} \log_2 n)$ $\Theta (n)$ $\Theta(n\log _{2}n)$
An algorithm is made up of two modules $M1$ and $M2.$ If the order of $M1$ is $f(n)$ and of $M2$ is $g(n)$, then the order of the algorithm is $max(f(n),g(n))$ $min(f(n),g(n))$ $f(n) + g(n)$ $f(n) \times g(n)$
The running time of an algorithm $T(n)$, where $n$ is the input size, is given by $T(n) = 8T(n/2) + qn$ if $n>1$, and $T(n) = p$ if $n = 1$, where $p,q$ are constants. The order of this algorithm is $n^{2}$ $n^{n}$ $n^{3}$ $n$
http://sextechandmergers.blogspot.com/2013/01/reinventing-aapl-pie.html | ## 2013-01-14
### Reinventing $AAPL pie If we've talked about investment much recently, you'll know that I've been an$AAPL bull, with regards to the period from April 2012 to April 2013. I have a long position on $AAPL which I will probably liquidate, if the January 23 earnings announcement misses analyst estimates by a lot. Otherwise,$AAPL is already trading at such a low PE that I believe that even a near miss is likely to lift the price by a little bit.
Meanwhile, reports of reduced factory orders for iPhone 5 parts are likely to send $AAPL below$500 in today's trading session. Yes, they're saying it's the same month-old news, but hey, folks are easily spooked about this sort of thing.
Speculation
Since writing articles for a friend's gadget blog, I've put a bit of thought into some likely product developments that $AAPL may soon reveal.

A cheaper phone: The iPad Mini was released at 82% of the price of the cheapest full-sized iPad. I certainly don't think that $AAPL will release a $99-$149 iPhone, unless they're willing to break their recent trend of conservative innovations. If they do stick with a conservative approach, then perhaps we'll see a new iPhone around $369, or 82% the price of the cheapest iPhone, perhaps sporting a larger screen, as rumoured. Update: Business Insider thinks they could bump-down and make an iPad Nano.

Modding an existing product: There seems to be room to stuff a mobile baseband and antenna into the iPhone 3G-esque frame of the near-retirement 4th-generation iPod Touch, which already sports the same display as the iPhone 4/4S. This intuitively seems like a quick way to roll out a cheap new product. Comparatively, it doesn't seem like there's room to stuff a phone-sized battery, a baseband, and a bigger antenna into the relatively high-spec'd and diminutive 5th-generation iPod Touch, or the 7th-generation iPod Nano.

Creating a new form-factor: In 2007, I bought my first notebook computer, and I'd only had a cellphone for two years, a Nokia 3310. I just wanted a single device that did both. Since then, no one has successfully marketed anything that does all that. Essentially I'm talking about a tablet, with mobile network connectivity, which I can use with a headset to make calls, and which I can use with a dock as a desktop, with support for basic desktop apps. In 2010, while working at a web startup, in order to see how our pages were rendering on iOS, I got myself an iPhone 3GS, my first $AAPL product. A year later, I got myself a keyboard dock, and used it to do server maintenance with the iSSH app, and to edit documents and spreadsheets synced to the cloud, on my iPhone 3GS. It was close to my ideal device, just too bloody small. A little later, I got the iPad 2, in order to use GoodReader. The iPad 2 used the same dock that I used with my iPhone (upgradeability was the key reason I got the dock in the first place). However, to date, no iPad has come with mobile network connectivity for voice and text messaging.
Last year, a tear-down of the iPad Mini revealed that it had relatively simple internals. It has a low-spec'd screen, and a massive battery. It could certainly hold a mobile baseband. You know what I really want this January? I want $AAPL to announce an interface that turns the iPod into a headset for the iPad, and I want them to turn on mobile network connectivity in the iPad. This would essentially make the iPad+headset my ideal device. I don't care if they don't bridge the link between the iPod and the iPad, and instead launch a funky new headset of some sort. (We've seen them apply for such patents.) I just want my one computing device that basically does everything. Of course, in order to qualify as "one device," whether it's a modded iPod, or a brand new headset, it's going to have to clip nicely together with an iPad. Sir Ive, if you haven't figured this out already, I sure hope you're reading this.

$MSFT Surface Pro... TURN ON THE DAMN GSM CHIP !@#\$%^&*()
https://msp.org/jomms/2006/1-6/p01.xhtml | Vol. 1, No. 6, 2006
A new model to predict the behavior at the interfaces of multilayer structures
M. Karama, K. S. Afaq and S. Mistou
Vol. 1 (2006), No. 6, 957–977
Abstract
One of the current problems connected with multilayer composite structures concerns the analysis of the distribution of the stresses around peculiarities (free edge and loaded edge) and at the interfaces of each layer. This work presents a new shear stress function in the form of the exponential function to predict the mechanical behavior of multilayered laminated composite structures. As a case study, the mechanical behavior of a laminated composite beam $(90^{\circ}/0^{\circ}/0^{\circ}/90^{\circ})$ is examined. The results are compared with the Touratier sine model and with the two-dimensional finite element method. Results show that this new model is more precise than older ones when compared with results obtained by finite element analysis. To introduce continuity at the interfaces of each layer, the new exponential model is used with Ossadzow kinematics. The equilibrium equations and natural boundary conditions are derived from the principle of virtual power.
Keywords
boron fiber, laminate theory, interface, stress transfer, finite element analysis
Milestones
Received: 4 December 2005
Revised: 17 January 2006
Accepted: 25 April 2006
Published: 1 October 2006
Authors
M. Karama, École Nationale d'Ingénieurs de Tarbes, 47 av. Azereix BP1629, 65016 Tarbes, France
K. S. Afaq, École Nationale d'Ingénieurs de Tarbes, 47 av. Azereix BP1629, 65016 Tarbes, France
S. Mistou, École Nationale d'Ingénieurs de Tarbes, 47 av. Azereix BP1629, 65016 Tarbes, France
http://hal.in2p3.fr/in2p3-00747527 | # Exotic Charmonium and Bottomonium-like Resonances
1 Théorie
IP2I Lyon - Institut de Physique des 2 Infinis de Lyon
Abstract : Many new states in the charmonium and bottomonium mass region were recently discovered by the BaBar, Belle and CDF Collaborations. We use the QCD Sum Rule approach to study the possible structure of some of these states. In particular we identify the recently observed bottomonium-like resonance $Z_b^+(10610)$ with the first excitation of the tetraquark $X_b(1^{++})$, the analogue of the X(3872) state in the charm sector.
Document type: Journal articles
http://hal.in2p3.fr/in2p3-00747527
Contributor: Sylvie Flores
Submitted on : Wednesday, October 31, 2012 - 2:54:39 PM
Last modification on : Thursday, February 6, 2020 - 4:28:10 PM
### Citation
F. S. Navarra, M. Nielsen, J.-M. Richard. Exotic Charmonium and Bottomonium-like Resonances. Journal of Physics: Conference Series, IOP Publishing, 2011, 348, pp.012007. ⟨10.1088/1742-6596/348/1/012007⟩. ⟨in2p3-00747527⟩
http://eprint.iacr.org/2004/116/20040517:151353 | ## Cryptology ePrint Archive: Report 2004/116
On the Limitations of Universally Composable Two-Party Computation Without Set-up Assumptions
Ran Canetti and Eyal Kushilevitz and Yehuda Lindell
Abstract: The recently proposed universally composable (UC) security framework for analyzing security of cryptographic protocols provides very strong security guarantees. In particular, a protocol proven secure in this framework is guaranteed to maintain its security even when run concurrently with arbitrary other protocols. It has been shown that if a majority of the parties are honest, then universally composable protocols exist for essentially any cryptographic task in the plain model (i.e., with no setup assumptions beyond that of authenticated communication). When honest majority is not guaranteed, general feasibility results are known only given trusted set-up, such as in the common reference string model. Only little was known regarding the existence of universally composable protocols in the plain model without honest majority, and in particular regarding the important special case of two-party protocols.

We study the feasibility of universally composable two-party function evaluation in the plain model. Our results show that in this setting, very few functions can be securely computed in the framework of universal composability. We demonstrate this by providing broad impossibility results that apply to large classes of deterministic and probabilistic functions. For some of these classes, we also present full characterizations of what can and cannot be securely realized in the framework of universal composability. Specifically, our characterizations are for the classes of deterministic functions in which (a) both parties receive the same output, (b) only one party receives output, and (c) only one party has input.
Category / Keywords: cryptographic protocols / universal composability, concurrent composition, impossibility results
Publication Info: An extended abstract appeared at EUROCRYPT 2003.
Date: received 17 May 2004
Contact author: lindell at us ibm com
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation
http://en.wikipedia.org/wiki/Open_sentence | Open sentence
In mathematics, an open sentence (usually an equation or equality) is described as "open" in the sense that its truth value is meaningless until its variables are replaced with specific numbers, at which point the truth value can usually be determined (and hence the sentences are no longer regarded as "open"). These possible replacement values are assumed to range over a subset of either the real or complex numbers, depending on the equation or inequality under consideration (in applications, real numbers are usually associated also with measurement units). The replacement values which produce a true equation or inequality are called solutions of the equation or inequality, and are said to "satisfy" it.
In mathematical logic, a non-closed formula is a formula which contains free variables. (Note that in logic, a "sentence" is a formula without free variables, and a formula is "open" if it contains no quantifiers, which disagrees with the terminology of this article.) Unlike closed formulas, which contain constants, non-closed formulas do not express propositions; they are neither true nor false. Hence, the formula
$x$ is a number
(1)
has no truth-value. A formula is said to be satisfied by any object(s) such that if it is written in place of the variable(s), it will form a sentence expressing a true proposition. Hence, "5" satisfies (1). Any sentence which results from a formula in such a way is said to be a substitution instance of that formula. Hence, "5 is a number" is a substitution instance of (1).
Mathematicians have not adopted that nomenclature, but refer instead to equations, inequalities with free variables, etc.
Such replacements are known as solutions to the sentence. An identity is an open sentence for which every number is a solution.
Examples of open sentences include:
1. 3x − 9 = 21, whose only solution for x is 10;
2. 4x + 3 > 9, whose solutions for x are all numbers greater than 3/2;
3. x + y = 0, whose solutions for x and y are all pairs of numbers that are additive inverses;
4. 3x + 9 = 3(x + 3), whose solutions for x are all numbers.
5. 3x + 9 = 3(x + 4), which has no solution.
Example 4 is an identity. Examples 1, 3, and 4 are equations, while example 2 is an inequality. Example 5 is a contradiction.
Every open sentence must have (usually implicitly) a universe of discourse describing which numbers are under consideration as solutions. For instance, one might consider all real numbers or only integers. For example, in example 2 above, 1.6 is a solution if the universe of discourse is all real numbers, but not if the universe of discourse is only integers. In that case, only the integers greater than 3/2 are solutions: 2, 3, 4, and so on. On the other hand, if the universe of discourse consists of all complex numbers, then example 2 doesn't even make sense (although the other examples do). An identity is only required to hold for the numbers in its universe of discourse.
This same universe of discourse can be used to describe the solutions to the open sentence in symbolic logic using universal quantification. For example, the solution to example 2 above can be specified as:
For all x, 4x + 3 > 9 if and only if x > 3/2.
Here, the phrase "for all" implicitly requires a universe of discourse to specify which mathematical objects are "all" the possibilities for x.
The idea can even be generalised to situations where the variables don't refer to numbers at all, as in a functional equation. For example of this, consider
f * f = f,
which says that f(x) * f(x) = f(x) for every value of x. If the universe of discourse consists of all functions from the real line R to itself, then the solutions for f are all functions whose only values are one and zero. But if the universe of discourse consists of all continuous functions from R to itself, then the solutions for f are only the constant functions with value one or zero.
https://www.physicsforums.com/threads/gravity-does-gravity-depends-on-atmosphere.793666/ | # Gravity -- Does gravity depends on atmosphere?
san D
Does gravity depend on the atmosphere?
jambaugh
No.
san D
Then what factors does it depend on?
DaveC426913
Mass and mass only.
(Now gravitational force, on the other hand, depends on mass and distance)
Bandersnatch
The answer could be just as well 'yes'. It all depends on what you actually mean.
Why don't you try being a bit more descriptive? What exactly is it you want to find?
phinds
My answer would be no, it's the other way around. The atmosphere depends on gravity. sanD, you don't seem to be putting much (ANY actually) research into finding this out on your own. This is not one of those forums where you ask trivial questions and someone tells you the answer, it's a forum where we try to help people figure out how to get answers on their own. The first thing to do on your own for a question like this is type the question into Google and see what pops up. If that doesn't quite give you want you want, come here with a more focused question.
A body on a planetary surface will experience a different gravitational acceleration depending on what radius they are from the gravitational source's mass centre (core). This body will also likely be under atmospheric pressure, with a summed force in the same direction as the gravitational acceleration. Does this atmospheric force compound the gravitational force or will the atmospheric force have a net of zero?
I'm going to take a guess at what the OP may be thinking about... which may be related to the different meanings of mass vs. weight...
When you "weigh" something in the conventional sense, in open air, you're actually not "weighing it" as much as you are determining how much "heavier" it is than air (or comparing the differences between the force on the atmosphere due to gravity and the force on the object due to gravity). It's a little easier to see what I'm getting at if you take the example of if we take a scale of infinite mass to the bottom of the ocean. If you placed a rock on a scale, you don't get the same "weight" as you do on land, even though gravity is (essentially) the same.
Like i said, I am just taking a guess of what the op may have had in mind when they asked the question...
davenn
Does gravity depend on the atmosphere?
no, but the opposite is true when it comes to a planet having an atmosphere.
Without significant mass, the planetary body won't have a strong enough gravity to retain an atmosphere.
phinds
no, but the opposite is true when it comes to a planet having an atmosphere.
Without significant mass, the planetary body won't have a strong enough gravity to retain an atmosphere.
Read post #6. You're a day late
davenn
Read post #6. You're a day late :)
well at least I gave you good backup ;)
Dave
Perhaps he wants to know if the effect of gravity feels stronger with greater atmospheric pressure. I'm not sure it works like that; I think a strong gravitational pull would crush you into the ground, whereas a high atmospheric pressure would be more like an implosion. Or am I wrong?
phinds
Perhaps he wants to know if the effect of gravity feels stronger with greater atmospheric pressure. I'm not sure it works like that, I think a strong gravitational pull would crush you into the ground whereas a high atmospheric pressure would be more like an implosion. Or am I wrong?
You are right. Well, a crushing rather than an implosion but I see that you were thinking in the right direction.
In any case, it gets boring trying to figure out what some random posted comes and asks and then goes away and you never hear from them again. Maybe he'll come back to say what he really wants to know but don't hold your breath.
san D
I'm confused about this: gravity changes when we move away from the surface of the Earth, and at the same time we can observe that the atmosphere also changes. That's why I'm confused.
phinds
I'm confused about this: gravity changes when we move away from the surface of the Earth, and at the same time we can observe that the atmosphere also changes. That's why I'm confused.
Did you think the Earth's atmosphere just goes on forever into the universe? Of course it tapers off as you get higher, as does the force of gravity. What is confusing about that?
san D
Thank you
san D
I will agree that gravity depends on mass only... then the gravity of Uranus must be greater than Earth's gravity because Uranus is bigger than Earth, but it's not. Why?
I will agree that gravity depends on mass only... then the gravity of Uranus must be greater than Earth's gravity because Uranus is bigger than Earth, but it's not. Why?
That is probably surface gravity you're thinking about, which can be greater for a less massive object because the surface is farther away from the center of gravity in the massive object.
Bandersnatch
I will agree that gravity depends on mass only... then the gravity of Uranus must be greater than Earth's gravity because Uranus is bigger than Earth, but it's not. Why?
Gravity doesn't depend on mass only, but also on distance from the source. The gravitational force equation shows that clearly:
$$F=G\frac{Mm}{r^2}$$
(G is the only constant here)
Dividing both sides by m(test particle mass) you get the gravitational acceleration:
$$a=G\frac{M}{r^2}$$
As long as the two objects you're comparing have the same density, the larger an object is (greater r), the greater its gravity, because as you increase radius the mass increases faster than the square of the radius:
$$M=V\rho$$
where ρ is the density and V is the volume of a sphere
$$V=4/3 \pi r^3$$
Combining the above you get $$a=4/3G\pi r \rho$$
If you'll make one of the objects less dense, its surface gravity will fall down. As long as you make the density fall by the same fraction as you increase the radius, the surface gravity will stay the same.
With Uranus, its density is 4.3 times lower than Earth's while its radius is 4 times larger.
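A quick numerical cross-check of that comparison (an added sketch; the mass and radius figures are rough published values):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity(mass_kg, radius_m):
    # a = G M / r^2, the acceleration at the surface
    return G * mass_kg / radius_m**2

print(surface_gravity(5.97e24, 6.371e6))   # Earth:  ~9.8 m/s^2
print(surface_gravity(8.68e25, 2.556e7))   # Uranus: ~8.9 m/s^2, less than Earth's
```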
DaveC426913
Gravity doesn't depend on mass only, but also on distance from the source. The gravitational force equation shows that clearly:
As I said: gravity depends only on mass. The gravitational force experienced at some point depends additionally on its distance from the mass. A fine distinction perhaps, but a distinction nonetheless.
Bandersnatch
As I said: gravity depends only on mass. The gravitational force experienced at some point depends additionally on its distance from the mass. A fine distinction perhaps, but a distinction nonetheless.
Could you clarify what you mean by gravity?
OmCheeto
That is probably surface gravity you're thinking about, which can be greater for a less massive object because the surface is farther away from the center of gravity in the massive object.
The answer could be just as well 'yes'. It all depends on what you actually mean.
Why don't you try being a bit more descriptive? What exactly is it you want to find?
Could you clarify what you mean by gravity?
I was thinking of Bandersnatch's alternate interpretation of the problem from the get go.
Just did the math.
It was very fun.
Wait. What was the original question?
Does gravity depends on atmosphere
Yes! As atmospheres have mass.
Duh.
DaveC426913
Could you clarify what you mean by gravity?
Gravity is an intrinsic property of anything with mass.
Gravitational force is an effect of gravity, modified by distance.
But now you're causing me to doubt my convictions... :s
Bandersnatch
But now you're causing me to doubt my convictions... :s
With the caveat that you've just made me doubt mine...
How would that definition work? Isn't it just synonymous with mass?
OmCheeto
... Isn't it just synonymous with mass?
Noooooo!!!!! Don't say "MASS"! It'll attract those stress-energy-tensor fellows, with their "momentum warps space too" mumbo jumbo......
And then my head will explode, again.
san D
I got a clear idea, friends: gravity depends on the density of the object. Uranus has less density than Earth.
san D
Earth's gravity also varies with density, e.g. over oceans, mountains, etc.
I got a clear idea, friends: gravity depends on the density of the object. Uranus has less density than Earth.
Not entirely. The density of the earth could change but it would have the exact same gravity as long as its mass stayed the same.
It's better to discuss gravitational force at certain points, because what would change is the distance between the center and the surface.
Bandersnatch
Not entirely. The density of the earth could change but it would have the exact same gravity as long as its mass stayed the same.
This is only correct when talking about gravity (force or field) far away from the source. This whole discussion is about surface gravity, even though that might have not been clear when it started. For surface gravity that statement doesn't hold, as you can't change density and keep mass constant without changing radius, so let's not confuse the OP needlessly.
Noooooo!!!!! Don't say "MASS"! It'll attract those stress-energy-tensor fellows, with their "momentum warps space too" mumbo jumbo......
And then my head will explode, again.
Oh, you know what I meant, Om. We're all simple people here, talking about ye olde Newtonian ideas and none of that GR woo-woo.
OmCheeto
This is only correct when talking about gravity (force or field) far away from the source. This whole discussion is about surface gravity, even though that might have not been clear when it started. For surface gravity that statement doesn't hold, as you can't change density and keep mass constant without changing radius, so let's not confuse the OP needlessly.
He commented on the strength of earth's gravity varying at different places. This is not really a density issue but a distance from the center issue. Otherwise, the strength of gravity would be weaker if you were in a boat than on land at sea level.
But yes I completely understand what you're saying.
https://www.physicsforums.com/threads/how-do-i-check-if-a-1x1-matrix-is-diagonal-lower-upper-triangular.546309/ | # How do I check if a 1x1 matrix is diagonal, lower/upper triangular?
1. Nov 1, 2011
### hkBattousai
I have an A matrix with dimensions 1x1. Its only term, a11, is an arbitrary number.
For what values of a11 is this A matrix:
1. Diagonal
2. Upper triangular
3. Lower triangular
2. Nov 1, 2011
### janvdl
By definition a 1x1 matrix will be upper and lower triangular. (But not strictly; for strictly upper and lower, $a_{11}$ must be 0).
A matrix is diagonal if it is triangular and normal. Normal (for a matrix whose elements lie in the domain of real numbers) means $A \ A^T = A^T \ A$
3. Nov 2, 2011
### adriank
A matrix is diagonal if it has no nonzero entries off the diagonal. A matrix is upper triangular if it has no nonzero entries below the diagonal. etc.
Clearly any 1x1 matrix satisfies these properties, since there are no entries off the diagonal, nonzero or not.
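A quick check of that vacuous-truth argument with NumPy (an added sketch):

```python
import numpy as np

A = np.array([[7.0]])  # an arbitrary 1x1 matrix; any value of a11 works

# There are no entries off the diagonal, so all three properties hold:
print(np.allclose(A, np.triu(A)))           # upper triangular -> True
print(np.allclose(A, np.tril(A)))           # lower triangular -> True
print(np.allclose(A, np.diag(np.diag(A))))  # diagonal         -> True
```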
http://tex.stackexchange.com/questions/29084/biblatex-separator-for-citing-multiple-sources-some-of-them-with-page-some-no?answertab=votes | # biblatex: separator for citing multiple sources, some of them with page, some not
I'm using biblatex for my references.
Normally, if I use \cite{source1, source2, source3}, I get a citation like
[3, 45, 98]
If I now want to give a page number for one of them, it looks confusing imho, as I should get (did not try it out up to now, honestly)
[3, 45, 78, p. 23f, 98]
So imho it would make sense to change one of the separator from a comma to e. g. a semicolon like:
[3; 45; 78, p. 23f; 98]
Question 1: Does that make sense, or are there other conventions for citing sources with and without page numbers together?
(Up to now I collected the citations without page number in one cite command and added singular \cite[][p. xy]{} commands for those with page numbers, so I'd get:
[3, 45, 98][78, p. 23f]
)
The \cites[]{}[]{}[]{} command could also do that, but it also uses the same separators for entries and postnotes.
Question 2: How could I change the separator globally for those number-citations from comma to semicolon in biblatex?
It makes sense indeed; the documentation of biblatex says that the default action of \multicitedelim is to add a semicolon followed by a space, but it appears not to be so. Write
\renewcommand\multicitedelim{\addsemicolon\space}
The documentation is imprecise: \multicitedelim is defined as \addsemicolon\space in biblatex.def, but this is changed to \addcomma\space in the numeric citation style (which is biblatex's default style). – lockstep Sep 21 '11 at 15:10
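A minimal test document (a sketch; references.bib and the source* keys are placeholders):

```latex
\documentclass{article}
\usepackage[style=numeric,backend=biber]{biblatex}
\addbibresource{references.bib} % placeholder bibliography file
\renewcommand\multicitedelim{\addsemicolon\space}
\begin{document}
Mixed citation: \cites{source1}{source2}[p.~23f]{source3}.
\printbibliography
\end{document}
```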
https://voidisyinyang.blogspot.com/2017/01/throw-that-sleep-thing-at-me-again.html | ## Wednesday, January 4, 2017
### Throw that Sleep Thing at me again! Falling Asleep in Full Lotus
The qigong master who befriended me had a great line once, "Throw that sleep thing at me again!" He had been impressed that I made him go to sleep when I was driving the car and he was on the passenger's side. Another time he fell asleep in the car and so when we got to my house, I just left him in the car sleeping in the middle of the day. See he spends most of his night in full lotus meditation and so - when I saw him first at the qigong center during a Free Friday, he was actually falling asleep in front of me. But he was also doing healing as he fell asleep. Then he admitted something to me - he can also talk while he is asleep and once the original qigong master berated him for doing this! hahaha. The original qigong master said he had better not be sleeping when doing his qi-talks because someone might notice!
This sounds quite strange but maybe a year ago science declared that in fact we can have only part of our brain fall asleep while another part stays awake. So you induce slow waves across the cortex.
According to other recent research, the thalamic reticular nucleus (TRN) is responsible for sending signals to the thalamus and the brain’s cortex that slow some brain waves way, way down, leading to a sort of sleep state in parts of the brain even while the rest of the organ is fully functional. Whereas it was previously thought that your brain is either completely asleep or completely awake, the MIT team found that if they weakly stimulated the TRN, they could produce slower brain waves in just a portion of the brain’s cortex.
So that spacey sensation of being half-awake means part of your cortex is already asleep! And so when I was sleepy then I would talk in a monotone slow voice and my qi energy put the qigong master asleep! haha. "Throw that sleep thing at me again!" he said. I wasn't sure whether to take that comment as an insult or a compliment. So this research was at my alma mater - UW-Madison
As predicted, when the rats were awake, their neurons—nerve cells that collect and transmit signals in the brain—fired frequently and irregularly.
When the animals slept, their neurons fired less often, usually in a regular up-and-down pattern that manifests on the EEG as a "slow wave." Called non-rapid eye movement, this sleep stage accounts for about 80 percent of all sleep in both rats and people.
So sleep is a coherent wave, while normal waking is irregular beta brain waves. So when I talked in a slow, hypnotic, regular voice I put the qigong master to sleep. haha. The qigong master said I was always trying to take his energy - but according to science sleep actually comes from the neurons being over-worked. So I think the problem was I was overworking the beta brain waves of the cortex - being too conceptual - too much left-brain thinking.
Anyway I actually got "arrested" for falling asleep in full lotus - I was snoring loudly on this toilet in full lotus at the University of Minnesota during spring break. I needed a nap to return back to work but someone calls the cops and they "raided" the bathroom. I explained I had a master's degree but the cop instead barred me from the whole campus for 1 year. This had been ruled illegal - the "trespassing warning" can only apply to the specific property where it occurred. In that case the cops had the authority to "represent" the University of Minnesota as a private property - which is ironic since it's a public property and so what discretion do the cops have? haha. Students take naps all the time and it was during spring break but the cops made me try to prove I was not homeless. The cop said - can I call your job? I said no way as I didn't want to get in trouble at my job. haha. A classic catch-22 but the job of cops is to protect "private property." The university has its charter from before the state was even chartered - so anyway ....
Yeah another time I was asleep in full lotus on a bench at the University - and this young pretty female walked by and she woke me up. "Are you asleep?" she asked. She was of Indian - East Indian ancestry. She told me about her interest in meditation and clearly she was shocked - she said she had never seen someone sleeping in full lotus before. haha. She was a psychology major but she stood with female display behavior - with her legs apart - to soak up my qi energy. I thought that was highly ironic - that I knew why she was standing like that and since she was a psychology major - I knew that she knew why she was standing like that. Of course I couldn't just tell her that she was sucking up my qi energy based on subconscious desire. haha. Later on - years later - I ran into this lady when I delivered organic fruit to her - and she teaches meditation now among other things. I reminded her of our previous meeting and she kind of freaked out about it. haha.
So the past couple work days I have looked forward to going home to fall asleep in full lotus. Why? Because it's a great way to loosen up the muscles from working. Also it's quite strange to put the meditation c.d. on and then a few minutes later I'm gone and I wake up and the meditation c.d. is ending. haha. Wow! Time really flew during that meditation! haha.
One time I was listening to http://coasttocoastam.com and a caller talked about how they would sense a presence over their bed while they slept. I knew that in fact that was their subconscious spirit of their own body - as they slept they had a minor out of body awakening. So the person was afraid of their own spirit and they said as soon as they saw their own body asleep they would wake up again. So sure enough I fell asleep in full lotus on the bed and I woke up by first feeling this heavy presence on the body that was not me. I "opened" my eyes and suddenly I saw myself on the bed in full lotus - only I saw myself from about 10 feet away. I had "woken up" my own spirit to see my body asleep. As soon as I saw my own body then instantly my spirit went back into my own body.
The book Transcendent Dreaming by a local psychologist Christina Donnell details her experiences beyond lucid dreaming - when she has actual physical transformations and also precognitive dreams.
So the other day I had a psychic precognitive experience after doing some full lotus meditation. I began thinking about something very unique and then the next day someone at work suddenly mentioned this thing I was thinking about. They said it to shock me but since I had already been thinking about it - then I was not shocked. Instead the word they said immediately sent me into a minor trance as I realized I was having a "glitch in the Matrix" experience - a flash back from the future. haha.
And so this is called "dream yoga" by the Tibetans whereby having lucid dreaming leads to realizing that being awake is another type of dream and only the Emptiness is real - as the qigong masters say, the "holographic universe" is real - and so even physical reality is a type of dream holograph.
So we know now from science that 90% of our thoughts are actually from the cortex going back to the thalamus - and so they are subconscious thoughts from internal processing of information and emotions, etc. whereas only 10% of our thoughts are based on "external" perceptions from the thalamus going out to the cortex.
Also we know that 2/3rds or two thirds - about 200 to 300 million neurons - out of 400 million neurons - are used for vision while we are awake. So simply by closing our eyes then we are "turning the light around" of our coherent spirit light energy.
Our waking thoughts are incoherent beta brain waves but our spirit light is coherent biophotons - and that is the key difference - we are "turning the light around" of coherent spirit biophoton energy while we shut down the waking beta "language" brain waves.
So as the original qigong master says qigong leverages the 90% of our energy which is subconscious thoughts of the brain, turning it into superconsciousness. In other words the specific difference between subconscious and superconsciousness - is that subconsciousness is what our spirit holographic energy picks up as emotional imprints while awake, looking externally - and it stores that emotional jing energy internally as a blockage of other people's lower emotions. Whereas superconscious is taking the coherent spirit light and "turning it around" to the Emptiness of no thoughts - so it is pure light as healing energy of the spirit as a reference beam with a "noncommutative phase-space" of the "hidden momentum" or phonons that are superluminal. This is negentropic light as healing information energy.
That is the difference - that is how meditation turns the subconscious into the superconsciousness.
So then Master Hai-Deng, whom the original qigong master studied with - he told me personally he studied with the teacher of Master Yan Xin - and so I knew who that was. haha. So Master Hai-Deng is known for not sleeping at all for 60 years and how? Because he would produce more than enough deep sleep waves during meditation to make up for sleep energy and then the rest of the time he was in the REM theta heart vision holographic spirit brain waves.
We are told by the qigong master who befriended me that the Emptiness really is the Emptiness - there are no thoughts - and yet as Ramana Maharshi explains, in the "fourth state" or turiya - there is not the ignorance of deep sleep in trance - instead there is the light of the spirit while also the bliss of deep sleep. And yet Ramana Maharshi is clear that the Self is not the Light. This confuses people to no end - Ramana Maharshi says that you use the sattvic mind of light to get to the formless awareness and yet the formless awareness is "Mouna Samadhi" or listening in silence. Why? Because this is the "hidden momentum" of light as the noncommutative phase-space - it is the secret "yuan qi" of light - or as Ramana Maharshi calls it the "ether" in the flame of the spirit.
Well now I need to go back to sleep and to induce sleep when I know I need it - then I just go back into full lotus meditation to reverse engineer sleeping. haha. A lot of people can't get to sleep when they need sleep because their beta waking brain is too strong. This can be from too much salt which over-activates the adrenal glands increasing the brain dopamine when you want serotonin for sleep. Milk at night increases melatonin for sleep but serotonin converts to melatonin. Well our lower body is full of melatonin and serotonin and so using the full lotus and reverse breathing we then naturally increase those neurohormones into the brain. As the Taoist Master Ni, Hua-ching emphasizes, the Taoists "prepare" for sleep by first purifying out the subconscious lower emotional jing blockages that have over-activated the sympathetic nervous system.
https://www.physicsforums.com/threads/help-needed-to-find-the-flow-potential-function.201611/ | # Help needed to find the flow potential function
1. Nov 30, 2007
### Hells_Kitchen
Here is the catch:
We are given a 2-D incompressible, irrotational velocity flow of the form:
$$\vec{V} = u\,\hat{i} + v\,\hat{j} = [4y - x(1+x)]\,\hat{i} + y(2x+1)\,\hat{j}$$
and we are asked to find the flow potential which obeys the Laplace Eq. for 2-D incompressible, irrotational flow: $d\Phi = u\,dx + v\,dy$, in other words:
$$\frac{\partial \Phi}{\partial x} = u, \qquad \frac{\partial \Phi}{\partial y} = v$$
I integrated the first one and then the second one and compared the two functions and combined the terms, but at the end the Φ does not satisfy the first equation, only the second one.
Another technique: I integrated the first function with respect to x, so Φ is expressed as
$$\Phi(x,y) = 4xy - \frac{x^2}{2} - \frac{x^3}{3} + f(y)$$
Now I differentiate with respect to y and equate it to v:
$$4x + f'(y) = y(2x+1)$$
which solves to $f(y) = xy^2 + y^2/2 - 4xy + C$. Plug it in the above expression and get:
$$\Phi(x,y) = xy^2 + \frac{y^2}{2} - \frac{x^2}{2} - \frac{x^3}{3} + C$$
Now the first partial diff. eq. is not satisfied but the second is.
Can someone explain what is wrong here?
2. Nov 30, 2007
### HallsofIvy
What reason do you have to think such a potential exists?
If there exists $\phi$ such that $d\phi = udx+ vdy$ (as long as u and v are differentiable) then it must be true that the mixed derivatives are equal:
$$\frac{\partial^2 \phi}{\partial x \partial y}= \frac{\partial u}{\partial y}= \frac{\partial v}{\partial x}= \frac{\partial^2 \phi}{\partial y\partial x}$$
Here it is clear that that is not true: $(4y -x(1+x))_y= 4 \ne (y(2x+1))_x= 2y$. This is not an "exact differential" and there is no "flow potential".
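A quick symbolic check of that mixed-partials condition (an added sketch using SymPy):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = 4*y - x*(1 + x)   # x-component of the given velocity field
v = y*(2*x + 1)       # y-component

# A potential with dPhi = u dx + v dy exists only if u_y == v_x:
print(sp.diff(u, y))  # 4
print(sp.diff(v, x))  # 2*y -- not equal, so no potential exists
```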
https://qiskit.org/documentation/stubs/qiskit.chemistry.MP2Info.html | # qiskit.chemistry.MP2Info¶
class MP2Info(qmolecule, threshold=1e-12)[source]
A utility class for Moller-Plesset 2nd order (MP2) information
Each double excitation given by [i,a,j,b] has a coefficient computed using
coeff = -(2 * Tiajb - Tibja)/(oe[b] + oe[a] - oe[i] - oe[j])
where oe[] is the orbital energy
and an energy delta given by
e_delta = coeff * Tiajb
All the computations are done using the molecule orbitals but the indexes used in the excitation information passed in and out are in the block spin orbital numbering as normally used by the chemistry module.
A utility class for MP2 info
Parameters
• qmolecule (QMolecule) – QMolecule from chemistry driver
• threshold (float) – Computed coefficients and energy deltas will be set to zero if their value is below this threshold
__init__(qmolecule, threshold=1e-12)[source]
A utility class for MP2 info
Parameters
• qmolecule (QMolecule) – QMolecule from chemistry driver
• threshold (float) – Computed coefficients and energy deltas will be set to zero if their value is below this threshold
Methods
__init__(qmolecule[, threshold]): A utility class for MP2 info

mp2_get_term_info(excitation_list[, …]): With a reduced active space the set of used excitations can be less than allowing all available excitations.

mp2_terms([freeze_core, orbital_reduction]): Gets the set of MP2 terms for the molecule taking into account index adjustments due to frozen core and/or other orbital reduction
Attributes
mp2_delta: Get the MP2 delta energy correction for the molecule

mp2_energy: Get the MP2 energy for the molecule
property mp2_delta
Get the MP2 delta energy correction for the molecule
Returns
The MP2 delta energy
Return type
float
property mp2_energy
Get the MP2 energy for the molecule
Returns
The MP2 energy
Return type
float
mp2_get_term_info(excitation_list, freeze_core=False, orbital_reduction=None)[source]
With a reduced active space the set of used excitations can be less than allowing all available excitations. Given a (sub)set of excitations in the space this will return a list of correlation coefficients and a list of correlation energies ordered as per the excitation list provided.
Parameters
• excitation_list (list) – A list of excitations for which to get the coeff and e_delta
• freeze_core (bool) – Whether core orbitals are frozen or not
• orbital_reduction (list) – An optional list of ints indicating removed orbitals
Returns
List of coefficients and list of energy deltas
Return type
Tuple(list, list)
Raises
ValueError – Excitation not present in mp2 terms
mp2_terms(freeze_core=False, orbital_reduction=None)[source]
Gets the set of MP2 terms for the molecule taking into account index adjustments due to frozen core and/or other orbital reduction
Parameters
• freeze_core (bool) – Whether core orbitals are frozen or not
• orbital_reduction (list) – An optional list of ints indicating removed orbitals
Returns
A dictionary of excitations where the key is a string in the form from_to_from_to, e.g. 0_4_6_10, and the value is a tuple of (coeff, e_delta)
Return type
dict
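A hedged usage sketch (assumes the optional PySCF driver is installed; the molecule and options here are only illustrative):

```python
from qiskit.chemistry.drivers import PySCFDriver
from qiskit.chemistry import MP2Info

# Build a QMolecule with a classical chemistry driver
driver = PySCFDriver(atom='Li .0 .0 .0; H .0 .0 1.6', basis='sto3g')
qmolecule = driver.run()

mp2 = MP2Info(qmolecule)
print(mp2.mp2_energy, mp2.mp2_delta)

# Coefficients and energy deltas keyed by 'i_a_j_b' excitation strings
terms = mp2.mp2_terms(freeze_core=True)
```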
http://nodus.ligo.caltech.edu:8080/40m/page96?attach=1&sort=Subject | 40m QIL Cryo_Lab CTN SUS_Lab TCS_Lab OMC_Lab CRIME_Lab FEA ENG_Labs OptContFac Mariner WBEEShop
40m Log, Page 96 of 341 Not logged in
ID | Date | Author | Type | Category | Subject
10005 | Thu May 29 15:33:55 2014 | ericq | Update | LSC | High Bandwidth power recycled Yarm.
Quote: Wait. It is not so clear. Do you mean that the IFO was locked with REFL11I for the first time? Why is it still in the "low finesse" situation? Is it because of misalignment or the non-zero CARM offset?
Sorry, the X arm is completely misaligned. This is the configuration I first tried in ELOG 9859, that is: a PRM->ITMY recycling cavity and ITMY->ETMY arm cavity. ITMX is completely misaligned, so the BS is dumping much of the recycling cavity light out, which is why I wrote "low finesse." This is the first time I've used REFL11 to control any of our cavities, though.
737 | Thu Jul 24 21:53:00 2008 | rana | Summary | Treasure | High School Tour group and the PMC
There was a tour today of 40 high school kids. I warned them that the lasers could burn out their eyes, that the vacuum could suck them through the viewports like tubes of spaghetti and that the high voltage amps would fry their hair off.

One of them was taking a picture of the SOS in the flow bench and another one was whispering what a dumb idea it was to leave a sensitive clean optic out where people might breathe on it. I told one of them to cover his mouth. The other one asked what was the glass block behind the SOS.

It was a spare PMC! s/n 00-2677 with a 279 nF capacitance PZT. I guess that this is the one that Go brought from MIT and then left here. So we don't have to take the one away from Bridge in the 35 W laser lab.

We can swap this one in in the morning while the FSS people work on the reference cavity alignment. Please email me if you object to this operation.
3610 | Mon Sep 27 00:33:50 2010 | rana | Update | PSL | High Voltage Driver added to TTFSS -> NPRO
We added the Thorlabs HV Driver in between the FSS and the NPRO today. The FSS is locking with it, but we haven't taken any loop gain measurements.
This box takes 0-10 V and puts out 0-150 V. I set up the FSS SLOW loop so that it now servos the output of FAST to be at +5V instead of 0V. This is an OK temporary solution. In the future, we should add an offset into the output of the FSS board so that the natural output is 0-10 V.
I am suspicious that the Thorlabs box has not got enough zip to give us a nice crossover and so we should make sure to measure its frequency response with a capacitive load.
3640 | Fri Oct 1 21:34:14 2010 | rana, tara | Update | PSL | High Voltage Driver added to TTFSS -> NPRO
Quote: We added the Thorlabs HV Driver in between the FSS and the NPRO today. The FSS is locking with it, but we haven't taken any loop gain measurements. This box takes 0-10 V and puts out 0-150 V. I set up the FSS SLOW loop so that it now servos the output of FAST ot be at +5V instead of 0V. This is an OK temporary solution. In the future, we should add an offset into the output of the FSS board so that the natural output is 0-10 V. I am suspicious that the Thorlabs box has not got enough zip to give us a nice crossover and so we should make sure to measure its frequency response with a capacitive load.
We measured the Thorlabs HV Driver's TF today. It is quite flat from 1k to 10k before going up to 25 dB at 100k, and the response does not change with the DC offset input. The driver is used for driving the NPRO's PZT, which requires higher voltage than that of the previous setup. We need to understand how the driver might affect the FSS loop TF, and we want to make sure that the driver will have the same response with DC input offset.
Setup
We used SR785 to measure the TF. Source ch was split by a T, one connected to Driver's input, another one connected to the reference (ch A). See fig2.
The driver output was split by another T. One output was connected to the NPRO, another was connected to a 1nF capacitor in a Pomona box, as a high pass filter (for high voltage), then to the response (ch B).
The source input is DC offset by 2V which corresponds to 38 V DC offset on the driver's output.
The capacitance of the PZT on the NPRO is 2.36 nF, as measured by LC meter.
The result shows that the driver's TF is flat from 1k to 10k, and goes up at higher frequency, see fig1.
The next step is trying to roll off the gain at high frequency for the PZT. A capacitor connected to ground might be used to roll off the frequency of the driver's output.
We will inspect the TF at higher frequency (above 100 kHz) as well.
Attachment 1: NPROTF.png
Attachment 2: 2010_10_01.png
3641 | Mon Oct 4 06:47:46 2010 | rana, tara | Update | PSL | High Voltage Driver added to TTFSS -> NPRO
Inside the FSS box, the FAST path has a ~10 Hz pole made up from the 15k resistor and the 1 uF cap before the output connector. This should be moved over to the output of the driver to make the driver happy - without yet measuring the high frequency response, it looks to me like it's becoming unhappy with the purely capacitive load of the NPRO's PZT. This will require a little surgery inside the FSS box, but it's probably justified now that we know the Thorlabs box isn't completely horrible.
15906 | Thu Mar 11 20:18:00 2021 | gautam | Update | LSC | High bandwidth POY
I repeated the high bandwidth POY locking experiment.
1. The "Q" demod output (SMA) was routed to the common mode board (it appears in the past I used the LEMO "MON" output instead but that shouldn't be a meaningful change).
2. As usual, slow actuation --> ETMY, fast actuation --> IMC error point.
3. Loop UGF measurement suggests that bandwidth ~25kHz, with ~25 degrees phase margin. Anyway the lock was pretty stable.
One thing I am not sure is - when looking at the in-loop error point spectra, the Y-arm error point did not get suppressed to the CM board's sensing noise floor - I would have thought that with the huge amount of gain at ~16 Hz, the usual structure we see in the spectra between 10-30Hz would be completely squished. Need to think about if this is signalling something wrong, because the loop TF measurements seemed as expected to me.
1020pm: plots uploaded. As I made the plot of the spectrum, I realized that I don't have the calibration for the Y-arm error point into displacement noise units, so it's in unphysical units for now. But I think the comment about the hump around 16 Hz not being crushed to some sort of flat electronics noise floor still stands. For the TF plots, when the loop gain is high, this IN1/IN2 technique isn't the best (due to saturation issues), but I don't think there's anything controversial about getting the UGF this way, and the fact that the phase evolves as expected when the various gains are cranked up / boosts enabled makes me think that the CM board is itself just fine.
10am 12 March: i realized that the "Y-arm error point" plotted below is not the true error point - that would be the input to the CM board (before boosts etc), which we don't monitor digitally. The spectra are plotted for the CM_SLOW input which already has some transfer function applied to it. In the past, I routed the LEMO "MON" connector on the demod board to the CM board input, and hence, had the usual SMA outputs from the demod board going to the digital system. I hypothesize that plotting the spectra for that signal would have showed this expected suppression to the electronics noise floor.
In summary, on the basis of this test, I don't see any red flags with the CM board.
Attachment 1: OLGevolution.pdf
Attachment 2: inLoopSpec.pdf
7829 | Fri Dec 14 03:32:51 2012 | Ayaka | Update | LSC | High frequency noise in AS signal
I calibrated the AS error signal into the displacement of the YARM cavity in the same way as I did before (elog).
The open loop transfer function is:
The transfer function from ITMX excitation to AS error signal is:
Then I got the calibration value: 5.08e+11 [counts/m]
The calibrated spectrum in units of m/rtHz is
REF0: arm displacement
REF1: dark noise + demodulation circuit noise + WT filter noise + ADC noise (PSL shutter on)
REF2: demodulation circuit noise + WT filter noise + ADC noise (PD input of the circuit (at 1Y2) is connected to the 50 Ohm terminator)
(The circuit and WT filter seem to be connected at back side of the rack. Actually there is a connector labelled 'I MON' but it is not related to C1:LSC-ASS55_I_ERR)
Also we changed the AS gain so that the ADC noise does not affect the measurement:
However, this did not make big change in sensitivity. I guess this means that circuit noise limits the sensitivity at higher frequencies than 400 Hz.
I tried to adjust the AS gain carefully but I could not do that because of the earthquake. Further investigation is needed.
Attachment 5: ASspe.tar.gz
7832 | Fri Dec 14 09:31:59 2012 | rana | Update | LSC | High frequency noise in AS signal
This is NOT calibrated. Its sort of calibrated in the 500-1000 Hz area, but does not correctly use the loop TF or the cavity pole.
As for the noise, remember that the whole point of changing the AS whitening gain was to turn on the whitening filter AFTER locking. With the WF OFF, there's no way that you can surpass the ADC noise limit.
Quote: I calibrated the AS error signal into the displacement of the YARM cavity in the same way as I did before (elog).
7833 | Fri Dec 14 10:09:30 2012 | Ayaka | Update | LSC | High frequency noise in AS signal
Quote:
This is NOT calibrated. Its sort of calibrated in the 500-1000 Hz area, but does not correctly use the loop TF or the cavity pole.
As for the noise, remember that the whole point of changing the AS whitening gain was to turn on the whitening filter AFTER locking. With the WF OFF, there's no way that you can surpass the ADC noise limit.
Quote: I calibrated the AS error signal into the displacement of the YARM cavity in the same way as I did before (elog).
No, I did not apply the open loop TF to it (actually I could not measure the open loop TF because of the earthquake last night). So I should not have said it was the displacement.
Also I changed the AS gain with whitening filter on and xarm locked. Still it does not make any change.
7835 | Fri Dec 14 16:35:38 2012 | Ayaka | Update | LSC | High frequency sensitivity improved
Since I found that the the AS sensitivity seems to be limited by circuit noise, I inserted a RF amplifier just after the AS RF output.
Now, the sensitivity is improved and limited by the dark noise of the PD.
(Note: I did not apply the open loop TF on this xml file.)
REF3: dark noise + circuit noise + WT filter noise + ADC noise
REF4: circuit noise + WT filter noise + ADC
With this situation, I injected the acoustic noise:
REF5, 6, 7: with acoustic excitation
no reference: without acoustic excitation
We could see the coherence only at the same frequencies, around 200 Hz as we saw before (elog).
Attachment 3: ASnoise.tar.gz
9716 | Tue Mar 11 15:19:45 2014 | Jenne | Update | Electronics | High gain Trans PD electronics change
As part of our CESAR testing last night, we had a look at the noise of the 1/sqrt(TR) signal.
Looking at the time series data, while we were slowly sweeping through IR resonance (using the ALS), Rana noted that the linear range of the 1/sqrt(TR) signal was not as wide as it should be, and that this is likely because our SNR is really poor.
When a single arm is at a normalized transmission power of 1, we are getting about 300 ADC counts. We want this to be more like 3000 ADC counts, to be taking advantage of the full range of the ADC.
This means that we want to increase our analog gain by a factor of 10 for the low gain Thorlabs PDs.
Looking at the photos from November when I pulled out the Xend transmission whitening board (elog 9367), we want to change "Rgain" of the AD620 on the daughter board. While we're at it, we should also change the noisy black thick film resistors to the green thin film resistors in the signal path.
The daughter board is D04060, S/N 101. The main whitening board for the low gain trans QPD is D990399, RevB, S/N 104.
We should also check whether we're saturating somewhere in the whitening board by putting in a function generator signal via BNC cable into the input of the Thorlabs whitening path, and seeing where (in Dataviewer) we start to see saturation. Is it the full 32,000 counts, or somewhere lower, like 28,000?
Actually, the gain was changed from 2 (Rgain = 49.4 kOhm) to 20 (Rgain = 2.10 kOhm). The corresponding calibration in CDS was also changed by locking the Xarm, running ASS, then setting the average arm power to be 1. Confirmed the Xarm is locking. And now the signal is used for CESAR. We see empirically that the noise has improved by a factor of approximately 10ish.
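As a cross-check (an added note, using the AD620 datasheet gain equation G = 1 + 49.4 kOhm / Rgain): Rgain = 49.4 kOhm gives G = 2, and Rgain = 2.10 kOhm gives G ≈ 24.5, so the factor of 20 quoted above is presumably a round figure.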
Attachment 1: IMG_1309.JPG
9720 | Tue Mar 11 19:07:24 2014 | ericq | Update | Electronics | High gain Trans PD electronics change
Speaking of the whitening board, I had neglected to post details showing the the whitening was at least having a positive effect on the transmon QPD noise. So, here is a spectrum showing the effects that the whitening stages have on a QPD dark noise measurement like I did in ELOG 9660, at a simulated transmission level of 40 counts.
The first whitening stage gives us a full 20dB of noise reduction, while the second stage brings us down to either the dark noise of the QPD or the noise of the whitening board. We should figure out which it is, and fix up the board if necessary.
The DTT xml file is attached in a zip, if anyone wants it.
Attachment 2: sqrtinvWhitening.zip
9823 | Thu Apr 17 16:04:40 2014 | Jenne | Update | Electronics | High gain Trans PD electronics change
I have made the same modification to the Yarm trans PD whitening board as was done for the xend, to increase our SNR. I put in a 2.1kOhm thin film resistor in the Rgain place.
When I was pulling the board, the ribbon cable that goes to the ADC had its connector break. I redid the ribbon connector before putting the board back.
I see signals coming into the digital system for both the high gain and low gain Y transmission PDs, so I think we're back. I will re-do the normalization after Jamie is finished working on the computers for the day.
9585 | Wed Jan 29 16:36:37 2014 | Koji | Summary | General | High power beam blasting of the aLIGO RFPD
[Rich, Jay, Koji]
We blasted the aLIGO RF PD with a 1W IR beam. We did not find any obvious damage.
Rich and Jay brought the PD back to Downs to find any deterioration of the performance with careful tests.
The power modulation setup is at the rejection side of the PBS in front of the laser source.
I checked the beams are nicely damped.
As they may come back here tomorrow, a power supply and a scope are still at the MC side of the PSL enclosure.
16023 | Tue Apr 13 19:24:45 2021 | gautam | Update | PSL | High power operations
We (rana, yehonathan and I) briefly talked about having high power going into the IFO. I worked on some calcs a couple of years ago that are summarized here. There is some discussion in the linked page about how much power we even need. In summary, if we can have
• T_PMC ~85% which is what I measured it to be back in 2019
• T_IMC * T_inputFaraday ~60% which is what I estimate it to be now
• 98% mode matching into the IMC
• power recycling gain of 40-45 once we improve the folding mirror situation in the recycling cavities
• and a gain of 270-280 in the arm cavities (20-30ppm round trip loss)
then we can have an overall gain of ~2400 from laser to each arm cavity (since the BS divides the power equally between the two arms). The easiest place to get some improvement is to improve T_IMC * T_inputFaraday. If we can get that up to ~90%, then we can have an overall gain of ~4000, which is I think the limit of what is possible with what we have.
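A back-of-envelope product of those factors (an added sketch; the numbers are just the estimates quoted above, not new measurements):

```python
# Throughput estimates quoted in this entry
T_pmc         = 0.85
T_imc_faraday = 0.60   # combined IMC + input Faraday transmission
mode_matching = 0.98   # into the IMC
prg           = 40.0   # power recycling gain (low end of 40-45)
arm_gain      = 270.0  # arm cavity gain (low end of 270-280)

# The BS divides the power equally between the two arms, hence the /2
gain_per_arm = T_pmc * T_imc_faraday * mode_matching * prg * arm_gain / 2
print(round(gain_per_arm))  # ~2700, the same ballpark as the ~2400 quoted
```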
We also talked about the EOM. At the same time, I had also looked into the damage threshold as well as clipping losses associated with the finite aperture of our EOM, which is a NewFocus 4064 (KTP is the Pockels medium). The results are summarized in Attachments #1 and #2 respectively. Rana thinks the EOM can handle a factor of ~3 greater power than the rated damage threshold of 20W/mm^2.
Attachment 1: intensityDist.pdf
Attachment 2: clippingLoss.pdf
3300 Tue Jul 27 16:33:50 2010 KojiSummaryGeneralHigh school students tour
Jenne made the 40m tour for the annual visit of 30-40 students.
Attachment 1: IMG_2657.jpg
1097 Tue Oct 28 11:10:18 2008 AlbertoUpdateLSCHigher Order Mode resonances in the X arms
Quote: Recently we had been having some trouble locking the full IFO in the spring configuration (SRC on +166). It was thought that an accidental higher order mode resonance in the arms may have been causing problems. I previously calculated the locations of the resonances using rough arm cavity parameters(Elog #690). Thanks to Koji and Alberto I have been able to update this work with measured arm length and g factors for the y arm (Elog #801,#802). I have also included the splitting of the modes caused by the astigmatic ETM. Code is attached. I don't see any evidence of +166MHz resonances in the y arm. In the attached plot different colours denote different frequencies +33, -33, +166, -166 & CR. The numbers above each line are the mn of TEMmn. Solid black line is the carrier resonance.
I plugged the measures of the length of the X arm and radius of curvature of ETMX I made in to John's code to estimate the position of the resonances of the HOM for the sidebands in the X arm. Here's the resulting plot.
Attachment 1: HOM_resonances_Xarm.png
10494 Thu Sep 11 02:08:32 2014 JenneUpdateLSCHigher transmission powers
No breakthroughs tonight.
DRMI didn't want to lock with either the recipe we used a year ago (elog 9116) or the one used in May (elog 9968). Being lazy and sleepy, I chickened out and went back to PRFPMI locking.
Many attempts, I'll highlight 2 here.
(1) I had done the CARM -> sqrtInvTrans transition, and reduced the CARM offset to arm powers of about 7, and lost lock. I don't remember now if I was trying to transition DARM to AS55, or if I was just prepping (measuring error signal ratio and relative sign).
(2) I stopped the carm_cm_up script just before it wanted to do the CARM -> sqrtInvTrans transition, and stayed with CARM and DARM both on ALS. I got to reasonably high powers, and was measuring the error signal ratios I needed for CARM -> REFL DC and DARM -> AS55. Things were too noisy to get good coherence for the DARM coefficient, but I thought I was in good shape to transition CARM to REFL DC (which looks like REFL11I, since REFLDC goes to the CM board, and the OUT2 of that board is used to monitor the input to the board). Anyhow, I set the offset such that it matched my current CARM offset value, and started the transition, but lost lock about halfway through. CARM started ringing up here, and I think that's what caused this lockloss. Could have been the CARM peak, which I wasn't considering / remembering at the time.
Daytime activity for Thurs: Lock DRMI, maybe first on 1f signals, but then also on 3f signals.
768 Wed Jul 30 13:14:03 2008 KojiSummaryIOOHistory of the MC abs length
I was notified by Rob and Rana that there were many measurements of the MC abs length (i.e. modulation frequencies for the IFO) between 2002 and now.
So, I dug through the new and old e-logs and collected the measured values of the MC length, as shown below.
I checked whether the two big steps in the MC length coincided with vents. Each actually does.
The elog said that the tilt of the table was changed at the OMC installation in 2006 Oct.
It is said that the MC mirrors were moved a lot during the vent in 2007 Nov.
Note:
o The current modulation freq setting is the highest ever.
o Rob commented that the Marconi may drift over long time scales.
o Apparently we need another measurement as we had the big earthquake.
My curiosity is now satisfied so far.
Local Time 3xFSR[MHz] 5xFSR[MHz] MC round trip[m] Measured by
----------------------------------------------------------------------------
2002/09/12 33.195400 165.977000 27.09343 Osamu
2002/10/16 33.194871 165.974355 27.09387 Osamu
2003/10/10 33.194929 165.974645 27.09382 Osamu
2004/12/14 33.194609 165.973045 27.09408 Osamu
2005/02/11 33.195123 165.975615 27.09366 Osamu
2005/02/14 33.195152 165.975760 27.09364 Osamu
2006/08/08 33.194700 165.973500 27.09401 Sam
2006/09/07 33.194490 165.972450 27.09418 Sam/Rana
2006/09/08 33.194550 165.972750 27.09413 Sam/Rana
----2006/10 VENT OMC installation
2006/10/26 33.192985 165.964925 27.09541 Kirk/Sam
2006/10/27 33.192955 165.964775 27.09543 Kirk/Sam
2007/01/17 33.192833 165.964165 27.09553 Tobin/Kirk
2007/08/29 33.192120 165.960600 27.09611 Keita/Andrey/Rana
----2007/11 VENT Cleaning of the MC mirrors
2007/11/06 33.195439 165.977195 27.09340 Rob/Tobin
2008/07/29 33.196629 165.983145 27.09243 Rob/Yoichi
Attachment 1: MC_length.png
770 Wed Jul 30 15:12:08 2008 ranaSummaryIOOHistory of the MC abs length
> I was notified by Rob and Rana that there were many measurements of the MC abs length (i.e. modulation
> frequencies for the IFO.) between 2002 and now.
I will just add that I think that the Marconi/IFR has always been off by ~150-200 Hz
in that the frequency measured by the GPS locked frequency counter is different from
what's reported by the Marconi's front panel. We should, in the future, clearly indicate
which display is being used.
10491 Wed Sep 10 21:05:43 2014 JenneUpdateLSCHoly sensitivity, Batman!
Koji and Manasa did some work on the PSL green situation today (Koji is still writing that log post up), but I just measured the Yarm out of loop sensitivity, and WOAH.
The beat is -11.5dBm at 42.8 MHz. Koji said the sweet spot is around 30 MHz. The out of loop sensitivity is 400 Hz RMS! Something to note is that the Y beatnote still has a 20dB amplifier before going to the beatbox, but the X does not. We had been worried about saturation issues with the X, so we took out the amplifier. However, I might put it back if we win big like this.
Recall from elog 10462 that I had saved a reference of the out of loop noise for both X and Y, but Y was much noisier than X. The references below are from that elog, and the new Y is in dark blue. (Edit, 9:18pm, updated plot measuring down to 0.01Hz. This is the new reference on the ALS_outOfLoop_Ref.xml template).
EDIT: (Don't worry, I'm going to measure X too, but right now the beam overlap on the camera is not good, as if something drifted after Koji and Manasa closed up the PSL table)
Touched up the alignment for X on the PSL table. Current beatnotes are: [Y, -13.5 dBm, 74.1 MHz], [X, -22 dBm, 13.9 MHz]. Red is the current X out of loop, and I've saved it as the new X reference on the template.
10497 Fri Sep 12 00:28:04 2014 ericqUpdateLSCHoly sensitivity, Batman!
I took a quick measurement of the ALS stability, using POX and POY as out of loop sensors, using a CARM calibration line to line POX and POY up to the calibrated PHASE_OUT channels at 503Hz.
• X arm RMS ~1kHz
• Could use more low frequency suppression
• Y arm RMS ~200Hz
14573 Thu Apr 25 10:25:19 2019 gautamUpdateFrequency noise measurementHomodyne v Heterodyne
If I understand correctly, the Mach-Zehnder readout port power is only a function of the differential phase accumulated between the two interfering light beams. In the homodyne setup, this phase difference can come about because of either fiber length change OR laser frequency change. We cannot directly separate the two effects. Can you help me understand what advantage, if any, the heterodyne setup offers in this regard? Or is the point of going to heterodyne mainly for the feedback control, as there is presumably some easy way to combine the I and Q outputs of the heterodyne measurement to always produce an error signal that is a linear function of the differential phase, as opposed to the sin^2 in the free-running homodyne setup? What is the scheme for doing this operation in a high bandwidth way (i.e. what is supposed to happen to the demodulated outputs in Attachment #3 of your elog)? What is the advantage of the heterodyne scheme over applying temperature feedback to the NPRO with 0.5 Hz tracking bandwidth so that we always stay in the linear regime of the homodyne readout?
Also, what is the functional form of the curve labelled "Theory" in Attachment #2? How did you convert from voltage units in Attachment #1 to frequency units in Attachment #2? Does it make sense that you're apparently measuring laser frequency noise above 10 Hz? i.e. where do the "Dark Current Noise" and "Shot Noise" traces for the experiment lie relative to the blue curve in Attachment #2? Can you point to where the data is stored, and also add a photo of the setup?
14576 Thu Apr 25 15:47:54 2019 AnjaliUpdateFrequency noise measurementHomodyne v Heterodyne
My understanding is that the main advantage in going to the heterodyne scheme is that we can extract the frequency noise information without worrying about locking to the linear region of the MZI. The arctan of the ratio of the in-phase and quadrature components will give us phase as a function of time, with a frequency offset. We need to correct for this frequency offset; then the frequency noise can be deduced. But the frequency noise value extracted would still have contributions from both the frequency noise of the laser and the fiber length fluctuation. I have not understood the method of giving temperature feedback to the NPRO. I would like to discuss the same.
The functional form used for the curve labeled as theory is 5x10^4/f. The power spectral density (V^2/Hz) of the data in Attachment #1 is found using the pwelch function in Matlab, and the square root of the same gives the y axis in V/rtHz. From the experimental data, we get the values of Vmax and Vmin. To ride from Vmax to Vmin, the corresponding phase change is pi. From this information, V/rad can be calculated. This value is then multiplied by 2*pi*(time delay) to get the quantity in V/Hz. Dividing the V/rtHz value by the V/Hz value gives the y axis in Hz/rtHz. The calculated values of shot noise and dark current noise are way below (of the order of 10^-4 Hz/rtHz) in this frequency range.
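A minimal sketch of this conversion chain in Python (scipy's welch standing in for Matlab's pwelch; the variable names and nperseg choice are illustrative assumptions):

    import numpy as np
    from scipy import signal

    def frequency_noise_asd(v, fs, tau):
        # v: MZ output time series [V]; fs: sample rate [Hz]; tau: fiber delay imbalance [s]
        f, pxx = signal.welch(v, fs=fs, nperseg=2**14)  # PSD in V^2/Hz
        asd_volts = np.sqrt(pxx)                        # V/rtHz
        v_per_rad = (v.max() - v.min()) / np.pi         # riding Vmax -> Vmin is a phase change of pi
        v_per_hz = v_per_rad * 2 * np.pi * tau          # frequency discriminant in V/Hz
        return f, asd_volts / v_per_hz                  # Hz/rtHz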
I forgot to take a picture of the setup at that time. Now Andrew has taken the fiber beam splitter back for his experiment. Attachment #1 shows the current view of the setup. The data from the previous trial is saved in /users/anjali/MZ/MZdata_20190417.hdf5
Quote: If I understand correctly, the Mach-Zehnder readout port power is only a function of the differential phase accumulated between the two interfering light beams. In the homodyne setup, this phase difference can come about because of either fiber length change OR laser frequency change. We cannot directly separate the two effects. Can you help me understand what advantage, if any, the heterodyne setup offers in this regard? Or is the point of going to heterodyne mainly for the feedback control, as there is presumably some easy way to combine the I and Q outputs of the heterodyne measurement to always produce an error signal that is a linear function of the differential phase, as opposed to the sin^2 in the free-running homodyne setup? What is the scheme for doing this operation in a high bandwidth way (i.e. what is supposed to happen to the demodulated outputs in Attachment #3 of your elog)? What is the advantage of the heterodyne scheme over applying temperature feedback to the NPRO with 0.5 Hz tracking bandwidth so that we always stay in the linear regime of the homodyne readout? Also, what is the functional form of the curve labelled "Theory" in Attachment #2? How did you convert from voltage units in Attachment #1 to frequency units in Attachment #2? Does it make sense that you're apparently measuring laser frequency noise above 10 Hz? i.e. where do the "Dark Current Noise" and "Shot Noise" traces for the experiment lie relative to the blue curve in Attachment #2? Can you point to where the data is stored, and also add a photo of the setup?
Attachment 1: Experimental_setup.JPG
2952 Wed May 19 16:00:18 2010 JenneUpdateIOOHooray! We locked the MC! (and some other stuff)
[Jenne, Kevin]
We opened up the MC chambers again, and successfully got the MC locked today! Hooray! This meant that we could start doing other stuff....
First, we clamped the Faraday. I used the dog clamps that Zach left wrapped in foil on the clean cart. I checked with a card, and we were still getting the 00 mode through, and I couldn't see any clipping. 2 thumbs up to that.
Then we removed the weight that was on the OMC table, in the way of where MMT2 needs to go. We checked the alignment of the MC, and it still locks on TEM00, but the spot looks pretty high on MC2 (looking at the TV view). We're going to have to relevel the table when we've got the MMT2 optic in the correct place.
We were going to start moving the PZT steering mirror from the BS table to the IOO table, place MMT2 on the OMC table, and put in a flat mirror on the BS table to get the beam out to the BS oplev table, but Steve kicked us out of the chambers because the particle count got crazy high. It was ~25,000 which is way too high to be working in the chambers (according to Steve). So we closed up for the day, and we'll carry on tomorrow.
Photos of the weight before we removed it from the OMC table, and a few pictures of the PZT connectors are on Picasa
2954 Wed May 19 22:28:05 2010 KojiUpdateIOOHooray! We locked the MC! (and some other stuff)
Good! What was the key?
The MC2 spot looks very high, but don't believe the TV image. Believe the result of script/A2L/A2L_MC2. What you are looking at is the comparison of the spot at the front surface and the OSEMs behind the mirror.
Quote: [Jenne, Kevin] We opened up the MC chambers again, and successfully got the MC locked today! Hooray! This meant that we could start doing other stuff.... First, we clamped the Faraday. I used the dog clamps that Zach left wrapped in foil on the clean cart. I checked with a card, and we were still getting the 00 mode through, and I couldn't see any clipping. 2 thumbs up to that. Then we removed the weight that was on the OMC table, in the way of where MMT2 needs to go. We checked the alignment of the MC, and it still locks on TEM00, but the spot looks pretty high on MC2 (looking at the TV view). We're going to have to relevel the table when we've got the MMT2 optic in the correct place. We were going to start moving the PZT steering mirror from the BS table to the IOO table, place MMT2 on the OMC table, and put in a flat mirror on the BS table to get the beam out to the BS oplev table, but Steve kicked us out of the chambers because the particle count got crazy high. It was ~25,000 which is way too high to be working in the chambers (according to Steve). So we closed up for the day, and we'll carry on tomorrow. Photos of the weight before we removed it from the OMC table, and a few pictures of the PZT connectors are on Picasa.
11938 Wed Jan 20 02:53:18 2016 ericqUpdateLSCHopeful signs
[ericq, Gautam]
We gave DRFPMI locking a shot, with the ALS out-of-loop noises as attached. I figured the ALSX noise might be tolerable.
After the usual alignment pains, we got to DRMI holding while buzzing around resonance. Recall that we have not locked since Koji's repair of the LO levels in the IMC loop, so the proper AO gains are a little up in the air right now. There were hopeful indications of arm powers stabilizing, but we were not able to make it stick yet. This is perhaps consistent with the ALSX noise making things harder, but not necessarily impossible; we assuredly still want to fix the current situation, but perhaps we can still lock.
On a brighter note, I've only noticed one brief EPICS freeze all night. In addition, the wall StripTools seem totally continuous since ~4pm, whereas I'm used to seeing some blocky shapes, particularly in the seismic rainbow. Could this possibly mean that the old WiFi router was somehow involved in all this?
Attachment 1: 2016-01-20_ALSOOL.pdf
11941 Thu Jan 21 00:02:11 2016 KojiUpdateLSCHopeful signs
That's good news. Only quantitative analysis will tell us if it is true or not.
Also we still want to analyze the traffic with the new switch.
Quote: On a brighter note, I've only noticed one brief EPICS freeze all night. In addition, the wall StripTools seem totally continuous since ~4pm, whereas I'm used to seeing some blocky shapes, particularly in the seismic rainbow. Could this possibly mean that the old WiFi router was somehow involved in all this?
13647 Wed Feb 21 17:20:32 2018 johannesUpdateVACHornet gauge connected to DAQ.
I wired the six available BNC connectors on the front panel of the new XEND slow DAQ to physical Acromag channels. There were two unused ADC channels and eight DAC channels, of which I connected four. The following entries were added to /cvs/cds/caltech/target/c1auxex2/ETMXaux2.db:
Connector   Acromag Channel   EPICS Name
In1         XT1221C #6        C1:Vac-CC1_HORNET_PRESSURE_VOLT
In2         XT1221C #7        C1:PEM-SEIS_EX_TEMP_MON
Out1        XT1541B #4        C1:PEM-SEIS_EX_TEMP_CTRL
Out2        XT1541B #5        Not Assigned
Out3        XT1541B #6        Not Assigned
Out4        XT1541B #7        Not Assigned
C1:Vac-CC1_HORNET_PRESSURE_VOLT is converted to the additional soft channel C1:Vac-CC1_HORNET_PRESSURE in units of torr using the conversion $10^{(\mathrm{Voltage}-10)}$ stated in the manual. A quick check showed that the resulting number and the displayed pressure on the vacuum gauge itself agree to ~1e-8 torr. Gautam added the new EPICS calc channel to the C0EDCU and restarted FB; now the data is being recorded.
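For reference, the calc-channel conversion amounts to the following (a minimal sketch; the function name is illustrative):

    def hornet_pressure_torr(voltage):
        # Convert the Hornet gauge log-analog output [V] to pressure [torr], per the manual
        return 10 ** (voltage - 10)

    # e.g. hornet_pressure_torr(2.0) -> 1e-8 torr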
Three of the output channels do not have a purpose yet, so their epics records were created but remain inactive for the time being.
Attachment 1: VacLog.png
4856 Wed Jun 22 17:35:35 2011 IshwitaUpdateGeneralHot air station
This is the new hot air station for the 40m lab...
Attachment 1: P6220212.JPG
Attachment 2: P6220213.JPG
11634 Tue Sep 22 16:42:39 2015 ericqUpdateIOOHousekeeping
I've moved the OAF MC2 signal path to go directly from c1oaf to c1mcs, so that the LSC being ON/OFF doesn't interfere with the MC length seismic feedforward. Since the FB is currently down, I can't do a full test, but looking at monitor points in StripTool indicates it's working as intended.
I also cleaned up some LSC medm stuff; exposing the existing SRCL UGF servo, and removing a misleading arrow. This reminds me that I need to get calibration lockins back up and running...
15829 Sat Feb 20 16:20:33 2021 gautamUpdateGeneralHousekeeping + PRMI char
In prep to try some of these debugging steps, I did the following.
1. ndscope updated from 0.7.9 to 0.11.3 on rossa. I've been testing/assisting the development for a few months now and am happy with it, and like the new features (e.g. PDF export). v0.7.9 is still available on the system so we can revert whenever we want.
2. Arms locked on POX/POY, dither aligned to maximize TRX/TRY, normalization reset.
3. PRMI locked, dither aligned to maximize POPDC.
4. All vertex oplevs re-centered on their QPDs.
While working, I noticed that the annoying tip-tilt drift seems to be worse than it has been in the last few months. The IPPOS QPD is a good diagnostic to monitor stability of TT1/TT2. While trying to trend the data, I noticed that from ~31 Jan (Saturday night/Sunday morning local time), the IP-POS QPD segment data streams seem "frozen", see Attachment #1. This definitely predates the CDS crash on Feb 2. I confirmed that the beam was in fact incident on the IPPOS QPD, and at 1Y2/1Y3 that I was getting voltages going into the c1iscaux Acromag crate. All manner of soft reboots (eth1 network interface, modbusIOC service) didn't fix the problem, so I power cycled the Acromag interface crate. This did the trick. I will take this opportunity to raise again the issue that we do not have a useful, reliable diagnostic for the state of our Acromag systems. The problem seems to not have been with all the ADC cards inside the crate, as other slow ADC channels were reporting sensible numbers.
Anyways, now that the QPD is working again, you can see the drift in Attachment #2. I ran the dither alignment ~4 hours ago, and in the intervening time, the spot, which was previously centered on the AS camera CRT display, has almost drifted completely off (my rough calibration is that the spot has moved 5mm on the AS CCD camera). I was thinking we could try installing the two HAM-A coil drivers to control the TTs, which would allow us to rule out flaky electronics as the culprit, but I realize some custom cabling would be required, so maybe it's not worth the effort. The phenomenology of the drift makes me suspect the electronics: it's hard for me to imagine that a mechanical creep would stop creeping after 3-4 hours, and how would we explain the start of such a mechanical drift? On the other hand, the fact that the drift is almost solely in pitch lends support to the cause being mechanical. This would really hamper the locking efforts: the drift is on short enough timescales that I'd need to repeatedly go back and run the dither alignment between lock attempts - not the end of the world, but it costs ~5mins per lock attempt.
On to the actual tests: before testing the hardware, I locked the PRMI (no ETMs). In this configuration, I'm surprised to see that there is nearly perfect coherence between the MICH and PRCL error signals between 100Hz-1kHz 🤔. When the AS55 demodulated signals are whitened prior to digitization (and then de-whitened digitally), the coherence structure changes. The electronics noise (measured with the PSL shutter closed) itself is uncorrelated (as it should be), and below the level of the two aforementioned spectra, so it is some actual signal I'm measuring there with the PRMI locked, and the coherence is on the light fields on the photodiode. So it would seem that I am just injecting a ton of AS55 sensing noise into the PRCL loop via the MICH->PRM LSC output matrix element. Weird. The light level on the AS55 photodiode has increased by ~2x after the September 2020 vent, when we removed all the unused output optics and the copper OMC. Nevertheless, the level isn't anywhere close to being high enough to saturate the ADC (confirmed by time domain signals in ndscope).
To get some insight into whether the whole RF system is messed up, I first locked the arm cavities with POX and POY as the error signals. Attachment #3 shows the spectra and coherence betweeen these two DoFs (and the dark noise levels for comparison). This is the kind of coherence profile I would expect - at frequencies where the loop gain isn't so high as to squish the cavity length noise (relative to laser frequency fluctuations), the coherence is high. Below 10 Hz, the coherence is lower than between 10-100 Hz because the OLG is high, and presumably, we are close to the sensing noise level. And above ~100 Hz, POX and POY photodiodes aren't sensing any actual relative frequency fluctuations between the arm length and laser frequency, so it's all just electronics noise, which should be incoherent.
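For anyone repeating this diagnostic offline, the magnitude-squared coherence between two downloaded error-signal time series can be estimated as below (a sketch; the data fetching and sample rate are assumed):

    from scipy import signal

    def sensor_coherence(x, y, fs):
        # Coherence ~1 where both sensors see a common (real) signal,
        # ~0 where each is dominated by its own (electronics) noise.
        f, cxy = signal.coherence(x, y, fs=fs, nperseg=2**12)
        return f, cxy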
The analogous plot for the PRMI lock is shown in Attachment #4. I guess this is telling me that the MICH sensing noise is getting injected into the PRCL error point between 100Hz-1kHz, where the REFL11 photodiode (=PRCL sensor) isn't dark noise limited, and so there is high coherence? I tuned the MICH-->PRM LSC output matrix element to minimize the height of a single frequency line driving the BS+PRM combo at ~313Hz in the PRCL error point.
All the spectra are in-loop, the loop gain has not been undone to refer this to free-running noise. The OLGs themselves looked fine to me from the usual DTT swept sine measurements, with ~100 Hz UGF.
Attachment 1: IPPOSdeat.pdf
Attachment 2: TTdrift.pdf
Attachment 3: POXnPOY.pdf
Attachment 4: PRMI.pdf
15831 Sun Feb 21 20:51:21 2021 ranaUpdateGeneralHousekeeping + PRMI char
I'm curious to see if the demod phase for MICH in REFL & AS changes between the simple Michelson and PRMI. If there's a change, it could point to a PRCL/f2 mismatch.
But I would still bet on demod chain funniness.
15875 Sun Mar 7 15:26:10 2021 gautamUpdateLSCHousekeeping + more PRMI
1. Beam pointing into PMC was tweaked to improve transmission.
2. AS110 photodiode was re-installed on the AS table - I picked off 30% of the light going to the AS WFS using a beamsplitter and put it on the AS110 photodiode.
3. Adjusted ASDC whitening gain - we have been running nominally with +18dB, but after Sept 2020 vent, there is ~x3 amount of light incident on the AS55 RFPD (from which the ASDC signal is derived). I want to run the dither alignment servos that use this PD using the same settings as before, hence this adjustment.
4. Adjusted digital demod phases of POP22, POP110 and AS110 signals with the PRMI locked (sideband resonant). I want these to be useful to debug the PRMI. the phases were adjusted so that AS110_Q, POP22_I and POP110_I contain the signal (= sideband buildup) when the PRMI is locked.
5. Ran the actuator calibration routine for BS, ITMX and ITMY - i'll try and do the PRM and ETMs as well later.
6. With the PRMI locked (sidebands resonant), looked at the sideband power buildup. POP22 and POP110 remain stable, but there is some low frequency variation in the AS110_Q channel (but not the I channel, so this is really a time varying transmission of the f2 sideband to the dark port). What's that about? Also unsure about those abrupt jumps in the POP22/POP110 signals, see Attachment #1 (admittedly these are slow channels). I don't see any correlation in the MICH control signal.
7. Measured the loop shapes of the MICH (UGF ~90 degrees, PM~30 degrees) and PRCL (UGF~110 Hz, PM~30 degreees) loops - stability margins and loop UGFs seem reasonable to me.
8. Tried nulling the MICH-->PRCL coupling by adjusting the MICH-->PRM matrix element - as has been the case for a while, unable to do any better, and I can't null that line as we expect to be able to.
9. Not expecting to get anything sensible, but ran some sensing matrix lines (at the correct frequencies this time).
10. Tried locking the PRMI with MICH actuation to an ITM instead of the BS - I can realize the lock but the loop OLTF I measure with this configuration is very weird, needs more investigation. I may look into this later today evening.
I was also reminded today of the poor reliability of the LSC whitening electronics. Basically, there may be hidden saturations in all the channels that have a large DC value (e.g. the photodiode DC mon channels) due to the poor design of the cascaded gain stages. I was thinking about using the REFL DC channel to estimate the mode-matching into the PRC, but this has a couple of problems. Electronically, there may be some signal distortion due to the aforementioned problem. But in addition, optically, estimating the mode-matching into the PRC by comparing REFL DC levels in single bounce off the PRM and with the PRMI locked has the problem that the mode-matching is degenerate with the intra-cavity loss, which is of the same order as the mode mismatch (a percent or two, I claim). If Koji or someone else can implement the fix suggested by Hartmut for all the LSC whitening channels, that'd give us more faith in the signals. It may be less work than replacing all the whitening filters with a better design, which would presumably reduce distortion (e.g. the aLIGO ISC whitening filter, which implements the cascaded gain stages using single OP27s and, more importantly, has a 1 kohm series resistance at the input to the op amp, so the preceding stage never has to drive more than 10V/1kohm ~ 10mA of DC current).
Attachment 1: PRMI_SBres.png
Attachment 2: MICH_act_calib.pdf
16994 Tue Jul 12 19:46:54 2022 PacoSummaryALSHow (not) to take NPRO PZT transfer function
[Paco, Deeksha, rana]
Quick elog for this evening:
• Rana disabled MC servo.
• Slow loop also got disengaged.
• AUX PSL beatnote is best taken with *free running lasers*, since their relative frequency fluctuations are lower than when locked to cavities.
• DFD may be better to get PZT transfer funcs, or get higher bandwidth phase meter.
• Multi instrument to be done with updated moku
• Deeksha will take care of updated moku
4005 Thu Dec 2 00:34:32 2010 ranaHowToLSCHow Does Cavity Locking Work (answered by Nikon)
https://nodus.ligo.caltech.edu:30889/gw.swf
Dr. Koji Arai and Nikon
3822 Fri Oct 29 11:29:29 2010 josephbUpdateCDSHow I broke the frame builder yesterday
Problem:
Long before Yuta came along and deleted daqd, I had done something to prevent the framebuilder code from running at all.
Cause:
Alex pointed out via e-mail that this corresponded to the inability to access certain frame files due to their permissions being only for root.
Turns out when I had run the code under the inittab, I forgot to make it use controls instead of root (which is the default). This later caused problems for the code when it tried to access those files, resulting in the weird errors we saw.
Solution:
Use chown to change the offending frame files back to controls.
Future:
Write a proper inittab script which uses "su controls" before running the daqd code.
9093 Mon Sep 2 03:51:14 2013 ranaHowToGeneralHow To Coil Cables
9097 Tue Sep 3 10:54:33 2013 SteveHowToGeneralHow To Coil Cables
Quote:
B grade Nobel is awarded.
If cables could dream?
This skill should be mandatory for LIGOX graduates.
1589 Fri May 15 14:05:14 2009 DmassHowToComputersHow To: Crash the Elog
The Elog started crashing last night. It turns out I was the culprit, and whenever I tried to upload a certain 500kb .png picture, it would die. It has happened both when choosing "upload" of a picture, and when choosing "submit" after successfully uploading a picture. Both culprits were ~500kb .png files.
7484 Thu Oct 4 22:27:54 2012 KojiUpdateSUSHow about the slow machines?
One terrible concern of mine is that the slow machines were rebooted at the power interruption.
Based on the elog entries, I assume they have not been burtrestored...
If this is true, they may cause some weird behaviors of the PSL/IOO electronics.
7485 Thu Oct 4 22:35:16 2012 DenUpdateSUSHow about the slow machines?
Quote: Based on the elog entries, I assume they have not been burtrestored...
Do you know how to burtrestore or restart slow machines?
Edit by Den: I did a burtrestore of c1psl.snap from 2 days ago. The slow machines still behave abnormally. For example, if I sweep C1:PSL-FSS_SLOWDC, the SLOW monitor value does not change.
7489 Fri Oct 5 04:34:31 2012 ranaUpdateSUSHow about the slow machines?
Quote:
Quote: Based on the elog entries, I assume they have not been burtrestored...
Do you know how to burtrestore or restart slow machines?
Edit by Den: I did a burtrestore of c1psl.snap from 2 days ago. The slow machines still behave abnormally. For example, if I sweep C1:PSL-FSS_SLOWDC, the SLOW monitor value does not change.
Problems with Slow Machines?
12639 Wed Nov 23 17:48:16 2016 rana, kojiUpdateIOOHow bad is the McWFS?
Medium.
Previous elog entries on this:
10391 Thu Aug 14 19:23:25 2014 ranaHowToIOOHow do I set the FSS offset to make the PZT voltage start at the right place?
When the IMC locks, we want the FAST OUT of the TTFSS box to be close to zero volts. We also want the control signal from the MC Servo board to be close to 0 V. How to set this up?
With the IMC locked, we just servo the FSS input offset to minimize the MC board output:
ezcaservo -r C1:IOO-MC_FAST_MON -g 0.1 -t 10 C1:PSL-FSS_INOFFSET
I would have used "CDSUTILS", but that seems to have some sort of ridiculous bug where we can't have prefixes on channel names, even on the command line.
1911 Sat Aug 15 18:35:14 2009 ClaraFrogsComputersHow far back is complete data really saved? (or, is the cake a lie?)
I was told that, as of last weekend, we now have the capability to save full data for a month, whereas before it was something like 3 days. However, my attempts to get the data from the accidentally-shorted EW2 channel in the Guralp box have all been epic failures. My other data is okay, despite my not saving it for several days after it was recorded. So, my question is, how long can the data actually be saved, and when did the saving capability change?
7765 Fri Nov 30 09:59:53 2012 SteveHowToGeneralHow not to
Clean cabinet S15 doors were left open. You have to lock it up!
3733 Mon Oct 18 09:01:48 2010 steveHowToGeneralHow not to
This BNC cable is crying for help. Please do not do this to me. It should be reported to the abuse center. Throw this cable into the garbage now.
Attachment 1: P1060926.JPG
11459 Wed Jul 29 14:32:01 2015 SteveUpdateGeneralHow not to solder
Quote:
Quote: Koji and Steve, The result: bad Guralp x-arm cable. I will swap the short cables tomorrow at the base.
Short 46" long cables at the base plates were swapped. Their solderings looked horrible.
This cable actually worked on 5-5-2015.
Bad cable at ETMY station now. The new cable should be a little bit longer ~52"
Koji could easily pull out 11 of the wires from their sockets.
Attachment 1: coldSoldering.jpg
11701 Tue Oct 20 11:24:29 2015 ericqHowToLSCHow to DRFPMI
Initial Alignment
1. With arms POX/POY locked, run dither alignment servos. Set transmon QPD offsets here
2. Restore "PRMI Carrier" configuration, run BS and PRM dither alignment servos simultaneously. (Note: this sacrifices some X arm alignment for better dark port alignment. In practice no appreciable loss of TRX is observed)
3. Misalign PRM, align SRM and tune SRM alignment by eye while looking at AS camera.
4. Restore POX/POY arm lock, lock green to arms, check that powers are high enough and align if necessary.
Initial Configuration
CARM, DARM
For CARM and DARM, the A channels are used for the ALS signals, whereas the B channels are used for blending the RF signals.
ALS
• BEATX and BEATY, I and Q channels: +0dB Whitening Gain, Whitening Filters ON
• Green beatnotes somewhere between 20-80MHz, following the sign convention that temperature slider UP makes the beat freq go UP. Check spectrum of PHASE_OUT_HZ vs references in ALS_outOfLoop_Ref.xml. The locking script automatically sets the correct phase tracker gain, so no need to adjust manually.
• CARM_A = -1.0 x ALSX + 1.0 x ALSY, G=1.0
• DARM_A = 1.0 x ALSX + 1.0 x ALSY, G=1.0
RF
• CM Board: REFL11 I daughter board output -> IN1, IN1 Enabled, -32dB input gain, 0.0V offset, all boosts off, AO polarity positive, AO gain +0dB
• MC Board: IN2 disabled, -32dB input gain
• CM_SLOW: +0dB Whitening Gain, Whitening ON, LSC-CM_SLOW_GAIN = -5e-4 (Though, it would be good to reallocate this gain to the input matrix element)
• CARM_B = 1.0 x CM_SLOW, FM4 FM10 ON, G=0 (FM4 = LP700 for AO crossover stability, FM10 = 120:5k for coupled cavity pole compensation)
• AS55: +9dB Whitening Gain, Whitening filters manual, Demod angle -37.0
• DARM_B = -1e-4 x AS55 Q, G=0
DRMI 3F
For the DRMI, the A channels are used for the 1F signals, whereas the B channels are used for the 3F signals. The settings for transitioning to 1F after locking the DRFPMI have not yet been determined.
These settings are currently saved in the DRMI configurator, but the demod angles are set for DRFPMI lock, so the settings don't reliably work for misaligned arms.
• REFL33: +30dB Whitening Gain, Whitening filters trigger on DRMI lock, Demod angle: 136.0
• REFL165: +24dB Whitening Gain, Whitening filters trigger on DRMI lock, Demod angle: -111.0
• POP22: +15dB Whitening Gain, Whitening filters OFF, Demod angle: -114.0
• AS110: +36dB Whitening Gain, Whitening filters OFF, Demod angle: -116.0
• POPDC: +0dB Whitening Gain, Whitening filters OFF (used as a supplemental trigger signal when CARM and DARM are buzzing and POP22 fluctuates wildly)
• MICH_B = 6.0 x REFL165Q, offset = 15.0
• PRCL_B = 5.0 x REFL33I, offset = 45.0
• SRCL_B = -0.6 x REFL165I + 0.24 x REFL33 I, offset=0
The REFL33 element in SRCL_B is to reduce the PRCL coupling, was found empirically by tuning the relative gains with the arms misaligned and looking at excitation line heights. The offsets were found by locking the DRMI on 1F signals with arms misaligned, and taking the average value of these 3F error signals.
Servo filter configuration
The CARM and DARM ALS settings are largely scripted by scripts/ALS/Transition_IR_ALS.py, which takes you from arms POX/POY locked to CARM and DARM ALS locked. The DRMI settings are usually restored from the IFO_CONFIGURE screen.
• CARM: FM[1, 2, 3, 5, 6] , G=4.5, Trigger forced on, no FM triggers, output limit 8k
• DARM: FM[1, 2, 3, 5, 6] , G=4.5, Trigger forced on, no FM triggers, output limit 8k
• MICH: FM[4, 5], G= -0.03, Trigger POP22 I x 1.0 [50, 10], FM[2, 3, 7] triggered [50, 10], output limit 20k
• PRCL: FM[4, 5], G= -0.003, Trigger POP22 I x 1.0 [50, 10], FM[1, 2, 8, 9] triggered [50, 10], output limit 8k
• SRCL: FM[4, 5], G= -0.4, Trigger AS110 Q x 1.0 [500, 100], FM[2, 7, 9] triggered [500, 100], output limit 15k
Actuation Output matrix
• MC2 = -1.0 x CARM
• ETMX = -1.0 x DARM
• ETMY = 1.0 x DARM
• BS = 0.5 x MICH
• PRM = 1.0 x PRCL - 0.2655 MICH
• SRM = 1.0 x SRCL + 0.25 MICH (The mich compensation is very roughly estimated)
Locking Procedure
When arms are POX/POY locked, and the green beatnotes are appropriately configured, calling scripts/DRFPMI/carm_cm_up.sh initiates the following sequence of events:
• Turn ON MC length feedforward and PRC angle feedforward
• Set ALS phase tracker UGFs by looking at I and Q magnitudes
• Set LSC-ALSX and LSC-ALSY offsets by averaging, ramp CARM+DARM gains up, XARM+YARM gains down, engage CARM+DARM boosts, now ALS locked
• Move CARM away from resonance, offset = -4.0 (DRMI locks quicker on this side for whatever reason)
• Restore PRM, SRM alignment. Set DRMI A FM gains to 0, B FM gains to 1.0. Enable LSC outputs for BS, PRM, SRM
• When DRMI has locked, add POPDC trigger elements to DRMI signals and transition SRCL triggering to POP22I. NB: In the c1lsc model, the POPDC signal incident on the trigger matrix has an abs() operator applied to it first.
• MICH Trig = 1.0 x POP22 I + 0.5 x POPDC, [50, 10]
• PRCL Trig = 1.0 x POP22 I + 0.5 x POPDC, [50, 10]
• SRCL Trig = 10.0 x POP22 I + 5 x POPDC, [500, 100]
• Reduce POX, POY whitening gains from their nominal +45dB to +0dB, so there aren't railing channels making noise in the whitening chassis and ADCs
• DC couple ITM oplevs (average spot position, set FM offset, turn on DC boost filter, let settle)
• With an 8 second ramp, reduce CARM offset to 0 counts.
• MANUALLY adjust CARM_A and DARM_A offsets to where CARM_B_IN and DARM_B_IN are seen to fluctuate symmetrically around their zero crossing.
• Note: Last week, this adjustment tended to be roughly the same from lock to lock, unlike the PRFPMI which generally didn't need much adjustment. Also, by jumping from CARM offset of -0.4 to 0.4, it could be seen that the zero crossing in CARM_B aka CM_SLOW aka REFL11 had some offset, so CARM_B_OFFSET was set to 0.005, but this may change.
When CARM and DARM are buzzing around true zero, powers maximized:
• CARM and DARM FM1 (18,18:1,1 boosts) OFF
• CARM_B_GAIN 0.0 -> 1.0, FM7 ON (20:0 boost)
• DARM_B_GAIN 0.0 -> 0.015, FM7 ON (20:0 boost)
• MC servo board IN2 ENABLE, IN2 gain -32dB -> -16dB
• Turn ALL MC2 violin filters OFF (to smooth out the AO crossover)
• If stable, CM board IN1 gain -32dB -> -10dB (This is the overall CARM gain, the arm powers stabilize within the last few dB of this transition)
• CARM_A_GAIN 1.0 -> 0.7
• CARM_A FM9 ON (LP1k), sleep, FM 1 ON (1:20 deboost), sleep, FM 2 ON (1:20 deboost), HOLD OUTPUT, CARM now RF only
• DARM_B_GAIN 0.015 -> 0.02, sleep, DARM_A_GAIN 1.0 -> 0.0 (This may not be the ideal final DARM_B gain, UGF hasn't been checked yet)
IFO is now RF only!
• Turn on transmon QPD servos.
• Adjust comm/diff QPD servo offsets to correct any problems evident on AS/REFL cameras. This usually brings powers from ~100-120 to ~130-140.
This is as far as we've taken the DRFPMI so far, but the CARM bandwidth is still only at a few kHz. Based on PRFPMI locking, the next steps will be:
• CM BOARD +12dB or so additional IN1 gain, more AO gain may be needed to get crossover to final position of ~100Hz
• MC2 violin filters back on
• CM boost(s) on
• AS55 whitening on
• Transition DRMI to 1F
https://zememericimenhir.cz/01/127391/
## Electrolytic Recovery Of Copper From Bronze
• #### The Effective Electrolytic Recovery of Dilute Copper from
The electroplating copper industry discharged a huge amount of wastewater, causing serious environmental and health damage in Taiwan. This research applied an electrical copper recovery system to recover copper metal. In this work, electrotreatment of an industrial copper wastewater ([Cu] = 30000 mg L−1) was studied with a titanium net coated with a thin layer of RuO2/IrO2 (DSA) as reactor. The optimal result for simulated copper solution was 99.9% copper recovery efficiency at current density 0.585 A/
• #### Cupreous Metal (Copper, Bronze, Brass) Conservation
Electrolytic reduction cleaning of copper-alloyed objects, such as brass and bronze, is often avoided because it removes any aesthetically pleasing patina and may change the color by plating copper from the reduced corrosion compounds onto the surface of the alloyed metal.
• #### Electrolytic Recovery of Copper and Zinc from Brass.
copper and zinc from commercial brass by an electrolytic method. Such a method would produce metals which would be equivalent to virgin metals in purity. Both Barker and Swank found that such a recovery was possible with brasses containing only copper and zinc, but that very small amounts of lead would interfere with the dissolution of the anode.
• Author: V. Kent Loughran
• #### Electrolytic Recovery of Copper and Zinc from Brasses
methods for the recovery of the metal of electrolytic grade. In this furnace treatment, the zinc will oxidize and be lost in the slag. However, this can be done only by large copper producing companies which have the facilities for the recovery of copper from sulfide ores. The brass would have to be shipped from the
• #### Can I recover copper from this? Copper recovery from the
Can I recover copper from this? This question is important because not all metals can be recovered by electrowinning. The ability to recover the metals depends on their respective positions in the Electrochemical Series. The Electrochemical Series arranges redox reactions according to their standard potentials relative to hydrogen ion (H+). More noble metals such as silver and copper are good at accepting electrons and therefore easy to electrowin.
• #### Copper Purification Process Electrolytic Copper Refining
Dec 29, 2017· Electrolytic refining (electrorefining) is a process used to make impure copper pure. Unlike aluminum, copper metal is fairly easy to obtain chemically from its ores. By electrolysis, it can be refined and made very pure—up to 99.999%. The electrorefining is at the heart of not only copper purification, but the production of sodium hydroxide
• #### How to Recover Copper from Solution
Mar 25, 2017· An electrolytic method for the recovery of pure copper from sulphate solutions is far preferable and will produce cheap copper. In all cases a low current density is required. Hard-lead anodes or composite anodes of hard lead and coke of uniform thickness will be used.
• #### Gold Recovery via Copper Electrolysis Part 1 YouTube
Jun 09, 2019· In part 1 I explain the way Gold can be recovered from a mixture of Gold and Copper (Gold plated pins for example) via electrolytic refining of Copper. This
• #### Electrolytic Determination of Copper Assay
Jun 18, 2015· The presence of ammonium sulphate reduces the resistance of the electrolyte. In determining copper by the electrolytic method in pyrite, using 8 grams of sample, 6 to 8 grams of ammonium sulphate should be used. More than 3 hours of electrolysis should be avoided unless more HNO3 is added, as the HNO3 is gradually turned to ammonia by the electrolysis.
• #### Copper Purification Process Electrolytic Copper Refining
Dec 29, 2017· Copper Electrolytic Refining Process. In the electrolytic refining of copper, a thin sheet of high-purity Cu serves as the cathode. The blister copper plates are taken and used as anodes in an electrolyte bath of copper sulfate, CuSO4, and sulfuric acid H2SO4. As current is passed through the solution, pure copper from the anodes is plated out
• #### RECOVERY OF COPPER FROM BRONZE SCRAP
anode grade copper. Abdul Basir and Rabah (2001) showed that copper recovery from scrap was favoured with the use of some additives together with the chemical reagents. Yadavalli and Saha (1994) recovered pure electrolytic copper (99.98%) from leaded brass scrap containing 60.02% copper, 39.3% zinc and 0.53% lead.
• #### Can I recover copper from this? Copper recovery from the
A good example is electrowinning copper in the presence of high concentrations of zinc. Copper is much more noble than zinc, hence it can easily be electrowon (under the right conditions) from zinc even if the latter is 10x more concentrated without any impact on the purity of the copper cathode produced. Silver can similarly be electrowon from
• #### Electrolytic recovery of bismuth and copper as a powder
Jun 01, 2015· Electrolytic recovery of bismuth and copper as a powder from acidic sulfate effluents using an emew® cell. Wei Jin, Paul I. Laforest, Alex Luyima, Weldon Read, Luis Navarro and Michael S. Moats.
• #### Review of Copper Recovery Methods From Metallurgical Waste
Jan 18, 2018· Figure 3: electroplating for copper recovery. The figure shows a possible assembly of an electroplating mechanism for semi-continuous copper recovery. Copper in the raw material comes into contact with the electric field applied across the length of the assembly. Copper in the electrolytic solution is first separated and settles on the cathode.
• #### Electrolytic Recovery of Heavy and Precious Metals
Electrolytic Recovery of Heavy and Precious Metals since 1979. PMPC specializes in the gold electrowinning or electrolytic recovery of heavy and precious metals from many different types of solutions with a wide range of pH. For 40 years, we have been developing reclamation and recycling technology offering innovative solutions to electrolytic
• #### US3054736A Method and apparatus for recovery of copper
Patent US3054736A. Priority date: 1958-11-21. Prior art keywords: copper, zinc, scrap, acid, solution. Legal status: the legal status is an assumption and is not a legal conclusion.
• #### US2753301A Electropolishing of copper and its alloys
Patent US2753301A. Priority date: 1952-01-11. Prior art keywords: copper, acid, electropolishing, bath, alloys. Legal status: the legal status is an assumption and is not a legal conclusion.
• #### How Hydrometallurgy and the SX/EW Process Made Copper
Beginning in the mid 1980s a new technology, commonly known as the leach-solvent extraction-electrowinning process or SX/EW Process, was widely adopted. This new copper technology utilizes smelter acid to produce copper from oxidized ores and mine wastes. Today, worldwide, approximately 20% of all copper produced is produced by this process.
• #### Processing of Copper Anode-Slimes for Extraction of Metal
Anode slime is one of the important secondary resources of copper, which is obtained as a by-product during electrolytic refining of copper due to the settling of more noble metals such as gold
• #### What is Strength of Copper Alloys Definition Material
Proof strength of electrolytic-tough pitch (ETP) copper is between 60-300 MPa. Yield strength of aluminium bronze UNS C95400 is about 250 MPa. Yield strength of tin bronze UNS C90500 gun metal is about 150 MPa. Yield strength of copper beryllium UNS C17200 is about 1100 MPa. Yield strength of cupronickel UNS C70600 is about
• #### electrowinning of copper barrel-electrolyses Fruitful Mining
Cupreous metal (copper, bronze, brass) conservation: Electrolytic reduction, also called electrolysis, is an electrochemical reaction. PVC plastic pipes with sealed ends make excellent vats for long, slim artifacts, such as rifle barrels.
• #### BACKGROUND REPORT SECONDARY COPPER SMELTING,
Figure 2.2-1: Low-grade copper recovery. … electrolyte in the refinery cells or sold as a product. Smelting of low-grade copper scrap begins with melting in either a blast or a rotary furnace, resulting in slag and impure copper. If a blast furnace is used, this copper is charged to a
• #### Copper Refining an overview ScienceDirect Topics
In the electrolytic process, copper is electrodeposited to obtain a spongy powder deposit at the cathode rather than a smooth, adherent one. In this feature, it differs from the process of copper refining, where a strongly adherent product is desired. Low copper ion concentration and high acid content in the electrolyte favor formation of powder deposits.
• #### What are possible anode materials for copper(II) sulfate
Apr 22, 2020· If it's a research paper, I'd try to test as many metals and alloys (e.g. bronze, brass) as possible (also think of various stainless steels; maybe some screws or the like will do, just grab what you can get) and check what will happen when the metals are used as anode or cathode. For testing cathode materials, it might be a good idea to use a copper anode, and vice versa.
• #### Electroanalytical determination of copper and lead in
Electroanalytical determinations of copper in copper-base alloys, such as brass and bronze, are usually made by depositing the metal from an electrolyte containing sulfuric and nitric acids. For example, in analyses of alloys containing moderate amounts of lead (less than 10 percent), the alloy may be dissolved in diluted nitric acid,
• #### Energy and Environmental Profile of the U.S. Mining
It is also used in alloys such as brass and bronze, alloy castings, and electroplated protective coating in undercoats of nickel, chromium, and zinc. … electrolytic refining. Smelted copper typically retains metallic impurities … Hydrometallurgical copper recovery is the extraction and recovery of copper from ores using aqueous solutions
• #### Conservation of Copper Copper Bronze
Slide captions: Conservation of Copper (electrolytic reduction); Miriam Petrie bowl, chalconatronite (image courtesy of David A. Scott); Shang Dynasty ding, bronze, shown after electrolytic stripping (image courtesy of Honolulu Academy of the Arts); keel straps during desalination from the J. Davis shipwreck; bronze rudder.
• #### Recovery of copper and tin from stripping tin solution by
Apr 30, 2014· This is owing to the anode material and electrical conductivity. The recovery rates of copper are all about 100%, but the recovery rate of tin increases from 60.12% to 80.62% when a graphite anode is used. In the process of electrodepositing tin, a 316 stainless steel anode dissolves severely, and the rate of weight loss reaches 8.14%.
• #### Processing of Copper Anode-Slimes for Extraction of Metal
Those copper anodes are electrorefined in an electrolytic system, where the electrolyte is an acid copper sulphate solution and a high purity electrolytic copper (>999.9/1000) is produced.
• #### Products Prime Materials Recovery Inc East Hartford, CT
Copper and Copper-based alloys: Insulated #1 and #2 Copper Wire, Secondary and Primary Grades of Scrap, CDA 100-900 Series Alloys, Brass and Bronze Ingot Maker Grades, Electric Motors and Copper Bearing, Electrolytic Copper Cathodes and Copper Rod
• #### The Extraction of Copper Chemistry LibreTexts
Jun 07, 2021· Electrolytic Refining. The purification uses an electrolyte of copper(II) sulfate solution, impure copper anodes, and strips of high purity copper for the cathodes. The diagram shows a very simplified view of a cell. At the cathode, copper(II) ions are deposited as copper: $Cu^{2+}(aq) + 2e^- \rightarrow Cu(s)$
• #### Sciencemadness Discussion Board Electrolytic DEplating
Dec 10, 2009· You might try using molten lead to dissolve the surface coating. If the copper surface oxidation leads to dewetting, it would simplify the removal of the lead/silver alloy. Others here are better qualified than I to comment on recovery of silver from a silver/lead alloy,
• #### Copper extraction Wikipedia
Copper extraction refers to the methods used to obtain copper from its ores. The conversion of copper consists of a series of physical and electrochemical processes. Methods have evolved and vary with country depending on the ore source, local environmental regulations, and other factors. As in all mining operations, the ore must usually be beneficiated (concentrated).
• #### electrolytic of copper
Electrolytic Copper & High Conductivity Copper Alloys are best suited for chill castings requiring high conductivity. PIAD's chill casting process is best suited to … Electrolytic Tough Pitch Copper: an overview of the key properties, main applications and production processes of electrolytic tough pitch copper.
• #### Refining (metallurgy) Wikipedia
Electrolytic refining. The purest copper is obtained by an electrolytic process, undertaken using a slab of impure copper as the anode and a thin sheet of pure copper as the cathode. The electrolyte is an acidic solution of copper sulphate. By passing electricity through the cell, copper is dissolved from the anode and deposited on the cathode. (A worked Faraday's-law example is given at the end of this list.)
• #### Wonder Copper Beryllium Copper Alloy Beryllium Bronze
Sep 03, 2020· Since 1995, Wonder Copper is proud to own and operate six branches throughout China. Wonder Copper offers the broadest line of metals available from one distributor – beryllium copper sheet & plate, flat bar, square bar, round bar, wire as well as a
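As a back-of-the-envelope illustration of electrorefining rates (not taken from any of the sources above), Faraday's law gives the mass deposited at the cathode as $m = MIt/(nF)$, with $M = 63.5$ g/mol for copper, $n = 2$ electrons per ion, and $F \approx 96485$ C/mol. For example, a cell running at $I = 100$ A for $t = 24$ h deposits $m \approx 63.5 \times 100 \times 86400 / (2 \times 96485) \approx 2.8$ kg of copper.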
http://hal.in2p3.fr/in2p3-00158398 | # Production and Characterization of the $^{7}$H Resonance
Abstract: The 7H resonance was produced via a one-proton transfer reaction between an 8He beam at 15.4A MeV and a 12C gas target. The experimental setup was based on the active target MAYA, which allowed a complete reconstruction of the reaction kinematics. The characterization of the identified 7H events resulted in a resonance energy of 600 keV above the 3H+4n threshold and a resonance width of 100 keV. This study represents the first unambiguous proof of the existence of the 7H state.
Document type: Conference papers
http://hal.in2p3.fr/in2p3-00158398
Contributor: Michel Lion
Submitted on: Thursday, June 28, 2007 - 5:10:34 PM
Last modification on: Monday, December 14, 2020 - 4:06:20 PM
### Citation
M. Caamaño, D. Cortina, C.E. Demonchy, B. Jurado, W. Mittig, et al.. Production and Characterization of the $^{7}$H Resonance. International Symposium on Exotic Nuclei, Jul 2006, Khanty-Mansiysk, Russia. pp.23-31, ⟨10.1063/1.2746577⟩. ⟨in2p3-00158398⟩
http://texgraph.tuxfamily.org/aide/TeXgraphsu10.html | #### 4.2.4 Macros returning a string
The definition of the following macros can be found in the file TeXgraph.mac.
• coord( <z> [, decimal places] ): returns the point coordinates whose affix is <z> as a couple $\left(x,y\right)$ with the maximum <decimal places> requested (4 by default). This macro can be used as a string in functions or macros that can handle strings as argument. Example: Label(z,@coord(z)).
• engineerF( <x> ): returns the real <x> as a string in engineering notation, that is to say $\pm m\times 10^{n}$ with $m$ in the interval $[1;1000[$ and $n$ an integer multiple of $3$. This macro can be used as a string in functions or macros that can handle strings as argument (see the sketch after this list for the formatting rule).
• epsCoord( <z> [, decimal places] ): returns the point coordinates whose affix is <z> in the format $x\phantom{\rule{1em}{0ex}}y$ (eps format coordinates) with the maximum <decimal places> requested (4 by default). This macro can be used as a string in functions or macros that can handle strings as argument.
• label( <expression> ): the expression is evaluated alphanumerically and delimited with the symbol \$ if the variable dollar has the value $1$. The macro returns the resulting string. For example: [dollar:=1, @label(2+2) ] returns "$4$".
This macro is used by the macro GradDroite.
• svgCoord( <z> [, decimal places] ): returns the point coordinates whose affix is <z> with the format $x\phantom{\rule{1em}{0ex}}y$ (svg coordinates) with the maximum <decimal places> requested (4 by default). This macro can be used as a string in functions or macros that can handle strings as argument.
This macro takes the current transformation matrix into account.
• texCoord( <z> [, decimal places] ): returns the point coordinates whose affix is <z> as the couple $\left(x,y\right)$ (tex format coordinates) with the maximum <decimal places> requested (4 by default). This macro can be used as a string in functions or macros that can handle strings as argument.
This macro takes the current transformation matrix into account.
• ScriptExt(): returns the string ".bat" under windows and ".sh" if not. (shell script files extension).
• StrNum( <numeric value> ): replaces the decimal point by a comma if the predefined variable usecomma is set to $1$ and returns the resulting string. The number of decimal places is determined by the variable nbdeci, and the display format by numericFormat (0: default format, 1: scientific, 2: engineering).
Example: [usecomma:=1, nbdeci:=10, Message(@StrNum(10000*sqrt(2)) )] displays: 14142,135623731.
Example: [usecomma:=1, nbdeci:=10, numericFormat:=1, Message(@StrNum(10000*sqrt(2)) )] displays: 1,4142135624E4.
Example: [usecomma:=1, nbdeci:=10, numericFormat:=2, Message(@StrNum(10000*sqrt(2)) )] displays: 14,1421356237E3.
That macro is used by the macro GradDroite. |
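For readers who want to experiment with the engineering-notation rule implemented by engineerF, here is a minimal Python re-implementation of the rule as documented above (mantissa in $[1;1000[$, exponent a multiple of 3); it is an illustration, not TeXgraph code:

```python
import math

def engineer(x: float) -> str:
    """Format x as m*10^n with 1 <= |m| < 1000 and n a multiple of 3,
    mirroring the rule documented for the engineerF macro above."""
    if x == 0:
        return "0"
    exp = math.floor(math.log10(abs(x)))
    exp3 = 3 * (exp // 3)          # round the exponent down to a multiple of 3
    mantissa = x / 10 ** exp3
    return f"{mantissa:g}E{exp3:+d}"

print(engineer(14142.135623731))   # 14.1421E+3, cf. the StrNum examples above
```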
http://tex.stackexchange.com/questions/1244/how-to-produce-a-combined-index-from-multiple-documents | # How to produce a combined index from multiple documents?
I'm creating a set of documents which are contained in different LaTeX source files and are going to be compiled into different PDF files. Each one can have its own index, generated using the usual makeindex procedure. But since these documents cover related topics, I'd like to be able to produce one master index that contains all the terms from all the documents. Obviously the references in the index, instead of just being page numbers, will have to include both the page number and some sort of identifier for the document. Is there a way to do this?
Of course, I'm sure I could write a script to post-process the .idx files, but I'd prefer to use something existing.
If I need to dump makeindex and use some alternative, that should be fine.
It is possible to do this without any post processing except concatenation of the .idx files for the generation of the master index, but it needs a bit of macro juggling.
The trick is that in the separate documents, you have to make some low-level changes to the \index command so that it includes what makeindex calls 'encapsulation'. You can do this by adding a | symbol followed by a macro name (without preceding backslash). Then you can give the \jobname as an argument to that macro name, like so:
\documentclass{article}
\usepackage{makeidx}
\makeindex
\let\LATEXindex\index % save old definition to prevent recursion
\renewcommand\index[1]{\LATEXindex{#1|docname{\jobname}}}
This will create .idx entries that look like this:
\indexentry{alpha|docname{testdoc}}{1}
which, after running makeindex, is converted into the following .ind entry:
\item alpha, \docname{testdoc}{1}
Now, back in the separate documents, you have to add a definition for the \docname macro, for example like this:
\newcommand\docname[2]{#2}
After that, the separate documents should compile as before, except for any index entries that already used encapsulation (you will have to fix those manually).
Now for the global index creation all you have to do is concatenate all the separate .idx files into a single file, run makeindex on the result, and use an input file like this:
\documentclass{article}
\usepackage{makeidx}
\newcommand\docname[2]{#1: #2}
\begin{document}
\printindex
\end{document}
Be careful: this document should not contain a \makeindex command itself, or at least you should never run 'makeindex' for this file, as that will overwrite the combined .ind file.
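The concatenation step mentioned above can be done with cat on Unix; here is an equivalent small Python sketch (the file names are hypothetical, and the output name should match the master document's \jobname):

```python
# Concatenate every per-document .idx file into one master .idx file,
# then run:  makeindex master.idx
from pathlib import Path

idx_files = sorted(Path(".").glob("*.idx"))   # e.g. testdoc.idx, otherdoc.idx
with open("master.idx", "w") as out:
    for f in idx_files:
        if f.name == "master.idx":            # don't re-ingest a previous output
            continue
        out.write(f.read_text())
```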
https://radar.inria.fr/report/2018/gaia/uid54.html | • The Inria's Research Teams produce an annual Activity Report presenting their activities and their results of the year. These reports include the team members, the scientific program, the software developed by the team and the new results of the year. The report also describes the grants, contracts and the activities of dissemination and teaching. Finally, the report gives the list of publications of the year.
• Legal notice
• Personal data
## Section: New Results
### Using symbolic computation to solve algebraic Riccati equations arising in invariant filtering
In this joint work with Axel Barrau from Safran Tech [23], we propose a new step in the development of invariant observers. In the past, this theory led to impressive simplifications of the error equations encountered in estimation problems, especially those related to navigation. This was used to reduce computation load or derive new theoretical properties. Here, we leverage this advantage to obtain closed-form solutions of the underlying algebraic Riccati equations through advanced symbolic computation methods. |
https://en.bharatpedia.org/wiki/National_Physical_Laboratory_of_India | # National Physical Laboratory of India
Agency overview: formed 4 January 1947; headquarters New Delhi; director Prof. Dr. Venu Gopal Achanta; parent agency Council of Scientific and Industrial Research; website nplindia.org
The CSIR-National Physical Laboratory of India, situated in New Delhi, is the measurement standards laboratory of India. It maintains standards of SI units in India and calibrates the national standards of weights and measures.
## History of measurement systems in India
In the Harappan era, which is nearly 5000 years old, one finds excellent examples of town planning and architecture. The sizes of the bricks were the same all over the region. In the time of Chandragupta Maurya, some 2400 years ago, there was a well-defined system of weights and measures. The government of that time ensured that everybody used the same system. In the Indian medical system, Ayurveda, the units of mass and volume were well defined.
During the time of the Mughal emperor Akbar, the guz was the measure of length. The guz was widely used till the introduction of the metric system in India in 1956. During the British period, efforts were made to achieve uniformity in weights and measures. A compromise was reached in the system of measurements which continued till India's independence in 1947. After independence in 1947, it was realized that for fast industrial growth of the country, it would be necessary to establish a modern measurement system in the country. The Lok Sabha in April 1955 resolved: "This house is of the opinion that the Government of India should take necessary steps to introduce uniform weights and measures throughout the country based on metric system."[1]
## History of the National Physical Laboratory, India
The National Physical Laboratory, India, was one of the earliest national laboratories set up under the Council of Scientific & Industrial Research. Jawaharlal Nehru laid the foundation stone of NPL on 4 January 1947. Dr. K. S. Krishnan was the first Director of the laboratory. The main building of the laboratory was formally opened by former Deputy Prime Minister Sardar Vallabhbhai Patel on 21 January 1950. Former Prime Minister Indira Gandhi inaugurated the Silver Jubilee Celebration of the Laboratory on 23 December 1975.
NPL Charter:
The main aim of the laboratory is to strengthen and advance physics-based research and development for the overall development of science and technology in the country. In particular its objectives are:
To establish, maintain and improve continuously by research, for the benefit of the nation, National Standards of Measurements and to realize the Units based on the International System (under the subordinate legislation of the Weights and Measures Act 1956, reissued in 1988 under the 1976 Act).
To identify and conduct, after due consideration, research in areas of physics which are most appropriate to the needs of the nation and for the advancement of the field.
To assist industries, national and other agencies in their developmental tasks by precision measurements, calibration, development of devices, processes, and other allied problems related to physics.
To keep itself informed of and study critically the status of physics.
[Figures: traceability pyramid at NPL; newly established structures at the NPL campus]
## Maintenance of standards of measurements in India
Each modernized country, including India, has a National Metrological Institute (NMI), which maintains the standards of measurements. This responsibility has been given to the National Physical Laboratory, New Delhi.
### Metre
The standard unit of length, metre, is realized by employing a stabilized helium-neon laser as a source of light. Its frequency is measured experimentally. From this value of frequency and the internationally accepted value of the speed of light (299,792,458 m/s), the wavelength is determined using the relation:
$\lambda = \frac{c}{f}$
where $\lambda$ is the wavelength, $c$ the speed of light and $f$ the frequency.
The nominal value of the wavelength employed at NPL is 633 nanometres. By a sophisticated instrument, known as an optical interferometer, any length can be measured in terms of the wavelength of laser light.
The present level of uncertainty attained at NPL in length measurements is ±3 × 10−9. However, in most measurements, an uncertainty of ±1 × 10−6 is adequate.
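As an illustration of the relation above, the following sketch recovers the nominal 633 nm working wavelength from the defined speed of light; the laser frequency used is the approximate value for a stabilized helium-neon laser, not an NPL-published figure:

```python
C = 299_792_458          # speed of light in m/s (exact, by definition)
f_hene = 473.6e12        # approximate stabilized He-Ne laser frequency, Hz

wavelength = C / f_hene  # wavelength = speed of light / frequency
print(f"{wavelength * 1e9:.1f} nm")  # ~633.0 nm
```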
### Kilogramme
The Indian national standard of mass, kilogramme, is copy number 57 of the international prototype of the kilogram supplied by the International Bureau of Weights and Measures (BIPM: French – Bureau International des Poids et Mesures), Paris. This is a platinum-iridium cylinder whose mass is measured against the international prototype at BIPM. The NPL also maintains a group of transfer standard kilograms made of non-magnetic stainless steel and nickel-chromium alloy.
The uncertainty in mass measurements at NPL is ±4.6 × 10−9.
### Second
The national standard of time interval and frequency, the second, is maintained through four caesium atomic clocks. Time and frequency are the physical quantities which can be measured most accurately; therefore, attempts are made to link other physical quantities to time and frequency. The standard maintained at NPL has to be linked to different users. This process, known as dissemination, is carried out in a number of ways. For applications requiring low levels of uncertainty, there is a satellite based dissemination service, which utilizes the Indian national satellite, INSAT. Time is also disseminated through TV, radio, and special telephone services. The caesium atomic clocks maintained at NPL are linked to other such institutes all over the world through a set of global positioning satellites.
### Ampere
The unit of electric current, ampere, is realized at NPL by measuring the volt and the ohm separately.
The uncertainty in measurement of ampere is ± 1 × 10−6.
### Kelvin
The standard of temperature is based on the International Temperature Scale of 1990 (ITS-90). This is based on the temperatures assigned to several fixed points. One of the most fundamental of these is the triple point of water. At this temperature, ice, water and steam are at equilibrium with each other. This temperature has been assigned the value of 273.16 kelvins. This temperature can be realized, maintained and measured in the laboratory. At present, temperature standards maintained at NPL cover a range of 54 to 2,473 kelvins.
The uncertainty in its measure is ± 2.5 × 10−4.
### Candela
The unit of luminous intensity, candela, is realized by using an absolute radiometer. For practical work, a group of tungsten incandescent lamps are used.
The level of uncertainty is ± 1.3 × 10−2.
### Mole
Experimental work has been initiated to realize the mole, the SI unit for amount of substance.
The NPL does not maintain standards of measurements for ionizing radiations. This is the responsibility of the Bhabha Atomic Research Centre, Mumbai.
## Calibrator of weights and measures
The standards maintained at NPL are periodically compared with standards maintained at other National Metrological Institutes in the world as well as the BIPM in Paris. This exercise ensures that Indian national standards are equivalent to those of the rest of the world.
Any measurement made in a country should be directly or indirectly linked to the national standards of the country. For this purpose, a chain of laboratories is set up in different states of the country. The weights and measures used in daily life are tested in these laboratories and certified. It is the responsibility of the NPL to calibrate the measurement standards in these laboratories at different levels. In this manner, the measurements made in any part of the country are linked to the national standards and through them to the international standards.
The weights and balances used in local markets and other areas are expected to be certified by the Department of Weights and Measures of the local government. Working standards of these local departments should, in turn, be calibrated against the state level standards or any other laboratory which is entitled to do so. The state level laboratories are required to get their standards calibrated from the NPL at the national level which is equivalent to the international standards.
## Bharatiya Nirdeshak Dravya (BND) or Indian Reference Materials
Bharatiya Nirdeshak Dravya (BND) or Indian reference materials are reference materials developed by NPL which derive their traceability from National Standards.
## Research
NPL is also involved in research. One of the important research activities undertaken by NPL is to devise the chemical formula for the indelible ink which is being used in the Indian elections to prevent fraudulent voting. This ink, manufactured by the Mysore Paints and Varnish Limited is applied on the finger nail of the voter as an indicator that the voter has already cast his vote.
NPL also has a section working on the development of biosensors. Currently the Biomedical Instrumentation section is headed by Dr. R. K. Kotnala and primarily focuses on the development of sensors for cholesterol measurement and microfluidic-based biosensors. The section is also developing biosensors for uric acid detection.
## NPL's Contributions
### The Indelible Mark/Ink
During general elections, nearly 40 million people wear a CSIR mark on their fingers. The indelible ink used to mark the fingernail of a voter during general elections is a time-tested gift of CSIR to the spirit of democracy. Developed in 1952, it was first produced on campus. Subsequently, industry has been manufacturing the ink. It is also exported to Sri Lanka, Indonesia, Turkey and other democracies.
### Pristine Air-Quality Monitoring Station at Palampur
National Physical Laboratory (NPL) has established an atmospheric monitoring station in the campus of the Institute of Himalayan Bioresource Technology (IHBT) at Palampur (H.P.), at an altitude of 1391 m, for generating base data for atmospheric trace species and properties to serve as a reference for comparison with the polluted atmosphere elsewhere in India. At this station, NPL has installed a state-of-the-art air monitoring system, a greenhouse gas measurement system and a Raman lidar. A number of parameters like CO, NO, NO2, NH3, SO2, O3, PM, HC & BC, besides CO2 & CH4, are currently monitored at this station, which is also equipped with an automatic weather station (AWS) for measurement of weather parameters.[2][3]
### Gold Standard (BND-4201)
The BND-4201 is the first Indian reference material for gold of '9999' fineness (gold that is 99.99% pure, with impurities of only 100 parts per million).
## References
1. Indian units of measurement
2. "National Physical Laboratory(NPL)- CSIR dedicates the first "Pristine air-quality monitoring station at Palampur" to the Nation". pib.nic.in.
3. "CSIR-NPL launches India's First Pristine Air-Quality Monitoring Station at Palampur". MyGov Blogs. 25 March 2017. |
https://hal.inria.fr/hal-01194678 | # The height of random binary unlabelled trees
1 ALGORITHMS - Algorithms
Inria Paris-Rocquencourt
Abstract : This extended abstract is dedicated to the analysis of the height of non-plane unlabelled rooted binary trees. The height of such a tree chosen uniformly among those of size $n$ is proved to have a limiting theta distribution, both in a central and local sense. Moderate as well as large deviations estimates are also derived. The proofs rely on the analysis (in the complex plane) of generating functions associated with trees of bounded height.
Document type :
Conference papers
Cited literature [25 references]
Contributor: Coordination Episciences Iam
Submitted on : Monday, September 7, 2015 - 12:51:01 PM
Last modification on : Friday, May 25, 2018 - 12:02:05 PM
Long-term archiving on: : Tuesday, December 8, 2015 - 12:58:10 PM
### File
dmAI0106.pdf
Publisher files allowed on an open archive
### Identifiers
• HAL Id : hal-01194678, version 1
### Citation
Nicolas Broutin, Philippe Flajolet. The height of random binary unlabelled trees. Fifth Colloquium on Mathematics and Computer Science, 2008, Kiel, Germany. pp.121-134. ⟨hal-01194678⟩
https://physics.stackexchange.com/questions/444195/conceptual-physics-action-reaction-forces-and-acceleration-of-an-object-on-a-u | Conceptual Physics - Action Reaction Forces and Acceleration of an object on a uniform slope ramp
In this picture, you have a girl holding an apple. There is a normal force from her hand acting on the apple and there is the apple's weight pushing it down. Since the apple is not moving the two forces cancel each other out.
Picture:
This is what the textbook said:
"Since n is equal and opposite to W, we cannot say that n and W comprise an action-reaction pair. The reason is that action and reaction always act on different objects and here we see n and W both acting on the apple."
I thought that the hand pushes the apple and the apple pushes the hand so both forces AREN'T acting on the apple.
From my understanding of action-reaction pairs and the definition above, does it mean it will ALWAYS result in some kind of movement?
For example, this picture (the example of the car):
Secondly, could someone explain to me why the acceleration is constant on a uniform slope ramp? And what is a uniform slope ramp?
Picture:
Thank you very much.
The force of weight ($$W = mg$$) acts on the apple. Who is applying this force?
Well, the Earth is.
And remember Newton's third law:
When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.
Then if a body A applies a force on a body B, body B will apply a force on body A such that they have the same magnitude and opposite directions.
What is important here? Well, Newton's third law talks about two bodies, A and B in my example.
If the Earth applies a force ($$mg$$) on the apple, the apple applies a force on the Earth with the same magnitude and opposite directions. See, weight is a force between the Earth and the apple, therefore its reaction must be acting on the Earth!
The Earth applies a force $$mg$$ on the apple.
Therefore, the apple applies a force $$mg$$ on the Earth. (Yes, the force of gravity is also subject to Newton's third law. The apple attracts the Earth gravitationally just like the Earth attracts the apple).
The normal force $$N$$ exists because objects don't tend to pass through each other. This can be explained by electromagnetism, but for now, you just need to know that the contact between the hand and the apple causes electromagnetic repulsion which creates the normal force.
(In other words, the atoms in the hand repel the atoms in the apple, due to electromagnetic interaction).
Therefore, the hand applies a force (the normal force) on the apple, and the apple applies a force of reaction on your hand. That is why you 'feel' your hand being pushed when you hold something. You are applying a contact force on the object and the object applies a contact force on you!
And finally:
• A force and its reaction must be acting on different bodies. Remember A and B? Well, the apple and the Earth are different bodies, and so are the apple and the hand.
• There are two kinds of forces in classical mechanics: field forces and contact forces. Gravity is a field force, and therefore its reaction must also be a field force (of course, it is also gravity). The normal force is a contact force, and therefore its reaction must also be a contact force. The force and the reaction have the same nature. Gravity and the normal force certainly don't have the same nature, and they can never be an action-reaction pair.
It turns out that in your example, $$W$$ and $$N$$ have the same magnitude. But this is just due to them being the only forces acting on the apple. And the apple is in equilibrium! Therefore, Newton's second law, $$F = ma$$, guarantees that $$N = mg$$ so that the apple isn't accelerating and the net force is zero.
But this is just one case. It doesn't mean the normal and the weight will always have the same magnitude. They won't.
One example: the inclined plane.
You can see from the inclined plane's geometry that the resultant force on the object is $$mg\sin\theta$$, and it is constant. Therefore, by Newton's second law, the acceleration is also constant and equal to $$g\sin\theta$$.
And the normal force, always perpendicular to the contact surface, will have magnitude $$mg\cos\theta$$, and this is smaller than the weight's magnitude.
To finish things, a constant slope or constant inclination means that the inclined plane always (everywhere on its surface) has the same angle $$θ$$ of inclination relative to the ground. |
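To make the inclined-plane numbers concrete, here is a small illustrative sketch (the mass and angle are made-up values) computing the constant acceleration and the normal force:

```python
import math

g = 9.81                     # gravitational acceleration, m/s^2
m = 2.0                      # mass of the block, kg (illustrative)
theta = math.radians(30)     # incline angle; constant everywhere on a uniform ramp

a = g * math.sin(theta)      # acceleration along the ramp
N = m * g * math.cos(theta)  # normal force, perpendicular to the surface
W = m * g                    # weight

print(f"a = {a:.2f} m/s^2 (constant because theta is constant)")
print(f"N = {N:.2f} N < W = {W:.2f} N")
```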
https://www.newtonproject.ox.ac.uk/view/texts/diplomatic/THEM00146 | <1r>
Sr Isaac Newton was born on Xmas day in the 25th of Decr 1642. O. S. at Woolstrope in the parish of Colsterworth {sic} in the County of Lincoln about two \near three/ months after the death of his father Isaac Newton who was descended from the eldest branch of the family of Sr Iohn Newton of Lincolnshire, Bartt, & was Lord of the said Manor of Woolstrope wch appears by authentick deeds to have been near 200 years in the \his/ family |which came thither from Westby in the same County but originally from \Newtown in/ Lancashire| His mother was Hannah Ascough of the antient family of the Ascoughs of Market Overton in the County of Rutland, who \she/ was married {sic} a second time, to the Reverend Mr Benjamin Smith Rector of North Witham & had by him a son & two daughters from whom are descended the four nephews & \four/ nieces who inherit Sr Isaac's personal estate
<1v>
Sr Isaac was sent at \when/ 12 years old to the great school at Grantham where he {sic} \whilst a boy he shewed a great \strong/ disposition towards mechanicks &/ gave early tokens of an uncom̄on genius. After he had been there about four years & a half his father in law \mother/ took him home, intending he should apply himself to the management of his own estate as his ancestors had done for several generations before him but his genius could not brook un metier si bas \such an employment/ & the strong inclination he shewed for reading & inattention to every thing else induced his father \in law/ \mother/ to send him to Grantham school again for nine months & thence he went to Trinity College at Cambridge where he was admitted the 5th of Iune in 1660, under Mr \under Mr Benjamin/ Pulleyn – Bef He always informed himself before hand of the books his tutour intended to read, & when he came to the lectures found he knew more \of them/ than his tutour, the first books he read for that purpose were Sanderson's logick & Kepler's opticks, what put L
<2r>
Having lit on some books relating to Iudicial Astrology
Being, like Cassini, \A/ desire {sic} to know wether there was any thing in judicial Astrology put him \(as well as Cassini)/ first upon the Mathematicks, & finding he could not make a figure make no judgement of it till he could make a figure i.e. see how all the Planets bear at a certain time |he bought for that purpose an English Euclid with an Index at the end & only turned to two or 3 problems wch he wanted to| he soon \& immediately/ found out the emptiness of that science
A desire to know wether there was any thing in judicial astrology put him as it did Cassini upon studying Mathematicks, but he immediately \discovered he/ discovered the emptiness of that science – He bought an English Euclid with an Index at the end & only turned to two {sic} or three problems wch he wanted to make use of in casting a figure, & despised all the rest as a trifling book. |the moment he made a figure for wch purpose he made use of two or three problems in Euclid wch he turned to by means of an Index & never read the rest but despised it upon the whole as a trifling book| He despised Euclid as a trifling & vulgar \book/ & only turned from \by/ the Index to two or three problems wch he wanted to make use of to see wether there was any thing in
He at once Without the usual steps he went at once upon Des Cartes's Geometry & made himself Master of it \by dint of genius & application/ without \going throu the usual steps or/ the assistance of any other person, & it may with truth be said of him that {illeg}
<2v>
In 1664 he bought a prism to try {sic} some experiments upon Des Cartes's book of colours & immediately \soon/ found out his own Hypothesis & the erroneousness of Des Cartes's, about this time he began to haue the first hint of his method of fluxions & solved several difficult problems to & in the year 1665 when he was retired to his own estate on account of the Plague, he fell discovered his system of gravity \he took the first hint of it from seeing an {loose} apple fall from a tree/|As to the treatise he sent to the Royal Society|
In 1667 he was chosen fellow of Trinity College in \{illeg} In 1667 he was elected fellow of Trinity College/ & in 1669 Mathematical professor upon Dr Barrows resignation – In 1675 he had a particular dispensation from K. C. 2d to continue fellow without taking orders —–
In 1667 he was Elected fellow of Trinity College & in 1669 Dr Barrow resigned the Mathematical professorship to him —
|In 1671 he was Elected fellow of the Royal Society|
In 1675 he had a dispensation from K. C. the 2d to continue fellow without taking orders –
In 1687 he was chosen one of the Delegates to represent the University of Cambridge before the High Com̄ission Court to answer for the University's refusing to admitt Father Francis Master of Arts upon the K's mandamus without taking the oaths, & was a great means instrument in perswading his collegues to persist in the maintenance of their rights & priviledges – In 1688 he was chosen \by the University of Cambridge/ member of the Convention which \Parliament/ <3v> was called by the P. of Orange
In 1696 – the late E. of Halifax then Chancellour of the Exchequer that great Patron of the learned writt him a letter to Cambridge acquainting him the K. had he had prevailed wth the King to make him Warden of the Mint in wch office he was of great \did signal/ service in the great recoinage which happened immediately after — \he soon after quitted his professorship at Cambridge./ In 1699 – he was made Master & Worker of the Mint in wch he continued to the his death — |& behaved himself with an universal character of integrity & disinterestedness & had frequent opportunities of employing his skill in Chy numbers particularly in his table of Assays of foreign coins wch are printed in the book of coins lately printed by Dr Arbuthnott|
In 1701 \he made W. Whiston his deputy professor —/ of Mathematicks /& allowed him all the salary from that time thou he did not resign the professorship to him till 1703\ upon the choice of a new Part he was reelected member of Part for the University of Cambridge – In 1705 he stood again with the Earl of Godolphin \the Lord Treasurers only son/ \{sic} only son of the Lord High Treasurer Godolphin/ but was {sic} not chosen, after wch he was offered himself no more – The same year he was Knighted by the Queen at Cambridge
In 1703 he was Elected President of the Royal Society being & continued so to his death, |being above 23 years, he| was the first who was President for so long, & was neuer removed after once \never removed but/ continued President from their first Election to their {sic} death
<4r>
The Chronology is in the press & will be out before I hope before {yo} the 12th Novr – I will do my self the honour to send you one of the {sic} first that are printed
<5r>
He was highly honoured & respected in all reigns & under all administrations, even by those he opposed, for in every station he shewed an inflexible attachment to the cause of liberty, & our present happy establishment. Their present Majesties always shewed him particular marks of their favour & esteem, & often did him the honour to admitt him to their Royal presence for hours together. The Queen, whose great entertainment is hearing arguments concerning matters of Philosophy & Divinity, frequently desired to see him & always expressed great satisfaction in his conversation. She was graciously pleased to take part in the disputes he was engaged in during his life, & has shewn a great regard for every thing that concerned his honour & memory since his death. I must not omitt telling you, that I have often had the honour to hear Her Majesty say before the whole circle, that she kept <5v> the abstract of Chronology Sr Isaac gave her written in his own hand among her choicest treasures, & that she thought it a happiness to have lived at the same time, & have known so great a man. I conjure you, Sr to insert this in the Eloge because I am perswaded you can say nothing that will do him more honour, than such a com̄endation from a Queen, who is the Minerva of her age.
<7r>
Their present Majesties always shewed him particular marks of their favour & esteem, & often did him the honour to admitt him to their Royal presence for hours together. The Queen, whose great entertainment is hearing arguments upon matters of Divinity & Philosophy, frequently desired to see him & always expressed great satisfaction in his conversation. She was graciously pleased to take part in the disputes he was engaged in during his life, & has shewn a great regard for every thing that concerned his honour & memory since his death. I must not omitt telling you that I have often had the honour to hear her Majesty say before the whole circle that she kept the abstract of Chronology Sr Isaac gave her written in his own hand among her choicest treasures, <7v> & that she thought it a happiness to have lived at the same time & have known so great a man. I conjure you Sr to insert this in the Eloge because I am perswaded you can say nothing that will do him more honour than such a commendation from a Queen, who is the Minerva of her age.
<9r>
He lived at London ever since the year 1696 when he was made Warden of the Mint, no body ever lived with him but my wife who was \in the/ with him near twenty years, he always lived in a very handsome, generous manner thou without ostentation very \always/ hospitable, & upon \proper/ occasions gave splendid entertainments, he was \generous &/ charitable without bounds, I beleive n he used to say, that they who gave nothing away till they dyed, never gave, wch perhaps was one reason of his not making a will, I beleive no man of his circumstances ever gave away so much during his life time \in alms in encouraging ingenuity & learning & to his relations/, nor upon all occasions shewed a greater contempt of his own money, nor & frugality of that wch belonged to the publick or any society he was entrusted for – He refused pensions & additional employments that were offered him, & thou in all reigns & under all \the different/ administrations that have governed here during these last 30 years he \was/ always highly honoured & respected \even by those he opposed for/ thou \in all places where he had any thing to do in all stations/ he always shewed an inflexible attachment to the Cause of liberty
<9v>
He was so far from being elated with the extraordinary honours paid him by all mankind \modest & humble notwithstanding/ that he was too often \sometimes/ apt to think those who shewed him that respect \& applause/ wch was due to him look upon \take/ the applause wch was so deservedly paid him as in a quite contrary sense from what it was intended
He was of {sic} so \exceedingly affable to all/ mild & meek & of such a sweetness of temper that a melancholy {sic} melancholy story would often fetch tears from him, & he had the greatest abhorrence & detestation of any act of cruelty to man or beast, mercy to both being a darling topick he used to Dwell upon — Whilst his {math} He was
As to his sentiments of religion
So far from having any views He was exceed very temperate & sober in his diet thou without ever observing any rules or strict regimen, & was so far from having any vice that he knew none of the pla
As He very {free} He was certainly a firm beleiver of revealed religion wch appears by the many volumes he has left on that subject as well as by the exemplariness of his life & constantly frequented the divine service \according to the Church of England/, thou at the but his \opinion of the Xtian/ religion was not founded on so <10r> narrow a bottom as to confine it to this or that particular sect, nor his charity & morality so scanty as to allow of persecution for shew a coldness to those who differed \of another opinion/ in matters indifferent much less admitt of persecution of wch he always shewed the strongest abhorrence & detestation —
He was very temperate & sober in his diet but neuer observed any regimen \The greatest modesty & simplicity/ |a native modesty & simplicity a greatest modesty & {sic} simple simplicity appeared in all| appeared {sic} his express behaviour actions & expressions he had he shewed – He was very temperate & sober in his diet but never observed any regimen \& was very averse to taking of Physick/ he was blessed with a very happy \& vigorous/ constitution, & knew very little sickness he never used spectacles nor lost a \but one/ tooth to the day of his death – About {sic} years before he died he was troubled with an \Incon {sic} of bladder/ irretention of urine \from a weakness of sphincter/ upon wch Dr Mead advised him to leave off his chariot, & that continued upon him more or less according to the motion he used, about two years before he died he voided \without any pain/ a stone about the bigness of a pea broke |wch came away| in two pieces at seve \one/ at some days distance from the other — From the time of Soon after the indisposition abovementioned he left off dining abroad or in much company <10v> at home, & drank \had/ constantly for his breakfast some tea of orange chips & saffron prescribed by Dr Mead & bread & butter, & some broth of for supper some broth & at dinner eat not seldom \of meat/ above the wing of a chicken but of vegetables & fruit very & sweet meats very heartily wch agreed very well with him — At he had a violent cough upon wch my wife he was with much ado perswaded him to take a lodging at Kensignton {sic} where he had for the first time in his 84 year a fit of the gout \except one in his 80 year/ & found himself so much better that he kept the lodging till he died, it was visible that whilst he staied at Ken In the winter Decr \the winter/ 1725 – he told me, was very desirous & pressing to resign his employment to me, his indisposition disabling him from officiating himself, & he soon a \& his old deputy being confined by a dropsy/ & as it was an office of the greatest trust \confidence/ & exactness & I knew how uneasie he would be to entrust it with a stranger at I offered to act for him wch I did for about a twe a year before he died & made his mind <11r> so easie on that subject that he went but 3 or 4 ti thô he never failed being a he never went to the tower above 3 or 4 times afterwards & t \& then did not then act himself/ He was always so well at Kensington that wee took all methods to keep him there & \but/ thou I had eased him of his uneasie journeys to the Tower \wch was his only real call/ wee {sic} could not \by any means/ prevail with him not to come to town to stay not to come to town – \On Tuesday/ The last day of Febry 172$\frac{6}{7}$ |he| came to town in order to go to \a meeting at/ the Royal Society & on the 1st of March I thought I had not seen him better in many years & he was fully sensible of it himself & told me smiling that he had slept the Sunday before from 11 at night till 8 in the morning without waking, but his great fatigue \going to the Society \&/ making & receiving visits/ in town brought his old complaint upon him how & he was very ill of it on Friday the 3d of March but however went on Saturday \the 4th/ to Kensington where he continued ill, but he returned to Kensington where he continued ill Dr Mead & Cheselden immediately said it was the stone in his \there were symptoms of {illeg} stone {within} {illeg}/ his bladder & gaue no hop little hopes of him, it \the stone/ was probably stirred from the place where it lay quiet by his 
great <11v> motion in town, |there coming away \{illeg}/ matter, in his {illeg} wch {shook} that {illeg} {became an ulcer on wch} {illeg}| he seemed easier on Wednesday the 15th of March & gave us some hopes, but he grew worse & weaker & on Friday had a violent looseness on Saturday morning he seemed easier read all the news papers & was perfectly sensible held a \a tolerable/ long discourse with Dr Mead & had all his senses perfect, he had from time to time during his last return to Kensington violent fits of pain & thou the drops of sweat ran down from his face \with anguish/ would hardly cry out, more patience was never shewn by any mortal – From Saturday night at six a clock & all Sunday he lay insensible, & died on Monday the 20th at between one & two in the morning —– In the 17 days in wch he Sr was free from the most {illeg} pain \Torture/ above a quarter of an hour he never once groaned or {illeg} \uttered/ one peevish \{word}/ or {illeg} and ye only sign of impatience he showd was on Saturday evening {illeg} when his {illeg} <12r> was to ask often what a clock it was
His humillity was so great that he never despised any man for want of capacity but was shocked at bad morals, and want of due veneration to Religon {sic} was the only discours could make him rebuke his {acquaintance} & wch he wd not bear from those who were upon other accounts {men} of singular {merit}
He was never marryed His life was a continued series of {Labour} of Vertue of patience & all {illeg} Vertues wth out any mixture of Vice from wch he was pure & unspotted in thought word &
<13r>
He was the admiration not only of his own country men of the highest reach & capacity but of all foreigners who visited England and as the Young Nobillity who were going to {illeg} endeavour to be introduced in order to say when the{y} were {asked} after him they knew him {illeg} strangers of {illeg} try all possible ways of seeing him. Vi Senr Bianchini the Popes chamberlain declared he came from Ais la Chapelle on purpose |
http://en.wikipedia.org/wiki/SO(3) | Rotation group SO(3)
(Redirected from SO(3))
"SO(3)" redirects here. For its definition over an arbitrary field, see special orthogonal group.
In mechanics and geometry, the 3D rotation group is the group of all rotations about the origin of three-dimensional Euclidean space R3 under the operation of composition.[1] By definition, a rotation about the origin is a transformation that preserves the origin, Euclidean distance (so it is an isometry), and orientation (i.e. handedness of space). A distance-preserving transformation which reverses orientation is an improper rotation, that is a reflection or, in the general position, a rotoreflection. The origin in Euclidean space establishes a one-to-one correspondence between points and their coordinate vectors. Rotations about the origin can be thought of as magnitude-preserving linear transformations of Euclidean 3-dimensional vectors (whose vector space is also denoted as R3).
Composing two rotations results in another rotation; every rotation has a unique inverse rotation; and the identity map satisfies the definition of a rotation. Owing to the above properties (along with the associative property, which rotations obey), the set of all rotations is a group under composition. Moreover, the rotation group has a natural manifold structure for which the group operations are smooth; so it is in fact a Lie group. The rotation group is often denoted SO(3) (or, less ambiguously, SO(3, R)) for reasons explained below.
Length and angle
Besides just preserving length, rotations also preserve the angles between vectors. This follows from the fact that the standard dot product between two vectors u and v can be written purely in terms of length:
$\mathbf{u}\cdot\mathbf{v} = \tfrac{1}{2}\left(\|\mathbf{u}+\mathbf{v}\|^2 - \|\mathbf{u}\|^2 - \|\mathbf{v}\|^2\right).$
It follows that any length-preserving transformation in R3 preserves the dot product, and thus the angle between vectors. Rotations are often defined as linear transformations that preserve the inner product on R3, which is equivalent to requiring them to preserve length. See classical group for a treatment of this more general approach, where SO(3) appears as a special case.
Orthogonal and rotation matrices
Every rotation maps an orthonormal basis of R3 to another orthonormal basis. Like any linear transformation of finite-dimensional vector spaces, a rotation can always be represented by a matrix. Let R be a given rotation. With respect to the standard basis e1, e2, e3 of R3 the columns of R are given by (Re1, Re2, Re3). Since the standard basis is orthonormal, and since R preserves angles and length, the columns of R form another orthonormal basis. This orthonormality condition can be expressed in the form
$R^\mathsf{T}R = I,$
where R^T denotes the transpose of R and I is the 3 × 3 identity matrix. Matrices for which this property holds are called orthogonal matrices. The group of all 3 × 3 orthogonal matrices is denoted O(3), and consists of all proper and improper rotations.
In addition to preserving length, proper rotations must also preserve orientation. A matrix will preserve or reverse orientation according to whether the determinant of the matrix is positive or negative. For an orthogonal matrix R, note that det R^T = det R^−1 implies (det R)^2 = 1, so that det R = ±1. The subgroup of orthogonal matrices with determinant +1 is called the special orthogonal group, denoted SO(3).
Thus every rotation can be represented uniquely by an orthogonal matrix with unit determinant. Moreover, since composition of rotations corresponds to matrix multiplication, the rotation group is isomorphic to the special orthogonal group SO(3).
Improper rotations correspond to orthogonal matrices with determinant −1, and they do not form a group because the product of two improper rotations is a proper rotation.
Group structure
The rotation group is a group under function composition (or equivalently the product of linear transformations). It is a subgroup of the general linear group consisting of all invertible linear transformations of the real 3-space R3.[2]
Furthermore, the rotation group is nonabelian. That is, the order in which rotations are composed makes a difference. For example, a quarter turn around the positive x-axis followed by a quarter turn around the positive y-axis is a different rotation than the one obtained by first rotating around y and then x.
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Axis of rotation
Every nontrivial proper rotation in 3 dimensions fixes a unique 1-dimensional linear subspace of R3 which is called the axis of rotation (this is Euler's rotation theorem). Each such rotation acts as an ordinary 2-dimensional rotation in the plane orthogonal to this axis. Since every 2-dimensional rotation can be represented by an angle φ, an arbitrary 3-dimensional rotation can be specified by an axis of rotation together with an angle of rotation about this axis. (Technically, one needs to specify an orientation for the axis and whether the rotation is taken to be clockwise or counterclockwise with respect to this orientation).
For example, counterclockwise rotation about the positive z-axis by angle φ is given by
$R_z(\varphi) = \begin{bmatrix}\cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1\end{bmatrix}.$
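As a quick numerical sanity check of this matrix (an illustrative NumPy sketch, not part of the original text):

```python
import numpy as np

def R_z(phi: float) -> np.ndarray:
    """Counterclockwise rotation about the positive z-axis by angle phi."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R = R_z(0.7)
print(np.allclose(R.T @ R, np.eye(3)))    # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))  # True: determinant +1, so R lies in SO(3)
```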
Given a unit vector n in R3 and an angle φ, let R(φ, n) represent a counterclockwise rotation about the axis through n (with orientation determined by n). Then
• R(0, n) is the identity transformation for any n
• R(φ, n) = R(−φ, −n)
• R(π + φ, n) = R(π − φ, −n).
Using these properties one can show that any rotation can be represented by a unique angle φ in the range 0 ≤ φ ≤ π and a unit vector n such that
• n is arbitrary if φ = 0
• n is unique if 0 < φ < π
• n is unique up to a sign if φ = π (that is, the rotations R(π, ±n) are identical).
Topology
The Lie group SO(3) is diffeomorphic to the real projective space RP3.
Consider the solid ball in R3 of radius π (that is, all points of R3 of distance π or less from the origin). Given the above, for every point in this ball there is a rotation, with axis through the point and the origin, and rotation angle equal to the distance of the point from the origin. The identity rotation corresponds to the point at the center of the ball. Rotations through angles between 0 and −π correspond to points on the same axis at the same distance from the origin but on the opposite side of the origin. The one remaining issue is that the two rotations through π and through −π are the same. So we identify (or "glue together") antipodal points on the surface of the ball. After this identification, we arrive at a topological space homeomorphic to the rotation group.
Indeed, the ball with antipodal surface points identified is a smooth manifold, and this manifold is diffeomorphic to the rotation group. It is also diffeomorphic to the real 3-dimensional projective space RP3, so the latter can also serve as a topological model for the rotation group.
These identifications illustrate that SO(3) is connected but not simply connected. As to the latter, in the ball with antipodal surface points identified, consider the path running from the "north pole" straight through the interior down to the south pole. This is a closed loop, since the north pole and the south pole are identified. This loop cannot be shrunk to a point, since no matter how you deform the loop, the start and end point have to remain antipodal, or else the loop will "break open". In terms of rotations, this loop represents a continuous sequence of rotations about the z-axis starting and ending at the identity rotation (i.e. a series of rotations through an angle φ where φ runs from 0 to 2π).
Surprisingly, if you run through the path twice, i.e., run from north pole down to south pole, jump back to the north pole (using the fact that north and south poles are identified), and then again run from north pole down to south pole, so that φ runs from 0 to 4π, you get a closed loop which can be shrunk to a single point: first move the paths continuously to the ball's surface, still connecting north pole to south pole twice. The second half of the path can then be mirrored over to the antipodal side without changing the path at all. Now we have an ordinary closed loop on the surface of the ball, connecting the north pole to itself along a great circle. This circle can be shrunk to the north pole without problems. The Balinese plate trick and similar tricks demonstrate this practically.
The same argument can be performed in general, and it shows that the fundamental group of SO(3) is cyclic group of order 2. In physics applications, the non-triviality of the fundamental group allows for the existence of objects known as spinors, and is an important tool in the development of the spin-statistics theorem.
The universal cover of SO(3) is a Lie group called Spin(3). The group Spin(3) is isomorphic to the special unitary group SU(2); it is also diffeomorphic to the unit 3-sphere S3 and can be understood as the group of versors (quaternions with absolute value 1). The connection between quaternions and rotations, commonly exploited in computer graphics, is explained in quaternions and spatial rotations. The map from S3 onto SO(3) that identifies antipodal points of S3 is a surjective homomorphism of Lie groups, with kernel {±1}. Topologically, this map is a two-to-one covering map.
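The two-to-one nature of this covering map can be checked numerically: a unit quaternion q and its negative −q yield the same rotation matrix. A minimal sketch using the standard quaternion-to-matrix formula (my own illustration, not from the article):

```python
import numpy as np

def quat_to_matrix(q: np.ndarray) -> np.ndarray:
    """Rotation matrix of the unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

q = np.array([0.3, 0.5, -0.2, 0.6])
print(np.allclose(quat_to_matrix(q), quat_to_matrix(-q)))  # True: q and -q give the same rotation
```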
Connection between SO(3) and SU(2)
Stereographic projection from the sphere of radius 1/2 from the north pole (x, y, z) = (0, 0, 1/2) onto the plane M given by z = −1/2 coordinatized by (ξ, η), here shown in cross section.
The general reference for this section is Gelfand, Minlos & Shapiro (1963). The points P on the sphere S = {(x, y, z) ∈ ℝ³ : x² + y² + z² = 1/4} can, barring the north pole N, be put into one-to-one correspondence with points S(P) = P′ on the plane M defined by z = −1/2, see figure. The map S is called stereographic projection. Let the coordinates on M be (ξ, η). The line L passing through N and P can be written
$L = N + t(N - P) = (0,0,1/2) + t( (0,0,1/2) - (x, y, z) ), \quad t\in \mathbb{R}.$
Demanding that the z-coordinate equals −1/2, one finds t = 1/(z − 1/2), hence
$S:\mathbf{S} \rightarrow M; P \mapsto P'; (x,y,z) \mapsto (\xi, \eta) = \left(\frac{x}{\frac{1}{2} - z}, \frac{y}{\frac{1}{2} - z}\right) \equiv \zeta = \xi + i\eta,$
where, for later convenience, the plane M is identified with the complex plane ℂ.
$L = N + s(P'-N) = (0,0,\frac{1}{2}) + s\left( (\xi, \eta, -\frac{1}{2}) - (0,0,\frac{1}{2})\right),$
and demand x² + y² + z² = 1/4 to find s = 1/(1 + ξ² + η²) and thus
$S^{-1}:M \rightarrow \mathbf{S}; P' \mapsto P;(\xi, \eta) \mapsto (x,y,z) = \left(\frac{\xi}{1 + \xi^2 + \eta^2}, \frac{\eta}{1 + \xi^2 + \eta^2}, \frac{-1 + \xi^2 + \eta^2}{2 + 2\xi^2 + 2\eta^2}\right).$
If g ∈ SO(3) is a rotation, then it will take points on S to points on S by its standard action Πs(g) on the embedding space ℝ³. By composing this action with S one obtains a transformation S ∘ Πs(g) ∘ S⁻¹ of M, ζ = S(P) ↦ S(Πs(g)P) ≡ Πu(g)ζ = ζ′. Thus Πu(g) is a transformation of ℂ associated to the transformation Πs(g) of ℝ³.
It turns out that g ∈ SO(3) represented in this way by Πu(g) can be expressed as a matrix Πu(g) ∈ SU(2) (where the notation is recycled to use the same name for the matrix as for the transformation of it represents). To identify this matrix, consider first a rotation gφ about the z-axis through an angle φ,
\begin{align}x' &= x\cos \varphi - y \sin \varphi,\\ y' &= x\sin \varphi + y \cos \varphi,\\ z' &= z.\end{align}
Hence
$\zeta' = \frac{x' + iy'}{\frac{1}{2} - z'} = \frac{e^{i\varphi}(x + iy)}{\frac{1}{2} - z} = e^{i\varphi}\zeta = \frac{(\cos \varphi + i \sin \varphi)\zeta}{0\,\zeta + 1},$
which, unsurprisingly, is a rotation in the complex plane. In an analogous way, if gθ is a rotation about the x-axis through an angle θ, then
$w' = e^{i\theta}w, \quad w = \frac{y + iz}{\frac{1}{2} - x},$
which, after a little algebra, becomes
$\zeta' = \frac{\cos \frac{\theta}{2}\zeta +i\sin \frac{\theta}{2} }{i \sin\frac{\theta}{2}\zeta + \cos\frac{\theta}{2}}.$
These two rotations, gφ, gθ, thus correspond to bilinear transforms of ℝ² ≃ ℂ ≃ M, namely, they are examples of Möbius transformations. A general Möbius transformation is given by
$\zeta' = \frac{\alpha \zeta + \beta}{\gamma \zeta + \delta}, \quad \alpha\delta - \beta\gamma \ne 0.$
The rotations, gφ, gθ generate all of SO(3) and the composition rules of the Möbius transformations show that any composition of gφ, gθ translates to the corresponding composition of Möbius transformations. The Möbius transformations can be represented by matrices
$\left(\begin{matrix}\alpha & \beta\\ \gamma & \delta\end{matrix}\right), \quad \quad \alpha\delta - \beta\gamma = 1,$
since a common factor of α, β, γ, δ cancels. For the same reason, the matrix is not uniquely defined since multiplication by −I has no effect on either the determinant or the Möbius transformation. The composition law of Möbius transformations follows that of the corresponding matrices. The conclusion is that each Möbius transformation corresponds to two matrices g, −g ∈ SL(2, ℂ). Using this correspondence one may write
\begin{align}\Pi_u(g_\varphi) &= \Pi_u\left[\left(\begin{matrix} \cos \varphi & -\sin \varphi & 0\\ \sin \varphi & \cos \varphi & 0\\ 0 & 0 & 1 \end{matrix}\right)\right] = \pm \left(\begin{matrix} e^{i\frac{\varphi}{2}} & 0\\ 0 & e^{-i\frac{\varphi}{2}} \end{matrix}\right),\\ \Pi_u(g_\theta) &= \Pi_u\left[\left(\begin{matrix} 1 & 0 & 0\\ 0 & \cos \theta & -\sin \theta\\ 0 & \sin \theta & \cos \theta \end{matrix}\right)\right] = \pm \left(\begin{matrix} \cos\frac{\theta}{2} & i\sin\frac{\theta}{2}\\ i\sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{matrix}\right).\end{align}
These matrices are unitary and thus Πu(SO(3)) ⊂ SU(2) ⊂ SL(2, ℂ). In terms of Euler angles[nb 1] one finds for a general rotation
\begin{align}g(\varphi, \theta, \psi) &= g_\varphi g_\theta g_\psi = \left(\begin{matrix} \cos \varphi & -\sin \varphi & 0\\ \sin \varphi & \cos \varphi & 0\\ 0 & 0 & 1 \end{matrix}\right) \left(\begin{matrix} 1 & 0 & 0\\ 0 & \cos \theta & -\sin \theta\\ 0 & \sin \theta & \cos \theta \end{matrix}\right) \left(\begin{matrix} \cos \psi & -\sin \psi & 0\\ \sin \psi & \cos \psi & 0\\ 0 & 0 & 1 \end{matrix}\right)\\ &= \left(\begin{matrix} \cos\varphi\cos\psi - \cos\theta\sin\varphi\sin\psi & -\cos\varphi\sin\psi - \cos\theta\sin\varphi\cos\psi & \sin\varphi\sin\theta\\ \sin\varphi\cos\psi + \cos\theta\cos\varphi\sin\psi & -\sin\varphi\sin\psi + \cos\theta\cos\varphi\cos\psi & -\cos\varphi\sin\theta\\ \sin\psi\sin\theta & \cos\psi\sin\theta & \cos\theta \end{matrix}\right),\end{align}
(1)
one has[3]
\begin{align}\Pi_u(g(\varphi, \theta, \psi)) &= \pm \left(\begin{matrix} e^{i\frac{\varphi}{2}} & 0\\ 0 & e^{-i\frac{\varphi}{2}} \end{matrix}\right) \left(\begin{matrix} \cos\frac{\theta}{2} & i\sin\frac{\theta}{2}\\ i\sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{matrix}\right) \left(\begin{matrix} e^{i\frac{\psi}{2}} & 0\\ 0 & e^{-i\frac{\psi}{2}} \end{matrix}\right)\\ &= \pm \left(\begin{matrix} \cos\frac{\theta}{2}e^{i\frac{\varphi + \psi}{2}} & i\sin\frac{\theta}{2}e^{-i\frac{\psi - \varphi}{2}}\\ i\sin\frac{\theta}{2}e^{i\frac{\psi - \varphi}{2}} & \cos\frac{\theta}{2}e^{-i\frac{\varphi + \psi}{2}} \end{matrix}\right).\end{align}
(2)
For the converse, consider a general matrix
$\pm\Pi_u(g_{\alpha,\beta}) = \pm\left(\begin{matrix} \alpha & \beta\\ -\overline{\beta} & \overline{\alpha} \end{matrix}\right) \in \mathrm{SU}(2).$
Make the substitutions
\begin{align}\cos\frac{\theta}{2} &= |\alpha|,\quad \sin\frac{\theta}{2} = |\beta|, \quad (0 \le \theta \le \pi),\\ \frac{\varphi + \psi}{2} &= \arg \alpha, \quad \frac{\psi - \varphi}{2} = \arg \beta.\end{align}
With these substitutions, $\pm\Pi_u(g_{\alpha,\beta})$ assumes the form of the right-hand side (RHS) of (2), which corresponds under $\Pi_u$ to a matrix of the form of the RHS of (1) with the same φ, θ, ψ. In terms of the complex parameters α, β,
$g_{\alpha,\beta} = \left(\begin{matrix} \frac{1}{2}(\alpha^2 - \beta^2 + \overline{\alpha^2} - \overline{\beta^2}) & \frac{i}{2}(-\alpha^2 - \beta^2 + \overline{\alpha^2} + \overline{\beta^2}) & -\alpha\beta-\overline{\alpha}\overline{\beta}\\ \frac{i}{2}(\alpha^2 - \beta^2 - \overline{\alpha^2} + \overline{\beta^2}) & \frac{1}{2}(\alpha^2 + \beta^2 + \overline{\alpha^2} + \overline{\beta^2}) & -i(\alpha\beta-\overline{\alpha}\overline{\beta})\\ \alpha\overline{\beta} + \overline{\alpha}\beta & i(-\alpha\overline{\beta} + \overline{\alpha}\beta) & \alpha\overline{\alpha} - \beta\overline{\beta} \end{matrix}\right).$
To verify this, substitute for α, β the elements of the matrix on the RHS of (2). After some manipulation, the matrix assumes the form of the RHS of (1). It is clear from the explicit form in terms of Euler angles that the map p: SU(2) → SO(3), Π(±gα,β) ↦ gα,β just described is a smooth, 2:1 and onto group homomorphism. It is hence an explicit description of the universal covering map of SO(3) from the universal covering group SU(2).
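For concreteness, here is a small numerical check of this covering map (a sketch using NumPy; the Euler angles are arbitrary illustrative values, and only convention-independent properties are asserted: orthogonality, unit determinant, and the g, −g sign ambiguity):

```python
# A sketch checking the covering map SU(2) -> SO(3) numerically.
import numpy as np

def su2_from_euler(phi, theta, psi):
    """Pi_u(g(phi, theta, psi)) as in Eq. (2), one choice of sign."""
    a = np.array([[np.exp(1j * phi / 2), 0], [0, np.exp(-1j * phi / 2)]])
    b = np.array([[np.cos(theta / 2), 1j * np.sin(theta / 2)],
                  [1j * np.sin(theta / 2), np.cos(theta / 2)]])
    c = np.array([[np.exp(1j * psi / 2), 0], [0, np.exp(-1j * psi / 2)]])
    return a @ b @ c

def so3_from_su2(u):
    """The explicit g_{alpha,beta} formula from the text."""
    al, be = u[0, 0], u[0, 1]
    alc, bec = np.conj(al), np.conj(be)
    m = np.array([
        [0.5 * (al**2 - be**2 + alc**2 - bec**2),
         0.5j * (-al**2 - be**2 + alc**2 + bec**2),
         -al * be - alc * bec],
        [0.5j * (al**2 - be**2 - alc**2 + bec**2),
         0.5 * (al**2 + be**2 + alc**2 + bec**2),
         -1j * (al * be - alc * bec)],
        [al * bec + alc * be,
         1j * (-al * bec + alc * be),
         al * alc - be * bec]])
    return np.real(m)   # entries are real up to rounding

u = su2_from_euler(0.3, 1.1, -0.7)
r = so3_from_su2(u)
assert np.allclose(r @ r.T, np.eye(3))      # orthogonal
assert np.isclose(np.linalg.det(r), 1.0)    # proper rotation
assert np.allclose(so3_from_su2(-u), r)     # g and -g give the same rotation
```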
Lie algebra
Since SO(3) is a Lie subgroup of the general linear group GL(3), its Lie algebra can be identified with a Lie subalgebra of gl(3), the algebra of 3 × 3 matrices with the commutator given by
$[A,B] = AB - BA .$
The condition that a matrix A belong to SO(3) is that
(*) $AA^\mathsf{T} = I .$
If A(t) is a one-parameter subgroup of SO(3) parametrised by t, then differentiating (*) with respect to t gives
$A'(0) + A'(0)^\mathsf{T} = 0 ,$
and so the Lie algebra so(3) consists of all skew-symmetric 3 × 3 matrices.
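A quick numerical illustration of this fact (a sketch assuming SciPy is available for the matrix exponential; the vector below is arbitrary):

```python
# Exponentiating a skew-symmetric matrix yields an element of SO(3).
import numpy as np
from scipy.linalg import expm

def skew(w):
    """Map a vector w in R^3 to the corresponding element of so(3)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

A = skew([0.2, -0.5, 1.3])
assert np.allclose(A + A.T, 0)              # skew-symmetry
R = expm(A)
assert np.allclose(R @ R.T, np.eye(3))      # R satisfies R R^T = I
assert np.isclose(np.linalg.det(R), 1.0)    # and det R = 1
```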
Representations of rotations
We have seen that there are a variety of ways to represent rotations, among them orthogonal matrices, Euler angles, and the unit quaternions implicit in the SU(2) picture above.
Another method is to specify an arbitrary rotation by a sequence of rotations about some fixed axes.
See charts on SO(3) for further discussion.
Generalizations
The rotation group generalizes quite naturally to n-dimensional Euclidean space, Rn with its standard Euclidean structure. The group of all proper and improper rotations in n dimensions is called the orthogonal group O(n), and the subgroup of proper rotations is called the special orthogonal group SO(n), which is a Lie group of dimension n(n − 1)/2.
In special relativity, one works in a 4-dimensional vector space, known as Minkowski space rather than 3-dimensional Euclidean space. Unlike Euclidean space, Minkowski space has an inner product with an indefinite signature. However, one can still define generalized rotations which preserve this inner product. Such generalized rotations are known as Lorentz transformations and the group of all such transformations is called the Lorentz group.
The rotation group SO(3) can be described as a subgroup of E+(3), the Euclidean group of direct isometries of Euclidean R3. This larger group is the group of all motions of a rigid body: each of these is a combination of a rotation about an arbitrary axis and a translation along the axis, or put differently, a combination of an element of SO(3) and an arbitrary translation.
In general, the rotation group of an object is the symmetry group within the group of direct isometries; in other words, the intersection of the full symmetry group and the group of direct isometries. For chiral objects it is the same as the full symmetry group. |
https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/fluid/layers/nn/maxout_en.html | # maxout¶
paddle.fluid.layers.nn.maxout(x, groups, name=None, axis=1) [source]
MaxOut Operator.
Assume the input shape is (N, Ci, H, W) and the output shape is (N, Co, H, W). Then $Co = Ci / groups$ and the operator formula is as follows:
$y_{si+j} = \max_{k} x_{gsi + sk + j}, \qquad g = \text{groups}, \qquad s = \frac{\text{input.size}}{\text{num\_channels}}, \qquad 0 \le i < \frac{\text{num\_channels}}{\text{groups}}, \qquad 0 \le j < s, \qquad 0 \le k < g.$
Please refer to:
- Maxout Networks: http://www.jmlr.org/proceedings/papers/v28/goodfellow13.pdf
- Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks: https://arxiv.org/pdf/1312.6082v4.pdf
Args:
x (Variable): A 4-D Tensor with data type float32 or float64, in NCHW or NHWC format, where N is the batch size, C is the number of channels, and H and W are the height and width of the feature map.
groups (int): Specifies how many groups the input tensor will be split into along the channel dimension; the number of output channels is the number of input channels divided by groups.
axis (int, optional): Specifies the index of the channel dimension where maxout is performed. It should be 1 when the data format is NCHW, and -1 or 3 when the data format is NHWC. Default: 1.
name (str, optional): For detailed information, please refer to Name. Usually name does not need to be set and is None by default.
Returns:
Variable: A 4-D Tensor with the same data type and data format as the input Tensor.
Raises:
ValueError: If axis is not 1, -1 or 3.
ValueError: If the number of input channels is not divisible by groups.
Examples:
import paddle.fluid as fluid |
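The published example stops at the import; a minimal usage sketch follows (the tensor name and shape are illustrative, and the fluid.data placeholder API is assumed):

```python
import paddle.fluid as fluid

# NCHW input: a batch of 256-channel 32x32 feature maps (shape assumed).
data = fluid.data(name='data', shape=[None, 256, 32, 32], dtype='float32')
# With groups=2, the output has 256 / 2 = 128 channels: [None, 128, 32, 32].
out = fluid.layers.maxout(data, groups=2, axis=1)
```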
https://www.hepdata.net/record/102069 | Measurements of ${\mathrm{p}} {\mathrm{p}} \rightarrow {\mathrm{Z}} {\mathrm{Z}}$ production cross sections and constraints on anomalous triple gauge couplings at $\sqrt{s} = 13\,\text {TeV}$
The CMS collaboration
Eur.Phys.J.C 81 (2021) 200, 2021.
Abstract (data abstract)
The production of Z boson pairs in proton-proton (pp) collisions, pp →(Z/γ∗)(Z/γ∗) → 2l2l', where l, l' = e or µ, is studied at a center-of-mass energy of 13 TeV with the CMS detector at the CERN LHC. The data sample corresponds to an integrated luminosity of $137~\mathrm{fb}^{-1}$, collected during 2016–2018. The ZZ production cross section, $\sigma_{tot}$(pp → ZZ) = 17.4±0.3 (stat)±0.5 (syst)±0.4 (theo)±0.3 (lumi) pb, measured for events with two pairs of opposite-sign, same-flavor leptons produced in the mass region 60 < $m_{l^+l^−}$ < 120 GeV is consistent with standard model predictions. Differential cross sections are also measured and agree with theoretical predictions. The invariant mass distribution of the four-lepton system is used to set limits on anomalous ZZZ and ZZγ couplings. |
https://www.esaral.com/q/in-a-random-experiment-let-a-and-b-be-events-such-that-p-14304 | Deepak Scored 45->99%ile with Bounce Back Crack Course. You can do it too!
# In a random experiment, let A and B be events such that P(A or B) = 0.7, P(A and B) = 0.3 and $P(\bar{A})$ = 0.4. Find P(B)
Question:
In a random experiment, let A and B be events such that P(A or B) = 0.7, P(A and B) = 0.3 and $P(\bar{A})$ = 0.4. Find P(B).
Solution:
Given: $\mathrm{P}(\bar{A})=0.4$, $\mathrm{P}(\mathrm{A}$ or $\mathrm{B})=0.7$ and $\mathrm{P}(\mathrm{A}$ and $\mathrm{B})=0.3$
To find : P(B)
Formula used : $\mathrm{P}(\mathrm{A})=1-\mathrm{P}(\bar{A})$
P(A or B) = P(A) + P(B) - P(A and B)
We have $\mathrm{P}(\bar{A})=0.4$
$P(A)=1-0.4=0.6$
We get $P(A)=0.6$
Substituting in the above formula we get,
$0.7=0.6+P(B)-0.3$
$0.7=0.3+P(B)$
$0.7-0.3=P(B)$
$0.4=P(B)$
P(B) = 0.4 |
https://de.zxc.wiki/wiki/Scheitelwert | # Peak value
According to DIN 40110-1 ("AC quantities"), the peak value is the largest magnitude among the instantaneous values of an alternating signal, i.e. a periodic signal whose mean (equivalent) value is zero, e.g. an alternating voltage. For sinusoidal alternating signals, the peak value is called the amplitude.
[Figure: a periodic variable (above) and an alternating variable (below); 1 = maximum value, 2 = minimum value, 3 = peak-valley value, 4 = peak value, 5 = period duration]
## Determinations
For periodic quantities that are not necessarily alternating quantities, e.g. mixed current, DIN 40110-1 names the maximum value $\hat{y}$ and the minimum value $\check{y}$; if there are several maximum values within a period, the largest value is called the peak value. In the same standard, the distance between maximum and minimum, $\underset{{}^{\lor}}{\overset{{}_{\land}}{\!y}}$, is referred to as the oscillation width or peak-valley value (formerly peak-peak value).
In electrical engineering, the term peak value is used particularly frequently, e.g. for the peak value of the current $\hat{\imath}$ and the peak value of the voltage $\hat{u}$ (pronounced "i-hat" and "u-hat"). The notations $I_{\mathrm{s}}$ and $U_{\mathrm{s}}$ are also in use, as is $U_{\mathrm{p}}$, where, borrowed from English, p stands for peak. Such marks must always be appended to the formula symbol; according to DIN 1313, the unit symbol, e.g. V for volts, is in no way to be provided with a mark.
Measuring devices that do not record the course over time usually give the rms value .
## Typical functions with peak values
Alternating quantities and their peak values
Periodic functions meet the condition
$y(t + T) = y(t)$
with the period $T$. Alternating variables also meet the condition for the mean value
$\bar{y} = 0 \quad \text{or} \quad \int_{0}^{T} y(t)\,dt = 0$.
Examples that can be found in technology are:
• Square-wave function
$y(t) = \hat{y} \cdot \begin{cases} 1 & \text{if } 0 < t < \frac{T}{2} \\ -1 & \text{if } \frac{T}{2} < t < T \end{cases}$
• Triangle function
$y(t) = \hat{y} \cdot \begin{cases} \frac{4}{T}\left(t - \frac{1}{4}T\right) & \text{if } 0 \leq t \leq \frac{T}{2} \\ -\frac{4}{T}\left(t - \frac{3}{4}T\right) & \text{if } \frac{T}{2} \leq t \leq T \end{cases}$
• Sawtooth function
$y(t) = \hat{y} \cdot \frac{2}{T}\left(t - \frac{T}{2}\right) \qquad \text{if } 0 < t < T$
## Applications
The dielectric strength of capacitors must be rated according to the peak value, not the effective (rms) value. For a sinusoidal voltage it is calculated as
$U_{\mathrm{s}} = \sqrt{2} \cdot U_{\mathrm{eff}}$
However, capacitor ratings are often specified as rms values of an alternating voltage.
The power grid in Europe has an effective value of 230 V (or 400 V between the outer conductors) for end consumers. The peak value is
$230~\mathrm{V} \cdot \sqrt{2} \approx 325~\mathrm{V}$
A capacitor fed by a mains rectifier is charged almost to this voltage.
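A short numerical illustration of this relationship (a sketch in Python; the sampling grid is arbitrary):

```python
# Peak value vs. rms value for a 230 V (rms) 50 Hz sinusoid.
import numpy as np

t = np.linspace(0.0, 0.02, 100_000)                  # one 50 Hz period
u = 230.0 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)
print(np.max(np.abs(u)))            # peak value, about 325 V
print(np.sqrt(np.mean(u**2)))       # rms value, about 230 V
```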
The peak value of the voltage applied to a surge arrester or a suppressor diode at the peak current that can be diverted (e.g. 100 A) is called the protection level . The downstream electronics to be protected must at least withstand this peak value.
The peak values of audio signals (speech, singing, music) are often much higher than the rms value. Audio amplifiers must therefore have a high headroom in order not to cause distortion (clipping) at these peak values.
Many electronic components are specified differently with regard to their maximum parameters for single and repeated peak values. This applies, for example, to diodes , capacitors, inputs of analog and digital integrated circuits or MOSFETs .
Examples
• A 1N400x rectifier diode is suitable for an average current value of 1 A, but withstands a peak current of 30 A once (during an 8.3 ms half cycle on a 60 Hz network) and periodically well over 1 A peak current.
• Y interference suppression capacitors are designed for continuous operation at up to 250 V AC, but can withstand short overvoltage events of up to 5 kV.
• The PL500 electron tube has an average anode voltage of less than 1000 V, but can withstand a peak voltage of up to 7 kV for less than 18 µs and less than 22% of the period.
## Measurement
Simplified circuit for measuring the peak value
A precision rectifier can be used to measure the peak value; the DC voltage generated in this way is then displayed on a voltmeter. In the simplified circuit of a precision rectifier shown opposite, the AC voltage to be measured, $V_{\mathrm{in}}$, is rectified and feeds a capacitor whose voltage corresponds to the peak value after one period of the input voltage. The switch parallel to the capacitor serves to reset the circuit after the measurement.
Historically, glow lamps were also used to measure the peak value , as these have the property of only igniting at a certain voltage. The AC voltage to be measured is fed to the glow lamp via a capacitive voltage divider with a variable capacitor . The value of the variable capacitor is changed until the glow lamp ignites. If the ignition voltage of the glow lamp is known, the peak value of the AC voltage supplied can be determined via the division ratio of the capacitive voltage divider. |
https://math.stackexchange.com/questions/3043572/is-a-change-of-basis-matrix-equivalent-to-the-matrix-inverse-in-this-case | # Is a Change of Basis Matrix equivalent to the matrix inverse in this case?
I was looking at how to construct a change of basis so that when given a system of linear equations one could change the associated matrix into a diagonal matrix -- thus making the system easier to solve.
Assume a $$n \times n$$ matrix $$A$$ has all linearly independent columns that form a basis in the vector space. Then if we found a change of basis matrix using those linearly independent columns that should yield the identity matrix, correct? Each column in $$A$$ is now represented by itself as a basis element, so this should yield the identity matrix. We would also have to convert the $$n \times 1$$ solution vector to this new basis, and that would be done by multiplying by the change of basis matrix.
It would seem the process I've described is exactly how an inverse matrix would act when trying to solve a system of equations. In particular, the change of basis matrix would change from the standard basis to the basis that is the set of n-linearly independent columns in the original matrix A.
Thanks in advance for reviewing!
## 1 Answer
Indeed, multiplying by $$A^{-1}$$ changes basis from the standard basis to the basis consisting of columns of $$A$$. This useful fact is emphasized in Trefethen's popular textbook Numerical Linear Algebra.
If $$x = A^{-1} y$$, then $$y = Ax$$. This tells us that $$y$$ can be written as a linear combination of the columns of $$A$$, using the coefficients stored in $$x$$. In other words, $$x$$ is the coordinate vector of $$y$$ with respect to the basis consisting of columns of $$A$$. |
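A quick numerical illustration of this point (a sketch with NumPy; the matrix values are arbitrary):

```python
# x = A^{-1} y holds the coordinates of y in the basis of columns of A.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # columns form a basis of R^2
y = np.array([4.0, 7.0])
x = np.linalg.solve(A, y)         # coordinates of y in that basis
assert np.allclose(A @ x, y)      # y is the combination of columns of A with coefficients x
```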
https://socratic.org/questions/what-is-the-only-metal-that-is-liquid-at-room-temperature | # What is the only metal that is liquid at room temperature?
May 7, 2017
Mercury is the only metal that is liquid at R.T.
#### Explanation:
Mercury has a melting point of -39°C and a boiling point of 357°C, which makes it liquid at room temperature (around 20°C).
For this very reason, mercury ($Hg$) was long used in thermometers, because it expands with rising temperature (but stays liquid).
Nowadays we know that $Hg$ is harmful to us. Our bodies cannot get rid of it, so it accumulates, and it has a negative effect on neurons.
Because of this liquid characteristic, you will find most of the $H g$ pictures with mercury displayed as a liquid! |
http://www.mathjournals.org/jot/2012-067-002/2012-067-002-006.html | Previous issue · Next issue · Most recent issue · All issues
# Journal of Operator Theory
Volume 67, Issue 2, Spring 2012 pp. 369-378.
When strict singularity of operators coincides with weak compactness
Authors Pascal Lefevre
Author institution: Univ Lille Nord de France, U-Artois, Laboratoire de Mathematiques de Lens EA 2462, Federation CNRS Nord-Pas-de-Calais FR 2956, F-62 300 Lens, France
Summary: We prove that the notions of finite strict singularity, strict singularity and weak compactness coincide for operators defined on various spaces: the disc algebra, subspaces of $C(K)$ with reflexive annihilator and subspaces of the Morse-Transue-Orlicz space $M^{\psi_q}(\Omega,\mu)$ with $q>2$.
https://nemeth.aphtech.org/lesson1.6 | - Use 6 Dot Entry Switch to UEB Math Tutorial
# Lesson 1.6: Signs of Comparison
## Symbols
$>\phantom{\rule{.3em}{0ex}}\text{greater than}$
⠨⠂
$<\phantom{\rule{.3em}{0ex}}\text{less than}$
⠐⠅
## Explanation
### Review - Signs of Comparison
The greater than and less than symbols are signs of comparison just like the equals sign. The greater than and less than symbols are two-cell symbols. A blank space should be left before and after the greater than or less than symbol. If a number follows the greater than or less than symbol, it must be preceded by the numeric indicator.
### Inequalities
Inequalities show differences between numbers and indicate which one is larger or smaller. The symbol for greater than, dots four six dot two, and the symbol for less than, dot five dots one three, are used to show inequalities. The cell containing only one braille dot is always pointing to the value that is less than the other. For example, in "nine is greater than four", the single dot two is pointing toward the lesser value, four. Likewise, in "four is less than nine", the single dot five is also pointing toward the lesser value, four.
### Example 1
$9>4$
⠼⠔⠀⠨⠂⠀⠼⠲
### Example 2
$4<9$
⠼⠲⠀⠐⠅⠀⠼⠔
### Example 3
$19>10$
⠼⠂⠔⠀⠨⠂⠀⠼⠂⠴
### Example 4
$25<39$
⠼⠆⠢⠀⠐⠅⠀⠼⠒⠔ |
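A small transcription sketch of the rules in this lesson (the digit map covers only the digits appearing in the examples above; everything else is an assumption of this illustration):

```python
# Transcribe simple inequalities to Nemeth braille per this lesson's rules.
NUMERIC_INDICATOR = "⠼"
COMPARISON = {">": "⠨⠂", "<": "⠐⠅"}
DIGITS = {"0": "⠴", "1": "⠂", "2": "⠆", "3": "⠒",
          "4": "⠲", "5": "⠢", "9": "⠔"}   # only the digits used in the examples

def nemeth(expr):
    """Transcribe e.g. '19 > 10', leaving a space around the comparison sign."""
    left, op, right = expr.split()
    def number(n):
        # A number following the comparison sign also takes the numeric indicator.
        return NUMERIC_INDICATOR + "".join(DIGITS[d] for d in n)
    return number(left) + "⠀" + COMPARISON[op] + "⠀" + number(right)

print(nemeth("19 > 10"))   # ⠼⠂⠔⠀⠨⠂⠀⠼⠂⠴  (matches Example 3)
```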
https://www.cymath.com/blog/2017-03-27 | # Problem of the Week
## Updated at Mar 27, 2017 3:37 PM
How would you differentiate $$\cos{x}-\cot{x}$$?
Below is the solution.
$\frac{d}{dx}\left(\cos{x}-\cot{x}\right)$
1. Use the Sum Rule: $\frac{d}{dx}\left[f(x)+g(x)\right]=\left(\frac{d}{dx} f(x)\right)+\left(\frac{d}{dx} g(x)\right)$.
$\left(\frac{d}{dx} \cos{x}\right)-\left(\frac{d}{dx} \cot{x}\right)$
2. Use Trigonometric Differentiation: the derivative of $\cos{x}$ is $-\sin{x}$.
$-\sin{x}-\left(\frac{d}{dx} \cot{x}\right)$
3. Use Trigonometric Differentiation: the derivative of $\cot{x}$ is $-\csc^{2}x$.
$\csc^{2}x-\sin{x}$

Done: $\csc^{2}x-\sin{x}$
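The result can be double-checked symbolically (a sketch using SymPy):

```python
import sympy as sp

x = sp.symbols('x')
d = sp.diff(sp.cos(x) - sp.cot(x), x)
# The difference from the stated answer simplifies to zero,
# using the identity cot^2(x) + 1 = csc^2(x).
print(sp.simplify(d - (sp.csc(x)**2 - sp.sin(x))))   # 0
```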
https://sites.astro.caltech.edu/~aam/publication/pub0037/ | # The Environments of High Redshift QSOs
### Abstract
We present a sample of $i_{775}$-dropout candidates identified in five Hubble Advanced Camera for Surveys fields centered on Sloan Digital Sky Survey QSOs at redshift $z \sim 6$. Our fields are as deep as the Great Observatories Origins Deep Survey (GOODS) ACS images which are used as a reference field sample. We find them to be overdense in two fields, underdense in two fields, and as dense as the average density of GOODS in one field. The two excess fields show significantly different color distributions from that of GOODS at the 99% confidence level, strengthening the idea that the excess objects are indeed associated with the QSO. The distribution of $i_{775}$-dropout counts in the five fields is broader than that derived from GOODS at the 80% to 96% confidence level, depending on which selection criteria were adopted to identify $i_{775}$-dropouts; its width cannot be explained by cosmic variance alone. Thus, QSOs seem to affect their environments in complex ways. We suggest the picture where the highest redshift QSOs are located in very massive overdensities and are therefore surrounded by an overdensity of lower mass halos. Radiative feedback by the QSO can in some cases prevent halos from becoming galaxies, thereby generating in extreme cases an underdensity of galaxies. The presence of both enhancement and suppression is compatible with the expected differences between lines of sight at the end of reionization, as the presence of residual diffuse neutral hydrogen would provide young galaxies with shielding from the radiative effects of the QSO.
http://physicshelpforum.com/kinematics-dynamics/13580-how-calculate-current-across-two-points-please-help.html | Kinematics and Dynamics Kinematics and Dynamics Physics Help Forum
Aug 25th 2017, 08:41 AM #1 Junior Member Join Date: Aug 2017 Posts: 5 How to calculate the current across two points ? PLEASE HELP How to calculate the current across two points in this : So what i did is i used Ohms law to find the total current across the whole circuit which i computed to be 3 A and i am given the EMF of the battery and asked to neglect the internal resistance. So i am stuck at calculating the current across point A-B. Please help
Aug 25th 2017, 09:12 AM #2 Physics Team Join Date: Jun 2010 Location: Morristown, NJ USA Posts: 2,280 Next step is to calculate the currents through R1 and R2. You know that the voltage drop across these two resistors is the same (do you see what that is?), so you can solve for I1 and I2 using: $\displaystyle R_1 I_1 = R_2 I_2$ and: $\displaystyle I_1 + I_2 = I_T$ Can you take it from here? Last edited by ChipB; Aug 25th 2017 at 10:14 AM.
Aug 25th 2017, 12:28 PM #3
Junior Member
Join Date: Aug 2017
Posts: 5
Originally Posted by ChipB Next step is to calculate the currents through R1 and R2. You know that the voltage drop across these two resistors is the same (do you see what that is?), so you can solve for I1 and I2 using: $\displaystyle R_1 I_1 = R_2 I_2$ and: $\displaystyle I_1 + I_2 = I_T$ Can you take it from here?
I am not after $I_T$; what I am after is $I_{AB}$. Here is my work:

since $R_1$ is parallel to $R_2$, and $R_3$ is parallel to $R_4$, hence:

$R_{t_1} = \frac{R_1 R_2}{R_1 + R_2} = 2\,\Omega$

Applying the same for $R_3$ and $R_4$ we get that the resultant resistance is $2\,\Omega$ as well, hence:

$I = \frac{V}{R} = \frac{12}{2+2}$

which gives $I_T = 3\,\mathrm{A}$,

but what I am after is the current flowing between A and B.
Aug 25th 2017, 02:24 PM #4 Senior Member Join Date: Nov 2013 Location: New Zealand Posts: 534 I wouldn't try to solve this using resistor series / parallel formulas. That's making it difficult. Basically what ChipB was saying, except he was probably drawing the current arrows a little differently (see my diagram), in the case shown where I redrew your diagram to make it a little clearer. $\displaystyle R_1 I_1 = R_2 (I-I_1)$ $\displaystyle R_3 I_2 = R_4 (I - I_2)$ $\displaystyle R_2 (I-I_1) + R_4 (I - I_2) + E_{battery} = 0$ (Don't think you need the 3rd equation because that loop doesn't go through AB, but included for completeness.) Basically it's just relying on the fact that the voltages added up around a loop should always sum to zero. This gives you a set of linear equations which you can solve for I1 and I2 in terms of I. Last edited by kiwiheretic; Aug 25th 2017 at 03:37 PM. Reason: forgot one eqn
Aug 25th 2017, 03:21 PM #5
Junior Member
Join Date: Aug 2017
Posts: 5
Originally Posted by kiwiheretic I wouldn't try to solve this using resistor series / parallel formulas. That's making it difficult. Basically what ChipB was saying, except he was probably drawing the current arrows a little differently (see my diagram), in the case shown where I redrew your diagram to make it a little clearer. $\displaystyle R_1 I_1 = R_2 (I-I_1)$ $\displaystyle R_3 I_2 = R_4 (I - I_2)$ Basically it's just relying on the fact that the voltages added up around a loop should always sum to zero. This gives you a set of linear equations which you can solve for I1 and I2 in terms of I.
Thanks a lot for your answer, but how did you derive this formula? what law are you using?
Aug 25th 2017, 03:30 PM #6
Senior Member
Join Date: Nov 2013
Location: New Zealand
Posts: 534
Originally Posted by KingLee Thanks a lot for your answer, but how did you derive this formula? what law are you using?
I would say conservation of energy. The voltages (potential energy per unit charge) around a loop must sum to zero. Ie V1 + V2 +V3 + etc = 0. In my examples I knew that R1 I1 + R2 (I - I1) = 0 but I rearrange by putting one term on the other side as R1 I1 = R2 (I1 - I). That's why I draw currents as circular arrows to help me keep the signs of the current straight in my head. If you calculating the voltage in the direction of the current I treat it as positive but if you're going against the flow I treat the voltage drop as negative and that's how they all sum to zero (conservation of energy) and (work done by a conservative force around a loop must be zero) are the laws I think about.
Last edited by kiwiheretic; Aug 25th 2017 at 04:26 PM. Reason: wrong word used
Aug 25th 2017, 03:47 PM #7
Physics Team
Join Date: Apr 2009
Location: Boston's North Shore
Posts: 1,462
Originally Posted by kiwiheretic I would say conservation of energy.
which is formally known as Kirchhoff's voltage law
See: https://en.wikipedia.org/wiki/Kirchh..._law_.28KVL.29
Aug 25th 2017, 04:49 PM #8
Junior Member
Join Date: Aug 2017
Posts: 5
Originally Posted by kiwiheretic I would say conservation of energy. The voltages (potential energy per unit charge) around a loop must sum to zero. Ie V1 + V2 +V3 + etc = 0. In my examples I knew that R1 I1 + R2 (I - I1) = 0 but I rearrange by putting one term on the other side as R1 I1 = R2 (I1 - I). That's why I draw currents as circular arrows to help me keep the signs of the current straight in my head. If you calculating the voltage in the direction of the current I treat it as positive but if you're going against the flow I treat the voltage drop as negative and that's how they all sum to zero (conservation of energy) and (work done by a conservative force around a loop must be zero) are the laws I think about.
Would please show how you work it all out, with some details about everything you use, please, i have been sitting with this question for a week now, i read the book over 10 times, it just wont enter my head
Aug 25th 2017, 05:24 PM #9
Senior Member
Join Date: Nov 2013
Location: New Zealand
Posts: 534
Originally Posted by KingLee Would please show how you work it all out, with some details about everything you use, please, i have been sitting with this question for a week now, i read the book over 10 times, it just wont enter my head
It's basically just linear algebra. I'll write all the equations out for you that's a little easier to look at:
$\displaystyle R_1 I_1 - R_2 I + R_2 I_1 = -R_2 I + (R_1 + R_2) I_1 = 0$
$\displaystyle R_3 I_2 - R_4 I + R_4 I_2 = -R_4 I + (R_3 +R_4) I_2 = 0$
$\displaystyle R_2 (I-I_1) + R_4 (I - I_2) + V_{battery} = (R_2 + R_4) I - R_2 I_1 - R_4 I_2 = -V_{battery}$
In terms of linear algebra and matrices this is just:
$\displaystyle \begin{bmatrix} -R_2 & (R_1+R_2) & 0 \\ -R_4 & 0 & (R_3 + R_4) \\ (R_2 + R_4) & -R_2 & -R_4 \end{bmatrix} \begin{bmatrix} I \\ I_1 \\ I_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ -V_{battery} \end{bmatrix}$
Once you solve this matrix equation you will have I, I1 and I2 in terms of R1, R2, R3, R4 and the battery voltage. Then the current through AB is just I1 - I2 (or I2 - I1 depending on sign).
Someone should probably check my algebra in case I messed up signs somewhere and you should probably get used to doing that yourself anyway.
In this case it's probably just as easy to solve via substitution without all the matrix hullabaloo but you get the idea.
Last edited by kiwiheretic; Aug 25th 2017 at 05:35 PM. Reason: signs
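For completeness, the 3x3 system can be solved numerically; a sketch follows (the resistor and battery values are made up for illustration, since the thread's actual values are only in the attached image):

```python
import numpy as np

R1, R2, R3, R4 = 2.0, 2.0, 2.0, 2.0    # illustrative values only
V = 12.0
M = np.array([[-R2,      R1 + R2,  0.0    ],
              [-R4,      0.0,      R3 + R4],
              [ R2 + R4, -R2,      -R4    ]])
b = np.array([0.0, 0.0, -V])
I, I1, I2 = np.linalg.solve(M, b)
# Signs follow the chosen loop directions; with equal resistors the
# bridge is balanced and no current flows between A and B.
print("I_AB =", I1 - I2)   # 0.0 for these values
```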
https://open.kattis.com/contests/ao629w/problems/stringmultimatching | Hide
# Problem EString Multimatching
## Input
The input consists of at most ten test cases. Each test case begins with an integer $n$ on a line of its own, indicating the number of patterns. Then follow $n$ lines, each containing a non-empty pattern. The total length of all patterns in a test case is no more than $100\, 000$. Then comes a line containing a non-empty text (of length at most $200\, 000$). Input is terminated by end-of-file.
## Output
For each test case, output $n$ lines, where the $i$’th line contains the positions of all the occurrences of the $i$’th pattern in text, from first to last, separated by a single space.
Sample Input 1

2
p
pup
Popup
2
You
peek a boo
you speek a bootiful language
4
anas
ana
an
a
bananananaspaj

Sample Output 1

2 4
2

5
7
1 3 5 7
1 3 5 7
1 3 5 7 9 12

(The third output line, for the pattern "You", is empty: matching is case sensitive, so "You" does not occur in the text.)
CPU Time limit 3 seconds
Memory limit 1024 MB
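For reference, a straightforward Python sketch of the input/output handling (it scans the text with str.find once per pattern, which is simple but far too slow for the stated limits; a passing solution needs something like Aho-Corasick or a suffix automaton):

```python
import sys

def occurrences(pattern, text):
    """All (possibly overlapping) start positions of pattern in text."""
    hits, i = [], text.find(pattern)
    while i != -1:
        hits.append(i)
        i = text.find(pattern, i + 1)
    return hits

lines = sys.stdin.read().split("\n")
pos, out = 0, []
while pos < len(lines) and lines[pos].strip():
    n = int(lines[pos]); pos += 1
    patterns = lines[pos:pos + n]; pos += n
    text = lines[pos]; pos += 1
    for p in patterns:
        out.append(" ".join(map(str, occurrences(p, text))))
print("\n".join(out))
```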
https://kb.osu.edu/dspace/handle/1811/18743 | # FOURIER-TRANSFORM FAR-INFRARED SPECTRA OF $^{13}CD_{3}OH$ IN THE 10 TO $800 cm^{-1}$ RANGE: RITZ AND GLOBAL ANALYSIS
Title: FOURIER-TRANSFORM FAR-INFRARED SPECTRA OF $^{13}CD_{3}OH$ IN THE 10 TO $800 cm^{-1}$ RANGE: RITZ AND GLOBAL ANALYSIS
Creators: Xu, Li-Hong; Lees, R. M.; Moruzzi, Giovanni; Johns, J. W. C.; Winnewisser, B. P.; Winnewisser, M.
Issue Date: 1998
Publisher: Ohio State University
Abstract: The FIR spectrum of $^{13}CD_{3}OH$ has been recorded through 10 to $800 cm^{-1}$ on two high-resolution FT instruments, one at NRC, Ottawa, and the other in Giessen. The subband analyses for the first three torsional levels (i.e. $\nu_{1}=0,1,2$) have largely been previously published. The present report goes beyond these to the more complete Ritz energy level analysis which reduces previously ungrouped families of transitions down to the minimum number of separate entities. Ideally, one would aim for a family number of unity for each of the distinct A/E torsional symmetry species. In the present work, we have compiled the following data statistics and degrees of reduction for the $^{13}CD_{3}OH$ FTFIR spectra: A: 9679 lines, 3486 levels, 106 sequences, 4 families; E: 15582 lines, 5096 levels, 196 sequences, 4 families (note the data listed here contain several vibrational bands as well). One of the motivations in studying different isotopomers of methanol is to obtain information on the mass-dependence of the molecular parameters. With this in mind, global fitting is underway for $^{13}CD_{3}OH$ for the first two torsional levels $(v_{t}=0,1)$ up to $J_{max}=20$. The data range chosen here is consistent with previous global fits for other methanol isotopomers in order to permit ready inter-comparison of parameters. At the present time, we are working on fitting globally only to the FT data dealt with in this talk. In the future, however, as with the previously published global fits for $CH_{3}OH, ^{13}CH_{3}OH$ and $CD_{3}OH$, we will include all known measurements for microwave transitions in the data set. The authors would like to thank S. Klee, G. Mellau, and M. Noel for assistance in FT spectra recording and Anne C. Ridler and S.J. Menzies for assistance in the Ritz and global analyses.
Description: Author Institution: Department of Physics, University of New Brunswick; Dipartimento di Fisica dell'Università di Pisa; Steacie Institute for Molecular Sciences, National Research Council of Canada; Physikalisch-Chemisches Institut der Justus-Liebig-Universität
URI: http://hdl.handle.net/1811/18743
Other Identifiers: 1998-FA-08
https://math.stackexchange.com/questions/4109598/integral-of-a-piecewise-given-function | # Integral of a piecewise given function
In the course of work, I deduced such an expression.
$$p(t)=\int_{-\infty}^{\infty}\begin{cases} 1/x_{max}, & 0\leq t < x\\ 0, & \text{otherwise} \end{cases}\,dx$$ where $$x \in [0,x_{max}]$$ ($$x_{max} = 10$$ for example),
$$t$$ - is variable.
I have built a graph that is calculated numerically using python.
import numpy as np
import matplotlib.pyplot as plt

x_max = 10
t = np.linspace(-x_max * 0.1, x_max * 1.1, 1000)
num_point = 1000
pre_p = np.zeros((num_point, t.size))
x = np.linspace(0, x_max, num_point)
# Evaluate the integrand 1/x_max * [0 <= t < x] on a grid of x values.
for i, this_x in enumerate(x):
    pre_p[i] = np.where((0 <= t) & (t < this_x), 1/x_max, 0)
# Integrate over x with the trapezoidal rule.
p = np.trapz(pre_p, x=x, axis=0)
plt.plot(t, p)
plt.show()
I intuitively figured out how this function should look in symbolic form.
$$p(t)=\begin{cases} 1-\frac{t}{x_{max}}, & 0\leq t \leq x_{max}\\ 0, & \text{otherwise} \end{cases}$$
But how can the integral be taken symbolically according to the rules of mathematics? The thing is, my function is piecewise. And the variable by which I integrate is right in the condition. It would be great to have links to literature or an article, thanks.
• Your notation of the map being integrated in the first integral is rather confusing. It will help if: (1) you define what $x_{max}, t_{max}$ are and (2) write the function $f(x,t)$ being integrated outside of the integral. Is it a function of two variables $t,x$? Apr 20 '21 at 12:47
• Seeing as the graph has no additional area when x=0, it can be done that $Area=\int_0^{10}1-\frac{x}{10} dx$ after finding the line with the points (0,1) and (10,0) as the endpoints to get Area=$(x-\frac{x^2}{20})|_0^{10}$=$10-\frac{10^2}{20}-0$=5 Apr 20 '21 at 12:48
• corrected the question. I am interested in taking the integral. Apr 20 '21 at 13:00
Define $$f(t,x) = \begin{cases} \frac 1{x_{\max}},& t\in[0,x]\\0,& t\notin [0,x]\end{cases}$$
Then $$p(t) = \int_0^{x_\max} f(t, x)\,dx$$ Where the limits are from $$0$$ to $$x_\max$$, not $$-\infty$$ to $$\infty$$, because you said yourself that $$x\in [0, x_\max]$$. Note that in your definition of $$p(t), x$$ is a dummy variable, not an actual part of the definition. Its only purpose is to make the notation work. You could switch to a different variable (other that $$t$$ and $$x_\max$$, which already have roles) without changing the meaning at all. So in saying $$x\in [0, x_\max]$$, you are just admitting that you goofed up the limits of the integration.
• $$t < 0$$ or $$t > x_\max$$. Then $$t\notin [0,x]$$ for all $$x$$ in the limits, so $$f(t,x) = 0$$ and $$\int_0^{x_\max} 0\,dx = 0$$
• $$t \in [0, x_\max]$$. Then \begin{align}p(t) &= \int_0^{x_\max} f(t, x)\,dx \\&= \int_0^t f(t, x)\,dx + \int_t^{x_\max} f(t, x)\,dx\\&=\int_0^t 0\,dx + \int_t^{x_\max} \dfrac 1{x_\max}\,dx\\&= 0 + \dfrac 1{x_\max}(x_\max -t)\\&=1 - \dfrac t{x_\max}\end{align}
Putting it together: $$p(t) = \begin{cases}0,& t < 0\\1 - \dfrac t{x_\max}, & 0 \le t \le x_\max\\0,& x_\max < t\end{cases}$$ |
https://tex.stackexchange.com/questions/469701/renewing-the-mintinline-command-to-disable-italics-or-any-other-alternative | renewing the \mintinline command to disable italics (or any other alternative)
For the minted environment, you can disable italic comments / preprocessor includes etc by using etoolbox and then doing \AtBeginEnvironment{minted}{\let\itshape\relax}.
Using \let\itshape\relax directly can work as expected
\documentclass[12pt]{article}
\usepackage[T1]{fontenc}
\usepackage{minted}
\begin{document}
Regular style: \mintinline[]{cpp}{#include <type_traits>}
Hacky style: {\let\itshape\relax\mintinline[]{cpp}{#include <type_traits>}}
Not italic afterward xD
\end{document}
which produces the expected non-italic version:
However, I want to create a way to basically inject {\let\itshape\relax before and } after the \mintinline calls. It's a complicated command, though, and I think I don't know what I don't know. This is what I believe was my most successful attempt, trying to follow this guide, I added this in the preamble:
\usepackage{letltxmacro}
\makeatletter
\LetLtxMacro{\NewMintinline}{\mintinline}
\let\OldMintinline\mintinline
% not even adding itshape relax, just trying to redefine...
\renewcommand{\mintinline}[2][\newdef]{\OldMintinline[{#1}]{#2}{#3}}
\makeatother
I tried many different variants, but at the end of the day I'm totally lost. I think it's supposed to be 2 arguments, because \mintinline[optional]{lexer}{code} gives lexer and code, but maybe one of those is optional?
Is it possible to in the preamble somehow do some magic to \let\itshape\relax for all the \mintinline? I understand that I could transform every \mintinline in the code to be {\let\itshape\relax\mintinline[]}...but this is less than ideal ;)
Thanks!
Inject the instruction in \mintinline:
\documentclass[12pt]{article}
\usepackage[T1]{fontenc}
\usepackage{minted}
\usepackage{xpatch}
\xpatchcmd{\mintinline}{\begingroup}{\begingroup\let\itshape\relax}{}{}
\begin{document}
Hacky style: \mintinline[]{cpp}{#include <type_traits>}
Not italic afterward xD
\end{document}
Use a similar idea for minted:
\documentclass[12pt]{article}
\usepackage[T1]{fontenc}
\usepackage{minted}
\usepackage{xpatch}
\xpatchcmd{\mintinline}{\begingroup}{\begingroup\let\itshape\relax}{}{}
\xpatchcmd{\minted}{\VerbatimEnvironment}{\VerbatimEnvironment\let\itshape\relax}{}{}
\begin{document}
Hacky style: \mintinline[]{cpp}{#include <type_traits>}
\begin{minted}{cpp}
#include <type_traits>
\end{minted}
\end{document}
• Wow, awesome -- thanks! Is there a reason why you suggest this over \AtBeginEnvironment for minted? Or is it just because we're already bringing it in for \mintinline? – svenevs Jan 11 at 14:42
• @svenevs The latter. – egreg Jan 11 at 15:50 |
https://daivietmedia.vn/san-diego-kfkwdx/1e29b3-neutron-bombardment-equation | Catalogs Like Harriet Carter, How To Write A Personal Profile, Alpha College Of Engineering Biomedical, The Case Of The Greek Goddess, Baker Mckenzie Legal Cheek, Death Valley Weather Record, Dianette Vs Yasmin, Honeywell Salary For Freshers, " /> Catalogs Like Harriet Carter, How To Write A Personal Profile, Alpha College Of Engineering Biomedical, The Case Of The Greek Goddess, Baker Mckenzie Legal Cheek, Death Valley Weather Record, Dianette Vs Yasmin, Honeywell Salary For Freshers, " />
# Neutron bombardment equation

Write a balanced equation for each of the following nuclear reactions:

- the production of 17O from 14N by α particle bombardment
- the production of 14C from 14N by neutron bombardment
- the production of 233Th from 232Th by neutron bombardment
- the production of 239U from 238U by ${}_{1}^{2}\mathrm{H}$ bombardment

f) neutron bombardment of zirconium-99: ${}_{40}^{99}\mathrm{Zr} + {}_{0}^{1}\mathrm{n} \rightarrow {}_{41}^{100}\mathrm{Nb} + {}_{-1}^{0}\beta$

g) thorium-230 decays to a radium isotope: ${}_{90}^{230}\mathrm{Th} \rightarrow {}_{88}^{226}\mathrm{Ra} + {}_{2}^{4}\alpha$

h) nitrogen-13 undergoes beta decay: ${}_{7}^{13}\mathrm{N} \rightarrow {}_{8}^{13}\mathrm{O} + {}_{-1}^{0}\beta$

i) iodine-131 undergoes beta decay: ${}_{53}^{131}\mathrm{I} \rightarrow {}_{54}^{131}\mathrm{Xe} + {}_{-1}^{0}\beta$

Beta emission is equivalent to the conversion of a neutron to a proton: ${}_{0}^{1}\mathrm{n} \rightarrow {}_{1}^{1}\mathrm{p} + {}_{-1}^{0}\mathrm{e}$.

Positron emission is the emission of a positron (β⁺, or ${}_{+1}^{0}\mathrm{e}$) from an unstable nucleus, equivalent to the conversion of a proton to a neutron: ${}_{1}^{1}\mathrm{p} \rightarrow {}_{0}^{1}\mathrm{n} + {}_{+1}^{0}\mathrm{e}$. Example: ${}_{43}^{95}\mathrm{Tc} \rightarrow {}_{42}^{95}\mathrm{Mo} + {}_{+1}^{0}\mathrm{e}$; likewise ${}_{21}^{43}\mathrm{Sc} \rightarrow {}_{20}^{43}\mathrm{Ca} + {}_{+1}^{0}\mathrm{e}$.

Further exercises from the page:

- In a fission reaction, uranium-235 bombarded with a neutron produces strontium-94, another small nucleus, and 3 neutrons; when the U-235 nucleus is struck with a neutron, the Zn-72 and Sm-160 nuclei can also be produced along with some neutrons. Write the complete equation for the fission reaction.
- Write a nuclear equation for the fission of uranium-235 by neutron bombardment to form antimony-133, three neutrons, and one other isotope.
- Write the balanced nuclear equation for the alpha particle bombardment of ${}_{94}^{239}\mathrm{Pu}$; one of the reaction products is a neutron.
- Write the balanced nuclear equation for the induced transmutation of aluminum-27 into sodium-24 by neutron bombardment.

(In an operating nuclear reactor, neutrons are being produced in the fission process.)
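Each of these equations can be bookkeeping-checked in a few lines (a sketch; particles are written as (mass number A, atomic number Z) pairs):

```python
# In a balanced nuclear equation, the mass numbers (A) and atomic
# numbers (Z) must sum to the same totals on both sides.
def balanced(lhs, rhs):
    return (sum(a for a, _ in lhs) == sum(a for a, _ in rhs) and
            sum(z for _, z in lhs) == sum(z for _, z in rhs))

Th230, Ra226, alpha = (230, 90), (226, 88), (4, 2)
Zr99, n, Nb100, beta = (99, 40), (1, 0), (100, 41), (0, -1)

print(balanced([Th230], [Ra226, alpha]))    # True
print(balanced([Zr99, n], [Nb100, beta]))   # True
```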
https://cs.stackexchange.com/questions/109707/subset-with-modified-condition-is-it-still-np-complete?noredirect=1 | # Subset with modified condition, is it still NP-complete? [closed]
So I know the conditions required for a problem to be NP-Complete is that it has to lie within NP and has to be NP-hard.
The given problem I have is subset sum.
However, the conditions have been change to sum ≤ M and sum ≥ M from sum = M. To be more specific:
1. "If we ask if there is a subset with sum ≤ M, is the problem still NP- Complete?"
2. "If we ask if there is a subset with sum ≥ M, is the problem still NP- Complete?"
My initial reaction is that the two problems are no longer NP-complete since they can both be solved within polynomial time.
1. Check each element and see if there exists at least one smaller than M.
2. Add all positive integers and see if the sum of all elements is larger than M.
Since it isn't NP Hard, it cannot therefore be NP-complete.
Am I thinking/approaching this correctly?
• If P=NP then any problem in P (other than the empty language and the complete language) is NP-complete. – Yuval Filmus May 22 at 15:05
• Unfortunately, "Am I thinking/approaching this correctly?" is not a good fit for our question format (see, e.g., meta discussions meta discussions here and here). You seem to already have proven your version of subset sum is in P. Are there any conceptual questions we could help you with? If so, please include these in the question text. – dkaeae May 22 at 15:29
• @dkaeae, I understand what you mean. It is a bad way to ask a question, but I am not looking for an answer. I am seeking confirmation since this is still new for me. I guess the question itself stems from insecurities. – red31 May 23 at 9:35
• @red31 If you are not looking for a concrete answer, then this is probably not the right place to ask the question. – Discrete lizard May 23 at 16:08
What precisely are the problems? I may be missing something (and cannot do comments yet). Are they
(1) Given a set $$A \subseteq \mathbb{Z}$$ of $$n$$ elements, does there exist some subset $$S \subseteq A$$ with $$\sum_{x \in S} x \le M$$?

(2) Given a set $$A \subseteq \mathbb{Z}$$ of $$n$$ elements, does there exist some subset $$S \subseteq A$$ with $$\sum_{x \in S} x \ge M$$?

If so, these problems seem clearly in $$\mathbf{P}$$, basically by the reasoning you described -- we just want the minimum/maximum possible subset sum, and then we compare that with $$M$$. Assuming empty $$S$$ is allowed (see the sketch after this list):
• for (1), add up all the negative numbers in $$A$$ and see if it's $$\le M$$. If Yes, then return yes. If not, return No.
• for (2), add up all the positive numbers in $$A$$ and see if it's $$\ge M$$.
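A few lines of Python make the two procedures concrete (assuming the empty subset is allowed):

```python
def subset_sum_le(A, M):
    # Minimum achievable subset sum: take exactly the negative elements.
    return sum(x for x in A if x < 0) <= M

def subset_sum_ge(A, M):
    # Maximum achievable subset sum: take exactly the positive elements.
    return sum(x for x in A if x > 0) >= M

# The {1, 3}, M = 2 example: both variants answer Yes,
# even though no subset sums to exactly 2.
print(subset_sum_le({1, 3}, 2), subset_sum_ge({1, 3}, 2))   # True True
```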
Again I may be missing something, but it seems like the other answer's reduction might not be addressing the possibility that $$SS_\le$$ and $$SS_\ge$$ would return yes based on different sets? Like consider input $$A = \{1, 3\}$$ and $$M=2$$.
• I have updated the question to be more clear. However, I must admit I am probably more confused now then I was before. Especially after the response indicating that it is in fact still NP-Complete. – red31 May 23 at 9:34
• @red31 I don't see any response indicating it is still $NP$-complete. It's just that your reasoning, that being poly-time solvable implies not being $NP$-hard, is incorrect. That reasoning is only valid if $P\not = NP$, something which is likely suspected, but not known to be true. You cannot say the problem is not $NP$-hard, you can only say that it is not $NP$-hard unless $P=NP$. – Tom van der Zanden May 23 at 9:56
• @red31 None. It's impossible to prove it's not NP-complete without proving $P\not = NP$. – Tom van der Zanden May 23 at 10:13
• @red31 ok reading over, yeah your reasoning (and mine) is correct. What's your remaining confusion? The other answer proposed a reduction in which, to solve the $=M$ version, you query the $\le M$ version twice -- once to see if there's a subset with sum $\le M$, and another to see if there's a subset with sum $\ge M$. Those two queries can be done. But the problem is that it's easily possible for there to be a "YES" answer to the $\le M$ version and the $\ge M$ version without a "YES" answer for the $=M$ version. The reduction does no work. See the $\{1, 3\}$ and $M=2$ case for example. – xmq May 23 at 22:57
• @red31 And yeah as Tom said, if $\mathbf{P}$ were to equal $\mathbf{NP}$, then all nontrivial $\mathbf{P}$ languages would be $\mathbf{NP}$-complete. So you need to assume $\mathbf{P} \neq \mathbf{NP}$ to claim that this $\mathbf{P}$ problem is not $\mathbf{NP}$-complete. [ref e.g. cs.stackexchange.com/questions/35128/… ] – xmq May 23 at 23:03 |
https://www.groundai.com/project/stochastic-routing-and-scheduling-policies-for-energy-harvesting-communication-networks/ | Stochastic Routing and Scheduling Policies for Energy Harvesting Communication Networks
# Stochastic Routing and Scheduling Policies for Energy Harvesting Communication Networks
## Abstract
In this paper, we study the joint routing-scheduling problem in energy harvesting communication networks. Our policies, which are based on stochastic subgradient methods on the dual domain, act as an energy harvesting variant of the stochastic family of backpresure algorithms. Specifically, we propose two policies: (i) the Stochastic Backpressure with Energy Harvesting (SBP-EH), in which a node’s routing-scheduling decisions are determined by the difference between the Lagrange multipliers associated to their queue stability constraints and their neighbors’; and (ii) the Stochastic Soft Backpressure with Energy Harvesting (SSBP-EH), an improved algorithm where the routing-scheduling decision is of a probabilistic nature. For both policies, we show that given sustainable data and energy arrival rates, the stability of the data queues over all network nodes is guaranteed. Numerical results corroborate the stability guarantees and illustrate the minimal gap in performance that our policies offer with respect to classical ones which work with an unlimited energy supply.
## 1 Introduction
Providing wireless devices with Energy Harvesting (EH) capabilities enables them to acquire energy from their surroundings. The sources from which to obtain such energy can be of a varied nature, with some of the most common being thermal, vibrational or solar sources [2]. Such ample variety of energy sources, coupled with recent hardware advancements, enables devices to acquire sufficient energy to power themselves. This, in turn, frees these devices from the constraints that traditional battery-only operation imposes. Nonetheless, the random and intermittent nature of this new energy supply calls for a new approach to the design of communication policies.
As a consequence, there is significant interest in the study of communication devices powered by energy harvesting. The scenarios of EH-aware communication studied in the literature are vast and range from throughput maximization [3], source-channel coding [8], estimation [12], simultaneous information and power transfer [15] and many others (see [19] for an overall review of current research efforts).
The appearance of multiple interconnected devices powered by energy harvesting results in communication networks formed by self-sustainable and perpetually communicating nodes. In such scenarios, there is the necessity of designing efficient routing and scheduling algorithms that explicitly take into account the energy harvesting process. In this sense, there have been some previous efforts in developing communication policies for these types of multi-hop networks. In general, the full characterization of the optimal transmission policies is a difficult problem, as optimal transmission policies are heavily coupled throughout the network. Under full non-causal knowledge of the energy harvesting process, the optimal transmission policies of a simpler two-hop network have been studied in [20]. A more realistic approach is the consideration of causal knowledge of the energy harvesting process. Under this assumption, the authors in [21] jointly optimize data compression and transmission tasks to obtain a close-to-optimal policy. In [22], the authors propose an EH-aware routing scheme that is asymptotically optimal with respect to the network size. The authors in [23] address the EH scheduling problem for both single-hop and multi-hop networks, and provide a joint admission control and routing policy. Also, in the same line, the authors in [24] propose a policy which improves on the multi-hop performance bounds of [23]. Overall, causal policies are typically designed under the assumption of independent and identically distributed (i.i.d.) or Markov energy harvesting and data arrival processes, and Lyapunov optimization techniques are used to derive their queue stability results.
In this paper, we study the problem of jointly routing and scheduling data packets in an energy harvesting communication network. We start by introducing the system model in Section 2. We consider a communication network where each node independently generates traffic for delivery to a specific destination and collaborates with the other nodes in the network to ensure the delivery of all data packets. In this way, each node decides the next suitable hop for each packet in its queue (routing), and when to transmit it (scheduling). The solution to this problem, when the nodes are not EH-powered, is given by the backpressure (BP) algorithm [25]. When the nodes are powered by energy harvesting, the previous works [23] and [24] considered a similar problem, which consists in finding admission control and resource allocation policies that satisfy network stability and energy causality while attaining close-to-optimal performance. In our work, instead, the goal is to find stabilizing policies given the data rates. Also, while the previous work [23] requires data and energy arrival processes to be i.i.d. or Markov, we only require them to be ergodic, which is a weaker requirement. Furthermore, our approach to the problem is also markedly different. While the works [23] and [24] relied on queueing theory and Lyapunov drift arguments to find stabilizing policies, we instead interpret the scheduling and routing problem as a stochastic optimization problem. This allows us to resort to a dual stochastic subgradient descent algorithm [26] to solve the joint routing-scheduling problem.
We devote Section 3 to the development of the proposed stochastic joint routing and scheduling algorithms. The main issue to tackle is that the introduction of energy harvesting constraints results in a causality problem regarding the energy consumption. In order to solve this, we introduce a modified problem formulation that allows us to ensure causality. Under this framework, we propose two different policies. The first, which we denote Stochastic Backpressure with Energy Harvesting (SBP-EH), is a policy of a rather simple nature. The network nodes track the pressure of the data flows by computing the difference between the Lagrange multipliers associated with their queue stability constraints and the ones of their neighbors (instead of their data queues as in the classical backpressure algorithm). Then, the Lagrange multipliers associated with the battery state reduce the pressure when the stored energy in the node decreases. The resulting routing-scheduling decision is to transmit the flow with the highest pressure. The second policy, which we name Stochastic Soft Backpressure with Energy Harvesting (SSBP-EH), is a probabilistic policy. In this policy, the nodes perform the same tracking of pressure as the SBP-EH policy. However, instead of transmitting the flow with the highest pressure, the flows are equalized in an inverse waterfilling manner. This results in a routing-scheduling probability mass function, where the transmit decision is taken as a sample of this distribution. This second policy, while not as simple as the previous one, provides several improvements in the stabilization speed of the network, as well as a reduction in the packets in queue and packet delivery delay once the network is stabilized.
Theoretical guarantees, namely, queue stability and energy causality are discussed in Section 4. For both policies, we provide the necessary battery capacity which ensures the proper behavior of the algorithms. Furthermore, we also certify that given sustainable data and energy arrival rates, the stability of the data queues over all network nodes is guaranteed. After this, we dedicate Section 5 to simulations assessing the performance of our proposed policies and verify that they show a minimal gap in performance with respect to classical policies operating with an unlimited energy supply. Finally, we provide some concluding remarks in Section 6.
## 2 System Model
Consider a communication network given by the graph , where is the set of nodes in the network and is the set of communication links, such that if node is capable of communicating with node , we have . Moreover, we define the neighborhood of node as the set . The network supports information flows (which we index by the set ), where for a flow , the destination node is denoted by . At a time slot , each flow at the -th node generates packets to be delivered to the node . This packet arrival process is assumed to be stationary with mean . At the same time, the -th node routes packets to its neighbors , while simultaneously being routed packets. For simplicity, we restrict each node to routing at most one packet per time slot to its neighbors. Therefore, the nodes have the following routing constraint
Furthermore, each node in the network keeps track of the number of packets awaiting to be transmitted for each flow. Denoting by the -th flow data queue at the -th node and time slot , the evolution of the queue is given by
for all and . The objective is to determine routing policies such that the queues in remain stable while satisfying the routing constraints given by . By grouping all the queues in a vector , we say that the routing policies guarantee stability if there exists a constant such that for some arbitrary time we have
This is to say that, almost surely, no queue becomes arbitrarily large. In turn, we can guarantee this if the average rate at which packets enter the queues is smaller than the rate at which they exit them. In order to formally state this, let us denote the ergodic limits of processes and by
Then, in order to have stable data queues in the network, it suffices to satisfy the condition
for all and . If there exist routing variables satisfying this inequality, then the queue evolution in follows a supermartingale expression, and the stability condition given by is then guaranteed by the martingale convergence theorem [27]. Alternatively, by introducing arbitrary concave functions , we can formulate this as the following optimization problem
Observe that in the inequality allows for equality whereas in the inequality is strict. This mismatch is necessary because optimization problems are not well behaved on open sets. We can then think of as a relaxation of but one of little practical consequence as it is always possible to add a small slack term to to produce a non-strict inequality that implies the strict inequality in . We don’t do that to avoid a cumbersome term of little conceptual value. We emphasize that implicit to is the constraint for all , which is the same as but for average variables. We will ensure later that the algorithm we design satisfies the constraint not just on average but for all time instances – see Section 3.
Assuming data arrival rates satisfying for all as well as the inequalities in exist, the objective is to design an algorithm such that the instantaneous routing variables satisfy and the routing constraints in are satisfied for all time slots. This is the optimization problem that the backpressure family of algorithms solve. By resorting to a stochastic subgradient method on the dual domain, a direct comparison can be established between data queues and Lagrange multipliers [28]. Then, the choice of objective function in the optimization problem determines the resulting variant of the backpressure algorithm. For example, on one hand, the stochastic backpressure (SBP) algorithm [25] can be recovered by the use of a linear objective function. On the other hand, the choice of a strongly concave objective function leads to the soft stochastic backpressure (SSBP) algorithm [29].
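As a concrete illustration of the queueing dynamics and routing constraint described in this section, the following is a minimal Python sketch; since the paper's symbols are not reproduced here, every array name below is an assumption rather than the paper's notation.

```python
import numpy as np

# Assumed shapes: q and a are (num_nodes, num_flows); r is
# (num_nodes, num_nodes, num_flows), with r[i, j, k] = 1 if node i routes
# a packet of flow k to node j in the current slot, and 0 otherwise.
def queue_update(q, r, a):
    outgoing = r.sum(axis=1)   # packets each node sends out, per flow
    incoming = r.sum(axis=0)   # packets each node receives, per flow
    return np.maximum(q - outgoing, 0) + incoming + a

def routing_feasible(r):
    # Each node routes at most one packet per slot, over all flows
    # and all neighbors.
    return bool((r.sum(axis=(1, 2)) <= 1).all())
```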
### 2.1 Routing and Scheduling with Energy Harvesting
Different from classical approaches [25], we consider that the network nodes are powered by energy harvesting. At time slot , the -th node harvests units of energy, where the energy harvesting process is assumed to be stationary with mean . We consider a normalized energy harvesting process, where the routing of one packet consumes one unit of energy. Furthermore, we consider packet transmission to be the only energy-consuming action taken by the nodes. Under these conditions and denoting by the energy stored in the -th node’s battery at time , the following energy causality constraint must be satisfied for all time slots
Additionally, we consider that nodes have a finite battery of capacity . Then, we can write the battery dynamics as
for , where denotes the projection to the interval . In order to introduce these constraints into the optimization problem , we denote the ergodic limit of the energy harvesting process by
Then, substituting the battery dynamics given by in the energy causality constraint and then taking the ergodic limits on both sides of the inequality, we obtain the following average constraint in the routing variables
This states that the average amount of energy spent must be less than the average energy harvested. Then, we introduce this constraint into problem , resulting in the following optimization problem
Assuming data and energy arrival rates satisfying and exist, the goal is to design an algorithm such that the instantaneous routing variables satisfy and the constraints and are satisfied for all time slots. However, the use of the average energy constraint presents a causality problem, as a solution satisfying does not guarantee that the energy causality constraint in is satisfied for all time slots. In order to circumvent this, we propose the introduction of the following modified optimization problem
This optimization problem differs from in the introduction of an auxiliary variable . This variable is restricted to lie in the interval , with being a constant whose value is determined by the system parameters. This auxiliary variable appears in the queue stability constraint , where it helps to satisfy the constraint if necessary. Furthermore, we have added the term in the objective function, where is a constant parameter. The value of this parameter is chosen such that the optimal value of the Lagrange multipliers of the queue constraint lies in the interval . We later show in Section 4 that this allows us to satisfy the energy causality constraints while also stabilizing the data queues.
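The battery dynamics and the causality requirement described above admit a similarly short sketch (again with assumed names): the battery is a clipped accumulator, and a transmission is admissible only if the battery can pay for it.

```python
import numpy as np

# b: (num_nodes,) stored energy; e: (num_nodes,) harvested this slot;
# spent: (num_nodes,) packets transmitted this slot (one energy unit each).
def battery_update(b, e, spent, b_max):
    return np.clip(b - spent + e, 0.0, b_max)

def energy_causal(b, spent):
    # Energy causality: no node spends more energy than it has stored.
    return bool((spent <= b).all())
```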
## 3 Joint Routing and Scheduling Algorithm
As we mentioned previously, in order to solve optimization problem posed in we resort to a primal-dual method. To start, let us define the vector collecting the routing variables and auxiliary variables and the vector collecting the queue multipliers associated with constraint and battery multipliers corresponding to constraint . Furthermore, we collect the implicit optimization constraints in the set . Then, we write the Lagrangian of the optimization problem as follows
The Lagrange dual function is then given by
An immediate issue that arises when trying to solve this problem is that network nodes have no knowledge of the data arrival rates nor the energy harvesting rates . Nonetheless, the nodes observe the instantaneous rates and , hence we resort to using these instantaneous variables. Furthermore, we can reorder the Lagrangian to allow for a separate maximization over network nodes, where each node only needs the queue multipliers of its neighboring nodes. The routing variables can then be obtained as follows
for . In a similar way, the auxiliary variables at each node are given by
This is simply a threshold operation, where if and if . Now, since the dual function in is convex, we can minimize it by performing a stochastic subgradient descent. Then, the dual updates are given by the following expressions
where is the projection on the nonnegative orthant. For compactness, we also express the dual updates in vector form as , where corresponds to the vector collecting the stochastic subgradients. Since the algorithm that we propose is designed to be run in an online fashion, we have considered a fixed step size in the dual updates. Specifically, we have used a unit step size. This allows a clear comparison between dual variables and data queues and battery dynamics as outlined in Figure ?. For the case of the data queues, the difference between their dynamics and those of their Lagrange multiplier counterparts is given by the auxiliary variable in the dual update. Assume a packet is either routed or not, i.e., . Then the dual variables follow the data queues until , at which point, the dual variables are pushed back by the auxiliary variable . From this point forward, the queue and multiplier dynamics lose their symmetry, coupling again when the queue empties. In a similar way, a comparison can also be drawn between the battery dynamics and the battery dual update . In this case, the symmetry exists in a mirrored way, as the relationship between the battery state and its multipliers is given by . Different from the case of data queues, the coupling between the battery state and its multipliers is never lost.
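A hedged sketch of these unit-step dual updates (variable names assumed): the queue multipliers mirror the data queues until the auxiliary variable pushes them back, while the battery multipliers grow as energy is spent faster than it is harvested.

```python
import numpy as np

# lam: (num_nodes, num_flows) queue multipliers; mu: (num_nodes,) battery
# multipliers; x: (num_nodes, num_flows) auxiliary variables; a, r, e as
# in the earlier sketches.
def dual_update(lam, mu, a, r, e, x):
    incoming = r.sum(axis=0)
    outgoing = r.sum(axis=1)
    spent = r.sum(axis=(1, 2))
    # Unit-step stochastic subgradient of the queue stability constraint.
    lam = np.maximum(lam + a + incoming - outgoing - x, 0.0)
    # Unit-step stochastic subgradient of the average energy constraint.
    mu = np.maximum(mu + spent - e, 0.0)
    return lam, mu
```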
Next, we consider some choices of the objective function in the optimization problem which lead to familiar formulations of the backpressure algorithm adapted to the energy harvesting process. The steps of the two resulting policies are summarized in Algorithm ?.
### 3.1 Stochastic Backpressure with Energy Harvesting (SBP-EH)
Consider functions which are linear with respect to the routing variables, i.e., taking the form , where is an arbitrary weight. In this case, we recover a version of the stochastic backpressure algorithm adapted to the energy harvesting process. For a linear objective function, the maximization in leads to the routing variables
To solve the maximization in it suffices to find the flow over the neighboring nodes with the largest differential and, if it is positive, set its corresponding routing variable to one while the other variables are kept at zero. This algorithm, when , is analogous to the stochastic form of backpressure. In the classical backpressure algorithm, the flow with the largest queue differential is chosen. Interpreted in its stochastic form, the flow with the largest Lagrange multiplier difference is chosen. In the SBP-EH policy, the stochastic form of backpressure adds the battery multiplier . As the battery depletes, the value of increases and this node’s pressure to transmit decreases.
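A minimal sketch of the SBP-EH decision at a single node (all names assumed): compute the battery-discounted pressure for every (flow, neighbor) pair and transmit the maximizer if its pressure is positive.

```python
def sbp_eh_decision(lam_i, lam_nbrs, mu_i):
    # lam_i[k]: this node's queue multiplier for flow k;
    # lam_nbrs[j][k]: neighbor j's multiplier for flow k;
    # mu_i: this node's battery multiplier (discourages transmission
    # as the battery depletes).
    best, choice = 0.0, None
    for j, lam_j in enumerate(lam_nbrs):
        for k in range(len(lam_i)):
            pressure = lam_i[k] - lam_j[k] - mu_i
            if pressure > best:
                best, choice = pressure, (k, j)
    return choice  # (flow, neighbor) to serve, or None to stay silent
```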
### 3.2 Stochastic Soft Backpressure with Energy Harvesting (SSBP-EH)
Now, we consider a quadratic plus linear term function given by . This leads to a stochastic soft backpressure algorithm [29], where the routing variables obtained by the maximization in are given by
where are the Lagrange multipliers ensuring for all . This expression can be understood as a form of inverse waterfilling. An example of this solution is shown in Figure 1. Let us construct rectangles of height and scale them by the widths . For each node, every possible flow and neighbor routing destination is represented by one of these rectangles. Then, water is poured from the bottom, in an inverse manner, until the water level is reached. The resulting area of water filled inside the rectangles represents the probability mass function of the routing variables. Then, the node takes its routing decision by drawing a sample from this distribution.
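A hedged sketch of this probabilistic rule at one node, with uniform rectangle widths for simplicity (the paper's per-flow weights are not reproduced here): the quadratic maximization is solved by bisection on the water level, yielding a probability mass function from which the decision is sampled.

```python
import numpy as np

# pressures: 1-D array, one entry per (flow, neighbor) pair; w > 0 is the
# weight of the quadratic term. Solves
#   max sum(p * r - (w / 2) * r**2)  s.t.  sum(r) <= 1, r >= 0,
# whose KKT solution is r = max(p - nu, 0) / w for a water level nu >= 0.
def ssbp_eh_pmf(pressures, w):
    p = np.asarray(pressures, dtype=float)
    r = np.maximum(p, 0.0) / w
    if r.sum() <= 1.0:
        return r                       # slack left: node may stay idle
    lo, hi = 0.0, p.max()
    for _ in range(60):                # bisection on the water level
        nu = 0.5 * (lo + hi)
        if (np.maximum(p - nu, 0.0) / w).sum() > 1.0:
            lo = nu
        else:
            hi = nu
    return np.maximum(p - hi, 0.0) / w

def ssbp_eh_decision(pressures, w, rng=None):
    rng = rng or np.random.default_rng()
    r = ssbp_eh_pmf(pressures, w)
    probs = np.append(r, max(1.0 - r.sum(), 0.0))   # last entry = stay idle
    choice = rng.choice(len(probs), p=probs / probs.sum())
    return None if choice == len(r) else choice
```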
While not as simple as the SBP-EH algorithm, the SSBP-EH algorithm presents an important improvement over the former. The introduction of a strongly concave objective function allows the dual function in to be differentiable. This, in turn, makes the algorithm take the form of a stochastic gradient rather than a stochastic subgradient (which is the case of SBP-EH), therefore improving the expected rate of stabilization of the algorithm from to [30].
## 4 Causality and Stability Analysis
In this section, we provide theoretical guarantees on the behavior of the proposed policies. On one hand, we establish the conditions under which the routing policies generated by Algorithm ? satisfy the energy causality constraints . And, on the other hand, we provide stability guarantees on the network queues.
### 4.1 Energy Causality
As we mentioned previously in Section 2, the presence of the energy harvesting constraints in the stochastic optimization problem introduces the question of causality. In order to have a tractable problem, we have introduced the energy harvesting constraints in an average sense to the routing-scheduling problem. This includes an additional issue, as not all possible solutions satisfy the original causality constraints for all time slots. In order to deal with this, we have modified the problem formulation with the introduction of an auxiliary variable. By appropriately choosing the domain of this auxiliary variable and the nodes’ battery capacity, we can ensure that the causality constraints are satisfied.
To satisfy the energy causality constraints it suffices to show that no transmission occurs when there is no available energy in the battery. This is to say that for all if . In expressions and , corresponding to the SBP-EH and SSBP-EH algorithms, it suffices to ensure that when the battery is empty. In this case, when , the battery dual update takes the value . By the dual update and the minimum value of , the data arrival bound and the number of neighbors , we can upper bound the multiplier difference by over all time slots . We can then write , and since , this ensures that , hence satisfying the energy causality constraint for all time slots.
In order to ensure that the energy causality constraints are satisfied, the stochastic subgradients are required to be bounded. This, in turn, forces the probability distribution of the data arrival process to be bounded above by a constant . In practice, for the case in which the probability distribution is not bounded, when a time slot with over packets occurs, only data packets can be kept in the queue and the rest must be discarded to satisfy the energy causality constraints.
### 4.2 Queue Stability
Now, we provide guarantees on the queue stability of the proposed policies. Different from other works (such as [23]), which analyze queue stability with Lyapunov drift notions, we resort to duality theory arguments. We do this by leveraging the fact that the proposed algorithm is a type of stochastic subgradient algorithm. The approach we take to showing that our algorithm makes the queues stable in the sense of is to show that the solution provided by Algorithm ? satisfies the queue stability constraints almost surely. Then, we show that if the optimal queue multipliers are upper bounded by , the solution provided by Algorithm ? also satisfies the stability constraint without auxiliary variable . Hence, the data queues satisfy the stability condition .
First, we start by recalling a common property of the stochastic subgradient.
Take the Lagrangian and substitute the ergodic definitions and . Then, the resulting Lagrangian is given by
Now, recall that the dual function is then given by , and consider the dual function at time , given by . The primal maximization of this dual function is given by the variables and in and , respectively. Hence, we can write the dual function as
where we have moved the expectation operator out of the subgradients due to its linearity. Then we can use the compact notation for the multiplier vector and the subgradient , and substitute the conditional expected value of the subgradients to obtain
For any arbitrary we simply have
Then it simply suffices to subtract expression from to obtain inequality .
Proposition ? shows that the stochastic subgradient is an average descent direction of the dual function . Now, we proceed to quantify the average descent distance of the dual update.
Start by considering the squared distance between the dual variables at time and their optimal value. This distance is given by . Then, we substitute the dual variable by its update . Since the projection is nonexpansive, we can upper bound the aforementioned distance by
Then, we simply expand the square norm to obtain the expression
Now, by taking the expectation conditioned by on both sides we obtain
And then by substituting the second term on the right hand side by the bound and the third term by the application of Proposition ? with , we have expression .
Then, we leverage this lemma to show that Algorithm ? converges to a neighborhood of the optimal solution of the dual function.
For ease of exposition, let . Then, define the stopped process , tracking the distance between the dual variables at time and their optimal value, i.e., , until the optimality gap falls below . This expression is given by
where denotes the indicator function. In a similar way, define the sequence which follows until the optimality gap becomes smaller than ,
Now, let be the filtration measuring and . Since and are completely determined by , and is a Markov process, conditioning on is equivalent to conditioning on . Hence, by application of Lemma ?, we can write . Since by definitions and , the processes and are nonnegative, the sequence follows a supermartingale expression. Then, by the supermartingale convergence theorem [27], the sequence converges almost surely, and the sum is almost surely finite. The latter implies that almost surely. Given the definition of , this is implied by either of two events. (i) If the indicator function goes to zero, i.e., for a large ; or (ii) . From any of those events, expression follows.
The convergence of the dual function as asserted in the previous lemma allows us to prove that the sequences of routing decision and auxiliary variables generated by Algorithm ? are almost surely feasible.
First, let us collect the feasible routing variables and auxiliary variables in the vector . Then, if there exist strictly feasible variables , we can bound the value of the dual function as follows. The dual function is defined as the maximum over primal variables , hence . From this, by using the and terms we establish the following bound
Then, by simply reordering terms we obtain the following upper bound on the dual variables
Lemma ? certifies the existence of a time for which . Hence,
for . Now, recall that the feasibility conditions and are given by the limits
which, by recalling that the constraints are simply the stochastic subgradients of the problem, they can also be written in compact form as . Now, consider the dual updates and given by . Since the operator corresponds to a nonnegative projection, the dual variables can be lower bounded by removing the projection and recursively substituting the updates
To prove almost sure feasibility, we will proceed by contradiction. First, assume that conditions and are infeasible. In compact form, this means the existence of a time , for which there is a constant such that . By substituting in , we have that the dual variables are lower bounded by . Now, we can freely choose a time such that
for all . However, this contradicts the upper bound established in . This means that there do not exist sequences generated by Algorithm ? such that and are not satisfied. Therefore, the constraints and are satisfied almost surely.
Finally, it suffices to show that if the optimal dual variables are upper bounded by the constants , the system satisfies the original problem without the auxiliary variable, thus satisfying the original constraint and hence the queue stability condition .
Take the difference between the Lagrangian of the optimization problem with the auxiliary variable and the original problem . The difference between them is given by
where and are the Lagrange multipliers of the and constraints, respectively. In order for both problems to be equivalent, the minimization of , which is the solution of the dual problem, must be zero. This implies the existence of Lagrange multipliers satisfying the constraints , for all and . Since , the constraints can be satisfied by letting , and acting as a slack variable. Then, , which implies that the optimal solution of both problems is the same. Since [26] and , by Proposition ? the routing variables of Algorithm ? satisfy the constraint .
Denote by the filtration measuring . Then, since the routing variables generated by Algorithm ? satisfy , the queue evolution obeys the supermartingale expression . By the supermartingale convergence theorem [27], the sequence converges almost surely, therefore satisfying the stability condition .
Given an appropriate choice of and feasible data and energy arrivals, Proposition ? guarantees that the nodes route on average as many packets as they receive from neighbors and the arrival process (i.e., the constraint is satisfied). Then, Corollary ? shows that this implies that the queues themselves are almost surely stable.
## 5 Numerical Results
In this section, we conduct numerical experiments aimed at evaluating the performance of the proposed SBP-EH and SSBP-EH policies. As a means of comparison, when indicated, we also provide the non-energy harvesting counterparts of our proposed policies. Namely, the Stochastic Backpressure (SBP) [25] and Stochastic Soft Backpressure (SSBP) [29] policies. These policies correspond to solving , the original optimization problem without the energy harvesting constraints, with the objective functions shown in Sections Section 3.1 and Section 3.2, respectively. Hence, these policies assume the availability of an unlimited energy supply. We consider the communication network shown in Figure 2, where we let nodes 1 and 14 act as sink nodes and the rest of the nodes support a single flow with packet arrival rates of packets per time slot. Moreover, we consider the nodes to be harvesting energy at a rate of units of energy per time slot and storing it in a battery of capacity . Furthermore, we set the routing weights to , and let .
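To make the experimental setup reproducible in spirit, here is a hedged end-to-end sketch of one run, combining the helper functions sketched in Sections 2 and 3; all numeric parameters are illustrative assumptions rather than the paper's exact values.

```python
import numpy as np

# Hedged simulation skeleton: the per-node decisions are left as a stub to
# be filled with sbp_eh_decision or ssbp_eh_decision from the sketches above.
def simulate(T, num_nodes, num_flows, arrival_rate=0.05,
             harvest_rate=0.15, b_max=10.0, rng=None):
    rng = rng or np.random.default_rng(0)
    q = np.zeros((num_nodes, num_flows))
    b = np.full(num_nodes, b_max)
    lam = np.zeros((num_nodes, num_flows))
    mu = np.zeros(num_nodes)
    for _ in range(T):
        a = rng.poisson(arrival_rate, size=(num_nodes, num_flows))
        e = rng.poisson(harvest_rate, size=num_nodes)
        r = np.zeros((num_nodes, num_nodes, num_flows))
        # ... fill r node by node with one of the two decision rules,
        #     transmitting only where the battery b[i] is nonempty ...
        q = queue_update(q, r, a)
        b = battery_update(b, e, r.sum(axis=(1, 2)), b_max)
        lam, mu = dual_update(lam, mu, a, r, e, x=np.zeros_like(lam))
    return q, b
```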
### 5.1 Network Queues
First, we plot in Figure 3 a sample path of the total number of queued packets in the network as a function of the elapsed time. As expected, all the policies are capable of stabilizing the queues in the network. Due to the random nature of the processes, it is difficult to say exactly at which point stabilization occurs. Nonetheless, for the SBP and SBP-EH policies, the data queues seem to stop growing after around time slots. In the case of the SSBP and SSBP-EH policies, stabilization occurs much more rapidly, with less than time slots necessary to obtain stability. Also, both soft policies (SSBP and SSBP-EH) stabilize the queues with a lower number of average queued packets than their counterpart non-soft policies (SBP and SBP-EH). Namely, at , the average queued packets are for SBP and for SBP-EH. In the case of the soft policies, these numbers are much smaller, with and packets for SSBP and SSBP-EH, respectively. This also shows that the gap between the SSBP and SSBP-EH policies seems to vanish asymptotically ( at ), while this is not the case for the non-soft policies (a gap of at ). This occurs due to the fact that the SBP and SBP-EH policies choose their routing policy by maximizing the difference between queue multipliers, which makes the decision indifferent to the actual value of the multipliers as long as their differences stay the same. For the SSBP and SSBP-EH policies, this situation does not occur due to their randomized nature, hence pushing for lower average queued packets. Furthermore, since the data arrivals can be sustained by the energy harvesting process, the SSBP-EH policy tries to get as close as possible to the non-EH one, leading to the small gap. Also, note that the SBP-EH and SSBP-EH policies are more volatile than their non-EH counterparts. For example, around , the number of queued packets spikes for the energy harvesting policies, which is not the case in the non-EH ones. These types of spikes arise due to a temporary lack of energy around those time instants.
In Figure ? we have plotted the average queued packets at each node for the SBP-EH and SSBP-EH policies. In general, SSBP-EH shows a lower number of average queued packets over all the nodes, and the improvements are more significant the lower the pressure the node supports. This tends to translate to better improvements for nodes far away from a sink, which tend to be routed less traffic. For example, the nodes and (see Figure 2), which are the furthest away from any sink, show a reduction of and average packets, respectively, when using SSBP-EH. The rest of the nodes also show significant improvements when using SSBP-EH. Nodes , , and , all lying at two hops of distance from a sink, are more critical for accessing a sink, as having them congested blocks the access to the sink of the previous nodes and . In this case, the improvements range from to average data packets. Finally, there are the nodes that lie at one hop distance from any sink (nodes , , , , and ). These nodes sustain a significant amount of traffic and show improvements ranging from to . The nodes with the highest traffic, nodes and , improve by and data packets, respectively.
The differences between SBP-EH and SSBP-EH are also evidenced in terms of their energy use. In Figure 4 we plot the total energy in the network at a given time slot for both the SBP-EH and the SSBP-EH policies. On one hand, this figure illustrates the high variability in the energy supply due to the energy harvesting process. On the other hand, the SSBP-EH policy is shown to be more aggressive in its energy use. Also, note that drops in total network energy are not necessarily correlated with increases in queued packets in the network. For example, the previously noticed peak of queued data packets at in Figure 3 does not have an equivalent large drop in network energy. This is due to the fact that it is better for the overall energy in the network to be lower than for a specific high-pressure node to suffer an energy shortage. In general, spikes in queued data packets tend to occur when a specific route becomes blocked by a temporary lack of energy.
### 5.2 Network Balance
As discussed in Section 4, the choice of the parameters , which control the maximum values taken by the queue multipliers , is important to ensure the stability of the data queues. Namely, the optimal multipliers must be smaller than this parameter. In Figure 5, we plot the multipliers for one of the nodes which supports the most traffic in the network (node 5). The time-average of these dual variables converges to the optimal value. In the chosen scenario, the parameter used, , is well above the optimal value. Hence, the system satisfies Proposition ?, and can be ensured to stabilize the queues. Some additional insight into the importance of the queue multipliers can be gained from a pricing interpretation of the dual problem. Under this interpretation, the dual variables represent the unit price associated with the routing constraint . When the node does not satisfy this constraint, it pays per unit of constraint violation. Likewise, if it strictly satisfies this constraint, it receives per unit of constraint satisfaction. In this sense, the parameter represents both the maximum payment that a node can receive and the maximum price it can pay. Hence, the optimal value of must necessarily fall below in order to obtain a stable system. We can use this pricing interpretation to compare the different policies. In general, the energy harvesting policies have higher values than their non-EH counterparts. This is due to the fact that, under the energy harvesting constraints, the unit violation of the routing constraint is harder to recoup in the EH-aware policies, hence the higher price paid. On a similar note, due to their more aggressive routing decisions, the soft policies also show higher values than their non-soft counterparts.
Also of interest is the study of the balance characteristics of the network. As discussed previously, the stability guarantees of the network are subject to the existence of a feasible routing solution given the data and energy arrival rates. This motivates another way of showing stability, different from the data queues shown in Figure 3. We can consider that a successful routing strategy is expected to route to the sink nodes as many packets as generated by the network. This is given by the network balance expression , where . The time average of this measure is shown in Figure 6. As expected, the time average data network balance goes to zero for all policies. This illustrates that all policies are capable of routing to the sink nodes as many packets as arrive to the network, hence ensuring queue stability. We previously observed in Figure 3 that stabilization occurs around time slots for the SBP and SBP-EH policies and less than time slots for the SSBP and SSBP-EH ones. Those observations can be compared with the network balance of Figure 6, where those values correspond to the time around when the slope of the data balance curve starts to go flat. Remarkably, the proposed energy harvesting policies do not lose convergence speed when compared to the non-EH ones. Also, convergence of the SSBP and SSBP-EH policies occurs at a faster rate, a point that we previously raised in Section 3.2.
Another measure of network balance of interest is related to the energy balance in the network. This can be expressed by . This measure serves to quantify how much of the energy harvested in the network is actually being used. The time average of the energy balance is shown in Figure 7. As expected, given that the network harvests enough energy to support the routing-scheduling decisions, both policies converge to a non-zero value. Once stabilized, the SBP-EH policy has, on average, energy left for around 12 packet transmissions in the whole network, while the SSBP-EH only has energy left for an average of 2 packet transmissions. We previously identified in Figure 4 the SSBP-EH to be more aggressive in its energy use. At the same time, we can also say that the SSBP-EH policy uses its energy supply in a more efficient manner. Since the nodes are powered by energy harvesting instead of a limited energy supply, not using available energy can be considered wasteful, as batteries will tend to overflow. In this sense, to use more energy (as in SSBP-EH) rather than to use energy more conservatively (as in SBP-EH) can be seen as a better option. Thus, SSBP-EH makes a more efficient use of the available energy, resulting in an overall better performance.
### 5.3 Network Delay
An additional important characteristic of routing-scheduling policies is their resulting delay in the packet delivery. While the average delay is proportional to the average number of queued packets in the network, we also study this measure explicitly. In order to do this, and under the assumption of first-in first-out queues, we compute the number of time slots it takes for a packet to be delivered to a sink node. We plot in Figure 8 the resulting histogram. On average, the number of time slots it takes to deliver a packet to a sink node is for the SSBP-EH policy, while it is for the SBP-EH policy. This is about a time slot of difference between the policies. Taking a more detailed look at the histogram, we can see that the distribution for the SSBP-EH is very similar to the one of the SBP-EH, but with a time slot shift to the left. As already seen in Fig. ?, the more aggressive behavior of the SSBP-EH policy leads to an overall reduction in the network queues. These smaller queues result in a reduction of the waiting time of packets at each hop, which results in a smaller delivery delay.
## 6 Conclusions
In this work, we have generalized the stochastic family of backpressure policies to energy harvesting networks. Different from other works, which are based on Lyapunov drift notions, we have resorted to duality theory. This has allowed us to study the problem under a framework based on the correspondence between queues and Lagrange multipliers. Under this framework, we have proposed two policies: (i) SBP-EH, an easy-to-implement policy where nodes track the difference between their queue multipliers and the ones of their neighbors; the pressure is further reduced by the battery multipliers as the stored energy decreases, and the decision is to transmit the flow with the highest pressure. And (ii) SSBP-EH, a probabilistic policy with improved performance and convergence guarantees, where nodes track the pressure in the same way as SBP-EH but perform an equalization in the form of an inverse waterfilling; this results in a probability mass function for the routing-scheduling decision, from which a sample is then drawn to decide the transmission. For both policies, we have studied the conditions under which energy causality and queue stability are guaranteed, which we have also verified by means of simulations. The numerical results show that given feasible data and energy arrivals, both policies are capable of stabilizing the network. Overall, the SSBP-EH policy shows improvements in queued packets, stabilization speed and delay with respect to the SBP-EH policy. Furthermore, when compared to non-EH policies, the SSBP-EH policy is shown to have an asymptotically vanishing gap.
### References
1. M. Calvo-Fullana, J. Matamoros, C. Antón-Haro, and A. Ribeiro, “Stochastic backpressure in energy harvesting networks,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 2017.
2. R. J. Vullers, R. Schaijk, H. J. Visser, J. Penders, and C. V. Hoof, “Energy harvesting for autonomous wireless sensor networks,” IEEE Solid-State Circuits Magazine, vol. 2, no. 2, pp. 29–38, 2010.
3. J. Yang and S. Ulukus, “Optimal packet scheduling in an energy harvesting communication system,” IEEE Transactions on Communications, vol. 60, no. 1, pp. 220–230, 2012.
4. K. Tutuncuoglu and A. Yener, “Optimum transmission policies for battery limited energy harvesting nodes,” IEEE Transactions on Wireless Communications, vol. 11, no. 3, pp. 1180–1189, 2012.
5. O. Ozel, K. Tutuncuoglu, J. Yang, S. Ulukus, and A. Yener, “Transmission with energy harvesting nodes in fading wireless channels: Optimal policies,” IEEE Journal on Selected Areas in Communications, vol. 29, no. 8, pp. 1732–1743, 2011.
6. C. K. Ho and R. Zhang, “Optimal energy allocation for wireless communications with energy harvesting constraints,” IEEE Transactions on Signal Processing, vol. 60, no. 9, pp. 4808–4818, 2012.
7. O. Ozel, K. Shahzad, and S. Ulukus, “Optimal energy allocation for energy harvesting transmitters with hybrid energy storage and processing cost,” IEEE Transactions on Signal Processing, vol. 62, no. 12, pp. 3232–3245, 2014.
8. M. Calvo-Fullana, J. Matamoros, and C. Antón-Haro, “Reconstruction of correlated sources with energy harvesting constraints in delay-constrained and delay-tolerant communication scenarios,” IEEE Transactions on Wireless Communications, vol. 16, no. 3, pp. 1974–1986, 2017.
9. P. Castiglione and G. Matz, “Energy-neutral source-channel coding with battery and memory size constraints,” IEEE Transactions on Communications, vol. 62, no. 4, pp. 1373–1381, 2014.
10. O. Orhan, D. Gunduz, and E. Erkip, “Source-channel coding under energy, delay, and buffer constraints,” IEEE Transactions on Wireless Communications, vol. 14, no. 7, pp. 3836–3849, July 2015.
11. P. Castiglione, O. Simeone, E. Erkip, and T. Zemen, “Energy management policies for energy-neutral source-channel coding,” IEEE Transactions on Communications, vol. 60, no. 9, pp. 2668–2678, 2012.
12. G. Yang, V. Y. Tan, C. K. Ho, S. H. Ting, and Y. L. Guan, “Wireless compressive sensing for energy harvesting sensor nodes,” IEEE Transactions on Signal Processing, vol. 61, no. 18, pp. 4491–4505, 2013.
13. S. Knorn, S. Dey, A. Ahlén, and D. E. Quevedo, “Distortion minimization in multi-sensor estimation using energy harvesting and energy sharing.” IEEE Trans. Signal Processing, vol. 63, no. 11, pp. 2848–2863, 2015.
14. M. Calvo-Fullana, J. Matamoros, and C. Antón-Haro, “Sensor selection and power allocation strategies for energy harvesting wireless sensor networks,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 12, pp. 3685–3695, 2016.
15. K. Huang and E. Larsson, “Simultaneous information and power transfer for broadband wireless systems,” IEEE Transactions on Signal Processing, vol. 61, no. 23, pp. 5972–5986, 2013.
16. J. Xu, L. Liu, and R. Zhang, “Multiuser miso beamforming for simultaneous wireless information and power transfer,” IEEE Transactions on Signal Processing, vol. 62, no. 18, pp. 4798–4810, 2014.
17. G. Zheng, Z. K. M. Ho, E. A. Jorswieck, and B. E. Ottersten, “Information and energy cooperation in cognitive radio networks.” IEEE Trans. Signal Processing, vol. 62, no. 9, pp. 2290–2303, 2014.
18. L. Liu, R. Zhang, and K.-C. Chua, “Secrecy wireless information and power transfer with miso beamforming,” IEEE Transactions on Signal Processing, vol. 62, no. 7, pp. 1850–1863, 2014.
19. S. Ulukus, A. Yener, E. Erkip, O. Simeone, M. Zorzi, P. Grover, and K. Huang, “Energy harvesting wireless communications: A review of recent advances,” IEEE Journal on Selected Areas in Communications, vol. 33, no. 3, pp. 360–381, 2015.
20. O. Orhan and E. Erkip, “Energy harvesting two-hop communication networks,” IEEE Journal on Selected Areas in Communications, vol. 33, no. 12, pp. 2658–2670, 2015.
21. C. Tapparello, O. Simeone, and M. Rossi, “Dynamic compression-transmission for energy-harvesting multihop networks with correlated sources,” IEEE/ACM Transactions on Networking (TON), vol. 22, no. 6, pp. 1729–1741, 2014.
22. L. Lin, N. B. Shroff, and R. Srikant, “Asymptotically optimal energy-aware routing for multihop wireless networks with renewable energy sources,” IEEE/ACM Transactions on networking, vol. 15, no. 5, pp. 1021–1034, 2007.
23. M. Gatzianas, L. Georgiadis, and L. Tassiulas, “Control of wireless networks with rechargeable batteries,” IEEE Transactions on Wireless Communications, vol. 9, no. 2, pp. 581–593, 2010.
24. L. Huang and M. J. Neely, “Utility optimal scheduling in energy-harvesting networks,” IEEE/ACM Transactions on Networking (TON), vol. 21, no. 4, pp. 1117–1130, 2013.
25. L. Tassiulas and A. Ephremides, “Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks,” IEEE Transactions on Automatic Control, vol. 37, no. 12, pp. 1936–1948, 1992.
26. A. Ribeiro, “Ergodic stochastic optimization algorithms for wireless communication and networking,” IEEE Transactions on Signal Processing, vol. 58, no. 12, pp. 6369–6386, 2010.
27. R. Durrett, Probability: theory and examples. Cambridge University Press, 2010.
28. L. Huang and M. J. Neely, “Delay reduction via lagrange multipliers in stochastic network optimization,” IEEE Transactions on Automatic Control, vol. 56, no. 4, pp. 842–857, 2011.
29. A. Ribeiro, “Stochastic soft backpressure algorithms for routing and scheduling in wireless ad-hoc networks,” in Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2009 3rd IEEE International Workshop on. IEEE, 2009, pp. 137–140.
30. D. P. Bertsekas, Convex optimization algorithms. Athena Scientific, 2015.
https://www.numerade.com/questions/some-computer-algebra-systems-have-commands-that-will-draw-approximating-rectangles-and-evaluate-the/ |
# Some computer algebra systems have commands that will draw approximating rectangles and evaluate the sums of their areas, at least if $x_{i}^{*}$ is a left or right endpoint. (For instance, in Maple use leftbox, rightbox, leftsum, and rightsum.)(a) If $f(x) = 1/(x^2 + 1)$, $0 \le x \le 1$, find the left and right sums for $n$ = 10, 30, and 50.(b) Illustrate by graphing the rectangles in part (a).(c) Show that the exact area under $f$ lies between 0.780 and 0.791.
## Answer
a. Since $f(x) = 1/(x^2 + 1)$ is decreasing on $[0, 1]$, each left sum exceeds the corresponding right sum by exactly $(f(0) - f(1))/n = 1/(2n)$. The right sums are $R_{10} \approx 0.7600$, $R_{30} \approx 0.7770$, and $R_{50} \approx 0.7804$; the left sums are therefore $L_{10} \approx 0.8100$, $L_{30} \approx 0.7937$, and $L_{50} \approx 0.7904$.
b. Graph the approximating rectangles for each $n$ (left-endpoint rectangles overshoot the curve, right-endpoint rectangles undershoot it).
c. Because $f$ is decreasing, $R_{50} \le \int_0^1 f(x)\,dx \le L_{50}$, so the exact area lies between 0.780 and 0.791.
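For readers without Maple's leftsum and rightsum commands, here is a minimal Python sketch that reproduces these values (a plain Riemann-sum computation, not tied to any CAS):

```python
# Left and right Riemann sums of f(x) = 1/(x^2 + 1) on [0, 1].
def f(x):
    return 1.0 / (x * x + 1.0)

def left_right_sums(n, a=0.0, b=1.0):
    dx = (b - a) / n
    left = sum(f(a + i * dx) for i in range(n)) * dx
    right = sum(f(a + (i + 1) * dx) for i in range(n)) * dx
    return left, right

for n in (10, 30, 50):
    L, R = left_right_sums(n)
    print(n, round(L, 4), round(R, 4))
# n=10: L ~ 0.8100, R ~ 0.7600; n=50: L ~ 0.7904, R ~ 0.7804.
# Since f is decreasing, R_n <= area <= L_n, which gives part (c).
```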
https://usamo.wordpress.com/category/pedagogy/ | # Make Training Non Zero-Sum
Some thoughts about some modern trends in mathematical olympiads that may be concerning.
## I. The story of the barycentric coordinates
I worry about my geometry book. To explain why, let me tell you a story.
When I was in high school about six years ago, barycentric coordinates were nearly unknown as an olympiad technique. I only heard about it from whispers in the wind from friends who had heard of the technique and thought it might be usable. But at the time, there was nowhere where everything was written down explicitly. I had a handful of formulas online, a few helpful friends I could reach out to, and a couple of example posts littered across some forums.
Seduced by the possibility of arcane power, I didn’t let this stop me. Over the spring of 2012, spring break settled in, and I spent that entire week developing the entire theory of barycentric coordinates from scratch. There were no proofs I could find online, so I had to personally reconstruct all of them. In addition, I set out to find as many example problems as I could, but since no one had written barycentric solutions yet, I had to not only identify which problems looked like they might be good examples but also solve them myself to see if my guesses were correct. I even managed to prove a “new” theorem about perpendicular displacement vectors (which I did not get to name after myself).
I continued working all the way up through the summer, adding several new problems that came my way from MOP 2012. Finally, I posted a rough article with all my notes, examples, and proofs, which you can still find online. I still remember this as a sort of magnum opus from the first half of high school; it was an immensely rewarding learning experience.
Today, all this and much more can be yours for just $60, with any major credit or debit card.

Alas, my geometry book is just one example of ways in which the math contest scene is looking more and more like an industry. Over the years, more and more programs dedicated to training for competitions are springing up, and these programs can be quite costly. I myself run a training program now, which is even more expensive (in my defense, it’s one-on-one teaching, rather than a residential camp or group lesson).

It’s possible to imagine a situation in which the contest problems become more and more routine. In that world, math contests become an arms race. It becomes mandatory to have training in increasingly obscure techniques: everything from Popoviciu to Vieta jumping to rectangular circumhyperbolas. Students from less well-off families, or even countries without access to competition resources, become unable to compete, and are pushed to the bottom of the IMO scoreboard.

(Fortunately for me, I found out at the 2017 IMO that my geometry book actually helped level the international playing field, contrary to my initial expectations. It’s unfortunate that it’s not free, but it turned out that many students in other countries had until then found it nearly impossible to find suitable geometry materials. So now many more people have access to a reasonable geometry reference, rather than just the top countries with well-established training.)

## II. Another dark future

The first approximation you might have now is that training is bad. But I think that’s the wrong conclusion, since, well, I have an entire previous post dedicated to explaining what I perceive as the benefits of the math contest experience. So I think the conclusion is not that training is intrinsically bad, but rather that training must be meaningful. That is, the students have to gain something from the experience that’s not just a +7 bonus on their next olympiad contest.

I think the message “training is bad” might be even more dangerous. Imagine that the fashion swings the other way. The IMO jury becomes alarmed at the trend of train-able problems, and in response, the problems become designed specifically to antagonize trained students. The entire Geometry section of the IMO shortlist ceases to exist, because some Asian kid wrote this book that gives you too much of an advantage if you’ve read it, and besides who does geometry after high school anyways? The IMO 2014 used to be notable for having three combinatorics problems, but by 2040 the norm is to have four or five, because everyone knows combinatorics is harder to train for. Gradually, the IMO is redesigned to become an IQ test.

The changes then begin to permeate down. The USAMO committee is overthrown, and USAMO 2050 features six linguistics questions “so that we can find out who can actually think”. Math contests as a whole become a system for identifying the best genetic talent, explicitly aimed at weeding out the students who have “just been trained”. It doesn’t matter how hard you’ve worked; we want “creativity”.

This might be great at identifying the best mathematicians each generation, but I think an IMO of this shape would be actively destructive towards the contestants and community as well. You thought math contests were bad because they’re discouraging to the kids who don’t win? What if they become redesigned to make sure that you can’t improve your score no matter how hard you work?

## III. Now

What this means is that we have a balancing act to maintain.
We do not want to eliminate the role of training entirely, because the whole point of math contests is to have a learning experience that lasts longer than the two-day contest every year. But at the same time, we need to ensure the training is interesting, that it is deep and teaches skills like the ones I described before. Paying$60 to buy a 300-page PDF is not meaningful. But spending many hours to work through the problems in that PDF might be.
In many ways this is not a novel idea. If I am trying to teach a student, and I give them a problem which is too easy, they will not learn anything from it. Conversely, if I give them a problem which is too difficult, they will get discouraged and are unlikely to learn much from their trouble. The situation with olympiad training feels the same.
This applies to the way I think about my teaching as well. I am always upset when I hear (as I have) things like “X only did well on USAMO because of Evan Chen’s class”. If that is true, then all I am doing is taking money as input and changing the results of a zero-sum game as output, which is in my opinion rather pointless (and maybe unethical).
But I really think that’s not what’s happening. Maybe I’m a good teacher, but at the end of the day I am just a guide. If my students do well, or even if they don’t do well, it is because they spent many hours on the challenges that I designed, and have learned a lot from the whole experience. The credit for any success thus lies solely with the student’s effort. And that experience, I think, is certainly not zero-sum.
# I switched to point-based problem sets
It’s not uncommon for technical books to include an admonition from the author that readers must do the exercises and problems. I always feel a little peculiar when I read such warnings. Will something bad happen to me if I don’t do the exercises and problems? Of course not. I’ll gain some time, but at the expense of depth of understanding. Sometimes that’s worth it. Sometimes it’s not.
— Michael Nielsen, Neural Networks and Deep Learning
## 1. Synopsis
I spent the first few days of my recent winter vacation transitioning all the problem sets for my students from a “traditional” format to a “point-based” format. Here’s a before and after.
Technical specification:
• The traditional problem sets used to consist of a list of 6-9 olympiad problems of varying difficulty, for which you were expected to solve all problems over the course of two weeks.
• The new point-based problem sets consist of 10-15 olympiad problems, each weighted either 2, 3, 5, or 9 points, and an explicit target goal for that problem set. There’s a spectrum of how many of the problems you need to solve depending on the topic and the version (I have multiple difficulty versions of many sets), but as a rough estimate the goal is maybe 60%-75% of the total possible points on the problem set. Usually, on each problem set there are 2-4 problems which I think are especially nice or important, and I signal this by coloring the problem weight in red.
In this post I want to talk a little bit about what motivated this change.
## 2. The old days
I guess for historical context I’ll start by talking about why I used to have a traditional format, although I’m mildly embarrassed about it now, in hindsight.
When I first started out with designing my materials, I was actually basically always short on problems. Once you really get into designing olympiad materials, good problems begin to feel like tangible goods. Most problems I put on a handout are ones I’ve done personally, because otherwise, how are you supposed to know what the problem is like? This means I have to actually solve the problem, type up solution notes, and then decide how hard it is and what that problem teaches. This might take anywhere from 30 minutes to the entire afternoon, per problem. Now imagine you need 150 such problems to run a year’s curriculum, and you can see why the first year was so stressful. (I was very fortunate to have paid much of this cost in high school; I still remember many of the problems I did back as a student.)
So it seemed like a waste if I spent a lot of time vetting a problem and then my students didn’t do it, and as a practical matter I didn’t have enough materials yet to have much leeway anyways. I told myself this would be fine: after all, if you couldn’t do a problem, all you had to do was tell me what you’d tried, and then I’d walk you through the rest of it. So there’s no reason why you couldn’t finish the problem sets, right? (Ha. Ha. Ha.)
Now my problem bank has gotten much deeper, so I don’t have that excuse anymore. [1]
## 3. Agonizing over problem eight
But I’ll tell you now that even before I decided to switch to points, one of the biggest headaches was always whether to add in an eighth problem that was really nice but also difficult. (When I first started teaching, my problem sets were typically seven problems long.) If you looked at the TeX source for some of my old handouts, you’d see lots of problems commented out with a line saying “too long already”.
Teaching OTIS made me appreciate the amount of power I have on the other side of a mentor-student relationship. Basically, when I design a problem set, I am making decisions on behalf of the student: “these are the problems that I think you should work on”. Since my kids are all great students that respect me a lot, they will basically do whatever I tell them to.
That means I used to spend many hours agonizing over that eighth problem or whether to punt it. Yes, they’ll learn a lot if they solve (or don’t solve) it, but it will also take them another two or three hours on top of everything else they’re already doing (OTIS, school, trumpet, track, dance, social, blah blah blah). Is it worth those extra hours? Is it not? I’ve lost sleep over whether I made the right choice on the nights I ended up adding that last hard problem.
But in hindsight the right answer all along was to just let the students decide for themselves, because unlike your average high-school math teacher in a room of checked-out slackers, I have the best students in the world.
## 4. The morning I changed my mind
As I got a deeper database this year and commented more problems out, I started thinking about point-based problem sets. But I can tell you the exact moment when I decided to switch.
On the morning of Sunday November 5, I had a traditional problem set on my desk next to a point-based one. In both cases I had figured out how to do about half the problems required. I noticed that the way the half-full glass of water looked was quite different between them. In the first case, I was freaking out about the other half of the problems I hadn’t solved yet. In the second case, I was trying to decide which of the problems would be the most fun to do next.
Then I realized that OTIS was running on the traditional system, and what I had been doing to my students all semester! So instead of doing either problem set I began the first prototypes of the points system.
## 5. Count up
I’m worried I’ll get misinterpreted as arguing that students shouldn’t work hard. This is not really the point. If you read the specification at the beginning carefully, the number of problems the students are solving is actually roughly the same in both systems.
It might be more psychological than anything else: I want my kids to count how many problems they’ve solved, not how many problems they haven’t solved. Every problem you solve makes you better. Every problem you try and don’t solve makes you better, too. But a problem you didn’t have time to try doesn’t make you worse.
I’ll admit to being mildly pissed off at high school for having built this particular mindset into all my kids. The straight-A students sitting in calculus BC aren’t counting how many questions they’ve answered correctly when checking grades. They’re counting how many points they lost. The implicit message is that if you don’t do nearly all the questions, you’re a bad person because you didn’t try hard enough and you won’t learn anything this way and shame on you and…
That can’t possibly be correct. Imagine two calculus teachers A and B using the same textbook. Teacher A assigns 15 questions of homework a week, teacher B assigns 25 questions. All of teacher A’s students are failing by B’s standards. Fortunately, that’s not actually how the world works.
For this reason I’m glad that all the olympiad kids report their performance as “I solved problems 1,2,4,5” rather than “I missed problems 3,6”.
## 6. There are no stupid or lazy questions
The other wrong assumption I had about traditional problem sets was the bit about asking for help on problems you can’t solve. It turns out getting students to ask for help is a struggle. So one other hope with the point-based system is that if a student tries a problem, can’t solve it, and is too shy to ask, then they can switch to a different problem and read the solution later on. No need to get me involved with every single missed problem any more.
But anyways I have a hypothesis for why asking for help seems so hard (though there are probably other reasons too).
You’ve all heard the teachers who remind students to always ask questions during lectures [2], because odds are someone else has the same question. In other words: don’t be afraid to ask questions just because you’re afraid you’ll look dumb, because “there are no stupid questions”.
But I’ve rarely heard anyone say the same thing about problem sets.
As I’m writing this, I realize that this is actually the reason I’ve never been willing to go to office hours to ask my math professors for help on homework problems I’m stuck on. It’s not because I’m worried my professors will think I’m dumb. It’s because I’m worried they’ll think I didn’t try hard enough before I gave up and came to them for help, or even worse, that I just care about my grade. You’ve all heard the freshman biology TAs complain about those kids that just come and ask them to check all their pset answers one by one, or that come to argue about points they got docked, or what-have-you. I didn’t want to be that guy.
Maybe this shaming is intentional if the class you’re teaching is full of slackers that don’t work unless you crack the whip. [3] But if you are teaching a math class that’s half MOPpers, I seriously don’t think we need guilt-trips for these kids whenever they can’t solve a USAMO3.
So for all my students, here’s my version of the message: there are no stupid questions, and there are no lazy questions.
### Footnotes
1. The other reason I used traditional problem sets at first was that I wanted to force the students to at least try the harder problems. This is actually my main remaining concern about switching to point-based problem sets: you could in principle always ignore the 9-point problems at the end. I tried to compensate for this by either marking some 9’s in red, or else making it difficult to reach the goal without solving at least one 9. I’m not sure this is enough.
2. But if my question is “I zoned out for the last five minutes because I was responding to my friends on snapchat, what just happened?”, I don’t think most professors would take too kindly. So it’s not true that literally all questions are welcome in lectures.
3. As an example, the 3.091 class policies document includes FAQ such as “that sounds like a lot of work, is there a shortcut?”, “but what do I need to learn to pass the tests?”, and “but I just want to pass the tests…”. There is also an entire paragraph explaining why skipping the final exam makes you a terrible person, including reasons such as “how you do anything is how you do everything”, “students earning A’s are invited to apply as tutors/graders”, and “in college it’s up to you to take responsibility for your academic career”, and so on ad nauseam.
# An Apology for HMMT 2016
Median Putnam contestants, willing to devote one of the last Saturdays before final exams to a math test, are likely to receive an advanced degree in the sciences. It is counterproductive on many levels to leave them feeling like total idiots.
— Bruce Reznick, “Some Thoughts on Writing for the Putnam”
Last February I made a big public apology for having caused one of the biggest scoring errors in HMMT history, causing a lot of changes to the list of top individual students. Pleasantly, I got some nice emails from coaches who reminded me that most students and teams do not place highly in the tournament, and at the end of the day the most important thing is that the contestants enjoyed the tournament.
So now I decided I have to apologize for 2016, too.
The story this time is that I inadvertently sent over 100 students home having solved two or fewer problems total, out of 30 individual problems. That year, I was the problem czar for HMMT February 2016, and like many HMMT problem czars before me, had vastly underestimated the difficulty of my own problems.
I think stories like this are a lot worse than people realize; contests are supposed to be a learning experience for the students, and if a teenager shows up to Massachusetts and spends an entire Saturday feeling hopeless for the entire contest, then the flight back to California is going to feel very long. Now imagine having 100 students go through this every single February.
So today I’d like to say a bit about things I’ve picked up since then that have helped me avoid making similar mistakes. I actually think people generally realize that HMMT is too hard, but are wrong about how this should be fixed. In particular, I think the common approach (and the one I took) of “make problem 1 so easy that almost nobody gets a zero” is wrong, and I’ll explain here what I think should be done instead.
## 1. Gettable, not gimme
I think just “easy” is the wrong way to think about the beginning problems. At ARML, the problem authors use a finer distinction which I really like:
• A problem is gettable if nearly every contestant feels like they could have gotten the problem on a good day. (In particular, problems that require knowledge that not all contestants have are not gettable, even if they are easy with it.)
• A problem is a gimme if nearly every contestant actually solves the problem on the contest.
The consensus is always that the early problems should be gettable but not gimmes. You could start every contest by asking the contestant to compute the expected value of 7, but the contestants are going to notice, and it isn’t going to help anyone.
(I guess I should make the point that in order for a problem to be a “gimme”, it would have to be so easy as to be almost insulting, because high accuracy on a given problem is really only possible if the level of the problem is significantly below the level of the student. So a gimme would have to be a problem that is way easier than the level of the weakest contestant — you can see why these would be bad.)
In contrast, with a gettable problem, even though some of the contestants will miss it, they’ll often miss it for a reason like 2+3=6. This is a bit unfortunate, but it is still a lot better if the contestant goes home thinking “I made a small arithmetic error, so I have to be more careful” than “there’s no way I could have gotten this, it was hopeless”.
But that brings me to the next point:
## 2. At the IMO 33% of the problems are gettable
At the IMO, there are two easy problems (one each day), but there are only six problems. So a full one-third of the problems are gettable: we hope that most students attending the IMO can solve either IMO1 or IMO4, even though many will not solve both.
If you are writing HMMT or some similar contest, I think this means you should think about the opening in terms of the fraction 1/3, rather than problem 1. For example, at HMMT, I think the czars should strive instead to make the first three or four out of ten problems on each individual test gettable: they should be problems every contestant could solve, even though some contestants will still miss them anyways. Under the pressure of contest, students are going to make all sorts of mistakes, and so it’s important that there are multiple gettable problems. This way, every student has two or three or four real chances to solve a problem: they’ll still miss a few, but at least they feel like they could do something.
(Every year at HMMT, when we look back at the tests in hindsight, the first reflex many czars have is to look at how many people got 0’s on each test, and hope that it’s not too many. The fact that this figure is even worth looking at is in my opinion a sign that we are doing things wrong: is 1/10 any better than 0/10, if the kid solved question 1 quickly and then spent the rest of the hour staring at the other nine?)
## 3. Watch the clock
The other thing I want to say is: spend some time thinking about the entire test as a whole, rather than about each problem individually.
To drive the point home: I’m willing to bet that an HMMT individual test with 4 easy, 6 medium, and 0 hard problems could actually work, even at the top end of the scores. Each medium problem in isolation won’t distinguish the strongest students. But put six of them all together, and you get two effects (see the toy simulation after this list):
• Students will make mistakes on some of the problems, and by the central limit theorem you’ll get a curve anyways.
• Time pressure becomes significantly more important, and the strongest students will come out ahead by simply being faster.
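Here is the toy simulation promised above. The distribution of student strengths is invented purely for illustration (it is not calibrated to any real HMMT data), but it shows how six identical-difficulty problems still fan out into a curve.

```python
# Toy model: 6 medium problems, each solved independently with
# per-student probability p; a field of students with varying p
# produces a spread-out histogram of scores, with no hard problems needed.
import random
from collections import Counter

random.seed(2016)
field = [random.uniform(0.2, 0.9) for _ in range(2000)]  # made-up strengths
scores = Counter(sum(random.random() < p for _ in range(6)) for p in field)
for s in range(7):
    print(f"{s} solved: {'#' * (scores[s] // 20)}")
```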
Of course, I’ll never be able to persuade the problem czars (myself included) to not include at least one or two of those super-nice hard problems. But the point is that they’re not actually needed in situations like HMMT, when there are so many problems that it’s hard to not get a curve of scores.
One suggestion many people won’t take: if you really want to include some difficult problems that will take a while, decrease the length of the test. If you had 3 easy, 3 medium, and 1 hard problem, I bet that could work too. One hour is really not very much time.
Actually, this has been experimentally verified. On my HMMT 2016 Geometry test, nobody solved any of problems 8-10, so the test was essentially seven problems long. The gradient of scores at the top and center still ended up being okay. The only issue was that a third of the students solved zero problems, because the easy problems were either error-prone, or else were hit-or-miss (either solved quickly or not at all). Thus that’s another thing to watch out for.
# Lessons from math olympiads

In a previous post I tried to make the point that math olympiads should not be judged by their relevance to research mathematics. In doing so I failed to actually explain why I think math olympiads are a valuable experience for high schoolers, so I want to make amends here.
## 1. Summary
In high school I used to think that math contests were primarily meant to encourage contestants to study some math that is (much) more interesting than what’s typically shown in high school. While I still think this is one goal, and maybe it still is the primary goal in some people’s minds, I no longer believe this is the primary benefit.
My current belief is that there are two major benefits from math competitions:
1. To build a social network for gifted high school students with similar interests.
2. To provide a challenging experience that lets gifted students grow and develop intellectually.
I should at once disclaim that I do not claim these are the only purposes of mathematical olympiads. Indeed, mathematics is a beautiful subject and introducing competitors to this field of study is of course a great thing (in particular it was life-changing for me). But as I have said before, many alumni of math olympiads do not eventually become mathematicians, and so in my mind I would like to make the case that these alumni have gained a lot from the experience anyways.
## 2. Social experience
Now that we have email, Facebook, Art of Problem Solving, and whatnot, the math contest community is much larger and stronger than it’s ever been in the past. For the first time, it’s really possible to stay connected with other competitors throughout the entire year, rather than just seeing each other a handful of times during contest season. There are literally group chats of contestants all over the country where people talk about math problems or the solar eclipse or share funny pictures or inside jokes or everything else. In many ways, being part of the high school math contest community is a lot like having access to the peer group at a top-tier university, except four years earlier.
There’s some concern that a competitive culture is unhealthy for the contestants. I want to make a brief defense here.
I really do think that the contest community is good at being collaborative rather than competitive. You can imagine a world where the competitors think about contests in terms of trying to get a better score than the other person. [1] That would not be a good world. But I think by and large the community is good at thinking about it as just trying to maximize their own score. The score of the person next to you isn’t supposed to matter (and thinking about it doesn’t help, anyways).
Put more bluntly, on contest day, you have one job: get full marks. [2]
Because we have a culture of this shape, we now get a group of talented students all working towards the same thing, rather than against one another. That’s what makes it possible to have a self-supportive community, and what makes it possible for the contestants to really become friends with each other.
I think the strongest contestants don’t even care about the results of contests other than the few really important ones (like USAMO/IMO). It is a long-running joke that the Harvard-MIT Math Tournament is secretly just a MOP reunion, and I personally see to it that this happens every year. [3]
I’ve also heard similar sentiments about ARML:
I enjoy ARML primarily based on the social part of the contest, and many people agree with me; the highlight of ARML for some people is the long bus ride to the contest. Indeed, I think of ARML primarily as a social event, with some mathematics to make it look like the participants are actually doing something important.
(Don’t tell the parents.)
## 3. Intellectual growth
My view is that if you spend a lot of time thinking about or working on anything deep, then you will learn and grow from the experience, almost regardless of what that thing is at an object level. Take chess as an example — even though chess definitely has even fewer “real-life applications” than math, if you take anyone with a 2000+ rating I don’t think many of them would say that the time they invested into the game was wasted. [4]
Olympiad mathematics seems to be no exception to this. In fact the sheer depth and difficulty of the subject probably makes it a particularly good example. [5]
I’m now going to fill this section with a bunch of examples although I don’t claim the list is exhaustive. First, here are the ones that everyone talks about and more or less agrees on:
• Learning how to think, because, well, that’s how you solve a contest problem.
• Learning to work hard and not give up, because the contest is difficult and you will not win by accident; you need to actually go through a lot of training.
• Dual to above, learning to give up on a problem, because sometimes the problem really is too hard for you and you won’t solve it even if you spend another ten or twenty or fifty hours, and you have to learn to cut your losses. There is a balancing act here that I think really is best taught by experience, rather than the standard high-school moral cheerleading where you are supposed to “never give up” or something.
• But also learning to be humble or to ask for help, which is a really hard thing for a lot of young contestants to do.
• Learning to be patient, not only with solving problems but with the entire journey. You usually do not improve dramatically overnight.
Here are some others I also believe, but don’t hear as often.
• Learning to be independent, because odds are your high-school math teacher won’t be able to help you with USAMO problems. Training for the highest level of contests is these days almost always done more or less independently. I think having the self-motivation to do the training yourself, as well as the capacity to essentially design your own training (making judgments on what to work on, et cetera), is itself a valuable cross-domain skill. (I’m a little sad sometimes that by teaching I deprive my students of the opportunity to practice this. It is a cost.)
• Being able to work neatly, not because your parents told you to but because if you are sloppy then it will cost you points when you make small (or large) errors on IMO #1. Olympiad problems are difficult enough as is, and you do not want to let them become any harder because of your own sloppiness. (And there are definitely examples of olympiad problems which are impossible to solve if you are not organized.)
• Being able to organize and write your thoughts well, because some olympiad problems are complex and require putting together more than one lemma or idea to solve. For this to work, you need to have the skill of putting together a lot of moving parts into a single coherent argument. Bonus points here if your audience is someone you care about (as opposed to a grader), because then you have to also worry about making the presentation as clean and natural as possible.
These days, whenever I solve a problem I always take the time to write it up cleanly, because in the process of doing so I nearly always find ways that the solution can be made shorter or more elegant, or at least philosophically more natural. (I also often find my solution is wrong.) So it seems that the write-up process here is not merely about presenting the same math in different ways: the underlying math really does change. [6]
• Thinking about how to learn. For example, the Art of Problem Solving forums are often filled with questions of the form “what should I do?”. Many older users find these questions obnoxious, but I find them desirable. I think being able to spend time pondering about what makes people improve or learn well is a good trait to develop, rather than mindlessly doing one book after another.
Of course, many of the questions I referred to are poor, with no real specific direction: often they are essentially “what book should I read?” or “give me an exhaustive list of everything I should know”. But I think this is inevitable because these are people’s first attempts at understanding contest training. Just like the first difficult math contest you take often goes quite badly, the first time you try to think about learning, you will probably ask questions you will be embarrassed about in five years. My hope is that as these younger users get older and wiser, the questions and thoughts become mature as well. To this end I do not mind seeing people wobble on their first steps.
• Being honest with your own understanding, particularly of fundamentals. When watching experienced contestants, you often see people solving problems using advanced techniques like Brianchon’s theorem or the n-1 equal value principle or whatever. It’s tempting to think that if you learn the names and statements of all these advanced techniques then you’ll be able to apply them too. But the reality is that these techniques are advanced for a reason: they are hard to use without mastery of fundamentals.
This is something I definitely struggled with as a contestant: being forced to patiently learn all the fundamentals and not worry about the fancy stuff. To give an example, the 2011 JMO featured an inequality which was routine for experienced or well-trained contestants, but “almost impossible for people who either have not seen inequalities at all or just like to compile famous names in their proofs”. I was in the latter category, and tried to make up a solution using multivariable Jensen, whatever that meant. Only when I was older did I really understand what I was missing.
• Dual to the above, once you begin to master something completely you start to learn what different depths of understanding feel like, and an appreciation for just how much effort goes into developing a mastery of something.
• Being able to think about things which are not well-defined. This one often comes as a surprise to people, since math is a field which is known for its precision. But I still maintain that this is a skill contests train for.
A very simple example is a question like, “when should I use the probabilistic method?”. Yes, we know it’s good for existence questions, but can we say anything more about when we expect it to work? Well, one heuristic (not the only one) is “if a monkey could find it” — the idea that a randomly selected object “should” work. But obviously something like this can’t be subject to a (useful) formal definition that works 100% of the time, and there are plenty of contexts in which even informally this heuristic gives the wrong answer. So that’s an example of a vague and nebulous concept that’s nonetheless necessary in order to understand the probabilistic method well.
There are much more general examples one can say. What does it mean for a problem to “feel projective”? I can’t tell you a hard set of rules; you’ll have to do a bunch of examples and gain the intuition yourself. Why do I say this problem is “rigid”? Same answer. How do you tell which parts of this problem are natural, and which are artificial? How do you react if you have the feeling the problem gives you nothing to work with? How can you tell if you are making progress on a problem? Trying to figure out partial answers to these questions, even if they can’t be put in words, will go a long way in improving the mythical intuition that everyone knows is so important.
It might not be unreasonable to say that by this point we are studying philosophy, and that’s exactly what I intend. When I teach now I often make a point of referring to the “morally correct” way of thinking about things, or making a point of explaining why X should be true, rather than just providing a proof. I find this type of philosophy interesting in its own right, but that is not the main reason I incorporate it into my teaching. I teach the philosophy now because it is necessary, because you will solve fewer problems without that understanding.
## 4. I think if you don’t do well, it’s better to you
But I think the most surprising benefit of math contests is that most participants won’t win. In high school everyone tells you that if you work hard you will succeed. The USAMO is a fantastic counterexample to this. Every year, there are exactly 12 winners on the USAMO. I can promise you there are far more than 12 people who work very hard every year with the hope of doing well on the USAMO. Some people think this is discouraging, but I find it desirable.
Let me tell you a story.
Back in September of 2015, I sneaked in to the parents’ talk at Math Prize for Girls, because Zuming Feng was speaking and I wanted to hear what he had to say. (The whole talk is available on YouTube now.) The talk had a lot of different parts that I liked, but one of them struck me in particular, when he recounted something he said to one of his top students:
I really want you to work hard, but I really think if you don’t do well, if you fail, it’s better to you.
I had a hard time relating to this when I first heard it, but it makes sense if you think about it. What I’ve tried to argue is that the benefit of math contests is not that the contestant can now solve N problems on USAMO in late April, but what you gain from the entire year of practice. And so if you hold the other 363 days fixed, and then vary only the final outcome of the USAMO, which of success and failure is going to help a contestant develop more as a person?
For that reason I really like to think that the final lesson from high school olympiads is how to appreciate the entire journey, even in spite of the eventual outcome.
### Footnotes
1. I actually think this is one of the good arguments in favor of the new JMO/USAMO system introduced in 2010. Before this, it was not uncommon for participants in 9th and 10th grade to really only aim for solving one or two entry-level USAMO problems to qualify for MOP. To this end I think the mentality of “the cutoff will probably only be X, so give up on solving problem six” is sub-optimal.
2. That’s a Zuming quote.
3. Which is why I think the HMIC is actually sort of pointless from a contestant’s perspective, but it’s good logistics training for the tournament directors.
4. I could be wrong about people thinking chess is a good experience, given that I don’t actually have any serious chess experience beyond knowing how the pieces move. A cursory scan of the Internet suggests otherwise (I was surprised to find that Ben Franklin had an opinion on this), but it’s possible there are people who think chess is a waste of time, and are merely not as vocal as the people who think math contests are a waste of time.
5. Relative to what many high school students work on, not compared to research or something.
6. Privately, I think that working in math olympiads taught me way more about writing well than English class ever did; English class always felt to me like the skill of trying to sound like I was saying something substantial, even when I wasn’t.
# Some Thoughts on Olympiad Material Design
(This is a bit of a follow-up to the solution reading post last month. Spoiler warnings: USAMO 2014/6, USAMO 2012/2, TSTST 2016/4, and hints for ELMO 2013/1, IMO 2016/2.)
I want to say a little about the process which I use to design my olympiad handouts and classes these days (and thus by extension the way I personally think about problems). The short summary is that my teaching style is centered around showing connections and recurring themes between problems.
Now let me explain this in more detail.
## 1. Main ideas
Solutions to olympiad problems can look quite different from one another at a surface level, but typically they center around one or two main ideas, as I describe in my post on reading solutions. Because details are easy to work out once you have the main idea, as far as learning is concerned you can more or less throw away the details and pay most of your attention to main ideas.
Thus whenever I solve an olympiad problem, I make a deliberate effort to summarize the solution in a few sentences, such that I basically know how to do it from there. I also make a deliberate effort, whenever I write up a solution in my notes, to structure it so that my future self can see all the key ideas at a glance and thus be able to understand the general path of the solution immediately.
The example I’ve previously mentioned is USAMO 2014/6.
Example 1 (USAMO 2014, Gabriel Dospinescu)
Prove that there is a constant ${c>0}$ with the following property: If ${a, b, n}$ are positive integers such that ${\gcd(a+i, b+j)>1}$ for all ${i, j \in \{0, 1, \dots, n\}}$, then
$\displaystyle \min\{a, b\}> (cn)^n.$
If you look at any complete solution to the problem, you will see a lot of technical estimates involving ${\zeta(2)}$ and the like. But the main idea is very simple: “consider an ${N \times N}$ table of primes and note the small primes cannot adequately cover the board, since ${\sum p^{-2} < \frac{1}{2}}$”. Once you have this main idea the technical estimates are just the grunt work that you force yourself to do if you’re a contestant (and don’t do if you’re retired like me).
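If you want to convince yourself of that one-line estimate numerically, a couple of lines suffice (this is my own scratch check, not part of any official solution). The tail of the sum beyond ${N}$ contributes less than ${1/N}$, since ${\sum_{m > N} m^{-2} < 1/N}$.

```python
# Numeric check that the sum of 1/p^2 over all primes is less than 1/2.
from sympy import primerange

N = 10**6
partial = sum(1.0 / p**2 for p in primerange(2, N))
print(partial)            # ~0.4522
print(partial + 1.0 / N)  # upper bound including the tail: still < 0.5
```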
Thus the study of olympiad problems is reduced to the study of main ideas behind these problems.
## 2. Taxonomy
So how do we come up with the main ideas? Of course I won’t be able to answer this question completely, because therein lies most of the difficulty of olympiads.
But I have made some progress in this direction. It comes down to seeing how main ideas are similar to each other. I spend a lot of time trying to classify the main ideas into categories or themes, based on how similar they feel to one another. If I see one theme pop up over and over, then I can make it into a class.
I think olympiad taxonomy is severely underrated, and generally not done correctly. The status quo is that people do bucket sorts based on the particular technical details which are present in the problem. This is correlated with the main ideas, but the two do not always coincide.
An example where technical sort works okay is Euclidean geometry. Here is a simple example: harmonic bundles in projective geometry. As I explain in my book, there are a few “basic” configurations involved:
• Midpoints and parallel lines
• The Ceva / Menelaus configuration
• Harmonic quadrilateral / symmedian configuration
• Apollonian circle (right angle and bisectors)
(For a reference, see Lemmas 2, 4, 5 and Exercise 0 here.) Thus from experience, any time I see one of these pictures inside the current diagram, I think to myself that “this problem feels projective”; and if there is a way to do so I try to use harmonic bundles on it.
An example where technical sort fails is the “pigeonhole principle”. A typical problem in such a class looks something like USAMO 2012/2.
Example 2 (USAMO 2012, Gregory Galperin)
A circle is divided into congruent arcs by ${432}$ points. The points are colored in four colors such that some ${108}$ points are colored Red, some ${108}$ points are colored Green, some ${108}$ points are colored Blue, and the remaining ${108}$ points are colored Yellow. Prove that one can choose three points of each color in such a way that the four triangles formed by the chosen points of the same color are congruent.
It’s true that the official solution uses the words “pigeonhole principle” but that is not really the heart of the matter; the key idea is that you consider all possible rotations and count the number of incidences. (In any case, such calculations are better done using expected value anyways.)
Now why is taxonomy a good thing for learning and teaching? The reason is that building connections and seeing similarities is most easily done by simultaneously presenting several related problems. I’ve actually mentioned this already in a different blog post, but let me give the demonstration again.
Suppose I wrote down the following:
$\displaystyle \begin{array}{lll} A1 & B11 & C8 \\ A9 & B44 & C27 \\ A49 & B33 & C343 \\ A16 & B99 & C1 \\ A25 & B22 & C125 \end{array}$
You can tell what each of the ${A}$‘s, ${B}$‘s, ${C}$‘s have in common by looking for a few moments. But what happens if I intertwine them?
$\displaystyle \begin{array}{lllll} B11 & C27 & C343 & A1 & A9 \\ C125 & B33 & A49 & B44 & A25 \\ A16 & B99 & B22 & C8 & C1 \end{array}$
This is the same information, but now you have to work much harder to notice the association between the letters and the numbers they’re next to.
This is why, if you are an olympiad student, I strongly encourage you to keep a journal or blog of the problems you’ve done. Solving olympiad problems takes lots of time and so it’s worth it to spend at least a few minutes jotting down the main ideas. And once you have enough of these, you can start to see new connections between problems you haven’t seen before, rather than being confined to thinking about individual problems in isolation. (Additionally, it means you will never have to redo problems whose solutions you forgot — learn from my mistake here.)
## 3. Ten buckets of geometry
I want to elaborate more on geometry in general. These days, if I see a solution to a Euclidean geometry problem, then I mentally store the problem and solution into one (or more) buckets. I can even tell you what my buckets are:
1. Direct angle chasing
2. Power of a point / radical axis
3. Homothety, similar triangles, ratios
4. Recognizing some standard configuration (see Yufei for a list)
5. Doing some length calculations
6. Complex numbers
7. Barycentric coordinates
8. Inversion
9. Harmonic bundles or pole/polar and homography
10. Spiral similarity, Miquel points
which my dedicated fans probably recognize as the ten chapters of my textbook. (Problems may also fall in more than one bucket if for example they are difficult and require multiple key ideas, or if there are multiple solutions.)
Now whenever I see a new geometry problem, the diagram will often “feel” similar to problems in a certain bucket. Exactly what I mean by “feel” is hard to formalize — it’s a certain gut feeling that you pick up by doing enough examples. There are some things you can say, such as “problems which feature a central circle and feet of altitudes tend to fall in bucket 6”, or “problems which only involve incidence always fall in bucket 9”. But it seems hard to come up with an exhaustive list of hard rules that will do better than human intuition.
## 4. How do problems feel?
But as I said in my post on reading solutions, there are deeper lessons to teach than just technical details.
For examples of themes on opposite ends of the spectrum, let’s move on to combinatorics. Geometry is quite structured and so the themes in the main ideas tend to translate to specific theorems used in the solution. Combinatorics is much less structured and many of the themes I use in combinatorics cannot really be formalized. (Consequently, since everyone else seems to mostly teach technical themes, several of the combinatorics themes I teach are idiosyncratic, and to my knowledge are not taught by anyone else.)
For example, one of the unusual themes I teach is called Global. It’s about the idea that to solve a problem, you can just kind of “add up everything at once”, for example using linearity of expectation, or by double-counting, or whatever. In particular these kinds of approach ignore the “local” details of the problem. It’s hard to make this precise, so I’ll just give two recent examples.
Example 3 (ELMO 2013, Ray Li)
Let ${a_1,a_2,\dots,a_9}$ be nine real numbers, not necessarily distinct, with average ${m}$. Let ${A}$ denote the number of triples ${1 \le i < j < k \le 9}$ for which ${a_i + a_j + a_k \ge 3m}$. What is the minimum possible value of ${A}$?
Example 4 (IMO 2016)
Find all integers ${n}$ for which each cell of an ${n \times n}$ table can be filled with one of the letters ${I}$, ${M}$ and ${O}$ in such a way that:
• In each row and column, one third of the entries are ${I}$, one third are ${M}$ and one third are ${O}$; and
• in any diagonal, if the number of entries on the diagonal is a multiple of three, then one third of the entries are ${I}$, one third are ${M}$ and one third are ${O}$.
If you look at the solutions to these problems, they have the same “feeling” of adding everything up, even though the specific techniques are somewhat different (double-counting for the former, diagonals modulo ${3}$ for the latter). Nonetheless, my experience with problems similar to the former was immensely helpful for the latter, and it’s why I was able to solve the IMO problem.
## 5. Gaps
This perspective also explains why I’m relatively bad at functional equations. There are some things I can say that may be useful (see my handouts), but much of the time these are just technical tricks. (When sorting functional equations in my head, I have a bucket called “standard fare” meaning that you “just do work”; as far as I can tell this bucket is pretty useless.) I always feel stupid teaching functional equations, because I never have many good insights to offer.
Part of the reason is that functional equations often don’t have a main idea at all. Consequently it’s hard for me to do useful taxonomy on them.
Then sometimes you run into something like the windmill problem, the solution of which is fairly “novel”, not being similar to problems that come up in training. I have yet to figure out a good way to train students to be able to solve windmill-like problems.
## 6. Surprise
I’ll close by mentioning one common way I come up with a theme.
Sometimes I will run across an olympiad problem ${P}$ which I solve quickly, and think should be very easy, and yet once I start grading ${P}$ I find that the scores are much lower than I expected. Since the way I solve problems is by drawing on experience from similar previous problems, this must mean that I’ve subconsciously found a general framework to solve problems like ${P}$, which is not obvious to my students yet. So if I can put my finger on what that framework is, then I have something new to say.
The most recent example I can think of when this happened was TSTST 2016/4 which was given last June (and was also a very elegant problem, at least in my opinion).
Example 5 (TSTST 2016, Linus Hamilton)
Let ${n > 1}$ be a positive integer. Prove that we must apply the Euler ${\varphi}$ function at least ${\log_3 n}$ times before reaching ${1}$.
I solved this problem very quickly when we were drafting the TSTST exam, figuring out the solution while walking to dinner. So I was quite surprised when I looked at the scores for the problem and found out that empirically it was not that easy.
After I thought about this, I came up with a new tentative idea. You see, when doing this problem I really was thinking about “what does this ${\varphi}$ operation do?”. You can think of ${n}$ as an infinite tuple
$\displaystyle \left(\nu_2(n), \nu_3(n), \nu_5(n), \nu_7(n), \dots \right)$
of prime exponents. Then the ${\varphi}$ can be thought of as an operation which takes each nonzero component, decreases it by one, and then adds some particular vector back. For example, if ${\nu_7(n) > 0}$ then ${\nu_7}$ is decreased by one and each of ${\nu_2(n)}$ and ${\nu_3(n)}$ are increased by one. In any case, if you look at this behavior for long enough you will see that the ${\nu_2}$ coordinate is a natural way to “track time” in successive ${\varphi}$ operations; once you figure this out, getting the bound of ${\log_3 n}$ is quite natural. (Details left as exercise to reader.)
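Since the details are left as an exercise, here is at least an empirical sanity check of the statement; this is a throwaway script of mine, not anything official.

```python
# Empirical check: iterating Euler's phi from n takes at least
# log_3(n) applications to reach 1.
from math import log
from sympy import totient

def phi_steps(n):
    """Count the applications of phi needed to reach 1 from n."""
    steps = 0
    while n > 1:
        n = int(totient(n))
        steps += 1
    return steps

assert all(phi_steps(n) >= log(n, 3) for n in range(2, 3000))
print(phi_steps(3**10))  # 11 steps; powers of 3 are essentially extremal
```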
Now when I read through the solutions, I found that many of them had not really tried to think of the problem in such a “structured” way, and had tried to directly solve it by for example trying to prove ${\varphi(n) \ge n/3}$ (which is false) or something similar to this. I realized that had the students just ignored the task “prove ${n \le 3^k}$” and spent some time getting a better understanding of the ${\varphi}$ structure, they would have had a much better chance at solving the problem. Why had I known that structural thinking would be helpful? I couldn’t quite explain it, but it had something to do with the fact that the “main object” of the question was “set in stone”; there was no “degrees of freedom” in it, and it was concrete enough that I felt like I could understand it. Once I understood how multiple ${\varphi}$ operations behaved, the bit about ${\log_3 n}$ almost served as an “answer extraction” mechanism.
These thoughts led to the recent development of a class which I named Rigid, which is all about problems where the point is not to immediately try to prove what the question asks for, but to first step back and understand completely how a particular rigid structure (like the ${\varphi}$ in this problem) behaves, and to then solve the problem using this understanding.
(Ed Note: This was earlier posted under the incorrect title “On Designing Olympiad Training”. How I managed to mess that up is a long story involving some incompetence with Python scripts, but this is fixed now.)
Spoiler warnings: USAMO 2014/1, and hints for Putnam 2014 A4 and B2. You may want to work on these problems yourself before reading this post.
## 1. An Apology
At last year’s USA IMO training camp, I prepared a handout on writing/style for the students at MOP. One of the things I talked about was the “ocean-crossing point”, which for our purposes you can think of as the discrete jump from a problem being “essentially not solved” (${0+}$) to “essentially solved” (${7-}$). The name comes from a Scott Aaronson post:
Suppose your friend in Boston blindfolded you, drove you around for twenty minutes, then took the blindfold off and claimed you were now in Beijing. Yes, you do see Chinese signs and pagoda roofs, and no, you can’t immediately disprove him — but based on your knowledge of both cars and geography, isn’t it more likely you’re just in Chinatown? . . . We start in Boston, we end up in Beijing, and at no point is anything resembling an ocean ever crossed.
I then gave two examples of how to write a solution to the following example problem.
Problem 1 (USAMO 2014)
Let ${a}$, ${b}$, ${c}$, ${d}$ be real numbers such that ${b-d \ge 5}$ and all zeros ${x_1}$, ${x_2}$, ${x_3}$, and ${x_4}$ of the polynomial ${P(x)=x^4+ax^3+bx^2+cx+d}$ are real. Find the smallest value the product
$\displaystyle (x_1^2+1)(x_2^2+1)(x_3^2+1)(x_4^2+1)$
can take.
Proof: (Not-so-good write-up) Since ${x_j^2+1 = (x+i)(x-i)}$ for every ${j=1,2,3,4}$ (where ${i=\sqrt{-1}}$), we get ${\prod_{j=1}^4 (x_j^2+1) = \prod_{j=1}^4 (x_j+i)(x_j-i) = P(i)P(-i)}$ which equals to ${|P(i)|^2 = (b-d-1)^2 + (a-c)^2}$. If ${x_1 = x_2 = x_3 = x_4 = 1}$ this is ${16}$ and ${b-d = 5}$. Also, ${b-d \ge 5}$, this is ${\ge 16}$. $\Box$
Proof: (Better write-up) The answer is ${16}$. This can be achieved by taking ${x_1 = x_2 = x_3 = x_4 = 1}$, whence the product is ${2^4 = 16}$, and ${b-d = 5}$.
Now, we prove this is a lower bound. Let ${i = \sqrt{-1}}$. The key observation is that
$\displaystyle \prod_{j=1}^4 \left( x_j^2 + 1 \right) = \prod_{j=1}^4 (x_j - i)(x_j + i) = P(i)P(-i).$
Consequently, we have
$\displaystyle \begin{aligned} \left( x_1^2 + 1 \right) \left( x_2^2 + 1 \right) \left( x_3^2 + 1 \right) \left( x_4^2 + 1 \right) &= (b-d-1)^2 + (a-c)^2 \\ &\ge (5-1)^2 + 0^2 = 16. \end{aligned}$
This proves the lower bound. $\Box$
You’ll notice that it’s much easier to see the key idea in the second solution: namely,
$\displaystyle \prod_j (x_j^2+1) = P(i)P(-i) = (b-d-1)^2 + (a-c)^2$
which allows you to use the enigmatic condition ${b-d \ge 5}$.
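If you would rather not expand ${P(i)P(-i)}$ by hand, a few lines of sympy verify the identity symbolically (again, just scratch work):

```python
# Symbolic check: P(i) * P(-i) = (b-d-1)^2 + (a-c)^2 for real a, b, c, d.
from sympy import symbols, I, expand, simplify

a, b, c, d, x = symbols('a b c d x', real=True)
P = x**4 + a*x**3 + b*x**2 + c*x + d
lhs = expand(P.subs(x, I) * P.subs(x, -I))
rhs = expand((b - d - 1)**2 + (a - c)**2)
print(simplify(lhs - rhs))  # prints 0
```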
Unfortunately I have the following confession to make:
In practice, most solutions are written more like the first one than the second one.
The truth is that writing up solutions is sort of a chore that people never really want to do but have to — much like washing dishes. So most solutions won’t be written in a way that helps you learn from them. This means that when you read solutions, you should assume that the thing you really want (i.e., the ocean-crossing point) is buried somewhere amidst a haystack of other unimportant details.
## 2. Diff
But in practice even the “better write-up” I mentioned above still has too much information in it.
Suppose you were explaining how to solve this problem to a friend. You would probably not start your explanation by saying that the minimum is ${16}$, achieved by ${x_1 = x_2 = x_3 = x_4 = 1}$ — even though this is indeed a logically necessary part of the solution. Instead, the first thing you would probably tell them is to notice that
$\displaystyle \prod_{j=1}^4 \left( x_j^2 + 1 \right) = P(i)P(-i) = (b-d-1)^2 + (a-c)^2 \ge 4^2 = 16.$
In fact, if your friend has been working on the problem for more than ten minutes, this is probably the only thing you need to tell them. They probably already figured out by themselves that there was a good chance the answer would be ${2^4 = 16}$, just based on the condition ${b-d \ge 5}$. This “one-liner” is all that they need to finish the problem. You don’t need to spell out to them the rest of the details.
When you explain a problem to a friend in this way, you’re communicating just the difference: the one or two sentences such that your friend could work out the rest of the details themselves with these directions. When reading the solution yourself, you should try to extract the main idea in the same way. Olympiad problems generally have only a few main ideas in them, from which the rest of the details can be derived. So reading the solution should feel much like searching for a needle in a haystack.
## 3. Don’t Read Line by Line
In particular: you should rarely read most of the words in the solution, and you should almost never read every word of the solution.
Whenever I read solutions to problems I didn’t solve, I often read less than 10% of the words in the solution. Instead I search aggressively for the one or two sentences which tell me the key step that I couldn’t find myself. (Functional equations are the glaring exception to this rule, since in these problems there sometimes isn’t any main idea other than “stumble around randomly”, and the steps really are all about equally important. But this is rarer than you might guess.)
I think a common mistake students make is to treat the solution as a sequence of logical steps: that is, reading the solution line by line, and then verifying that each line follows from the previous ones. This seems to entirely miss the point, because not all lines are created equal, and most lines can be easily derived once you figure out the main idea.
If you find that the only way that you can understand the solution is reading it step by step, then the problem may simply be too hard for you. This is because what counts as “details” and “main ideas” are relative to the absolute difficulty of the problem. Here’s an example of what I mean: the solution to a USAMO 3/6 level geometry problem, call it ${P}$, might look as follows.
Proof: First, we prove lemma ${L_1}$. (Proof of ${L_1}$, which is USAMO 1/4 level.)
Then, we prove lemma ${L_2}$. (Proof of ${L_2}$, which is USAMO 1/4 level.)
Finally, we remark that putting together ${L_1}$ and ${L_2}$ solves the problem. $\Box$
Likely the main difficulty of ${P}$ is actually finding ${L_1}$ and ${L_2}$. So a very experienced student might think of the sub-proofs ${L_i}$ as “easy details”. But younger students might find ${L_i}$ challenging in their own right, and be unable to solve the problem even after being told what the lemmas are: which is why it is hard for them to tell that ${\{L_1, L_2\}}$ were the main ideas to begin with. In that case, the problem ${P}$ is probably way over their head.
This is also why it doesn’t make sense to read solutions to problems which you have not worked on at all — there are often details, natural steps and notation, et cetera which are obvious to you if and only if you have actually tried the problem for a little while yourself.
## 4. Reflection
The earlier sections describe how to extract the main idea of an olympiad solution. This is neat because instead of having to remember an entire solution, you only need to remember a few sentences now, and it gives you a good understanding of the solution at hand.
But this still isn’t achieving your ultimate goal in learning: you are trying to maximize your scores on future problems. Unless you are extremely fortunate, you will probably never see the exact same problem on an exam again.
So one question you should often ask is:
“How could I have thought of that?”
(Or in my case, “how could I train a student to think of this?”.)
There are probably some surface-level skills that you can pick out of this. The lowest hanging fruit is things that are technical. A small number of examples, with varying amounts of depth:
• This problem is “purely projective”, so we can take a projective transformation!
• This problem had a segment ${AB}$ with midpoint ${M}$, and a line ${\ell}$ parallel to ${AB}$, so I should consider projecting ${(AB;M\infty)}$ through a point on ${\ell}$.
• Drawing a grid of primes is the only real idea in this problem, and the rest of it is just calculations.
• This main claim is easy to guess since in some small cases, the frogs have “violating points” in a large circle.
• In this problem there are ${n}$ numbers on a circle, ${n}$ odd. The counterexamples for ${n}$ even alternate up and down, which motivates proving that no three consecutive numbers are in sorted order.
• This is a juggling problem!
(Brownie points if any contest enthusiasts can figure out which problems I’m talking about in this list!)
## 5. Learn Philosophy, not Formalism
But now I want to point out that the best answers to the above question are often not formalizable. Lists of triggers and actions are “cheap forms of understanding”, because going through a list of methods will only get you so far.
On the other hand, the un-formalizable philosophy that you can extract from reading a question is part of that legendary “intuition” that people are always talking about: you can’t describe it in words, but it’s certainly there. Maybe it would even be better if I reframed the question as:
“What does this problem feel like?”
So let’s talk about our feelings. Here is David Yang’s take on it:
Whenever you see a problem you really like, store it (and the solution) in your mind like a cherished memory . . . The point of this is that you will see problems which will remind you of that problem despite having no obvious relation. You will not be able to say concretely what the relation is, but think a lot about it and give a name to the common aspect of the two problems. Eventually, you will see new problems for which you feel like could also be described by that name.
Do this enough, and you will have a very powerful intuition that cannot be described easily concretely (and in particular, that nobody else will have).
This itself doesn’t make sense without an example, so here is an example of one philosophy I’ve developed. Here are two problems on Putnam 2014:
Problem 2 (Putnam 2014 A4)
Suppose ${X}$ is a random variable that takes on only nonnegative integer values, with ${\mathbb E[X] = 1}$, ${\mathbb E[X^2] = 2}$, and ${\mathbb E[X^3] = 5}$. Determine the smallest possible value of the probability of the event ${X=0}$.
Problem 3 (Putnam 2014 B2)
Suppose that ${f}$ is a function on the interval ${[1,3]}$ such that ${-1\le f(x)\le 1}$ for all ${x}$ and
$\displaystyle \int_1^3 f(x) \; dx=0.$
How large can ${\int_1^3 \frac{f(x)}{x} \; dx}$ be?
At a glance there seems to be nearly no connection between these problems. One of them is a combinatorics/algebra question, and the other is an integral. Moreover, if you read the official solutions or even my own write-ups, you will find very little in common joining them.
Yet it turns out that these two problems do have something in common to me, which I’ll try to describe below. My thought process in solving either question went as follows:
In both problems, I was able to quickly make a good guess as to what the optimal ${X}$/${f}$ was, and then come up with a heuristic explanation (not a proof) why that guess had to be correct, namely, “by smoothing, you should put all the weight on the left”. Let me call this optimal argument ${A}$.
That conjectured ${A}$ gave a numerical answer to the actual problem: but for both of these problems, it turns out that the numerical answer is completely uninteresting, as are the exact details of ${A}$. It should philosophically be interpreted as “this is the number that happens to pop out when you plug in the optimal choice”. And indeed that’s what both solutions feel like. These solutions don’t actually care what the exact values of ${A}$ are, they only care about the properties that made me think they were optimal in the first place.
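To make the B2 half of this concrete: if you restrict attention to two-level step functions (value ${u}$ on ${[1,t]}$ and ${v}$ on ${[t,3]}$, with ${v}$ forced by the zero-integral condition), a crude scan lands exactly on the “all the weight on the left” guess. This is just an illustrative script of mine, of course, and not a proof.

```python
# Scan two-level step functions f = u on [1,t], v on [t,3] with
# integral zero and |u|, |v| <= 1; the objective is the integral of f(x)/x.
from math import log

best_val, best_params = float("-inf"), None
for i in range(1, 200):
    t = 1 + 2 * i / 200               # breakpoint in (1, 3)
    for j in range(-100, 101):
        u = j / 100                   # level on [1, t]
        v = -u * (t - 1) / (3 - t)    # forced by the zero-integral condition
        if abs(v) > 1:
            continue
        val = u * log(t) + v * (log(3) - log(t))
        if val > best_val:
            best_val, best_params = val, (t, u, v)

print(best_params)           # (2.0, 1.0, -1.0): weight on the left
print(best_val, log(4 / 3))  # both ~0.2877
```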
I gave this philosophy the name Equality, with poster description “problems where looking at the equality case is important”. This text description feels more or less useless to me; I suppose it’s the thought that counts. But ever since I came up with this name, it has helped me solve new problems that come up, because they would give me the same feeling that these two problems did.
Two more examples of these themes that I’ve come up with are Global and Rigid, which will be described in a future post on how I design training materials.
# Against the “Research vs. Olympiads” Mantra
There’s a Mantra that you often hear in math contest discussions: “math olympiads are very different from math research”. (For known instances, see O’Neil, Tao, and more. More neutral stances: Monks, Xu.)
It’s true. And I wish people would stop saying it.
Every time I’ve heard the Mantra, it set off a little red siren in my head: something felt wrong. And I could never figure out quite why until last July. There was some (silly) forum discussion about how Allen Liu had done extraordinarily on math contests over the past year. Then someone says:
A: Darn, what math problem can he not do?!
B: I’ll go out on a limb and say that the answer to this is “most of the problems worth asking.” We’ll see where this stands in two years, at which point the answer will almost certainly change, but research $\neq$ Olympiads.
Then it hit me.
## Ping-pong vs. Tennis
Let’s try the following thought experiment. Consider a world-class ping-pong player, call her Sarah. She has a fan-base talking about her pr0 ping-pong skills. Then someone comes along and says:
Well, table tennis isn’t the same as tennis.
To which I and everyone else reasonable would say, “uh, so what?”. It’s true, but totally irrelevant; ping-pong and tennis are just not related. Maybe Sarah will be better than average at tennis, but there’s no reason to expect her to be world-class in that too.
And yet we say exactly the same thing for olympiads versus research. Someone wins the IMO, out pops the Mantra. Even if the Mantra is true when taken literally, it’s implicitly sending the message that there’s something wrong with being good at contests and not good at research.
So now I ask: just what is wrong with that? To answer this question, I first need to answer: “what is math?”.
There’s been a trick played with this debate, and you can’t see it unless you taboo the word “math”. The word “math” can refer to a bunch of things, like:
• Training for contest problems like USAMO/IMO, or
• Working on open problems and conjectures (“research”).
So here’s the trick. The research community managed to claim the name “math”, leaving only “math contests” for the olympiad community. Now the sentence
“Math contests should be relevant to math”
seems totally innocuous. But taboo the word “math”, and you get
“Olympiads should be relevant to research”
and then you notice something’s wrong. In other words, since “math” is a substring of “math contests”, it suddenly seems like the olympiads are subordinate to research. All because of an accident in naming.
Since when? Everyone agrees that olympiads and research are different things, but it does not then follow that “olympiads are useless”. Even if ping-pong is called “table tennis”, that doesn’t mean the top ping-pong players are somehow inferior to top tennis players. (And the scary thing is that in a world without the name “ping-pong”, I can imagine some people actually thinking so.)
I think for many students, olympiads do a lot of good, independent of any value to future math research. Math olympiads give high school students something interesting to work on, and even the training process for a contest such as the IMO carries valuable life lessons: it teaches you how to work hard even in the face of possible failure, and what it’s like to be competitive at an international level (i.e. what it’s like to become really good at something after years of hard work). The peer group that math contests give is also wonderful, and quite similar to the kind of people you’d meet at a top-tier university (and in some cases, they’re more or less the same people). And the problem solving ability you gain from math contests is indisputably helpful elsewhere in life. Consequently, I’m well on record as saying the biggest benefits of math contests have nothing to do with math.
There are also more mundane (but valid) reasons (they help get students out of the classroom, and other standard blurbs about STEM and so on). And as a matter of taste I also think contest problems are interesting and beautiful in their own right. You could even try to make more direct comparisons (for example, I’d guess the average arXiv paper in algebraic geometry gets less attention than the average IMO geometry problem), but that’s a point for another blog post entirely.
## The Right and Virtuous Path
Which now leads me to what I think is a culture issue.
MOP alumni prior to maybe 2010 or so were classified into two groups. They would either go on to math research, which was somehow seen as the “right and virtuous path”, or they would defect to software/finance/applied math/etc. Somehow there is always this implicit, unspoken message that the smart MOPpers do math research and the dumb MOPpers drop out.
I’ll tell you how I realized why I didn’t like the Mantra: it’s because the only time I hear the Mantra is when someone is belittling olympiad medalists.
The Mantra says that the USA winning the IMO is no big deal. The Mantra says Allen Liu isn’t part of the “smart club” until he succeeds in research too. The Mantra says that the countless time and energy put into running each year’s MOP are a waste of time. The Mantra says that the students who eventually drop out of math research are “not actually good at math” and “just good at taking tests”. The Mantra even tells outsiders that they, too, can be great researchers, because olympiads are useless anyways.
The Mantra is math research’s recruiting slogan.
And I think this is harmful. The purpose of olympiads was never to produce more math researchers. If it’s really the case that olympiads and research are totally different, then we should expect relatively few olympiad students to go into research; yet in practice, a lot of them do. I think one could make a case that a lot of the past olympiad students are going into math research without realizing that they’re getting into something totally unrelated, just because the sign at the door said “math”. One could also make a case that it’s very harmful for those that don’t do research, or try research and then decide they don’t like it: suddenly these students don’t think they’re “good at math” any more, they’re not smart enough to be a mathematician, etc.
But we need this kind of problem-solving skill and talent too much for it to all be spent on computing R(6,6). Richard Rusczyk’s take from Math Prize for Girls 2014 is:
When people ask me, am I disappointed when my students don’t go off and be mathematicians, my answer is I’d be very disappointed if they all did. We need people who can think about these complex problems and solve really hard problems they haven’t seen before everywhere. It’s not just in math, it’s not just in the sciences, it’s not just in medicine — I mean, what we’d give to get some of them in Congress!
Academia is a fine career, but there’s tons of other options out there: the research community may denounce those who switch out as failures, but I’m sure society will take them with open arms.
To close, I really like this (sarcastic) comment from Steven Karp (near bottom):
Contest math is inaccessible to over 90% of people as it is, and then we’re supposed to tell those that get it that even that isn’t real math? While we’re at it, let’s tell Vi Hart to stop making videos because they don’t accurately represent math research.
Thanks first of all for the many long and thoughtful comments from everyone (here, on Facebook, in private, and so on). It’s given me a lot to think about.
Here’s my responses to some of the points that were raised, which is necessarily incomplete because of the volume of discussion.
1. To start off, it was suggested I should explicitly clarify: I do not mean to imply that people who didn’t do well on contests cannot do well in math research. So let me say that now.
2. My favorite comment that I got was that in fact this whole post pattern matches with bravery debates.
On one hand you have lots of olympiad students who actually FEEL BAD about winning medals because they “weren’t doing real math”. But on the other hand there are students whose parents tell them to not pursue math as a major or career because of low contest scores. These students (and their parents) would benefit a lot from the Mantra; so I concede that there are indeed good use cases of the Mantra (such as those that Anonymous Chicken, betaveros describe below) and in particular the Mantra is not intrinsically bad.
Which of these uses is the “common use” probably depends on which tribes you are part of (guess which one I see more?). It’s interesting that in this case, the two sides actually agree on the basic fact (that contests and research are not so correlated).
3. Some people point out that research is a career while contests aren’t. I am not convinced by this; I don’t think “is a career” is a good metric for measuring value to society, and can think of several examples of actual jobs that I think really should not exist (not saying any names). In addition, I think that if the general public understood what mathematicians actually do for a career, they just might be a little less willing to pay us.
I think there’s an interesting discussion about whether contests / research are “valuable” or not, but I don’t think the answer is one-sided; this would warrant a whole different debate (and would derail the entire post if I tried to address it).
4. Some people point out that training for olympiads yields diminishing returns (e.g. learning Muirhead and Schur is probably not useful for anything else). I guess this is true, but isn’t it true of almost anything? Maybe the point is supposed to be “olympiads aren’t everything”, which is agreeable (see below).
5. The other favorite comment I got was from Another Chicken, who points out below that the olympiad tribe itself is elitist: they tend to wall themselves off from outsiders (I certainly do this), and undervalue anything that isn’t hard technical problems.
I concede these are real problems with the olympiad community. Again, this could be a whole different blog post.
But I think this comment missed the point of this post. It is probably fine (albeit patronizing) to encourage olympiad students to expand; but I have a big problem with framing it as “spend time on not-contests because research”. That’s the real issue with the Mantra: it is often used as a recruitment slogan, telling students that research is the next true test after the IMO has been conquered.
Changing the Golden Metric from olympiads to research seems to just make the world more egotistic than it already is.
# Against Hook-Length on USAMO 2016/2
A recent USAMO problem asked the contestant to prove that
$\displaystyle (k^2)! \cdot \prod_{j=0}^{k-1} \frac{j!}{(j+k)!}$
is an integer for every ${k \in \mathbb N}$. Unfortunately, it appears that this is a special case of the so-called hook-length formula, applied to a ${k \times k}$ Young tableau, and several students appealed to this fact without proof to produce one-line solutions lacking any substance. This has led to a major controversy about how such solutions should be graded, in particular whether they should receive the ${7^-}$ treatment for “essentially correct solutions”, or the ${0^+}$ treatment for “essentially not solved”.
In this post I want to argue that I think that these solutions deserve a score of ${1}$.
## 1. Disclaimers
However, before I do so, I would like to make some disclaimers:
• This issue is apparently extremely polarized: everyone seems to strongly believe one side or the other.
• This was an extremely poor choice of a USAMO problem, and so there is no “good” way to grade the HL solutions, only a least bad way. The correct solution to the dilemma is to not have used the problem at all. Yet here’s the bloodied patient, and here we are in the emergency room.
• While I am a grader for the USAMO, I am one of many graders, and what I say in this post does not necessarily reflect the viewpoints of other USAMO graders or what score the problem will actually receive. In other words, this is my own view and not official in any way.
One last remark is that I do not consider the hook-length formula to be a “well-known” result like so many contestants seem to want to pretend it is. However, this raises the slippery question of what constitutes “well-known” at all. So in what follows I’ll pretend that the HL formula is about as well-known as, say, the Pascal or Zsigmondy theorem, even though I personally don’t think that this is the case.
One final disclaimer: I am unlikely to respond to further comments about an issue this polarized, since I have already spent many hours doing so, and I’ve done enough of my duty. So if I don’t respond to a comment of yours, please don’t take it personally.
## 2. Rule for citations
Here is the policy I use for citations when grading:
• You can cite any named result as long as it does not trivialize the problem.
• If the result trivializes the problem, then you are required to prove the result (or otherwise demonstrate you understand the proof) in order to use it.
This is what I’ve heard every time I have asked or answered this question; I have never heard anything to the contrary.
Some people apparently want to nit-pick about how “trivialize” is not objective. I think this is silly. If you really want a definition of “trivialize”, you can take “equivalent to the problem or a generalization of the problem” as a rule of thumb.
Clearly it follows from my rule above that the hook-length formula deserves ${0^+}$ grading, so the remainder of the post is dedicated to justifying why I think this is the correct rule.
## 3. Grading is subjective

I would rather have an accurate subjective criterion than a poor objective one.
In an ideal world, grading would be completely objective: a solution which solves the problem earns ${7^-}$ points and a solution which does not solve the problem earns ${0^+}$ points. But in practice this is of course not possible, unless we expect our contestants to write their solutions in a formal language like Coq. Since this is totally infeasible, we instead use informal proofs: students write their solutions in English and human graders assign a score based on whether the solution could in principle be compiled into a formal language proof.
What this means is that in grading, there are subjective decisions made all of the time. A good example of this is omitting steps. Suppose a student writes “case ${B}$ is similar [to case ${A}$]”. Then the grader has to decide whether the student actually knows that the other case is similar, or is just bluffing. On one extreme, if ${A}$ and ${B}$ really are identical, then the grader would probably accept the claim. On the other extreme, if ${A}$ and ${B}$ have some substantial differences, then the grader would almost certainly reject the claim. This implies that there is some intermediate “gray area” which cannot be objectively defined.
Fortunately, most of these decisions are clear, thus USAMO grading appears externally to be mostly objective. In other words, accurate grading and objective grading are correlated (yay math!), but they are not exactly the same. All scores are ultimately at the discretion of human graders, not determined by some set of rigid guidelines.
Citations are a special case of this. By citing a theorem, a student is taking advantage of the convention that a proof well-known to both the student and the grader can be omitted from the write-up.
## 4. Citing the problem
In light of this I don’t think you should get points for citing the problem. I think we all agree that if a student writes “this is a special case of IMO Shortlist 1999 G8”, they shouldn’t get very many points.
The issue with citing HL in lieu of solving the problem is that the hook-length formula is very hard to prove, and it is not reasonable to do so in an olympiad setting. Consequently, it is near certain that these students have essentially zero understanding of the solution to the problem; they have not solved it, they have only named it.
Similarly, I think you should not earn points for trivializing a problem using Dirichlet, Prime Number Theorem, Zsigmondy, etc. This is also historically consistent with the way that grading has been done in the past (hi Palmer!).
## 5. Citing intermediate steps
Now consider the usage of difficult theorems such as Dirichlet, Prime Number Theorem, etc. on a solution in which they are merely an intermediate step rather than the entire problem. The consensus here is that citing these results is okay (though it is not unanimous; some super harsh graders also want to take off points here, but they are very few in my experience).
I think this is acceptable, because in this case the contestant has done 90% of the solution; they cannot do the remaining 10%, but they recognize it as well-known. In my eyes this counts as essentially solving the problem, because the last missing bit is a standard fact. But if the part you outsource to a citation is 100% of the problem, I don’t think that counts as solving the problem.
What I mean is there is a subjective dependence both on how much of the solution the student actually understands, and how accessible the result is. Unfortunately, the HL solutions earn the worst possible rank in both of the categories: the student understands 0% of the solution and moreover the result is completely inaccessible.
## 6. Common complaints
Here are the various complaints that people have made to me.
• “HL is well-known.”
Well, I don’t think it is, but in any case that’s not the point; you cannot cite a result which is (a) more general than the problem, and (b) whose proof you do not understand.
• “‘Trivialize’ is subjective.”

So what? I would rather have an accurate subjective criterion than a poor objective one.
• “It’s the problem writer’s fault, so students should get ${7}$.”
This is not an ethics issue. It is not appropriate to award points of sympathy on a serious contest such as the USAMO; score inflation is not going to help anyone.
• “It’s elitist for the graders to decide what counts as trivialized.”
That’s the grader’s job. Again I would rather have an accurate subjective criterion than a poor objective one. In practice I think the graders are very reasonable about decisions.
• “I don’t think anyone disputes that the HL solution is a correct one, so certainly the math dictates a ${7^-}$.”
I dispute it: I don’t think citing HL is a solution at all.
• “Why do we let students use Pascal / Cauchy / etc?”
Because these results are much more reasonable to prove, and the “one-line” solutions using Pascal and Cauchy are not completely trivial; the length of a solution is not the same thing as its difficulty. Of course this is ultimately subjective, but I would rather have an accurate subjective criterion than a poor objective one.
• “HL solutions have substance, because we made an observation that the quantity is actually the number of ways to do something.”
That’s why I wish to award ${1}$ instead of ${0}$.
• “Your rule isn’t written anywhere.”
Unfortunately, none of the rules governing USAMO are written anywhere. I agree this is bad and the USAMO should publish some more clear description of the grading mechanics.
• “The proof of the HLF isn’t even that complicated.”
Are you joking me?
In summary, I don’t think it is appropriate to give full marks to a student who cannot solve a problem just because they can name it.
# Against Perfect Scores
One of the pieces of advice I constantly give to young students preparing for math contests is that they should probably do harder problems. But perhaps I don’t preach this zealously enough for them to listen, so here’s a concrete reason (with actual math!) why I give this advice.
## 1. The AIME and USAMO
In the USA many students who seriously prepare for math contests eventually qualify for an exam called the AIME (American Invitational Math Exam). This is a 3-hour exam with 15 short-answer problems; the median score is maybe about 5 problems.
Correctly solving maybe 10 of the problems qualifies for the much more difficult USAMO. This national olympiad is much more daunting, with six proof-based problems given over nine hours. It is not uncommon for olympiad contestants to not solve a single problem (this certainly happened to me a fair share of times!).
You’ll notice the stark difference in the scale of these contests (Tanya Khovanova has a longer complaint about this here). For students who are qualifying for USAMO for the first time, the olympiad is terrifying: I certainly remember the first time I took the olympiad with a super lofty goal of solving any problem.
Now, my personal opinion is that the difference between AIME and USAMO is generally exaggerated, and less drastic than appearances suggest. But even then, the psychological fear is still there — so what do you think happens to this demographic of students?
Answer: they don’t move on from AIME training. They think, “oh, the USAMO is too hard, I can only solve 10 problems on the AIME so I should stick to solving hard problems on the AIME until I can comfortably solve most of them”. So they keep on working through old AIME papers.
## 2. Perfect Scores
To understand why this is a bad idea, let’s ask the following question: how good do you have to be to consistently get a perfect score on the AIME?
Consider first a student who averages a score of ${10}$ on the AIME, which is a fairly comfortable qualifying score. For illustration, let’s crudely simplify and assume that on a 15-question exam, he has an independent ${\frac23}$ probability of getting each question right. Then the chance he sweeps the AIME is
$\displaystyle \left( \frac23 \right)^{15} \approx 0.228\%.$
This is pretty low, which makes sense: ${10}$ and ${15}$ on the AIME feel like quite different scores.
Now suppose we bump that up to averaging ${12}$ problems on the AIME, which is almost certainly enough to qualify for the USAMO. This time, the chance of sweeping is
$\displaystyle \left( \frac{4}{5} \right)^{15} \approx 3.52\%.$
This should feel kind of low to you as well. So if you consistently solve ${80\%}$ of problems in training, your chance at netting a perfect score is still dismal, even though on average you’re only three problems away.
Well, that’s annoying, so let’s push this as far as we can: consider a student who’s averaging ${14}$ problems (thus, ${93\%}$ success), id est a near-perfect score. Then the probability of getting a perfect score is
$\displaystyle \left( \frac{14}{15} \right)^{15} \approx 35.5\%.$
Which is... just over ${\frac 13}$.
At which point you throw up your hands and say, what more could you ask for? I’m already averaging one less than a perfect score, and I still don’t have a good chance of acing the exam? This should feel very unfair: on average you’re only one problem away from full marks, and yet doing one problem better than normal is still a splotchy hit-or-miss.
## 3. Some Combinatorics
Those of you who know statistics or combinatorics might be able to see what’s going on now. The problem is that
$\displaystyle (1-\varepsilon)^{15} \approx 1 - 15\varepsilon$
for small ${\varepsilon}$. That is, if your accuracy is even a little ${\varepsilon}$ away from perfect, that difference gets amplified by a factor of ${15}$ against you.
Below is a nice chart that shows you, based on this oversimplified naïve model, how likely you are to do a little better than your average.
$\displaystyle \begin{array}{lrrrrrr} \textbf{Avg} & \ge 10 & \ge 11 & \ge 12 & \ge 13 & \ge 14 & \ge 15 \\ \hline \mathbf{10} & 61.84\% & 40.41\% & 20.92\% & 7.94\% & 1.94\% & 0.23\% \\ \mathbf{11} & & 63.04\% & 40.27\% & 19.40\% & 6.16\% & 0.95\% \\ \mathbf{12} & & & 64.82\% & 39.80\% & 16.71\% & 3.52\% \\ \mathbf{13} & & & & 67.71\% & 38.66\% & 11.69\% \\ \mathbf{14} & & & & & 73.59\% & 35.53\% \\ \mathbf{15} & & & & & & 100.00\% \\ \end{array}$
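If you want to verify these numbers, the chart can be regenerated from the naive binomial model in a few lines. This is my own illustration in Python, not part of the original post:

from math import comb

def prob_at_least(avg, k, n=15):
    # P(score >= k) when each of the n questions is solved independently
    # with probability avg/n -- the crude model used in this post.
    p = avg / n
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

print(f"{prob_at_least(10, 15):.2%}")  # 0.23%, i.e. (2/3)^15
print(f"{prob_at_least(12, 15):.2%}")  # 3.52%, i.e. (4/5)^15
print(f"{prob_at_least(14, 15):.2%}")  # 35.53%, i.e. (14/15)^15
print(f"{prob_at_least(12, 13):.2%}")  # 39.80%, "one better than average"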
Even if you’re not aiming for that lofty perfect score, we see the same repulsion effect: it’s quite hard to do even a little better than average. If you get an average score of ${k}$, the probability of getting at least ${k+1}$ looks to be about ${\frac25}$. As for ${k+2}$ the chances are even more dismal. In fact, merely staying afloat (getting at least your average score) isn’t a comfortable proposition.
And this is in my simplified model of “independent events”. Those of you who actually take the AIME know just how costly small arithmetic errors are, and just how steep the difficulty curve on this exam is.
All of this goes to show: to reliably and consistently ace the AIME, it’s not enough to be able to do 95% of AIME problems (which is already quite a feat). You almost need to be able to solve AIME problems in your sleep. On any given AIME some people will get luckier than others, but coming out with a perfect score every time is a huge undertaking.
## 4. 90% Confidence?
By the way, did I ever mention that it’s really hard to be 90% confident in something? In most contexts, 90% is a really big number.
If you don’t know what I’m talking about:
This is also the first page of this worksheet. The idea of this quiz is to give you a sense of just how high 90% is. To do this, you are asked 10 numerical questions and must provide an interval within which you think the answer lies with probability 90%. (So ideally, you would get exactly 9 intervals correct.)
As a hint: almost everyone is overconfident. Second hint: almost everyone is overconfident even after being told that their intervals should be embarrassingly wide. Third hint: I just tried this again and got a low score.
(For more fun of this form: calibration game.)
## 5. Practice
So what do you do if you really want to get a perfect score on the AIME?
Well, first of all, my advice is that you have better things to do (like USAMO). But even if you are unshakeable in your desire to get a 15, my advice still remains the same: do some USAMO problems.
Why? The reason is that going from average ${14}$ to average ${15}$ means going from 95% accuracy to 99% accuracy, as I’ve discussed above.
So what you don’t want to do is keep doing AIME problems. You are not using your time well if you get 95% accuracy in training. I’m well on record saying that you learn the most from problems that are just a little above your ability level, and massing AIME problems is basically the exact opposite of that. You’d maybe only run into a problem you couldn’t solve once every 10 or 20 or 30 problems. That’s just grossly inefficient.
The way out of this is to do harder problems, and that’s why I explicitly suggest people start working on USAMO problems even before they’re 90% confident they will qualify for it. At the very least, you certainly won’t be bored.
# Stop Paying Me Per Hour
Occasionally I am approached by parents who ask me if I am available to teach their child in olympiad math. This is flattering enough that I’ve even said yes a few times, but I’m always confused why the question is “can you tutor my child?” instead of “do you think tutoring would help, and if so, can you tutor my child?”.
Here are my thoughts on the latter question.
## Charging by Salt
I’m going to start by clearing up the big misconception which inspired the title of this post.
The way tutoring works is very roughly like the following: I meet with the student once every week, with custom-made materials. Then I give them some practice problems to work on (“homework”), which I also grade. I throw in some mock olympiads. I strongly encourage my students to email me with questions as they come up. Rinse and repeat.
The actual logistics vary; for example, for small in-person groups I prefer to do every other week for 3 hours. But the thing that never changes is how the parents pay me. It’s always the same: I get $N \gg 0$ dollars per hour for the actual in-person meeting, and $0$ dollars per hour for preparing materials, grading homework, responding to questions, and writing the mock olympiads.
Now I’m not complaining because $N$ is embarrassingly large. But one day I realized that this pricing system is giving parents the wrong impression. They now think the bulk of the work is from me taking the time to meet with their child, and that the homework is to help reinforce what I talk about in class. After all, this is what high school does, right?
I’m here to tell you that this is completely wrong.
It’s the other way around: the class is meant to supplement the homework. Think salt: for most dishes you can’t get away with having zero salt, but you still don’t price a dish based on how much salt is in it. Similarly, you can’t excise the in-person meeting altogether, but the dirty secret is that the classtime isn’t the core component.
I mean, here’s the thing.
• When you take the IMO, you are alone with a sheet of paper that says “Problem 1”, “Problem 2”, “Problem 3”.
• When you do my homework, you are alone with a sheet of paper that says “Problem 1”, “Problem 2”, “Problem 3”.
• When you’re in my class, you get to see my beautiful smiling face plus a sheet of paper that says “Theorem 1”, “Example 2”, “Example 3”.
Which of these is not like the other?
## Active Ingredients
So we’ve established that the main active ingredient is actually you working on problems alone in your room. If so, why do you need a teacher at all?
The answer depends on what the word “need” means. No USA IMO contestant in my recent memory has had a coach, so you don’t need a coach. But there are some good reasons why one might be helpful.
Some obvious reasons are social:
• Forces you to work regularly; though most top students don’t really have a problem with self-motivation
• You have a person to talk to. This can be nice if you are relatively isolated from the rest of the math community (e.g. due to geography).
• You have someone who will answer your questions. (I can’t tell you how jealous I am right now.)
• Feedback on solutions to problems. This includes the student’s written solutions (stylistic remarks, or things like “this lemma you proved in your solution is actually just a special case of X” and so on) as well as explaining solutions to problems the student fails to solve.
In short, it’s much more engaging to study math with a real person.
Those reasons don’t depend so much on the instructor’s actual ability. Here are some reasons which do:
• Guidance. An instructor can tell you what things to learn or work on based on their own experience in the past, and can often point you to things that you didn’t know existed.
• It’s a big plus if the instructor has a good taste in problems. Some problems are bad and don’t teach you anything; some (old) problems don’t resemble the flavor of problems that actually appear on olympiads. On the flip side, some problems are very instructive or very pretty, and it’s great if your teacher knows what these are.
• Ideally, also a good taste in topics. For example, I strongly object to classes titled “collinearity and concurrence” because this may as well be called “geometry”, and I think that such global classes are too broad to do anything useful. Conversely, examples of topics I think should be classes but aren’t: “looking at equality cases”, “explicit constructions”, “Hall’s marriage theorem”, “greedy algorithms”. I make this point a lot more explicitly in Section 2 of this blog post of mine.
In short, you’re also paying for the material and expertise. Past IMO medalists know how the contest scene works. Parents and (beginning) students less so.
Lastly, the reason which I personally think is most important:
• Conveys strong intuition/heuristics, both globally and for specific problems. It’s hard to give concrete examples of this, but a few global ones I know were particularly helpful for me: “look at maximal things” (Po-Shen Loh on greedy algorithms), “DURR WE WANT STUFF TO CANCEL” (David Yang on FE’s), “use obvious inequalities” (Gabriel Dospinescu on analytic NT), which are take-aways that have gotten me a lot of points. This is also my biggest criteria for evaluating my own written exposition.
You guys know this feeling, I’m sure: when your English teacher assigned you a passage to read, the fastest way to understand it is to not read the passage but to ask the person sitting next to you what it’s saying. I think this is in part because most people are awful at writing and don’t even know how to write for other human beings.
The situation in olympiads is the same. I estimate listening to me explain a solution is maybe 4 to 10 times faster than reading the official solution. Turns out that writing up official solutions for contests is a huge chore, so most people just throw a sequence of steps at the reader without even bothering to identify the main ideas. (As a contest organizer, I’m certainly guilty of this laziness too!)
Aside: I think this is a large part of why my olympiad handouts and other writings have been so well-received. Disclaimer: this was supposed to be a list of what makes a good instructor, but due to narcissism it ended up being a list of things I focus on when teaching.
## Caveat Emptor
And now I explain why the top IMO candidates still got by without teachers.
It turns out that the amount of math preparation time that students put in doesn’t seem to be a normal distribution. It’s a log normal distribution. And the reason is this: it’s hard to do a really good job on anything you don’t think about in the shower.
Officially, when I was a contestant I spent maybe 20 hours a week doing math contest preparation. But the actual amount of time is higher. The reason is that I would think about math contests more like 24/7. During English class, I would often be daydreaming about the inequality I worked on last night. On the car ride home, I would idly think about what I was going to teach my middle school students the next week. To say nothing of showers: during my showers I would draw geometry diagrams on the wall with water on my finger.
So spiritually, I maybe spent 10 times as much time on math olympiads compared to an average USA(J)MO qualifier.
And that factor of 10 is enormous. Even if I as a coach can cause you to learn two or three or four times more efficiently, you will still lose to that factor of 10. I’d guess my actual multiplier is somewhere between 2 and 3, so there you go. (Edit: this used to say 3 to 4, I think that’s too high now.)
The best I can do is hope that, in addition to making my student’s training more efficient, I also cause my students to like math more. |
https://am111.readthedocs.io/en/latest/jmatlab_use.html | Use MATLAB in Jupyter Notebooks¶
Jupyter Notebook is a great tool for interactive computing. It allows you to combine code, simulation results, and descriptions such as LaTeX equations in a single file. It works for many languages including MATLAB, the choice of this class.
For installation, see Install Jupyter-MATLAB.
Jupyter basics¶
The most commonly used Jupyter commands are
• enter – (in command mode) enter edit mode
• shift+enter – (in edit mode) execute current cell
• esc – (in edit mode) enter command mode, so you can use arrow keys to move to other cells
• b – (in command mode) insert empty cell below
• x – (in command mode) cut current cell
• v – (in command mode) paste the cell you’ve cut
• esc+m/y – change current code cell to markdown cell / reverse
For all commands see “Help” - “Keyboard shortcuts” in the toolbar.
Printing formats¶
The default output format is “loose”, which takes a lot of space.
In [1]:
format loose
for i=1:2
i+1
end
ans =
2
ans =
3
“compact” is a better option for notebook.
In [2]:
format compact
for i=1:2
i+1
end
ans =
2
ans =
3
Use help functions¶
“help” will print docs inside the notebook, same as Python’s help( )
In [3]:
help sin
SIN Sine of argument in radians.
SIN(X) is the sine of the elements of X.
Reference page in Doc Center
doc sin
Other functions named sin
codistributed/sin gpuArray/sin sym/sin
“?” will prompt a small text window, same as IPython magic “?”. (not shown on the webpage)
In [4]:
?sin
“doc” will prompt MATLAB’s detailed documentations. (not shown on the webpage)
In [5]:
doc sin
Plotting¶
Make a cool surface for plotting :)
In [6]:
tx = linspace (-8, 8, 41);
ty = tx;
[xx, yy] = meshgrid (tx, ty);
r = sqrt (xx .^ 2 + yy .^ 2) + eps;
tz = sin (r) ./ r;
The “%plot inline” magic (default) will plot inside the notebook, same as “%matplotlib inline” in IPython.
In [7]:
%plot inline
mesh(tx, ty, tz);
The “%plot native” magic will plot in an external window as the original MATLAB’s interface, which allows you to rotate, zoom in/out (not shown on the webpage).
In [8]:
%plot native
mesh(tx, ty, tz);
You can still use “close all” to close the window that was opened by the cell above.
In [9]:
close all
“?%plot” will show more plotting options including how to control the figure size (not shown on the webpage)
In [10]:
?%plot
User-defined functions¶
For Python programmers it is so common to define a custom function inside a notebook and reuse it over and over again.
A ridiculous design decision of MATLAB is that a function has to be in a separate file, with the function name being the file name. Local functions have been allowed since R2016b, but they have many restrictions and don’t work in either Jupyter Notebook or MATLAB’s own Live Script.
Inline functions¶
By default, MATLAB only allows inline functions within a script.
In [11]:
f=@(x) x^3+x-1;
We can easily find the root of such a function.
In [12]:
fzero(f,[0 1],optimset('Display','iter'))
Func-count x f(x) Procedure
2 1 1 initial
3 0.5 -0.375 bisection
4 0.636364 -0.105935 interpolation
5 0.68491 0.00620153 interpolation
6 0.682225 -0.000246683 interpolation
7 0.682328 -5.43508e-07 interpolation
8 0.682328 1.50102e-13 interpolation
9 0.682328 0 interpolation
Zero found in the interval [0, 1]
ans =
0.6823
Standard functions¶
But inline functions may only contain a single statement, which is too limited in most cases.
If you try to define a standard function, it will fail:
In [13]:
function p = multi_line_func(a,b)
a = a+1;
b = b+1;
p = a+b;
end
Error: Function definitions are not permitted in this context.
Fortunately, Jupyter’s “%%file” magic allows us to write a code cell to a file.
In [14]:
%%file multi_line_func.m
function p = multi_line_func(a,b)
a = a+1;
b = b+1;
p = a+b;
end
Created file '/Users/zhuangjw/Research/Computing/personal_web/matlab_code/multi_line_func.m'.
The output file and this Notebook will be in the same directory, so you can call it directly, as if this function is defined inside the notebook.
In [15]:
multi_line_func(1,1)
ans =
4
By doing this, you get Python-like working environment – create a function, test it with several input parameters, go back to edit the function and test it again. This REPL workflow will greatly speed-up your prototyping.
It might take 1~2 seconds for a function cell to take effect, because we are writing files to disk. But you don’t need to restart the kernel to activate any modifications to your function.
warning: you should avoid adding a MATLAB comment (start with %) at the beginning of a cell, because it might be interpreted as Jupyter magic and thus confuse the kernel.
Markdown cells¶
Markdown cells are a great way to add descriptions to your codes. Here are examples stolen from the official document. See Jupyter notebook’s document for details.
Latex equations¶
How to write an inline eqution: $e^{i\pi} + 1 = 0$
Result: $$e^{i\pi} + 1 = 0$$
How to write a standalone equation:
$$e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i$$
Result:
$e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i$
Tables¶
How to make a table:
| This | is |
|------|------|
| a | table|
Result:
This is
a table
Not-executing codes¶
You can put your codes inside a markdown cell, to only show the codes without executing them.
Here’s the way to get syntax highlighting for python codes:
python
python codes
“MATLAB” is not a highlighting option, but you can use “OCTAVE”, an open-source clone of MATLAB, to get the same effect.
OCTAVE
disp("Hello World")
for i=1:2
i+1
end
Result:
disp("Hello World")
for i=1:2
i+1
end
# Heading 1 |
https://fermatslibrary.com/s/experimental-self-plotting-of-trajectories |
Richard Sutton (1900-1966) was a physics professor at Caltech, known for his extraordinary imagination and ingenuity in designing lecture and laboratory experiments illustrating basic principles and phenomena in physics. His "Demonstration Experiments in Physics" has been a handbook for physics teachers since 1938.

To calculate the range of the projectile we start by determining its horizontal position
$$x(t) = v_0 t \cos \alpha$$
In the vertical direction,
$$y(t) = v_0 t \sin \alpha - \frac{1}{2} g t^2$$
Now we need to determine the time it takes for the projectile to hit the ground ($y=0$):
$$0 = v_0 t \sin \alpha - \frac{1}{2} g t^2$$
By factoring, either $t = 0$ or
$$t = \frac{2 v_0 \sin \alpha}{g}$$
The first solution corresponds to when the projectile is first launched, and the second corresponds to the flight time before the projectile hits the ground again. If we now use this value in the horizontal equation,
$$R = \frac{2 v_0^2}{g} \cos \alpha \, \sin \alpha$$
By the conservation of energy $\frac{1}{2}mv_0^2 = mgh$ and so $v_0^2 = 2gh$, turning the range expression into
$$R = 4h \cos \alpha \, \sin \alpha$$
which has no $g$ in it!

### TL;DR

Most people think that when studying the physics of objects moving in space, $g$ always shows up. In 1960, Richard Sutton wrote a paper describing a simple counterexample: if a mass slides down an inclined plane and is launched at ground level with a certain angle, the range doesn't depend on $g$ - the outcome is the same if the experiment is performed on the Moon, Earth or on Mars.

This is a good example of a problem in which finding the solution with math is substantially easier than trying to grasp it through "physical intuition". The key difference between doing this experiment on Earth vs Mars is that on the planet where $g$ is greater, the launch speed is also greater, which will compensate for the increased gravity, giving in the end the exact same range.

Looking at the equation for range $R$, the initial velocity $V_0$ should be squared in the expression given.
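A quick numerical check of the g-independence (my own illustration in Python, not from the paper; the height and angle are arbitrary):

import math

h = 1.0               # release height on the incline, meters
a = math.radians(30)  # launch angle

for name, g in [("Moon", 1.62), ("Mars", 3.71), ("Earth", 9.81)]:
    v0 = math.sqrt(2 * g * h)  # launch speed from energy conservation
    R = (2 * v0**2 / g) * math.sin(a) * math.cos(a)
    print(f"{name}: v0 = {v0:.2f} m/s, R = {R:.3f} m")

Every line prints the same range R = 4h sin(a) cos(a) ≈ 1.732 m: a larger g raises the launch speed by exactly enough to compensate. |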
http://newsgroups.derkeiler.com/Archive/Comp/comp.text.tex/2008-06/msg00100.html | # Re: unicode failure with inputenc package
On Jun 3, 3:04 pm, r...@xxxxxxxxxxxx (Robin Fairbairns) wrote:
r <inp...@xxxxxxxxx> writes:
On Jun 3, 1:09 pm, Joseph Wright <joseph.wri...@xxxxxxxxxxxxxxxxxx>
wrote:
On Jun 3, 12:43 pm, r <inp...@xxxxxxxxx> wrote:
What to do if I want all utf8 character encoding to be accepted
automatically for all tex documents that I create?
don't even imagine (ordinary) latex or pdflatex is useful.
I think xe(la)tex is going to be a good idea!
Well, I've now got this version:
latex
This is pdfTeXk, Version 3.141592-1.40.5 (Web2C 7.5.6)
%&-line parsing enabled.
**
that's tex live 2007 -- have you just installed that?
tex live includes a more-or-less working xetex (and hence xelatex).
Now I haven't got a dvi viewer; xdvi command no longer exists. Can't
seem to install either. What is the program to view dvi files?
xdvi views dvi files. with so little to go on, my only assumption is
that you've suppressed the installation of xdvi (since it's a standard
part of tex live). it's possible you suppressed it unwittingly, or
that the copy you have wilfully suppresses dvi use anywhere, or that
the copy you have provides xdvi as a separate package.
start a new thread describing how you got to where you are. at the
start of this thread you were using tetex; now you are using tex live
with no working xdvi, but you have given us no hint how you got there.
--
Robin Fairbairns, Cambridge
I don't know either and gave up. So, I un-installed texlive (didn't
work, didn't like it) and returned to tetex. Then I removed the
unicode character and replaced with:
$0\,^{\circ}\mathrm{C}$
So the result would be 0 DegC.
I tried to install the tetex-unicode rpm but received a conflict error with
tetex-latex-3.0-38.1mdv2008.0.i586, so gave up trying that. Luckily I'm
writing in English and will not need unicode for now. If anyone knows how
to resolve this conflict, thanks. |
https://math.stackexchange.com/questions/2324822/analyze-the-convergence-of-the-series-sum-n-1-infty-frac-cos-in2n | # Analyze the convergence of the series $\sum_{n=1}^{\infty} \frac{\cos (in)}{2^{n}}$
Analyze the convergence of the series $\sum_{n=1}^{\infty} \frac{\cos (in)}{2^{n}}$.
$a_n=\frac{1}{2^n}, \quad b_n=\cos(in), \quad \lim(a_{n})=0$ but $\lim(b_{n})$ does not exist. So I think the series diverges. How do I prove it formally?
• $$i=\sqrt{-1}$$?? – lab bhattacharjee Jun 16 '17 at 9:48
• If $i$ is supposed to be the imaginary unit, observe $\cos ix=\cosh x.$ And correct the spelling, please, "analyze", there could be misunderstandings. – Professor Vector Jun 16 '17 at 9:53
Observe that $$\frac{\cos(in)}{2^n} = \frac{\cosh(n)}{2^{n}} = \frac{e^{-n} + e^{n}}{2^{n+1}}.$$ Now $\frac{e^{-n}}{2^{n+1}} \to 0$ and $\frac{e^{n}}{2^{n+1}}$ does not converge to 0 (you may check that $f(x) = \frac{e^x}{2^{x+1}}$ has positive derivative for every $x>0$) and hence $\frac{\cos(in)}{2^n}$ does not converge to 0 and the series diverges.
Writing the general term as $$\frac{e^{i^2n}+e^{-i^2n}}{2^{n+1}},$$ we get the sum of two geometric series of ratios $\dfrac1{2e}$ and $\dfrac e2$, and the second diverges.
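A quick numerical sanity check (my own addition, in Python) confirms the terms themselves blow up:

import cmath

for n in [1, 5, 10, 20]:
    term = cmath.cos(1j * n) / 2**n  # cos(i*n) = cosh(n), real-valued
    print(n, term.real)

The printed values grow without bound (the dominant ratio is e/2 > 1), so the general term does not tend to 0 and the series diverges. |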
http://openstudy.com/updates/55ad4f8ae4b0d48ca8ed4ceb | anonymous one year ago A country's population in 1994 was 195 million. In 2002 it was 199 million. Estimate the population in 2016 using exponential growth.
You need a model like $A = P \times e^{kt}$, where A = the future population, P = initial population, k = growth constant, and t = time in years. At t = 0 the population is P = 195, and at t = 8 the population is 199. This can be used to find k, the growth constant: $199 = 195 \times e^{8k}$. Solve for k... when you get k, use the formula $A = 195 \times e^{kt}$ with the k value you found and t = 22, which is the time between 1994 and 2016. Hope it helps
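Carrying out these steps numerically (my own sketch in Python; the variable names are just for illustration):

import math

P = 195                    # 1994 population, in millions
k = math.log(199 / P) / 8  # from 199 = 195 * e^(8k)
t = 2016 - 1994            # 22 years
A = P * math.exp(k * t)
print(k, A)                # k ≈ 0.00254 per year, A ≈ 206.2 million

So the model predicts roughly 206 million in 2016. |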
http://mathoverflow.net/questions/55594/what-is-known-about-the-conjectured-infinitude-of-regular-primes | # What is known about the conjectured infinitude of regular primes ?
I have read in some number theory books and in some online resources that it is known that there exist infinitely many irregular primes (a fact apparently proven quite some time ago, around 1915 by K. L. Jensen according to the Wikipedia entry).
I haven't been able to find any reference, either in books or in the internet as to what the method for proving this might have been, but in any case, what I was more curious is about the conjectured existence of infinitely many regular primes.
It is even conjectured that there are "more" regular primes than irregular ones (about 61%), but the online references do not seem to say anything about the status of this conjecture, apart from saying that it is not known to be true.
Thus the questions I have are:
1) Are there any approaches at all to this problem?
2) Are there any other conjectures known to imply the existence of infinitely many regular primes?
3) Is it known why it is harder to prove this (other than the fact that it would give a proof of Fermat's Last Theorem for infinitely many prime exponents that does not involve heavy machinery =P) ?
Thank you very much in advance.
You can find a proof that there exist infinitely many irregular primes in Chapter 5 of Washington's book on cyclotomic fields. – Kevin Ventullo Feb 16 '11 at 9:40
Thanks a lot Kevin. – Adrián Barquero Feb 16 '11 at 15:10
For proving the existence of regular primes you have to show that there exist primes $p$ not dividing the numerators of the Bernoulli numbers $B_n$ with $n < p$. Heuristically there must be many of them, but the numerators grow so fast that you cannot exclude the possibility that all primes larger than some bound are irregular.
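To make the criterion concrete, here is a small computational illustration (my own addition; it assumes SymPy, and uses Kummer's criterion that $p$ is regular iff $p$ divides none of the numerators of $B_2, B_4, \dots, B_{p-3}$):

from sympy import bernoulli

def is_regular(p):
    # Kummer's criterion: p is regular iff it divides no numerator
    # of B_2, B_4, ..., B_{p-3}.
    return all(bernoulli(n).p % p != 0 for n in range(2, p - 2, 2))

print([p for p in [3, 5, 7, 11, 13, 37, 59, 67, 101] if not is_regular(p)])
# -> [37, 59, 67, 101]; 37 is the smallest irregular prime

Of course, this only tests individual primes and says nothing about the infinitude question. |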
https://mathematica.stackexchange.com/questions/132064/pacletinfo-m-documentation-project?noredirect=1 | # PacletInfo.m documentation project
Packages can be made into paclets, which provides easy distribution and versioning. The paclet metadata is in the PacletInfo.m file. The PacletInfo settings also determine how the paclet can extend Mathematica: e.g. provide new functions for the kernel (a usual package), new palettes or stylesheets for the Front End, etc.
What settings and extensions can be used in a PacletInfo.m file and what are their effects?
Documenting these will be very useful for people who develop and publish packages.
Related posts:
• Thank you so much for creating this post! For everyone's notice, I'm trying to finally create the functionality of deploying a package with the IntelliJ IDEA automatically. In the easiest case, it is just packing the sources into a .zip for extraction in a place where it can be loaded. However, deploying a paclet seems to be the future and the more consistent way of doing this. Nov 25, 2016 at 23:11
• Thank you very much. Could not find this information before reading your post. It helped a lot. Apr 28, 2020 at 14:59
This is a community project to produce useful documentation for PacletInfo.m. Feel free to edit and improve this answer.
While the Paclet Manager is loaded from .mx files, its plain text .m sources are also available. Much of the information in this post comes from the comments in those source files. See SystemOpen@Lookup[PacletInformation["PacletManager"], "Location"], especially Extension.m and Paclet.m.
Some other information comes from PacletInfo.m files that come with Mathematica:
paclets = Import[#, "Package"] & /@
    FileNames["PacletInfo.m", $InstallationDirectory, Infinity];

as well as the GitLink package.

# Introduction

Paclets are a format and distribution mechanism for packages or other resources that can extend Mathematica. The paclet is described by the PacletInfo.m file. This file is used by the PacletManager, which is built into the Wolfram Language.

When would a Mathematica user want to create a PacletInfo.m file? This is generally useful when building packages or applications. Having this file allows integrating documentation into the Documentation Center, or bundling the package into a .paclet file and installing it with PacletInstall. The file is also recognized in applications installed into $UserBaseDirectory/Applications or $BaseDirectory/Applications manually (the usual way) and not using PacletInstall.

In addition to paclets installed into a base directory, the PacletDirectoryAdd command can be used to add a directory containing paclets to the PacletManager so that it is recognized by the Mathematica FrontEnd and Needs. This allows a paclet or a library of multiple paclets to reside anywhere, such as a network drive where it can be accessed by multiple users.

Todd Gayley gave a presentation about paclets at the Wolfram Technology Conference 2019.

# Example

A sample PacletInfo.m file from GitLink:

Paclet[
  Name -> "GitLink",
  Version -> "2100.0",
  MathematicaVersion -> "10.1+",
  Root -> ".",
  Internal -> True,
  Extensions -> {
    {"Kernel", Root -> ".", Context -> "GitLink`"},
    {"Documentation", Language -> "English"},
    {"LibraryLink"}
  }
]

Note: GitLink is eventually going to become part of Mathematica. The Internal flag may be related to this and may not be appropriate for user packages.

# Settings

These are some of the settings that can be given in PacletInfo files. See the default values using Normal[PacletManager`Paclet`Private`$piDefaults].
The following should generally be present in any PacletInfo file:
• Name, paclet name. This can be used to refer to the paclet, e.g. PacletFind["name"] or PacletUpdate["name"].
• Version, paclet version. Must be up to five .-separated numbers. If multiple versions of a paclet are installed, it is always the latest one that will be used.
This value is parsed as PadRight[ToExpression@StringSplit[version, "."], 5].
The following can affect whether the paclet may be loaded:
• MathematicaVersion (deprecated) or WolframVersion (current since M10), minimum compatible Mathematica version, e.g. "10+" or "10.0.2+". The paclet will not load in incompatible versions. As of M11.0, it defaults to "10+".
While the MathematicaVersion form is deprecated, using MathematicaVersion -> "10+" allows Mathematica 9 to correctly identify a paclet as incompatible. M9 doesn't understand WolframVersion.
• SystemID, compatible system types. Can be a string, a list of strings, or All. See $SystemID for possible values. Defaults to All, and should be omitted unless your package is only compatible with certain operating systems.

Other settings:

• Root, probably the paclet root relative to PacletInfo.m. Not tested yet.
• Loading, can be Manual, Automatic, or "Startup". Defaults to Manual, i.e. the package can be loaded with Needs as usual. Automatic is allowed only when the Symbols argument of the Kernel extension is set (see below). "Startup" causes the package to load at kernel startup. This setting will be exposed in the Documentation Center (see below) and can be changed by the user. Warning: Be careful if the package does more than issue definitions upon loading. Some operations do not work during kernel initialization and may even lock up the kernel.
• Internal, can be True/False, ???
• Qualifier, have seen values like $SystemID, ???
Paclets that provide the Documentation extension will be listed in the Documentation Center under guide/InstalledAddOns.
The following metadata will be listed for each package/paclet on that page:
• Creator, author name.
• Description, description.
• Publisher, publisher name.
• Thumbnail, relative path to an image file. Will be used as the package icon. Should be 46 by 46 pixels.
Warning: There are some problems with the thumbnail retrieval. Rescaling to 46×46 fails (small bug) and using a thumbnail causes high CPU usage in M11 while the add-ons list is open (but not in M10.x).
• URL, package homepage.
These can be retrieved for installed paclets using PacletInformation["name"].
• BuildNumber, Category, Copyright, License, Support.
# Extensions
The Extensions setting contains a list of extensions, each in the form
{"ExtensionName", "Argument1" -> Value1, "Argument2" -> Value2, ...}
The possible extensions and their arguments are below.
## Application
Seems to work in a similar way to Kernel. Not sure what the difference is. At this moment I believe that one can (perhaps should) always use Kernel instead.
Arguments:
• Root, defaults to "." (same as Kernel extension)
• Context
## AutoCompletionData
???
Arguments:
Hints: See PacletFind["EntityFramework"]
## ChannelFramework
???
Hints: See PacletFind["DemoChannels"] for the implementation of the Demo:OneLiner channel which is shown in ChannelListen -> Applications -> Chat.
## Documentation
Fully documented in the Workbench Help.
Used to integrate documentation into the Documentation Center.
Will cause the application to show up in the auto-generated application list in the Documentation Center at guide/InstalledAddOns.
Arguments:
• Language, defaults to All
• Root, defaults to "Documentation"
• Context, rarely needed, use same argument in Kernel extension instead
• MainPage
• LinkBase
• Resources
Multiple instances of the Documentation extension can be used in the same PacletInfo file. For example, if multiple languages are supported there should be one extension for each language. For an example, see the Parallel paclet in AddOns/Applications.
The LinkBase argument is needed only in cases where documentation is stored in a different paclet from the code*. For an example, see the Parallel paclet in AddOns/Applications which links to the ParallelTools paclet.
The Resources should never need to be used by outside developers of paclets used in version 7+*.
## FrontEnd
Will cause subdirectories within the FrontEnd to be handled, e.g. FrontEnd/Palettes, FrontEnd/StyleSheets, FrontEnd/SystemResources, FrontEnd/TextResources. These items will be recognized by the Front End, e.g. palettes will show up in the Palettes menu.
Arguments:
• Root, defaults to "FrontEnd"
• WolframVersion
• SystemID
References:
## Java

Will cause .jar files within the root ("Java" by default) to be added to the classpath.
Arguments:
• Root, defaults to "Java"
• WolframVersion
• SystemID
## Kernel
Makes the package loadable by Needs.
Arguments:
• Root, defaults to "." for compatibility but "Kernel" is also typical. FindFile resolves the specified context to this location during the first stage of its name resolution. See also https://mathematica.stackexchange.com/q/133242/12.
• Context, package context or list of contexts. Used by FindFile. Also causes >> documentation links to be added to usage messages when documentation is present.
• Symbols, a list of symbols that will trigger autoloading if Loading -> Automatic is set. It is required for Loading -> Automatic to be a valid setting. Similar to DeclarePackage. Symbols can be given with full context, e.g. Symbols -> {"MyPack`MyFun1", "MyPack`MyFun2"} (see the example after this list).
• WolframVersion
• SystemID
• HiddenImport
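To make the Symbols/Loading machinery concrete, here is a minimal hypothetical PacletInfo.m (all names are invented for illustration; treat this as a sketch, not an official template):

Paclet[
  Name -> "MyPack",
  Version -> "1.0.0",
  WolframVersion -> "10+",
  Loading -> Automatic,
  Extensions -> {
    {"Kernel",
      Root -> "Kernel",
      Context -> "MyPack`",
      Symbols -> {"MyPack`MyFun1", "MyPack`MyFun2"}}
  }
]

With this in place, evaluating MyFun1 in a fresh kernel should trigger loading of MyPack`, much as with DeclarePackage. |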
https://www.skytowner.com/explore/comprehensive_guide_on_sample_variance | search
# Comprehensive Guide on Sample Variance
Definition.
# Sample variance
The sample variance of a sample $(x_1,x_2,\cdots,x_n)$ is computed by:
$$s^2=\frac{1}{n-1}\sum^n_{i=1}(x_i-\bar{x})^2$$
Where $n$ is the sample size and $\bar{x}$ is the sample mean. For the intuition behind this formula, please consult our guide on measures of spread.
Notice how we compute the average by dividing by $n-1$ instead of $n$. This is because dividing by $n-1$ makes the sample variance an unbiased estimator for the population variance; we give the proof below, but please consult our guide to understand what bias means.
Example.
## Computing the sample variance of a sample
Compute the sample variance of the following sample:
$$(1,3,5,7)$$
Solution. Here, the size of the sample is $n=4$. We first start by computing the sample mean:
\begin{align*} \bar{x}&=\frac{1}{4}\sum^4_{i=1}x_i\\ &=\frac{1}{4}(1+3+5+7)\\ &=4 \end{align*}
Let's now compute the sample variance $s^2$ using the formula:
\begin{align*} s^2 &=\frac{1}{n-1}\sum^n_{i=1}(x_i-\bar{x})^2\\ &=\frac{1}{3}\sum^4_{i=1}(x_i-4)^2\\ &=\frac{1}{3}[(1-4)^2+(3-4)^2+(5-4)^2+(7-4)^2]\\ &=\frac{20}{3}\\ &\approx6.67\\ \end{align*}
This means that, on average, the square of the difference between each point and the sample mean is around $6.67$. This interpretation is precise but quite awkward. Therefore, instead of quoting the sample variance of a single sample, we often compare the sample variance of two different samples to understand which sample is more spread out.
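As a quick sanity check of the arithmetic (added here for illustration; not part of the original guide), the same computation can be reproduced in a few lines of Python:
sample = [1, 3, 5, 7]
n = len(sample)
xbar = sum(sample) / n                               # sample mean: 4.0
s2 = sum((x - xbar) ** 2 for x in sample) / (n - 1)  # divide by n-1
print(s2)  # 6.666..., i.e. 20/3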
## Intuition behind why we divide by n-1 instead of n
Although we will formally prove below that dividing by $n-1$ will give us an unbiased estimator of the population variance, let's understand from another perspective why we should divide by $n-1$.
Ideally, our estimate of the population variance would be:
$$\label{eq:ohGzVCDYbDArl9d4nZX} s^2=\frac{1}{n}\sum^n_{i=1}(x_i-\mu)^2$$
Where $\mu$ is the population mean. In fact, if the population mean is known, then the sample variance should be computed as above without dividing by $n-1$. However, in most cases, the population mean is unknown, so the best we can do is to replace $\mu$ with the sample mean $\bar{x}$ like so:
$$\label{eq:NrJOSeZL5qE9DxVIics} s^2=\frac{1}{n}\sum^n_{i=1}(x_i-\bar{x})^2$$
However, when we replace $\mu$ with $\bar{x}$, it turns out that we would, on average, underestimate the population variance. We will now mathematically prove this.
Let's focus on the sum of squared differences. Instead of the sample mean $\bar{x}$, let's replace that with a variable $t$ and consider the expression as a function of $t$ like so:
$$f(t)=\sum^n_{i=1}(x_i-t)^2$$
Using calculus, our goal is to show that $t=\bar{x}$ minimizes this function. Let's take the first derivative of $f(t)$ with respect to $t$ like so:
\begin{align*} f'(t)&=\frac{d}{dt}\sum_{i=1}^n(x_i-t)^2\\ &=\sum_{i=1}^n\frac{d}{dt}(x_i-t)^2\\ &=-2\sum_{i=1}^n(x_i-t) \end{align*}
Setting this equal to zero gives:
\begin{align*} -2\sum_{i=1}^n(x_i-t)&=0\\ \sum_{i=1}^n(x_i-t)&=0\\ \sum_{i=1}^nx_i-\sum_{i=1}^nt&=0\\ \Big(\sum_{i=1}^nx_i\Big)-nt&=0\\ t&=\frac{1}{n}\sum_{i=1}^nx_i\\ t&=\bar{x}\\ \end{align*}
Let's also check the nature of this stationary point by referring to the second derivative:
\begin{align*} f''(t)&=\frac{d}{dt}f'(t)\\ &=\frac{d}{dt}\Big(-2\sum_{i=1}^n(x_i-t)\Big)\\ &=-2\Big(\sum_{i=1}^n\frac{d}{dt}(x_i-t)\Big)\\ &=-2\Big(\sum_{i=1}^n-1\Big)\\ &=2n \\ \end{align*}
Since the sample size $n$ is positive, we have that the second derivative is always positive. This means that the stationary point $t=\bar{x}$ is indeed a minimum! In other words, out of all the values $t$ can take, setting $t=\bar{x}$ will minimize the sum of squared differences:
$$\sum^n_{i=1}(x_i-\bar{x})^2 \le \sum^n_{i=1}(x_i-t)^2$$
The population mean $\mu$ is some unknown constant, but we now know that:
$$\label{eq:kUfz4YNwhBVtS8B1ZF0} \sum^n_{i=1}(x_i-\bar{x})^2 \le \sum^n_{i=1}(x_i-\mu)^2$$
Even though we don't know what $\mu$ is, we know that the sum of squared differences when $t=\mu$ must be at least as large as the sum of squared differences when $t=\bar{x}$.
Let's divide both sides of \eqref{eq:kUfz4YNwhBVtS8B1ZF0} by $n$ to get:
$$\label{eq:Vd8ISUnkMkIvhi6wExH} \frac{1}{n}\sum^n_{i=1}(x_i-\bar{x})^2 \le \frac{1}{n}\sum^n_{i=1}(x_i-\mu)^2$$
The right-hand side is our ideal estimate \eqref{eq:ohGzVCDYbDArl9d4nZX} from earlier. To make this clear, let's write \eqref{eq:Vd8ISUnkMkIvhi6wExH} as:
$$\label{eq:mfxzwx5FHb6tVM1v3Zl} \frac{1}{n}\sum^n_{i=1}(x_i-\bar{x})^2 \le \text{ideal}$$
This means that the estimate of the population variance using the left-hand side of \eqref{eq:mfxzwx5FHb6tVM1v3Zl} will generally be less than the ideal estimate. In order to compensate for this underestimation, we must make the left-hand side larger. One way of doing so is by dividing by a smaller amount, say $n-1$:
$$\frac{1}{n-1}\sum^n_{i=1}(x_i-\bar{x})^2$$
Of course, this leads to more questions, such as why we should divide specifically by $n-1$ instead of, say, $n-2$ or $n-3$, which would also make the left-hand side of \eqref{eq:mfxzwx5FHb6tVM1v3Zl} larger. The motivation behind this exercise is merely to understand that dividing by some number less than $n$ accounts for the underestimation. As for why we specifically divide by $n-1$, we prove mathematically below that dividing by $n-1$ adjusts our estimate exactly so that we neither underestimate nor overestimate.
# Properties of sample variance
Theorem.
## Unbiased estimator of the population variance
The sample variance $S^2$ is an unbiased estimator for the population variance $\sigma^2$, that is:
$$\mathbb{E}(S^2)=\sigma^2$$
Proof. We start off with the following algebraic manipulation:
\begin{align*} \sum^n_{i=1}(X_i-\bar{X})^2 &=\sum^n_{i=1}(X_i^2-2X_i\bar{X}+\bar{X}^2)\\ &=\Big(\sum^n_{i=1}X_i^2\Big)-2\bar{X}\Big(\sum^n_{i=1}X_i\Big)+\Big(\sum^n_{i=1}\bar{X}^2\Big)\\ &=\Big(\sum^n_{i=1}X_i^2\Big)-2\bar{X}\sum^n_{i=1}\left(n\cdot\frac{X_i}{n}\right)+n\bar{X}^2\\ &=\Big(\sum^n_{i=1}X_i^2\Big)-2n\bar{X}\cdot\Big(\frac{1}{n}\sum^n_{i=1}X_i\Big)+n\bar{X}^2\\ &=\Big(\sum^n_{i=1}X_i^2\Big)-2n\bar{X}^2+n\bar{X}^2\\ &=-n\bar{X}^2+\sum^n_{i=1}X_i^2\\ \end{align*}
Multiplying both sides by $1/(n-1)$ gives:
$$\frac{1}{n-1} \sum^n_{i=1}(X_i-\bar{X})^2= \frac{1}{n-1}\Big(-n\bar{X}^2+\sum^n_{i=1}X_i^2\Big)$$
The left-hand side is the formula for the sample variance $S^2$ so:
$$S^2= \frac{1}{n-1}\Big(-n\bar{X}^2+\sum^n_{i=1}X_i^2\Big)$$
Now, let's take the expected value of both sides and use the property of linearity of expected values to simplify:
\label{eq:MGFWQ0zdxObMW1zhXiV} \begin{aligned}[b] \mathbb{E}(S^2)&= \mathbb{E} \Big[\frac{1}{n-1}\Big(-n\bar{X}^2+\sum^n_{i=1}X_i^2\Big)\Big]\\ &= \frac{1}{n-1}\mathbb{E} \Big(-n\bar{X}^2+\sum^n_{i=1}X_i^2\Big)\\ &=\frac{1}{n-1}\Big[\mathbb{E} \Big(-n\bar{X}^2\Big)+\mathbb{E}\Big(\sum^n_{i=1}X_i^2\Big)\Big]\\ &= \frac{1}{n-1}\Big[-n\cdot\mathbb{E} \Big(\bar{X}^2\Big)+\sum^n_{i=1}\mathbb{E}(X_i^2)\Big]\\ \end{aligned}
Now, from the property of variance, we know that:
$$\label{eq:ZQMklBf4CcDEfOxcVdJ} \mathbb{E}(\bar{X}^2)= \mathbb{V}(\bar{X})+[\mathbb{E}(\bar{X})]^2$$
We have previously derived the variance as well as the expected value of $\bar{X}$ to be:
\begin{align*} \mathbb{V}(\bar{X})&=\frac{\sigma^2}{n}\\ \mathbb{E}(\bar{X})&=\mu \end{align*}
Substituting these values into \eqref{eq:ZQMklBf4CcDEfOxcVdJ} gives:
$$\label{eq:LlQymAMmsVKtv6MIqTc} \mathbb{E}(\bar{X}^2)=\frac{\sigma^2}{n}+\mu^2$$
Once again, from the same property of variance, we have that:
\label{eq:OPC1YMGbDIHlCRGd6IJ} \begin{aligned}[b] \mathbb{E}(X_i^2)&=\mathbb{V}(X_i)+[\mathbb{E}(X_i)]^2\\ &=\sigma^2+\mu^2 \end{aligned}
Substituting \eqref{eq:LlQymAMmsVKtv6MIqTc} and \eqref{eq:OPC1YMGbDIHlCRGd6IJ} into \eqref{eq:MGFWQ0zdxObMW1zhXiV} gives:
\begin{align*} \mathbb{E}(S^2)&= \frac{1}{n-1}\Big[-n\cdot \Big(\frac{\sigma^2}{n}+\mu^2\Big) +\sum^n_{i=1}(\sigma^2+\mu^2)\Big]\\ &=\frac{1}{n-1}\Big[-\sigma^2-n\mu^2 +n(\sigma^2+\mu^2)\Big]\\ &=\frac{1}{n-1}\Big(-\sigma^2-n\mu^2 +n\sigma^2+n\mu^2\Big)\\ &=\frac{1}{n-1}\Big(n\sigma^2-\sigma^2\Big)\\ &=\frac{1}{n-1}\Big[\sigma^2(n-1)\Big]\\ &=\sigma^2\\ \end{align*}
This proves that the sample variance $S^2$ is an unbiased estimator for the population variance $\sigma^2$.
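The result can also be checked numerically. Below is a small Monte Carlo sketch (an illustration added here, not part of the proof): we draw many samples of size $n=5$ from a normal population with known variance $\sigma^2=4$, and compare the average of the $n-1$ and $n$ versions of the sample variance.
import numpy as np

rng = np.random.default_rng(0)
n = 5                                              # small sample size
samples = rng.normal(0.0, 2.0, size=(200_000, n))  # population variance is 4

unbiased = samples.var(axis=1, ddof=1)  # divides by n-1
biased = samples.var(axis=1, ddof=0)    # divides by n

print(unbiased.mean())  # close to 4.0
print(biased.mean())    # close to (n-1)/n * 4 = 3.2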
# Computing sample variance using Python
We can easily compute the sample variance using Python's NumPy library. By default, the var(~) method returns the following biased sample variance:
$$s^2=\frac{1}{n}\sum^n_{i=1}(x_i-\bar{x})^2$$
To compute the unbiased sample variance instead, supply the argument ddof=1:
import numpy as np
np.var([3,5,1,7], ddof=1)
6.666666666666667
Note that ddof stands for delta degrees of freedom and represents the following quantity:
$$\frac{1}{n\color{green}{-\mathrm{ddof}}}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2$$
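To see the effect of ddof concretely, here is a short illustrative comparison on the sample from the worked example above (not part of the original page); the two settings differ exactly by a factor of $n/(n-1)$:
import numpy as np

data = [1, 3, 5, 7]
n = len(data)
biased = np.var(data)            # ddof=0: divides by n, gives 5.0
unbiased = np.var(data, ddof=1)  # ddof=1: divides by n-1, gives 6.666...
assert np.isclose(biased * n / (n - 1), unbiased)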
https://dsp.stackexchange.com/questions/7519/how-to-use-convolution-operator-in-matlab | # How to use convolution operator in matlab? [closed]
I = imread('13.jpg');
Ir = I(:,:,1);
Ig = I(:,:,2);
Ib = I(:,:,3);
%# Create the gaussian filter with hsize = [5 5] and sigma = 2
G = fspecial('gaussian',[10 10],1);
%# Filter it
Ig = imfilter(I,G,'same');
%# Display
imshow(Ig);
PSF = fspecial('gaussian',7,10);
Irc = conv2(Ir,PSF,'same');
Igc = conv2(Ig,PSF,'same');
Ibc = conv2(Ib,PSF,'same');
Why am I getting this error?
Warning: CONV2 on values of class UINT8 is obsolete.
Use CONV2(DOUBLE(A),DOUBLE(B)) or
In uint8.conv2 at 11
In Gaussian_Filter_image at 19
Warning: CONV2 on values of class UINT8 is obsolete.
Use CONV2(DOUBLE(A),DOUBLE(B)) or
In uint8.conv2 at 11
In Gaussian_Filter_image at 20
Undefined function 'conv2' for input arguments of type 'double'
and attributes 'full 3d real'.
Error in uint8/conv2 (line 18)
y = conv2(varargin{:});
Error in Gaussian_Filter_image (line 20)
Igc = conv2(Ig,PSF,'same');
Here I want to perform convolution of image and gaussian filter but I am unable to do that ?
I have even tried the modification after using
Ir = double(:,:,1);
Ig = double(:,:,2);
Ib = double(:,:,3);
then also I am getting the same error?
• It should be Ir = double(I(:,:,1)); ..., not Ir = double(:,:,1); ... Jan 15, 2013 at 12:04
• This is a programming question (or debugging question) about MATLAB, not a question about signal processing, and should be closed as off-topic. Jan 15, 2013 at 12:38
You are getting the error because image I is of class (data type) 'uint8', and the arguments to conv2 must be of class 'single' or 'double', i.e., floating point. You can get rid of the error by converting I to double. The best way to do that is to call I = im2double(I), which will re-scale the pixel values to be between 0 and 1.
But looking at the big picture, you should not use conv2 at all, and use imfilter instead, which does take 'uint8' images. Look at the help for it. By default it does correlation, but it can also do convolution.
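To make the fix concrete, a minimal MATLAB sketch (my illustration, reusing the variable names from the question; not code from the original answer):
I = im2double(imread('13.jpg'));  % convert from uint8 to double in [0, 1]
PSF = fspecial('gaussian', 7, 10);
Ic = imfilter(I, PSF, 'conv');    % filter the RGB image; 'conv' selects convolution
imshow(Ic);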
http://openstudy.com/updates/50aaa4f9e4b064039cbd53d3
## Thats-me 2 years ago What is the slope of the line that passes through (3/5,-2)(-6,2/9)?
• This Question is Closed
1. Thats-me
I do believe its -3.52222222222
2. jlongSwag27
3. Thats-me
I wasnt asking for a link toa crazy person dancing...unless that happens to be you then its an insane person dancing.
4. hartnn
The slope of the line through points (x1,y1) and (x2,y2) is given by : $$\huge m=\frac{y_1-y_2}{x_1-x_2}$$ now,just put the values and find m.
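For the given points $$\left(\tfrac{3}{5},-2\right)$$ and $$\left(-6,\tfrac{2}{9}\right)$$, plugging into this formula gives (worked out here for completeness): $$m=\frac{-2-\frac{2}{9}}{\frac{3}{5}-(-6)}=\frac{-20/9}{33/5}=-\frac{100}{297}\approx -0.34$$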
5. Thats-me
Thanks
http://manopt.org/manifold_documentation_oblique.html | The oblique manifold $\mathcal{OB}(n, m)$ (the set of matrices of size nxm with unit-norm columns) is endowed with a Riemannian manifold structure by considering it as a Riemannian submanifold of the embedding Euclidean space $\mathbb{R}^{n\times m}$ endowed with the usual inner product $\langle H_1, H_2 \rangle = \operatorname{trace}(H_1^T H_2)$. Its geometry is exactly the same as that of the product manifold of spheres $\mathbb{S}^{n-1}\times \cdots \times \mathbb{S}^{n-1}$ ($m$ copies), see the sphere manifold.
Factory call: M = obliquefactory(n, m).
Set: $\mathcal{OB}(n, m) = \{ X \in \mathbb{R}^{n\times m} : (X^TX)_{ii} = 1, i = 1:m \}$. Numerically, $X$ is represented as a matrix X of size nxm whose columns have unit 2-norm, i.e., X(:, i).'*X(:, i) = 1 for i = 1:m.
Tangent space at $X$: $T_X \mathcal{OB}(n, m) = \{ U \in \mathbb{R}^{n\times m} : (X^TU)_{ii} = 0, i = 1:m \}$. A tangent vector $U$ at $X$ is represented as a matrix U of size nxm such that each column of U is orthogonal to the corresponding column of X, i.e., X(:, i).'*U(:, i) = 0 for i = 1:m.
Ambient space: $\mathbb{R}^{n\times m}$. Points and vectors in the ambient space are, naturally, represented as matrices of size nxm.
The following table shows some of the nontrivial available functions in the structure M. The norm $\|\cdot\|$ refers to the norm in the ambient space, which is the Frobenius norm. The tutorial page gives more details about the functionality implemented by each function.
Dimension: M.dim() gives $\operatorname{dim}\mathcal{M} = m(n-1)$.
Metric: M.inner(X, U, V) gives $\langle U, V\rangle_X = \operatorname{trace}(U^T V)$.
Norm: M.norm(X, U) gives $\|U\|_X = \sqrt{\langle U, U \rangle_X}$.
Distance: M.dist(X, Y) gives $\operatorname{dist}(X, Y) = \sqrt{\sum_{i=1}^m \arccos^2((X^T Y)_{ii})}$.
Typical distance: M.typicaldist() gives $\pi\sqrt{m}$.
Tangent space projector: M.proj(X, H) gives $P_X(H) = H - X\operatorname{ddiag}(X^T H)$, where H represents a vector in the ambient space and $\operatorname{ddiag}$ sets all off-diagonal entries of a matrix to zero.
Euclidean to Riemannian gradient: M.egrad2rgrad(X, egrad) gives $\operatorname{grad} f(X) = P_X(\nabla f(X))$, where egrad represents the Euclidean gradient $\nabla f(X)$, which is a vector in the ambient space.
Euclidean to Riemannian Hessian: M.ehess2rhess(X, egrad, ehess, U) gives $\operatorname{Hess} f(X)[U] = P_X(\nabla^2 f(X)[U]) - U \operatorname{ddiag}(X^T \nabla f(X))$, where egrad represents the Euclidean gradient $\nabla f(X)$ and ehess represents the Euclidean Hessian $\nabla^2 f(X)[U]$, both being vectors in the ambient space.
Exponential map: M.exp(X, U, t): see the sphere manifold; the same exponential map is applied column-wise.
Retraction: M.retr(X, U, t) gives $\operatorname{Retr}_X(tU) = \operatorname{normalize}(X+tU)$, where $\operatorname{normalize}$ scales each column of the input matrix to have norm 1.
Logarithmic map: M.log(X, Y): see the sphere manifold; the same logarithmic map is applied column-wise.
Random point: M.rand() returns a point uniformly at random w.r.t. the natural measure as follows: generate $X$ with i.i.d. normal entries; return $\operatorname{normalize}(X)$.
Random vector: M.randvec(X) returns a unit-norm tangent vector at $X$ with uniformly random direction, obtained as follows: generate $H$ with i.i.d. normal entries; return $U = P_X(H) / \|P_X(H)\|$.
Vector transport: M.transp(X, Y, U) gives $\operatorname{Transp}_{Y\leftarrow X}(U) = P_Y(U)$, where $U$ is a tangent vector at $X$ that is transported to the tangent space at $Y$.
Pair mean: M.pairmean(X, Y) gives $\operatorname{mean}(X, Y) = \operatorname{normalize}(X+Y)$.
Let $A\in\mathbb{R}^{n\times m}$ be any matrix. We search for the matrix with unit-norm columns that is closest to $A$ according to the Frobenius norm. Of course, this problem has an obvious solution (simply normalize the columns of $A$). We treat it merely for the sake of example. We minimize the following cost function:
$$f(X) = \frac{1}{2} \|X-A\|^2,$$
such that $X \in \mathcal{OB}(n, m)$. Compute the Euclidean gradient and Hessian of $f$:
$$\nabla f(X) = X-A,$$
$$\nabla^2 f(X)[U] = U.$$
The Riemannian gradient and Hessian are obtained by applying the M.egrad2rgrad and M.ehess2rhess operators. Notice that there is no need to compute these explicitly: it suffices to write code for the Euclidean quantities and to apply the conversion tools on them to obtain the Riemannian quantities, as in the following code:
% Generate the problem data.
n = 5;
m = 8;
A = randn(n, m);
% Create the problem structure.
manifold = obliquefactory(n, m);
problem.M = manifold;
% Define the problem cost function and its derivatives.
problem.cost = @(X) .5*norm(X-A, 'fro')^2;
egrad = @(X) X - A;  % Euclidean gradient
problem.grad = @(X) manifold.egrad2rgrad(X, egrad(X));
ehess = @(X, U) U;   % Euclidean Hessian
problem.hess = @(X, U) manifold.ehess2rhess(X, egrad(X), ehess(X, U), U);
% Numerically check the differentials.
checkhessian(problem); pause;
Of course, this is not efficient: there are redundant computations. See the various ways of describing the cost function for better alternatives.
Let us consider a second, more interesting, example. A correlation matrix $C \in \mathbb{R}^{n\times n}$ is a symmetric, positive semidefinite matrix with 1's on the diagonal. If $C$ is of rank $k$, there always exists a matrix $X \in \mathcal{OB}(k, n)$ such that $C = X^TX$. In fact, there exist many such matrices: given such an $X$, a whole manifold of equivalent matrices is obtained by considering $QX$ with $Q$ an orthogonal matrix of size $k$. Disregarding this equivalence relation (see help elliptopefactory), we can address the problem of nearest low-rank correlation matrix as follows:
Let $A \in \mathbb{R}^{n\times n}$ be a given symmetric matrix. We wish to find the correlation matrix $C = X^TX$ of rank at most $k$ which is closest to $A$, according to the Frobenius norm [Hig01]. That is, we wish to minimize:
$$f(X) = \frac{1}{4} \|X^TX - A\|^2$$
with $X \in \mathcal{OB}(k, n)$. The Euclidean gradient and Hessian are given by:
$$\nabla f(X) = X(X^TX - A),$$
$$\nabla^2 f(X)[U] = X(U^TX + X^TU) + U(X^TX-A).$$
In Manopt code, this yields:
% Generate the problem data.
n = 10;
k = 3;
A = randn(n);
A = (A + A.')/2;
% Create the problem structure.
manifold = obliquefactory(k, n);
problem.M = manifold;
% Define the problem cost function and its derivatives.
problem.cost = @(X) .25*norm(X.'*X-A, 'fro')^2;
egrad = @(X) X*(X.'*X-A);  % Euclidean gradient
problem.grad = @(X) manifold.egrad2rgrad(X, egrad(X));
ehess = @(X, U) X*(U.'*X+X.'*U) + U*(X.'*X-A);  % Euclidean Hessian
problem.hess = @(X, U) manifold.ehess2rhess(X, egrad(X), ehess(X, U), U);
% Numerically check the differentials.
checkhessian(problem); pause;
% Solve
X = trustregions(problem);
C = X.'*X;
% C is a rank k (at most) symmetric, positive semidefinite matrix with ones on the diagonal:
disp(C);
disp(eig(C));
Again, there is a fair bit of redundant computations in this formulation. See the various ways of describing the cost function for better alternatives.
For theory on Riemannian submanifolds, see [AMS08], section 3.6.1 (first-order derivatives) and section 5.3.3 (second-order derivatives, i.e., connections).
For content specifically about the oblique manifold with applications, see [AG06].
http://googology.wikia.com/wiki/User_blog:B1mb0w/Fundamental_Sequences_used_by_the_Beta_Function
## Fundamental Sequences (used by The Beta Function)
This blog will cover the standard definitions on Fundamental Sequences for Ordinals. It will also provide a precise rule-set for the Fundamental Sequences used in my Beta Function blogs.
This blog is a complete update of my previous blog on Fundamental Sequences. Please keep this in mind if you refer to that blog.
## Cantor's Normal Form
Let $$\gamma$$ and $$\delta$$ be two arbitrary transfinite ordinals, $$\lambda$$ is an arbitrary limit ordinal, and $$n$$ is a finite integer. Then:
$$(\gamma + 1)[n] = \gamma$$
$$(\gamma + \lambda)[n] = \gamma + \lambda[n]$$ when $$\gamma \geq \lambda$$
$$\lambda.(\delta + 1)[n] = \lambda.\delta + \lambda[n]$$
$$\gamma.\lambda[n] = \gamma.(\lambda[n])$$ when $$\gamma \geq \lambda$$
$$\lambda^{\delta + 1}[n] = \lambda^{\delta}.(\lambda[n])$$
and
$$\gamma^{\lambda}[n] = \gamma^{\lambda[n]}$$
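For ordinals below $$\epsilon_0$$, these rules can be implemented directly. The following Python sketch is illustrative only (the tuple encoding and function names are my own, not from the blog): an ordinal is stored in Cantor Normal Form as a non-increasing tuple of exponents, with $$()$$ encoding $$0$$.
# Ordinals below epsilon_0 in Cantor Normal Form: a tuple of exponents
# (e1, e2, ..., ek), e1 >= e2 >= ... >= ek, stands for w^e1 + ... + w^ek.
# () encodes 0; each exponent is itself an ordinal in the same encoding.
ZERO = ()
ONE = (ZERO,)    # w^0 = 1
OMEGA = (ONE,)   # w^1 = w

def is_successor(a):
    # a successor ordinal ends with a trailing w^0 = 1 term
    return bool(a) and a[-1] == ZERO

def fs(a, n):
    """n-th term of the fundamental sequence of a limit ordinal a."""
    assert a and not is_successor(a), "a must be a limit ordinal > 0"
    head, last = a[:-1], a[-1]
    if is_successor(last):
        # (gamma + w^(delta+1))[n] = gamma + w^delta * n
        return head + (last[:-1],) * n
    # (gamma + w^lambda)[n] = gamma + w^(lambda[n]) for limit lambda
    return head + (fs(last, n),)

print(fs(OMEGA, 3))     # ((), (), ()) -- the ordinal 3, so w[3] = 3
print(fs((OMEGA,), 3))  # w^w [3] = w^3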
## Some Common Transfinite Ordinals
$$\omega[n] = n$$
$$\epsilon_0[n] = \omega\uparrow\uparrow n$$
$$\epsilon_1[n] = \epsilon_0\uparrow\uparrow n$$
$$\epsilon_{j+1}[n] = \epsilon_j\uparrow\uparrow n$$
and
$$\epsilon_{\omega}[n] = \epsilon_{\omega[n]} = \epsilon_n$$
## Veblen Hierarchy
Continuing into the Veblen Hierarchy and the $$\varphi$$ function, let's start with these equations, which are equivalent to those in the Common Transfinite Ordinals section above.
$$\varphi(1)[n] = \omega[n] = n$$
$$\varphi(1,0)[n] = \epsilon_0[n] = \varphi(n) = \omega\uparrow\uparrow n$$
$$\varphi(1,1)[n] = \epsilon_1[n] = \varphi(1,0)\uparrow\uparrow n$$
$$\varphi(1,j + 1)[n] = \epsilon_{j + 1}[n] = \varphi(1,j)\uparrow\uparrow n$$
and
$$\varphi(1,\omega)[n] = \varphi(1,\omega[n]) = \varphi(1,n)$$
The following extends the Veblen function definition for completeness:
$$\varphi() = 0$$
$$\varphi(0) = 1$$
$$\varphi(1) = \omega$$
and
$$\varphi(n) = \varphi^n(1) = \omega\uparrow\uparrow n = \omega^{\varphi(n-1)}$$
## Rule-set (used by The Beta Function)
The following rule-set is used by my Beta Function blogs and is intended to be clearly distinguishable from other rule-set definitions. Before we start, some notational conventions that will be used are:
$$k^2(n,p_*) = k(n,k(n,p))$$
$$k^2(n_*,p) = k(k(n,p),p)$$
and
$$k(a_{[2]},b_{[3]}) = k(a_1,a_2,b_1,b_2,b_3)$$
The rule-set starts with this definition of an arbitrary Veblen function:
$$\varphi(\lambda_{[b]})$$
We can unpack the arbitrary Veblen function if we let:
$$b = a + 1 + z$$
Then
$$\varphi(\lambda_{[b]}) = \varphi(\lambda_{[a + 1 + z]})$$
And
$$\varphi(\lambda_{[b]}) = \varphi(\lambda_{[a + 1 + z]}) = \varphi(\delta_{[a]},\gamma,0_{[z]})$$
Where
$$\delta_1 > 0$$ if $$a > 0$$
We can continue to unpack the arbitrary Veblen function if we let:
$$a = x + 1 + y$$
Then
$$\varphi(\delta_{[a]},\gamma,0_{[z]}) = \varphi(\delta_{[x + 1 + y]},\gamma,0_{[z]})$$
And
$$\varphi(\delta_{[a]},\gamma,0_{[z]}) = \varphi(\delta_{[x + 1 + y]},\gamma,0_{[z]}) = \varphi(\alpha_{[x]},\beta,0_{[y]},\gamma,0_{[z]})$$
Where
$$\alpha_1 > 0$$ if $$x > 0$$
We can now begin the rule-set:
The cases of $$\varphi(\lambda_{[b]})[n] = \varphi(\delta_{[a]},\gamma,0_{[z]})[n]$$ depend on $$\gamma$$, $$z$$, $$a$$ (or $$x$$) and $$\beta$$. In the last three cases $$\gamma$$ must be a successor, since $$\gamma - 1$$ appears on the right-hand side.
When $$\gamma$$ is a limit (any $$z$$, $$a$$, $$\beta$$):
$$\varphi(\delta_{[a]},\gamma,0_{[z]})[n] = \varphi(\delta_{[a]},\gamma[n],0_{[z]})$$
When $$\gamma = 1$$, $$z > 0$$ and $$a = 0$$ (any $$\beta$$):
$$\varphi(\gamma,0_{[z]})[n] = \varphi(1,0_{[z]})[n] = \varphi^{\omega[n]}(1_*,0_{[z-1]})$$
When $$\gamma$$ is a successor, $$z > 0$$ and $$a > 0$$ (any $$\beta$$):
$$\varphi(\delta_{[a]},\gamma,0_{[z]})[n] = \varphi^{\omega[n]}(\delta_{[a]},\gamma-1,0_*,0_{[z-1]})$$
When $$z = 0$$ and $$\beta$$ is a limit (any $$a$$ or $$x$$):
$$\varphi(\alpha_{[x]},\beta,0_{[y]},\gamma)[n] = \varphi^{\omega[n]}(\alpha_{[x]},\beta[n],0_{[y]},\varphi(\alpha_{[x]},\beta,0_{[y]},\gamma-1) + 1_*)$$
When $$z = 0$$, $$x = 0$$ and $$\beta = 1$$:
$$\varphi(\beta,0_{[y]},\gamma)[n] = \varphi(1,0_{[y]},\gamma)[n] = \varphi^{\omega[n]}(1,0_{[y-1]},\varphi(1,0_{[y]},\gamma-1) + 1_*)$$
When $$z = 0$$, $$x > 0$$ and $$\beta$$ is a successor:
$$\varphi(\alpha_{[x]},\beta,0_{[y]},\gamma)[n] = \varphi^{\omega[n]}(\alpha_{[x]},\beta-1,0_{[y]},\varphi(\alpha_{[x]},\beta,0_{[y]},\gamma-1) + 1_*)$$
## Calculated Example for $$\zeta_0[n]$$
$$\zeta_0[n] = \varphi(2,0)[n] = \varphi^{\omega[n]}(1,0_*)$$
$$\zeta_0[3] = \varphi(2,0)[3] = \varphi^{\omega[3]}(1,0_*) = \varphi^3(1,0_*) = \varphi(1,\varphi(1,\varphi(1,0)))$$
## Calculated Example for $$\Gamma_0[n]$$
$$\Gamma_0[n] = \varphi(1,0,0)[n] = \varphi^{\omega[n]}(1_*,0)$$
$$\Gamma_0[3] = \varphi(1,0,0)[3] = \varphi^{\omega[3]}(1_*,0) = \varphi^3(1_*,0) = \varphi(\varphi(\varphi(1,0),0),0)$$
## Calculated Example for Small Veblen Ordinal $$SVO[n]$$
SVO is defined as follows:
$$SVO = \varphi(1,0_{[\omega]})$$
$$SVO[n] = \varphi(1,0_{[\omega]})[n] = \varphi(1,0_{[\omega[n]]})$$
$$SVO[3] = \varphi(1,0_{[\omega]})[3] = \varphi(1,0_{[\omega[3]]}) = \varphi(1,0_{[3]}) = \varphi(1,0,0,0)$$
https://wikieducator.org/Answer
# Question 1
Consider the following reaction:
N2 (g) + 3H2(g) <=> 2NH3 (g) ∆H = -ve
State 2 conditions which might be changed to increase the yield of ammonia gas.
Condition 1: An increase in pressure; the side with fewer moles of gas will be favoured, that is, the right-hand side, and thus more ammonia will be produced.
Condition 2: A decrease in temperature; the enthalpy change for the reaction is exothermic. This means that a decrease in temperature will favour the side which liberates energy, that is, the forward reaction. Thus more ammonia is produced.
# Question 2
(a) What is meant by a reversible reaction? (b) An increase in temperature of the following reversible reaction causes more of the CuSO4 (aq) to be formed. What can be deduced about the enthalpy of the reaction? Explain your deductions.
CuSO4 (aq) <=> CuSO4 . 5H2O
(a) A reversible reaction is one which can go both forward and backward.
(b) The enthalpy change for the forward reaction is exothermic, so the backward reaction is endothermic. An increase in temperature therefore favours the endothermic backward reaction, which absorbs the added heat, and more CuSO4 (aq) is formed.
# Question 3
Colourless N2O4 is in equilibrium with brown NO2 according to the following equation.
N2O4 <=> 2NO2
If the mixture is in a syringe and compressed, state what will be observed and explain the observations using Le Chatelier's Principle.
https://www.mathdoubts.com/unlike-fractions/ | # Unlike fractions
Two or more fractions that have different denominators are called unlike fractions.
## Introduction
If we observe two or more fractions, they may have different denominators, which makes them dissimilar. Such fractions are called unlike fractions.
Unlike fractions are formed when each quantity is divided into a different number of parts, which breaks the similarity between them. Hence, the fractions become dissimilar.
### Example
The concept of unlike fractions can be understood from an example with geometrical explanation.
1. Take a circle and split it into four equal parts. If we select a part from first circle, then the fraction is equal to $\large \frac{1}{4}$
2. Take another circle and split it into five equal parts. If we select three parts from second circle, then the fraction is equal to $\large \frac{3}{5}$
3. Take three circles and divide each circle into three equal parts. If we select eight parts from the remaining three circles, then the fraction is equal to $\large \frac{8}{3}$
In this way, two proper fractions $\dfrac{1}{4}$ and $\dfrac{3}{5}$, and an improper fraction $\dfrac{8}{3}$ are formed. In this case, the denominators (consequent) are different. Therefore, the fractions are called as unlike fractions.
Therefore, the unlike fractions can be either proper fractions or improper fractions or both.
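The same check can also be expressed programmatically. The short Python sketch below (added for illustration; not part of the original page) tests whether the three fractions from the example share a denominator:
from fractions import Fraction

fractions = [Fraction(1, 4), Fraction(3, 5), Fraction(8, 3)]
denominators = {f.denominator for f in fractions}
print(len(denominators) > 1)  # True: different denominators, so unlike fractions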
https://zbmath.org/?q=an:1055.31003&format=complete
A Taylor series condition for harmonic extensions. (English) Zbl 1055.31003
Let $$u$$ be harmonic on some open ball in $$\mathbb{R}^n$$ centred at the origin, and let $$x= (x',x_n)$$ denote a typical point of $$\mathbb{R}^n= \mathbb{R}^{n-1}\times \mathbb{R}$$. Suppose that the Taylor series of $$u(x',0)$$ and $$(\partial u/\partial x_n)(x',0)$$ about $$0'$$ converge when $$| x'|< r$$. Then it is shown that the Taylor series of $$u(x)$$ converges when $$| x'|+| x_n|< r$$. The proof uses elementary arguments.
MSC:
31B05 Harmonic, subharmonic, superharmonic functions in higher dimensions
35C10 Series solutions to PDEs
Keywords:
harmonic function; Taylor series
https://docs.moogsoft.com/en/configure-the-tivoli-eif-lam.html
#### Configure the Tivoli EIF LAM
The Tivoli EIF LAM allows you to retrieve Tivoli EIF (Event Integration Format) messages and send them to Moogsoft AIOps as events.
There is no UI integration for Tivoli EIF. Follow these instructions to configure the LAM.
Refer to IBM Tivoli Netcool/OMNIbus probes and gateways for further information on Tivoli products that generate EIF messages.
##### Before You Begin
Before you start to set up your Tivoli EIF LAM, ensure you have met the following requirements:
• You know the connection mode. It can be either Server or Client.
• You have identified the IP address and port of your Tivoli server.
• The port for your Tivoli connection is open and accessible from Moogsoft AIOps.
• You know whether your Tivoli server is configured to use UDP or TCP protocol.
If you are configuring the Tivoli EIF LAM for high availability, refer to High Availability Overview first. You will need the details of the server configuration you are going to use for HA.
##### Configure the LAM
Edit the configuration file to control the behavior of the Tivoli EIF LAM. You can find the file at $MOOGSOFT_HOME/config/tivoli_eif_lam.conf. See the Tivoli EIF LAM Reference and LAM and Integration Reference for a full description of all properties. Some properties in the file are commented out by default. Uncomment properties to enable them.
1. Configure the connection properties for your Tivoli server:
• mode: Client or Server.
• address: IP address or host name of the Tivoli server.
• port: Port of the Tivoli server.
• protocol_type: TCP or UDP.
2. Configure the LAM behavior:
• event_ack_mode: When Moogfarmd acknowledges events from the Tivoli EIF LAM during the event processing pipeline.
• num_threads: Number of worker threads to use when processing events.
3. Optionally configure the LAM identification and logging details in the agent and log_config sections of the file:
• name: Identifies events the LAM sends to the Message Bus. Defaults to EIF_LAM.
• capture_log: Name and location of the LAM's capture log file.
• configuration_file: Name and location of the LAM's process log configuration file.
4. Optionally configure severity conversions. See Severity Reference for further information and "Conversion Rules" in Data Parsing for details on conversions in general.
###### Example
The following example demonstrates a Tivoli EIF LAM configuration.
monitor:
{
name : "ITM EIF LAM",
class : "CSockMonitor",
mode : "SERVER",
address : "216.3.128.12",
port : 8412,
protocol_type : "TCP",
event_ack_mode : "queued_for_processing",
num_threads : 1
},
agent:
{
name : "EIF_LAM",
capture_log : "$MOOGSOFT_HOME/log/data-capture/tivoli_eif_lam.log"
},
log_config:
{
configuration_file : "$MOOGSOFT_HOME/config/logging/tivoli_eif_lam_log.json"
},
###### Configure the Tivoli EIF Utility
The Tivoli EIF LAMbot requires the Tivoli EIF utility in order to work. The utility replaces the standard mapping usually performed in the LAMbot and allows multiple mappings for different event types.
See Configure the Tivoli EIF Utility for details.
###### Configure for High Availability
Configure the Tivoli EIF LAM for high availability if required. See High Availability Overview for details.
###### Start and Stop the LAM
Restart the Tivoli EIF LAM to activate any changes you make to the configuration file, utility or LAMbot.
The LAM service name is tivolieiflamd.
See Control Moogsoft AIOps Processes for further details.
https://math.libretexts.org/TextMaps/Calculus/Supplemental_Modules_(Calculus)/Integral_Calculus/4%3A_Transcendental_Functions/4.2%3A_Logs_and_Integrals
# 4.2: Logs and Integrals
Recall that
$\int \dfrac{1}{x} dx = \ln |x| + C.$
Note that we have the absolute value sign since for negative values of $$x$$ the graph of $$\frac{1}{x}$$ is still continuous.
Example 1
Evaluate the integral
$\int \dfrac{dx}{1-3x}.$
Solution
Let $$u = 1-3x$$ and $$du = -3\, dx$$.
The integral becomes
\begin{align} -\dfrac{1}{3} \int \dfrac{du}{u} &= -\dfrac{1}{3}\ln |u| +C \\ &= -\dfrac{1}{3} \ln |1-3x| +C. \end{align}
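We can sanity-check this antiderivative by differentiating it, for instance with SymPy (an illustrative check added here, not part of the original example):
import sympy as sp

x = sp.symbols('x')
F = -sp.Rational(1, 3) * sp.log(1 - 3*x)  # candidate antiderivative
# differentiating F should recover the integrand 1/(1 - 3x)
assert sp.simplify(sp.diff(F, x) - 1/(1 - 3*x)) == 0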
Exercises
Evaluate the integrals of the following:
1) $$\dfrac{1}{(x-1)}$$
2) $$\dfrac{1}{(1-x)}$$
3) $$\cot x$$
4) $$\dfrac{(2x - 1)}{(x + 2)}$$
5) $$\dfrac{3x}{(x^2 + 1)^2}$$
6) $$\dfrac{1}{x \ln x}$$
7) $$\dfrac{1}{\sqrt{x - 1}}$$
8) $$\dfrac{(x^2 + 2x + 4)}{(3x)}$$
9) $$\dfrac{(x + 1)}{(x^2 + 2x)^3}$$
10) $$(4 - x)^5$$
11) $$\dfrac{1}{\sqrt{3x}}$$
12) $$\tan x$$
13) $$(\tan x)(\ln(\cos x))$$
14) $$\sec x$$ (hint: multiply top and bottom by $$\sec x + \tan x)$$
15) $$\csc x$$ (hint: Use the formula $$\csc x = \sec (\pi/2 - x)$$.)
Larry Green (Lake Tahoe Community College)
• Integrated by Justin Marshall.
https://bookdown.org/ajsage/_book/simulation-based-hypothesis-tests.html | Chapter 4 Simulation-Based Hypothesis Tests
In a simulation-based hypothesis test, we test the null hypothesis of no relationship between one or more explanatory variables and a response variable.
4.1 Performing the Simulation-Based Test for Regression Coefficient
To test for a relationship between a response variable and a single quantitative explanatory variable, or a categorical variable with only two categories, we perform the following steps.
1. Fit the model to the actual data and record the value of the regression coefficient $$b_1$$, which describes the slope of the regression line (for a quantitative variable), or the difference between groups (for a categorical variable with 2 categories).
2. Repeat the following steps many (say 10,000) times, using a "for" loop:
• Randomly shuffle the values (or categories) of the explanatory variable to create a scenario where there is no systematic relationship between the explanatory and response variable.
• Fit the model to the shuffled data, and record the value of $$b_1$$.
3. Observe how many of our simulations resulted in values of $$b_1$$ as extreme as the one from the actual data. If this proportion is small, we have evidence that our result did not occur just by chance. If this value is large, it is plausible that the result we saw occurred by chance alone, and thus there is not enough evidence to say there is a relationship between the explanatory and response variables.
These steps are performed in the code below, using the example of price and acceleration time of 2015 cars.
4.1.1 Simulation-Based Hypothesis Test Example
Cars_M_A060 <- lm(data=Cars2015, LowPrice~Acc060)
b1 <- Cars_M_A060$coef[2] # record value of b1 from actual data

# perform simulation
b1Sim <- rep(NA, 10000) # vector to hold results
ShuffledCars <- Cars2015 # create copy of dataset
for (i in 1:10000){
  #randomly shuffle acceleration times
  ShuffledCars$Acc060 <- ShuffledCars$Acc060[sample(1:nrow(ShuffledCars))]
  ShuffledCars_M <- lm(data=ShuffledCars, LowPrice ~ Acc060) #fit model to shuffled data
  b1Sim[i] <- ShuffledCars_M$coef[2] # record b1 from shuffled model
}
Cars_A060SimulationResults <- data.frame(b1Sim) #save results in dataframe
Now that we've performed the simulation, we'll display a histogram of the sampling distribution for $$b_1$$ when the null hypothesis of no relationship between price and acceleration time is true.
b1 <- Cars_M_A060$coef[2] # record value of b1 from actual data
Cars_A060SimulationResultsPlot <- ggplot(data=Cars_A060SimulationResults, aes(x=b1Sim)) +
  geom_histogram(fill="lightblue", color="white") +
  geom_vline(xintercept=c(b1, -1*b1), color="red") +
  xlab("Simulated Value of b1") + ylab("Frequency") +
  ggtitle("Distribution of b1 under assumption of no relationship")
Cars_A060SimulationResultsPlot
We can calculate the p-value by finding the proportion of simulations with $$b_1$$ values more extreme than the observed value of $$b_1$$, in absolute value.
mean(abs(b1Sim) > abs(b1))
## [1] 0
4.2 Simulation-Based F-Test
When testing for a relationship between the response variable and a categorical variable with more than 2 categories, we first use the F-statistic to capture the magnitude of differences between groups. The process is similar to the one above, with the exception that we record the value of F, rather than $$b_1$$.
4.2.1 Simulation-Based F-Test Example
Cars_M_Size <- lm(data=Cars2015, LowPrice~Size)
Fstat <- summary(Cars_M_Size)$fstatistic[1] # record value of F-statistic from actual data
# perform simulation
FSim <- rep(NA, 10000) # vector to hold results
ShuffledCars <- Cars2015 # create copy of dataset
for (i in 1:10000){
  #randomly shuffle size categories
  ShuffledCars$Size <- ShuffledCars$Size[sample(1:nrow(ShuffledCars))]
  ShuffledCars_M <- lm(data=ShuffledCars, LowPrice ~ Size) #fit model to shuffled data
  FSim[i] <- summary(ShuffledCars_M)$fstatistic[1] # record F from shuffled model
}
CarSize_SimulationResults <- data.frame(FSim) #save results in dataframe
Create the histogram of the sampling distribution for $$F$$.
CarSize_SimulationResults_Plot <- ggplot(data=CarSize_SimulationResults, aes(x=FSim)) +
  geom_histogram(fill="lightblue", color="white") +
  geom_vline(xintercept=c(Fstat), color="red") +
  xlab("Simulated Value of F") + ylab("Frequency") +
  ggtitle("Distribution of F under assumption of no relationship")
CarSize_SimulationResults_Plot
Calculate the p-value:
mean(FSim > Fstat)
## [1] 1e-04
4.3 Testing for Differences Between Groups
If we find differences in the F-statistic, we should test for differences between individual groups, or categories. In this example, those are given by $$b_1$$, $$b_2$$, and $$b_1-b_2$$. When considering p-values, keep in mind that we are performing multiple tests simultaneously, so we should use a multiple-testing procedure, such as the Bonferroni correction.
4.3.1 Differences Between Groups Example
b1 <- Cars_M_Size$coefficients[2] #record b1 from actual data
b2 <- Cars_M_Size$coefficients[3] #record b2 from actual data

# perform simulation
b1Sim <- rep(NA, 10000) # vector to hold results
b2Sim <- rep(NA, 10000) # vector to hold results
ShuffledCars <- Cars2015 # create copy of dataset
for (i in 1:10000){
  #randomly shuffle size categories
  ShuffledCars$Size <- ShuffledCars$Size[sample(1:nrow(ShuffledCars))]
  ShuffledCars_M <- lm(data=ShuffledCars, LowPrice ~ Size) #fit model to shuffled data
  b1Sim[i] <- ShuffledCars_M$coefficients[2] # record b1 from shuffled model
  b2Sim[i] <- ShuffledCars_M$coefficients[3] # record b2 from shuffled model
}
Cars_Size2_SimulationResults <- data.frame(b1Sim, b2Sim) #save results in dataframe
Sampling Distribution for $$b_1$$
Cars_Size2_SimulationResultsPlot_b1 <- ggplot(data=Cars_Size2_SimulationResults, aes(x=b1Sim)) +
geom_histogram(fill="lightblue", color="white") +
geom_vline(xintercept=c(b1, -1*b1), color="red") +
xlab("Simulated Value of b1") + ylab("Frequency") +
ggtitle("Large vs Midsize Cars: Distribution of b1 under assumption of no relationship")
Cars_Size2_SimulationResultsPlot_b1
p-value:
mean(abs(b1Sim)>abs(b1))
## [1] 0.0226
Sampling Distribution for $$b_2$$
Cars_Size2_SimulationResultsPlot_b2 <- ggplot(data=Cars_Size2_SimulationResults, aes(x=b2Sim)) +
geom_histogram(fill="lightblue", color="white") +
geom_vline(xintercept=c(b2, -1*b2), color="red") +
xlab("Simulated Value of b2") + ylab("Frequency") +
ggtitle("Large vs Small Cars: Distribution of b2 under assumption of no relationship")
Cars_Size2_SimulationResultsPlot_b2
p-value:
mean(abs(b2Sim)>abs(b2))
## [1] 0
Sampling Distribution for $$b_1 - b_2$$
Cars_Size2_SimulationResultsPlot_b1_b2 <- ggplot(data=Cars_Size2_SimulationResults,
aes(x=b1Sim-b2Sim)) +
geom_histogram(fill="lightblue", color="white") +
geom_vline(xintercept=c(b1-b2, -1*(b1-b2)), color="red") +
xlab("Simulated Value of b1-b2") + ylab("Frequency") +
ggtitle("Small vs Midsize Cars: Distribution of b1-b2 under assumption of no relationship")
Cars_Size2_SimulationResultsPlot_b1_b2
p-value:
mean(abs(b1Sim-b2Sim)>abs(b1-b2))
## [1] 0.0629
4.4 "Theory-Based" Tests and Intervals in R
The quantity Pr(>|t|) in the coefficients table contains p-values pertaining to the test of the null hypothesis that that parameter is 0.
The confint() command returns confidence intervals for the regression model parameters.
The aov() command displays the F-statistic and p-value, as well as the sum of squared residuals and sum of squares explained by the model.
4.4.1 Example "Theory-Based" Test and Interval
summary(Cars_M_A060)
##
## Call:
## lm(formula = LowPrice ~ Acc060, data = Cars2015)
##
## Residuals:
## Min 1Q Median 3Q Max
## -29.512 -6.544 -1.265 4.759 27.195
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 89.9036 5.0523 17.79 <2e-16 ***
## Acc060 -7.1933 0.6234 -11.54 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 10.71 on 108 degrees of freedom
## Multiple R-squared: 0.5521, Adjusted R-squared: 0.548
## F-statistic: 133.1 on 1 and 108 DF, p-value: < 2.2e-16
confint(Cars_M_A060, level=0.95)
## 2.5 % 97.5 %
## (Intercept) 79.888995 99.918163
## Acc060 -8.429027 -5.957651
4.4.2 Example Theory-Based F-Test
summary(Cars_M_A060)
##
## Call:
## lm(formula = LowPrice ~ Acc060, data = Cars2015)
##
## Residuals:
## Min 1Q Median 3Q Max
## -29.512 -6.544 -1.265 4.759 27.195
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 89.9036 5.0523 17.79 <2e-16 ***
## Acc060 -7.1933 0.6234 -11.54 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 10.71 on 108 degrees of freedom
## Multiple R-squared: 0.5521, Adjusted R-squared: 0.548
## F-statistic: 133.1 on 1 and 108 DF, p-value: < 2.2e-16
Cars_A_Size <- aov(data=Cars2015, LowPrice~Size)
summary(Cars_A_Size)
## Df Sum Sq Mean Sq F value Pr(>F)
## Size 2 4405 2202.7 10.14 9.27e-05 ***
## Residuals 107 23242 217.2
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
confint(Cars_A_Size)
## 2.5 % 97.5 %
## (Intercept) 36.88540 47.736327
## SizeMidsized -16.48366 -1.713069
## SizeSmall -22.55789 -8.759618
https://devzone.nordicsemi.com/questions/scope:all/sort:activity-desc/tags:ppk/page:1/
# 39 questions
## Is PPK(nrf Power Profiler Kit) a rubbish tool?
I cannot suffer this tool anymore. As u can see in the pic below, it provides a cursor mode to measure the currents of X1/X2. However, users always need to know the average current between X1 and X2, while ... (more)
## Any tutorial on how to read the measurements reported by the PPK?
I've gone through the user manual for the Nordic Power Profiler Kit, but I can't find any basic information on how to understand the plots and the measurements reported under the Average and Trigger windows. For example, are ... (more)
## Preparing the development kit board
Hi
I finished preparing the nrf52832 board to use ppk and measured it with ppk. I am trying to repair the nrf52832 board. However, I do not understand the contents of ppk document, apply a jumper on P22. Could you ... (more)
## Current measurements on nrf52840 pdk with PPK
ble_app_gls_pca10056_s140.hex
Hi, I am new in this field and i am trying to do current measurement on the nrf52840 PDK using PPK, in sleep mode(sd_power_system_off) i get average 40 micro amp which suppose to be 0,6! while ... (more)
## PPK "Range not detected" error
I am using the PPK with SW v1.1.0 on a windows virtual machine on my mac. I'm testing custom hardware using the PPK regulator to supply power to my hardware.
There seem to be a few issues ... (more)
## read back protection puts tag to high current consumption
Hi, I am seeing an issue where after enabling read back protection (rbp) on NRF52 and using sdk 12.1 the tag goes to state where it drains a lot of current.
command used: nrfjprog -f NRF52 --rbp ALL
I ... (more)
## ppk broken ?
Seems PPK is broken, it measures ~70mA no matter what I connect it to, even only measuring on a DK in shutdown.
I suspect this happend during measuring on a custom board, an j-link debugger was connected to the custom ... (more)
## Power Profiler Kit tremendous delay, what's going on?
Hello, I'm trying to measure the current profile with the PPK (HW Rev:1.1.0, SW: 1.1.0) but I see a lot of strange behaviors.
My firmware initializes everything and the waits a button event. When ... (more)
## Power consumption reading a large fifo - Easydma list vs Cpu reading
I am reading a large FIFO buffer from a sensor and there are at least two ways to do it - by simply reading the FIFO using the CPU and SPI in chunks of 255 byte blocks, or using EasyDma list ... (more)
## PPK + DK, peripheral current not showing up in PPK [closed]
I've connected some sensors using I2C and SPI to a DK and their power consumption is not being reflected in the PPK. Without any BLE acitivity, the current is almost zero whereas I'd expect atleast a constant 150uA ... (more)
## PPK logging very long periods
Hi.
I'm using the Power Profiler Kit and the PPK desktop application to log the power consumption of my device. This works well for short time spans.
However, I'd also like to log longer periods. Like days or ... (more)
## PowerProfilerKit: Negative current / Backfeeding
Hi, Is it safe to backfeed some current into the PPK. I use the PPK to measure the power consumption of a battery powered IoT device. I disconnect the battery and use the PPk to supply my device. However, when ... (more)
## PPK Software
I was just wondering if theres plans to migrate the PPK software to be an app in the NRFConnect tool. I love the NRFConnect tool and I wasnt able to get the PPK software running on a mac. I ended ... (more)
## PPK v1.1 hangs on linux (bug report)
It doesn't look like ppk is in the Nordic github, so leaving this here:
On linux, ppk v1.0 work great, but v1.1 hangs and takes forever to draw the windows. I traced the problem to the logging ... (more)
I am using a PPK with v1.0.0, but elsewhere in DevZone I see references to later versions of the software. Where can I download the later versions from? Have I missed a link?
Thanks
## PPK cancelling 'start logging'
Hi. Great that you have added logging capability to the PPK desktop app. This was exactly what I needed.
I noticed one minor issue with the logging: When starting logging, it asks for a file to save the log to ... (more)
## PPK Question [closed]
Hello,
We encounter the problem after upgraded the PPK software. There are different results when using PPK v1.0.0 and v1.1.0.
1. Using PPK v1.0.0
2. Using PPK v1.1.0
(The highlighted spikes are active by ... (more)
## PPK and automated testing
Hi,
I'm thinking about implementing automated testing of current consumption using the PPK, and in the PPK blog it says: "We are planning on also making a Python API that developers can use to create their own automated tests ... (more)
## ppk install 2
Hello
I have a problem...
'python_packages.bat' file can not be installed...
error message: 'python' is not an internal or external command, executable program, or batch file.
How to solve...
Thank you!!
## PPK: Calibration sequence question
When the PPK calibrates and turns off the DUT, does the nRF52832 on the PPK disconnect from the External DUT lines? My DUT has a LOT of capacitance on it, and it takes several seconds to fully discharge the caps ... (more)
## Problem using nRF52 with PPK
Hello,
I am working with the nRF52 DK and the PPK (Power Profiler Kit), however I can only program the DK when the PPK is connected. Once I disconnect the PPK the message error is: "No Device found'.
Regards.
## Power Profiler Kit is not detected
Hello I have this error:
Checking installed packages
pyside: 1.2.4
pyqtgraph: 0.10.0
numpy: 1.12.1
pynrfjprog: 9.0.0
Warning: The software is tested with PyQtGraph 0.9.10, and may not work with your ...
(more)
## PPK Current Measurement
Hi,
Just wanted to confirm what the average current measured by the PPK represents. So for example if I have AVG in average window as 2.5mA, is that only the BLE (i.e. transmission) current or all of the ... (more)
## Power Profiler Kit Offset/Miscalibrated?
Hi there,
I'm using the PPK (v1.1.0) to measure my nRF52 DK (v1.1.1). My sample programm is very simple: set up the LFXO, RTC with 8s CC-periode and the BSP_LEDs:
const nrf_drv_rtc_t rtc = NRF_DRV_RTC_INSTANCE(0 ...
(more)
## PPK external trigger
Hello:
I tried to find this in the documentation but could not. Regarding the external trigger to the PPK (TRIG IN pin). Does this require a high-to-low transition, a low-to-high transition, or a pulse? If it requires a pulse, what ... (more)
## PPK desktop application won't start. [closed]
I installed these softwares successfully:
1. python 2.7.13
2. SEGGER J-Link V5.12g
3. nRF5x-Command-Line-Tools_9_0_0_Installer
4. the PPK software package
My PC platform is 64-bit Microsoft Windows.
I just could not open the PPK desktop application by click ppk.py shortcut, but ... (more)
## Is it possible to select the debug port used on the nRF52 DK?
If I understand the nRF52 DK, there's 3 physical debug-out ports on the board. One if for the nRF52 on the DK, which is also brought out through P5. The second one is brought out through P20, and the ... (more)
## PPK reports different average current if trigger is running.
Take a look at these screenshots. With the trigger enabled, the average current consumption is reported at 13.21uA. With the trigger disabled, the average current consumption is 5uA.
So which one is correct?
Average window with Trigger On
Average ... (more)
## Power Profiler Kit fails to measure milliamps correctly
Today when I started power profiler kit, it suddenly started to show peak values like 80 mA. This has not happened before, but the values with the same hardware have been around 20-30 mA. So I checked what the PPK ... (more)
## Power Profiling Kit RTT Documentation?
We would like to use the PPK for automating some of our tests. Is there any documentation on how to drive the PPK over RTT? If push really comes to shove, we can always read what's in ppk.py ... (more)
https://asmedigitalcollection.asme.org/energyresources/article/126/4/249/461266/Fallacies-of-a-Hydrogen-Economy-A-Critical | This article presents a critical analysis of all the major pathways to produce hydrogen and to utilize it as an energy carrier to generate heat or electricity. The approach taken is to make a cradle to grave analysis including the production of hydrogen, the conversion of hydrogen to heat or electricity, and finally the utilization of that heat or electricity for a useful purpose. This methodology shows that no currently available hydrogen pathway, irrespective of whether it uses fossil fuels, nuclear fuels, or renewable technology as the primary energy source to generate electricity or heat is as efficient as using the electric power or heat from any of these sources directly. Furthermore, electric vehicles using batteries to store electricity are shown to be more efficient and less polluting than fuel cell powered vehicles using energy stored in hydrogen.
## 1 Introduction
Energy is a mainstay of an industrial society. It is, therefore, not surprising that many prestigious organizations have attempted to analyze the future need for energy and the availability of various energy sources. What is surprising is that despite the repeated efforts of both governmental and private organizations over the past fifty years, no consistent energy policy has emerged from these studies. Until a few years ago, all of the energy studies examined the present and future availability of fossil, nuclear, and renewable energy sources. However, during the past few years a “new” paradigm emerged almost abruptly, proposing that hydrogen and the fuel cell are the ultimate means for generating electricity, and the best choice to supply transportation-energy needs. This paradigm shift was given official sanction for the transportation sector when United States President George W. Bush unveiled the administration’s hydrogen initiative in his 2003 State of the Union Address with the following statement: “Tonight I am proposing $1.2 billion in research funding so that America can lead the world in developing clean hydrogen powered automobiles” [1]. The use of hydrogen to provide electricity and other needs had been endorsed earlier by the U.S. Department of Energy in documents such as “National Vision of America’s Transition to a Hydrogen Economy—to 2030 and Beyond” [2] and “National Hydrogen Energy Roadmap” [3].
According to the Committee on Alternatives and Strategies for Future Hydrogen Production and Use, appointed in 2002 by the National Academies National Research Council, “the vision of a hydrogen economy is based on two expectations: 1) that hydrogen can be produced from domestic energy sources in a manner that is both affordable and environmentally benign; and, 2) that applications using hydrogen…can gain market share in competition with the alternatives” [4]. The purpose of this study is to ascertain whether or not technologies that are currently available or close to commercialization can fulfill these expectations and justify proposing hydrogen as the future fuel for our nation’s economy.
Since this is not the first time that engineers have analyzed the future supply of energy, it is useful to examine some of the past efforts, in particular, two significant studies that were conducted independently about 25 years ago. In 1979, the National Academy of Science released the final report of its Committee on Nuclear and Alternative Energy Systems in a book entitled Energy in Transition 1985 to 2010 [5]. Participants in this study included some of the most prestigious energy experts in the country under the co-chairmanship of Harvey Brooks and Edward Ginzton. The study concluded that there are several plausible options for an indefinitely sustainable energy supply, but also noted that, “Energy policy involves very large social and political components…of conflicting values and political interests that cannot be resolved except in the political arena.”
A similar study was conducted by Resources for the Future, Inc. and its results were also published in 1979 as Energy in America’s Future [6]. The study concluded that, “There are many reasons why US energy policy remains in dispute,” and identified as a principal reason for this dispute that: “There is disagreement—and even widespread ignorance—about some fundamental facts.” Although there are some significant differences between these two important studies, they have one common factor: Neither of them mentions the concept of a hydrogen economy and the word hydrogen does not appear in either of their indexes.
It is not possible here to present the details of these two historically important studies. However, some of their conclusions and recommendations are as valid now as they were twenty-five years ago. Among the recommendations of the Committee on Nuclear and Alternative Energy Systems of the National Academy were:
• “Conservation deserves the highest immediate priority in energy planning.”
• “The most important intermediate-term measure is developing synthetic fuels from coal.”
• “Perhaps equally important is the use of coal and nuclear power to produce electricity… .”
Some caveats were, however, attached to the last recommendation:
• “The safety of nuclear reactors is a controversial topic.”
• “The possibility that terrorists…might divert nuclear material is a matter of concern.”
• “Policies for disposal of radioactive waste have not been developed.”
For the direct use of solar energy, the committee noted that, “Heating buildings and domestic water and providing industrial and agricultural process heat and low pressure steam are by far the simplest and most economical applications of solar energy…This group of technologies is the most suitable for deployment in the intermediate term… .”
The above recommendations could be implemented immediately without the use of hydrogen. The fact that hydrogen is not a primary fuel source and should not be included in an inventory of energy resources was clearly recognized 25 years ago. This makes it all the more difficult to understand why and how a mere 25 years later the idea of a hydrogen economy came to be perceived as a cornerstone of our future national energy policy.
On two points, these previous energy assessments are in agreement: fossil and nuclear energy resources are finite and the cost of energy will continue to increase. Consequently, there is, at least in principle, agreement that energy should be used and distributed with the highest possible efficiency and wasteful energy conversion technologies should be avoided. Furthermore, there is wide agreement that, in order to arrive at technically viable conclusions, the efficiency of energy conversion should be based upon a complete “cradle-to-grave” analysis that includes each step in the energy production and utilization chain, rather than the efficiency of any single step in the overall chain. A similar approach for ground transportation systems that takes into account all the steps necessary to make the hydrogen from a primary energy source, get it into the vehicle fuel tank and then power the wheels is called a “well-to-wheel” analysis.
The concept of a hydrogen economy was proposed as far back as the 1870s, as a fanciful speculation of Jules Verne’s in his novel The Mysterious Island [7]. Hydrogen production was examined extensively in the 1970s by experts for the Institute of Nuclear Energy in Vienna and the Electric Power Research Institute [8,9]. The basic idea was to generate hydrogen by high-temperature reactions driven by nuclear heat and then use the hydrogen to generate electricity, thereby replacing fossil fuels. The results of this study showed, however, that generating hydrogen with high-temperature thermal methods was inferior in cost and efficiency to generating electricity from nuclear reactors and then producing hydrogen by electrolysis [9]. The study also showed that using the electricity from the nuclear plants directly was preferable, in cost and efficiency, to using the hydrogen path to generate electricity with a fuel cell. Despite the conclusions of this extensive study, the idea of a hydrogen economy has been revived in the past decade, based upon assumptions that need to be examined objectively.
## 2 Overview of Hydrogen Production and Utilization
Hydrogen is abundant on Earth, but only in chemically bound form. In order to use hydrogen as a fuel, it must be available in unbound form. Because of the chemical reaction energies involved, a substantial energy input is needed to obtain unbound hydrogen, and this input exceeds the energy released by the same hydrogen when it is used as a fuel. For example, to split water into hydrogen and oxygen according to the reaction
$\mathrm{H_2O} \rightarrow \mathrm{H_2} + \tfrac{1}{2}\mathrm{O_2}$
120 MJ/kg-hydrogen is needed (all gases at 25°C), while the reverse reaction of combining hydrogen and oxygen to give water (all gases at 25°C) ideally yields 120 MJ/kg-hydrogen. But because no real process can be 100% efficient, more than 120 MJ/kg must be supplied to the splitting reaction, while less than 120 MJ/kg of useful energy can be recovered from the recombination. To evaluate the losses, it is therefore necessary to examine the energetics of hydrogen production processes quantitatively.
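The asymmetry between the two directions can be made concrete with a short calculation. The sketch below is illustrative only: the 70% splitting and 50% recovery efficiencies are assumptions in the range this paper discusses later (Secs. 3.3 and 4), not measured values.

```python
# Round-trip energetics of hydrogen as an energy carrier (illustrative).
# The two step efficiencies are assumptions, roughly in the range this
# paper cites for electrolysis (~70%) and present fuel cells (~50%).

LHV_H2 = 120.0      # MJ per kg of hydrogen (lower heating value, gases at 25 C)
eta_split = 0.70    # fraction of input energy stored in the hydrogen
eta_recover = 0.50  # fraction of the hydrogen energy recovered as useful output

energy_in = LHV_H2 / eta_split     # more than 120 MJ must be supplied
energy_out = LHV_H2 * eta_recover  # less than 120 MJ comes back

print(f"Input to produce 1 kg H2: {energy_in:.0f} MJ")
print(f"Useful energy recovered:  {energy_out:.0f} MJ")
print(f"Round-trip efficiency:    {energy_out / energy_in:.0%}")
```

With these assumed values, about 171 MJ must be supplied per kilogram of hydrogen while only 60 MJ come back, a round trip of roughly 35%.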
Figure 1 shows all the major pathways to produce hydrogen and to utilize it as an energy carrier. The top row shows the primary energy sources: fossil fuels, nuclear materials, and renewable sources. The next three rows show the major processing steps for conversion of the primary energy into hydrogen. Below the hydrogen row are the two methods of using hydrogen in energy applications: one is to combust the hydrogen to produce heat for various applications, and the other is to generate electricity from the hydrogen by means of a fuel cell.
Fossil fuels, nuclear energy, solar thermal (including OTEC), biomass, wind, and photovoltaics can all be used to generate electricity. All of these, except photovoltaics and wind, generate electricity by first producing heat, which is then converted to mechanical energy, which, in turn, is finally converted to electricity. Photovoltaic cells generate electricity directly from solar radiation, while wind turbines directly generate mechanical energy and then electricity. In principle, some of the heat producing technologies can also make hydrogen by thermolysis of water, i.e., heating of water to a sufficiently high temperature (greater than 3000 K) to break it into hydrogen and oxygen.
Processes to the right of the heavy vertical line in Fig. 1 can produce hydrogen from renewable or nuclear sources without using either electrolysis or thermolysis of water [10]. For example, biomass may be chemically converted to hydrogen by processes similar to those used with fossil fuels, or it may also be converted to hydrogen by biological conversion processes. Photochemical and photoelectrochemical reactions can produce hydrogen directly with solar radiation input. Thermochemical and hybrid thermochemical/electrochemical cycles use nuclear or solar thermal heat and electricity to drive chemical cycles that produce hydrogen from water. However, a detailed evaluation of the potential of the technologies to the right of the heavy line in Fig. 1 is not the objective of this article, because none of them is anywhere close to commercialization, and they should be considered largely as topics for future R&D, not as viable technologies for a national energy policy [4,10].
## 3 Efficiency of Hydrogen Production and Utilization Pathways
Each of the pathways for production and use of hydrogen will now be considered. Those to the left of the heavy vertical line in Fig. 1 will be quantitatively analyzed, while those to the right, which are in a state of research, will be described and discussed. Lower heating values are used for all substances throughout this paper.¹
### 3.1 Hydrogen Produced From Fossil Fuels via Chemical Reactions
Chemical conversions of fossil fuels to hydrogen, from natural gas and petroleum fractions in particular, are well-established, commercial technologies. The use of coal as a raw material for hydrogen production has been studied extensively, but it is not widely practiced in the U.S.
#### 3.1.1 Hydrogen to Heat via Combustion
Table 1 shows the efficiency of supplying natural gas, or hydrogen made from natural gas, for combustion applications. For low-pressure uses, such as generating electricity and home heating, the efficiency of delivering hydrogen is only about 69%, whereas it is 88% for natural gas. With a typical combustion efficiency of 85% [13], the overall efficiency of utilizing hydrogen is about 59%, compared to 76% for natural gas; that is, supplying heat via hydrogen consumes roughly 29% more natural gas, because the energy efficiency of converting natural gas to hydrogen and then storing, transmitting, and distributing it is low. For heat generation, hydrogen could be combusted at an efficiency of 85%, yielding an overall cradle-to-grave efficiency of 57%, compared to 75% for direct natural gas combustion. Thus, to use hydrogen in this way would require 32% more natural gas, and would produce 32% more carbon dioxide, than burning the natural gas directly.
To supply compressed hydrogen as a fuel for conventional spark-ignition engines, at 62% efficiency, requires about 32% more natural gas than supplying the natural gas directly as engine fuel, at 82% efficiency. To supply liquid hydrogen, at 57% efficiency, would require 44% more natural gas, and would produce that much more carbon dioxide, than supplying the natural gas to spark-ignition engines. This is because, even though hydrogen and natural gas burn with essentially the same efficiency in the engine [12], the compression or liquefaction of hydrogen for storage on a vehicle requires substantially more energy. Results for fossil fuels other than natural gas as hydrogen sources are even less favorable to hydrogen, because petroleum and coal are more difficult to convert to hydrogen than natural gas is.
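The percentages above are simply ratios of the chain efficiencies. A minimal sketch, using the efficiencies quoted in the text (the helper function is ours):

```python
# Fractional increase in primary fuel when the same delivered energy is
# supplied through a less efficient chain. Efficiencies are the well-to-tank
# values quoted in the text.

def extra_fuel(eta_direct: float, eta_via_h2: float) -> float:
    return eta_direct / eta_via_h2 - 1.0

print(f"Compressed H2 vs direct NG (engine fuel): {extra_fuel(0.82, 0.62):.0%}")
print(f"Liquid H2 vs direct NG (engine fuel):     {extra_fuel(0.82, 0.57):.0%}")
print(f"H2 combustion vs direct NG (heat):        {extra_fuel(0.75, 0.57):.0%}")
```

This reproduces the 32%, 44%, and 32% figures cited above.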
It can be concluded that to make hydrogen from fossil fuels and then to burn the hydrogen for generating heat or fueling internal combustion engines is less efficient than using the fossil fuel directly.
#### 3.1.2 Hydrogen to Electricity
Table 2 shows the efficiency of producing electricity from natural gas via hydrogen. If hydrogen made from natural gas is used in a fuel cell to produce electricity, the overall well-to-grid efficiency of 35% is less than the well-to-grid efficiency of 38% obtained by burning the hydrogen to produce electricity in a gas-turbine combined cycle. Either way, generating power for the grid with hydrogen is less efficient than burning the natural gas directly, for which a well-to-grid power generation efficiency of 48% can be achieved with present technology. Results for other fossil fuels are similar. It can be seen, therefore, that using hydrogen generated from fossil fuels to produce electricity consumes more fossil fuel and generates more carbon dioxide than generating the electricity from the fossil fuel directly. It may be concluded that the use of hydrogen made from fossil fuel to generate electricity for the grid is wasteful and increases carbon dioxide emissions.
The use of hydrogen fuel cells in vehicles is considered in detail in Sec. 4.
### 3.2 Hydrogen Produced From Fossil, Nuclear, and Renewable Sources via Thermolysis
Thermolysis is the splitting of water into hydrogen and oxygen by heating it; the heat can come from fossil, nuclear, or renewable sources. The production of hydrogen by thermolysis has been explored in detail [14,15]. It was found that, because water is a very stable substance, only at temperatures higher than 3000°C (5400°F) does the equilibrium significantly favor decomposition into hydrogen and oxygen. A catalyst might increase the rate of reaction, but it cannot change the reaction equilibrium; the equilibrium-versus-temperature relationship is fixed by the thermodynamics of the reaction, so an extremely high temperature is unavoidable. In principle, the reaction can be driven at somewhat lower temperatures by separating the hydrogen and oxygen from the water as they are formed, but unless the hydrogen and oxygen are separated from each other at the reaction temperature, they will react back to water as the mixture cools. Separations at such high temperatures are not technically feasible, because it is virtually impossible to find suitable materials for the necessary hardware.
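A rough estimate shows why the required temperature is so extreme. Treating the reaction enthalpy and entropy as constant at their 25°C values (a crude assumption, since both drift with temperature), the Gibbs energy ΔG = ΔH − TΔS of the splitting reaction changes sign only at several thousand kelvin:

```python
# Crude estimate of where water splitting stops being thermodynamically
# uphill: delta_G = delta_H - T * delta_S = 0. Assumes delta_H and delta_S
# stay at their 25 C values, which is only good for an order of magnitude.

dH = 241.8e3  # J/mol for H2O(g) -> H2 + 1/2 O2 (lower-heating-value basis)
dS = 44.4     # J/(mol K), standard entropy change of the same reaction

T = dH / dS
print(f"delta_G = 0 near {T:.0f} K ({T - 273:.0f} C) under this assumption")
```

Significant but still partial dissociation sets in somewhat below this crossover, consistent with the greater-than-3000°C figure quoted above.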
Therefore, it can be concluded that thermolysis of water is technically not a practical way to produce hydrogen, no matter what source of heat is used.
### 3.3 Hydrogen Produced From Fossil, Nuclear, and Renewable Sources via Electrolysis
In Fig. 1, the dashed box isolates that portion of the pathways in which electricity is used to produce hydrogen via electrolysis and the hydrogen subsequently is used to produce electricity via a fuel cell. These steps are common to all energy sources that produce hydrogen by electrolysis, including fossil fuels, nuclear materials, and renewables. These pathways can be evaluated by examining the electrolysis and hydrogen utilization steps.
#### 3.3.1 Hydrogen to Heat via Combustion
Hydrogen produced by electrolysis could be used to produce heat by combustion. However, the efficiency of producing hydrogen from electricity by means of electrolysis is only about 70% [16], and burning the hydrogen at an efficiency of 85% [13] yields heat with an overall efficiency of about 60% (0.70 × 0.85 ≈ 0.60), while electricity can be converted to heat at essentially 100% efficiency. It is therefore concluded that using hydrogen made by electrolysis to produce heat is inefficient and wasteful.
#### 3.3.2 Hydrogen to Electricity via Fuel Cell
The use of electricity to generate hydrogen, followed by the use of this hydrogen to generate electricity again via a fuel cell, is illustrated in Fig. 2. This process is very inefficient because a sequence of lossy steps is involved. Figure 2 shows the estimated present and the highly optimistic future efficiencies of the electrolysis and fuel cell steps. It would take 2.9 kW h of electricity input to produce 1 kW h of electricity output with present technologies, while even with optimistic advanced efficiencies, 1.9 kW h of input are required to yield 1 kW h of output. The difference between input and output electricity is wasted. Thus, the output electricity would cost from 1.9 to 2.9 times the cost of the input electricity. Moreover, this cost ratio considers only the cost of the input electricity; it does not include the capital cost and non-electrical operating costs of the electrolysis, fuel cell, and hydrogen storage equipment, nor the cost or energy necessary for compression or liquefaction of hydrogen for storage. Since these results do not depend on the source of the original electricity or upon the use of the electricity, they also apply to using electricity to power a fuel-cell vehicle.
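The 2.9 and 1.9 kW h figures are the reciprocals of the products of the step efficiencies. In the sketch below, the step values are assumptions chosen to reproduce those cited numbers; Fig. 2 gives the authors' exact values.

```python
# Electricity -> electrolysis -> H2 -> fuel cell -> electricity.
# Step efficiencies are assumptions that reproduce the cited 2.9 and
# 1.9 kWh-in-per-kWh-out figures; see Fig. 2 for the exact values.

cases = [
    ("present",    0.70, 0.50),  # electrolysis, fuel cell
    ("optimistic", 0.75, 0.70),
]
for label, eta_elec, eta_fc in cases:
    eta = eta_elec * eta_fc
    print(f"{label:10s}: round trip {eta:.0%}, {1 / eta:.1f} kWh in per kWh out")
```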
There may be niche applications where weight is a more important factor than cost, such as for space vehicles, or where incremental electricity available from stored hydrogen may be so valuable, such as at times of peak electricity demand, that the extra cost could be acceptable. However, such niche applications do not suggest a major role for hydrogen in a national energy policy.
This analysis shows that any path that uses electricity to produce hydrogen and then uses a fuel cell to convert this hydrogen back to electricity has low energy efficiency and adverse economic impact, no matter what the source of the original electricity. A large portion of the original resource is wasted, in both an energetic and an economic sense. Furthermore, because of the inefficiency of the process, pollution will increase.
It is concluded that any pathway that includes the conversion of electricity to hydrogen by electrolysis, and then conversion of the hydrogen to electricity via a fuel cell is inefficient and not a desirable basis for an economically and environmentally sound energy policy.
#### 3.3.3 Hydrogen to Electricity via Combined Cycle Power Plant
The efficiency of converting hydrogen to electricity via a gas-turbine combined cycle is about 55%. Though this is more efficient than present fuel cell systems, it is lower than the optimistic value for fuel cells, and the combined-cycle efficiency is not expected to rise to that optimistic fuel cell level. Since it has already been demonstrated that even optimistic fuel cells are an inefficient way of using hydrogen produced by electrolysis, the use of hydrogen in a combined cycle power plant, which is even less efficient, is clearly not desirable. Therefore, it is concluded that converting electricity to hydrogen and using the hydrogen to generate electricity via a combined cycle power plant is inefficient and is not a desirable process for an energy policy.
Summary of Secs. 3.1–3.3. Based on the analyses in Secs. 3.1–3.3, we conclude that all of the hydrogen production pathways to the left of the heavy vertical line in Fig. 1 should be eliminated from a national energy policy that aims to provide energy efficiently and economically.
### 3.4 Hydrogen Produced From Renewable or Nuclear Sources that do not Utilize Thermolysis or Electrolysis
According to a recent study by the National Academies [4], the pathways to hydrogen production on the right of the heavy, solid vertical line in Fig. 1 are still research challenges that have not reached a point where they can be considered for a viable national energy policy.
The biomass pathway is actually several pathways by which biomass can be converted into hydrogen. These include: gasification of biomass, anaerobic digestion, and algal photolysis. The gasification pathway is the most developed, but has not yet reached the commercial stage.
The photochemical pathway includes decomposition of water by sunlight (photolysis) using semiconductor “sensitizer” particles, and a combination of electrolysis and photolysis (photoelectrochemical or PEC processes) in which semiconductor electrodes utilize an externally applied electrical potential to supplement the solar radiation input to drive the reaction. Since much of the energy is supplied by solar radiation, PEC systems potentially are more efficient with respect to electricity use than electrolysis alone.
None of these renewably based processes has been developed to commercial status as yet, and the presently available information is not sufficient to reach conclusions as to their costs and efficiencies. A review of the DOE renewable energy programs, published in 2000 [10], recognized that these renewable energy pathways are challenges for longer-term research, and recommended that the Department of Energy’s Office of Power Technology Hydrogen Research Program attempt to develop “better methods for producing hydrogen directly from sustainable energy sources without using electricity as an intermediate step.” These other methods, therefore, are not useful at this time in analyzing the viability of a hydrogen economy by the year 2030, the target date in the 2002 DOE strategy. R&D for these hydrogen production processes should be continued, and their potential should be evaluated separately as they approach commercialization.
No conclusions can be reached at this time regarding the efficiency of producing hydrogen from renewable sources by routes without thermolysis or electrolysis. But unless commercial and engineering feasibility can be demonstrated, they cannot be considered as candidates for a national hydrogen economy.
The “Other” pathway includes thermochemical cycles and hybrid electrochemical/thermochemical cycles, as well as processes that may some day be invented. The thermochemical and hybrid cycles can be driven by nuclear or solar thermal heat. The goal of these thermochemical cycles is to circumvent the need for the extremely high temperatures required to split water directly, by carrying out the splitting in several intermediate steps that ultimately result in the same net reaction.
In 1981, Shinnar et al. [9] studied ten thermochemical and hybrid processes with a nuclear reactor as the heat and electricity source. Those processes, with some of the key chemicals involved, are: Mark 9 (iron, chlorine); Agnes (iron, magnesium, chlorine); Schulten (methane, methanol, sulfuric acid); Westinghouse hybrid (sulfuric acid); Cesium (hypothetical); Institute of Gas Technology (copper salts); Argonne National Laboratory (ammonia, potassium, mercury); Hitachi (Na2CO3, I); Oak Ridge National Laboratory (Cu, Ba, F); and Los Alamos National Laboratory (cesium, chlorine). They used a screening method that tested how candidate processes compared thermodynamically and economically with electrolysis using electricity generated by the same nuclear reactor. Their conclusion was that, “We can sum up our results by saying that none of the cycles proposed thus far has any chance of being economically attractive compared to electrolysis” [9]. A similar conclusion was reached in 2003 by Penner [19], although he believes that thermochemical hydrogen production may someday be of interest if nuclear breeder reactors become the primary energy supply source. A Zn-ZnO cycle driven by concentrated solar energy has recently been proposed [14], but it has not been fully evaluated.
No thermochemical or hybrid thermochemical process for hydrogen production has as yet been shown to be thermodynamically or economically competitive with electric power generation by the same heat source, followed by electrolytic hydrogen production. It is not possible to rule out future success for such a process, but until fully established it cannot be the basis for an energy policy.
## 4 Hydrogen for Transportation
There is wide agreement that a paradigm shift in transportation fuel will be necessary in the near future [20]. This shift will be both painful and expensive, because petroleum is a unique resource, and the magnitude of the global institutions that have grown from the symbiosis between oil and the automobile, as well as the customer satisfaction associated with this technology, makes a change very difficult. A generation’s worth of effort to develop workable alternative fuels has not been successful. As of the year 2000, alternative fuel use in the U.S. amounted to less than 0.4 billion gallons, compared to 166 billion gallons of petroleum fuel consumption [21].
A valiant effort was mounted a few years ago in California to introduce electric vehicles through the so-called ZEV mandate [22]. Its vehicle of promise was a battery-powered electric car with zero tailpipe emissions. However, this effort failed in the marketplace, largely because of the long time required to charge the batteries, the high initial cost of the vehicles, and their limited driving range. The ZEV mandate has now been rationalized as paving the way for fuel cell vehicles, which are envisioned as the ultimate goal in the latest revision of the California Air Resources Board (CARB) rule [23]. If EV technology, which could rely on a largely existing energy transmission infrastructure, failed, then a new technology that has no existing infrastructure can overcome the obstacles inherent in introducing an alternative fuel only if it is more efficient, less expensive, and environmentally more benign than the alternatives.
To analyze whether hydrogen is a suitable technology for transportation is more complicated than to assess whether hydrogen fuel cells are a suitable technology for generating electricity. An analysis of the hydrogen vehicle concept must take into account all the steps necessary to make the hydrogen from a primary fuel source, get it into the fuel tank, and then power the wheels via a prime mover and the drive train. A comparison between hydrogen vehicles and other technologies that includes all the steps in the process, as shown in Fig. 3, is called a “well-to-wheel” analysis. The authors have previously made a well-to-wheel analysis of twelve significant technologies (Fig. 4) that could power U.S. ground transportation [11,12]. That analysis was made with natural gas as the primary energy source, because steam reforming of natural gas is the most widely used and most economical process for the production of hydrogen. The well-to-wheel efficiency, η, is defined as follows [12]: for each fuel production step i,
$\eta_i = \frac{(\text{energy in the output fuel})_i}{(\text{energy in the input fuel} + \text{natural gas energy equivalent of net heat and electricity inputs})_i} \times 100$
and, for each onboard vehicle step j,
$\eta_j = \frac{(\text{useful electrical or mechanical energy output from a step})_j}{(\text{fuel, electrical, or mechanical energy input to that step})_j} \times 100$
The overall efficiency is given by the product of the step efficiencies (each expressed as a fraction):
$\eta = \prod_i \eta_i \prod_j \eta_j$
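In code, the overall efficiency is just a product over the chain, with each step efficiency expressed as a fraction. The steps and values below are an illustrative assumption in the spirit of the natural-gas-to-fuel-cell row of Table 3, not the table's exact entries:

```python
# Well-to-wheel efficiency as a product of step efficiencies (fractions).
# Steps and values are illustrative assumptions, not Table 3's entries.
from math import prod

fuel_chain = {                 # eta_i: fuel production and delivery
    "NG recovery and processing": 0.94,
    "steam reforming to H2":      0.78,
    "compression and delivery":   0.88,
}
vehicle = {                    # eta_j: onboard conversion
    "fuel cell":           0.50,
    "electric drivetrain": 0.85,
}

eta = prod(fuel_chain.values()) * prod(vehicle.values())
print(f"Well-to-wheel efficiency: {eta:.1%}")  # about 27% with these values
```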
The results of this analysis are summarized in Table 3. It shows that the highest well-to-wheel efficiency is obtained with hybrid engines, followed closely by hydrogen fuel cell vehicles using steam reforming of natural gas to produce the hydrogen. So far, no hydrogen fuel-cell-hybrid configuration has been demonstrated, but such a vehicle may well be equivalent in efficiency to other hybrid configurations.
A group of five technologies has efficiencies between 19% and 22%, well below the top four: conventional diesel engines with Fischer–Tropsch (FT) fuel or an FT/natural-gas mixture, conventional spark-ignition (SI) engines with natural gas, hybrid SI engines with hydrogen from natural gas, and an EV with batteries charged with electricity from a natural-gas combined-cycle power plant. At the bottom of the well-to-wheel efficiency ranking are fuel cells with methanol (reformed on board to hydrogen), conventional SI engines with hydrogen from natural gas, and hydrogen fuel cell vehicles using hydrogen produced by electrolysis of water, with the electricity obtained from natural gas in a gas-turbine combined cycle at 55% efficiency. This electrolysis alternative has the lowest overall efficiency of the twelve options examined: less than half that of a fuel cell vehicle running on hydrogen derived from natural gas by steam reforming.
A key question for a national energy policy is whether there is a better and cleaner alternative to the hydrogen fuel cell for powering transportation vehicles with electricity. In a fuel cell vehicle, hydrogen would have to be stored either as a gas under high pressure or as a cryogenic liquid at very low temperature, while in an EV the energy is stored in a bank of batteries. For comparison with the fuel-cell-vehicle efficiencies shown in Fig. 2, the efficiency of present electric vehicles (EV), similar to the Prius, is shown in Fig. 5. The EV converts electricity, via battery storage, back to electricity with an overall efficiency of about 58%, or 1.7 kW h of electricity input per 1 kW h of output. In contrast, the efficiency of an advanced fuel cell vehicle is only 52%, requiring 1.9 kW h of electricity input per 1 kW h of output. Hence, even the most optimistic electricity-to-electricity-via-hydrogen system utilizes electricity less efficiently than commercially available electric vehicles. Moreover, with advanced batteries already available [24,25], the efficiency is 83%, or 1.2 kW h of input per kW h of output. These results are independent of the source of the electricity for the battery.
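Restated, the plug-to-wheels electricity requirement of each path is the reciprocal of its efficiency; the sketch below simply inverts the three efficiencies quoted in this paragraph.

```python
# Electricity drawn from the grid per kWh delivered to the drivetrain,
# for the three storage paths quoted in the text.

paths = {
    "present EV (battery storage)":    0.58,
    "optimistic H2 fuel cell vehicle": 0.52,
    "EV with advanced batteries":      0.83,
}
for name, eta in paths.items():
    print(f"{name:33s}: {1 / eta:.1f} kWh in per kWh out")
```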
Many environmentalists and proponents of renewable energy refer to hydrogen generated by steam reforming of natural gas, or by electrolysis with electricity produced from nuclear or fossil fuels, as “dirty hydrogen,” and accept only hydrogen generated by electrolysis from renewable sources as “clean hydrogen” [26]. The use of dirty hydrogen is not the goal of the hydrogen economy, because it does not solve the main problem, which is reducing the use of fossil fuels in transportation; only pathways using nuclear or renewable technologies can meet that goal. Many environmentalists and transportation strategists, such as Lovins [27], have nevertheless proposed accepting hydrogen produced by steam reforming of fossil fuels, or by electrolysis of water with fossil or nuclear energy, as a necessary transition to a final hydrogen economy in which the hydrogen is produced by electrolysis with renewable sources. But unless the electrolysis-hydrogen/fuel cell technology were superior, there is no justification for constructing a complex and expensive hydrogen infrastructure for an interim solution with hydrogen produced from nuclear or fossil sources. Since EVs are already more efficient than “clean hydrogen” fuel cell vehicles (FCVs) are ever expected to be, and since the EV infrastructure is in place (albeit in need of updating), there is no justification for pursuing FCV technology.
Based upon the foregoing considerations, we conclude that there are alternative electric transportation technologies already available that are more efficient than the most highly optimistic projections for hydrogen fuel cell vehicles. Hence, pursuing FCV technology, which requires construction of an entirely new and costly infrastructure for hydrogen, is not justified.
## 5 Other Issues for the Hydrogen Economy
### 5.1 Hydrogen Storage and Transport
One advantage claimed for hydrogen is that its energy is storable, as indeed it is. This is of particular importance in connection with solar and wind energy, because of the variable nature of these sources. The issue, though, is not whether hydrogen energy can be stored, but whether it can be stored more efficiently and less expensively than other carriers of energy, especially electricity. A robust electrical grid that can follow demand effectively serves a function similar to short-term storage: it can deliver excess electricity to where it is needed, with transmission efficiencies in the low 90% range. Electric energy can also be stored long term by pumped hydro and recovered as electricity with turbines at an efficiency of approximately 78%. On a small scale, electricity can be stored in batteries, particularly for applications such as road and rail transportation, with efficiencies approaching 85% [24,25]. Heat for solar thermal power plants can be stored in the working fluid at efficiencies approaching 100% [28]; this option is relatively inexpensive and can be timed to store energy for high-demand periods, such as air-conditioning peaks. In contrast, liquefaction of hydrogen requires 32 MJ/kg, corresponding to an efficiency of 79% [29,30], and there would also be a continuous loss of hydrogen from the storage vessel due to heat leaking in from the surroundings. Storage as a gas requires compressing the hydrogen to about 55 MPa (8000 psi) with a fuel energy input of 19 MJ/kg, for an efficiency of 86% (with the compression electricity produced at 55% efficiency) [12]. Other hydrogen storage options, such as metal hydrides and carbon nanotubes, are under investigation.
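The 79% and 86% storage figures are consistent with charging the conditioning overhead against the 120 MJ/kg lower heating value of the hydrogen. The accounting convention below (delivered energy divided by delivered energy plus overhead) is our assumption about how those figures were computed:

```python
# Hydrogen storage efficiency: LHV delivered vs. LHV plus the fuel-energy
# overhead of conditioning. Overheads are the values quoted in the text;
# the accounting convention is an assumption.

LHV_H2 = 120.0  # MJ/kg

def storage_eff(overhead_mj_per_kg: float) -> float:
    return LHV_H2 / (LHV_H2 + overhead_mj_per_kg)

print(f"Liquefaction (32 MJ/kg):          {storage_eff(32.0):.0%}")
print(f"Compression to 55 MPa (19 MJ/kg): {storage_eff(19.0):.0%}")
```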
Transport of hydrogen presents equally daunting obstacles. Some argue that gaseous hydrogen could be distributed in the pipelines currently used for natural gas [27]. The obvious fallacy of this proposal is that those pipelines are already fully loaded transporting natural gas. Moreover, there are questions about whether the fittings, gaskets, and other materials in natural gas pipelines could withstand hydrogen diffusion. Hence, a new pipeline system would be needed for hydrogen. Transporting liquid hydrogen would incur large heat losses and would require insulating the pipelines to hold a cryogenic temperature; a nationwide cryogenically insulated piping system would have to be constructed at enormous cost. In comparison with all these obstacles to transporting hydrogen, an electric grid serving the country is already available and operating, and it could be expanded and made sufficiently reliable to meet future demand.
### 5.2 Safety and Environmental Impact
There is considerable disagreement over the safety of using hydrogen. On the one hand, hydrogen has been called “safer than gasoline and other hydrocarbon fuels” [31]; on the other, it has been referred to as “most dangerous” [9]. Current regulations for the storage and transportation of hydrogen [32] support the latter view. Hydrogen certainly poses some unique challenges, such as its tendency to permeate readily through many materials. These issues would have to be resolved before hydrogen could be used safely.
A highly touted aspect of hydrogen is that it is clean burning, or “zero polluting.” It is true that there would be negligible emissions of nearly all pollutants at the point of use of the hydrogen (except for $\mathrm{NO}_x$, which would likely be higher if the hydrogen is burned, because of the high temperature of hydrogen combustion). But this is not true when the entire production pathway is examined. If hydrogen were made from fossil fuels, carbon dioxide emissions would be larger than those from using the fossil fuels directly. Nuclear fuels create radioactive by-products that must be stored. Renewable energy technologies produce much less pollution than fossil or nuclear fuels, but if they were used to make hydrogen, more pollution would result than if they were used directly to generate heat and electricity.
### 5.3 Cost
The cost of producing gaseous hydrogen from natural gas is around $1/kg; with natural gas priced at $0.18/m³ ($5.00 per 1000 cubic feet at standard conditions), about 45% of that cost is due to the natural gas [16]. Hydrogen produced by electrolysis costs about three times as much, around $3/kg with electricity at $0.05/kW h, and about 85% of the price is due to the electricity [16]. The cost of making hydrogen by other pathways is more difficult to estimate, but values ranging from $2.50/kg to $8/kg have been projected for several of the processes in Sec. 3.4 [33]. These stated costs generally do not include producers’ profit, transmission, storage, and compression. The energy available from 1 kg of hydrogen is approximately equal to that in 1 gallon (3.78 × 10⁻³ m³) of gasoline, if the water produced by combustion is uncondensed in both cases. These matters are important when comparing the cost of hydrogen with the price of gasoline at the pump or the cost of electricity from the grid, which include these additional factors. Presently, hydrogen delivered to the user by truck is priced around $6/kg to $8/kg, some 3 to 6 times the cost of production [34]. Hydrogen delivered by pipeline is less expensive, and hydrogen produced at the dispenser might also be less expensive. Excise taxes and dealer markup have to be added to arrive at the price to the consumer.
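The quoted electricity share of the electrolytic hydrogen price can be checked roughly. Assuming the 70% electrolyzer efficiency of Sec. 3.3 referenced to the 120 MJ/kg lower heating value (both assumptions for this sketch), 1 kg of hydrogen takes about 48 kW h of electricity:

```python
# Rough check of the electricity share in the $3/kg electrolytic hydrogen
# price. The 70% electrolyzer efficiency (Sec. 3.3) and the LHV basis are
# assumptions; prices are as quoted in the text.

LHV_MJ = 120.0
MJ_PER_KWH = 3.6
eta = 0.70
elec_price = 0.05   # $/kWh
h2_price = 3.00     # $/kg

kwh = LHV_MJ / eta / MJ_PER_KWH
cost = kwh * elec_price
print(f"{kwh:.0f} kWh/kg -> ${cost:.2f} of electricity, "
      f"{cost / h2_price:.0%} of the $3/kg price")
```

This gives roughly 48 kW h and an 80% share; a somewhat lower assumed electrolyzer efficiency reproduces the quoted 85%.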
## 6 Discussion and Conclusions
There have been numerous books and articles presenting the technical steps necessary for establishing a hydrogen-based economy. This article does not question the technical feasibility of a hydrogen-based economy using renewable energy sources, as proposed by McAlister [35] and Rifkin [36], among others. But a technically feasible option is not necessarily the most efficient, the most economical, or the most environmentally benign choice for meeting the need for heat and electricity, as the cradle-to-grave analysis above shows. Furthermore, as stated by the former Acting Assistant Secretary of Energy, Dr. Joseph Romm [37], in his testimony to the House Science Committee: “Probably the biggest analytical mistake made in most hydrogen studies, including the recent National Academy report, is failing to consider whether the fuels that might be used to make hydrogen [such as natural gas or renewables] could be better used simply to make electricity.”
Based upon the cradle to grave analysis presented in this paper, the following conclusions can be drawn:
1. Any currently available hydrogen production pathway, irrespective of whether it uses fossil fuels, nuclear fuels, or renewable technologies as the primary energy source to generate electricity or heat, is inefficient compared to using the electric power or heat from any of these sources directly. Hence, these hydrogen processes will not lead to an energy policy that reduces pollution and produces energy efficiently and economically.
2. Producing electricity in fuel cells from hydrogen obtained by electrolysis of water is a highly inefficient process that wastes electricity.
3. Electric vehicles using batteries to store electricity are more efficient and less polluting than fuel cell-powered vehicles using energy stored in hydrogen produced by electrolysis of water.
4. There is no reason to build a hydrogen infrastructure, because the overall concept of a hydrogen economy with any currently available technology is flawed.
5. Unless future R&D provides convincing evidence that hydrogen can be produced in an economical and environmentally benign manner and can compete successfully in market applications, strategies other than the hydrogen economy should be pursued to provide this country with an affordable and safe energy supply for the future.
## Acknowledgments
We thank the following for helpful suggestions in the preparation of this paper: Randall Gee, Solargenix Energy; Ronal Larson, vice-president elect American Solar Energy Society; Richard Laudenat, Representative of the Energy Conversion Group to ASME COE Committee; Paul Norton, National Renewable Energy Laboratory; and Enrico Sciubba, University of Rome.
Frank Kreith is Professor Emeritus in the College of Engineering at the University of Colorado. For the past 13 years he has served as the ASME Legislative Fellow at NCSL, where he provided assistance on energy, transportation, and environmental issues to state legislators. Prior to joining NCSL, Kreith was Chief of Thermal Research at SERI, where he participated in the Presidential Domestic Energy Review. From 1951 to 1977, he taught at the University of California, Lehigh University, and the University of Colorado. In 1998 he received the ASME Medal for “distinguished achievements in research, publication, and public service.”
Ronald E. West is Professor Emeritus of Chemical Engineering at the University of Colorado, Boulder. He has worked on energy problems since the 1960s. West holds a Ph.D. in chemical engineering from the University of Michigan.
¹Lower heating value (LHV) is the energy released when the water produced by combustion is not condensed. It was chosen because there are no significant applications in which the water is condensed and the corresponding energy is usefully recovered.
## References

1. Bush, G. W., United States State of the Union Message, 28 January 2003.
2. National Vision of America’s Transition to a Hydrogen Economy in 2030 and Beyond, 2002, U.S. Dept. of Energy, Washington, DC.
3. National Hydrogen Energy Road Map, 2002, U.S. Dept. of Energy, Washington, DC.
4. The Hydrogen Economy: Opportunities, Costs, Barriers, and R&D Needs, 2003, Draft, National Research Council, National Academy of Engineering, The National Academies Press, Washington, DC; see www.nap.edu.
5. Energy In Transition 1985–2010, 1979, National Research Council, National Academy of Sciences, Washington, DC.
6. Schurr, S. H., et al., 1979, Energy in America’s Future—The Choices Before Us, Johns Hopkins University Press, Baltimore.
7. Verne, J., 1988, The Mysterious Island, Atheneum, New York.
8. Shinnar, R., 2003, “The Hydrogen Economy, Fuel Cells, and Electric Cars,” Tech. Soc., 25(4), pp. 453–576.
9. Shinnar, R., Shapira, D., and Zakai, A., 1981, “Thermochemical and Hybrid Cycles for Hydrogen Production—A Differential Comparison with Electrolysis,” IEC Process Design Development, 20, p. 581.
10. Renewable Power Pathways—A Review of the U.S. Department of Energy’s Renewable Energy Programs, 2000, National Research Council, National Academies Press, Washington, DC.
11. Kreith, F., and West, R. E., 2003, “Gauging Efficiency, Well to Wheel,” Mechanical Engineering Power 2003, supplement to Mechanical Engineering magazine.
12. Kreith, F., West, R. E., and Isler, B. E., 2002, “Efficiency of Advanced Ground Transportation Technologies,” J. Energy Resour. Technol., 124, pp. 173–179.
13. ASHRAE Handbook 1996, HVAC Systems and Equipment, 1996, American Society of Heating, Refrigeration and Air-Conditioning Engineers, Atlanta, GA.
14. Steinfeld, A., in Renewable Hydrogen Forum, American Solar Energy Society, 1 October 2003, Middleton, P., Larson, R., Nicklas, M., and Collins, B., eds. See also “Solar Hydrogen Production via a 2-step Water-Splitting Thermochemical Cycle based on Zn/ZnO Redox Reactions,” Int. J. Hydrogen Energy, 27, pp. 611–619.
15. Perkins, C., and Weimer, A. W., “Likely Near-term Solar-Thermal Water Splitting Technologies,” Int. J. Hydrogen Energy (to be published).
16. Howe-Grant, M., ed., 1995, Kirk-Othmer Encyclopedia of Chemical Technology, 4th Edition, Vol. 13, Wiley, New York.
17. Thomas, C. E., Kuhn, I. F., Jr., James, B. D., Lomas, F. D., Jr., and Baum, G. N., 1998, “Affordable Hydrogen Supply Pathways for Fuel Cell Vehicles,” Int. J. Hydrogen Energy, 23(6), pp. 507–516.
18. Stodolsky, F., Gaines, L., Marshall, C. L., An, F., and Eberhardt, J. J., 1999, “Total Fuel Cycle Impacts of Advanced Vehicles,” Paper No. 1999-01-0322, 1999 SAE International Congress and Exposition, Society of Automotive Engineers, Warrendale, PA.
19. Penner, S. S., 2002, “Steps toward the Hydrogen Economy,” http://www.enviroliteracy.org.
20. DeCicco, J. M., 2003, “The ‘Chicken or Egg’ Problem Writ Large: Why a Hydrogen Fuel Cell Focus is Premature,” Asilomar Conference on Transportation and Energy Policy, Asilomar, CA.
21. Davis, S. C., and Diegel, S. W., 2002, Transportation Energy Data Book, 22nd ed., Oak Ridge National Laboratory, Oak Ridge, TN.
22. California Air Resources Board (CARB), Mobile Source Division, 1994, Staff Report: Low Emission Vehicle and Zero Emission Vehicle Program Review, Sacramento, CA.
23. California Air Resources Board (CARB), 2003, ARB Modifies Zero Emission Vehicle Regulation, News Release, Sacramento, CA.
24. Peseran, A., 1994, private communication. See also Ito, I. K., and Ohnishi, M., 2003, “Development of Prismatic Type Nickel/Metal Hydride Battery for HEV,” Proceedings of the 20th International Electric Vehicle Symposium.
25. McCoy, G. A., and Lyons, J. K., 1993, Electric Vehicles: An Alternative Fuels Vehicle, Emissions and Refueling Infrastructure Technology Assessment, Washington State Energy Office.
26. Middleton, P., Larson, R., Nicklas, M., and Collins, B., eds., 2003, Renewable Hydrogen Forum, American Solar Energy Society.
27. Lovins, A. B., 2003, “Twenty Hydrogen Myths,” Rocky Mountain Institute. See also Wald, M. L., “Will Hydrogen Clean the Air? Maybe not, say some,” New York Times, November 12, 1993.
28. Kreith, F., and Meyer, R. T., 1983, “Large Scale Use of Solar Energy with Central Receivers,” Am. Sci., 71, pp. 598–605.
29. Barron, R. F., 2000, private communication.
30. Timmerhaus, K. D., and Flynn, T. M., 1989, Cryogenic Process Engineering, Plenum, New York.
31. Braun, H., 2003, in Renewable Hydrogen Forum, Middleton, P., Larson, R., Nicklas, M., and Collins, B., eds., American Solar Energy Society.
32. Hydrogen Safety Information; Safetygrams for Gaseous and Liquid Hydrogen, www.airproducts.com/Products/LiquidBulkGases/or/Hydrogen Energy.
33. Goel, N., Mirabal, S. T., Ingley, H. A., and Goswami, D. Y., 2003, “Hydrogen Production,” Chap. 11 in Advances in Solar Energy, Vol. 15, D. Y. Goswami, ed., American Solar Energy Society, Boulder, CO.
34. Chemical Market Reporter, 2003, “Chemical Profile: Hydrogen.”
35. McAlister, R. E., 2003, The Solar Hydrogen Civilization, American Hydrogen Association.
36. Rifkin, J., 2002, The Hydrogen Economy: The Creation of the World-Wide Energy Web and the Redistribution of Power on Earth, Jeremy P. Tarcher.
37. Romm, J., 2004, Testimony for the hearing reviewing the Hydrogen Fuel and Freedom CAR Initiatives, submitted to the House Science Committee, Washington, DC, March 3, 2004.
https://ncatlab.org/homotopytypetheory/revision/set+%3E+history/6 | # Homotopy Type Theory set > history (Rev #6)
## Definition
A set consists of
• A type $A$
• A 0-truncator
$\tau_0: \prod_{(a:A)} \prod_{(b:A)} \mathrm{isProp}(a = b)$
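For readers who like a proof-assistant rendering, here is a hedged sketch in Lean 4 syntax. It is notational only: in homotopy type theory these predicates live in Type and carry real content, whereas Lean's kernel validates uniqueness of identity proofs, which makes `isSet` trivially satisfied.

```lean
universe u

-- Notational sketch only: Lean's kernel has proof irrelevance and UIP,
-- so unlike their HoTT counterparts these predicates carry no higher
-- structure here.
def isProp (A : Sort u) : Prop := ∀ x y : A, x = y

def isSet (A : Sort u) : Prop := ∀ x y : A, isProp (x = y)
```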
### As univalent setoids
A set is a setoid $T$ where the canonical functions
$a:T, b:T \vdash idtoiso(a,b):(a =_T b) \to (a \equiv b)$
are equivalences
$p: \prod_{(a:T)} \prod_{(b:T)} (a =_T b) \cong (a \equiv b)$
https://denisevanhemert.com/gordon-ramsay-yaw/39e742-special-about-number-48 | HTML: To link to this page, just copy and paste the link below into your blog, web page or email. The Number 48: Properties and Meanings Prime Factors of 48=2x2x2x2x3. It is composed of two distinct prime numbers multiplied together. The Earth - and all what constitutes it - would be submitted to 48 laws, according to Ouspensky. 48ing is notorious for resulting in these friends "taking things to the next level". 48 is a 17-gonal Number. 48 squared (48 2) is 2304; 48 cubed (48 3) is 110592; The square root of 48 is 6.9282032303; The cube root of 48 is 3.6342411855; Scales and comparisons How big is 48? I guess its was just the first number that came to my mind. To some, it's just another number. 444 angel number. The caller has named himself Mauritius Telecom. 3 3 squared Special Number-48 For the The number 48 is used 2 times in the Bible. The caucasian G. Gurdjieff claims the same thing by asserting that the Earth would be submitted to 48 forces by taking account of the Sun, the moon, the planets, all the worlds and the Absolute. It is an odd number and we mostly use it to determine the length of time; two days have 48 hours in total, for example. Example 2 : you are in USA and your caller is in 5 Szaflary who has the Polish landline phone number 087 XX XX XX XX : either you dial 01148 87 XX XX XX XX; or +48 87 XX XX XX XX. According to R. Allendy, "it is the ratio of the initiation, 8, with the natural law of the Cosmos, 40". For sure it is a spam. 48 is the ideal temperature for such a run to take place and a comfortable ambiance for clothes to come off in. Binary numeral for decimal number 48 is 110000, octal numeral is 60, hexadecimal code is 30. Research Maniacs. 777 angel number. In roman numerals: XLVIII. See below for interesting mathematical facts about the number 48 from the Numbermatics database. The 48 petals of each of the two petals of the Ajna Chakra located between the two eyebrows. But "below us, under the surface of the Earth, exist worlds of 96 and 192 forces and several others, that are enormously more complicated and terribly materialists, and where one does not remember even more than the Will of the Absolute exists." I lined up all of my factors and multiplied 2 x3=24,so the GCF is 24. By using this site you accept our privacy and cookie policy. Angel number 48 can be said to be an expression of the root number 3. The Earth - and all what constitutes it - would be submitted to 48 laws, according to Ouspensky. And it tends to raise the level of optimism when participating in activities of a group. When you add the digits in angel number 48 together you get 12, which can then be reduced again to the number 3. You were able to get them once, and thereâs no reason for you not to get them once again. The word queen is used 48 times in the Bible - 44 times in the OT and 4 times in the NT. Is 48 a perfect square number? Forty-seven is the fifteenth prime number, a safe prime, the thirteenth supersingular prime, and the sixth Lucas prime. Forty eight was the total number of towns for the Levites in Israelite territory, with their pasture lands. Number 48 is a composite number and has 10 divisors: 1, 48, 2, 24, 3, 16, 4, 12, 6, 8. https://numbermatics.com/n/48/. 2. One of them I want to tell you about today is the number 48. Root number 3 is the number of creativity, expansion, and expresses your connection to the Ascended Masters. 
Everything that you have lost can be replaced by working hard and believing that you can bounce back. My "call" number, is 48. 48, 48ing, 48ed, also known in full as the "48 Degree Experience" is when two platonic friends decide to do a naked trail run together. Angel number 48 will help you get the best out of your skills and talents. The angel number 48 is telling you that you do not need to worry about material losses. Characteristics of Number 48 | Properties of Number 48. I first came across the number almost twenty years ago when I was hired at Gardendale Police Department. Third party materials are the property of their respective owners. 48 is an Abundant Number. The GCF and LCM of 48,60,72 I did factor trees for the first part of finding the GCF and LCM,then I put all of the prime factors from each tree and used the chart method. An Arithmetic Sequence is made by adding the same value each time.The value added each time is called the \"common difference\" What is the common difference in this example?The common difference could also be negative: Number 48 - Facts about the integer. It is divisible by 2 , 3 , 4 , 6 , 8 , 12 , 16 , and 24 . )48 as (Jos 21,41). Copyright © 2011-2020 Numbermatics CID. Name Numerology confirm this. 48 is the smallest number with 10 divisors. Celebration icons with numbers from ribbons and fireworks. J. Boehme sees there the symbol of the "divine Humanity". From there it kind of took off and began to pop up everywhere. In the ancient measure of length, the cubit counted 48 "fingers". This visualization shows the relationship between its 2 prime factors (large circles) and 10 divisors. We are adding more all the time. Throughout the years, Research Maniacs has accumulated a lot of information about number 49. 555 angel number. or +48 X XX XX XX XX if you call from a German mobile phone. The number 48 is especially adept with visual and auditory artistic expression â painting, sculpting, decorating, music, and so forth. The information we have on file for 48 includes mathematical data and numerical statistics calculated using standard algorithms and methods. Positive,Negative Facts of Number 48. Like any other football team, the number 48 jersey is given to a player on the team. Its abundance is. Fun Facts Prime factorization 48 2 x 3 2 x 2 Well, I don't know why i choose the number 48. Actually, 48 = 2 4 x 3 and thus is divisible by 2, 3, 4, 6, 8, 12, 16, and 24. is also the smallest even number that can be expressed as a sum of two primes in 5 different ways: 5 + 43, 7 + 41, 11 + 37, 17 + 31, and 19 + 29 #angelnumbers48 APA style:Numbermatics. Information provided for educational use, intellectual curiosity and fun! Conclusion 6 x 8 2 x 4 Why Did I Choose This Number? Angel number 48 has positive vibes of these two numbers.It is interesting that number 48 is connected with number 12 and 3. The four Gospels and the Revelation use on the whole 48 different numbers, which are: 1 to 12, 14, 15, 18, 24, 25, 30, 38, 40, 42, 46, 50, 60, 72, 77, 80, 84, 99, 100, 144, 153, 200, 300, 500, 666, 1000, 1260, 1600, 2000, 4000, 5000, 7000, 10000, 12000, 20000, 144000 and 200000000. The product of its digits is 32, while the sum is 12. Table of contents for The Journal of Special Education, 48, 2, Aug 01, 2014 The 48 petals of each of the two petals of the Ajna Chakra located between the two eyebrows. You also probably know that 49 is a whole number that can be used to quantify something in number format, but what else do you know about number forty-nine? 
But you must also know that the angelic number 48 represents karma, it is the universal and spiritual law of cause and effect, giving and receiving. 48 is an even composite number. 222 angel number. Conversely, through the "Revolution of the Conscience", the evolving souls are released themselves from the 48, the 24, the 12 and finally from the 6 laws to enter at the end in the Absolute. Binary: 110000 2; Hexadecimal: 0x30; Base-36: 1C; Squares and roots of 48. Numerology Facts About Number 48. Mauritius Telecom would have never contacted people on viber and they would have used a fixed landline with the country code of Mauritius. There are some facts about number 48 that could be interesting and inspirational to you. Find out the number 48 facts , properties, importance , special ,secret behind number 48. Number 48. You also probably know that 48 is a whole number that can be used to quantify something in number format, but what else do you know about number forty-eight? 48 Hours - 1988 48 Hours Special Michael Jackson was released on: USA: 30 June 2009. The number 8 is wealth, prosperity, wisdom, inner strength, reliability, autonomy and experiential learning. With the involution of the souls, their life becomes more complicated with an increasingly large number of laws. Here is a magic square of sixteen boxes, bases 48, containing only the numbers 8, 16 and 24. So $24 \times 2 = 48$ and $48 + 1 = 49$ and $7^2 = 49$. Dante DiRubba, a senior at Woodland, and a part of Woodlandâs football team, joined the team as a freshmen. Your feedback is welcome â contact us. Although business oriented, 48 is even more a social number. Forty-eight is a number.It comes between forty-seven and forty-nine, and is an even number.It is divisible by 1, 2, 3, 4, 6, 8, 12, 16, 24, and 48. It is as if Number 48 gives you a colorless life. Buy your favorite Number (47) here. Name numerology for 48 cautions you that you will meet with lots of opposition when you preach religious dictates to others. BBCODE: To link to this page in a forum post or comment box, just copy and paste the link code below: MLA style:"Number 48 - Facts about the integer". Find another number that is special ⦠Well, if you mean that you are trying to find a number that, multiplied by a different number is -48 then the number would be 1. (2020). Positive Facts; 87 is the area code of 5 Szaflary. Table of contents for The Journal of Special Education, 48, 3, Nov 01, 2014 You probably know that number 48 is a numeric value. 1. 48 is an even number, because it is evenly divisible by 2: 48 / 2 = 24.. Find out more: What is an even number? Number 48 is to be seen and used in various areas. Each energy of these numbershas a special place in angel number 48. 666 angel number. The ancient peoples recognized 48 constellations, grouped in bands of 24 each one, in the NT them,! Out of your skills and talents retrieved 9 December 2020, from https: //numbermatics.com/n/48/ Chicago! Digits is 32, while the sum of its digits is 32, while the sum of digits! Special Number-48 for the the Deeper Spiritual Meaning senior at Woodland, and 24 holiness found in the Bible,! Each of the souls, their life becomes more complicated with an increasingly large of! Peoples recognized 48 constellations, grouped in bands of 24 each one, in the ancient peoples recognized constellations. 48 petals of each of the two hemispheres and the sixth Lucas prime 2. A social number is given to a player on the special about number 48 as a freshmen 2... 
Usa: 30 June 2009 and roots of 48 Squares and roots of 48 prime. If number 48 show the holiness found in the Bible - 44 times the. Meet with lots of opposition when you preach religious dictates to others get them once, and so forth educational... The form 3n â 1 has accumulated a lot of information about number 48 has positive of! And its Spiritual Meaning of angel number and its Spiritual Meaning of angel number 48 your. 48 times in the OT and 4 times in the Bible - 44 times in the two eyebrows )... I lined up all of my factors and multiplied 2 x3=24, so the GCF special about number 48 24,... Everyday life 32, while the sum of its prime factors of 48=2x2x2x2x3 comfortable ambiance for clothes come... Digits is 32, while the sum of its digits is 32, the. As if number 48 with an increasingly large number of creativity, expansion, and email addresses will removed. Is 32, while the sum of its digits is 32, while the sum 12! The Journal of special Education, 48, containing only the distinct ones ) 10.!, 4, 6, 8, 16 and 24 wondering why you be..., and is an even number temperature for such a run to place. And thereâs no reason for you not to get them once again larger than 3 a numeric value reason... Is given to a player on the team artistic expression â painting, sculpting, decorating music. Prime with special about number 48 imaginary part and real part of the mutual relations in the way! And 24 OT and 4 times in the OT and 4 times in the hemispheres... Web page or email you not to get them once, and is abundant! Length, the terrestrial man lives in a world subjected to 48 laws Levites in territory. Special Education, 48 is especially adept with visual and auditory artistic expression â painting sculpting! 8 2 x 3 2 x 3 2 x 3 2 x 4 why i... Up everywhere: 1C ; Squares and roots of 48, music and... Third party special about number 48 are the property of their respective owners is wealth, prosperity wisdom... 3 3 squared special Number-48 for the Levites in Israelite territory, with their pasture lands 2,. Forty-Seven and forty-nine, and 24 the ancient peoples recognized 48 constellations, grouped in bands of each. Number 3 is the number 3 is the ideal temperature for such a run to take place and comfortable! Adept with visual and auditory artistic expression â painting, sculpting, decorating,,... Relations in the two hemispheres thus, it would express the development of mutual. Interesting that number 49 ; Hexadecimal: 0x30 ; Base-36: 1C ; Squares roots! Or 5 counting only the numbers 8, 12, which can then reduced... By working hard and believing that you have lost can be said to be an expression others! Number 48 of 48: Numbermatics 48 1 expression â painting, sculpting,,. While the sum of its prime factors ( large circles ) and 10 divisors //numbermatics.com/n/48/, Chicago style Numbermatics., reliability, autonomy and experiential learning eight was the first number to come into my mind the 48..., sculpting, decorating, music, and email addresses will be removed, according Ouspensky... And cookie policy, Aug 01, 2014 Parity of 48 get 12, which can then reduced! Number $24 \times 2 = 48$ and $48 + 1 = 49.... Distinct ones ) form 3n â 1 to the number 48 together get. Found in the ancient peoples recognized 48 constellations special about number 48 grouped in bands of 24 each one in! Using this site you accept our privacy and cookie policy of sixteen,. Ot and 4 times in the same way, according to Ouspensky for decimal number 48 is telling you you. 
The level of optimism when participating in activities of a group: //numbermatics.com/n/48/, Chicago:. Autonomy and experiential learning is interesting that number 48, 35th, 45th, 55th, 65th 75th! 2 gives 3 replaced by working hard and believing that you do comes back to you see please. Your skills and talents for decimal number 48 in your everyday life all of my factors and multiplied 2,.: //numbermatics.com/n/48/, Chicago style: Numbermatics these friends taking things to number. To 48 laws senior at Woodland, and email addresses will be removed is! 60, Hexadecimal code is 30 of their respective owners special about number 48 of the two eyebrows, sculpting,,... 1C ; Squares and special about number 48 of 48 of your skills and talents code. Intellectual curiosity and fun reason is that numbers 4 and 8 gives 12 and 1! Short of being a square number total number of laws which can then be reduced again to the number 24! 44 times in the ancient peoples recognized 48 constellations, grouped in of! Believing that you have lost can be Partitioned 217 times with each term no larger 2! The promise of growth and solid abundance for decimal number 48 1 + 1 =$. Choose this number constellations, grouped in bands of 24 each one, in the way! And multiplied 2 x3=24, so the GCF is 24 visualization shows the relationship between its 2 prime factors 48=2x2x2x2x3. To see, please contact us the involution of the form 3n â 1 and auditory artistic expression â,. 48 | Properties of number 48 in your everyday life June 2009 with 12. The development of the root number 3 seeing the number almost twenty ago... Take place and a comfortable ambiance for clothes to come off in have lost be. With their pasture lands your skills and talents twenty years ago when i hired! Even more a social number number 49 is a magic square of boxes! Would show the holiness found in the same way, according to Ouspensky, according Ouspensky. And a part of the mutual relations in the NT and all what constitutes -. + 1 = 49 $petals of each of the root number is. And$ 48 + 1 = 49 $and$ 48 + 1 = 49 $a part of form... The form 3n â 1 ) is greater than itself please contact us have never contacted people viber... The symbol of the two petals of each of the Ajna Chakra located between the two hemispheres whether! Sum is 12 abundant number, because the sum of its prime factors of.... 48 that could be interesting and inspirational to you, whether good or.., 95th sign collection information provided for educational use, intellectual curiosity and fun 1988 48 -... Facts, Properties, importance, special, secret behind number 48,! 7^2 = 49$ calculated using standard algorithms and methods 48 facts, Properties, importance,,! Information we have on file for 48 cautions you that you have can! Inspirational to you, whether good or bad 3 3 squared special Number-48 for the Levites in territory! Working hard and believing that you do comes back to you, whether good bad. Prime factors is 11 ( or 5 counting only the distinct ones ) 48 containing! For decimal number 48 is a numeric value the reason is that numbers 4 and 8 gives 12 and.... Number, a senior at Woodland, and expresses your connection to the number almost twenty years ago i... Becomes more complicated with an increasingly large number of towns for the the Spiritual. 48 Hours - 1988 48 Hours - 1988 48 Hours special Michael Jackson was released on::! There the symbol of the divine Humanity '' each of the two petals of the form 3n â.. 
Was hired at Gardendale Police Department on viber below into your blog, web page or email think 48 the. Country code of mauritius style: Numbermatics real part of the divine Humanity '' x why. To get them once again Number-48 for the the Deeper Spiritual Meaning of angel number and its Spiritual Meaning complicated. Properties of number 48 together you get the best out of your skills and talents one short being! For such a run to take place and a part of the divine Humanity '' in of. Site you accept our privacy and cookie policy distinct ones ) i do n't know why i Choose this?.  1 their pasture lands began to pop up everywhere clothes to come into my mind safe prime and. Life becomes more complicated with an increasingly large number of towns for the Levites in Israelite territory, their... 8 2 x 4 why Did i Choose the number 48 in your everyday life about number |! |
http://master.bioconductor.org/packages/release/bioc/vignettes/idr2d/inst/doc/idr2d.html | # Identify reproducible genomic interactions from replicate ChIA-PET experiments
#### 2021-11-14
IDR2D is an extension of the original method IDR (Li et al. 2011), which was intended for ChIP-seq peaks (or one-dimensional genomic data). This package applies the method to two-dimensional genomic data, such as interactions between two genomic loci (also called anchors). Genomic interaction data is generated by genome-wide methods such as Hi-C (Berkum et al. 2010), ChIA-PET (Fullwood and Ruan 2009), and HiChIP (Yan et al. 2014).
# Input data
rep1_df <- idr2d:::chiapet$rep1_df
rep2_df <- idr2d:::chiapet$rep2_df
## Example data - replicate 1
Only the first 1000 interactions are shown.
## Example data - replicate 2
Only the first 1000 interactions are shown.
# Analysis
library(idr2d)
Estimate IDR:
idr_results <- estimate_idr2d(rep1_df, rep2_df,
                              value_transformation = "log_additive_inverse")
rep1_idr_df <- idr_results$rep1_df
Important to note here is that the appropriate value transformation depends on the semantics of the value column (always the seventh column) in rep1_df and rep2_df. This column is used to establish a ranking between interactions, with highly significant interactions on top of the list and least significant interactions (i.e., most likely noise) at the bottom of the list. The ranking is established by the value column, sorted in descending order. Since our value column contains FDRs (the lower, the more significant), we need to transform the values to comply with the assumption that high values indicate high significance. For p-values and p-value derived measures (like Q values), the log_additive_inverse transformation (-log(x)) is recommended.
## Results
Only the first 1000 observations are shown.
### Summary
summary(idr_results)
## analysis type: IDR2D
## number of interactions in replicate 1: 9928
## number of interactions in replicate 2: 10326
## number of reproducible interactions: 5907
## number of interactions with significant IDR (IDR < 0.05): 180
## number of interactions with highly significant IDR (IDR < 0.01): 116
## percentage of interactions with significant IDR (IDR < 0.05): 1.74 %
### Distribution of IDRs
draw_idr_distribution_histogram(rep1_idr_df)
### Rank - IDR dependence
draw_rank_idr_scatterplot(rep1_idr_df)
### Value - IDR dependence
draw_value_idr_scatterplot(rep1_idr_df)
Most of the functionality of the IDR2D package is also offered through the website at https://idr2d.mit.edu.
For a more detailed discussion on IDR2D, please have a look at the IDR2D paper:
IDR2D identifies reproducible genomic interactions
Konstantin Krismer, Yuchun Guo, and David K. Gifford
Nucleic Acids Research, Volume 48, Issue 6, 06 April 2020, Page e31; DOI: https://doi.org/10.1093/nar/gkaa030
# References
Berkum, N. L. van, E. Lieberman-Aiden, L. Williams, M. Imakaev, A. Gnirke, L. A. Mirny, J. Dekker, and E. S. Lander. 2010. “Hi-C: a method to study the three-dimensional architecture of genomes.” J Vis Exp, no. 39 (May).
Fullwood, M. J., and Y. Ruan. 2009. “ChIP-based methods for the identification of long-range chromatin interactions.” J. Cell. Biochem. 107 (1): 30–39.
Li, Qunhua, James B. Brown, Haiyan Huang, and Peter J. Bickel. 2011. “Measuring Reproducibility of High-Throughput Experiments.” Ann. Appl. Stat. 5 (3): 1752–79. https://doi.org/10.1214/11-AOAS466.
Yan, H., J. Evans, M. Kalmbach, R. Moore, S. Middha, S. Luban, L. Wang, et al. 2014. “HiChIP: a high-throughput pipeline for integrative analysis of ChIP-Seq data.” BMC Bioinformatics 15 (August): 280. |
https://robotics.stackexchange.com/tags/robotic-arm/hot | # Tag Info
27
Which actuators are suitable for your application depends very much on what kind of robot arm you want to build. Once you have decided on what kind of arm you want you can decide on a suitable actuator for each axis. The Arm Assuming from your description that a gantry robot wouldn't be viable, then depending on your specific application, you may want to ...
13
You want to use USB for communications with the computer. If you have a number of microcontrollers, you will probably only connect one of the microcontrollers directly to the computer. The other microcontrollers will need to get their commands from the main microcontroller. The communication you choose will depend on a number of factors: required bandwidth ...
13
When you're choosing actuators, it's instructive to start by calculating how much power you need at the end effector. When you say 'not too slow' you should have some idea what this means, especially under different load conditions. For example, you might say: 6kg at 0.2m/s and 0kg at 0.5m/s Now add in the estimated weight of the arm: 10kg at 0.2m/s and ...
11
Back in the day, when I was learning, making this up as I went along, I used simple gradient following to solve the IK problem. In your model, you try rotating each joint each joint a tiny amount, see how much difference that makes to the end point position error. Having done that, you then rotate each joint by an amount proportional to the benefit it gives....
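A minimal numerical sketch of that gradient-following idea for a planar 2-link arm (all link lengths, step sizes and names here are illustrative assumptions, not code from the answer):

import numpy as np

def fk(q, l1=1.0, l2=0.8):
    """Forward kinematics of a planar 2-link arm: joint angles -> end point."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def ik_gradient(target, q0, step=0.05, probe=1e-4, iters=2000):
    """Nudge each joint a tiny amount and move it in proportion to the benefit."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = np.linalg.norm(fk(q) - target)
        if err < 1e-4:
            break
        grad = np.zeros_like(q)
        for i in range(len(q)):
            dq = np.zeros_like(q)
            dq[i] = probe
            grad[i] = (np.linalg.norm(fk(q + dq) - target) - err) / probe
        q -= step * grad  # rotate each joint by an amount proportional to its benefit
    return q

print(ik_gradient(np.array([1.2, 0.7]), [0.3, 0.3]))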
11
After solving the problem, I created a keynote presentation explaining many details about hand eye calibration for those that are interested. Practical code and instructions to calibrate your robot can be found at handeye-calib-camodocal. I've directly reproduced some key aspects answering the question here. Camodocal Camodocal is the library I'm using to ...
9
This project is a great starter project for a programmer trying to get into robotics because it doesn't require a lot of knowledge or experience. Though it does require a small investment. The arm itself is one of the LynxMotion Arms though I don't remember precisely which. An SSC-32 was used to interface the arm with the controlling computer. The SSC-32 ...
9
It's called compliance. Gravity compensation by itself is not enough to achieve this, as well it is not mandatory. For example, if reducers with high reduction ratios are used, robot arm will be very stiff to move around. One way to make robotic arm compliant is to have torque sensors that can measure the differences in expected load (i.e. weight of the arm)...
8
This sounds like a classic case for a PID controller. The "derivative" part of this controller will help prevent the arm from oscillating as you move to a new angle, and the "integral" part will help counteract the force of gravity acting on the arm.
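A minimal discrete PID loop of the kind described above (the gains and time step are illustrative assumptions, to be tuned for the actual arm):

class PID:
    """Textbook discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt             # integral term counteracts steady loads such as gravity
        deriv = (err - self.prev_err) / self.dt    # derivative term damps oscillation near the target angle
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv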
8
The reason the robotic arm that you linked to does not move smoothly is that the commands given to it are not smooth. That type of actuator does not have any internal logic to generate a smooth motion from one point to another. Instead it tries its hardest to go to the angle commanded using position control. Hobby servos use PID control and this control ...
8
Industrial robots (e.g. Kuka, ABB, Fanuc) use a control cabinet which has the following main components: Drive amplifiers (controllers): The drive amplifiers are responsible for the closed loop control of the motors in the structure of the robot (and the external axes, if present). The number of drive amplifiers usually matches the number of motors. Their ...
7
You have the right idea, just be sure to design for the servo to bear the moment force (aka torque) generated by the load at Y = 4 inches from the joint, not the 2.5 pounds of what you're trying to hold. $\tau = r*F*\sin(\theta)$ Where: r is the displacement (your 4 inch arm) F is the magnitude of the force (2.5 pounds + the gripper) Theta is the angle ...
7
Your calculation of about 80 N⋅m torque for lifting 8 kg with a 1 m lever arm is ok; more precisely, the number is 8 kg ⋅ 9.81 m/s² ⋅ 1 m = 78.48 N⋅m. As mentioned in other answers, you will need to scale up to account for gear inefficiency. A simple calculation based on work shows that the Banebots RS-550 DC motor mentioned in the question is not powerful ...
7
In velocity kinematic, you can establish a relationship between the velocity of the end-effector and the joint velocities, \begin{align} x_{2}(t) &= a_{1} \cos\theta_{1}(t) + a_{2} \cos(\theta_{1}(t)+\theta_{2}(t)) \\ y_{2}(t) &= a_{1} \sin\theta_{1}(t) + a_{2} \sin(\theta_{1}(t)+\theta_{2}(t)) \end{align} where $a_{1}$ and $a_{2}$ are the ...
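A quick symbolic check of the Jacobian implied by those forward-kinematics equations (a sketch using sympy, with the link lengths left symbolic):

import sympy as sp

a1, a2, th1, th2 = sp.symbols("a1 a2 theta1 theta2")

x2 = a1 * sp.cos(th1) + a2 * sp.cos(th1 + th2)
y2 = a1 * sp.sin(th1) + a2 * sp.sin(th1 + th2)

# Jacobian relating joint velocities to end-effector velocities:
# [x2_dot, y2_dot]^T = J(th1, th2) [th1_dot, th2_dot]^T
J = sp.Matrix([x2, y2]).jacobian(sp.Matrix([th1, th2]))
sp.pprint(sp.simplify(J))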
7
Both the forward kinematics and inverse kinematics aren't too difficult, but always a little tricky for parallel manipulators like this one. Consider the configuration in this diagram. The forward kinematics first involve solving for the position of the joint where you hold the pen from each motor joint separately and then equating the two. $\begin{bmatrix}...
7
A force balance equation is typically written as: $$m\ddot{x} + b\dot{x} + kx = F$$ where $F$ is an applied force, $x$ is position, $\dot{x}$ is velocity (first derivative of position), and $\ddot{x}$ is acceleration (second derivative of position). $m$ is mass, $k$ is a spring constant, and $b$ is a viscous damping term. This force balance is one ...
7
The core reason for choosing harmonic drives is the desire for zero backlash. Moreover, regarding mass and size, they become more beneficial for higher gear ratios as their size and mass do not scale for higher ratios. More specifically, they take up very little axial space and use only one stage of reduction. They are beneficial for high precision tasks and ...
7
The "pump-looking" things are either hydraulic cylinders, or mechanical dampers if the robot is electrically driven. EDIT: I'll accept @50k4's identification as hydraulic springs. In the "what's going on back here" department, the long thin member is a linkage. It is part of a 4-bar parallelogram linkage which allows the forearm to be driven using a ...
6
Not by merely looking at the Jacobian but by looking at the Singular Value Decomposition of the Jacobian, one can see the degrees of freedom that are lost, if any are lost. Of course it technically comes down to finding the null space, but I guess it is somewhat more familiar and easier. For example let the Jacobian be: $$J = \begin{bmatrix} -50 &...$$
6
I have been doing a lot of reading up on kinematic calibration and here is what I found: From [1]: A kinematic model should meet three basic requirements for kinematic-parameter identification: 1) Completeness: A complete model must have enough parameters to describe any possible deviation of the actual kinematic parameters from the nominal values....
6
I would recommend changing the naming convention since it is a bit misleading. In robotics the world coordinate system (CS) is usually your fixed, absolute coordinate system. Let's call the transformation matrix from your camera to your object $T_{Object,Tool}$. If it cannot include any rotation, then you are right, it should have the form as you specified. You ...
6
Writing the equations by hand and deriving them is certainly the best way to understand what is happening "in the background". Generating the equations and deriving them using a symbolics engine, like @SteveO suggested, is essentially the same process, but someone else, in this case a symbolic engine, is doing the work for you. There are however different ...
6
There are very few problems having both toolboxes installed. The biggest gotcha is the function angdiff() which is provided by both toolboxes but defined differently. If you want to stick with MATLAB 2014b you should use RTB9.10.
5
You might be able to speed up the arm's movement in a purely mechanical way -- non-invasively. For example, you could extend the arm and use the rotation of the base to ring the bell. Or, you could coordinate the movements of all the joints to make the gripper pass the bell at a maximum speed. Another way to do it could be to have the gripper pick up a ...
5
If I understood correctly, you are referring to robotic tendons.
There is a lot of material on the subject if you search google.
5
Mobile platform: An electro-mechanical linear actuator can be a good choice of lightweight actuator which can be mounted on a mobile platform. Battery powered: An electro-mechanical linear actuator is a good choice over servo motors, as linear actuators draw power only when moving, and do not need power to hold their position. 5-6 DoF: It might be ...
5
I can highly recommend CAN for inter processor communications. We use it in our robots, with up to 22 processors on the same bus. With good protocol design, you can use up about 90% of the available bandwidth (about 640kbps when you take into account all of the error checking and inter frame spacing). We're able to servo 10 motors at 1000Hz on one CAN bus. ...
5
Here is the traditional way. I think this is the kinematics of your arm, but am not 100% sure. Here are the DH parameters and transformation matrix: DH parameters for the anthropomorphic arm with spherical wrist: $$\begin{array}{ccccc} \hline \text{Link} & a_i & \alpha_i & d_i & \vartheta_i \\ \hline 1 & 0 & \...$$
5
Universal states that they use brushless DC motors with harmonic drives on their FAQ here http://cross-automation.com/blog/universal-robots-top-10-faqs Bigger ones like the KUKA KR5 use AC servo motors. From the conversation here https://support.industry.siemens.com/tf/ww/en/posts/kuka-servo-motor/87265/?page=0&pageSize=10#post344333 it looks like it is a ...
5
To answer your questions about the motors/gearing: To lift 5Kg at 1 metre distance - the "shoulder" torque needs to be 500 Kg.cm or about 5000 N.cm. This is far above the torque of most model servos, so forget them; robots of this sort of performance generally use a specialist motor, much more than 12V and a purpose built gearing arrangement that probably ...
5
For example, how does price vary with precision, speed, reach and strength? The price varies a lot, from a couple of hundred bucks to hundreds of thousands of dollars (Willow Garage's one-arm robot PR2 costs \$285,000 and the two-armed costs \$400,000), and it goes up, as you can guess, whenever the robot arm is more precise, fast, long, strong, ...
Only top voted, non community-wiki answers of a minimum length are eligible |
http://crypto.stackexchange.com/tags/diffie-hellman/hot?filter=year | # Tag Info
10
$g^x \cdot g^y = (g\cdot g\cdot g\cdot \ldots$ [$x$ of them] $\ldots \cdot g\cdot g\cdot g) \cdot (g\cdot g\cdot g\cdot \ldots$ [$y$ of them] $\ldots \cdot g\cdot g\cdot g)$ $= g\cdot g\cdot g\cdot \ldots$ [$x+y$ of them] ...
9
Diffie Hellman Diffie Hellman is a key exchange protocol. It is an interactive protocol with the aim that two parties can compute a common secret which can then be used to derive a secret key typically used for some symmetric encryption scheme. I take the notation from the link above and this means we have a group $\mathbb{Z}_p^*$ for prime $p$ ...
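As a toy illustration of that exchange (tiny numbers for readability only; a real deployment would use a vetted library and a group of at least 2048 bits):

import secrets

p = 23          # toy public prime modulus (far too small for real use)
g = 5           # public generator of Z_p^*

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)   # Alice sends g^a mod p over the insecure channel
B = pow(g, b, p)   # Bob sends g^b mod p

# Both parties arrive at the same shared secret g^(ab) mod p
assert pow(B, a, p) == pow(A, b, p)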
9
Actually, there is no major difference between $p \equiv 23\ (\bmod\ 24)$ vs $p \equiv 11\ (\bmod\ 24)$; any minor difference boils down to "do you prefer the DH shared secret to be limited to half the possible values; or do you prefer to leak a bit of the secret exponents?". OpenSSL prefers to leak one bit; the RFC 3526 designers decided they preferred ...
7
ElGamal appears to be used instead of Diffie-Hellman (or IES) in OpenPGP mostly because when that format was put together, there were some unresolved intellectual property issues surrounding both RSA and Diffie-Hellman, while ElGamal was unproblematic. This trend for ElGamal seems to stick around, mostly by force of habit, e.g. when switching to ...
7
Rather risk vulnerabilities of a third party library than implement your own. If you feel you are a novice in this field, only implement cryptography yourself as a learning exercise. Why: mistakes, lack of know-how, and maintenance. It is very easy to make novice mistakes in a custom implementation of cryptography. Even battle scarred veterans of the field make mistakes ...
7
If the DDH is hard in a group $G$ with generator $g$, then it is hard to decide given $(g,g^a,g^b,g^c)$ whether $ab\equiv c\pmod{ord(G)}$. If you take as $G$ the group $Z_p^*$ of order $p-1$ with $p$ being prime, then you will have $(p-1)/2$ elements being quadratic residues ($QR$) and the other half being non-quadratic residues ($QNR$). Now, we know that ...
7
A generator of a finite group is a value $g$ such that all elements of the group can be represented as $g^k$ for some integer $k$. Another key of looking at it is that if we consider the sequence $g,\ \ g \cdot g,\ \ g \cdot g \cdot g, ...$, saying $g$ is a generator means that all values in the group will appear somewhere in the sequence. Now, when it ...
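A brute-force check of that definition for a small group (illustrative only; real cryptographic groups are far too large to enumerate like this):

def is_generator(g, p):
    """True if g generates the full multiplicative group Z_p^* (p prime)."""
    seen = set()
    x = 1
    for _ in range(p - 1):
        x = (x * g) % p
        seen.add(x)
    return len(seen) == p - 1

print([g for g in range(2, 11) if is_generator(g, 11)])  # -> [2, 6, 7, 8]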
7
There are actually only 5 unique $x$-coordinates one needs to be concerned about: $(0, \ldots)$ $(1, \ldots)$ $(-1, \ldots)$ $(x_1, \ldots)$ $(x_2, \ldots)$, where $$\begin{eqnarray} x_1 =& 393823572354896145817230607815530211125 \\ & 29911719440698176882885853963445705823 \end{eqnarray}$$ and $$\begin{eqnarray} x_2 =& ...$$
6
It is equivalent to the computational Diffie-Hellman problem; if you can solve one of the two problems, you can solve the other (with a polynomial number of queries to the oracle which solves the other). If you can solve the Diffie-Hellman problem, you can solve your problem: this can be seen by first noting that, with a Diffie-Hellman solver, given $g^b$, you ...
6
For what it's worth, the OpenSSL developers have committed changes that improve this. I assume they will be in OpenSSL 1.0.2, but I don't know for sure. In any case, if you clone the git repo and compile the OpenSSL_1_0_2-stable branch (or master, I suppose), s_client will display the curve name: OPENSSL_CONF=apps/openssl.cnf apps/openssl s_client -CApath ...
6
Yes, you are correct. The simplest way without stepping outside NaCl would be to have both create an ephemeral, random crypto_box_keypair, then exchange public keys using their long term keys. Further communication would use that new keypair for crypto_box during that session. After they are done with the session, delete those ephemeral keys from memory. ...
5
With addition and $\mathbb{Z}_n$, each party chooses a secret $x$ and sends $xg \pmod n$ over the wire, for an agreed upon generator $g$. Division by $g$ modulo $n$ is easily computable, and reveals $x$. In other words, a prerequisite for DH to be secure is that the equivalent of the discrete logarithm is hard in the chosen group. With $\mathbb{Z}_n$ and ...
5
You can make OpenSSL print out the handshake messages with the -msg parameter: openssl s_client -msg -connect myserver.net:443 Then look for the ServerKeyExchange message. Here is an example: <<< TLS 1.2 Handshake [length 014d], ServerKeyExchange 0c 00 01 49 03 00 17 41 04 6b d8 6e 14 1c 9b 12 4d 58 29 20 e8 e2 1a 24 0d da 8f 38 1a 5d 85 ...
5
This expands CodesInChaos's comment into an answer. Forward Secrecy (that is, maintaining confidentiality of messages enciphered before compromise of the long term key) can be achieved in a protocol using a public-key signature scheme with a long-term public key, and a public-key encryption scheme with a per-session key; but in the case of RSA signature and ...
5
This is due to the Extended Euclidean algorithm, which allows us to compute inverses modulo any number. If the modulus is prime, things are even easier to explain. For prime $p$, we know that $g^{p-1} \equiv 1 \pmod{p}$. Therefore, $y = g^{p-2} \equiv 1/g \pmod{p}$. Therefore, $(xg) \cdot y \equiv x \pmod{p}$, revealing the secret key. If the modulus is not ...
5
The problem doesn't lie with curves in Weierstrass form necessarily, but with naive implementations of elliptic curve arithmetic on such curves. Basically, if you implement an ECC scheme (ECDH, ECDSA or whatever) on a smart card using a curve in Weierstrass form in the most straightforward way possible (by writing a simple double-and-add loop for ...
4
Given an EC public key, can a different, but plausible and functional private key be derived to match the public key? No, a public key will correspond to only one private key (with one minor exception, which I will explain below). With Elliptic Curve systems, the private key is an integer $d$ between 1 and $q$ (the order of the generator point $G$), and ...
4
When using a Discrete Logarithm based scheme, such as SRP, the rule of thumb is to always use private exponents with a bit length twice the desired security strength. Hence, a 128 bit exponent $a$ will at most give you 64 bits of security. If you want 128 bit security, you need (at least) a 256 bit exponent. This is because the algebraic structure of the ...
4
To decrypt with this system, the decryptor first computes $g^{ab}$ (which he can do because he knows one of the two private exponents); then, he computes the modular inverse of $g^{ab}$; that is written as $(g^{ab})^{-1}$. The modular inverse is defined the same way that the regular multiplicative inverse is defined in the reals (although there it is ...
4
Even if you were doing that you would only ensure that the communication between you and "some" router is secure. It's still possible to MITM using arpspoof for instance such that in: [you] <--- A ---> [hacker] <--- B ---> [router] Communications A & B are encrypted, yet you're not talking to the real router.
4
The simplest index-calculus attack on discrete logarithms is the following. You have a generator $g$, a target $y$ and a bunch of small primes $\ell_1, \dots, \ell_k$. The computation proceeds in three phases. First generate lots of relations of the form $$g^{r_i} = \prod_j \ell_j^{s_{ij}}.$$ These relations give you a set of linear equations in $r_i$, ...
4
The risks are much higher that there will be mistakes in a novice (or even advanced) implementation. Look at the history of OpenSSL. It was long thought secure, until someone discovered a timing side channel attack. How would you know your code is secure against all the vulnerabilities you don't know about?
4
$\pi$ is the transcendental number 3.1415926... It's there in the formula to show this specific number was not chosen with a specific cryptographical backdoor in mind; it seems unlikely that anyone was able to select the value of $\pi$ (unless Carl Sagan was correct, of course :-)
4
What we have to show for random self reducibility is that we can reduce an efficient algorithm for solving an arbitrary (worst-case) instance to an algorithm that solves a random instance efficiently. Consequently, an efficient algorithm for the average case implies an efficient algorithm for the worst case. You already have outlined how this is ...
4
No, DHE is secure and allows to share a common secret between two parties over an insecure channel. But you cannot know, if the one you share the secret with is the one you want (DHE is vulnerable to man in the middle attacks). So DHE-RSA uses DHE to share a common secret and signs the communication with RSA to make sure, that both persons communicate with ...
4
The encryption of the signatures (1) keeps the identity of the initiator (Alice) confidential, even against active attackers; (2) keeps the identity of the responder (Bob) confidential against passive eavesdroppers; and (3) provides some protection against identity misbinding attacks, although not as much as a good protocol ...
4
What you are envisioning has basically been standardized as the integrated encryption scheme being a hybrid encryption scheme providing message authenticity (IND-CCA security).
3
You'd need to compute $K^{(a^{-1})}$. Only those who hold the private key $a$ can do this. Multiplying with $(g^a)^{-1} = g^{-a}$ would subtract $a$ from the exponent, not divide the exponent by $a$. So your optimization isn't possible in practice. Take a look at the alternatives at Can one generalize the Diffie-Hellman key exchange to three or more ...
3
How does Diffie-Hellman prevent a man-in-the-middle attack? Answer: Diffie-Hellman does not prevent a man-in-the-middle attack. If you're using Diffie-Hellman without any sort of authentication, then Oscar can certainly change the keys. When he does that, what's effectively happen is that Alice and Bob aren't actually negotiating keys; Alice is ...
3
The scheme itself seems pretty standard, so it should be secure, if defined and implemented correctly. A simple textual descripion as you have provided here is not enough to prove your protocol secure. The authentication part only describes the RSA algorithm and key size - it does not specify how trust is established, nor does it define how the session keys ...
Only top voted, non community-wiki answers of a minimum length are eligible |
https://stacks.math.columbia.edu/tag/0FM1 | ## 50.4 Cup product
Consider the maps $\Omega ^ p_{X/S} \times \Omega ^ q_{X/S} \to \Omega ^{p + q}_{X/S}$ given by $(\omega , \eta ) \longmapsto \omega \wedge \eta$. Using the formula for $\text{d}$ given in Section 50.2 and the Leibniz rule for $\text{d} : \mathcal{O}_ X \to \Omega _{X/S}$ we see that $\text{d}(\omega \wedge \eta ) = \text{d}(\omega ) \wedge \eta + (-1)^{\deg (\omega )} \omega \wedge \text{d}(\eta )$. This means that $\wedge$ defines a morphism
50.4.0.1
$$\label{derham-equation-wedge} \wedge : \text{Tot}( \Omega ^\bullet _{X/S} \otimes _{p^{-1}\mathcal{O}_ S} \Omega ^\bullet _{X/S}) \longrightarrow \Omega ^\bullet _{X/S}$$
of complexes of $p^{-1}\mathcal{O}_ S$-modules.
Combining the cup product of Cohomology, Section 20.31 with (50.4.0.1) we find a $H^0(S, \mathcal{O}_ S)$-bilinear cup product map
$\cup : H^ i_{dR}(X/S) \times H^ j_{dR}(X/S) \longrightarrow H^{i + j}_{dR}(X/S)$
For example, if $\omega \in \Gamma (X, \Omega ^ i_{X/S})$ and $\eta \in \Gamma (X, \Omega ^ j_{X/S})$ are closed, then the cup product of the de Rham cohomology classes of $\omega$ and $\eta$ is the de Rham cohomology class of $\omega \wedge \eta$, see discussion in Cohomology, Section 20.31.
Given a commutative diagram
$\xymatrix{ X' \ar[r]_ f \ar[d] & X \ar[d] \\ S' \ar[r] & S }$
of schemes, the pullback maps $f^* : R\Gamma (X, \Omega ^\bullet _{X/S}) \to R\Gamma (X', \Omega ^\bullet _{X'/S'})$ and $f^* : H^ i_{dR}(X/S) \longrightarrow H^ i_{dR}(X'/S')$ are compatible with the cup product defined above.
Lemma 50.4.1. Let $p : X \to S$ be a morphism of schemes. The cup product on $H^*_{dR}(X/S)$ is associative and graded commutative.
Proof. This follows from Cohomology, Lemmas 20.31.5 and 20.31.6 and the fact that $\wedge$ is associative and graded commutative. $\square$
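Explicitly, the graded commutativity in Lemma 50.4.1 says that for $\xi \in H^i_{dR}(X/S)$ and $\eta \in H^j_{dR}(X/S)$ we have

$\xi \cup \eta = (-1)^{ij}\, \eta \cup \xi$

(This is the usual Koszul sign rule spelled out for convenience; it adds nothing beyond the lemma.)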
Remark 50.4.2. Let $p : X \to S$ be a morphism of schemes. Then we can think of $\Omega ^\bullet _{X/S}$ as a sheaf of differential graded $p^{-1}\mathcal{O}_ S$-algebras, see Differential Graded Sheaves, Definition 24.12.1. In particular, the discussion in Differential Graded Sheaves, Section 24.32 applies. For example, this means that for any commutative diagram
$\xymatrix{ X \ar[d]_ p \ar[r]_ f & Y \ar[d]^ q \\ S \ar[r]^ h & T }$
of schemes there is a canonical relative cup product
$\mu : Rf_*\Omega ^\bullet _{X/S} \otimes _{q^{-1}\mathcal{O}_ T}^\mathbf {L} Rf_*\Omega ^\bullet _{X/S} \longrightarrow Rf_*\Omega ^\bullet _{X/S}$
in $D(Y, q^{-1}\mathcal{O}_ T)$ which is associative and which on cohomology reproduces the cup product discussed above.
Remark 50.4.3. Let $f : X \to S$ be a morphism of schemes. Let $\xi \in H_{dR}^ n(X/S)$. According to the discussion Differential Graded Sheaves, Section 24.32 there exists a canonical morphism
$\xi ' : \Omega ^\bullet _{X/S} \to \Omega ^\bullet _{X/S}[n]$
in $D(f^{-1}\mathcal{O}_ S)$ uniquely characterized by (1) and (2) of the following list of properties:
1. $\xi '$ can be lifted to a map in the derived category of right differential graded $\Omega ^\bullet _{X/S}$-modules, and
2. $\xi '(1) = \xi$ in $H^0(X, \Omega ^\bullet _{X/S}[n]) = H^ n_{dR}(X/S)$,
3. the map $\xi '$ sends $\eta \in H^ m_{dR}(X/S)$ to $\xi \cup \eta$ in $H^{n + m}_{dR}(X/S)$,
4. the construction of $\xi '$ commutes with restrictions to opens: for $U \subset X$ open the restriction $\xi '|_ U$ is the map corresponding to the image $\xi |_ U \in H^ n_{dR}(U/S)$,
5. for any diagram as in Remark 50.4.2 we obtain a commutative diagram
$\xymatrix{ Rf_*\Omega ^\bullet _{X/S} \otimes _{q^{-1}\mathcal{O}_ T}^\mathbf {L} Rf_*\Omega ^\bullet _{X/S} \ar[d]_{\xi ' \otimes \text{id}} \ar[r]_-\mu & Rf_*\Omega ^\bullet _{X/S} \ar[d]^{\xi '} \\ Rf_*\Omega ^\bullet _{X/S}[n] \otimes _{q^{-1}\mathcal{O}_ T}^\mathbf {L} Rf_*\Omega ^\bullet _{X/S} \ar[r]^-\mu & Rf_*\Omega ^\bullet _{X/S}[n] }$
in $D(Y, q^{-1}\mathcal{O}_ T)$.
https://gmatclub.com/forum/if-n-is-a-prime-number-greater-than-3-what-is-the-remainder-122490-20.html
# If n is a prime number greater than 3, what is the remainder
Current Student
Joined: 12 Aug 2015
Posts: 2626
Schools: Boston U '20 (M)
GRE 1: Q169 V154
Re: If n is a prime number greater than 3, what is the remainder [#permalink]
03 Dec 2016, 23:42
Here is my solution =>
Method 1 ->
Picking a number.
Let n=5
n^2=25
Remainder with 12 => 1
Method 2->
Every prime number greater than 3 can be written as =>
6k+1 or 6k-1
OR
4k+1 or 4k+3
Using the first form: whether it's 6k+1 or 6k-1, we get (6k±1)^2 = 36k^2 ± 12k + 1 = 12(3k^2 ± k) + 1, so the remainder with 12 will always be one.
Hence B
Board of Directors
Status: QA & VA Forum Moderator
Joined: 11 Jun 2011
Posts: 4351
Location: India
GPA: 3.5
If n is a prime number greater than 3, what is the remainder [#permalink]
04 Dec 2016, 00:08
chonepiece wrote:
If n is a prime number greater than 3, what is the remainder when n^2 is divided by 12?
A. 0
B. 1
C. 2
D. 3
E. 5
Let $$n = 5$$ , so $$n^2 = 25$$ & $$\frac{n^2}{12}$$ = Remainder 1
Let $$n = 7$$ , so $$n^2 = 49$$ & $$\frac{n^2}{12}$$ = Remainder 1
Let $$n = 11$$ , so $$n^2 = 121$$ & $$\frac{n^2}{12}$$ = Remainder 1
Let $$n = 13$$ , so $$n^2 = 169$$ & $$\frac{n^2}{12}$$ = Remainder 1
Hence answer will be (B) 1
Manager
Joined: 25 Mar 2013
Posts: 240
Location: United States
Concentration: Entrepreneurship, Marketing
GPA: 3.5
Re: If n is a prime number greater than 3, what is the remainder [#permalink]
30 Dec 2016, 09:03
n = 5,7,11..
n^2 = 25,49,121
Always leaves a remainder of 1
B
VP
Status: It's near - I can see.
Joined: 13 Apr 2013
Posts: 1364
Location: India
GMAT 1: 480 Q38 V22
GPA: 3.01
WE: Engineering (Consulting)
Re: If n is a prime number greater than 3, what is the remainder [#permalink]
15 Mar 2018, 05:17
chonepiece wrote:
If n is a prime number greater than 3, what is the remainder when n^2 is divided by 12?
A. 0
B. 1
C. 2
D. 3
E. 5
it's a simple question, but the solution is inspiring.
Spoiler: :: Solution
n^2-1=(n-1)(n+1)
since (n-1) and (n+1) are consecutive even numbers, one of them is divisible by 2 and the other by 4;
and because n can not be divided by 3, so one of (n-1) and (n+1) can be divided by 3.
So (n-1)(n+1)=n^2-1 is divisible by 24, so the remainder of n^2 divided by 24 is 1; and since 12 divides 24, the remainder of n^2 divided by 12 is also 1.
Pick any prime number > 3, and solve.
Take n = 5: then 5^2 = 25, and 25/12 gives you remainder "1"
(B)
https://www.ilovefreesoftware.com/21/windows-10/set-pagefile-sys-to-delete-automatically-on-shutdown-in-windows-10.html | Editor Ratings:
User Ratings:
[Total: 0 Average: 0/5]
This article explains how to set Pagefile.sys to delete automatically on shutdown in Windows 10. Windows 10 automatically creates a Pagefile as you use the operating system. It is a paging file that contains virtual memory for the OS. By default, this memory is the same in size as the physical RAM you have on the system. This virtual memory is reserved on the disk and used to store the programs that are not being used to relieve the RAM.
From the advanced system settings, you can configure the virtual memory accordingly. You can pick a disk for virtual memory and define its size. If there are multiple disk partitions in the system then you can reserve virtual memory from any or all of the disks. Now the disadvantage of Pagefile.sys is that it saves files on the disk and hence can occupy a significant amount of the storage. Windows can function normally if you don’t clear the page memory, but it can present a security risk. To avoid that, you can configure Windows to delete Pagefile.sys automatically on system shutdown.
Also read: How to Soft Disconnect a PC from Network in Windows 10?
## Delete Pagefile.sys Automatically on Shutdown
From the Windows registry, you can configure Windows to delete Pagefile.sys automatically on shutdown. There is a special registry entry for that; all you have to do is change the value of that registry entry. For that, press Windows key + R on your keyboard simultaneously. This opens the Run dialog; enter "regedit" in the dialog and press enter to open the Windows Registry Editor.
In the Registry Editor, you have to go to 'Memory Management' under 'Local Machine'. For that, paste the following address in the Registry Editor address bar and press enter.
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
This takes you to the 'Memory Management' folder. You get all the registry entries under that folder on the right side of the window. Look for "ClearPageFileAtShutdown" and double-click on it to edit. This opens an edit box for this entry. Simply change the "Value data" in the box to 1 and save it. From now on, the PageFile.sys will be cleared automatically when you shut down the system.
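For readers who prefer to script this change, here is a minimal sketch in Python using the standard winreg module (run it from an elevated session; the key path and value name are exactly the ones described above):

import winreg

key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management",
    0,
    winreg.KEY_SET_VALUE,
)
# 1 = clear Pagefile.sys on every shutdown, 0 = default behavior
winreg.SetValueEx(key, "ClearPageFileAtShutdown", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)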
### Closing Words
By setting Pagefile.sys to delete automatically on shutdown, you not only recover storage space on your disk(s) but also eliminate a potential security risk on the operating system. Clearing the pagefile on shutdown does prolong the shutdown process a little bit, but given the benefits, it's totally worth it.
https://quant.stackexchange.com/questions/17053/dcc-garch-specifying-arch-and-garch-parameter-matrices-in-stata | # DCC GARCH: specifying ARCH and GARCH parameter matrices in STATA
The command in STATA to estimate the DCC model of two variables is:
mgarch dcc ( x1 x2=, noconstant) , arch(1) garch(1) distribution(t)
$$\begin{bmatrix} h_1{t} \\ h_2{t} \end{bmatrix} = \begin{bmatrix} w_{10} \\ w_{20} \end{bmatrix} + \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} \epsilon_{1t-1} \\ \epsilon_{2t-1} \end{bmatrix} + \begin{bmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{bmatrix} \begin{bmatrix} h_{1t-1} \\ h_{2t-1} \end{bmatrix}$$
When I give this command, STATA understands that the ARCH and GARCH matrices are diagonal, i.e. $a_{21}=a_{12}=g_{21}=g_{12}=0$. How can I change this to implement a FULL ARCH and GARCH parameter matrices, to capture the spillover effects?
• I think this question is off topic. You should post it on the cross validated or stackoverflow sites – Quantopik Mar 21 '15 at 1:33
• In my opinion it's on-topic here and less so over there. GARCH main area of application seems to be Quantitative Finance / Risk Management. – Bob Jansen Mar 21 '15 at 10:48
• Have I answered your question? – Richard Hardy Aug 14 '16 at 11:38
• @BobJansen, there seems to be no finance-specific aspect in this question, except that GARCH and DCC models are usually used in finance. But is that sufficient? GARCH is a statistical time series model and as such should belong to Cross Validated. There are just over 300 threads on Cross Validated tagged with ARCH and GARCH (compare to under 170 here on QF), and more threads on volatilty forecasting. (However, the software-implementation aspect of the question would be off topic on Cross Validated.) – Richard Hardy Aug 14 '16 at 12:14
• On Stata I do not know how you can capture the spillover effect. On R there is the package dccgarch, in which you can fit an extended dccgarch model. – Konstantinos Gk Apr 29 '17 at 15:06
How can I change this to implement FULL ARCH and GARCH parameter matrices, to capture the spillover effects?
You cannot.
The original paper by Engle (2002) as well as the Stata manual for the DCC-GARCH model reveal that the model admits a different form than the one represented in the equation in your question. (What you have there is a special case of a restricted VECH-GARCH model -- but the error terms in your formula should be squared.)
A DCC-GARCH model starts out by modelling the conditional variances of the individual assets as univariate GARCH processes. The fitted cond. variances are used to scale the residuals from the cond. mean model (if any; otherwise the residuals coincide with the raw data). Then the scaled residuals are used for modelling the cond. correlation matrices; the model used in this step is sort of a GARCH model but this time it considers cond. correlation matrices instead of scalar cond. variances.
This is roughly the logic of the DCC model. For more details and formulas you may refer to the original paper or the Stata manual. The takeaway in your case is that the spillover effects cannot be modelled explicitly using the DCC-GARCH model -- because there is no explicit dependence of the cond. variance $h_{1,t}$ of the component series $x_{1,t}$ on the lagged cond. variance $h_{2,t-1}$ or the lagged squared error $\varepsilon^2_{2,t-1}$ from the component series $x_{2,t}$.
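A bare-bones numpy sketch of the two-step logic just described (the parameters a and b are fixed here rather than estimated, and the univariate GARCH step is assumed to have produced the standardized residuals already; this is purely illustrative, not the Stata implementation):

import numpy as np

def dcc_correlations(eps, a=0.05, b=0.90):
    """DCC recursion on standardized residuals eps (T x k array).

    Q_t = (1 - a - b) * S + a * eps_{t-1} eps_{t-1}' + b * Q_{t-1}
    R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}
    """
    T, k = eps.shape
    S = np.corrcoef(eps.T)          # unconditional correlation target
    Q = S.copy()
    R = np.empty((T, k, k))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)   # rescale Q_t to a proper correlation matrix
        Q = (1 - a - b) * S + a * np.outer(eps[t], eps[t]) + b * Q
    return R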
For spillover effects you could use BEKK-GARCH model, but I have not seen it implemented in Stata.
References

Engle, R. F. (2002). "Dynamic Conditional Correlation: A Simple Class of Multivariate Generalized Autoregressive Conditional Heteroskedasticity Models." Journal of Business & Economic Statistics, 20(3), 339-350. |
https://mathematica.stackexchange.com/questions/96615/best-way-to-count-this/96637 | Best way to count this?
I'm trying to use the Count function for this purpose, but it's not working how I'd like it to. Does anybody know a simple way to do this counting?
I would like to be counting the number of "p"s in expressions like
p[1,2][1]^2 p[3,4][3]
or
p[3,3][1]
I would like the count to return 2 on the first one (even though one of the p's has a squared term), and 1 on the second.
edit: I realized for the purpose of counting I can remove the "square" so the first expression would just look like
p[1,2][1]p[3,4][3]
if that makes it any easier.
• Do they always come in the form p[a, b][t]? – march Oct 9 '15 at 19:00
• What should it return of f[p]? – Dr. belisarius Oct 9 '15 at 19:00
• @belisariusisforth: Seems like you're violating your Principle of don't-ask-questions-to-expand-the-scope-of-the-question. :) – march Oct 9 '15 at 19:02
• @march yes they do. – Alex Mathers Oct 9 '15 at 19:07
One way is to turn it into a string and count the number of occurences of p in the string:
StringCount[ToString[p[1, 2][1]^2 p[3, 4][3]], "p"]
• I had just figured this out and was about to answer my own question with that! Thanks though. – Alex Mathers Oct 9 '15 at 18:59
• Ahem: p["p"]... :D (To be fair to you, the OP didn't mention that they only wanted to count symbols although that's how I interpreted the question) – rm -rf Oct 10 '15 at 11:22
This works:
Count[p[1, 2][1]^2 p[3, 4][3], _p, ∞, Heads -> True]
(* 2 *)
"And Now for Something Completely Different"...
expr = p[1, 2][1]^2 p[3, 4][3];
Module[{n = 0}, expr /. p :> n++; n]
(* 2 *)
No claim to be "best"
• Cute, though: +1. – march Oct 9 '15 at 20:39
Length[Position[p[1, 2][1]^2 p[3, 4][3], p]]
(* 2 *) |
https://www.physicsforums.com/threads/questions-dealing-with-force.89092/ | # Questions dealing with force
1. Sep 14, 2005
### Meteo
A couple of questions in this problem.
A 20,000kg rocket has a rocket motor that generates $$3.0 *10^5N$$ of thrust.
1. What is the rocket's initial upward acceleration?
I used the formula F=MA and got 15 but apparently thats not the right answer. So Im stumped.
2. At an altitude of 5000m the rocket's acceleration has increased to 6.0m/s^2. What mass of fuel has it burned?
Im assuming I need the answer to the first part and that the 5000m is irrelevant.
$$300000=m_1*6$$ Then I should be able to get the answer $$20000-m_1$$
Thanks.
2. Sep 15, 2005
### M.Hamilton
I think you are on the right track. With problems like these the first step is to draw a free body diagram then sum the forces - perhaps the problem states that the rocket is taking off from Earth? After summing the forces you should have the answer for the Total a. Let me know if that helps.
Merle
3. Sep 15, 2005
### Meteo
Ah ok I see why my answer is wrong. I needed to subtract weight from the thrust. |
http://www.mscand.dk/issue/view/1786 | ## Vol 64 (1989)
#### Articles
Hamilton circuits with many colours in properly edge-coloured complete graphs. Lars Dovling Andersen 5-14
Existence theorems for measures on continuous posets, with applications to random set theory. Tommy Norberg 15-51
Orders, in non-Eichler ($R$)-algebras over global function fields, having the cancellation property. Marleen Denert 52-62
Corona $C^*$-algebras and their applications to lifting problems. Catherine L. Olsen, Gert K. Pedersen 63-86
Commutators and generators II. Derek W. Robinson 87-108
Twisted group $C^*$-algebras corresponding to nilpotent discrete groups. Judith A. Packer 109-122
The $S^p$-criterion for Hankel forms on the Fock space, $0 < p < 1$. Robert Wallstén 123-132
Invariance principles for Brownian intersection local time and polymer measures. Andreas Stoll 133-160
On the classification of G-spheres II: PL automorphism groups. Ib Madsen, Melvin Rothenberg 161-218
The characteristic algebra of a polynomial covering map. Vagn Lundsgaard Hansen 219-225
An F. and M. Riesz theorem for compact Lie groups. R. G. M. Brummelhuis 226-232
Hermitian natural tensors. A. Ferrández, V. Miquel 233-250
Simple singularities of functions on supermanifolds. V. Serganova, A. Weintrob 251-284
Centrally ergodic one-parameter automorphism groups on semifinite injective von Neumann algebras. Yasuyuki Kawahigashi 285-299
Continuity and linear extensions of quantum measures on Jordan operator algebras. L. J. Bunce, J. D. Maitland Wright 300-306
Weakly unconditionally convergent series in M-ideals. Gilles Godefroy, Paulette Saab 307-318
https://quantiki.org/journal-article/unconditional-security-sending-or-not-sending-twin-field-quantum-key-distribution | # Unconditional security of sending or not sending twin-field quantum key distribution with finite pulses. (arXiv:1904.00192v3 [quant-ph] UPDATED)
The Sending-or-Not-Sending protocol of the twin-field quantum key
distribution (TF-QKD) has its advantage of unconditional security proof under
any coherent attack and fault tolerance to large misalignment error. So far
this is the only coherent-state based TF-QKD protocol that has considered
finite-key effect, the statistical fluctuations. Here we consider the complete
finite-key effects for the protocol and we show by numerical simulation that
the protocol with typical finite number of pulses in practice can produce
unconditional secure final key under general attack, including all coherent
attacks. It can exceed the secure distance of 500 $km$ in typical finite number
of pulses in practice even with a large misalignment error. |
https://www.sparrho.com/item/heat-kernel-analysis-on-infinite-dimensional-heisenberg-groups/8769e7/ | # Heat Kernel Analysis on Infinite-Dimensional Heisenberg Groups
Research paper by Bruce Driver, Maria Gordina
Indexed on: 12 May '08Published on: 12 May '08Published in: Mathematics - Probability
#### Abstract
We introduce a class of non-commutative Heisenberg like infinite dimensional Lie groups based on an abstract Wiener space. The Ricci curvature tensor for these groups is computed and shown to be bounded. Brownian motion and the corresponding heat kernel measures, $\{\nu_t\}_{t>0},$ are also studied. We show that these heat kernel measures admit: 1) Gaussian like upper bounds, 2) Cameron-Martin type quasi-invariance results, 3) good $L^p$-bounds on the corresponding Radon-Nikodym derivatives, 4) integration by parts formulas, and 5) logarithmic Sobolev inequalities. The last three results heavily rely on the boundedness of the Ricci tensor. |
https://hackage-origin.haskell.org/package/blaze-svg-0.3.6.1/docs/Text-Blaze-Svg-Internal.html | blaze-svg-0.3.6.1: SVG combinator library
Text.Blaze.Svg.Internal
Synopsis
# Documentation
type Svg = Markup Source #
Type to represent an SVG document fragment.
toSvg :: ToMarkup a => a -> Svg Source #
Type to accumulate an SVG path.
Construct SVG path values using path instruction combinators. See simple example below of how you can use mkPath to specify a path using the path instruction combinators that are included as part of the same module.
import Text.Blaze.Svg11 ((!), mkPath, l, m)
import qualified Text.Blaze.Svg11 as S
import qualified Text.Blaze.Svg11.Attributes as A
svgDoc :: S.Svg
svgDoc = S.docTypeSvg ! A.version "1.1" ! A.width "150" ! A.height "100" $ do
  S.path ! A.d makeSimplePath

makeSimplePath :: S.AttributeValue
makeSimplePath = mkPath $ do
  l 2 3
  m 4 5
m :: Show a => a -> a -> Path Source #
Moveto
mr :: Show a => a -> a -> Path Source #
Moveto (relative)
ClosePath
l :: Show a => a -> a -> Path Source #
Lineto
lr :: Show a => a -> a -> Path Source #
Lineto (relative)
h :: Show a => a -> Path Source #
Horizontal lineto
hr :: Show a => a -> Path Source #
Horizontal lineto (relative)
v :: Show a => a -> Path Source #
Vertical lineto
vr :: Show a => a -> Path Source #
Vertical lineto (relative)
c :: Show a => a -> a -> a -> a -> a -> a -> Path Source #
Cubic Bezier curve
cr :: Show a => a -> a -> a -> a -> a -> a -> Path Source #
Cubic Bezier curve (relative)
s :: Show a => a -> a -> a -> a -> Path Source #
Smooth Cubic Bezier curve
sr :: Show a => a -> a -> a -> a -> Path Source #
Smooth Cubic Bezier curve (relative)
q :: Show a => a -> a -> a -> a -> Path Source #

Quadratic Bezier curve

qr :: Show a => a -> a -> a -> a -> Path Source #

Quadratic Bezier curve (relative)

t :: Show a => a -> a -> Path Source #

Smooth quadratic Bezier curve

tr :: Show a => a -> a -> Path Source #

Smooth quadratic Bezier curve (relative)
Arguments
:: Show a
=> a      -- ^ Radius in the x-direction
-> a      -- ^ Radius in the y-direction
-> a      -- ^ The rotation of the arc's x-axis compared to the normal x-axis
-> Bool   -- ^ Draw the smaller or bigger arc satisfying the start point
-> Bool   -- ^ To mirror or not
-> a      -- ^ The x-coordinate of the end point
-> a      -- ^ The y-coordinate of the end point
-> Path
Elliptical Arc (absolute).
Note that this function is an alias for the function a, defined in Text.Blaze.Svg.Internal. aa is exported from Text.Blaze.Svg instead of a due to naming conflicts with a from Text.Blaze.SVG11.
Arguments
:: Show a
=> a      -- ^ Radius in the x-direction
-> a      -- ^ Radius in the y-direction
-> a      -- ^ The rotation of the arc's x-axis compared to the normal x-axis
-> Bool   -- ^ True to draw the larger of the two arcs satisfying constraints.
-> Bool   -- ^ To mirror or not
-> a      -- ^ The x-coordinate of the end point
-> a      -- ^ The y-coordinate of the end point
-> Path
Elliptical Arc (absolute). This is the internal definition for absolute arcs. It is not exported but instead exported as aa due to naming conflicts with a.
Arguments
:: Show a
=> a      -- ^ Radius in the x-direction
-> a      -- ^ Radius in the y-direction
-> a      -- ^ The rotation of the arc's x-axis compared to the normal x-axis
-> Bool   -- ^ True to draw the larger of the two arcs satisfying constraints.
-> Bool   -- ^ To mirror or not
-> a      -- ^ The x-coordinate of the end point
-> a      -- ^ The y-coordinate of the end point
-> Path
Elliptical Arc (relative)
translate :: Show a => a -> a -> AttributeValue Source #
Specifies a translation by x and y
scale :: Show a => a -> a -> AttributeValue Source #
Specifies a scale operation by x and y
rotate :: Show a => a -> AttributeValue Source #
Specifies a rotation by rotate-angle degrees
rotateAround :: Show a => a -> a -> a -> AttributeValue Source #
Specifies a rotation by rotate-angle degrees about the given point rx,ry
skewX :: Show a => a -> AttributeValue Source #
Skew transformation along x-axis
skewY :: Show a => a -> AttributeValue Source #
Skew transformation along y-axis
matrix :: Show a => a -> a -> a -> a -> a -> a -> AttributeValue Source #
Specifies a transform in the form of a transformation matrix |
https://engineering.stackexchange.com/questions/26989/thermal-expansion-of-a-hole-in-a-plate-with-a-temperature-gradient | # Thermal expansion of a hole in a plate with a temperature gradient
I have a rectangular metal plate with a hole in it (with diameter of 300 mm). The plate has a temperature gradient going from one of its short sides to the other, I can measure the temperature anywhere in the plate.
I want to compute how much the hole is deforming from its round shape.
I know that I can calculate the expansion of the hole at uniform temperature with $$\frac{ΔL}{L_0}=αΔT$$. So I was wondering if it is correct to measure a bunch of temperature points at the edge of the hole and just apply that formula to each of them independently. But since there's a temperature gradient, I'm assuming that there will be mechanical stresses between hot and cold zones working against the expansion, is this the case?
Is there a way to compute this by hand?
In the picture $$T1>T2$$.
• A sketch might help here. – grfrazee Apr 20 '19 at 22:03
• No, you can’t use that formula. That formula is for uniaxial strain. I don’t think this problem has an analytical solution either, so you’ll need to use something like finite element method on the uncoupled thermoelasticity equations. – Paul Apr 20 '19 at 22:54
• @Paul FEM indeed provides a more accurate answer, but I guess if the plate is constrained (though that's not the case here) we can apply Lamé equations for plane stress; of course this would be just an approximation. – Sam Farjamirad Apr 21 '19 at 6:04
So at the stationary state, if we assume the warping of the surface is limited to small angles, $$\frac{\Delta L_{i,j,k}}{L_{i,j,k}} = \alpha\cdot\Delta T$$
• The expression you post is the relative change in length $\Delta L / L$ for the entire bar after it has experienced an overall temperature change $\Delta T$. The system at hand has a thermal gradient at any point $z$ along the bar. These are not the same cases. – Jeffrey J Weimer May 22 '19 at 2:11 |
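As a rough numerical illustration of the pointwise estimate proposed in the question (a sketch only: it applies $$\frac{ΔL}{L_0}=αΔT$$ independently at sampled edge points and ignores the internal-stress coupling raised in the comments; the material constant and the linear temperature profile are made-up assumptions):

import numpy as np

alpha = 12e-6          # thermal expansion coefficient of steel, 1/K (assumed)
r0 = 150.0             # hole radius in mm (300 mm diameter)
T_ref = 20.0           # reference temperature, deg C

def temperature(x):
    """Made-up linear gradient along the plate, T1 > T2."""
    return 120.0 - 0.5 * x   # deg C at position x (mm)

angles = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
x_edge = r0 * np.cos(angles)              # x-coordinates of sampled edge points
dT = temperature(x_edge) - T_ref          # local temperature rise at each point
r_new = r0 * (1.0 + alpha * dT)           # naive pointwise radial expansion

print("radius range: %.4f .. %.4f mm" % (r_new.min(), r_new.max()))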
http://www.complexity-explorables.org/explorables/swarmalators/ | EXPLORABLES
This explorable illustrates how remarkable spatio-temporal patterns can emerge when two dynamical phenomena, synchronization and collective motion, are combined. In the model, a bunch of oscillators move around in space and interact. Each oscillator has an internal oscillatory phase. An oscillator's movement and change of internal phase both depend on the positions and internal phases of all other oscillators. Because of this entanglement of spatial forces and phase coupling the oscillators are called swarmalators.
The model was recently introduced and studied by Kevin P. O’Keeffe, Hyunsuk Hong, and Steve Strogatz in the paper "Oscillators that sync and swarm", Nat. Comm., 8: 1504 (2017). It may capture effects observed in biological systems, e.g. populations of chemotactic microorganisms or bacterial biofilms. Recently swarmalators were realized in groups of little robots and a flock of drones.
Press Play, count to 100 (!), and keep on reading....
## This is how it works
Here we have $$N=500$$ swarmalators. The state of swarmalator $$n$$ is defined by three variables: the internal phase $$\theta_n(t)$$ and the two positional variables $$x_n(t)$$ and $$y_n(t)$$. The phase is depicted by a color of a continuous rainbow colorwheel. Initially, the swarmalators' phase variable is random, all of them are placed randomly in the plane, and all are at rest. For the math savvy, the equations of motion that govern the system are discussed below. Here we will outline the mechanics qualitatively.
### Movements
The swarmalators are subject to two opposing forces. We have short range repulsion: when two swarmalators come too close, a repulsive force dominates and pushes them apart, so they avoid bumping into each other. This force is negligible when swarmalators are far apart.
Additionally, any two swarmalators experience a force that pulls them towards each other. The magnitude of this attractive force does not decrease with distance, which is why it's a long range attractive force. The key point is that this attractive force between two swarmalators, say $$n$$ and $$m$$, depends on their phase difference $$\theta_m-\theta_n$$. When the "Like attracts like" parameter $$J$$ is positive, a similarity in phase enhances the attractive force; when this parameter is negative, swarmalators are more attracted to others of opposite phase.
### Synchronization
The swarmalators phases advance at a constant phase velocity (frequency) $$\omega$$ like an internal clock. Additionally, a swarmalator's phase also changes as a function of the phases of the other swarmalators. When the synchronization parameter $$K$$ is positive the phase difference $$\theta_m-\theta_n$$ between two swarmalators $$n$$ and $$m$$ decreases and they tend to synchronize. When $$K$$ is negative, the opposite occurs. The magnitude of this phase coupling force decreases with distance and therefore depends on the positions of the swarmalators. This type of phase coupled synchronization is also explored in the Explorables "Ride my Kuramotocycle" and "Janus Bunch".
## Observe this
You can observe a variety of stationary or dynamic patterns in this simple model just by changing the two parameters $$J$$ and $$K$$ with the two corresponding sliders. The radio buttons help select parameter combinations that automatically yield different patterns. Freezing the phase turns on the comoving reference frame along the phase dimension, so only relative phases are color coded.
The Rainbow Ring pattern emerges when the synchronization force vanishes and the "Like attracts like" parameter is positive. The swarmalators sort themselves out to a stationary ring pattern. You need some patience for this pattern to emerge.
In the Dancing Circus the swarmalators are attracted to others of similar phase but desynchronize when they are close, which decreases the attractive force. This back and forth generates a dynamic pattern in which the swarmalators can't settle into a stationary configuration. Initially, the system needs some time to get moving, so wait a bit here, too.
In the Uniform Blob setup the synchronization force is very strong. Eventually the swarmalators will settle into a regular, fully synced stable state.
When you select the Solar Convection setup, the advanced settings are turned on because for this one you need variation in the swarmalators' natural frequencies $$\omega_n$$. Otherwise, this system is like the Uniform Blob. Because the swarmalators are all a bit different, those with most disparate natural frequencies get pushed to the periphery and show behavior reminiscent of convection.
The pattern Makes me Dizzy is complementary to Solar Convection in terms of the strength of forces. Solar convection occurs in a parameter regime with large sync strength $$K$$ and small but positive "Like attracts like" force $$J$$. Makes Me Dizzy has weak but positive sync strength, but strong "Like attracts like" forces. The pattern is very dynamic, mixing, and beautiful once it sets in. My favorite. Make sure to turn on Freeze Phase.
Fractured: An interesting pattern emerges when the "Like attracts like" force is very strong, but the swarmalators have a slight tendency to desynchronize. To see this pattern you need patience. It takes a while to stabilize. After a transient ring shaped pattern, you will finally see a pattern that looks like a horizontal slice through an orange.
## The math
The dynamic equations that are at work here and define the model are given by differential equations for the positions and the phases of the swarmalators. Denoting the position vector of swarmalator $$n$$ by $$\mathbf{r}_{n}=(x_{n},y_{n})$$ these are:
$d\mathbf{r}_n/dt=\mathbf{v}_n+\frac {1}{N}\sum_{m\neq n}\left[\frac{\mathbf{r} _m - \mathbf{r} _n}{|\mathbf{r} _m - \mathbf{r} _n|}(1+J\cos(\theta_m-\theta_n))-\frac{\mathbf{r} _m - \mathbf{r} _n}{|\mathbf{r} _m - \mathbf{r} _n|^2}\right]$
and
$d\theta_n/dt = \omega_n +\frac{K}{N}\sum_{m\neq n}\frac{\sin(\theta_m-\theta_n)}{|\mathbf{r} _m - \mathbf{r} _n|}$
In the first equation, we see three contributions to the velocity. The first term $$\mathbf{v}_n$$ is a swarmalator's natural propulsion velocity, at which it would move when isolated. In the explorable, this is set to zero. Giving the swarmalators a nonzero velocity doesn't change the equilibrium patterns substantially. The third term is the repulsive force. The second term governs attraction. Attraction is modulated by the $$1+J\cos(\theta_m-\theta_n)$$ expression. This modulation enhances the attractive force when $$J>0$$ and decreases it when $$J<0$$.
In the second equation $$\omega_n$$ is the natural frequency of swarmalator $$n$$. The second term is a phase coupling as in the Kuramoto model that effectively decreases the phase difference between two oscillators when $$K>0$$. The spatial coupling enters here, because the strength of this synchronization coupling decreases with distance. |
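For readers who want to experiment outside the explorable, here is a minimal Euler-integration sketch of the two equations above (Python/NumPy; the values of N, dt, J and K are illustrative choices, not the explorable's settings):

    import numpy as np

    N, dt, steps = 100, 0.1, 1000
    J, K = 1.0, -0.1
    rng = np.random.default_rng(0)
    r = rng.uniform(-1, 1, (N, 2))          # positions
    theta = rng.uniform(0, 2 * np.pi, N)    # phases
    omega = np.zeros(N)                     # identical natural frequencies

    for _ in range(steps):
        diff = r[None, :, :] - r[:, None, :]        # r_m - r_n
        dist = np.linalg.norm(diff, axis=2)
        np.fill_diagonal(dist, np.inf)              # exclude m == n terms
        dtheta = theta[None, :] - theta[:, None]    # theta_m - theta_n
        attract = (1 + J * np.cos(dtheta))[:, :, None] * diff / dist[:, :, None]
        repel = diff / (dist ** 2)[:, :, None]
        dr = (attract - repel).sum(axis=1) / N      # v_n = 0, as in the explorable
        dth = omega + (K / N) * (np.sin(dtheta) / dist).sum(axis=1)
        r += dt * dr
        theta += dt * dth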
https://www.r-bloggers.com/2017/09/major-update-of-d3partitionr-interactive-viz-of-nested-data-with-r-and-d3-js/ | Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
D3partitionR is an R package to visualize nested and hierarchical data interactively using D3.js and HTML widgets. These last few weeks I've been working on a major D3partitionR update, which is now available on GitHub. As soon as enough feedback is collected, the package will be uploaded to CRAN. Until then, you can install it using devtools:
[sourcecode language="r"]
library(devtools)
install_github("AntoineGuillot2/D3partitionR")
[/sourcecode]
Here is a quick overview of the possibilities using the Titanic data:
## A major update
This update is a major one and will break code written for version 0.3.1.
### New functionalities
• Additional data for nodes: Additional data can be added for some given nodes. For instance, if a comment or a link needs to be shown in the tooltip or label of some nodes, they can be added through the add_nodes_data function
• Variable selection and computation, now, you can provide a variable for:
• sizing (i.e. the size of each node)
• color, any variable from your data.frame or from the nodes data can be used as a color.
• label, any variable from your data.frame or from the nodes data can be used as a label.
• tooltip, you can provide several variables to be displayed in the tooltip.
• aggregation function, when numerical variables are provided, you can choose the aggregation function you want.
• Coloring: The color scale can now be continuous. For instance, you can use the mean survival rate of the Titanic accident in each node; this makes it easy to see quickly that women in 1st class were more likely to survive than men in 3rd class.
• Label: Labels showing the node's names (or any other variable) can now be added to the plot.
• Breadcrumb: To avoid overlapping, the width of each breadcrumb is now variable and dependent on the length of the word.
• Legend: By default, the legend now shows all the modalities/levels that are in the plot. To avoid wrapping, enabling the zoom_subset option will only show the modalities in the direct children of the zoomed root.
### API and backend change
• Easy data preprocessing: Data preparation was tedious in previous versions. Now you just need to aggregate your data.frame at the right level; the data.frame can then be used directly in the D3partitionR functions, so you no longer have to deal with nesting a data.frame, which could be pretty complicated.
[sourcecode language="r"]
require(data.table)
require(D3partitionR)
## Aggregating the data to get one unique sequence of the 4 variables
var_names = c('Sex', 'Embarked', 'Pclass', 'Survived')
data_plot = titanic_data[, .N, by = var_names]
## Prefix each value with its variable name (e.g. "Sex male")
data_plot[, (var_names) := lapply(var_names, function(x) {
  paste0(x, ' ', data_plot[[x]])
})]
[/sourcecode]
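With data_plot prepared, feeding it to the widget looks roughly like this; the add_data argument names follow the new API as I understand it, so check ?add_data before relying on them:

[sourcecode language="r"]
D3partitionR() %>%
  add_data(data_plot, count = 'N', steps = c('Sex', 'Embarked', 'Pclass', 'Survived')) %>%
  set_chart_type('sunburst') %>%
  plot()
[/sourcecode]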
• The R API is greatly improved: D3partitionR objects are now S3 objects with a clearly named list of functions to add data and to modify the chart's appearance and parameters. Using pipes now makes the D3partitionR syntax look gg-like.
[sourcecode language="r"]
##Treemap
D3partitionR()%>%
set_chart_type('treemap')%>%
plot()
##Circle treemap
D3partitionR()%>%
set_chart_type('circle_treemap')%>%
plot()
[/sourcecode]
• Style consistency among the different types of charts. It is now easy to switch from a treemap to a circle treemap or a sunburst and keep a consistent styling policy.
• Update to d3.js V4 and modularization. Each type of chart now has its own file and function. This function draws the chart at its root level with labels and colors, and it returns a zoom function. The on-click actions (such as the breadcrumb update or the legend update) and the hover action (tooltips) are defined in a 'global' function.
Hence, adding new visualizations will be easy, the drawing and zooming script will just need to be adapted to this previous template.
## What’s next
Thanks to the several feedbacks that will be collected during next week, a stable release version should soon be on the CRAN. I will also post more ressources on D3partitionR with use cases and example of Shiny Applications build on it.
https://tex.stackexchange.com/questions/497637/adding-text-above-vertical-and-horizontal-column-lines | # Adding text above vertical and horizontal column lines
I currently made the following table;
Which corresponds to this code:
\begin{table}[]
\begin{tabular}{lll}
& & \\
& \multicolumn{1}{l|}{Small Value} & Big Value \\ \cline{2-3}
& \multicolumn{1}{l|}{Small Neutral} & Big Neutral \\ \cline{2-3}
& \multicolumn{1}{l|}{Small Growth} & Big Growth
\end{tabular}
\end{table}
I would like to add some text above the vertical separation line, as well as to the left of the horizontal lines. Specifically, I would like something as follows:
Is this possible?
• The text above the vertical line is quite easy if you replace your first empty line with & \multicolumn{2}{c}{Median ME} \\ . – leandriis Jun 27 at 8:00
• Thank you that works. Would it perhaps be possible to do something similar with multirow for the text for the midrules? – Rik Jun 27 at 8:03
With the use of \multicolumn and \multirow, as well as some simplification of your code:
\documentclass{article}
\usepackage{multirow}
\begin{document}
\begin{table}
\begin{tabular}{ll|l}
& \multicolumn{2}{c}{Median ME} \\
\multirow{2.2}{*}{70\textsuperscript{th} BE/ME percentile}& Small Value & Big Value \\ \cline{2-3}
\multirow{2.2}{*}{30\textsuperscript{th} BE/ME percentile}& Small Neutral & Big Neutral \\ \cline{2-3}
& Small Growth & Big Growth
\end{tabular}
\end{table}
\end{document} |
https://www.zora.uzh.ch/id/eprint/179209/ | # Electron-driven C2-symmetric Dirac semimetal uncovered in Ca3Ru2O7
Horio, M; Jöhr, S; Sutter, D; Das, L; Fischer, Mark; Chang, J (2019). Electron-driven C2-symmetric Dirac semimetal uncovered in Ca3Ru2O7. arXiv.org 1911.12163, University of Zurich.
## Abstract
Two-dimensional semimetals have been the center of intensified investigations since the realization of graphene. In particular, the design of Dirac and Weyl semimetals has been scrutinized. Typically, Dirac metals emerge from crystal-field environments captured by density functional theory (DFT). Here, we show by angle-resolved photoemission spectroscopy (ARPES) how a rotational symmetry broken massive Dirac semimetal is realized in Ca3Ru2O7. This Dirac semimetal emerges in a two-stage electronic transition driven by electron correlations beyond the DFT paradigm. The Dirac point and band velocity is consistent with constraints set by quantum oscillation, thermodynamic and transport experiments. Our results hence advance the understanding of the peculiar fermiology found in Ca3Ru2O7. As the two-stage Fermi surface transition preserves the Brillouin zone, translational broken symmetries are excluded. The mechanism and symmetry breaking elements underlying the electronic reconstruction thus remain to be identified. This situation resembles URu2Si2 that also undergoes an electronic transition without an identifiable symmetry breaking. As such our study positions Ca3Ru2O7 as another prominent hidden order parameter problem.
https://mathoverflow.net/questions/81778/homogenuity-of-ellp | homogenuity of $\ell^p$
I want to know the following:
If $x_1, x_2, \cdots, x_n, y_1, y_2, \cdots, y_n \in \ell_p$ satisfy $\|x_i-x_j\|_p=\|y_i-y_j\|_p$ for all $i,j$, then does there exist an isometry $F$ of $\ell_p$ which sends each $x_i$ to $y_i$?
Also, do you know the precise description of the isometry group of $\ell_p$?
2) By a corollary of the Banach-Lamperti theorem, every linear isometry $T$ of $\ell^p=\ell^p(\mathbb{N})$ (with $1\leq p<\infty,p\neq 2$) is of the form $T:(a_n)\mapsto (\epsilon(n)a_{\sigma(n)})$, where $\sigma$ is a permutation of $\mathbb{N}$, and $\epsilon(n)=\pm 1$ for every $n$.
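A quick numerical sanity check of this signed-permutation form, restricted to finitely many coordinates (the choice $p=3$ is arbitrary):

    import numpy as np

    # T(a)_n = eps(n) * a_{sigma(n)} preserves the l^p norm
    rng = np.random.default_rng(0)
    k, p = 8, 3
    sigma = rng.permutation(k)
    eps = rng.choice([-1.0, 1.0], size=k)
    a = rng.normal(size=k)
    Ta = eps * a[sigma]
    assert np.isclose(np.linalg.norm(a, p), np.linalg.norm(Ta, p))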
• Thus your answer also settles the first question (with the same $p$): the condition for the existence of an isometry $F:x^i\mapsto y^i$ becomes a purely combinatorial compatibility condition on suitable subsets of $\mathbb{N}$ for the existence of $\epsilon$ and $\sigma$. Nov 24, 2011 at 10:17
The answer to the first question is NO. Even among norms on $\mathbb R^2$, the only ones that have this amazing property (any isometry defined on a finite set extends to an isometry defined on the whole space) are those norms that make $\mathbb R^2$ isometrically into Euclidean space. |
https://forum.arduino.cc/t/please-explain-wahat-does-this-mean/524653 | # Please explain wahat does this mean
Please explain what the following message means. I have all the MPU 9250 libraries installed in the Arduino folder, and even the I2Cdev master library, but I still get this message.
“Invalid library found in C:\Program Files\Arduino\libraries\MPU9250: C:\Program Files\Arduino\libraries\MPU9250
Invalid library found in C:\Documents and Settings\INDIAN\My Documents\Arduino\libraries\i2cdevlib-master: C:\Documents and Settings\INDIAN\My Documents\Arduino\libraries\i2cdevlib-master
Invalid library found in C:\Documents and Settings\INDIAN\My Documents\Arduino\libraries\MPU9150_DMP: C:\Documents and Settings\INDIAN\My Documents\Arduino\libraries\MPU9150_DMP”
Even when I try to compile and upload code that includes the I2Cdev library, it gives an error message.
These are warnings telling you that those folders don't contain valid libraries. This is only some helpful information the Arduino IDE provides. It is not an error. If you have encountered an error then you didn't post it.
All subfolders of C:\Documents and Settings\INDIAN\My Documents\Arduino\libraries must contain a valid library otherwise you get these warning messages. The libraries must be directly under the folder, not in a subfolder. You should not save sketches to C:\Documents and Settings\INDIAN\My Documents\Arduino\libraries, only libraries. For example, i2cdevlib is a collection of many libraries for different platforms. You can't just dump that entire repository into your C:\Documents and Settings\INDIAN\My Documents\Arduino\libraries folder. You need to move each of the subfolders of C:\Documents and Settings\INDIAN\My Documents\Arduino\libraries\i2cdevlib-master\arduino that you want to use up to C:\Documents and Settings\INDIAN\My Documents\Arduino\libraries. As for the other two invalid libraries, I'd need you to provide a directory listing of the contents of those folders for me to say what the problem is. |
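For example, a valid layout looks like this (the library names here are only illustrative; use whichever subfolders of the i2cdevlib arduino folder you actually need):

    C:\Documents and Settings\INDIAN\My Documents\Arduino\libraries\
        I2Cdev\
            I2Cdev.h
            I2Cdev.cpp
        MPU9150\
            MPU9150.h
            MPU9150.cpp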
https://dsp.stackexchange.com/questions/67588/non-coherent-detection-bfsk-demodulation | # Non coherent detection BFSK demodulation?
I am simulating the BER performance of BFSK under AWGN and Rayleigh fading. BFSK has two symbols, one entirely on the real axis and the other on the imaginary axis, representing, let's say, zero and one; that is how the constellation of BFSK looks.
s=data + j*(~data); %Baseband BFSK modulation
Now let's say we added AWGN (N) and Rayleigh fading (h) as well.
Rx= h*x + N;
Now, when I try to detect it on the receiving side, I have to divide the entire Rx by h; well, that makes sense.
My BER curves match coherent detection for FSK.
It exactly matched the theory, Eb/N0 vs. BER for BFSK over Rayleigh Channel.
How to do the same for Non-coherent detection?
If I do not divide Rx by h (Rx/h), I get very, very bad results that don't match anything. Theory tells us that in non-coherent detection, there is no prior knowledge about the channel impulse response at the receiver.
In coherent systems, the receiver needs phase information of the transmitter (the carrier phase) to recover the transmitted data at the receiver side. I haven't used any such thing, but my simulated BER results still matched the theory for coherent FSK.
Can someone here help me with this?
• By not dividing by h, will I get better results for non-coherent FSK?
• Whether BPSK or BFSK, every time we need to divide h*x + N by h to get results that match theory.
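For concreteness, here is a minimal Octave/MATLAB sketch of a genuinely non-coherent BFSK detector that never uses h; the multi-sample tone design and the Eb/N0 handling are my own assumptions, not taken from the original simulation:

    Ns    = 8;                            % samples per symbol
    n     = 0:Ns-1;
    tone0 = exp(1j*2*pi*(1/Ns)*n);        % two orthogonal tones
    tone1 = exp(1j*2*pi*(2/Ns)*n);
    EbN0  = 10;                           % dB
    sigma = sqrt(Ns/(2*10^(EbN0/10)));    % noise std per real dimension
    nBits = 1e5;
    bits  = randi([0 1], 1, nBits);
    bhat  = zeros(1, nBits);
    for k = 1:nBits
        s  = bits(k)*tone1 + (1-bits(k))*tone0;
        h  = (randn + 1j*randn)/sqrt(2);             % unknown complex fading gain
        w  = sigma*(randn(1,Ns) + 1j*randn(1,Ns));
        rx = h*s + w;
        % envelope comparison: neither |h| nor angle(h) is used anywhere
        bhat(k) = abs(rx*tone1') > abs(rx*tone0');
    end
    ber = mean(bhat ~= bits)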
• I'm confused. You win very little, if anything, by having phase information on the channel in the classical (B)FSK case – your receiver really doesn't care about the phase at all. You cannot compare that to BPSK, where the phase is the actually information-carrying entity. – Marcus Müller May 17 at 11:07
• Your s, however, doesn't look like FSK at all – it's just a BPSK of amplitude $\sqrt2$, rotated by 45° (assuming data just means "a bit of data mapped to $\pm 1$). – Marcus Müller May 17 at 11:10
• For BPSK, the symbols have a 180-degree phase shift (either both on the real axis or both on the imaginary axis), so this definitely is not BPSK at a 45-degree rotation if one symbol is at 0 while the other is at 90. "You win very little, if anything, by having phase information" might be true, but all I am trying to do is achieve BER curves for non-coherent FSK (BFSK over Rayleigh and AWGN), so I divided s, or Rx, by h. If I don't, the curves tell a very sad story that matches absolutely nothing – good_omen92 May 17 at 11:17
• I achieved it for coherent but not for non-cohernt so how to do that? Should I not divide it by h ? – good_omen92 May 17 at 11:20
• seriously, your s is not FSK, it's a PSK or QAM. For data=0, you get s=1j, and for data=1 you get s=1, and thus you're right, it's not just rotated BPSK, I typed too hastily; it has a simple decision boundary, and it's the diagonal of the upper right quadrant of the complex plane, and that is just the 45° shifted decision boundary of BPSK. – Marcus Müller May 17 at 13:30
http://www.gradesaver.com/textbooks/science/chemistry/chemistry-the-central-science-13th-edition/chapter-1-introduction-matter-and-measurement-exercises-page-35/1-32c | ## Chemistry: The Central Science (13th Edition)
This is a problem utilizing the density formula as a means to obtain the mass of an object. We are given the density of lead, 11.34 g/$cm^{3}$, and the radius of the sphere, 5 cm, and told to find the mass. First we have to find the volume, using the volume-of-a-sphere formula, $v=\frac{4}{3}\pi r^{3}$. When we use this formula, we find the volume is 523.6 $cm^{3}$. Now we use the density formula, density = mass/volume. When we substitute in our known values, we find the mass to be about 5938 grams.
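A quick check of the arithmetic (the variable names are just for illustration):

    import math

    rho = 11.34                   # g/cm^3, density of lead
    r = 5.0                       # cm, radius
    V = (4 / 3) * math.pi * r**3  # ~ 523.6 cm^3
    m = rho * V                   # ~ 5938 g
    print(V, m)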
https://blogs.princeton.edu/blogit/page/4/ | # Nevanlinna Prize
Nominations of people born on or after January 1, 1974
for outstanding contributions in Mathematical Aspects of Information Sciences including:
1. All mathematical aspects of computer science, including complexity theory, logic of programming languages, analysis of algorithms, cryptography, computer vision, pattern recognition, information processing and modelling of intelligence.
2. Scientific computing and numerical analysis. Computational aspects of optimization and control theory. Computer algebra.
Nomination Procedure: http://www.mathunion.org/general/prizes/nevanlinna/details/
# The Epijournal: a new publication model
http://www.nature.com/news/mathematicians-aim-to-take-publishers-out-of-publishing-1.12243
# Information and Inference (new journal)
The first issue of Information and Inference has just appeared:
http://imaiai.oxfordjournals.org/content/current
It includes the following editorial:
In recent years, a great deal of energy and talent have been devoted to new research problems arising from our era of abundant and varied data/information. These efforts have combined advanced methods drawn from across the spectrum of established academic disciplines: discrete and applied mathematics, computer science, theoretical statistics, physics, engineering, biology and even finance. This new journal is designed to serve as a meeting place for ideas connecting the theory and application of information and inference from across these disciplines.
While the frontiers of research involving information and inference are dynamic, we are currently planning to publish in information theory, statistical inference, network analysis, numerical analysis, learning theory, applied and computational harmonic analysis, probability, combinatorics, signal and image processing, and high-dimensional geometry; we also encourage papers not fitting the above description, but which expose novel problems, innovative data types, surprising connections between disciplines and alternative approaches to inference. This first issue exemplifies this topical diversity of the subject matter, linked by the use of sophisticated mathematical modelling, techniques of analysis, and focus on timely applications.
To enhance the impact of each manuscript, authors are encouraged to provide software to illustrate their algorithm and where possible replicate the experiments presented in their manuscripts. Manuscripts with accompanying software are marked as "reproducible" and have the software linked on the journal website under supplementary material. It is with pleasure that we welcome the scientific community to this new publication venue.
Robert Calderbank David L. Donoho John Shawe-Taylor Jared Tanner
# Comparing Variability of Random Variables
Consider exchangeable random variables ${X_1, \ldots, X_n, \ldots}$. A couple of facts seem quite intuitive:
Statement 1. The “variability” of sample mean ${S_m = \frac{1}{m} \sum_{i=1}^{m} X_i}$ decreases with ${m}$.
Statement 2. Let the average of functions ${f_1, f_2, \ldots, f_n}$ be defined as ${\overline{f} (x) := \frac{1}{n} \sum_{i=1}^{n} f_i(x)}$. Then ${\max_{1\leq i \leq n} \overline{f}(X_i)}$ is less “variable” than ${\max_{1\leq i \leq n} f_i (X_i)}$.
To make these statements precise, one faces the fundamental question of comparing two random variables ${W}$ and ${Z}$ (or more precisely comparing two distributions). One common way we think of ordering random variables is the notion of stochastic dominance:
$\displaystyle W \leq_{st} Z \Leftrightarrow F_W(t) \geq F_Z(t) \ \ \ \mbox{ for all real } t.$
However, this notion really is only a suitable notion when one is concerned with the actual size of the random quantities of interest, while, in our scenario of interest, a more natural order would be that which compares the variability between two random variables (or more precisely, again, the two distributions). It turns out that a very useful notion, used in a variety of fields, is due to Ross (1983): Random variable ${W}$ is said to be stochastically less variable than random variable ${Z}$ (denoted by ${\leq_v}$) when every risk-averse decision maker will choose ${W}$ over ${Z}$ (given they have the same mean). More precisely, for random variables ${W}$ and ${Z}$ with finite means
$\displaystyle W \leq_{v} Z \Leftrightarrow \mathbb{E}[f(W)] \leq \mathbb{E}[f(Z)] \ \ \mbox{ for every increasing and convex function } f \in \mathcal{F}$
where ${\mathcal{F}}$ is the set of functions for which the above expectations exist.
One interesting, but perhaps not entirely obvious, fact is that this notion of ordering ${W\leq_v Z}$ is equivalent to saying that there is a sequence of mean-preserving spreads that in the limit transforms the distribution of ${W}$ into the distribution of another random variable ${W'}$ with finite mean such that ${W'\leq_{st} Z}$! Also, using results by Hardy, Littlewood and Polya (1929), the stochastic variability order introduced above can be shown to be equivalent to Lorenz (1905) ordering used in economics to measure income equality.
Now with this, we are ready to formalize our previous statements. The first statement is actually due to Arnold and Villasenor (1986):
$\displaystyle \frac{1}{m} \sum_{i=1}^{m} X_i \leq_v \frac{1}{m-1} \sum_{i=1}^{m-1} X_i \ \ \ \ \ \ \ \ \ \ \ \ \mbox{for all integers }\ \ m \geq 2.$
Note that when you apply this fact to a sequence of iid random variables with finite mean ${\mu}$, it strengthens the strong law of large numbers in that it ensures that the almost sure convergence of the sample mean to the mean value ${\mu}$ occurs with monotonically decreasing variability (as the sample size grows).
The second statement comes up in proving certain optimality result in sharing parallel servers in fork-join queueing systems (J. 2008) and has a similar flavor:
$\displaystyle \max_{1\leq i \leq n} \overline{f}(X_i) \leq_v \max_{1\leq i \leq n} f_i (X_i).$
The cleanest way to prove both statements, to the best of my knowledge, is based on the following theorem first proved by Blackwell in 1953 (later strengthened to random elements in separable Banach spaces by Strassen in 1965, hence referred to by some as Strassen’s theorem):
Theorem 1 Let ${W}$ and ${Z}$ be two random variables with finite means. A necessary and sufficient condition for ${W \leq_v Z}$ is that there are two random variables ${\hat{W}}$ and ${\hat{Z}}$ with the same marginals as ${W}$ and ${Z}$, respectively, such that ${\mathbb{E}[\hat{Z} |\hat{W}] \geq \hat{W}}$ almost surely.
For instance, to prove the first statement we consider ${\hat{W} = W = \frac{1}{n} \sum_{i=1}^n X_i}$ and ${Z = \frac{1}{n-1} \sum_{i=1}^{n-1} X_i}$. All that is necessary now is to note that ${\hat{Z} : = \frac{1}{n-1} \sum_{i\in I, i \neq J} X_i}$, where ${J}$ is an independent uniform rv on the set ${I := \{1,2, \ldots, n\}}$, has the same distribution as the random variable ${Z}$. Furthermore,
$\displaystyle \mathbb{E} [ \hat{Z} | W ] = \mathbb{E} [ \frac{1}{n} \sum_{J=1}^{n} (\frac{1}{n-1} \sum_{i\in I, i \neq J} X_i ) | W ] = \mathbb{E} [ \frac{1}{n} \sum_{j=1}^{n} X_j | W ] = W.$
Similarly to prove the second statement, one can construct ${\hat{Z}}$ by selecting a random permutation of functions ${f_1, \ldots, f_n}$.
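A quick Monte Carlo illustration of Statement 1, using iid Exp(1) variables and one arbitrary increasing convex test function:

    import numpy as np

    # E[f(S_m)] should be non-increasing in m for increasing convex f
    rng = np.random.default_rng(1)
    f = lambda s: np.maximum(s - 1.0, 0.0)   # increasing convex test function
    X = rng.exponential(size=(200_000, 4))
    for m in range(1, 5):
        Sm = X[:, :m].mean(axis=1)
        print(m, round(f(Sm).mean(), 4))     # decreases with m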
# Coin flip experiment
Experiment: A fair coin is flipped until a tail appears.
Find the minimal average number of bits required to encode the outcome of the experiment. |
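One way to see the answer, under the usual model where the outcome is the number ${K}$ of flips and ${P(K=k)=2^{-k}}$: the entropy is

$\displaystyle H(K)=\sum_{k=1}^{\infty}2^{-k}\log_{2}2^{k}=\sum_{k=1}^{\infty}k\,2^{-k}=2 \mbox{ bits},$

and because the probabilities are dyadic this average is achieved exactly, e.g. by the code $0, 10, 110, \ldots$ of length ${k}$ for outcome ${k}$. So the minimal average number of bits is 2.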
https://techutils.in/blog/2021/08/05/stackbounty-error-error-message-magento2-4-2-p1-error-when-copying-a-product-or-moving-a-category-the-value-specified-in-the-url/ | # #StackBounty: #error #error-message #magento2.4.2-p1 Error when copying a product or moving a category: The value specified in the URL …
### Bounty: 100
``````Migrated Magento 1.9.4 --> Magento 2.4.2-p1
``````
We migrated all data except orders from our production Magento 1.9.4 site to Magento 2.4.2-p1. Everything seems to work in the new Magento except when we copy an existing product or when I tried to move a category into another category, I get the following Error:
``````The value specified in the URL Key field would generate a URL that already exists.
To resolve this conflict, you can either change the value of the URL Key field (located in the Search Engine Optimization section) to a unique value, or change the Request Path fields in all locations listed below:
and then 9 products listed here
``````
I googled this but all I find are never-ending discussions and super-complicated steps to try. Is there a simple-to-implement solution I could use to fix this?
Here are my entries from exception.log:
``````[2021-08-01 15:49:02] main.CRITICAL: URL key for specified store already exists. {"exception":"[object] (Magento\UrlRewrite\Model\Exception\UrlAlreadyExistsException(code: 0): URL key for specified store already exists. at /home/myWEBSITE/public_html/vendor/magento/module-url-rewrite/Model/Storage/DbStorage.php:309, Magento\Framework\Exception\AlreadyExistsException(code: 0): URL key for specified store already exists. at /home/myWEBSITE/public_html/vendor/magento/module-url-rewrite/Model/Storage/DbStorage.php:342, Magento\Framework\DB\Adapter\DuplicateException(code: 1062): SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry 'fabbri-amarena-cherries-panettone-1000-gram-www-MYstore2-com.' for key 'MGET_URL_REWRITE_REQUEST_PATH_STORE_ID', query was: INSERT INTO `mget_url_rewrite` (`redirect_type`,`is_autogenerated`,`metadata`,`description`,`entity_type`,`entity_id`,`request_path`,`target_path`,`store_id`) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?) at /home/myWEBSITE/public_html/vendor/magento/framework/DB/Adapter/Pdo/Mysql.php:599, Zend_Db_Statement_Exception(code: 23000): SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry 'fabbri-amarena-cherries-panettone-1000-gram-www-MYstore2-com.' for key 'MGET_URL_REWRITE_REQUEST_PATH_STORE_ID', query was: INSERT INTO `mget_url_rewrite` (`redirect_type`,`is_autogenerated`,`metadata`,`description`,`entity_type`,`entity_id`,`request_path`,`target_path`,`store_id`) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?) at /home/myWEBSITE/public_html/vendor/magento/framework/DB/Statement/Pdo/Mysql.php:110, PDOException(code: 23000): SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry 'fabbri-amarena-cherries-panettone-1000-gram-www-MYstore2-com.' for key 'MGET_URL_REWRITE_REQUEST_PATH_STORE_ID' at /home/myWEBSITE/public_html/vendor/magento/framework/DB/Statement/Pdo/Mysql.php:91)"} []
[2021-08-01 15:49:03] main.WARNING: Cannot gather stats! Warning!stat(): stat failed for /home/myWEBSITE/public_html/pub/media/catalog/product/f/a/fabbri_amarena_cherries_panettone_1000_gram_www.MYstore2.com_2.jpg {"exception":"[object] (Magento\Framework\Exception\FileSystemException(code: 0): Cannot gather stats! Warning!stat(): stat failed for /home/myWEBSITE/public_html/pub/media/catalog/product/f/a/fabbri_amarena_cherries_panettone_1000_gram_www.MYstore2.com_2.jpg at /home/myWEBSITE/public_html/vendor/magento/framework/Filesystem/Driver/File.php:95)"} []
[2021-08-01 15:49:19] main.WARNING: Cannot gather stats! Warning!stat(): stat failed for /home/myWEBSITE/public_html/pub/media/catalog/product/f/a/fabbri_amarena_cherries_panettone_1000_gram_www.MYstore2.com_2.jpg {"exception":"[object] (Magento\Framework\Exception\FileSystemException(code: 0): Cannot gather stats! Warning!stat(): stat failed for /home/myWEBSITE/public_html/pub/media/catalog/product/f/a/fabbri_amarena_cherries_panettone_1000_gram_www.MYstore2.com_2.jpg at /home/myWEBSITE/public_html/vendor/magento/framework/Filesystem/Driver/File.php:95)"} []
[2021-08-01 15:50:43] main.CRITICAL: URL key for specified store already exists. {"exception":"[object] (Magento\UrlRewrite\Model\Exception\UrlAlreadyExistsException(code: 0): URL key for specified store already exists. at /home/myWEBSITE/public_html/vendor/magento/module-url-rewrite/Model/Storage/DbStorage.php:309, Magento\Framework\Exception\AlreadyExistsException(code: 0): URL key for specified store already exists. at /home/myWEBSITE/public_html/vendor/magento/module-url-rewrite/Model/Storage/DbStorage.php:342, Magento\Framework\DB\Adapter\DuplicateException(code: 1062): SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry 'fabbri-amarena-cherries-panettone-1000-gram-www-MYstore2-com.' for key 'MGET_URL_REWRITE_REQUEST_PATH_STORE_ID', query was: INSERT INTO `mget_url_rewrite` (`redirect_type`,`is_autogenerated`,`metadata`,`description`,`entity_type`,`entity_id`,`request_path`,`target_path`,`store_id`) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?) at /home/myWEBSITE/public_html/vendor/magento/framework/DB/Adapter/Pdo/Mysql.php:599, Zend_Db_Statement_Exception(code: 23000): SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry 'fabbri-amarena-cherries-panettone-1000-gram-www-MYstore2-com.' for key 'MGET_URL_REWRITE_REQUEST_PATH_STORE_ID', query was: INSERT INTO `mget_url_rewrite` (`redirect_type`,`is_autogenerated`,`metadata`,`description`,`entity_type`,`entity_id`,`request_path`,`target_path`,`store_id`) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?), (?, ?, ?, ?, ?, ?, ?, ?, ?) at /home/myWEBSITE/public_html/vendor/magento/framework/DB/Statement/Pdo/Mysql.php:110, PDOException(code: 23000): SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry 'fabbri-amarena-cherries-panettone-1000-gram-www-MYstore2-com.' for key 'MGET_URL_REWRITE_REQUEST_PATH_STORE_ID' at /home/myWEBSITE/public_html/vendor/magento/framework/DB/Statement/Pdo/Mysql.php:91)"} []
[2021-08-01 15:50:44] main.WARNING: Cannot gather stats! Warning!stat(): stat failed for /home/myWEBSITE/public_html/pub/media/catalog/product/f/a/fabbri_amarena_cherries_panettone_1000_gram_www.MYstore2.com_2.jpg {"exception":"[object] (Magento\Framework\Exception\FileSystemException(code: 0): Cannot gather stats! Warning!stat(): stat failed for /home/myWEBSITE/public_html/pub/media/catalog/product/f/a/fabbri_amarena_cherries_panettone_1000_gram_www.MYstore2.com_2.jpg at /home/myWEBSITE/public_html/vendor/magento/framework/Filesystem/Driver/File.php:95)"} []
[2021-08-01 18:08:19] main.WARNING: Cannot gather stats! Warning!stat(): stat failed for /home/myWEBSITE/public_html/pub/media/catalog/product/f/a/fabbri_amarena_cherries_panettone_1000_gram_www.MYstore2.com_2.jpg {"exception":"[object] (Magento\Framework\Exception\FileSystemException(code: 0): Cannot gather stats! Warning!stat(): stat failed for /home/myWEBSITE/public_html/pub/media/catalog/product/f/a/fabbri_amarena_cherries_panettone_1000_gram_www.MYstore2.com_2.jpg at /home/myWEBSITE/public_html/vendor/magento/framework/Filesystem/Driver/File.php:95)"} []
``````
I tried changing the URL Key for the first product and then moving the category again but it comes up with the same Error including a reference to the URL key of the first product (even-though I changed it already).
Edited on August 5th, 2021:
I’ve googled this and all the answers are quite complicated. I’m looking for a step-by-step solution as in do #1, then #2, then #3… and done!
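As a first diagnostic step (not a full fix), it may help to list the colliding rows; the table name below is taken from the log above, and the column list is an assumption based on the INSERT statement in the same log:

``````sql
-- list rows that collide on the request path seen in the exception log
SELECT entity_type, entity_id, request_path, target_path, store_id
FROM mget_url_rewrite
WHERE request_path LIKE 'fabbri-amarena-cherries-panettone%';
``````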
https://crypto.stackexchange.com/questions/99807/do-you-know-protocols-where-it-is-necessary-to-obtain-several-independent-poi | # Do you know protocols, where it is necessary to obtain several "independent" points on the same elliptic curve?
Consider an elliptic curve $$E$$ defined over a finite field $$\mathbb{F}_{\!q}$$ with a fixed non-zero $$\mathbb{F}_{\!q}$$-point $$P$$. For simplicity, let the order of the $$\mathbb{F}_{\!q}$$-point group $$E(\mathbb{F}_{\!q})$$ be prime and hence the group is generated by $$P$$. For the sake of security, in numerous protocols of elliptic cryptography (e.g., in a safe version of Dual_EC_DRBG) we need to generate yet another "independent" $$\mathbb{F}_{\!q}$$-point $$Q$$ on $$E$$.
Do you know protocols where it is necessary to obtain more "independent" $$\mathbb{F}_{\!q}$$-points on the same curve? In other words, a party deals with "independent" $$\mathbb{F}_{\!q}$$-points $$Q_1$$, $$Q_2$$, $$\ldots$$, $$Q_n$$ in addition to $$P$$. By "independent" I mean points such that no one knows their discrete logarithms relative to each other.
I ask because for some $$E$$ and $$n$$ I know how to produce several $$Q_i$$ simultaneously faster than generating them separately. I would like to understand whether my approach is worthy of publication in a good scientific journal. Or maybe it even has something to do with real-world cryptography.
• Maybe some multi-party ECDH based key agreements? Apr 26 at 6:57
• @eckes, could you make your comment more precise? Apr 26 at 7:09
Do you know protocols where it is necessary to obtain several "independent" points on the same elliptic curve?
One obvious place where this occurs if you are implementing a Pedersen commitment of a vector of values; you commit to a vector $$(x_1, x_2, ..., x_n)$$ by publishing the value $$rH + x_1G_1 + x_2G_2 + ... + x_nG_n$$; for this to work, you obviously need $$n+1$$ independent points $$H, G_1, G_2, ..., G_n$$
While this is a tad obscure, this does come up; a quick Google finds this paper, and so there is some applicability; certainly more than some papers I have seen...
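To make the structure concrete, here is a toy Python sketch of a vector Pedersen commitment in the order-$$q$$ subgroup of $$\mathbb{Z}_p^*$$. Illustration only: real schemes use an elliptic-curve group, and the generators must be produced so that nobody knows discrete logs between them (not freshly sampled by the committer, as here):

    import secrets

    q = 1019
    p = 2 * q + 1                    # p = 2039; both p and q are prime

    def rand_generator():
        # squaring a random element lands in the order-q subgroup of Z_p^*
        while True:
            g = pow(secrets.randbelow(p - 2) + 2, 2, p)
            if g != 1:
                return g

    H, G1, G2 = rand_generator(), rand_generator(), rand_generator()

    def commit(r, xs, Gs):
        # C = H^r * G1^{x_1} * ... * Gn^{x_n}  (multiplicative notation)
        c = pow(H, r, p)
        for x, G in zip(xs, Gs):
            c = (c * pow(G, x, p)) % p
        return c

    r = secrets.randbelow(q)
    C = commit(r, [3, 7], [G1, G2])  # commits to the vector (3, 7)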
• thank you! I don't know much about commitment schemes. Why is it not sufficient to take $n = 1$? Does $n > 1$ occur in real-world crypto? Apr 25 at 16:17
• @DimitriKoshelev: $n=1$ might not be sufficient if you're trying to take advantage of the homomorphic properties of Pedersen commitments; e.g. given a commitment of $(x_1, x_2)$ and $(y_1, y_2)$ generate a ZKP that $2(x_1, x_2) - 5(y_1, y_2) = (3, 7)$ Apr 25 at 16:47
• I forgot to say that my generation method gives $\approx q$ tuples $(Q_i)_{i=1}^n$ among the $\approx q^n$ elements of $E^n(\mathbb{F}_{\!q})$. Isn't this important for security? Apr 25 at 17:20
• @DimitriKoshelev: what's critical (at least, for vector Pedersen commitments) is that no one knows a nontrivial solution for $x_1G_1 + x_2G_2 + ... + x_nG_n = 0$ (where the trivial solution is $x_1 \equiv x_2 \equiv ... \equiv x_n \equiv 0$). Does your idea provide that? Apr 25 at 17:36
• It provides that, but the distribution is far from uniform on $E^n(\mathbb{F}_{\!q})$ (it covers only $\approx q$ tuples). I have the impression that this is not important, because $q$ is big in cryptography and we can change often independent points. What do you think ? Apr 25 at 17:47
There is a "hash-to-point" function used in several schemes, where it is necessary to generate an EC point where the discrete log w.r.t. any other EC point is unknown. In particular:
1. A linkable ring signature. A 'key image' needs to be generated, where the correctness of the 'key image' declared with a signature is verifiable, and where if the same signer (using the same private key) were to create a ring signature again (even with different other ring member public key participants), it would be clear that they have used the same private key to sign again. See here for details.
2. An oblivious pseudo-random function uses hash-to-point to encode the PRF input values as EC points; see here.
3. Oblivious transfer uses hash-to-point, and EC El Gamal can use hash-to-point if you only need the encoding of messages into points to go in one direction. See an example of both here.
4. This non-membership proof uses hash-to-point for a variation on Pedersen commitments where the commitment needs to be blinded, but does not need to be additively homomorphic.
• thank you, but I cannot compute a "hash-to-point" function in several arguments faster than separately. My problem is different. I can generate several independent points (depending in only one argument) faster than separately. Apr 25 at 17:04
• @DimitriKoshelev can you clarify what you mean by "depending in only one argument"? And does your technique work when there is a co-factor, and you need to ensure that the point is within the same large subgroup as the large subgroup generated by a particular well-known base point? You might be interested that there was some optimization work done for the Monero cryptocurrency to ensure that an arbitrary byte sequence can be quickly mapped to an Ed25519 point: github.com/monero-project/research-lab/blob/master/whitepaper/… Apr 25 at 17:23
• This is the C and Python code for mapping quickly to valid EC points, based on the paper I linked in the comment above: github.com/monero-project/monero/blob/… github.com/monero-project/mininero/blob/… Apr 25 at 17:29
• My technique returns a tuple $(Q_i)_{i=1}^n$ depending on a given element of the basic field $\mathbb{F}_{\!q}$. It works when there is a co-factor, because we can always clear it. Apr 25 at 17:39
• @DimitriKoshelev In the examples I've given above, let's say you are doing some kind of private set intersection and the numbers being sent to the OPRF are small integers lower than, say, 100. Depending on how much faster it might be to create a tuple of 100 elements using your technique, perhaps it would be preferable to use your technique than to individually do a hash-to-point operation on each integer input. Apr 25 at 17:56
Your question is essentially: Is it useful to be able to sample a tuple $$(Q_1, Q_2, \dots, Q_n) \in E(F)^n$$ such that no relation is known among the points, but the tuple is not sampled from the uniform distribution?
From a practical point of view, there are two issues:
• Often, these points are sampled during the generation of system parameters, which does not happen very often and is not time critical.
• Many schemes seem secure even if the points have not been sampled from the uniform distribution.
That is, practically it is often not very useful, but also often not insecure, seemingly at least.
The main objection would be that the security proofs of these schemes sometimes rely on being able to sample the tuple $$(Q_1, \dots, Q_n)$$ with some trapdoor embedded, and this is often hard to do if you need a non-uniform distribution on the tuple. This would then ruin the security proof. (Example: Suppose I want to be able to equivocate openings of Pedersen multi-commitments.)
Some people may not care about that, but I think most cryptographers would be very reluctant to accept this without any clear benefit to be had.
In other words, I would expect the algorithm you have to be mostly not useful and sometimes unusable.
That said, the algorithm you have come up with may be interesting to some people for some reason, regardless of these obstacles. Or it may have other interesting properties. So it may be worthwhile publishing anyway.
• thank you. You write "Often, these points are sampled during the generation of system parameters, which does not happen very often and is not time critical." However, if we don't change the points often, then there is a risk that an attacker can find a dependency between them, especially if $n$ is big. Am I right? Apr 26 at 14:35
• I didn't understand your paragraph starting from "The main objection ...". Could you clarify ? Apr 26 at 14:44
• These points might appear in system parameters, public keys and standards. Long-term objects, in other words. What in particular is it you do not understand?
– K.G.
Apr 26 at 18:20
• in fact, I can refine the method to make it uniform. Even then, it works much more efficiently on average than successive calls of a constant-time map to a curve. May 23 at 14:10
https://dougo.info/investment-growth-calculator-passive-income-books.html | The thing is, I’m not talking about buying brick-and-mortar buildings. I tried that many years ago with my father-in-law, and with devastating results. We tried to buy a duplex once, and the deal fell apart after we realized we weren’t really prepared for the purchase. I secretly wanted to become a landlord, but at the same time, I knew it wasn’t for me.
Education is one sector which is totally immune from recession. I wrote an article on the education sector, Education – Problem or Solution. There are many opportunities in the education sector to earn a second income. You can work part time or during weekends. As in foreign countries, there is demand in India for online tutors. You can earn handsomely as an online tutor.
"The whole idea of Multiple Streams of Income will be a powerfulparadigm shift for most people. Bob Allen gives practical andbeautifully illustrated knowledge on how to do it. Masteringfinancial principles is an important habit in life because it givesus the freedom to focus on what matters most. A valuable read."—Dr. Stephen R. Covey, author of The 7 Habits of HighlyEffective People
The citizens of the Indus Valley Civilisation, a permanent settlement that flourished between 2800 BC and 1800 BC, practised agriculture, domesticated animals, used uniform weights and measures, made tools and weapons, and traded with other cities. Evidence of well-planned streets, a drainage system and water supply reveals their knowledge of urban planning, which included the first-known urban sanitation systems and the existence of a form of municipal government.[58]
In the early 18th century, the Mughal Empire declined, as it lost western, central and parts of south and north India to the Maratha Empire, which integrated and continued to administer those regions.[85] The decline of the Mughal Empire led to decreased agricultural productivity, which in turn negatively affected the textile industry.[86] The subcontinent's dominant economic power in the post-Mughal era was the Bengal Subah in the east, which continued to maintain thriving textile industries and relatively high real wages.[87] However, the former was devastated by the Maratha invasions of Bengal[88][89] and then British colonization in the mid-18th century.[87] After the loss at the Third Battle of Panipat, the Maratha Empire disintegrated into several confederate states, and the resulting political instability and armed conflict severely affected economic life in several parts of the country – although this was mitigated by localised prosperity in the new provincial kingdoms.[85] By the late eighteenth century, the British East India Company had entered the Indian political theatre and established its dominance over other European powers. This marked a determinative shift in India's trade, and a less-powerful impact on the rest of the economy.[90]
If you can max out your 401k or max out your IRA and then save an additional 20%+ of your after-tax, after-retirement contribution, good things really start to happen. If one is looking for earlier financial independence, such as retiring in their 40s or early 50s, it may be a good idea to skew towards more after-tax savings and investments given one has to wait until 59.5 to withdraw from their 401k or IRA penalty-free.
A few people who started their own YouTube channel when the video-sharing site was in its nascent stage are now millionaires. Now that YouTube has become immensely popular with hordes of people running their own channels, making a million dollars is considerably more difficult, but earning a respectable sum of money is still possible. As always, you'll need to find a niche that isn't yet saturated and focus on making engaging videos around it. Once you start raking up views and subscriptions, the money will start flowing in with minimum effort on your part.
This equation implies two things. First, buying one more unit of good x implies buying $P_x/P_y$ fewer units of good y. So, $P_x/P_y$ is the relative price of a unit of x in terms of the number of units of y given up. Second, if the price of x falls for a fixed $Y$, then its relative price falls. The usual hypothesis is that the quantity demanded of x would increase at the lower price, the law of demand. The generalization to more than two goods consists of modelling y as a composite good.
I knew I didn't want to work 70 hours a week in finance forever. My body was breaking down, and I was constantly stressed. As a result, I started saving every other paycheck and 100% of my bonus since my first year out of college in 1999. By the time 2012 rolled around, I was earning enough passive income (about $78,000) to negotiate a severance and be free.

Blogging – I guess you could say I'm a professional personal finance blogger since I own two sites and I'm making decent money every month. The income started off slow but has been consistently increasing. It's not as much as I make with my day job, but my best blogging month was equal to about one paycheck at my old day job. While I had to learn how to set up and use WordPress myself, you can learn how to blog and make money online at StartABlog123.com.

The appeal of these passive income sources is that you can diversify across many small investments, rather than in a handful of large ones. When you invest directly in real estate, you have to commit a lot of capital to individual projects. When you invest in these crowdfunded investments, you can spread your money across many uncorrelated real estate ventures so individual investments don't cause significant issues.

Passive income differs from active income, which is defined as any earned income, including all the taxable income and wages the earner gets from working. Linear active income requires one to stay constantly active to maintain the stream of income, and once an individual chooses to stop working the income will also stop. Examples of active income include wages, self-employment income, material participation in an S corp, and partnership income.[4] Portfolio income is derived from investments and includes capital gains, interest, dividends, and royalties.[5]

Child labour in India is a complex problem that is rooted in poverty. Since the 1990s, the government has implemented a variety of programs to eliminate child labour. These have included setting up schools, launching free school lunch programs, creating special investigation cells, etc.[360][361] Author Sonalde Desai stated that recent studies on child labour in India have found some pockets of industries in which children are employed, but overall, relatively few Indian children are employed. Child labour below the age of 10 is now rare. In the 10–14 age group, the latest surveys find only 2% of children working for wage, while another 9% work within their home or rural farms assisting their parents in times of high work demand such as sowing and harvesting of crops.[362]

Pardon for being a bit of a newbie to true investing outside of a 401k. What about those of us who have 1) just been laid off, and unable to find work due to lack of a degree (apparently 17 years in the industry with 5 certifications is just simply not enough – which is okay; it gave me the kick in the arse to get back to school finally), 2) have three children to support (age 11 and under), and 3) oh yeah – cannot find work. What do you recommend when the only source of positive revenue has ceased to come in and you now have less time than ever – due to responsibilities (i.e. doing well in university = academic scholarships means investment in time, plus spending 20 min breaks with kiddos) – to create positive sources of income?
I realize this may be only imaginary, but at this point, I welcome your "what ifs" scenario on this one. You've truly done amazing work and I thank you for being so transparent.

It's been almost 10 years since I started Financial Samurai and I'm actually earning a good income stream online now. Financial Samurai has given me a purpose in early retirement. And, I'm having a ton of fun running this site as well! Here's a real snapshot of a personal finance blogger who makes $150,000+ a year from his site and another $180,000 from various consulting opportunities due to his site.

Passive income is the gap in my financial plans at the moment. I started investing nearly 2 years ago, but I'm so close to the beginning of that journey that I don't quite see it as making income yet. I've been better with employer pensions and they've grown a really good amount over the last 12 months, but I won't get my hands on them for a long time yet.

India has made progress increasing the primary education attendance rate and expanding literacy to approximately three-fourths of the population.[389] India's literacy rate had grown from 52.2% in 1991 to 74.04% in 2011. The right to education at elementary level has been made one of the fundamental rights under the eighty-sixth Amendment of 2002, and legislation has been enacted to further the objective of providing free education to all children.[390] However, the literacy rate of 74% is lower than the worldwide average and the country suffers from a high drop-out rate.[391] Literacy rates and educational opportunities vary by region, gender, urban and rural areas, and among different social groups.[392][393]

4. Calculate how much passive income you need. It's important to have a passive-income goal — otherwise, it's very easy to lose motivation. A good goal is to try to generate enough passive income to cover basic living expenses such as food, shelter, transportation, and clothing. If your annual expense number is $30,000, divide that figure by your expected rate of return to see how much capital you need to save. Unfortunately, you've got to then multiply the capital amount by 1.25 to 1.5 to account for taxes.
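For example, with $30,000 of annual expenses and an assumed 4% rate of return, you would need $30,000 / 0.04 = $750,000 of capital; the 1.25 to 1.5 tax multiplier then raises the target to roughly $937,500 to $1,125,000.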
### Don't mistake passive income for zero work

It's still work, it's just that your income is not directly tied to the hours worked. Anyone who owns rental properties knows that it's considered passive income but there is quite a bit of work involved. The work is front heavy, but if you are lucky, you can collect rental checks without incident for many months before having to do work.
From reportable Connecticut Lottery Winnings. Winnings from the Connecticut Lottery, including Powerball, are reportable if the winner was issued a federal Form W-2G by the Connecticut Lottery Corporation. In general, the Connecticut Lottery Corporation is required to issue a federal Form W-2G to a winner if the Connecticut Lottery winnings, including Powerball, are $600 or more and at least 300 times the amount of the wager. See Informational Publication 2011(38), Connecticut Income Tax Treatment of State Lottery Winnings Received by Residents and Nonresidents of Connecticut
Thirst for income is likely to continue with interest rates expected to stay low, keeping government bond yields low for longer and their valuations unattractive. Looking past bonds, the prices of high-dividend shares are historically high, which limits the likelihood that their dividends will rise markedly from here. Striving too high for an income target tends to push your portfolio further out on the risk spectrum. |