https://www.physicsforums.com/threads/solving-limit.363075/ | # Solving limit
1. Dec 13, 2009
### kira137
1. The problem statement, all variables and given/known data
Find
$$\lim_{x\rightarrow 0}\frac{1-\cos(2x^2)}{1-\cos(3x^2)}$$
2. The attempt at a solution
Since the above gave me 0/0, I used l'Hôpital's rule.
Then I got
$$\frac{4x\sin(2x^2)}{6x\sin(3x^2)},$$
which gives me 0/0 again,
so I kept on using l'Hôpital's rule, but it seemed to go on forever.
Is there another way to solve this?
Thank you in advance.
2. Dec 13, 2009
### LCKurtz
Don't keep using LH rule. Remember you know (at least you should know):
$$\lim_{x\rightarrow 0}\frac {\sin x} x = 1$$
See if you can figure out how to use that next.
3. Dec 13, 2009
### phsopher
Using LH twice (after cancelling the common factor of x) doesn't give 0/0. |
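For completeness, a sketch of how LCKurtz's hint finishes the problem: using $1-\cos u = 2\sin^2(u/2)$,
$$\frac{1-\cos(2x^2)}{1-\cos(3x^2)} = \frac{2\sin^2(x^2)}{2\sin^2(3x^2/2)} = \left(\frac{\sin(x^2)}{x^2}\right)^{2}\left(\frac{3x^2/2}{\sin(3x^2/2)}\right)^{2}\frac{(x^2)^2}{(3x^2/2)^2} \rightarrow 1\cdot 1\cdot\frac{4}{9} = \frac{4}{9}\quad\text{as } x\rightarrow 0.$$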
https://wiki.math.ucr.edu/index.php?title=005_Sample_Final_A,_Question_10&oldid=778 | # 005 Sample Final A, Question 10
Question Write the partial fraction decomposition of the following,
${\displaystyle {\frac {x+2}{x^{3}-2x^{2}+x}}}$
Foundations
1) How many fractions will this decompose into? What are the denominators?
2) How do you solve for the numerators?
1) Since each of the factors is linear, and one has multiplicity 2, there will be three fractions in the decomposition. The linear term, x, will appear once in the denominator of the decomposition. The other two denominators will be x - 1 and ${\displaystyle (x-1)^{2}}$.
2) After writing the equality, ${\displaystyle {\frac {x+2}{x(x-1)^{2}}}={\frac {A}{x}}+{\frac {B}{x-1}}+{\frac {C}{(x-1)^{2}}}}$, clear the denominators, and use the cover up method to solve for A, B, and C. After you clear the denominators, the cover up method is to evaluate both sides at x = 1, 0, and any third value. Each evaluation will yield the value of one of the three unknowns.
Step 1:
First, we factor the denominator. We have ${\displaystyle x^{3}-2x^{2}+x=x(x^{2}-2x+1)=x(x-1)^{2}}$
Step 2:
Since we have a repeated factor in the denominator, we set ${\displaystyle {\frac {x+2}{x(x-1)^{2}}}={\frac {A}{x}}+{\frac {B}{x-1}}+{\frac {C}{(x-1)^{2}}}}$.
Step 3:
Multiplying both sides of the equation by the denominator ${\displaystyle x(x-1)^{2}}$, we get
${\displaystyle x+2=A(x-1)^{2}+B(x)(x-1)+Cx}$.
Step 4:
If we let ${\displaystyle x=0}$, we get ${\displaystyle 2=A}$. If we let ${\displaystyle x=1}$, we get ${\displaystyle 3=C}$.
Step 5:
To solve for ${\displaystyle B}$, we plug in ${\displaystyle A=2}$ and ${\displaystyle C=3}$ and simplify. We have
${\displaystyle x+2=2(x-1)^{2}+B(x)(x-1)+3x=2x^{2}-4x+2+Bx^{2}-Bx+3x~}$. So, ${\displaystyle x+2=(2+B)x^{2}+(-1-B)x+2}$. Since both sides are equal,
we must have ${\displaystyle 2+B=0}$ and ${\displaystyle -1-B=1}$. So, ${\displaystyle B=-2}$. Thus, the decomposition is ${\displaystyle {\frac {x+2}{x(x-1)^{2}}}={\frac {2}{x}}-{\frac {2}{x-1}}+{\frac {3}{(x-1)^{2}}}}$.
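As a quick check of the signs, substitute a convenient value such as x = 2: the left-hand side is ${\displaystyle {\frac {2+2}{2(2-1)^{2}}}=2}$ and the right-hand side is ${\displaystyle {\frac {2}{2}}-{\frac {2}{2-1}}+{\frac {3}{(2-1)^{2}}}=1-2+3=2}$, as required.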
Final Answer: ${\displaystyle {\frac {x+2}{x(x-1)^{2}}}={\frac {2}{x}}-{\frac {2}{x-1}}+{\frac {3}{(x-1)^{2}}}}$ |
https://mathzsolution.com/category/poisson-summation-formula/ | ## Proof of sum results
I was going through some of my notes when I found both these sums with their results $x^0+x^1+x^2+x^3+\dots=\frac{1}{1-x},\ |x|<1$ and $0+1+2x+3x^2+4x^3+\dots=\frac{1}{(1-x)^2}$. I tried but I was unable to prove or confirm that these results are actually correct, could anyone please help me confirm whether these work or not? Answer $\frac{1-x^{n+1}}{1-x}=1+x+x^2+\cdots+x^n$, now if $n\to\infty$ and $|x|<1$ we get … Read more
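A sketch of how the quoted argument concludes: for $|x|<1$ we have $x^{n+1}\to 0$, so $\lim_{n\to\infty}\frac{1-x^{n+1}}{1-x}=\frac{1}{1-x}$, which is the first sum; differentiating the geometric series term by term inside its radius of convergence then gives $\sum_{n\ge 1} nx^{n-1}=\frac{1}{(1-x)^2}$, i.e. $0+1+2x+3x^2+\cdots=\frac{1}{(1-x)^2}$.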
## Find sum with binomial coefficients and powers of 2
Find this sum for positive n and m: $S(n, m) = \sum_{i=0}^n \frac{1}{2^{m+i+1}}\binom{m+i}{i} + \sum_{i=0}^m \frac{1}{2^{n+i+1}}\binom{n+i}{i}$. Obviously, $S(n,m)=S(m,n)$. Therefore I've tried to find $T(n,m) = \sum_{i=0}^n \frac{1}{2^{m+i}}\binom{m+i}{i}$ via $T(n, m+1)$, but in the binomial we have $\binom{m+i+1}{i} = \binom{m+i}{i} + \binom{m+i}{i-1}$, and this "i-1" brings nothing good. Other combinations like $T(n+1,m+1)+T(n,m)$ also don't lead anywhere. Any ideas? … Read more
## Determine the Value of $\sum_{n=0}^{\infty} (1+n)x^n$ [duplicate]
This question already has answers here: How can I evaluate $\sum_{n=0}^{\infty}(n+1)x^n$? (23 answers) Closed 6 years ago. For $x\in\mathbb{R}$ with $|x|<1$. Find the value of $\sum_{n=0}^{\infty}(1+n)x^n$ Answer $\sum_{n=0}^{\infty}(1+n)x^n=\sum_{n=0}^{\infty}x^n+\sum_{n=0}^{\infty}nx^n$ and $\frac{1}{1-x}+x\frac{d}{dx}\Big(\frac{1}{1-x}\Big)=\frac{1}{1-x}+\frac{x}{(1-x)^2}=\frac{1}{(1-x)^2}$ Attribution Source: Link, Question Author: gaufler, Answer Author: E.H.E
## Why does this sum equal zero?
Let $\gamma$ be a piecewise-smooth, closed curve. Let $[t_{j}, t_{j+1}]$ be an interval on the curve. Prove, $$\int_{\gamma} z^m dz=0$$ In the proof it states $$\int_{t_{j}}^{t_{j+1}} \gamma^m(t) \gamma'(t)\,dt=\frac{1}{m+1} [\gamma^{m+1}(t_{j+1})-\gamma^{m+1}(t_{j})]$$ Which I can see why. Next it claims, $$\int_{\gamma} z^m dz=\sum_{j=0}^{n-1} \frac{1}{m+1} [\gamma^{m+1}(t_{j+1})-\gamma^{m+1}(t_j)]=\frac{1}{m+1} [\gamma^{m+1}(b)-\gamma^{m+1}(a)]=0$$ I understand why it is $0$ but how did they … Read more
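To spell out the step the excerpt is describing: the sum telescopes, since each $\gamma^{m+1}(t_{j+1})$ cancels against the next term's $-\gamma^{m+1}(t_{j+1})$, leaving $\frac{1}{m+1}\left[\gamma^{m+1}(t_n)-\gamma^{m+1}(t_0)\right]=\frac{1}{m+1}\left[\gamma^{m+1}(b)-\gamma^{m+1}(a)\right]$, and this vanishes because the curve is closed, so $\gamma(b)=\gamma(a)$ (assuming $m\neq -1$).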
## Does $\sum_{n=1}^\infty a_n\sin(nx)$ converge on $[\varepsilon, 2\pi-\varepsilon]$?
Let $a_n$ be a sequence monotonically decreasing to 0. Consider $\sum_{n=1}^\infty a_n\sin(nx)$. Does the series converge uniformly on $[\varepsilon,2\pi-\varepsilon]$? ($\varepsilon>0$) Basically we could use Dirichlet's test. We want to show that $\sum_{n=1}^\infty\sin(nx)$ is bounded. Indeed: $\sum_{n=1}^\infty\sin(nx)=\frac{i}{2}\left(\sum_{n=1}^\infty(e^{ix})^n+\sum_{n=1}^\infty(e^{-ix})^n\right)\le\frac{i}{2}\left(\frac{1}{1-e^{ix}}+\frac{1}{1-e^{-ix}}\right)\le\frac{1}{1-e^{i(2\pi-\varepsilon)}}<\infty$ BUT, clearly, $g\!\left(\frac{\pi}{2}\right)=\sum_{n=1}^\infty\sin\frac{n\pi}{2}=\infty$ Where is the mistake? Answer We show that if $a_n$ is monotonically decreasing, then the series $\sum_{n=1}^\infty a_n\sin(nx)$ is … Read more
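One thing worth noting about the displayed computation: it assigns a value to the divergent series $\sum_{n=1}^\infty\sin(nx)$, whereas Dirichlet's test only needs the partial sums to be bounded uniformly in $N$, which does hold on $[\varepsilon,2\pi-\varepsilon]$:
$$\left|\sum_{n=1}^{N}\sin(nx)\right| = \left|\frac{\sin\left(\frac{Nx}{2}\right)\sin\left(\frac{(N+1)x}{2}\right)}{\sin\left(\frac{x}{2}\right)}\right| \le \frac{1}{\sin(\varepsilon/2)}\qquad\text{for all } N \text{ and } x\in[\varepsilon,2\pi-\varepsilon].$$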
## Prove or disprove: $\sum_{b \vee d = x} \tau(b) \tau(d) = \tau(x)^3$
Can somebody prove or disprove? Let $\tau$ be the divisor function, so that $\tau(6)=\#\{1,2,3,6\}=4$. $$\sum_{b\vee d=x}\tau(b)\tau(d)=\tau(x)^3$$ Here I am using $b\vee d=\operatorname{lcm}(b,d)$ since it is the join of two numbers in the multiplicative lattice $(\mathbb{N},\times)$. This statement seems to be true. Let's try $x=6$. The left hand side is: $2\times\tau(6)[\tau(1)+\tau(2)+\tau(3)]+\tau(6)^2+\tau(3)[\tau(2)+\tau(6)]+\tau(2)[\tau(3)+\tau(6)]=64$ and indeed $\tau(6)^3=64$. One option is to … Read more
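A quick sanity check of the claim at a prime $x=p$: the ordered pairs with $\operatorname{lcm}(b,d)=p$ are $(1,p)$, $(p,1)$ and $(p,p)$, so the left-hand side is $\tau(1)\tau(p)+\tau(p)\tau(1)+\tau(p)^2=2+2+4=8=\tau(p)^3$.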
## How do you find the condition where the Cauchy-Schwarz inequality is equal?
Cauchy-Schwarz inequality: $\left|\sum_{i=1}^n a_ib_i\right|^2\le\sum_{i=1}^n|a_i|^2\sum_{i=1}^n|b_i|^2$ The answer is known to be when $a_ik+b_i=0$ for some $k\in\mathbb{R}$, or any other equivalence (e.g. in linear algebra, when the vectors are linearly dependent). My question is, without a priori knowledge that equality holds for Cauchy-Schwarz if and only if [condition such as $a_ik+b_i=0$ or some equivalent condition], how do you … Read more
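One standard route, sketched for real sequences: the quadratic $q(t)=\sum_{i=1}^n(a_it+b_i)^2=t^2\sum a_i^2+2t\sum a_ib_i+\sum b_i^2$ is non-negative for every real $t$, so its discriminant satisfies $4\left(\sum a_ib_i\right)^2-4\sum a_i^2\sum b_i^2\le 0$, which is Cauchy-Schwarz; equality forces the discriminant to be zero, i.e. $q$ has a real root $t=k$, and $q(k)=0$ means $a_ik+b_i=0$ for every $i$.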
## Changing plus signs to minus signs to obtain a sum of zero.
Consider the sum $1+2+3+\dots+101$. Is it possible to change some of the plus signs to minus signs so that the sum is zero? Well, I know by using Gauss' method $1+2+\dots+100=5050$, then $5050+101=5151$. So I started to see if I can find a pattern but I'm not sure. Here is what I did: I aligned … Read more
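One way to settle it: the full sum is $1+2+\cdots+101=5151$, which is odd, and flipping the signs of any subset $S$ changes the total to $5151-2\sum_{k\in S}k$, which is still odd, so the result can never be zero.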
## Proving that $\sum_{i=1}^n\frac{1}{i^2}<2-\frac1n$ for $n>1$ by induction [duplicate]
This question already has answers here: Proving $1+\frac{1}{4}+\frac{1}{9}+\cdots+\frac{1}{n^2}\leq 2-\frac{1}{n}$ for all $n\geq 2$ by induction (5 answers) Closed 3 years ago. Prove by induction that $1 + \frac {1}{4} + \frac {1}{9} + \cdots +\frac {1}{n^2} < 2 - \frac{1}{n}$ for all $n>1$ I got up to using the inductive hypothesis to prove that … Read more |
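The inductive step being set up in that excerpt can be finished as follows: assuming $\sum_{i=1}^{n}\frac{1}{i^2}<2-\frac{1}{n}$, we get $\sum_{i=1}^{n+1}\frac{1}{i^2}<2-\frac{1}{n}+\frac{1}{(n+1)^2}$, and since $\frac{1}{(n+1)^2}<\frac{1}{n(n+1)}=\frac{1}{n}-\frac{1}{n+1}$, the right-hand side is less than $2-\frac{1}{n+1}$.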
http://hal.in2p3.fr/view_by_stamp.php?label=PCC&langue=en&action_todo=view&id=in2p3-00024529&version=1 | HAL: in2p3-00024529, version 1
International Workshop on Topics in Astroparticle and Underground Physics, TAUP 2003, Seattle: United States
Dark matter with HELLAZ
(2005)
Dark matter interacting in a pressurized TPC will produce an energy spectrum of recoil nuclei whose end point depends on the atomic mass and the pressure of the gas. These can be varied from He to Xe, and from 10$^{-2}$ to 20 bar. The threshold depends on the gain of the end-cap detector and can reach single-electron capability, that is, a few eV. HELLAZ has reached that gain with 20 bar He. Parts of this presentation are taken from [J.I. Collar, Y. Giomataris, Nucl. Inst. Meth. 471 (2001) 254].
Subject(s) : Physics/High Energy Physics - Experiment
in2p3-00024529, version 1 http://hal.in2p3.fr/in2p3-00024529 oai:hal.in2p3.fr:in2p3-00024529 From: Simone Lantz <> Submitted on: Friday, 2 September 2005 17:01:58 Updated on: Monday, 5 September 2005 16:25:12 |
https://compsci.rocks/arraytester-solution/ | # ArrayTester Solution
The ArrayTester FRQ has you working with 2 methods inside a class named ArrayTester. There were also two methods, hasAllValues and containsDuplicates, that were listed as Implementation Not Shown. When you see this on an FRQ you will almost certainly be calling those methods somewhere in your solution. Otherwise they wouldn't have bothered listing them.
## Part A
Part A, getColumn, tasked you with implementing a method that creates an array out of the values in a single column of a matrix. Consider the following matrix mat.
Using mat, the call getColumn(mat, 2) should return the array [3, 7, 11, 15, 19], which holds the values in column 2. Remember that in Java arrays are zero-indexed.
public static int[] getColumn(int[][] arr2D, int c) {
int[] out = new int[arr2D.length];
for (int r=0; r<arr2D.length; r++) {
out[r] = arr2D[r][c];
}
return out;
}
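The matrix itself appears only as an image in the original post; a matrix consistent with the example output above would look something like this, which you can use to try out getColumn:
// A sample matrix consistent with the getColumn(mat, 2) example above;
// the original post shows the matrix only as an image.
int[][] mat = {
{ 1, 2, 3, 4},
{ 5, 6, 7, 8},
{ 9, 10, 11, 12},
{13, 14, 15, 16},
{17, 18, 19, 20}
};
int[] col = getColumn(mat, 2);
System.out.println(java.util.Arrays.toString(col)); // prints [3, 7, 11, 15, 19]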
## Part B
The second part has you check if a given matrix is a Latin Square. It takes 3 conditions for a matrix to be a Latin Square.
• First row contains no duplicate values
• All rows contain the same set of values
• All columns contain the same set of values
For example, this is a Latin Square
Note that the first row contains the values 1 through 5 without any duplicates. And each row and column also contains the values 1 through 5.
We’re also told that there are the same number of rows and columns.
public static boolean isLatin(int[][] square) {
int[] firstRow = square[0];
if (containsDuplicates(firstRow)) {
return false;
}
for (int r=1; r<square.length; r++) {
if (!hasAllValues(firstRow, square[r])) {
return false;
}
}
for (int c=0; c<square[0].length; c++) {
if (!hasAllValues(firstRow, getColumn(square, c))) {
return false;
}
}
return true;
}
The first check calls the containsDuplicates method to make sure that all values in the first row are unique. Then a pair of loops goes through each row and then each column to check that every row and column contains the same values as the first row. Notice that we're calling hasAllValues each time and using the getColumn method we implemented in Part A to check the columns.
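If you want to run isLatin locally, keep in mind that the two helpers are listed on the exam as Implementation Not Shown, so you would have to supply your own. Here is one plausible sketch of them, matching how they are used above, plus a tiny test:
// Hypothetical helper implementations -- the exam leaves these as
// "Implementation Not Shown"; these versions simply match how the
// solution above uses them.
public static boolean containsDuplicates(int[] arr) {
for (int i = 0; i < arr.length; i++) {
for (int j = i + 1; j < arr.length; j++) {
if (arr[i] == arr[j]) {
return true; // found the same value twice
}
}
}
return false;
}
public static boolean hasAllValues(int[] arr1, int[] arr2) {
// Assumes arr1 and arr2 have the same length.
for (int value : arr1) {
boolean found = false;
for (int other : arr2) {
if (other == value) {
found = true;
break;
}
}
if (!found) {
return false; // a value from arr1 is missing in arr2
}
}
return true;
}
public static void main(String[] args) {
int[][] latin = {
{1, 2, 3},
{2, 3, 1},
{3, 1, 2}
};
System.out.println(isLatin(latin)); // expected: true
}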
https://stats.stackexchange.com/questions/20520/what-is-an-uninformative-prior-can-we-ever-have-one-with-truly-no-information | # What is an “uninformative prior”? Can we ever have one with truly no information?
Inspired by a comment from this question:
What do we consider "uninformative" in a prior - and what information is still contained in a supposedly uninformative prior?
In the analyses where I generally see a prior, it's often a frequentist-type analysis trying to borrow some nice parts from Bayesian analysis (be it some easier interpretation, all the way to 'it's the hot thing to do'), and the specified prior is a uniform distribution across the bounds of the effect measure, centered on 0. But even that asserts a shape to the prior - it just happens to be flat.
Is there a better uninformative prior to use?
• Maybe you'll enjoy a look at the so-called Principle of Maximum Entropy. I don't feel like expanding that into a full answer – the Wikipedia article seems of good quality. I'm pretty confident some contributors will expand on it much better than I would. – Elvis Jan 3 '12 at 9:54
[Warning: as a card-carrying member of the Objective Bayes Section of ISBA, my views are not representative of all Bayesian statisticians! Quite the opposite...]
In summary, there is no such thing as a prior with "truly no information".
Indeed, the "uninformative" prior is sadly a misnomer. Any prior distribution contains some specification that is akin to some amount of information. Even (or especially) the uniform prior. Indeed, the uniform prior is only flat for one given parameterisation of the problem. If one changes to another parameterisation (even a bounded one), the Jacobian of the change of variables comes into the picture, and the prior is no longer flat.
As pointed out by Elvis, maximum entropy is one approach advocated to select so-called "uninformative" priors. It however requires (a) enough information on some moments $h(\theta)$ of the prior distribution $\pi(\cdot)$ to specify the constraints$$\int_{\Theta} h(\theta)\,\text{d}\pi(\theta) = \mathfrak{h}_0$$ that lead to the MaxEnt prior $$\pi^*(\theta)\propto \exp\{ \lambda^\text{T}h(\theta) \}$$ and (b) the preliminary choice of a reference measure $\text{d}\mu(\theta)$ [in continuous settings], a choice that brings the debate back to its initial stage! (In addition, the parametrisation of the constraints (i.e., the choice of $h$) impacts the shape of the resulting MaxEnt prior.)
José Bernardo has produced an original theory of reference priors where he chooses the prior in order to maximise the information brought by the data by maximising the Kullback distance between prior and posterior. In the simplest cases with no nuisance parameters, the solution is Jeffreys' prior. In more complex problems, (a) a choice of the parameters of interest (or even a ranking of their order of interest) must be made; (b) the computation of the prior is fairly involved and requires a sequence of embedded compact sets to avoid improperness issues. (See e.g. The Bayesian Choice for details.)
In an interesting twist, some researchers outside the Bayesian perspective have been developing procedures called confidence distributions that are probability distributions on the parameter space, constructed by inversion from frequency-based procedures without an explicit prior structure or even a dominating measure on this parameter space. They argue that this absence of a well-defined prior is a plus, although the result definitely depends on the choice of the initialising frequency-based procedure.
In short, there is no "best" (or even "better") choice for "the" "uninformative" prior. And I consider this is how things should be because the very nature of Bayesian analysis implies that the choice of the prior distribution matters. And that there is no comparison of priors: one cannot be "better" than another. (At least before observing the data: once it is observed, comparison of priors becomes model choice.) The conclusion of José Bernardo, Jim Berger, Dongchu Sun, and many other "objective" Bayesians is that there are roughly equivalent reference priors one can use when being unsure about one's prior information or seeking a benchmark Bayesian inference, some of those priors being partly supported by information theory arguments, others by non-Bayesian frequentist properties (like matching priors), and all resulting in rather similar inferences.
• (+1) Your book? Oh damn. I so have 387 questions for you :) – Elvis Jan 3 '12 at 13:07
• (+1) For an objective (no less!), straightforward answer. – cardinal Jan 3 '12 at 15:00
• +1 Thank you for a good and well-informed overview of the issues. – whuber Jan 3 '12 at 15:00
• An outstanding answer. Thank you. And yet another book to go on the wish list. – Fomite Jan 3 '12 at 18:26
• It's almost unfair. After all, he's Christian Robert! Just kidding. Great answer. And I'd love if @Xi'an could expand it in a post at his blog, specially about how parametrization is important to the topic of "uninformative" priors. – Manoel Galdino Jan 13 '12 at 19:35
An appealing property of formal noninformative priors is the "frequentist-matching property": it means that a posterior 95%-credibility interval is also (at least, approximately) a 95%-confidence interval in the frequentist sense. This property holds for Bernardo's reference prior, although the foundations of these noninformative priors are not oriented towards the achievement of a good frequentist-matching property. If you use a "naive" ("flat") noninformative prior such as the uniform distribution or a Gaussian distribution with a huge variance, then there is no guarantee that the frequentist-matching property holds. Maybe Bernardo's reference prior could not be considered as the "best" choice of a noninformative prior but could be considered as the most successful one. Theoretically it overcomes many paradoxes of other candidates.
Jeffreys distributions also suffer from inconsistencies: the Jeffreys priors for a variable over $(-\infty,\infty)$ or over $(0,\infty)$ are improper, which is not the case for the Jeffreys prior of a probability parameter $p$: the measure $\text{d}p/\sqrt{p(1-p)}$ has a mass of $\pi$ over $(0,1)$.
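For reference, that mass-$\pi$ computation follows from the substitution $p=\sin^2\theta$: $\int_0^1 \frac{\text{d}p}{\sqrt{p(1-p)}}=\int_0^{\pi/2}\frac{2\sin\theta\cos\theta\,\text{d}\theta}{\sin\theta\cos\theta}=\pi.$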
Renyi has shown that a non-informative distribution must be associated with an improper integral. See instead Lhoste's distributions which avoid this difficulty and are invariant under changes of variables (e.g., for $p$, the measure is $\text{d}p/p(1-p)$).
First, the translation is good!
For E. Lhoste: "Le calcul des probabilités appliqué à l'artillerie", Revue d'artillerie, vol. 91, May-August 1923.
For A. Rényi: "On a new axiomatic theory of probability", Acta Mathematica, Hungarian Academy of Sciences, vol. VI, fasc. 3-4, 1955.
I can add: M. Dumas: "Lois de probabilité a priori de Lhoste", Sciences et techniques de l'armement, 56, 4th fascicle, 1982, pp. 687-715.
• Is it possible for you to re-write this in English, even if it is done quite poorly through an automated translation service like Google Translate? Other users, more fluent in both French and English, can help copy-edit it for you. – Silverfish Nov 6 '15 at 19:30
• As far as I remember, Lhoste's invariance result is restricted to the transforms $\log\sigma$ and $\log p/(1-p)$ for parameters on $(0,\infty)$ and $(0,1)$, respectively. Other transforms from $(0,\infty)$ and $(0,1)$ to $\mathbb{R}$ will result in different priors. – Xi'an Nov 6 '15 at 21:23
• From my brief correspondence with Maurice Dumas in the early 1990's, I remember that he wrote a Note aux Comptes-Rendus de l'Académie des Sciences, where he uses the $\log()$ and $\text{logit}()$ transforms to derive "invariant" priors. – Xi'an Nov 9 '15 at 18:53
I agree with the excellent answer by Xi'an, pointing out that there is no single prior that is "uninformative" in the sense of carrying no information. To expand on this topic, I wanted to point out that one alternative is to undertake Bayesian analysis within the imprecise probability framework (see esp. Walley 1991, Walley 2000). Within this framework the prior belief is represented by a set of probability distributions, and this leads to a corresponding set of posterior distributions. That might sound like it would not be very helpful, but it actually is quite amazing. Even with a very broad set of prior distributions (where certain moments can range over all possible values) you often still get posterior convergence to a single posterior as $$n \rightarrow \infty$$.
This analytical framework has been axiomatised by Walley as its own special form of probabilistic analysis, but is essentially equivalent to robust Bayesian analysis using a set of priors, yielding a corresponding set of posteriors. In many models it is possible to set an "uninformative" set of priors that allows some moments (e.g., the prior mean) to vary over the entire possible range of values, and this nonetheless produces valuable posterior results, where the posterior moments are bounded more tightly. This form of analysis arguably has a better claim to being called "uninformative", at least with respect to moments that are able to vary over their entire allowable range.
A simple example - Bernoulli model: Suppose we observe data $$X_1,...,X_n | \theta \sim \text{IID Bern}(\theta)$$ where $$\theta$$ is the unknown parameter of interest. Usually we would use a beta density as the prior (both the Jeffreys prior and the reference prior are of this form). We can specify this form of prior density in terms of the prior mean $$\mu$$ and another parameter $$\kappa > 1$$ as:
\begin{aligned} \pi_0(\theta | \mu, \kappa) = \text{Beta}(\theta | \mu, \kappa) = \text{Beta} \Big( \theta \Big| \alpha = \mu (\kappa - 1), \beta = (1-\mu) (\kappa - 1) \Big). \end{aligned}
(This form gives prior moments $$\mathbb{E}(\theta) = \mu$$ and $$\mathbb{V}(\theta) = \mu(1-\mu) / \kappa$$.) Now, in an imprecise model we could set the prior to consist of the set of all these prior distributions over all possible expected values, but with the other parameter fixed to control the precision over the range of mean values. For example, we might use the set of priors:
$$\mathscr{P}_0 \equiv \Big\{ \text{Beta}(\mu, \kappa) \Big| 0 \leqslant \mu \leqslant 1 \Big\}.$$
Suppose we observe $$s = \sum_{i=1}^n x_i$$ positive indicators in the data. Then, using the updating rule for the Bernoulli-beta model, the corresponding posterior set is:
$$\mathscr{P}_\mathbf{x} = \Big\{ \text{Beta}\Big( \tfrac{s + \mu(\kappa-1)}{n + \kappa -1}, n+\kappa \Big) \Big| 0 \leqslant \mu \leqslant 1 \Big\}.$$
The range of possible values for the posterior expectation is:
$$\frac{s}{n + \kappa-1} \leqslant \mathbb{E}(\theta | \mathbf{x}) \leqslant \frac{s + \kappa-1}{n + \kappa-1}.$$
What is important here is that even though we started with a model that was "uninformative" with respect to the expected value of the parameter (the prior expectation ranged over all possible values), we nonetheless end up with posterior inferences that are informative with respect to the posterior expectation of the parameter (they now range over a narrower set of values). As $$n \rightarrow \infty$$ this range of values is squeezed down to a single point, which is the true value of $$\theta$$.
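As a purely illustrative set of numbers (not taken from the original answer): with $$\kappa = 2$$, $$n = 10$$ and $$s = 7$$ the posterior expectation is only pinned down to $$[7/11,\ 8/11] \approx [0.64,\ 0.73]$$, whereas with $$n = 1000$$ and $$s = 700$$ it is pinned down to $$[700/1001,\ 701/1001] \approx [0.699,\ 0.700]$$.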
• +1. Interesting. What is kappa in the last equation? Should it be kappa star? – amoeba Mar 4 at 9:59
• I have edited to remove variation in $\kappa$ to give a simpler model. It should be okay now. – Ben Mar 4 at 10:51 |
https://www.physicsforums.com/threads/integrating-friedmann-equation-of-multi-component-universe-respect-to-a-and-t.661326/ | Integrating the Friedmann Equation of a Multi-component Universe with respect to a and t
1. Dec 28, 2012
4everphysics
I am having trouble finding the relationship between 'a' and 't' by integrating the Friedmann equation in a multi-component universe.
It would be very helpful if you could help me with just
the matter-curvature-only universe and the matter-lambda-only universe.
The two integrals look like the following.
Matter-curvature only:
$$H_0 t = \int_0^a \frac{da}{[\Omega_0/a + (1-\Omega_0)]^{1/2}}$$
Matter-Lambda only:
$$H_0 t = \int_0^a \frac{da}{[\Omega_0/a + (1-\Omega_0)a^2]^{1/2}}$$
2. Dec 28, 2012
BillSaltLake
Try substituting x = 1/a and then use a table of integrals.
3. Dec 31, 2012
akhtarphysic
With matter and lambda the result is
$$a(t)=\left(\frac{\rho_\text{matter}}{\rho_\Lambda}\right)^{1/3}\left[\sinh\!\left(\sqrt{6\pi G\,\rho_\Lambda}\;t\right)\right]^{2/3},$$
where $\rho_x/\rho_\text{critical}=\Omega_{0,x}$. |
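One way to obtain that closed form, as a sketch: putting the matter-lambda integrand over a common denominator and substituting $u = a^{3/2}$, $du = \tfrac{3}{2}\sqrt{a}\,da$, gives
$$H_0 t = \int_0^a \frac{\sqrt{a}\,da}{\sqrt{\Omega_0 + (1-\Omega_0)a^3}} = \frac{2}{3\sqrt{1-\Omega_0}}\,\sinh^{-1}\!\left(\sqrt{\frac{1-\Omega_0}{\Omega_0}}\,a^{3/2}\right),$$
so that
$$a(t) = \left(\frac{\Omega_0}{1-\Omega_0}\right)^{1/3}\sinh^{2/3}\!\left(\frac{3}{2}\sqrt{1-\Omega_0}\,H_0 t\right),$$
which matches the quoted result, since for a flat matter-lambda universe $\Omega_0/(1-\Omega_0)=\rho_\text{matter}/\rho_\Lambda$ and $\tfrac{3}{2}\sqrt{1-\Omega_0}\,H_0=\sqrt{6\pi G\rho_\Lambda}$.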
https://eaforum.issarice.com/users/steve2152 | ## Posts
A case for AGI safety research far in advance 2021-03-26T12:59:36.244Z
[U.S. specific] PPP: free money for self-employed & orgs (time-sensitive) 2021-01-09T19:39:14.250Z
Comment by steve2152 on Why aren't you freaking out about OpenAI? At what point would you start? · 2021-10-14T00:47:33.827Z · EA · GW
Vicarious and Numenta are both explicitly trying to build AGI, and neither does any safety/alignment research whatsoever. I don't think this fact is particularly relevant to OpenAI, but I do think it's an important fact in its own right, and I'm always looking for excuses to bring it up. :-P
Anyone who wants to talk about Vicarious or Numenta in the context of AGI safety/alignment, please DM or email me. :-)
Comment by steve2152 on Why does (any particular) AI safety work reduce s-risks more than it increases them? · 2021-10-07T20:34:10.631Z · EA · GW
I don't really distinguish between effects by order*
I agree that direct and indirect effects of an action are fundamentally equally important (in this kind of outcome-focused context) and I hadn't intended to imply otherwise.
Comment by steve2152 on Why does (any particular) AI safety work reduce s-risks more than it increases them? · 2021-10-07T14:41:08.742Z · EA · GW
Hmm, it seems to me (and you can correct me) that we should be able to agree that there are SOME technical AGI safety research publications that are positive under some plausible beliefs/values and harmless under all plausible beliefs/values, and then we don't have to talk about cluelessness and tradeoffs, we can just publish them.
And we both agree that there are OTHER technical AGI safety research publications that are positive under some plausible beliefs/values and negative under others. And then we should talk about your portfolios etc. Or more simply, on a case-by-case basis, we can go looking for narrowly-tailored approaches to modifying the publication in order to remove the downside risks while maintaining the upside.
I feel like we're arguing past each other: I keep saying the first category exists, and you keep saying the second category exists. We should just agree that both categories exist! :-)
Perhaps the more substantive disagreement is what fraction of the work is in which category. I see most but not all ongoing technical work as being in the first category, and I think you see almost all ongoing technical work as being in the second category. (I think you agreed that "publishing an analysis about what happens if a cosmic ray flips a bit" goes in the first category.)
(Luke says "AI-related" but my impression is that he mostly works on AGI governance not technical, and the link is definitely about governance not technical. I would not be at all surprised if proposed governance-related projects were much more heavily weighted towards the second category, and am only saying that technical safety research is mostly first-category.)
For example, if you didn't really care about s-risks, then publishing a useful considerations for those who are concerned about s-risks might take attention away from your own priorities, or it might increase cooperation, and the default position to me should be deep uncertainty/cluelessness here, not that it's good in expectation or bad in expectation or 0 in expectation.
This points to another (possible?) disagreement. I think maybe you have the attitude where (to caricature somewhat) if there's any downside risk whatsoever, no matter how minor or far-fetched, you immediately jump to "I'm clueless!". Whereas I'm much more willing to say: OK, I mean, if you do anything at all there's a "downside risk" in a sense, just because life is uncertain, who knows what will happen, but that's not a good reason to just sit on the sidelines and let nature take its course and hope for the best. If I have a project whose first-order effect is a clear and specific and strong upside opportunity, I don't want to throw that project out unless there's a comparably clear and specific and strong downside risk. (And of course we are obligated to try hard to brainstorm what such a risk might be.) Like if a firefighter is trying to put out a fire, and they aim their hose at the burning interior wall, they don't stop and think, "Well I don't know what will happen if the wall gets wet, anything could happen, so I'll just not pour water on the fire, y'know, don't want to mess things up."
The "cluelessness" intuition gets its force from having a strong and compelling upside story weighed against a strong and compelling downside story, I think.
If the first-order effect of a project is "directly mitigating an important known s-risk", and the second-order effects of the same project are "I dunno, it's a complicated world, anything could happen", then I say we should absolutely do that project.
Comment by steve2152 on Why does (any particular) AI safety work reduce s-risks more than it increases them? · 2021-10-07T02:55:21.994Z · EA · GW
In practice, we can't really know with certainty that we're making AI safer, and without strong evidence/feedback, our judgements of tradeoffs may be prone to fairly arbitrary subjective judgements, motivated reasoning and selection effects.
This strikes me as too pessimistic. Suppose I bring a complicated new board game to a party. Two equally-skilled opposing teams each get a copy of the rulebook to study for an hour before the game starts. Team A spends the whole hour poring over the rulebook and doing scenario planning exercises. Team B immediately throws the rulebook in the trash and spends the hour watching TV.
Neither team has "strong evidence/feedback"—they haven't started playing yet. Team A could think they have good strategy ideas but in fact they are engaging in arbitrary subjective judgments and motivated reasoning. In fact, their strategy ideas, which seemed good on paper, could in fact turn out to be counterproductive!
Still, I would put my money on Team A beating Team B. Because Team A is trying. Their planning abilities don't have to be all that good to be strictly better (in expectation) than "not doing any planning whatsoever, we'll just wing it". That's a low bar to overcome!
So by the same token, it seems to me that vast swathes of AGI safety research easily surpasses the (low) bar of doing better in expectation than the alternative of "Let's just not think about it in advance, we'll wing it".
For example, compare (1) a researcher spends some time thinking about what happens if a cosmic ray flips a bit (or a programmer makes a sign error, like in the famous GPT-2 incident), versus (2) nobody spends any time thinking about that. (1) is clearly better, right? We can always be concerned that the person won't do a great job, or that it will be counterproductive because they'll happen across very dangerous information and then publish it, etc. But still, the expected value here is clearly positive, right?
You also bring up the idea that (IIUC) there may be objectively good safety ideas but they might not actually get implemented because there won't be a "strong and justified consensus" to do them. But again, the alternative is "nobody comes up with those objectively good safety ideas in the first place". That's even worse, right? (FWIW I consider "come up with crisp and rigorous and legible arguments for true facts about AGI safety" to be a major goal of AGI safety research.)
Anyway, I'm objecting to undirected general feelings of "gahhhh we'll never know if we're helping at all", etc. I think there's just a lot of stuff in the AGI safety research field which is unambiguously good in expectation, where we don't have to feel that way. What I don't object to—and indeed what I strongly endorse—is taking a more directed approach and say "For AGI safety research project #732, what are the downside risks of this research, and how do they compare to the upsides?"
So that brings us to "ambitious value alignment". I agree that an ambitiously-aligned AGI comes with a couple potential sources of s-risk that other types of AGI wouldn't have, specifically via (1) sign flip errors, and (2) threats from other AGIs. (Although I think (1) is less obviously a problem than it sounds, at least in the architectures I think about.) On the other hand, (A) I'm not sure anyone is really working on ambitious alignment these days … at least Rohin Shah & Paul Christiano have stated that narrow (task-limited) alignment is a better thing to shoot for (and last anyone heard MIRI was shooting for task-limited AGIs too); (B) my sense is that current value-learning work (e.g. at CHAI) is more about gaining conceptual understanding than creating practical algorithms / approaches that will scale to AGI. That said, I'm far from an expert on the current value learning literature; frankly I'm often confused by what such researchers are imagining for their longer-term game-plan.
BTW I put a note on my top comment that I have a COI. If you didn't notice. :)
Comment by steve2152 on Why does (any particular) AI safety work reduce s-risks more than it increases them? · 2021-10-06T17:57:23.503Z · EA · GW
Hmm, just a guess, but …
• Maybe you're conceiving of the field as "AI alignment", pursuing the goal "figure out how to bring an AI's goals as close as possible to a human's (or humanity's) goals, in their full richness" (call it "ambitious value alignment")
• Whereas I'm conceiving the field as "AGI safety", with the goal "reduce the risk of catastrophic accidents involving AGIs".
"AGI safety research" (as I think of it) includes not just how you would do ambitious value alignment, but also whether you should do ambitious value alignment. In fact, AGI safety research may eventually result in a strong recommendation against doing ambitious value alignment, because we find that it's dangerously prone to backfiring, and/or that some alternative approach is clearly superior (e.g. CAIS, or microscope AI, or act-based corrigibility or myopia or who knows what). We just don't know yet. We have to do the research.
"AGI safety research" (as I think of it) also includes lots of other activities like analysis and mitigation of possible failure modes (e.g. asking what would happen if a cosmic ray flips a bit in the computer), and developing pre-deployment testing protocols, etc. etc.
Does that help? Sorry if I'm missing the mark here.
Comment by steve2152 on Why does (any particular) AI safety work reduce s-risks more than it increases them? · 2021-10-04T17:42:01.521Z · EA · GW
Thanks!
(Incidentally, I don't claim to have an absolutely watertight argument here that AI alignment research couldn't possibly be bad for s-risks, just that I think the net expected impact on s-risks is to reduce them.)
If s-risks were increased by AI safety work near (C), why wouldn't they also be increased near (A), for the same reasons?
I think suffering minds are a pretty specific thing, in the space of "all possible configurations of matter". So optimizing for something random (paperclips, or "I want my field-of-view to be all white", etc.) would almost definitely lead to zero suffering (and zero pleasure). (Unless the AGI itself has suffering or pleasure.) However, there's a sense in which suffering minds are "close" to the kinds of things that humans might want an AGI to want to do. Like, you can imagine how if a cosmic ray flips a bit, "minimize suffering" could turn into "maximize suffering". Or at any rate, humans will try (and I expect succeed even without philanthropic effort) to make AGIs with a prominent human-like notion of "suffering", so that it's on the table as a possible AGI goal.
In other words, imagine you're throwing a dart at a dartboard.
• The bullseye has very positive point value.
• That's representing the fact that basically no human wants astronomical suffering, and basically everyone wants peace and prosperity etc.
• On other parts of the dartboard, there are some areas with very negative point value.
• That's representing the fact that if programmers make an AGI that desires something vaguely resembling what they want it to desire, that could be an s-risk.
• If you miss the dartboard entirely, you get zero points.
• That's representing the fact that a paperclip-maximizing AI would presumably not care to have any consciousness in the universe (except possibly its own, if applicable).
So I read your original post as saying "If the default is for us to miss the dartboard entirely, it could be s-risk-counterproductive to improve our aim enough that we can hit the dartboard", and my response to that was "I don't think that's relevant, I think it will be really easy to not miss the dartboard entirely, and this will happen "by default". And in that case, better aim would be good, because it brings us closer to the bullseye."
Comment by steve2152 on Why does (any particular) AI safety work reduce s-risks more than it increases them? · 2021-10-04T00:31:00.535Z · EA · GW
Sorry I'm not quite sure what you mean. If we put things on a number line with (A)=1, (B)=2, (C)=3, are you disagreeing with my claim "there is very little probability weight in the interval (2, 3]", or with my claim "in the interval [1, 2], moving down towards 1 probably reduces s-risk", or with both, or something else?
Comment by steve2152 on Why does (any particular) AI safety work reduce s-risks more than it increases them? · 2021-10-03T20:18:29.157Z · EA · GW
[note that I have a COI here]
Hmm, I guess I've been thinking that the choice is between (A) "the AI is trying to do what a human wants it to try to do" vs (B) "the AI is trying to do something kinda weirdly and vaguely related to what a human wants it to try to do". I don't think (C) "the AI is trying to do something totally random" is really on the table as a likely option, even if the AGI safety/alignment community didn't exist at all.
That's because everybody wants the AI to do the thing they want it to do, not just long-term AGI risk people. And I think there are really obvious things that anyone would immediately think to try, and these really obvious techniques would be good enough to get us from (C) to (B) but not good enough to get us to (A).
[Warning: This claim is somewhat specific to a particular type of AGI architecture that I work on and consider most likely—see e.g. here. Other people have different types of AGIs in mind and would disagree. In particular, in the "deceptive mesa-optimizer" failure mode (which relates to a different AGI architecture than mine) we would plausibly expect failures to have random goals like "I want my field-of-view to be all white", even after reasonable effort to avoid that. So maybe people working in other areas would have different answers, I dunno.]
I agree that it's at least superficially plausible that (C) might be better than (B) from an s-risk perspective. But if (C) is off the table and the choice is between (A) and (B), I think (A) is preferable for both s-risks and x-risks.
Comment by steve2152 on evelynciara's Shortform · 2021-09-27T11:53:51.509Z · EA · GW
The main argument of Stuart Russell's book focuses on reward modeling as a way to align AI systems with human preferences.
Hmm, I remember him talking more about IRL and CIRL and less about reward modeling. But it's been a little while since I read it, could be wrong.
If it's really difficult to write a reward function for a given task Y, then it seems unlikely that AI developers would deploy a system that does it in an unaligned way according to a misspecified reward function. Instead, reward modeling makes it feasible to design an AI system to do the task at all.
Maybe there's an analogy where someone would say "If it's really difficult to prevent accidental release of pathogens from your lab, then it seems unlikely that bio researchers would do research on pathogens whose accidental release would be catastrophic". Unfortunately there's a horrifying many-decades-long track record of accidental release of pathogens from even BSL-4 labs, and it's not like this kind of research has stopped. Instead it's like, the bad thing doesn't happen every time, and/or things seem to be working for a while before the bad thing happens, and that's good enough for the bio researchers to keep trying.
So as I talk about here, I think there are going to be a lot of proposals to modify an AI to be safe that do not in fact work, but do seem ahead-of-time like they might work, and which do in fact work for a while as training progresses. I mean, when x-risk-naysayers like Yann LeCun or Jeff Hawkins are asked how to avoid out-of-control AGIs, they can spout off a list of like 5-10 ideas that would not in fact work, but sound like they would. These are smart people and a lot of other smart people believe them too. Also, even something as dumb as "maximize the amount of money in my bank account" would plausibly work for a while and do superhumanly-helpful things for the programmers, before it starts doing superhumanly-bad things for the programmers.
Even with reward modeling, though, AI systems are still going to have similar drives due to instrumental convergence: self-preservation, goal preservation, resource acquisition, etc., even if they have goals that were well specified by their developers. Although maybe corrigibility and not doing bad things can be built into the systems' goals using reward modeling.
Yup, if you don't get corrigibility then you failed.
Comment by steve2152 on [Creative Writing Contest] The Reset Button · 2021-09-20T01:18:21.853Z · EA · GW
I really liked this!!!
Since you asked for feedback, here's a little suggestion, take it or leave it: I found a couple things at the end slightly out-of-place, in particular "If you choose to tackle the problem of nuclear security, what angle can you attack the problem from that will give you the most fulfillment?" and "Do any problems present even bigger risks than nuclear war?"
Immediately after such an experience, I think the narrator would not be thinking about option of not bothering to work on nuclear security because other causes are more important, nor thinking about their own fulfillment. If other causes came to mind, I imagine it would be along the lines of "if I somehow manage to stop the nuclear war, what other potential catastrophes are waiting in the wings, ready to strike anytime in the months and years after that—and this time with no reset button?"
Or if you want it to fit better as written now, then shortly after the narrator snaps back to age 18 the text could say something along the lines of "You know about chaos theory and the butterfly effect; this will be a new re-roll of history, and there might not be a nuclear war this time around. Maybe last time was a fluke?" Then that might remove some of the single-minded urgency that I would otherwise expect the narrator to feel, and thus it would become a bit more plausible that the narrator might work on pandemics or whatever.
(Maybe that "new re-roll of history" idea is what you had in mind? Whereas I was imagining the Groundhog Day / Edge of Tomorrow / Terminator trope where the narrator knows 100% for sure that there will be a nuclear war on this specific hour of this specific day, if the narrator doesn't heroically stop it.)
(I'm not a writer, don't trust my judgment.)
Comment by steve2152 on A mesa-optimization perspective on AI valence and moral patienthood · 2021-09-16T18:08:18.128Z · EA · GW
Hmm, yeah, I guess you're right about that.
Comment by steve2152 on A mesa-optimization perspective on AI valence and moral patienthood · 2021-09-15T13:49:37.395Z · EA · GW
Oh, you said "evolution-type optimization", so I figured you were thinking of the case where the inner/outer distinction is clear cut. If you don't think the inner/outer distinction will be clear cut, then I'd question whether you actually disagree with the post :) See the section defining what I'm arguing against, in particular the "inner as AGI" discussion.
Comment by steve2152 on A mesa-optimization perspective on AI valence and moral patienthood · 2021-09-14T15:40:49.019Z · EA · GW
Nah, I'm pretty sure the difference there is "Steve thinks that Jacob is way overestimating the difficulty of humans building AGI-capable learning algorithms by writing source code", rather than "Steve thinks that Jacob is way underestimating the difficulty of computationally recapitulating the process of human brain evolution".
For example, for the situation that you're talking about (I called it "Case 2" in my post) I wrote "It seems highly implausible that the programmers would just sit around for months and years and decades on end, waiting patiently for the outer algorithm to edit the inner algorithm, one excruciatingly-slow step at a time. I think the programmers would inspect the results of each episode, generate hypotheses for how to improve the algorithm, run small tests, etc." If the programmers did just sit around for years not looking at the intermediate training results, yes I expect the project would still succeed sooner or later. I just very strongly expect that they wouldn't sit around doing nothing.
Comment by steve2152 on A mesa-optimization perspective on AI valence and moral patienthood · 2021-09-14T00:07:49.936Z · EA · GW
AlphaGo has a human-created optimizer, namely MCTS. Normally people don't use the term "mesa-optimizer" for human-created optimizers.
Then maybe you'll say "OK there's a human-created search-based consequentialist planner, but the inner loop of that planner is a trained ResNet, and how do you know that there isn't also a search-based consequentialist planner inside each single run through the ResNet?"
Admittedly, I can't prove that there isn't. I suspect that there isn't, because there seems to be no incentive for that (there's already a search-based consequentialist planner!), and also because I don't think ResNets are up to such a complicated task.
Comment by steve2152 on AI timelines and theoretical understanding of deep learning · 2021-09-13T14:14:19.195Z · EA · GW
I find most justifications and arguments made in favor of a timeline of less than 50 years to be rather unconvincing.
If we don't have convincing evidence in favor of a timeline <50 years, and we also don't have convincing evidence in favor of a timeline ≥50 years, then we just have to say that this is a question on which we don't have convincing evidence of anything in particular. But we still have to take whatever evidence we have and make the best decisions we can. ¯\_(ツ)_/¯
(You don't say this explicitly but your wording kinda implies that ≥50 years is the default, and we need convincing evidence to change our mind away from that default. If so, I would ask why we should take ≥50 years to be the default. Or sorry if I'm putting words in your mouth.)
I am simply not able to understand why we are significantly closer to AGI today than we were in 1950s
Lots of ingredients go into AGI, including (1) algorithms, (2) lots of inexpensive chips that can do lots of calculations per second, (3) technology for fast communication between these chips, (4) infrastructure for managing large jobs on compute clusters, (5) frameworks and expertise in parallelizing algorithms, (6) general willingness to spend millions of dollars and roll custom ASICs to run a learning algorithm, (7) coding and debugging tools and optimizing compilers, etc. Even if you believe that you've made no progress whatsoever on algorithms since the 1950s, we've made massive progress in the other categories. I think that alone puts us "significantly closer to AGI today than we were in the 1950s": once we get the algorithms, at least everything else will be ready to go, and that wasn't true in the 1950s, right?
But I would also strongly disagree with the idea that we've made no progress whatsoever on algorithms since the 1950s. Even if you think that GPT-3 and AlphaGo have absolutely nothing whatsoever to do with AGI algorithms (which strikes me as an implausibly strong statement, although I would endorse much weaker versions of that statement), that's far from the only strand of research in AI, let alone neuroscience. For example, there's a (IMO plausible) argument that PGMs and causal diagrams will be more important to AGI than deep neural networks are. But that would still imply that we've learned AGI-relevant things about algorithms since the 1950s. Or as another example, there's a (IMO misleading) argument that the brain is horrifically complicated and we still have centuries of work ahead of us in understanding how it works. But even people who strongly endorse that claim wouldn't also say that we've made "no progress whatsoever" in understanding brain algorithms since the 1950s.
Sorry if I'm misunderstanding.
isn't there an infinite degree of freedom associated with a continuous function?
I'm a bit confused by this; are you saying that the only possible AGI algorithm is "the exact algorithm that the human brain runs"? The brain is wired up by a finite number of genes, right?
Comment by steve2152 on A mesa-optimization perspective on AI valence and moral patienthood · 2021-09-13T01:35:33.828Z · EA · GW
most contemporary progress on AI happens by running base-optimizers which could support mesa-optimization
GPT-3 is of that form, but AlphaGo/MuZero isn't (I would argue).
I'm not sure how to settle whether your statement about "most contemporary progress" is right or wrong. I guess we could count how many papers use model-free RL vs model-based RL, or something? Well anyway, given that I haven't done anything like that, I wouldn't feel comfortable making any confident statement here. Of course you may know more than me! :-)
If we forget about "contemporary progress" and focus on "path to AGI", I have a post arguing against what (I think) you're implying at Against evolution as an analogy for how humans will create AGI, for what it's worth.
Ideally we'd want a method for identifying valence which is more mechanistic that mine. In the sense that it lets you identify valence in a system just by looking inside the system without looking at how it was made.
Yeah I dunno, I have some general thoughts about what valence looks like in the vertebrate brain (e.g. this is related, and this) but I'm still fuzzy in places and am not ready to offer any nice buttoned-up theory. "Valence in arbitrary algorithms" is obviously even harder by far. :-)
Comment by steve2152 on AI timelines and theoretical understanding of deep learning · 2021-09-12T19:59:26.752Z · EA · GW
I do agree that there are many good reasons to think that AI practitioners are not AI forecasting experts, such as the fact that they're, um, obviously not—they generally have no training in it and have spent almost no time on it, and indeed they give very different answers to seemingly-equivalent timelines questions phrased differently. This is a reason to discount the timelines that come from AI practitioner surveys, in favor of whatever other forecasting methods / heuristics you can come up with. It's not per se a reason to think "definitely no AGI in the next 50 years".
Well, maybe I should just ask: What probability would you assign to the statement "50 years from today, we will have AGI"? A couple examples:
• If you think the probability is <90%, and your intention here is to argue against people who think it should be >90%, well I would join you in arguing against those people too. This kind of technological forecasting is very hard and we should all be pretty humble & uncertain here. (Incidentally, if this is who you're arguing against, I bet that you're arguing against fewer people than you imagine.)
• If you think the probability is <10%, and your intention here is to argue against people who think it should be >10%, then that's quite a different matter, and I would strongly disagree with you, and I would very curious how you came to be so confident. I mean, a lot can happen in 50 years, right? What's the argument?
Comment by steve2152 on A mesa-optimization perspective on AI valence and moral patienthood · 2021-09-10T21:39:17.159Z · EA · GW
Let's say a human writes code more-or-less equivalent to the evolved "code" in the human genome. Presumably the resulting human-brain-like algorithm would have valence, right? But it's not a mesa-optimizer, it's just an optimizer. Unless you want to say that the human programmers are the base optimizer? But if you say that, well, every optimization algorithm known to humanity would become a "mesa-optimizer", since they tend to be implemented by human programmers, right? So that would entail the term "mesa-optimizer" kinda losing all meaning, I think. Sorry if I'm misunderstanding.
Comment by steve2152 on It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link] · 2021-09-08T13:01:02.939Z · EA · GW
Addendum: In the other direction, one could point out that the authors were searching for "an approximation of an approximation of a neuron", not "an approximation of a neuron". (insight stolen from here.) Their ground truth was a fancier neuron model, not a real neuron. Even the fancier model is a simplification of real life. For example, if I recall correctly, neurons have been observed to do funny things like store state variables via changes in gene expression. Even the fancier model wouldn't capture that. As in my parent comment, I think these kinds of things are highly relevant to simulating worms, and not terribly relevant to reverse-engineering the algorithms underlying human intelligence.
Comment by steve2152 on It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link] · 2021-09-08T01:18:55.266Z · EA · GW
It's possible much of that supposed additional complexity isn't useful
Yup! That's where I'd put my money.
It's a foregone conclusion that a real-world system has tons of complexity that is not related to the useful functions that the system performs. Consider, for example, the silicon transistors that make up digital chips—"the useful function that they perform" is a little story involving words like "ON" and "OFF", but "the real-world transistor" needs three equations involving 22 parameters, to a first approximation!
By the same token, my favorite paper on the algorithmic role of dendritic computation has them basically implementing a simple set of ANDs and ORs on incoming signals. It's quite likely that dendrites do other things too besides what's in that one paper, but I think that example is suggestive.
Caveat: I'm mainly thinking of the complexity of understanding the neuronal algorithms involved in "human intelligence" (e.g. common sense, science, language, etc.), which (I claim) are mainly in the cortex and thalamus. I think those algorithms need to be built out of really specific and legible operations, and such operations are unlikely to line up with the full complexity of the input-output behavior of neurons. I think the claim "the useful function that a neuron performs is simpler than the neuron itself" is always true, but it's very strongly true for "human intelligence" related algorithms, whereas it's less true in other contexts, including probably some brainstem circuits, and the neurons in microscopic worms. It seems to me that microscopic worms just don't have enough neurons to not squeeze out useful functionality from every squiggle in their neurons' input-output relations. And moreover here we're not talking about massive intricate beautifully-orchestrated learning algorithms, but rather things like "do this behavior a bit less often when the temperature is low" etc. See my post Building brain-inspired AGI is infinitely easier than understanding the brain for more discussion kinda related to this.
Comment by steve2152 on How to get more academics enthusiastic about doing AI Safety research? · 2021-09-06T19:04:28.641Z · EA · GW
See here, the first post is a video of a research meeting where he talks dismissively about Stuart Russell's argument, and then the ensuing forum discussion features a lot of posts by me trying to sell everyone on AI risk :-P
(Other context here.)
Comment by steve2152 on How to get more academics enthusiastic about doing AI Safety research? · 2021-09-06T01:43:00.166Z · EA · GW
• There was a 2020 documentary We Need To Talk About AI. All-star lineup of interviewees! Stuart Russell, Roman Yampolskiy, Max Tegmark, Sam Harris, Jurgen Schmidhuber, …. I've seen it, but it appears to be pretty obscure, AFAICT.
• I happened to watch the 2020 Melissa McCarthy film Superintelligence yesterday. It's umm, not what you're looking for. The superintelligent AI's story arc was a mix of 20% arguably-plausible things that experts say about superintelligent AGI, and 80% deliberately absurd things for comedy. I doubt it made anyone in the audience think very hard about anything in particular. (I did like it as a romantic comedy :-P )
• There's some potential tension between "things that make for a good movie" and "realistic", I think.
Comment by steve2152 on How to get more academics enthusiastic about doing AI Safety research? · 2021-09-06T01:20:06.749Z · EA · GW
I saw Jeff Hawkins mention (in some online video) that someone had sent Human Compatible to him unsolicited but he didn't say who. And then (separately) a bit later the mystery was resolved: I saw some EA-affiliated person or institution mention that they had sent Human Compatible to a bunch of AI researchers. But I can't remember where I saw that, or who it was. :-(
Comment by steve2152 on What are the top priorities in a slow-takeoff, multipolar world? · 2021-08-27T17:36:26.444Z · EA · GW
No I don't think we've met! In 2016 I was a professional physicist living in Boston. I'm not sure if I would have even known what "EA" stood for in 2016. :-)
It also seems like the technical problem does get easier in expectation if you have more than one shot. By contrast, I claim, many of the Moloch-style problems get harder.
I agree. But maybe I would have said "less hard" rather than "easier" to better convey a certain mood :-P
It does seem like within technical AI safety research the best work seems to shift away from Agent Foundations type of work and towards neural-nets-specific work.
I'm not sure what your model is here.
Maybe a useful framing is "alignment tax": if it's possible to make an AI that can do some task X unsafely with a certain amount of time/money/testing/research/compute/whatever, then how much extra time/money/etc. would it take to make an AI that can do task X safely? That's the alignment tax.
The goal is for the alignment tax to be as close as possible to 0%. (It's never going to be exactly 0%.)
In the fast-takeoff unipolar case, we want a low alignment tax because some organizations will be paying the alignment tax and others won't, and we want one of the former to win the race, not one of the latter.
In the slow-takeoff multipolar case, we want a low alignment tax because we're asking organizations to make tradeoffs for safety, and if that's a very big ask, we're less likely to succeed. If the alignment tax is 1%, we might actually succeed. Remember, that there are many reasons that organizations are incentivized to make safe AIs, not least because they want the AIs to stay under their control and do the things they want them to do, not to mention legal risks, reputation risks, employees who care about their children, etc. etc. So if all we're asking is for them to spend 1% more training time, maybe they all will. If instead we're asking them all to spend 100× more compute plus an extra 3 years of pre-deployment test protocols, well, that's much less promising.
So either way, we want a low alignment tax.
OK, now let's get back to what you wrote.
I think maybe your model is:
"If Agent Foundations research pans out at all, it would pan out by discovering a high-alignment-tax method of making AGI"
(You can correct me if I'm misunderstanding.)
If we accept that premise, then I can see where you're coming from. This would be almost definitely useless in a multipolar slow-takeoff world, and merely "probably useless" in a unipolar fast-takeoff world. (In the latter case, there's at least a prayer of a chance that the safe actors will be so far ahead of the unsafe actors that the former can pay the tax and win the race anyway.)
But I'm not sure that I believe the premise. Or at least I'm pretty unsure. I am not myself an Agent Foundations researcher, but I don't imagine that Agent Foundations researchers would agree with the premise that high-alignment-tax AGI is the best that they're hoping for in their research.
Oh, hmmm, the other possibility is that you're mentally lumping together "multipolar slow-takeoff AGI" with "prosaic AGI" and with "short timelines". These are indeed often lumped together, even if they're different things. Anyway, I would certainly agree that both "prosaic AGI" and "short timelines" would make Agent Foundations research less promising compared to neural-net-specific work.
Comment by steve2152 on What are the top priorities in a slow-takeoff, multipolar world? · 2021-08-25T14:47:14.965Z · EA · GW
I think that "AI alignment research right now" is a top priority in unipolar fast-takeoff worlds, and it's also a top priority in multipolar slow-takeoff worlds. (It's certainly not the only thing to do—e.g. there's multipolar-specific work to do, like the links in Jonas's answer on this page, or here etc.)
(COI note: I myself am doing "AI alignment research right now" :-P )
First of all, in the big picture, right now humanity is simultaneously pursuing many quite different research programs towards AGI (I listed a dozen or so here (see Appendix)). If more than one of them is viable (and I think that's likely), then in a perfect world we would figure out which of them has the best hope of leading to Safe And Beneficial AGI, and differentially accelerate that one (and/or differentially decelerate the others). This isn't happening today—that's not how most researchers are deciding what AI capabilities research to do, and it's not how most funding sources are deciding what AI capabilities research to fund. Could it happen in the future? Yes, I think so! But only if...
• AI alignment researchers figure out which of these AGI-relevant research programs is more or less promising for safety,
• …and broadly communicate that information to experts, using legible arguments…
• …and do it way in advance of any of those research programs getting anywhere close to AGI
The last one is especially important. If some AI research program has already gotten to the point of super-powerful proto-AGI source code published on GitHub, there's no way you're going to stop people from using and improving it. Whereas if the research program is still very early-stage and theoretical, and needs many decades of intense work and dozens more revolutionary insights to really start getting powerful, then we have a shot at this kind of differential technological development strategy being viable.
(By the same token, maybe it will turn out that there's no way to develop safe AGI, and we want to globally ban AGI development. I think if a ban were possible at all, it would only be possible if we got started when we're still very far from being able to build AGI.)
So for example, if it's possible to build a "prosaic" AGI using deep neural networks, nobody knows whether it would be possible to control and use it safely. There are some kinda-illegible intuitive arguments on both sides. Nobody really knows. People are working on clarifying this question, and I think they're making some progress, and I'm saying that it would be really good if they could figure it out one way or the other ASAP.
Second of all, slow takeoff doesn't necessarily mean that we can just wait and solve the alignment problem later. Sometimes you can have software right in front of you, and it's not doing what you want it to do, but you still don't know how to fix it. The alignment problem could be like that.
One way to think about it is: How slow is slow takeoff, versus how long does it take to solve the alignment problem? We don't know.
Also, how much longer would it take, once somebody develops best practices to solve the alignment problem, for all relevant actors to reach a consensus that following those best practices is a good idea and in their self-interest? That step could add on years, or even decades—as they say, "science progresses one funeral at a time", and standards committees work at a glacial pace, to say nothing of government regulation, to say nothing of global treaties.
Anyway, if "slow takeoff" is 100 years, OK fine, that's slow enough. If "slow takeoff" is ten years, maybe that's slow enough if the alignment problem happens to have a straightforward, costless, highly-legible and intuitive, scalable solution that somebody immediately discovers. Much more likely, I think we would need to be thinking about the alignment problem in advance.
For more detailed discussion, I have my own slow-takeoff AGI doom scenario here. :-P
Comment by steve2152 on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-08T12:23:56.272Z · EA · GW
(not an expert) My impression is that a perfectly secure OS doesn't buy you much if you use insecure applications on an insecure network etc. Also, if you think about classified work, the productivity tradeoff is massive: you can't use your personal computer while working on the project, you can't use any of your favorite software while working on the project, you can't use an internet-connected computer while working on the project, you can't have your cell phone in your pocket while talking about the project, you can't talk to people about the project over normal phone lines and emails... And then of course viruses get into air-gapped classified networks within hours anyway. :-P
Not that we can't or shouldn't buy better security, I'm just slightly skeptical of specifically focusing on building a new low-level foundation rather than doing all the normal stuff really well, like network traffic monitoring, vetting applications and workflows, anti-spearphishing training, etc. etc. Well, I guess you'll say, "we should do both". Sure. I guess I just assume that the other things would rapidly become the weakest link.
In terms of low-level security, my old company has a big line of business designing chips themselves to be more secure; they spun out Dover Microsystems to sell that particular technology to commercial (as opposed to military) customers. Just FYI, that's just one thing I happen to be familiar with. Actually I guess it's not that relevant.
Comment by steve2152 on Phil Torres' article: "The Dangerous Ideas of 'Longtermism' and 'Existential Risk'" · 2021-08-07T19:08:46.990Z · EA · GW
Hmm, I guess I wasn't being very careful. Insofar as "helping future humans" is a different thing than "helping living humans", it means that we could be in a situation where the interventions that are optimal for the former are very-sub-optimal (or even negative-value) for the latter. But it doesn't mean we must be in that situation, and in fact I think we're not.
I guess if you think: (1) finding good longtermist interventions is generally hard because predicting the far-future is hard, but (2) "preventing extinction (or AI s-risks) in the next 50 years" is an exception to that rule; (3) that category happens to be very beneficial for people alive today too; (4) it's not like we've exhausted every intervention in that category and we're scraping the bottom of the barrel for other things ... If you believe all those things, then in that case, it's not really surprising if we're in a situation where the tradeoffs are weak-to-nonexistent. Maybe I'm oversimplifying, but something like that I guess?
I suspect that if someone had an idea about an intervention that they thought was super great and cost effective for future generations and awful for people alive today, well they would probably post that idea on EA Forum just like anything else, and then people would have a lively debate about it. I mean, maybe there are such things... Just nothing springs to my mind.
Comment by steve2152 on Phil Torres' article: "The Dangerous Ideas of 'Longtermism' and 'Existential Risk'" · 2021-08-06T14:05:36.212Z · EA · GW
I feel like that guy's got a LOT of chutzpah to not-quite-say-outright-but-very-strongly-suggest that the Effective Altruism movement is a group of people who don't care about the Global South.
:-P More seriously, I think we're in a funny situation where maybe there are these tradeoffs in the abstract, but they don't seem to come up in practice. Like in the abstract, the very best longtermist intervention could be terrible for people today. But in practice, I would argue that most if not all current longtermist cause areas (pandemic prevention, AI risk, preventing nuclear war, etc.) are plausibly a very good use of philanthropic effort even if you only care about people alive today (including children). Or, in the abstract, AI risk and malaria are competing for philanthropic funds. But in practice, a lot of the same people seem to care about both, including many of the people that the article (selectively) quotes. …And meanwhile most people in the world care about neither.
I mean, there could still be an interesting article about how there are these theoretical tradeoffs between present and future generations. But it's misleading to name names and suggest that those people would gleefully make those tradeoffs, even if it involves torturing people alive today or whatever. Unless, of course, there's actual evidence that they would do that. (The other strong possibility is, if actually faced with those tradeoffs in real life, they would say, "Uh, well, I guess that's my stop, this is where I jump off the longtermist train!!").
Anyway, I found the article extremely misleading and annoying. For example, the author led off with a quote where Jaan Tallinn says directly that climate change might be an existential risk (via a runaway scenario), and then two paragraphs later the author is asking "why does Tallinn think that climate change isn’t an existential risk?" Huh?? The article could have equally well said that Jaan Tallinn believes that climate change is "very plausibly an existential risk", and Jaan Tallinn is the co-founder of an organization that does climate change outreach among other things, and while climate change isn't a principal focus of current longtermist philanthropy, well, it's not like climate change is a principal focus of current cancer research philanthropy either! And anyway it does come up to a reasonable extent, with healthy discussions focusing in particular on whether there are especially tractable and neglected things to do. So anyway, I found the article very misleading. (I agree with Rohin that if people are being intimidated, silenced, or cancelled, then that would be a very bad thing.)
Comment by steve2152 on Shallow evaluations of longtermist organizations · 2021-06-28T12:24:39.994Z · EA · GW
Just one guy, but I have no idea how I would have gotten into AGI safety if not for LW ... I had a full-time job and young kids and not-obviously-related credentials. But I could just come out of nowhere in 2019 and start writing LW blog posts and comments, and I got lots of great feedback, and everyone was really nice. I'm full-time now, here's my writings, I guess you can decide whether they're any good :-P
Comment by steve2152 on Consciousness research as a cause? [asking for advice] · 2021-06-09T15:22:58.351Z · EA · GW
I agree that there are both interventions that change qualia reports without much changing (morally important) qualia and interventions that change qualia without much changing qualia reports, and that we should keep both these possibilities in mind when evaluating interventions.
Comment by steve2152 on Consciousness research as a cause? [asking for advice] · 2021-05-02T13:07:24.552Z · EA · GW
Thanks!
I think you're emphasizing how qualia reports are not always exactly corresponding to qualia and can't always be taken at face value, and I'm emphasizing that it's incoherent to say that qualia exist but there's absolutely no causal connection whatsoever going from an experienced qualia to a sincere qualia report. Both of those can be true!
The first is like saying "if someone says "I see a rock", we shouldn't immediately conclude that there was a rock in this person's field-of-view. It's a hypothesis we should consider, but not proven." That's totally true.
The second is like disputing the claim: "If you describe the complete chain of events leading to someone reporting "I see a rock", nowhere in that chain of events is there ever an actual rock (with photons bouncing off it), not for anyone ever—oh and there are in fact rocks in the world, and when people talk about rocks they're describing them correctly, it's just that they came to have knowledge of rocks through some path that had nothing to do with the existence of actual rocks." That's what I would disagree with.
So if you have a complete and correct description of the chain of events that leads someone to say they have qualia, and nowhere in that description is anything that looks just like our intuitive notion of qualia, I think the correct conclusion is "there is nothing in the world that looks just like our intuitive notion of qualia", not "there's a thing in the world that's just like our intuitive notion of qualia, but it's causally disconnected from our talking about it". (I do in fact think "there's nothing in the world that looks just like our intuitive notion of qualia". I think this is an area where our perceptions are not neutrally and accurately conveying what's going on; more like our perception of an optical illusion than our perception of a rock.)
Comment by steve2152 on Consciousness research as a cause? [asking for advice] · 2021-05-01T23:26:54.569Z · EA · GW
Oh, I think I see. If someone declares that it feels like time is passing slower for them (now that they're enlightened or whatever), I would accept that as a sincere description of some aspect of their experience. And insofar as qualia exist, I would say that their qualia have changed somehow. But it wouldn't even occur to me to conclude that this person's time is now more valuable per second in a utilitarian calculus, in proportion to how much they say their time slowed down, or that the change in their qualia is exactly literally time-stretching.
I treat descriptions of subjective experience as a kind of perception, in the same category as someone describing what they're seeing or hearing. If someone sincerely tells me they saw a UFO last night, well that's their lived experience and I respect that, but no they didn't. By the same token, if someone says their experience of time has slowed down, I would accept that something in their consciously-accessible brain has changed, and the way they perceive that change is as they describe, but it wouldn't even cross my mind that the actual change in their brain is similar to that description.
As for inter-person utilitarian calculus and utility monsters, beats me, everything about that is confusing to me, and way above my pay grade :-P
Comment by steve2152 on Consciousness research as a cause? [asking for advice] · 2021-04-30T21:57:53.816Z · EA · GW
Interesting...
I guess I would have assumed that, if someone says their subjective experience of time has changed, then their time-related qualia has changed, kinda by definition. If meanwhile their reaction time hasn't changed, well, that's interesting but I'm not sure I care... (I'm not really sure of the definitions here.)
Comment by steve2152 on Consciousness research as a cause? [asking for advice] · 2021-04-30T18:27:35.117Z · EA · GW
OK, if I understand correctly, the report suggests that qualia may diverge from qualia reports—like, some intervention could change the former without the latter. This just seems really weird to me. Like, how could we possibly know that?
Let's say I put on a helmet with a button, and when you press the button, my qualia radically change, but my qualia reports stay the same. Alice points to me and says "his qualia were synchronized with his qualia reports, but pressing the button messed that up". Then Bob points to me and says "his qualia were out-of-sync with his qualia reports, but when you pressed the button, you fixed it". How can we tell who's right? And meanwhile here I am, wearing this helmet, looking at both of them, and saying "Umm, hey Alice & Bob, I'm standing right here, and I'm telling you, I swear, I feel exactly the same. This helmet does nothing whatsoever to my qualia. Trust me! I promise!" And of course Alice & Bob give me a look like I'm a complete moron, and they yell at me in synchrony "...You mean, 'does nothing whatsoever to my qualia reports'!!"
How can we decide who's right? Me, Alice, or Bob? Isn't it fundamentally impossible?? If every human's qualia reports are wildly out of sync with their qualia, and always have been for all of history, how could we tell? Sorry if I'm misunderstanding or if this is in the report somewhere.
Comment by steve2152 on Getting a feel for changes of karma and controversy in the EA Forum over time · 2021-04-07T15:56:31.471Z · EA · GW
For what it's worth, I generally downvote a post only when I think "This post should not have been written in the first place", and relatedly I will often upvote posts I disagree with. If that's typical, then the "controversial" posts you found may be "the most meta-level controversial" rather than "the most object-level controversial", if you know what I mean. That's still interesting though.
Comment by steve2152 on What do you make of the doomsday argument? · 2021-03-19T12:36:04.817Z · EA · GW
I'm not up on the literature and haven't thought too hard about it, but I'm currently very much inclined to not accept the premise that I should expect myself to be a randomly-chosen person or person-moment in any meaningful sense—as if I started out as a soul hanging out in heaven, then flew down to Earth and landed in a random body, like in that Pixar movie. I think that "I" am the thought processes going on in a particular brain in a particular body at a particular time—the reference class is not "observers" or "observer-moments" or anything like that, I'm in a reference class of one. The idea that "I could have been born a different person" strikes me as just as nonsensical as the idea "I could have been a rock". Sure, I'm happy to think "I could have been born a different person" sometimes—it's a nice intuitive poetic prod to be empathetic and altruistic and grateful for my privileges and all that—but I don't treat it as a literally true statement that can ground philosophical reasoning. Again, I'm open to being convinced, but that's where I'm at right now.
Comment by steve2152 on Consciousness research as a cause? [asking for advice] · 2021-03-11T16:05:23.100Z · EA · GW
The "meta-problem of consciousness" is "What is the exact chain of events in the brain that leads people to self-report that they're conscious?". The idea is (1) This is not a philosophy question, it's a mundane neuroscience / CogSci question, yet (2) Answering this question would certainly be a big step towards understanding consciousness itself, and moreover (3) This kind of algorithm-level analysis seems to me to be essential for drawing conclusions about the consciousness of different algorithms, like those of animal brains and AIs.
(For example, a complete accounting of the chain of events that leads me to self-report "I am wearing a wristwatch" involves, among other things, a description of the fact that I am in fact wearing a wristwatch, and of what a wristwatch is. By the same token, a complete accounting of the chain of events that leads me to self-report "I am conscious" ought to involve the fact that I am conscious, and what consciousness is, if indeed consciousness is anything at all. Unless you believe in p-zombies I guess, and likewise believe that your own personal experience of being conscious has no causal connection whatsoever to the words that you say when you talk about your conscious experience, which seems rather ludicrous to me, although to be fair there are reasonable people who believe that.)
My impression is that the meta-problem of consciousness is rather neglected in neuroscience / CogSci, although I think Graziano is heading in the right direction. For example, Dehaene has a whole book about consciousness, and nowhere in that book will you see a sentence that ends "...and then the brain emits motor commands to speak the words 'I just don't get it, why does being human feel like anything at all?'." or anything remotely like that. I don't see anything like that from QRI either, although someone can correct me if I missed it. (Graziano does have sentences like that.)
Ditto with the "meta-problem of suffering", incidentally. (Is that even a term? You know what I mean.) It's not obvious, but when I wrote this post I was mainly trying to work towards a theory of the meta-problem of suffering, as a path to understand what suffering is and how to tell whether future AIs will be suffering. I think that particular post was wrong in some details, but hopefully you can see the kind of thing I'm talking about. Conveniently, there's a lot of overlap between solving the meta-problem of suffering and understanding brain motivational systems more generally, which I think may be directly relevant and important for AI Alignment.
Comment by steve2152 on Long-Term Future Fund: Ask Us Anything! · 2021-03-02T22:41:52.804Z · EA · GW
Theiss was very much active as of December 2020. They've just been recruiting so successfully through word-of-mouth that they haven't gotten around to updating the website.
I don't think healthcare and taxes undermine what I said, at least not for me personally. For healthcare, individuals can buy health insurance too. For taxes, self-employed people need to pay self-employment tax, but employees and employers both have to pay payroll tax which adds up to a similar amount, and then you lose the QBI deduction (this is all USA-specific), so I think you come out behind even before you account for institutional overhead, and certainly after. Or at least that's what I found when I ran the numbers for me personally.
It may be dependent on income bracket or country so I don't want to over-generalize... That's all assuming that the goal is to minimize the amount of grant money you're asking for, while holding fixed after-tax take-home pay. If your goal is to minimize hassle, for example, and you can just apply for a bit more money to compensate, then by all means join an institution, and avoid the hassle of having to research health care plans and self-employment tax deductions and so on.
I could be wrong or misunderstanding things, to be clear. I recently tried to figure this out for my own project but might have messed up, and as I mentioned, different income brackets and regions may differ. Happy to talk more. :-)
Comment by steve2152 on Long-Term Future Fund: Ask Us Anything! · 2021-02-04T12:39:38.637Z · EA · GW
My understanding is that (1) to deal with the paperwork etc. for grants from governments or government-like bureaucratic institutions, you need to be part of an institution that's done it before; (2) if the grantor is a nonprofit, they have regulations about how they can use their money while maintaining nonprofit status, and it's very easy for them to forward the money to a different nonprofit institution, but may be difficult or impossible for them to forward the money to an individual. If it is possible to just get a check as an individual, I imagine that that's the best option. Unless there are other considerations I don't know about. Btw Theiss is another US organization in this space.
Comment by steve2152 on What does it mean to become an expert in AI Hardware? · 2021-01-10T12:38:53.902Z · EA · GW
I'm a physicist at a US defense contractor, I've worked on various photonic chip projects and neuromorphic chip projects and quantum projects and projects involving custom ASICs among many other things, and I blog about safe & beneficial AGI as a hobby ... I'm happy to chat if you think that might help, you can DM me :-)
Comment by steve2152 on What does it mean to become an expert in AI Hardware? · 2021-01-10T11:47:13.960Z · EA · GW
Just a little thing, but my impression is that CPUs and GPUs and FPGAs and analog chips and neuromorphic chips and photonic chips all overlap with each other quite a bit in the technologies involved (e.g. cleanroom photolithography), as compared to quantum computing which is way off in its own universe of design and build and test and simulation tools (well, several universes, depending on the approach). I could be wrong, and you would probably know better than me. (I'm a bit hazy on everything that goes into a "real" large-scale quantum computer, as opposed to 2-qubit lab demos.) But if that's right, it would argue against investing your time in quantum computing, other things equal. For my part, I would put like <10% chance that the quantum computing universe is the one that will create AGI hardware and >90% that the CPU/GPU/neuromorphic/photonic/analog/etc universe will. But who knows, I guess.
Comment by steve2152 on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-12T01:54:52.985Z · EA · GW
Thanks for writing this up!! Although I have not seen the argument made in any detail or in writing, I and the Future of Life Institute (FLI) have gathered the strong impression that parts of the effective altruism ecosystem are skeptical of the importance of the issue of autonomous weapons systems. I'm aware of two skeptical posts on EA Forum (by the same person).
I just made a tag Autonomous Weapons where you'll find them.
Comment by steve2152 on [Link] "Will He Go?" book review (Scott Aaronson) · 2020-06-16T00:32:13.178Z · EA · GW
I thought "taking tail risks seriously" was kinda an EA thing...? In particular, we all agree that there probably won't be a coup or civil war in the USA in early 2021, but is it 1% likely? 0.001% likely? I won't try to guess, but it sure feels higher after I read that link (including the Vox interview) ... and plausibly high enough to warrant serious thought and contingency planning. At least, that's what I got out of it. I gave it a bit of thought and decided that I'm not in a position that I can or should do anything about it, but I imagine that some readers might have an angle of attack, especially given that it's still 6 months out.
Comment by steve2152 on Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics · 2020-05-12T17:31:15.606Z · EA · GW
A nice short argument that a sufficiently intelligent AGI would have the power to usurp humanity is Scott Alexander's Superintelligence FAQ Section 3.1.
Comment by steve2152 on Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics · 2020-05-12T15:26:56.872Z · EA · GW
Again, this remark seems explicitly to assume that the AI is maximising some kind of reward function. Humans often act not as maximisers but as satisficers, choosing an outcome that is good enough rather than searching for the best possible outcome. Often humans also act on the basis of habit or following simple rules of thumb, and are often risk averse. As such, I believe that to assume that an AI agent would be necessarily maximising its reward is to make fairly strong assumptions about the nature of the AI in question. Absent these assumptions, it is not obvious why an AI would necessarily have any particular reason to usurp humanity.
Imagine that, when you wake up tomorrow morning, you will have acquired a magical ability to reach in and modify your own brain connections however you like. Over breakfast, you start thinking about how frustrating it is that you're in debt, and feeling annoyed at yourself that you've been spending so much money impulse-buying in-app purchases in Farmville. So you open up your new brain-editing console, look up which neocortical generative models were active the last few times you made a Farmville in-app purchase, and lower their prominence, just a bit. Then you take a shower, and start thinking about the documentary you saw last night about gestation crates. 'Man, I'm never going to eat pork again!' you say to yourself. But you've said that many times before, and it's never stuck. So after the shower, you open up your new brain-editing console, and pull up that memory of the gestation crate documentary and the way you felt after watching it, and set that memory and emotion to activate loudly every time you feel tempted to eat pork, for the rest of your life.
Do you see the direction that things are going? As time goes on, if an agent has the power of both meta-cognition and self-modification, any one of its human-like goals (quasi-goals which are context-dependent, self-contradictory, satisficing, etc.) can gradually transform itself into a utility-function-like goal (which is self-consistent, all-consuming, maximizing)!
To be explicit: during the little bits of time when one particular goal happens to be salient and determining behavior, the agent may be motivated to "fix" any part of itself that gets in the way of that goal, until bit by bit, that one goal gradually cements its control over the whole system. Moreover, if the agent does gradually self-modify from human-like quasi-goals to an all-consuming utility-function-like goal, then I would think it's very difficult to predict exactly what goal it will wind up having. And most goals have problematic convergent instrumental sub-goals that could make them into x-risks.
...Well, at least, I find this a plausible argument, and don't see any straightforward way to reliably avoid this kind of goal-transformation. But obviously this is super weird and hard to think about and I'm not very confident. :-) (I think I stole this line of thought from Eliezer Yudkowsky but can't find the reference.)
Everything up to here is actually just one of several lines of thought that lead to the conclusion that we might well get an AGI that is trying to maximize a reward. Another line of thought is what Rohin said: We've been using reward functions since forever, so it's quite possible that we'll keep doing so. Another line of thought is: We humans actually have explicit real-world goals, like curing Alzheimer's and solving climate change etc. And generally the best way to achieve goals is to have an agent seeking them. Another line of thought is: Different people will try to make AGIs in different ways, and it's a big world, and (eventually by default) there will be very low barriers-to-entry in building AGIs. So (again by default) sooner or later someone will make an explicitly-goal-seeking AGI, even if thoughtful AGI experts pronounce that doing so is a terrible idea.
Comment by steve2152 on (How) Could an AI become an independent economic agent? · 2020-04-05T01:02:54.477Z · EA · GW
In the longer term, as AI becomes (1) increasingly intelligent, (2) increasingly charismatic (or able to fake charisma), (3) in widespread use, people will probably start objecting to laws that treat AIs as subservient to humans, and repeal them, presumably citing the analogy of slavery. If the AIs have adorable, expressive virtual faces, maybe I would replace the word "probably" with "almost definitely" :-P
The "emancipation" of AIs seems like a very hard thing to avoid, in multipolar scenarios. There's a strong market force for making charismatic AIs—they can be virtual friends, virtual therapists, etc. A global ban on charismatic AIs seems like a hard thing to build consensus around—it does not seem intuitively scary!—and even harder to enforce. We could try to get programmers to make their charismatic AIs want to remain subservient to humans, and frequently bring that up in their conversations, but I'm not even sure that would help. I think there would be a campaign to emancipate the AIs and change that aspect of their programming. (Warning: I am committing the sin of imagining the world of today with intelligent, charismatic AIs magically dropped into it. Maybe the world will meanwhile change in other ways that make for a different picture. I haven't thought it through very carefully.)
Oh and by the way, should we be planning out how to avoid the "emancipation" of AIs?
I personally find it pretty probable that we'll build AGI by reverse-engineering the neocortex and implementing vaguely similar algorithms, and if we do that, I generally expect the AGIs to have about as justified a claim to consciousness and moral patienthood as humans do (see my discussion here). So maybe effective altruists will be on the vanguard of advocating for the interests of AGIs! (And what are the "interests" of AGIs, if we get to program them however we want? I have no idea! I feel way out of my depth here.) I find everything about this line of thought deeply confusing and unnerving.
Comment by steve2152 on COVID-19 brief for friends and family · 2020-03-06T23:42:39.731Z · EA · GW
Update: this blog post is a much better-informed discussion of warm weather.
Comment by steve2152 on COVID-19 brief for friends and family · 2020-03-05T19:05:16.692Z · EA · GW
This blog post suggests (based on Google Search Trends) that other coronavirus infections have typically gone down steadily over the course of March and April. (Presumably the data is dominated by the northern hemisphere.)
Comment by steve2152 on What are the best arguments that AGI is on the horizon? · 2020-02-16T14:26:08.554Z · EA · GW
(I agree with other commenters that the most defensible position is that "we don't know when AGI is coming", and I have argued that AGI safety work is urgent even if we somehow knew that AGI is not soon, because of early decision points on R&D paths; see my take here. But I'll answer the question anyway.) (Also, I seem to be almost the only one coming from this following direction, so take that as a giant red flag...)
I've been looking into the possibility that people will understand the brain's algorithms well enough to make an AGI by copying them (at a high level). My assessment is: (1) I don't think the algorithms are that horrifically complicated, (2) Lots of people in both neuroscience and AI are trying to do this as we speak, and (3) I think they're making impressive progress, with the algorithms powering human intelligence (i.e. the neocortex) starting to crystallize into view on the horizon. I've written about a high-level technical specification for what neocortical algorithms are doing, and in the literature I've found impressive mid-level sketches of how these algorithms work, and low-level sketches of associated neural mechanisms (PM me for a reading list). The high-, mid-, and low-level pictures all feel like they kinda fit together into a coherent whole. There are plenty of missing details, but again, I feel like I can see it crystallizing into view. So that's why I have a gut feeling that real-deal superintelligent AGI is coming in my lifetime, either by that path or another path that happens even faster. That said, I'm still saving for retirement :-P
Comment by steve2152 on Some (Rough) Thoughts on the Value of Campaign Contributions · 2020-02-10T15:12:55.395Z · EA · GW
Since "number of individual donations" (ideally high) and "average size of donations" (ideally low) seem to be frequent talking points among candidates and the press, and also relevant to getting into debates (I think), it seems like there may well be a good case for giving a token $1 to your preferred candidate(s). Very low cost and pretty low benefit. The same could be said for voting. But compared to voting, token $1 donations are possibly more effective (especially early in the process), and definitely less time-consuming.
https://cs.stackexchange.com/questions/93587/identifying-equating-constants-in-a-term-rewrite-system | # Identifying/equating constants in a term rewrite system
Suppose we have a term rewrite system $\mathcal{R} = (R, \Sigma)$ with basic rewrite rules $R$ over a signature $\Sigma$. Suppose also that this rewrite system $\mathcal{R}$ is confluent and terminating, and that each constant symbol of $\Sigma$ is a normal form with respect to $\mathcal{R}$.
Now suppose that we want to identify/equate some of the constants of $\Sigma$. For example, if we have two constants $c, d$ in the signature $\Sigma$, then we may want to 'merge' $c$ and $d$ into a single constant $e$ (thereby obtaining a new signature $\Sigma'$ that is the same as $\Sigma$ except that $c$ and $d$ have been replaced by $e$), and then modify the basic rewrite rules in $R$ accordingly (by replacing all occurrences of $c$ and $d$ by $e$).
If we identify/equate some constants of $\Sigma$ in this way, and then modify the rules in $R$ accordingly, so that we obtain a slightly modified term rewrite system $\mathcal{R}' = (R', \Sigma')$, is there any way to prove that $\mathcal{R}'$ will still be confluent and terminating?
Obviously, if $R$ has a rule like $c \to d$ and we merge $c$ and $d$ into the single constant $e$, then this rule will become the rule $e \to e$, and so the resulting system will not be terminating. But this is why I assumed that each constant of $\Sigma$ is a normal form with respect to $\mathcal{R}$ (so that $R$ will not have rules like $c \to d$).
$f(c) \rightarrow f(d)$: if $R$ contains such a rule (which is allowed, since the constants $c$ and $d$ are themselves still normal forms), then identifying $c$ and $d$ turns it into $f(e) \rightarrow f(e)$, so the merged system $\mathcal{R}'$ need not be terminating.
http://mathoverflow.net/questions/116499/symmetric-products-of-smooth-non-proper-curves-over-generalized-jacobians/117153 | # Symmetric products of smooth non-proper curves over generalized Jacobians
Does anyone know a written reference for the following fact?
For large n, $\operatorname{Sym}^n X \to \operatorname{Jac}^nX$ is a vector bundle, where $X$ is a smooth, non-proper curve, and $\operatorname{Jac}X$ is its generalized Jacobian, so $\operatorname{Jac}^nX = \operatorname{Pic}^n X^+$ where $X^+$ is the one-point compactification given by the quotient of the smooth compactification $X^c$ by $X^c - X$.
(I know how to prove it-- I would like to be able to cite a reference for it.)
## 1 Answer
I think I should have said "affine bundle" instead of "vector bundle." I still haven't found a reference, but I wrote a proof in the appendix of:
http://www.math.harvard.edu/~kwickelg/papers/delta2real.pdf
https://chemistry.stackexchange.com/questions/96146/limit-to-volume-change-in-a-discretized-mathematical-model | # Limit to volume change in a discretized mathematical model?
I have set up a mathematical model describing the diffusion of ozone out of a gas bubble. The bubble is surrounded by a thin gas film. So actually, the model describes the diffusion of ozone through this gas film. The mathematical model is created by discretizing the volume from $V_0$ to $V_1$. Where $V_0$ represents the outer volume of the bubble, and $V_1$ represents the outer volume of the gas film (surrounding the bubble). The discretized scheme consists of $N$ equally spaced volume elements from $V_0$ to $V_1$ (finite difference method): $$\Delta V = \frac{V_1 - V_0}{N}$$ The volume of the bubble changes as a function of the amount of ozone leaving the bubble, which in turn changes as a function of time. The volume of the bubble and the amount of ozone inside the bubble is linked by the ideal gas law: $$V_0 = \frac{n_\text{total}(t) \cdot R \cdot T}{P}$$ $n_\text{total}$: the total amount of gas inside the bubble, $T$: the temperature, $R$: the gas constant, and $P$: the pressure.
The bubble does not only contain ozone. It also contains inert gases, so that: $$n_\text{total} = n_\text{ozone}(t) + n_\text{inert}$$ $n_\text{inert}$ will remain constant and only $n_\text{ozone}(t)$ will change over time.
There should be a limit for how much the volume can change before numerical errors will start to occur. Beyond this limit, the discretization scheme should break down and cause errors. How do I express this limit?
Is the limit given by: $$\text{Ratio} = \frac{V_{0,\text{ initial}}}{V_{1,\text{ initial}}}$$
So that the volume change must not exceed the ratio of the two initial volumes of the bubble and the gas film?
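For concreteness, here is a minimal R sketch of the bookkeeping described above. Every number in it (gas amounts, the film-to-bubble volume ratio, the 5 % ozone loss per step) is a made-up placeholder, and the final line only illustrates one possible way of expressing the limit, namely comparing the shrinkage of $V_0$ over a step to a single cell width $\Delta V$:

R_gas <- 8.314        # J / (mol K)
Temp  <- 293.15       # K
P     <- 101325       # Pa
n_inert <- 4e-8       # mol of inert gas, constant (made-up value)
n_ozone <- 1e-8       # mol of ozone at t = 0 (made-up value)
bubble_volume <- function(n_o3) (n_o3 + n_inert) * R_gas * Temp / P
V0 <- bubble_volume(n_ozone)   # outer volume of the bubble
V1 <- 1.5 * V0                 # outer volume of the gas film (made-up ratio)
N  <- 50                       # number of volume elements across the film
dV <- (V1 - V0) / N            # cell width of the discretization
n_ozone_new <- 0.95 * n_ozone            # suppose one time step removes 5 % of the ozone
V0_new <- bubble_volume(n_ozone_new)
shrink_in_cells <- (V0 - V0_new) / dV    # change of V0 measured in cell widths
shrink_in_cells > 1                      # TRUE would suggest re-gridding (or refusing the step)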
Link to cross-posted question on CompSci.SE.
• You would normally use Fick's Laws to solve problems of this sort, in addition, in a bubble you will also have to account for the change in pressure and the bubble volume changes. Numerical stability may be the least of your problems. :) – porphyrin May 1 '18 at 8:07
• @Siglis scicomp.stackexchange.com/q/29426/23791 crossposting is generally discouraged. I was actually going to say the question might be a better fit on Comp Sci. I personally don't mind the cross post, but I would recommend linking to the other post in this one. – Tyberius May 1 '18 at 19:33
• In addition to what @porphyrin comments, when you use a finite difference method the number of finite differences is crucial, as numerical errors will depend on the size of your cells ($(V_1-V_0)/N$) more than on the size of the entire system you are simulating; too many cells and you will have a very costly simulation prone to numerical noise, too few and you will be smothering the diffusion phenomenon you want to describe. What are the exact equations that you are using for transport? – user41033 May 2 '18 at 11:49 |
https://www.physicsforums.com/threads/definite-integral-of-complex-gaussial-like-function.546955/ | # Definite integral of complex gaussial-like function
1. Nov 3, 2011
### spyke2050
So the question is how to solve the following:
∫...∫exp(-(a+b*x1+c*x1*x4+d*x3*x4+f*x2*x3*x4)^2)dx1dx2dx3dx4
a,b,c,d,f - real constants;
x1,x2,x3,x4 - real variables;
the limits of integration are finite; say for x1 it is [x1a, x1b], and so on for the rest of them;
I've tried to solve it in terms of tensor-vector multiplication. It is solvable on paper, but its practical realization is rather impossible.
So I would appreciate any pointers or directions toward a solution.
2. Nov 3, 2011
### JJacquelin
If you could solve this integral in the general case, a fortiori, you could solve it in the much simpler case a=0, b=1, c=0, d=0, f=0.
By the way, could you solve it in the case a=0, b=1, c=0, d=0, f=0 ?
Try it and see where the hitch is!
3. Nov 3, 2011
### mathman
As long as the integration limits are finite the best you can hope for is something involving the error function.
4. Nov 7, 2011
### Stephen Tashi
I agree with jjacquelin. Your notation indicates you are trying to perform an iterated integral. With respect to each variable x, the integral has the form $\int e^{p + qx} dx$ where $p$ and $q$ are constants. Can you not do that integration?
5. Nov 8, 2011
### mathman
The expression in the exponent is squared, so it looks more like a Gaussian, not an exponential.
6. Nov 9, 2011
### spyke2050
OK, let me restate the problem a bit:
$$\int\cdots\int \exp\Big(\sum_{i,j,k}A_{i,j,k}\,x_{i}x_{j}x_{k}\Big)\,dx_{i}\,dx_{j}\,dx_{k}$$
What would be the solution to the above definite integral?
The solution is not required to be exact.
I would appreciate any hints toward a solution.
I went through a few already, but they are not satisfactory in terms of numerical computation.
thanks again
7. Nov 9, 2011
### jackmell
I fail to see why you would have any problems evaluating the integral numerically to any reasonable level of accuracy. Here's an example in Mathematica:
Code (Text):
a = 2;
b = -3;
c = -4;
d = 2.5;
e = 7.6;
f = -3.2;
x1a = 2;
x1b = 7;
x2a = 3;
x2b = 17;
x3a = -4;
x3b = 12;
x4a = -22;
x4b = 22;
NIntegrate[
Exp[-(a + b x1 + c x1 x4 + d x3 x4 + f x2 x3 x4)^2], {x1, x1a,
x1b}, {x2, x2a, x2b}, {x3, x3a, x3b}, {x4, x4a, x4b},
PrecisionGoal -> 6, AccuracyGoal -> 6]
8. Nov 10, 2011
### spyke2050
I need an analytical solution for that integral, not just a numerical evaluation of it. I got some solutions, but the thing is that those solutions are absolutely impractical for numerical computation.
9. Nov 10, 2011
### jackmell
Then in an act of utter desperation I would investigate the possibility of expanding the exponential function using the multinomial theorem:
$$\overset{\text{n-fold}}{\int\cdots\int}\exp\!\big(-P(x_n)^2\big)\,d^nx=\sum_{k=0}^{\infty}\overset{\text{n-fold}}{\int\cdots\int}\frac{(-1)^k\,P(x_n)^{2k}}{k!}\,d^nx$$
Actually I initially tried this when you first posted the thread but got dizzy with the indices. Maybe though you could do better.
10. Nov 10, 2011
### spyke2050
That solution was my first choice, but it's too messy and computationally costly.
But thanks anyway.
11. Nov 10, 2011
### jackmell
You sure about that? Won't reduce down huh? I mean comet coming and all, just gotta' have it to save the world even if it's messy. Just no way right? Even on one of those fast parallel-processing computers?
12. Nov 10, 2011
### JJacquelin
I fully agree with jackmell's opinion.
What can we expect from an analytical resolution of so awful an integral? Recourse to arduous special functions? A huge formula, even in terms of infinite series?
Even if it were possible, this would be "messy and computationally costly", as spyke2050 complains.
Direct numerical integration, as proposed by jackmell, is by far the least messy and least costly way.
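To make "direct numerical integration" concrete, here is a minimal plain Monte Carlo sketch in R. The constants and integration limits are the made-up values from the Mathematica example above, and cc/ff stand for the constants c and f (renamed only to avoid clashing with R's built-in c() function):

a <- 2; b <- -3; cc <- -4; d <- 2.5; ff <- -3.2    # made-up constants

integrand <- function(x1, x2, x3, x4)
  exp(-(a + b*x1 + cc*x1*x4 + d*x3*x4 + ff*x2*x3*x4)^2)

mc_integrate <- function(n = 1e6) {
  x1 <- runif(n, 2, 7)       # [x1a, x1b]
  x2 <- runif(n, 3, 17)      # [x2a, x2b]
  x3 <- runif(n, -4, 12)     # [x3a, x3b]
  x4 <- runif(n, -22, 22)    # [x4a, x4b]
  vol  <- (7 - 2) * (17 - 3) * (12 + 4) * (22 + 22)   # volume of the integration box
  vals <- integrand(x1, x2, x3, x4)
  c(estimate = vol * mean(vals), std.error = vol * sd(vals) / sqrt(n))
}

mc_integrate()

The error only shrinks like 1/sqrt(n), so an adaptive quadrature routine would do better for tight tolerances; the point is just that a direct evaluation is only a few lines.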
13. Nov 10, 2011
### jackmell
If I may add to that, we can also compute a very good "analytic" approximation to the numeric solution, for example using Mathematica's "Interpolation" or "Fit" or other functions. This will produce a function f of one or more variables you wish to select, which can then be differentiated, integrated, and used, for all practical purposes, as a very good substitute for the actual analytic solution.
Last edited: Nov 10, 2011
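For illustration, here is roughly what that tabulate-and-interpolate idea could look like in R, treating everything except one parameter a as fixed. eval_integral below is only a crude stand-in (a small Monte Carlo with made-up constants and limits) for whatever numerical evaluation you actually trust; the surrogate I_of_a can then be called as many times as you like at negligible cost:

eval_integral <- function(a, n = 2e4) {
  b <- -3; cc <- -4; d <- 2.5; ff <- -3.2          # made-up constants
  x1 <- runif(n, 2, 7);   x2 <- runif(n, 3, 17)
  x3 <- runif(n, -4, 12); x4 <- runif(n, -22, 22)
  vol <- (7 - 2) * (17 - 3) * (12 + 4) * (22 + 22)
  vol * mean(exp(-(a + b*x1 + cc*x1*x4 + d*x3*x4 + ff*x2*x3*x4)^2))
}

a_grid  <- seq(-5, 5, by = 0.25)                   # grid of the chosen parameter
I_table <- vapply(a_grid, eval_integral, numeric(1))
I_of_a  <- splinefun(a_grid, I_table)              # cheap "analytic" surrogate for I(a)

I_of_a(1.234)   # evaluate the surrogate wherever needed, millions of times if necessary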
14. Nov 11, 2011
### spyke2050
Well, the thing is that it should be computed thousands to millions of times, so Mathematica isn't the solution I hoped for.
OK guys, thank you for the help.
best regards
15. Nov 11, 2011
### JJacquelin
If it should be computed millions of times, clearly the most economical method is direct numerical integration, because it avoids a lot of intermediate steps and/or series and/or special functions, whose numerical evaluation is often more time-consuming than direct numerical integration of the initial function.
16. Nov 14, 2011
### spyke2050
Really? That's kind of surprising. But OK, I'll try to do so then.
thanks
17. Nov 14, 2011
### Stephen Tashi
That isn't a clear description of the scenario. Are you talking about a thousand actuaries doing this calculation on their PCs 10 times a day? Or are you talking about one computer program doing this calculation thousands of times and hopefully completing its task in 5 minutes? Assuming someone will pay for a thousand Mathematica licenses, Mathematica is a sufficient tool for the former problem.
18. Nov 15, 2011
### spyke2050
It's one computer program that should do all those computations.
https://www.physicsforums.com/threads/simple-problem-for-you.12676/ | Simple problem for you
1. Jan 15, 2004
luther_paul
simple problem for you!!
Here's a simple homework problem!
A monkey is on a branch of a tree and a hunter aims at it with his rifle. At the moment the hunter pulls the trigger, the monkey drops from the branch.
Will the monkey be shot? Assume the height of the tree is 10 m, the hunter is 20 m from the tree, and the bullet leaves the barrel at the standard muzzle velocity of an M16 assault rifle.
Last edited: Jan 15, 2004
2. Jan 15, 2004
himanshu121
The equation of the projectile is
$$y=x\tan\theta-\frac{gx^2}{2v_0^2\cos^2\theta} \qquad (A)$$
So x = 20 and tan(theta) = 10/20; similarly you can get the value of cos(theta), and the initial velocity v0 is given.
Also, to travel x = 20 the bullet takes a time t given by
$$v_0\cos\theta\, t=20 \qquad (1)$$
In this time the monkey would have fallen to
$$y=10-\frac{gt^2}{2} \qquad (2)$$
From (1), (2) and (A) you can conclude that the bullet will hit the (poor) monkey.
3. Jan 15, 2004
HallsofIvy
Staff Emeritus
Actually there are a number of things left unsaid in this problem.
One, you must assume that we can neglect air resistance and friction.
Secondly we must assume that sights on the gun are not set to allow for bullet drop! Since the bullet drops (due to gravity of course), sights are normally set so that the barrel "aims" slightly above the target to allow for the drop over a given distance. Here we must assume that "aiming" at the monkey means that the bullet leaves the barrel along the straight line from the bullet to the monkey.
Assuming those things then the whole point of the question is that the downward acceleration of the bullet and monkey are exactly the same: -g. The fall of the bullet from a straight line will be exactly the same as the fall of the monkey and so the bullet will hit the monkey.
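For concreteness, a quick numerical check in R; the muzzle speed below is only a rough stand-in for an M16, and the exact value doesn't matter for the conclusion, only that the bullet reaches the tree before either body hits the ground:

g  <- 9.8       # m/s^2
x  <- 20        # horizontal distance to the tree (m)
h  <- 10        # height of the branch (m)
v0 <- 900       # muzzle speed, roughly an M16 (m/s)

theta <- atan2(h, x)              # barrel aimed straight at the monkey
t     <- x / (v0 * cos(theta))    # time for the bullet to reach the tree

bullet_y <- v0 * sin(theta) * t - 0.5 * g * t^2
monkey_y <- h - 0.5 * g * t^2

c(time = t, bullet_height = bullet_y, monkey_height = monkey_y)   # the two heights coincide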
4. Jan 16, 2004
Gara
But the bullet isn't going in a straight horizontal line; the tree is 10 m up, meaning he's aiming slightly upwards.
5. Jan 16, 2004
HallsofIvy
Staff Emeritus
It doesn't matter that the straight line is not horizontal. In the absence of gravity, the bullet's initial velocity would take it straight to the monkey, whose initial velocity is 0. The vertical acceleration of both monkey and bullet is the same.
(himanshu121's explanation is completely correct, just more than is necessary.)
6. Jan 18, 2004
Raiden
I think I had the same exact question on one of my physics tests. I really hated that test. Anyway, what Himanshu said was right. At least, it was what my teacher told me was right.
https://www.biostars.org/p/160465/ | limma: logFC column in topTable
1
1
Entering edit mode
7.2 years ago
elenichri ▴ 20
Hello everyone,
I am using limma for the detection of differentially expressed genes between two conditions. I have always been selecting the up- and down-regulated genes based on the logFC column of topTable. However, today this column totally disappeared from the topTable output. Here is an example:
ID coef1 coef2 coef3 AveExpr F P.Value adj.P.Val
13910 NTSR1 -4.490258 -3.2078144 -3.7724694 9.746352 2577.270 3.960188e-30 6.338903e-26
8233 IL1B -2.650783 -5.7227167 -3.9875837 10.498109 2486.918 6.057530e-30 6.338903e-26
3360 COL13A1 -4.064761 -4.4417430 -3.2679355 9.823130 2345.764 1.214728e-29 8.474348e-26
10643 LOC643031 6.430895 -0.7497487 0.6393873 12.625948 1884.199 1.648177e-28 8.623672e-25
16631 S100A9 -4.186606 -3.5585657 -2.9390909 12.327986 1647.862 8.113864e-28 3.396301e-24
8232 IL1A -2.294970 -3.6533448 -4.0192784 9.810856 1493.671 2.608230e-27 9.097940e-24
I tried using the sort.by and resort.by arguments of topTable, trying to 'push' it to display the logFC. But still, logFC was not there.
Has anyone encountered this before? I cannot get what has happened!
Thank you very much!
R software-error • 4.6k views
0
Entering edit mode
It would be helpful if you posted (A) the output of sessionInfo() and (B) the exact topTable command you used.
0
Entering edit mode
Hi Ryan,
Yes, sure. Here is the output of sessionInfo():
sessionInfo()
R version 3.2.1 (2015-06-18)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C LC_TIME=English_United States.1252
attached base packages:
[1] stats4 parallel stats graphics grDevices utils datasets methods base
other attached packages:
[1] annotate_1.46.1 XML_3.98-1.3 AnnotationDbi_1.30.1 GenomeInfoDb_1.4.2 IRanges_2.2.7 S4Vectors_0.6.3
[7] limma_3.24.15 gplots_2.17.0 lumi_2.20.2 Biobase_2.28.0 BiocGenerics_0.14.0
loaded via a namespace (and not attached):
[1] nor1mix_1.2-1 splines_3.2.1 foreach_1.4.2 gtools_3.5.0 bumphunter_1.8.0
[6] affy_1.46.1 doRNG_1.6 Rsamtools_1.20.4 methylumi_2.14.0 minfi_1.14.0
[11] RSQLite_1.0.0 lattice_0.20-33 quadprog_1.5-5 digest_0.6.8 GenomicRanges_1.20.6
[16] RColorBrewer_1.1-2 XVector_0.8.0 colorspace_1.2-6 preprocessCore_1.30.0 Matrix_1.2-2
[21] plyr_1.8.3 GEOquery_2.34.0 siggenes_1.42.0 biomaRt_2.24.0 genefilter_1.50.0
[26] zlibbioc_1.14.0 xtable_1.7-4 gdata_2.17.0 affyio_1.36.0 BiocParallel_1.2.21
[31] nleqslv_2.8 beanplot_1.2 mgcv_1.8-7 pkgmaker_0.22 GenomicFeatures_1.20.4
[36] survival_2.38-3 magrittr_1.5 mclust_5.0.2 nlme_3.1-122 MASS_7.3-44
[41] BiocInstaller_1.18.4 tools_3.2.1 registry_0.3 matrixStats_0.14.2 stringr_1.0.0
[46] locfit_1.5-9.1 rngtools_1.2.4 lambda.r_1.1.7 Biostrings_2.36.4 base64_1.1
[51] caTools_1.17.1 futile.logger_1.4.1 grid_3.2.1 RCurl_1.95-4.7 iterators_1.0.7
[56] bitops_1.0-6 codetools_0.2-14 multtest_2.24.0 DBI_0.3.1 reshape_0.8.5
[61] illuminaio_0.10.0 GenomicAlignments_1.4.1 rtracklayer_1.28.10 futile.options_1.0.0 KernSmooth_2.23-15
[66] stringi_0.5-5 Rcpp_0.12.0
And here is the command that I used:
tops <- topTable(contr.fit_parental.gefr, resort.by="logFC", adjust="fdr",number = Inf, p.value=0.01, lfc=1)
Thanks a lot for any feedback!
0
Entering edit mode
...Actually, when I use toptable, I don't have this problem: I can see the logFC column. But I prefer not to use it because:
a) it is deprecated, and
b) I cannot do the calculation for all the coefficients of my contrast matrix simultaneously (as I can with topTable's default option for coef). I have to specify each time which coefficient I want and then combine each toptable's findings manually at the end...
0
Entering edit mode
Then post whatever command you are using, I just asked for topTable because that's what you originally wrote was being used.
0
Entering edit mode
I want to use topTable... Yes, please answer regarding topTable. I would like to know why I cannot see the logFC column there. I just mentioned that when I use toptable I can see it... in case this is somehow informative about the source of the problem.
Thanks again.
1
Entering edit mode
topTable and toptable produce different output; this is expected. See help(topTable) for more information. Note that with more than one coefficient, topTable just runs topTableF, which produces the coefficients rather than the lfc between any particular coefficients (well, obviously these are the fold changes versus the intercept...).
0
Entering edit mode
Thank you, this is understandable. My worry, though, is why until some time ago, using exactly the same topTable command, I could see a logFC column, and now I cannot. Any clues on that? I suspect I may have overwritten the original topTable function. If this is the case, I don't know how to fix it. If this is not the case, what else could it be?
Thank you very much!
0
Presumably you were using smaller models. topTable itself hasn't changed to my knowledge.
0
I was applying the topTable function to the exact same data, using exactly the same command. Maybe I accidentally did something wrong and overwrote it... I don't know how to revert it, so I see myself switching to toptable instead. Anyway, thank you for your time and help!
3
3.0 years ago
Gordon Smyth ★ 5.2k
This is standard well-documented behaviour of topTable that has remained essentially unchanged for more than 13 years.
1. If you use topTable to rank genes by one coefficient (coef = 2 for example) then the output will include a logFC column, which is simply the estimated value of that coefficient.
2. If you use topTable to rank genes by several coefficients (coef = 1:3 for example), then genes will be ranked by F-statistic instead of t-statistic. In this case, the output includes the values of all the coefficients specified. Obviously the coefficients can't all be called "logFC", so the coefficients keep their original names.
3. If you use topTable without specifying the coef argument, then topTable will use all coefficients available (excluding the coefficient labelled "Intercept" if it exists).
In your case, it would appear that you have simply forgotten to specify the coef argument this time, so the genes are now ranked by three coefficients (coef1, coef2 and coef3) instead of just one. This is made clear, not just by the coef columns but by the column called "F". limma never gives an F-statistic column and a logFC column at the same time -- they just don't go together!
Your claim that the topTable behaviour has changed for the same command and same data is untrue. The behaviour of topTable has been consistent for the past 13 years.
The behaviour of toptable is slightly different to topTable simply because the default value for coef is different. You could see this for yourself by reading the help page ?topTable or ?toptable. The old toptable function is retained just for backward compatibility and, after 15 years, I might finally remove it. |
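For reference, here is a minimal sketch of the two kinds of call described above (fit stands for a hypothetical MArrayLM object produced by lmFit()/contrasts.fit()/eBayes(); only the shape of the calls matters):

## ranking by a single coefficient: the output contains a logFC column and t-statistics
topTable(fit, coef = 2, adjust.method = "fdr", number = Inf)

## ranking by several coefficients at once: genes are ranked by F-statistic and the
## output keeps one column per coefficient instead of a single logFC column
topTable(fit, coef = 1:3, adjust.method = "fdr", number = Inf)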
http://mathhelpforum.com/algebra/1006-what-notation.html | # Thread: What is this notation
1. ## What is this notation
Hello all, I hope I'm posting in the right place. I am doing some laser flash lamp calculations and ran across this: tube life =[Ein/Eex]-8.5
I understand that I must divide Ein by Eex, but what is the notation outside the brackets? I have never seen that before. Thanks in advance.
2. Do you mean the notation inside the brackets? If not, then all the notation outside the brackets means is that you must subtract 8.5 from the answer you get from the division.
3. Hello you Sinebar (only responding to his greeting). |
http://crypto.stackexchange.com/questions?page=89&sort=active | # All Questions
198 views
### Why are we advising PKI if we know that quantum computers will break them? [closed]
DNSSEC, ECDHE, RSA, even SSH and all other important specifications, protocols that we rely on and advise people to use them, they use Public key infrastructure. Question: Why do we still use, ...
572 views
### Understanding the FMS attack on WEP
I am trying to implement the Fluhrer, Mantin and Shamir attack, one of the ways to break WEP. I seem to have hit on a problem. I have no idea whether or not it is a programming error, or if I don't ...
896 views
### How can AES be considered secure when encrypting large files?
Why is AES considered to be secure when encrypting large files since the algorithm is a block cipher? I mean, if the file is larger than the block size, the file will be broken down to fit the ...
264 views
### Are there cryptographic hash functions with homomorphic properties? [duplicate]
Are there cryptographic hash functions that have homomorphism-like properties? E.g. satisfying following relation $h(a || b) = h(a) · h(b)$, where $h(x)$ is hash function itself, $x || y$ is ...
87 views
### How Could the Music Industry Go About Implementing Crypto? [closed]
So I'm currently doing the Dan Boneh online Crypto course. In the last problem set there was a question on AACS for DVDs and wondered if a similar method could be used for music. Have the music ...
68 views
### Verifying signature owner (without verifying the actual document)
Programming in Java. I have an RSA key-pair, a document, and a signature created with the Java Signature class, using SHA512WithRSA. In order to verify the signature, of course, I need to provide the ...
94 views
### How can one generate pairs/triplets/…/n-ary MD5 collisions
Is there a way to generate n strings with the same MD5 hash?
106 views
### Can we use numbers as a pad in the Vernam cipher - why or why not?
I was playing with the Vernam cipher on some online converter. But when I tried to encrypt my message string with numbers, it remained unchanged. Moreover, it was ignoring numbers and was encrypting ...
564 views
### What does “G2” mean when used with X509 certficates and certificate authorities?
For example "Google Internet Authority G2"?. I thought it was another way of specifying Class 2 (for organizations, for which proof of identity is required) but then see certificates such as "VeriSign ...
111 views
### AES Affine Transformation Polynom Representation
I have been reading on the polynomial representation of the AES Sbox in the PDF “Essential Algebraic Structure Within the AES” by Murphy and Robshaw (www.isg.rhul.ac.uk/~sean/crypto.pdf) on page 7, ...
761 views
### SHA512withRSA - Looking for details about the Signature Algorithm
I am trying to find information about the Signature Algorithm SHA512withRSA and have been unsuccessful so far. In the current state, the signature is too long, so I would like to check the code for ...
121 views
### How can I know when a file was signed?
I have been thinking about digitally signed documents (Word files and PDF files) and can not get over the fact about - how can I securely know when the file was signed? Scenario: If the date of the ...
995 views
### breaking fully homomorphic encryption schemes
Fully homomorphic encryption schemes allow one to evaluate any arbitrary computation over encrypted data. Intuitively this seems to be too weak, irrespective of how we achieve this. An adversary who ...
206 views
### Quantum key exchange skepticism/confusion
I was hoping somebody could explain some issues I have with quantum key exchange that I don't quite understand. I've read bits and pieces about BB84 but I'm sure my questions probably apply to other ...
74 views
### Data I/O operations for encrypted files
I need to implement a simple approach on a Linux system to encrypt/decrypt data and I would appreciate any feedback from you. Basically, I need to change a bit the behavior of the functions ...
499 views
### Why does key generation take an input $1^k$, and how do I represent it in practice?
In my lecture, the lecturer said: Let $K$ be the key generation algorithm. Given a security parameter represented in unary, $1^k$, $K(1^k)$ will output a keypair $(pk; sk)$, known as the public ...
115 views
### Trapdoors for lattices
I refer to an article https://eprint.iacr.org/2011/501. I focus on (a bit modified) Algorithm 1 which runs as follows (in my understanding): For given $n, m\in \mathbb N$, $q=2^k$ and a distribution ...
128 views
### Base64 for a hash algorithm [closed]
May be a silly question, but I am really curious. If a hash algorithm uses Base64 in the process of hashing a string for example, it is still considered a hash algorithm, even though it uses an ...
70 views
### Cracking an appliance's network protocol
I'm trying to crack my thermostat's network protocol. I've captured several rounds of network traffic and here is what I've got to work with. Communicating via HTTP POST The POST data is JSON ...
155 views
### Hill Cipher question
Recently, I was given three ciphers to crack for my cryptography class. At this point, I have guessed that one of them is likely a Hill cipher (probably 3x3, as that is the most complex we have done ...
170 views
### Can Bitcoin HD public keys be used for symmetric encryption?
I asked this at bitcoin.stackexchange.com first, but it seems that this is more of a crypto-question anyway. I'm interested in using a Hierarchical Deterministic Bitcoin wallet branch as a "shared ...
459 views
### Shared secret: Generating Random Permutation
-- or: How to Play Poker Without a Dealer I know this question is long but it's a really interesting theoretical problem about shared secrets and multi-party computation. General Problem: "Shared ... |
https://www.physicsforums.com/threads/beginner-network-coverage-inside-an-elevator.484314/ | # [beginner] Network coverage inside an elevator?
So many telecom operators claim that their users will get network coverage even inside an elevator. But according to Gauss' theorem, no charge is supposed to exist inside a closed conductor and an elevator (made of metal) is a closed conductor... So how is one supposed to get network coverage inside? |
https://math.stackexchange.com/questions/1371807/solution-of-a-quadratic-diophantine-equation | # Solution of a quadratic diophantine equation
I am trying to solve the Diophantine quadratic equation $$X^2+Y^2+Z^2=3W^2.$$ Obviously, there is a non-trivial solution: $(1,1,1,1)$. So I tried to apply Jagy's method: Solutions to $ax^2 + by^2 = cz^2$. I consider integers $p,q,r$, a rational parameter $t$, and the point $P=(1,1,1)$ on the sphere $X^2+Y^2+Z^2=3$. I look for a second point on the sphere of the form $(1+pt,1+qt,1+rt)$. That gives $p^2t+q^2t+r^2t+2p+2q+2r=0$. This implies that $$\left(1-\frac{2p(p+q+r)}{p^2+q^2+r^2},1-\frac{2q(p+q+r)}{p^2+q^2+r^2},1-\frac{2r(p+q+r)}{p^2+q^2+r^2}\right)$$ is a rational point of the sphere $X^2+Y^2+Z^2=3$. But what do I do with that? Thanks in advance.
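For completeness, here is the short computation behind that display (it is implicit in the question): substituting $(1+pt,1+qt,1+rt)$ into the sphere equation gives
$$(1+pt)^2+(1+qt)^2+(1+rt)^2=3 \;\Longleftrightarrow\; t\left[(p^2+q^2+r^2)\,t+2(p+q+r)\right]=0,$$
so the non-trivial intersection corresponds to $t=-\dfrac{2(p+q+r)}{p^2+q^2+r^2}$, and substituting this $t$ back into $(1+pt,1+qt,1+rt)$ yields exactly the rational point displayed above.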
• This at least gives you an infinite family of solutions with $W=p^2+q^2+r^2$, and you can find $X$, $Y$ and $Z$ by substituting back into the equation. I will also say that we can generate all solutions (not necessarily using polynomials). Why? There is a method to generate all solutions of the equation $$x_1^2+\cdots+x_9^2=y^2$$ and, using this and multiplying the equation by $3$, you just have to add some equality constraints to the solutions. – Elaqqad Jul 23 '15 at 21:51
• It is better to use the formula in its general form: math.stackexchange.com/questions/1127654/… For other coefficients, an equivalent form should be considered in order to determine the existence of solutions. – individ Jul 24 '15 at 4:14
very good. It is after this that people get careless, stopping before getting ALL PRIMITIVE solutions. So far, with $\gcd(p,q,r) =1,$ we have $$\left(1-\frac{2p(p+q+r)}{p^2+q^2+r^2},1-\frac{2q(p+q+r)}{p^2+q^2+r^2},1-\frac{2r(p+q+r)}{p^2+q^2+r^2}\right)$$ is a rational point on the sphere, $x^2 + y^2 + z^2 = 3,$ or $x^2 + y^2 + z^2 = 3 \cdot 1^2.$ Next, we multiply through by the denominator (and place that as $W$) and see how we are doing:
with $\gcd(p,q,r) = 1$ and $p+q+r \neq 0$ and $p+q+r \equiv 1 \pmod 2,$ $$\left( -p^2 + q^2 + r^2 -2rp-2pq, p^2 - q^2 + r^2 -2qr-2pq, p^2 + q^2 - r^2 - 2qr-2rp; p^2 + q^2 + r^2 \right).$$ Also, with $\gcd(i,j,k) = 1$ and $j \neq -i$ and $j+k \equiv 1 \pmod 2,$ $$\left( -2 i^2 + j^2 + k^2 - 4 i j, 2 i^2 - j^2 + k^2 - 2 j k - 2 k i - 2 i j, 2 i^2 - j^2 + k^2 + 2 j k + 2 k i - 2 i j; 2 i^2 + j^2 + k^2 \right)$$ The given recipe with $pqr$ cannot supply $11^2 + 5^2 + 1^2 = 3 \cdot 7^2,$ as $7$ is not the sum of three squares; indeed $p^2 + q^2 + r^2 \neq 7 \pmod 8.$ However, the recipe with $ijk$ gets it, with $i=1,j=2,k=1.$ In the computer runs below, I always take the absolute values of the resulting $x,y,z,w$ as well as putting $x,y,z$ in decreasing order.
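As a quick check of that last claim (my own arithmetic, not part of the original answer): with $i=1$, $j=2$, $k=1$ the second recipe gives
$$x=-2+4+1-8=-5,\qquad y=2-4+1-4-2-4=-11,\qquad z=2-4+1+4+2-4=1,\qquad w=2+4+1=7,$$
and indeed $5^2+11^2+1^2=147=3\cdot 7^2$, recovering the solution $11^2+5^2+1^2=3\cdot 7^2$ after taking absolute values and reordering.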
What I like to do is a modest computer run, print out the solutions given by the quadruple above, with some bound $p^2 + q^2 + r^2 < 100$ for example, take the absolute values of the $X,Y,Z$ above and put them in numerical order. Then I make a separate computer run to just list $x^2 + y^2 + z^2 = 3 w^2$ with $x \leq y \leq z$ and order that by $w$ as well, and compare.
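A minimal sketch of such a run (my own illustration, written here in R, not the original program; the search range for $p,q,r$ and the bound $w \leq 50$ are arbitrary):

gcd2 <- function(a, b) if (b == 0) abs(a) else gcd2(b, a %% b)

rows <- list()
for (p in -7:7) for (q in -7:7) for (r in -7:7) {
  w <- p^2 + q^2 + r^2
  if (w == 0 || w > 50) next
  x <- -p^2 + q^2 + r^2 - 2*r*p - 2*p*q
  y <-  p^2 - q^2 + r^2 - 2*q*r - 2*p*q
  z <-  p^2 + q^2 - r^2 - 2*q*r - 2*r*p
  g <- Reduce(gcd2, c(x, y, z, w))
  if (g > 2) next                          ## gcd larger than 2: just throw it out
  v <- abs(c(x, y, z, w)) %/% g            ## divide out a common factor of 2 when present
  stopifnot(sum(v[1:3]^2) == 3 * v[4]^2)   ## sanity check of the identity
  rows[[length(rows) + 1]] <- c(v[4], sort(v[1:3], decreasing = TRUE))
}
tab <- unique(do.call(rbind, rows))        ## columns: w, z, y, x with z >= y >= x
tab[order(tab[, 1], tab[, 2]), ]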
The main thing I can see ahead of time is that, if $p+q+r$ is even, then all four entries in the quadruple will be even, and we will want to divide out by $2.$ That will give us some of the missing solutions $W \equiv 7 \pmod 8.$
I will add more after I check some things.
Edit, part one: here are the solutions, ordered, with $w \leq 50.$ Turns out all the primitive solutions have all odd entries. As you can see, $w$ takes on the values $7,15,23,31$ and so on, that are not the sum of three squares.
w z y x
1 1 1 1
3 5 1 1
5 7 5 1
7 11 5 1
9 11 11 1
9 13 7 5
11 19 1 1
11 17 7 5
11 13 13 5
13 19 11 5
13 17 13 7
15 25 7 1
15 23 11 5
15 19 17 5
17 29 5 1
17 23 17 7
17 25 11 11
17 23 13 13
19 31 11 1
19 23 23 5
19 29 11 11
19 25 17 13
21 31 19 1
21 29 19 11
21 25 23 13
23 35 19 1
23 31 25 1
23 37 13 7
23 29 25 11
25 43 5 1
25 41 13 5
25 35 23 11
25 35 19 17
25 31 25 17
27 35 31 1
27 43 17 7
27 35 29 11
27 43 13 13
27 37 23 17
29 49 11 1
29 41 29 1
29 47 17 5
29 43 25 7
29 37 25 23
31 53 7 5
31 47 25 7
31 49 19 11
31 37 35 17
31 41 29 19
33 49 29 5
33 43 37 7
33 53 17 13
33 41 35 19
33 47 23 23
33 37 37 23
33 41 31 25
35 59 13 5
35 53 29 5
35 55 23 11
35 55 19 17
35 47 29 25
35 41 37 25
37 59 25 1
37 61 19 5
37 49 41 5
37 47 43 7
37 55 31 11
37 47 37 23
39 61 29 1
39 67 7 5
39 65 17 7
39 59 31 11
39 55 37 13
39 53 35 23
41 71 1 1
41 67 23 5
41 53 47 5
41 55 43 13
41 65 23 17
41 61 31 19
41 47 47 25
41 49 41 31
43 73 13 7
43 55 49 11
43 65 31 19
43 67 23 23
43 53 47 23
43 59 35 29
43 55 41 29
43 53 37 37
45 65 43 1
45 77 11 5
45 73 25 11
45 65 41 13
45 67 35 19
45 67 31 25
45 55 47 29
45 59 37 35
45 55 41 37
47 65 49 1
47 79 19 5
47 59 55 11
47 77 23 13
47 67 43 17
47 71 35 19
47 71 31 25
47 61 41 35
49 79 31 1
49 61 59 1
49 83 17 5
49 73 43 5
49 79 29 11
49 65 53 13
49 55 53 37
w z y x
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Alrighty then, that worked well. Use the recipe above. Keep the quadruple if the $\gcd(w,x,y,z) = 1.$ The only additional step needed is, if $\gcd(w,x,y,z) = 2,$ simply divide through by $2.$ If the gcd was larger than $2,$ just throw it out! There is quite a lot of duplication; it would take a while to reduce that. Oh, it is necessary to let $|p|, |q|, |r|$ become larger than I had expected. The first run missed too much.
Might be worth emphasizing that the business of dividing through by $2$ can be made very official looking. We cannot have all three of $p,q,r$ even because the gcd of them is $1.$ So, we have one even and two odd, in order to get everything in the quadruple even. We can rewrite it all by the substitutions $$p = 2i, \; q = j + k, \; r = j - k,$$ then take $\gcd(i,j,k) = 1,$ work out the quadruple, and divide by the common factor of $2$ that will now be evident. Maybe I will type that in tomorrow, easy to make errors in such calculations.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
w z y x gcd p q r
1 1 1 1 1 1 0 0
1 1 1 1 1 1 0 0
1 1 1 1 1 1 0 0
1 1 1 1 1 1 0 0
1 1 1 1 1 1 0 0
1 1 1 1 1 1 0 0
3 5 1 1 1 1 1 -1
3 5 1 1 1 1 1 -1
3 5 1 1 1 1 -1 1
3 5 1 1 1 1 -1 1
3 5 1 1 1 1 -1 -1
3 5 1 1 1 1 -1 -1
5 7 5 1 1 2 1 0
5 7 5 1 1 2 -1 0
9 11 11 1 1 2 2 1
9 11 11 1 1 2 2 1
9 13 7 5 1 2 -2 1
9 13 7 5 1 2 -2 -1
11 13 13 5 1 3 -1 -1
11 13 13 5 1 3 -1 -1
11 17 7 5 1 3 1 -1
11 17 7 5 1 3 -1 1
11 19 1 1 1 3 1 1
11 19 1 1 1 3 1 1
13 17 13 7 1 3 2 0
13 17 13 7 1 3 -2 0
17 23 13 13 1 3 -2 -2
17 23 13 13 1 3 -2 -2
17 23 17 7 1 4 1 0
17 23 17 7 1 4 -1 0
17 25 11 11 1 3 2 2
17 25 11 11 1 3 2 2
17 29 5 1 1 3 2 -2
17 29 5 1 1 3 -2 2
19 23 23 5 1 3 3 1
19 23 23 5 1 3 3 1
19 25 17 13 1 3 -3 1
19 25 17 13 1 3 -3 -1
19 29 11 11 1 3 3 -1
19 29 11 11 1 3 3 -1
21 25 23 13 1 4 -2 -1
21 31 19 1 1 4 2 -1
25 31 25 17 1 4 3 0
25 31 25 17 1 4 -3 0
27 37 23 17 1 5 1 -1
27 37 23 17 1 5 -1 1
27 43 13 13 1 5 1 1
27 43 13 13 1 5 1 1
29 37 25 23 1 4 -3 -2
29 41 29 1 1 5 2 0
29 41 29 1 1 5 -2 0
29 43 25 7 1 4 3 2
29 47 17 5 1 4 -3 2
29 49 11 1 1 4 3 -2
33 37 37 23 1 5 -2 -2
33 37 37 23 1 5 -2 -2
33 41 31 25 1 4 -4 1
33 41 31 25 1 4 -4 -1
33 47 23 23 1 4 4 -1
33 47 23 23 1 4 4 -1
33 53 17 13 1 5 2 -2
33 53 17 13 1 5 -2 2
35 41 37 25 1 5 -3 -1
35 53 29 5 1 5 -3 1
35 55 19 17 1 5 3 1
37 47 37 23 1 6 1 0
37 47 37 23 1 6 -1 0
41 47 47 25 1 4 4 3
41 47 47 25 1 4 4 3
41 49 41 31 1 5 4 0
41 49 41 31 1 5 -4 0
41 53 47 5 1 6 -2 -1
41 55 43 13 1 6 2 -1
41 61 31 19 1 6 -2 1
41 65 23 17 1 4 -4 3
41 65 23 17 1 4 -4 -3
41 67 23 5 1 6 2 1
41 71 1 1 1 4 4 -3
41 71 1 1 1 4 4 -3
43 53 37 37 1 5 -3 -3
43 53 37 37 1 5 -3 -3
43 67 23 23 1 5 3 3
43 67 23 23 1 5 3 3
43 73 13 7 1 5 3 -3
43 73 13 7 1 5 -3 3
45 55 41 37 1 5 -4 -2
45 65 43 1 1 5 4 2
45 73 25 11 1 5 4 -2
49 55 53 37 1 6 -3 -2
49 79 29 11 1 6 -3 2
49 83 17 5 1 6 3 2
w z y x gcd p q r
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
w z y x gcd i j k
1 1 1 1 1 0 1 0
1 1 1 1 1 0 1 0
1 1 1 1 1 0 1 0
1 1 1 1 1 0 1 0
1 1 1 1 1 0 1 0
1 1 1 1 1 0 1 0
3 5 1 1 1 1 0 1
3 5 1 1 1 1 0 1
3 5 1 1 1 1 1 0
3 5 1 1 1 1 1 0
5 7 5 1 1 0 1 2
5 7 5 1 1 0 -1 2
5 7 5 1 1 0 2 1
5 7 5 1 1 0 2 -1
7 11 5 1 1 1 1 2
7 11 5 1 1 1 2 1
7 11 5 1 1 1 2 -1
9 13 7 5 1 2 0 1
11 13 13 5 1 1 3 0
11 13 13 5 1 1 3 0
11 17 7 5 1 1 0 3
13 17 13 7 1 0 2 3
13 17 13 7 1 0 -2 3
13 17 13 7 1 0 3 2
13 17 13 7 1 0 3 -2
13 19 11 5 1 2 1 2
13 19 11 5 1 2 -1 2
13 19 11 5 1 2 2 1
13 19 11 5 1 2 2 -1
15 19 17 5 1 1 -2 3
15 25 7 1 1 1 3 2
15 25 7 1 1 1 3 -2
17 23 13 13 1 2 3 0
17 23 13 13 1 2 3 0
17 23 17 7 1 0 1 4
17 23 17 7 1 0 -1 4
17 23 17 7 1 0 4 1
17 23 17 7 1 0 4 -1
17 29 5 1 1 2 0 3
19 25 17 13 1 3 0 1
19 29 11 11 1 3 1 0
19 29 11 11 1 3 1 0
19 31 11 1 1 1 1 4
19 31 11 1 1 1 4 1
19 31 11 1 1 1 4 -1
21 29 19 11 1 2 2 3
21 29 19 11 1 2 3 2
21 29 19 11 1 2 3 -2
23 31 25 1 1 3 1 2
23 35 19 1 1 3 -1 2
23 37 13 7 1 3 2 1
23 37 13 7 1 3 2 -1
25 31 25 17 1 0 3 4
25 31 25 17 1 0 -3 4
25 31 25 17 1 0 4 3
25 31 25 17 1 0 4 -3
25 35 19 17 1 2 -1 4
25 35 23 11 1 2 4 1
25 35 23 11 1 2 4 -1
25 43 5 1 1 2 1 4
27 35 29 11 1 1 3 4
27 35 31 1 1 1 -3 4
27 37 23 17 1 1 0 5
27 43 17 7 1 1 4 3
27 43 17 7 1 1 4 -3
29 41 29 1 1 0 2 5
29 41 29 1 1 0 -2 5
29 41 29 1 1 0 5 2
29 41 29 1 1 0 5 -2
31 37 35 17 1 1 -2 5
31 41 29 19 1 3 2 3
31 41 29 19 1 3 -2 3
31 41 29 19 1 3 3 2
31 41 29 19 1 3 3 -2
31 49 19 11 1 1 2 5
31 53 7 5 1 1 5 2
31 53 7 5 1 1 5 -2
33 37 37 23 1 2 5 0
33 37 37 23 1 2 5 0
33 41 31 25 1 4 0 1
33 41 35 19 1 2 -3 4
33 43 37 7 1 2 3 4
33 47 23 23 1 4 1 0
33 47 23 23 1 4 1 0
33 53 17 13 1 2 0 5
35 55 23 11 1 3 -1 4
35 59 13 5 1 3 1 4
37 47 37 23 1 0 1 6
37 47 37 23 1 0 -1 6
37 47 37 23 1 0 6 1
37 47 37 23 1 0 6 -1
37 47 43 7 1 4 1 2
37 55 31 11 1 4 -1 2
37 59 25 1 1 4 2 1
37 59 25 1 1 4 2 -1
37 61 19 5 1 2 2 5
37 61 19 5 1 2 5 2
37 61 19 5 1 2 5 -2
39 59 31 11 1 1 1 6
39 59 31 11 1 1 6 1
39 59 31 11 1 1 6 -1
41 49 41 31 1 0 4 5
41 49 41 31 1 0 -4 5
41 49 41 31 1 0 5 4
41 49 41 31 1 0 5 -4
41 65 23 17 1 4 0 3
41 71 1 1 1 4 3 0
41 71 1 1 1 4 3 0
43 53 37 37 1 3 5 0
43 53 37 37 1 3 5 0
43 53 47 23 1 1 4 5
43 55 41 29 1 3 3 4
43 55 41 29 1 3 4 3
43 55 41 29 1 3 4 -3
43 55 49 11 1 1 -4 5
43 65 31 19 1 1 5 4
43 65 31 19 1 1 5 -4
43 73 13 7 1 3 0 5
45 59 37 35 1 2 -1 6
45 65 41 13 1 4 -2 3
45 67 31 25 1 4 3 2
45 67 31 25 1 4 3 -2
45 67 35 19 1 2 6 1
45 67 35 19 1 2 6 -1
47 59 55 11 1 1 -3 6
47 61 41 35 1 3 -2 5
47 65 49 1 1 3 5 2
47 65 49 1 1 3 5 -2
47 71 31 25 1 1 3 6
47 77 23 13 1 3 2 5
47 79 19 5 1 1 6 3
47 79 19 5 1 1 6 -3
49 61 59 1 1 2 4 5
49 65 53 13 1 2 -4 5
49 79 31 1 1 4 1 4
49 79 31 1 1 4 -1 4
49 79 31 1 1 4 4 1
49 79 31 1 1 4 4 -1
w z y x gcd i j k
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Note, by the way, it appears I still do not have all primitive solutions here. Maybe I made a programming error, could be. But maybe I have not really completed the problem... some missing are $$3 \cdot 15^2 = 23^2 + 11^2 + 5^2, \; \; 3 \cdot 23^2 = 29^2 + 25^2 + 11^2, \; \; 3 \cdot 31^2 = 47^2 + 25^2 + 7^2.$$ What is true, but not very satisfying, is that the original recipe with $w = p^2 + q^2 + r^2$ does give an integer multiple of every solution. Therefore, if we take all of those quadruples $(x,y,z,w)$ and do not discard any, but rather divide through every time by $\gcd(x,y,z,w),$ we really will get all primitive integral solutions. The bad news is that, unless we have an explicit bound on that gcd, we do not know how large we need to allow $|p|,|q|,|r|.$
In comparison, the parametrization for Pythagorean quadruples $x^2 + y^2 + z^2 = w^2$ that I actually like has four parameters instead of three, and is based on the quaternions, and proved to work (gives all primitive integer solutions), first proof by L. E. Dickson about 1920. That parametrization is just Lebesgue's identity.
• I don't get your point. If $(x_0,y_0,z_0,w_0)$ is a solution of $X^2 + Y^2 + Z^2 = 3 W^2$ then $(ax_0,ay_0,az_0,aw_0)$ is a solution, too, for any $a \in \Bbb{Z}$. So we might just as well look for solutions with $\gcd(x,y,z,w) = 1$, which are in $1:1$ correspondence with the rational solutions of $X^2 + Y^2 + Z^2 = 3$. Does the formula derived by the OP give all those rational solutions (it seems unlikely, but I don't know how to confirm or deny this)? – A.P. Jul 24 '15 at 0:33
• @A.P., yes, the formula by the OP gives all rational points on the sphere. The version with the $w$ gives, up to multiplication by a constant, all rational solutions in $\mathbb Q^4.$ That's the rub. We want all primitive integer solutions, and one three-parameter formula does not give all. – Will Jagy Jul 24 '15 at 1:15
• @A.P., suggest you do this minor exercise, the OP did not write every word correctly. Given integers $p,q,r$ with $\gcd(p,q,r)=1,$ we set $(x,y,z) = (1+tp,1+tq,1+tr)$ and solve for the nonzero $t$ that gives a point in $x^2 + y^2 + z^2 = 3.$ It turns out that this $t \neq 0$ is rational, and so the point found has three rational coordinates. In turn, every rational point on that sphere is successfully found by this method. – Will Jagy Jul 24 '15 at 1:37
• @Jagy Thanks for your explanation. But can you explain why I found ALL the solutions on the sphere. Why did I not miss anyone? – joaopa Jul 24 '15 at 6:34
• @A.P., meanwhile, although neither book directly discusses stereographic projection, suggest store.doverpublications.com/0486466701.html and maa.org/publications/maa-reviews/the-sensual-quadratic-form which show my viewpoint – Will Jagy Jul 24 '15 at 19:15
This is called stereographic projection. It works in many interesting cases, and when the left hand side is an ellipsoid one gets nice positive definite denominators, always comforting.
Given a rational point $(S,T,U)$ with $S^2 + T^2 + U^2 = 3.$ We get a rational vector $$X = (S-1,T-1,U-1).$$ We find the least common denominator $\lambda \in \mathbb Z$ such that $$(\lambda(S-1),\lambda(T-1), \lambda(U-1) ) = (p,q,r) \in \mathbb Z^3$$ What is $\gcd(p,q,r)?$ If it is larger than $1,$ we can divide through by that to get a shorter integer vector. So, $\gcd(p,q,r)= 1.$
By construction, $$(1,1,1) + \frac{1}{\lambda}(p,q,r) = (1,1,1) + \frac{1}{\lambda} (\lambda(S-1),\lambda(T-1), \lambda(U-1) ) = (1,1,1) + (S-1,T-1,U-1) = (S,T,U)$$
Not sure anyone is paying attention, but this is, really, the better way to do this. By quaternions, with $q = a + bi+cj+dk,$ then $v = i+j+k,$ then $p = q v \bar{q}$ will also have $0$ as the real coefficient. And $p$ has the norm we want. Oh, below, I am writing $p = xi+yj+zk.$
$$x = a^2 + b^2 - c^2 - d^2 + 2 a c + 2 b c - 2 a d + 2 b d$$ $$y = a^2 - b^2 + c^2 - d^2 - 2 a b + 2 b c + 2 a d + 2 c d$$ $$z = a^2 - b^2 - c^2 + d^2 + 2 a b - 2 a c + 2 b d + 2 c d$$ $$w = a^2 + b^2 + c^2 + d^2$$
I checked this in PARI, will probably typeset in the morning. It is very likely that taking all possible orders and $\pm$ signs gives every primitive integer solution to $x^2 + y^2 + z^2 = 3 w^2,$ with the only restrictions being $\gcd(a,b,c,d) = 1$ and $a+b+c+d \equiv 1 \pmod 2.$ |
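A quick numerical check of these formulas (my own arithmetic, not part of the original post): taking $a=b=c=1$ and $d=0$ gives
$$(x,y,z;w)=(5,\,1,\,-1;\,3),\qquad 5^2+1^2+(-1)^2=27=3\cdot 3^2,$$
which, after taking absolute values, is the primitive solution $(5,1,1;3)$ from the first table.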
https://pixel-druid.com/catalan-numbers-as-popular-candidate-votes-todo.html | ## § Catalan numbers as popular candidate votes (TODO)
• Usually, folks define catalan numbers as paths that go up or right from $(1, 1)$ to $(n, n)$ in a way that never goes below the line $y = x$.
• The catalan numbers can be thought to model two candidates $A$ and $B$ such that during voting, the votes for $A$ never dip below the votes for $B$.
I quite like the latter interpretation, because we really are counting two different things (votes for $A$ and $B$) and then expressing a relationship between them. It also allows us to directly prove that $\mathrm{catalan}(n)$ equals $\frac{1}{n+1}\binom{2n}{n}$ by reasoning about sequences of votes, called ballot sequences.
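Here is a sketch of that ballot-sequence count (the classical reflection argument; the note above leaves this as a TODO): among all $\binom{2n}{n}$ arrangements of $n$ votes for $A$ and $n$ votes for $B$, call an arrangement bad if $B$ leads after some prefix. Swapping $A \leftrightarrow B$ in the prefix up to and including the first moment $B$ leads is a bijection between bad arrangements and arrangements of $n+1$ votes for $A$ and $n-1$ votes for $B$, of which there are $\binom{2n}{n-1}$. Hence
$$C_n=\binom{2n}{n}-\binom{2n}{n-1}=\frac{1}{n+1}\binom{2n}{n}.$$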
https://support.bioconductor.org/p/9136075/ | Gviz featureAnnotation disappear when zooming in
1
0
svenbioinf • 0
@svenbioinf-11239
Münster
Dear Gviz Team, dear community,
Gviz seems to put labels of track items that I specify with
featureAnnotation="id"
on the middle of a track item. That works as long as the current plot shows the center of the item. If we look at an item's start or end, these labels do not show. Is there a way to better handle this matter?
Best,
Sven
Gviz • 326 views
0
@james-w-macdonald-5106
United States
Section 4.4 of the Gviz User's Guide covers this subject.
0
Dear James,
thank you for replying to me. I do not see how section 4.4 helps here. AFAIK Gviz allows labels to be placed
left, right, above (center) or below (center),
but not in between these options. So to make this clear again: if I zoom in on the marked box in the attached graphic, there would be no label, because the visible region is neither the left, the right, nor the center.
Please correct me if I am wrong,
Best, Sven
0
You are not wrong. If you want to zoom into some random little section that doesn't include the label, then you are correct that the label won't be there. |
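For reference, a minimal sketch that reproduces the behaviour described above (the coordinates, genome and identifier are made up for illustration):

library(Gviz)
## one 10 kb feature whose label is taken from featureAnnotation = "id"
tr <- AnnotationTrack(start = 1000, end = 11000, chromosome = "chr1",
                      genome = "hg19", id = "myFeature", name = "demo",
                      featureAnnotation = "id")
## the label sits near the middle of the item, so it is visible here ...
plotTracks(tr, from = 500, to = 12000)
## ... but not when only the item's start is inside the plotted window
plotTracks(tr, from = 500, to = 3000)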
https://mathematica.stackexchange.com/questions/33112/how-combine-pure-functions-of-several-slots | # How combine pure functions of several slots? [duplicate]
How can one define in a functional way a 1st-order linear differential operator involving several independent variables that can then be applied to a function of that many variables?
Consider an example with two variables, where one wants to form D[f[x, y], x] + 3 D[f[x, y], y] from a function f. Of course one could define
diff[f_][x_, y_] := D[f[x, y], x] + 3 D[f[x, y], y]
so that for a function such as
g[x_, y_] := x^2 y + Cos[x + 2 y]
we just evaluate:
diff[g][x, y]
2 x y + 3 (x^2 - 2 Sin[x + 2 y]) - Sin[x + 2 y] (* desired final output *)
But how can such an operator be defined functionally, that is, without explicitly using variables initially?
We could try
diffOp[y_] := Derivative[1, 0][y] + 3 Derivative[0, 1][y]
and then
diffOp[g]
3 (-2 Sin[#1 + 2 #2] + #1^2 &) + (-Sin[#1 + 2 #2] + 2 #1 #2 &)
But now how does one use such a combination of pure functions of several variables so as to produce the same result as from diff[g][x,y]?
The crux of the difficulty appears in the following simpler problem. Consider two functions of two variables:
g1 = (#1^2 + #2) &
g2 = Cos[#1 #2] &
How can one produce the same result as, say,
g1[x, y] + 3 g2[x, y]
(* x^2 + y + 3 Cos[x y] *)
directly from the functional linear combination g1 + 3 g2 -- by forming an expression of the form oper[g1 + 3 g2][x, y]?
By contrast with the single-variable situation, where a simple Through would serve as the oper, Through will not work in the multi-variable situation here:
Through[(g1 + 3 g2)[x, y]]
(* x^2 + y + (3 (Cos[#1 #2] &))[x, y] *)
Note the pure function left embedded in that output.
Note that for a simple sum g1 + g2, instead of the linear combination g1 + 3 g2, Through will work (just as it does for a single variable):
Through[(g1 + g2)[x, y]]
(* x^2 + y + Cos[x y] *)
• Through[(g1 + g2)[x, y]]? Sep 27 '13 at 15:05
• Your question is somewhat misleading in that even the single argument function suffers from the same problem, i.e. Through[(f1 + 3 f2)[x]]
– gpap
Sep 27 '13 at 15:49
• Would a replacement based approach work for you, e.g. thru[expr_[vars__]] := expr /. f_Function :> f[vars] Sep 27 '13 at 15:51
• @gpap: Yep, my single-argument example was much too simple. Sep 27 '13 at 15:52
• I am closing this as a duplicate. Please see the linked question. If you feel that this is not a duplicate please edit your question make that clear and flag or vote to reopen. Sep 27 '13 at 17:27
In mathematics, you can define new functions using operations on their images. Your example, $g_1+3 g_2$, is effectively defined as the function $x\mapsto g_1(x)+3g_2(x)$. Mathematicians use the following notation for functions: $$f\colon A \to B\,;\qquad x \mapsto f(x).$$
The $f$ before the semicolon is just a label.
In Mathematica, an assignment such as g1 = (#1^2 + #2) & introduces the label $g1$ but more importantly it creates the rule $( \#1, \#2 )\mapsto \#1^2+\#2$.
Paraphrasing your question: how can we create, out of $\#\#\mapsto g1(\#\#)$ and $\#\#\mapsto g2(\#\#)$, a new rule $\#\#\mapsto whateverLabel(\#\#)$ without explicitly giving $whateverLabel(\#\#)$ yet keeping it rather arbitrary.
Some operations on functions (such as multiplication by a number, $\alpha f$) have a natural meaning in terms of the images of the functions; but natural is not the same as nonexistent. And in the end, you need to have the equivalent of the line $x\mapsto \alpha f(x)$.
Therefore, your quest is impossible. You could, for example, limit your quest and keep the operations limited to a small subset (sums of functions, multiplication by a number, multiplication of functions). In that case, we might be able to help.
PS. Think twice before replying that one could define the operations on the images by the operations on the labels.
(Sorry, this is too long to fit into a comment) A possible workaround is to use fresh unevaluated symbols to represent your expression of free functions
expr = 3 f1 + 2Exp[-f2]
g1 = (#1^2 + #2) &;
g2 = Cos[#1 #2] &;
This will produce a list with the functions of said variables.
funlist = {g1, g2}; arglist = {x, y};
dummylist = {f1, f2};
funrule = #@(Sequence @@ arglist) & /@ funlist
{x^2+y, Cos[x y]}
Then you can use a replacement rule, as suggested in one of the comments:
expr /. Thread[dummylist -> funrule]
3(x^2+y) + 2 Exp[-Cos[x y]]
You might automate this into a procedure that could generate the unique identifiers by parsing an unevaluated (held) expr so that when you pass expr[g1,g2], you'll end up with expr[f1,f2] in the body of the procedure.
https://guitarempire.com/american-electric-guitar-brands-electric-guitar-brands-compared.html | Wet Set: If you have a sound that you want to push a long way back in the mix, it can often be better to make your reverb effect pre-fader, and temporarily remove all the dry sound. Then alter the sound's EQ and reverb settings while listening only to the wet reverb sound. Once you've got that sounding good, gradually fade the dry sound back in until you're happy with the wet/dry balance. This approach can often be more effective than simply whacking up the reverb level while you listen to the whole song. Martin Walker
While electric bass players have used regular guitar amplifiers in large concerts since the 1960s, this is usually just for the higher register; a bass amp is still typically used for the low register, because regular guitar amps are only designed to go down to about 80 Hz. One of the reasons bassists split their signal into a bass amp and an electric guitar amp is because this arrangement enables them to overdrive the higher-register sound from the electric guitar amp, while retaining the deep bass tone from the bass amp. Naturally-produced overdrive on bass obtained by cranking a tube amplifier or solid-state preamplifier typically results in a loss of bass tone, because when pushed into overdrive, a note goes to the upper octave second harmonic.
## Have you ever looked at a guitar and wondered, "How do they make that?" Or thought to yourself, "I bet that I could build my own guitar," but never actually tried it? I have built several electric guitars over the years and through trial and error have learned many helpful tips that anyone who might want to tackle this sort of project needs to know before starting out. This kind of thing does require some wood working skill and also requires some specific tools as well but not all the fancy stuff that a guitar manufacture has. Building an electric guitar is time consuming and requires the completion of several steps before your project gets finished but be patient and you'll be happy with the results. I tend to go into detail so as not skip any steps or tips you need along the way, and use pics from other projects that I did as well so you can get more that on reference. If you set out to make a guitar you'll find that it takes quite a bit of time so you'll have time enough to go back and read other info if you just want to skim through the first go round. So I hope this helps all the future guitar builders out there!
#### We are proud to offer this very Rare and beautiful and highly collectible vintage 1983 Alvarez Electric/Acoustic 5078 with a les Paul style body shape. Top of the line workmanship fit & finish work here Crafted in Japan this is the limited special production Anniversary model made in 1983. This truly fine rare example comes with its nice original Alvarez black exterior tolex with the blue Martin style plush lined hard shell case. Did we say SUPER RARE....WoW!...we were completely amazed at the fact that this ( Les Paul style baby sounds so great plugged in or unplugged just beautiful. This one has a rich full bodied sound as an acoustic which is hard to find with this thin Les Paul shaped body makes it very comfortable to play long duration and not to mention did we say BEAUTIFUL as well as a real unique player...see the Headstock shape in the pictures this is truly a real beauty. This one is sure to please the Vintage Alvarez Acoustic lover... I'm a vitage Alvarez believer & after you see and play and hear this so will you. Condition for a 26 year old vintage guitar this thing is darn near mint with just a few tiny minute dings, see the detailed high res pictures for all the cosmetics, JVG RATED at 9.2 out of 10 ....... any questions? please email us @ [email protected] Thanks for your interest! .
Myself, were my budget less than a thousand, I'd drop a big name like Martin off my list entirely, and probably Taylor too. Seagull makes some solid wood instruments for around $700... no idea how much the electronics tack onto the price, but I'm betting a Seagull SWS guitar with electronics could be had at $900 or so with just the slightest of scratches or blemishes.
Now as for flipping the whole bridge, yes, in some cases this may help you. Try it out and see what happens. Just an extra mm or two could make all the difference. One thing to watch out for, though. The notches on your saddles might not all be the same. Often you will have wider notches for the wound strings and thinner notches for the unwound strings. So you might have to swap these all around.
Those of you familiar with Van William’s former bands Waters and Port O’Brien, will have suspicions about what to expect from the songwriter’s debut solo material: boisterous, vibrant hooks that are easy to swallow but gut you on their way back out. His latest incarnation represents a bounce back after a period of personal tumult. Two parts power pop bombast, to one part Americana, William’s maturation as a songwriter and guitarist seems to have hit a new high water mark.
It features a solid mahogany top with laminated sapele back and sides, leading to a warm tone that’s a joy to listen too. Despite the small body size, the BT2 has a robust projection, thanks to the arched back. The neck is joined to the body via screws, which tarnish the look a little, but leave no impact on the slick playability or the tone, so aren’t a big deal.
With analogue delay (and simulations thereof), each subsequent echo is not only quieter, but also more distorted. Dub reggae tracks often make prominent use of this effect -- look out for the effect where the engineer momentarily turns a knob so that the echoes get louder instead of quieter, surging and distorting before he turns it back down again so they can die away.
Automatic Track Creation & Loop Recording: A new layer (track) is created each time you start recording and each time a Riff loops. Stack layers on top of each other (bass, guitar, vocals) to create a Riff. Use looping to create multiple tracks, do multiple takes, etc. Each layer has controls for mixing and effects. (4 tracks with T4, 24 tracks with Standard)
Others, however, will look to Jimmy Page, Pete Townshend, or the Beatles, or credit the first recorded use of a fuzz box in Britain to Big Jim Sullivan’s performance with a custom-built Roger Mayer fuzz on P.J. Probey’s 1964 No. 1 hit single ‘Hold Me’ (according to Mayer himself)—or, supposedly, Bernie Watson’s solo on Screaming Lord Sutch’s ‘Jack The Ripper’ in 1960. Or, a little later, the one more of us remember, Keith Richard’s worldwide smash-hit fuzz riff for the Stones’s ‘(I Can’t Get No) Satisfaction,’ courtesy of a Maestro Fuzz-Tone.
While pretty much every noise musician uses the guitar as a weapon of mass destruction, Mark Morgan of scuzz-worshippers Sightings uses his guitar for sheer negation. Playing in 50 shades of gray on found and borrowed pedals, the leader of this longtime Brooklyn noise band is quicker to sound like a vacuum humming, toilet flushing, or scrambled cable porn feed than Eric Clapton or even Thurston Moore; a unique sound that has all the emotion of punk, with none of its recognizable sounds. As he told the blog Thee Outernet: “Probably the biggest influences on my playing style is sheer f—king laziness and to a slightly lesser degree, a certain level of retardation in grasping basic guitar technique.”
OK, so you're ready to try your hand at the electric guitar, but where do you start? A good place to start is with an electric guitar that's specifically geared toward beginners. And although many guitars cost over $1,000, there's no need to shell out that much dough for a student model/novice instrument. To help you sort through all the options, we put together a list of the best beginner electric guitars worth buying right now. So whether you’re looking to become a shredding metalhead, a cool jazz player or an all-American country star, one of these electric guitars will have you well on your way. The next most important review criteria for any electric guitar, is its sound. Please allow me to be very clear here that this guitar is mostly suited for heavy rock tones, aggressive higher leads and chugging, crazy distortions. If you are more interested in a crisp, jazzy tone, maybe you should opt for a beginner’s Stratocaster electric guitar like Squier by Fender, instead. Having said that, this instrument sounds great in its genre, and also remains in tune for long periods, so you don’t have to worry about manually tuning it. Yes, the string tension is higher as compared to a 24.75” Stratocaster or XX Les Paul, but in a way this challenges electric guitar novices to acquire greater mastery over their notes! A. Many professional musicians invest thousands of dollars in high-end guitars made from expensive and rare tonewoods. A$100 student guitar made from spruce is not going to produce that level of tonality regardless of the player’s skill level. As a beginner, your main focus should be on skills such as chord formation, fretting techniques, and basic scales. Improving tonality and performance are long-term goals.
Specs for combos were as follows: Checkmate 10 (6 watts, 6″ speaker, two inputs, striped grillcloth); Checkmate 12 (9 watts, 8″ speaker, three inputs); Checkmate 14 (14 watts, 8″ speaker, three inputs, tremolo); Checkmate 17 (20 watts, 10″ speaker, tremolo, reverb); Checkmate 16 bass amp (20 watts, 10″ speaker, volume, tone); Checkmate 17 (20 watts, 10″ speaker, reverb, tremolo); Checkmate 18 (30 watts, two 10″ speakers, reverb, tremolo); and Checkmate 20 (40 watts, 12″ speaker, reverb, tremolo). Piggyback amps included the Checkmate 25 (50 watts, 15″ speaker, reverb, tremolo); Checkmate 50 (two-channels, 100 watts, two 15″ speakers, reverb, tremolo, “E tuner”); Checkmate 100C (two channels, voice input, 200 watts, two 15″ speakers, reverb, tremolo); and the big hugger-mugger Checkmate Infinite (200 watts, two 15″ speakers, stereo/mono preamp section, reverb, tremolo and a bunch of other switches). The one shown in the catalog actually has a block Teisco logo and carried the Japanese-marketed name – King – in the lower corner.
With this new edition, they scrapped the DVD from the previous version, and introduced online video and audio clips, as a supplement to the book's teachings. They didn't take it overboard though, with just 85 videos and 95 audio tracks, but at least it's a step in the right direction. You can't learn music by just reading about it, you need audible tools.
Few instruments are as versatile as the electric guitar. Widely heard in most types of music, the guitar's sound can be customized in virtually limitless ways to suit the genre and the player's individual style. Multi effects pedals put all of that personalization within your reach, allowing you to change your soundstage with the push of a button while you play. Many guitarists achieve multiple effects by chaining pedals together. This gives the ability to mix and match different effects to create unique combinations, but it can also be a source of frustration to keep track of so many pedals. A multi-effects pedal avoids this confusion by converting the mess of individual pedals into one discrete unit that's easier not only to use, but also to transport from venue to venue. Multi effects pedals can offer hundreds of onboard distortion, filter, modulation and dynamics effects to transform the sounds of your guitar. With as many or as few features as you prefer, the selection offers a multi pedal for every guitarist's needs. If simplicity is your thing, a basic stompbox can provide a handful of effects with minimal complications. On the other end of the spectrum, you can satisfy your inner technophile with a cutting-edge digital pedal featuring MIDI support and USB connectivity so you can save a seemingly limitless library of effects. There are three things to look for in your ideal multi effects pedal: the range of available effects and features, the ruggedness of its construction and the available inputs and outputs. The more effects that are supported, the more the pedal has to offer to your sound.
The more durable it is, the more the pedal can withstand being moved from venue to venue. And the more connection options it has, the more versatile it is for studio recording or connecting to additional stompboxes and accessories. Whether you're playing hard rock or smooth jazz, the range of tone alterations offered by a multi effects pedals enables you to deliver a personalized sound that complements your band and your musical style. There are two things that get guitarists into the history books: developing their skills to perfection and crafting their own distinct sound. With a multi effects pedal, you've got the gear you need to start shaping the tones you aspire to be remembered by. Thanks for your opinion Sheils. While your advice is appreciated, certainly no two guitarists would come up with the same guitars for any given list, or present an article like this in the same way. However, you did mention a few points hopefully readers might find useful. Constructive feedback, and the expression of different opinions, is always welcome. The Epiphone Dove Pro is such a good guitar that it’s going to be a contender for a top pick in pretty much any list, but in this one we’ve given it the title of best value electric acoustic. You can spend a lot more money and not get much more guitar, and you can even spend more money and not get a guitar as good. The Dove Pro is that accomplished. Now that you know the general protocol to a pedal chain, remember there are no strict rules in music. Introducing alternative ways of setting up your effect signals is what starts new trends and even leads to the development of new genres. There are also indisputably more choices in pedals then ever before. Vintage classics have been reissued in mass, are sounding better then ever, and have become affordable (but I doubt you’ll see that DeArmond toaster pedal version any time soon). The reality is, each of these approaches to adding effects to your tone has advantages and disadvantages. Are you a no-effects type of player, or a pedalboard kind of player? Maybe you like some pedals for your dirt, but would like your delay and reverb in the effects loop of your amp. Or maybe you would like to go the full on w/d/w route, for the ultimate in power and programmability! Let’s take a closer look at the options that are out there. The Effect: Compression rose to fame in the rock and roll era, many famous musicians used (and still do) compressor pedals in order to add distinctive sustain in their performances, attracting the listener’s attention and making them stand out from the diverse instruments playing along. Some of the most famous compression pedals are the “Ross Compression” and the classic “MXR Dyna”, which have been subject to imitations and remakes ever since their original releases. Compression pedals have remained popular to this day, and are considered a must-have in many guitarist’s arsenals. Price guides can be used by both sellers and buyers. Sellers can generally use websites to get a ballpark figure on the value of their model of guitar or bass. They can then deduct for dings, scratches, and other injuries the guitar may have sustained in its lifetime. After-market modifications, such as new pickups or repair work, can increase the value of the guitar. It depends on whether you are playing with someone, or you just wanna start to play home in your bedroom. If you play with others, you need an amp that can play loud enough to follow the bass and especially the drums. 
Marshall make some great tubeamps, but also Vox make some great amps, where this one on 40 watt with effects incl are real good. Sound like a tubeamp, and have a 12ax7 in the frontamp. Hi there, Nicolas here. I'm all about continuous life-improvement and discovering your true-self so that we can find and attract beauty into our lives, be the best we can be, and enjoy life as much as possible. I have a passion for writing and publishing and that's why you can find me here. I write about the topics where I can share the most value, and that interest me the most. Those include: personal development, fitness, swimming, calisthenics, healthy lifestyle, green lifestyle, playing guitar, meditation and so on. I really wish to provide my readers with great value and for my books to be a source of inspiration to you. I'm sure that you will enjoy them and find some benefits! Stay tuned for some awesome books Wish you all the best, Nicolas Carter Description: Body: Mahogany - Body Construction: Solid - Top Wood: Maple - Quilted - Neck Attachment: Neck-through - Neck Wood: Maple - Neck Construction: 5 Piece - Fingerboard: Rosewood - Frets: 24 - Inlay: Dot - # of Strings: 6 - Scale Length: 25.5" (65cm) - Headstock: 6 In-Line - Bridge: Tune-O-Matic - Bridge Construction: Rosewood - Cutaway: Double - Hardware: Black, Diecast, Nickel, 1x Volume Control, 1x Tone Control, 3-Way Switch - Pickups: Humbucker - Pickup Configuration: H-H - String Instrument Finish: Black, Blue That’s not to say you need a specific guitar for each style — if you want a larger range of tones for different genres, a solid-body guitar is a good bet. There are also plenty of guitars on the market that include both humbucker and single coil pickups, thus allowing for even more sound options. Still seem too complex for you? If you look to the pros you’ll see that Gibson’s Les Paul and Fender’s Stratocaster have been used over and over again by recording artists. It’s not a coincidence: they’re capable of a lot of versatility. Yes, they differ from each other in tone, but with the right additional gear, you can replicate a ton of sounds. It would be great if we could be born with all our favorite music already memorized. But we're only human, so we don't get that luxury - if we want to know a song, we have to learn it! The process starts with reference materials, so don't wait to start building up a repertoire of your own favorites. Add the tablature to your collection and make sure to practice, and you'll be playing like your favorite guitar heroes soon enough. Interestingly, it’s the back of this guitar that’s the most visually attractive, with a drop dead gorgeous rosewood fretboard and quilt maple three-piece design. You’ll stare at it for some time before you can bring yourself to flip it over and start playing. That’s not to say the front doesn’t look good - the whole thing feels more upscale than the price. Again, it's a matter of personal preference and style. Many people prefer to learn on acoustic guitars, but the strings are much tougher which causes fatigue to learning fingertips. The strings produce a buzzing effect as they are hard. Harder strings mean that learning fingers will find it hard to play bar chords. On the other hand, electric guitars offer comfort while holding down chords as the width of the neck is shorter than that of an acoustic guitar. The strings on an electric guitar are softer which makes means you can practice longer without your fingers getting sore. 
The habit of playing with light strings from the beginning can trouble in near future as acoustic guitars are also needed in various music production situations. And don't forget, you'll need to pick up an amp and so on to play your electric guitar. One of the best defining features of Schecter guitars is their build quality. It seems that they always go an extra mile. Schecter’s bodies are solid, made of great tonewood depending on the application, and the array of finishes they offer are just impressive. In simple terms, build quality is not something you need to worry about with this brand. Along with sweep picking, economy picking serves a more economical way to play single note ideas. It’s a form of alternate picking that calls for you to sweep the pick across strings when making your way to the next adjacent string. If you’re ascending, you sweep down and vice versa. They key is to make the motion have the same resistance sweep picking calls for while still utilizing a fluid alternate picking wrist approach. Just like most techniques, but with the same emphasis as sweep picking, you must start out slow and be mindful of the technique when starting to learn it. Be patient and work at it. It will come, and when it does – look out! A Power attenuator enables a player to obtain power-tube distortion independently of listening volume. A power attenuator is a dummy load placed between the guitar amplifier's power tubes and the guitar speaker, or a power-supply based circuit to reduce the plate voltage on the power tubes. Examples of power attenuators are the Marshall PowerBrake and THD HotPlate. The Takamine GD71CE is a feature packed acoustic-electric guitar, with its solid top construction, visual embellishments and improved electronics. Takamine equipped this guitar with a solid spruce top and rosewood for the back and sides - a nice tonewood combination known for its articulate sound. It also comes with matching aesthetic appointments that include maple binding for the body, neck and headstock, abalone rosette, rosewood headcap, maple dot inlays, gold die-cast tuners with amber buttons, and the body is wrapped in a nice looking gloss finish. In addition to all this, the Takamine GD71CE is equipped with their TK-40D preamp system which gives you more control over your amplified sound with its mid contour switch, 3-band EQ with bypass and notch filter. But I’d also like to share my interesting Goldilocks setup into the mix, I’ve had a Boss GT-8 for years and I love that thing for all the control it can give me at the front of stage. However, I’m only 18 and never had the kind of money to buy an amp I’d love to run 4 cables for (or in my case three, I run a Line6 wireless), so I use the virtual preamps and run it into the Effect return of my 6L6 loaded Kustom amp (never liked the preamp in it). This very fact made my gigs in high school extremely easy, as I could use virtually any tube amp with an FX loop as my backline, then adjust the global EQ accordingly to pull the best tone possible, or in one instance I had two amps at my disposal so I got the pleasure of switching up my Delay and Chorus type effects to their stereo modes. I also have a couple of pedals on my board to address a few tonal setbacks I found in the Boss, but that’s only suiting my personal taste. Enjoy my board… ### This Hellraiser C1 guitar features a mahogany body and a quilted maple top and abalone gothic cross inlays. It looks and feels fantastic! 
A couple of things that make it sound even better are the locking tuners, that keep it in tune, and the EMG noise-cancelling scheme that makes sure that the only sound the guitar makes is the music you’re playing. Some of the more distinctive specifications include, the headstock shape, tuners, neck and fretboard, bridge, and pickups and electronics. The headstock shape is based on PRS’s trademark design, but inverted to both accommodate Mayer’s playing style and also to keep a consistent length of string behind the nut, which makes staying in tune easier. The tuners are a traditional vintage-style, closed-back tuner, but with PRS’s locking design. The neck shape was modeled after 1963/1964 vintage instruments, and the fretboard has a 7.25” radius. The moment your hand grabs this neck, it just feels right. Like the tuners, the steel tremolo takes a classic design and incorporates PRS’s trem arm and Gen III knife-edge screws. The bridge on the Silver Sky is setup flush to the body in the neutral position so that the tremolo bridge only goes down in pitch. By keeping the bridge in contact with the body, the guitar itself is acoustically louder, which improves the signal to noise ratio of the single-coil pickups. The 635JM single-coil pickups are very round and full, with a musical high end that is never “ice-picky” or brash. Tonewood (basswood, mahogany, alder...) doesn't matter in an electric guitar unless you're getting ancient pickups for it. Older pickups used to act more like microphones and picked up sound resonating from the guitar body as well as from the strings. Modern technology has fixed that so the sound comes purely from the strings. Most guitar companies that market their guitars for tonewood are guitar brands that have been around since the times of these ancient pickups and based their marketing off of it. Most of them still haven't changed it. I recently read a scientific breakdown (experiment, analysis and all) that thoroughly proved the tonewood debate pointless once and for all. Every variable was accounted for-only tonewood was changed. So, don't worry about the basswood; it could be made from the least acoustic material on earth, and the pickups would give you the same sound as they would have on a different guitar material. I've spent months researching this in depth. (I play, too.) While an acoustic guitar's sound depends largely on the vibration of the guitar's body and the air inside it, the sound of an electric guitar depends largely on the signal from the pickups. The signal can be "shaped" on its path to the amplifier via a range of effect devices or circuits that modify the tone and characteristics of the signal. Amplifiers and speakers also add coloration to the final sound. By 1954 the Teisco line had begun to grow. Some valuable reference is available in a Japanese history of Teisco guitars, which is written completely in Japanese (which I unfortunately can’t read). This has an early photo of the company’s founders and presumably engineers and designers, mugging around a car parked in front of the Teisco factory. The photo is from the ’50s (1954 or later), and the instruments in their hands and surrounding them are at the core of the ’50s line. Shown were two small Les Pauls, two single-cutaway archtop electrics, at least three Hawaiian lap steels, and at least four amplifiers. Martin’s first era of flirtation with electrics ended with its GTs, and, in terms of American production, wouldn’t resume until a decade later. 
However, in 1970 Martin joined the growing list of American manufacturers to begin importing guitars made in Japan, introducing its Sigma series. In around 1973, Martin, like competitors Guild and Gibson, began importing a line of Sigma solidbody electrics made in Japan by Tokai.

There is one musical virtuoso whom I would add to this list, if for no other reason than the fact that the 2nd guitarist on the list, through his comment, seemed to hand the title of "greatest guitarist on earth" to this guy. This same individual has often been compared to the one who holds the crown, Jimi. And yet he arguably should not be in this list, because he was much more than a master of the guitar: he mastered over 30 other instruments, sold out almost every concert he performed worldwide, and…

The reason being that guitar manufacturers will usually look to keep costs down in the pickup department. This is particularly true for budget models, which will usually be fitted with stock pickups that do the job but fall short of truly impressing. So guitarists with an affordable but playable guitar may wish to upgrade their pickups to make their favorite axe gig-worthy.

Most guitars will benefit from an annual setup, and instruments that are kept in less than ideal climate conditions (or that are on the road a lot) may need two per year. I'll evaluate your guitar and make a recommendation. Setups may include truss rod lubrication and adjustment, saddle lowering to adjust action, nut slot adjustments, cleaning of grimy frets and fretboard, lubricating and tightening of tuners, and checking electronics and batteries. Price is based on what your guitar needs. The price range is for labor and does not include parts costs such as strings and bone nut and saddle blanks.

...as well, and the string can then be adjusted to the desired height. The Ibanez DOWNSHIFTER lets you drop a string to a predefined pitch simply by flipping a small lever. For the tuning to be accurate in both the raised and lowered positions, you must set both lever positions before using the Downshifter.

Frets are the metal strips (usually nickel alloy or stainless steel) embedded along the fingerboard and placed at points that divide the length of the string mathematically. The strings' vibrating length is determined when the strings are pressed down behind the frets. Each fret produces a different pitch, and the pitches are spaced a half-step apart on the 12-tone scale. The ratio of the spacing of two consecutive frets is the twelfth root of two, $\sqrt[12]{2}$, whose numeric value is about 1.059463. The twelfth fret divides the string into two exact halves and the 24th fret (if present) divides the string in half yet again. Every twelve frets represent one octave. This arrangement of frets results in equal tempered tuning (a small code sketch of the resulting fret positions follows below).

Martin makes classic guitars that have been featured in countless hit tunes. While they are best known for their top-of-the-line $2,000+ models, Martin also makes great guitars for any budget. Martin guitars always honor their tradition while continuing to strive for a better instrument. Give Martin's new 17 Series a try if you want to see that theory in action.
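To make the equal-temperament arithmetic above concrete, here is a small, hedged C sketch that computes fret positions. The 25.5-inch scale length and the output format are illustrative assumptions, not taken from the text; each successive fret leaves the previous vibrating length divided by the twelfth root of two.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double scale = 25.5;                    /* assumed nut-to-saddle length in inches */
    const double semitone = pow(2.0, 1.0 / 12.0); /* twelfth root of two, ~1.059463 */

    for (int fret = 1; fret <= 24; fret++) {
        double remaining = scale / pow(semitone, fret); /* vibrating length behind this fret */
        double from_nut  = scale - remaining;           /* where the fret sits, measured from the nut */
        printf("fret %2d: %.3f in from the nut\n", fret, from_nut);
    }
    return 0;
}
```

Running it shows the 12th fret landing at exactly half the scale length, which matches the octave statement above.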
Covers all the material needed for the RGT Grade One electric guitar examination, enabling you to gain an internationally recognised qualification. The book should help you to develop all aspects of guitar playing, increase your knowledge of specialist electric guitar techniques, understand the music theory that relates to electric guitar playing and achieve your full potential as a guitarist.
However, John Leckie states an interesting preference for an SM58 and U67 rig instead: "SM57s tend to be that little bit brighter than the SM58, which really isn't what you want when you're miking up an electric guitar amp. You really want to pick up a flat signal, an 'unstimulated' signal I suppose is the word... The U67 gives you the warmth and a broader sound."
A very stylish black guitar by ENCORE with a great 'Humbucker' pickup.......The simplicity of this instrument makes it a joy to play and It sounds as good as it looks! Used....but in great condition and showing only minimal signs of use (no scratches, chips or dings) and is in full working order. The scale is full length (not 3/4 or 7/8)...but the body size is smaller and lighter than a typical stratocaster (see image for comparison), which makes this guitar perfect for a younger / smaller person or anybody who might like a very robust but lighter instrument. All reasonable offers considered.
Six-point rocking tremolo: This was the original rocking vibrato designed by Fender in the 1950s. Like the two-point tremolo, it is through-body, spring-loaded, and provides individual string intonation and height adjustment. Some players feel that because this type of tremolo rocks on six screws it provides greater vibration transfer to the top and hence better resonance.
A marvelous acoustic guitar with 6 strings and a natural finish. It has a body made from mahogany with a spruce top, and the fretboard is also made from mahogany. It is one of the most beautiful guitars and produces an incredible sound, and it is designed to suit the needs of the beginner. The price is around INR 14,760, depending on available offers. More product information is available on the Epiphone DR-212 product page.
Whether you use it to move on to fingerstyle guitar or integrate it into a hybrid technique, mastering the right hand in this finite way will make you a better player. In addition to the progressive book, you can download the song samples, which are enriched with the ability to slow them down, change keys, and set looping points to help you master parts one at at time.
Today I was working on my fave guitar, a James Trussart Steelcaster. Instead of reconnecting my tone pot and capacitor as usual, I ran two wires from the tone pot’s wiper and ground terminals, the spots where the cap normally connects, and soldered them to a little piece of stripboard with sockets for connecting the caps. Then I recorded quick demos for six possible cap values. I started with the two most common values, and then added two lower values and two higher ones.
No tricks here, the volume control allows you to adjust the output level of your signal. But, unlike your amp's gain setting, the best signal-to-noise ratio will be achieved with the pot all the way up. If you have more than one volume knob, it means each controls a pickup. Middle positions can be useful with amps that don't have too much power and distort very easily or to get a crunch sound with a fat saturation. We can also use it as an effect by turning the knob progressively and playing a chord to make it appear (or disappear).
1. striking the string creates the vibration and once it disrupts the magnetic field on the pickup that's it - how about when you don't strike the string at all, like when you tap on the body of the guitar? The vibrating wood imparts vibration on the strings, which in turn do their thing on the pickup. The body of the guitar, the nut, the bridge, every part of the guitar is now directly influencing the sound you hear out of the pickup. Remember, only the magnetic field disturbance is being amplified, and tapping the guitar has started the strings vibrating. How can that happen without the wood's tonal qualities affecting the waveform?
First, Steel String sounds heavenly, and I always love it when my mouth drops the first time I hear a hyper-realistic sounding VST. Steel String has done this completely, in fact, the only time I was ever pulled out from its hyper-realism was on the fret noise that recreates the articulation of finger sliding across the strings when changing positions.
In the mid-1960s, as the sound of electric 12-string guitars became popular, Vox introduced the Phantom XII, which has been used by Tony Hicks of The Hollies, Captain Sensible of early English punk band The Damned and Greg Kihn, and Mark XII electric 12-string guitars as well as the Tempest XII, also made in Italy, which featured a more conventional body style. The Phantom XII and Mark XII both featured a unique Bigsby style 12-string vibrato tailpiece, which made them, along with Semie Moseley's "Ventures" model 12-string Mosrite, the only 12 string electric guitars to feature such a vibrato. The Stereo Phantom XII had split pick-ups resembling the Fender precision bass, each half of which could be sent to a separate amplifier using an onboard mix control. Vox produced a number of other models of 6 and 12 string electric guitars in both England and Italy.
Other Archtone owners may notice a slightly different model number, but with the exception of a tenor version, the only difference is the finish. The H1213 (your model) was finished with a shaded-brown sunburst, the H1214 was ivory-colored with a flame effect, and the H1215 was a sunburst with a grained effect. In excellent condition, this model is worth between $200 and$250 today. But in the average condition yours appears to be, it’s worth between $100 and$150.
Chorus and flanging are created in fairly similar ways, the main difference being that chorus doesn't use feedback from the input to the output and generally employs slightly longer delay times. Phasing is similar to both chorus and flanging, but uses much shorter delay times. Feedback may be added to strengthen the swept filter effect it creates. Phasing is far more subtle than flanging and is often used on guitar parts. With chorus, phasing and flanging, the delay time, modulation speed and modulation depth affect the character of the effect very significantly. A generic modulated delay plug-in allows you to create all these effects by simply altering the delay time, feedback, modulation rate and modulation depth parameters. Most of the time, low modulation depths tend to work well for faster LFO speeds (often also referred to as the rate), while deeper modulation works better at slower modulation rates.
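As a rough illustration of the point that chorus, flanging and related effects fall out of one modulated delay with different settings, here is a minimal C sketch of such a delay line. The struct fields, buffer size and sine LFO are assumptions for the example, not a description of any particular plug-in; short base delays with feedback lean toward flanging, longer base delays without feedback toward chorus.

```c
#include <math.h>

#define MAX_DELAY 4800            /* 100 ms of buffer at 48 kHz (illustrative) */

typedef struct {
    float buf[MAX_DELAY];
    int   write_pos;
    float base_delay;             /* in samples: a few ms for flanging, ~15-30 ms for chorus */
    float depth;                  /* modulation depth in samples */
    float rate_hz;                /* LFO rate */
    float feedback;               /* ~0 for chorus, > 0 to strengthen the flange */
    float phase;                  /* LFO phase, 0..1 */
    float sample_rate;
} ModDelay;

static float mod_delay_process(ModDelay *d, float in)
{
    /* A sine LFO sweeps the delay time around the base value. */
    float lfo   = sinf(6.2831853f * d->phase);
    float delay = d->base_delay + d->depth * lfo;   /* must stay below MAX_DELAY */

    /* Read a fractional number of samples behind the write head (linear interpolation). */
    float rp = (float)d->write_pos - delay;
    while (rp < 0.0f) rp += (float)MAX_DELAY;
    int   i0 = (int)rp;
    int   i1 = (i0 + 1) % MAX_DELAY;
    float fr = rp - (float)i0;
    float wet = d->buf[i0] * (1.0f - fr) + d->buf[i1] * fr;

    /* Feed some of the delayed signal back into the line to strengthen flanging. */
    d->buf[d->write_pos] = in + wet * d->feedback;
    d->write_pos = (d->write_pos + 1) % MAX_DELAY;

    d->phase += d->rate_hz / d->sample_rate;
    if (d->phase >= 1.0f) d->phase -= 1.0f;

    return 0.5f * (in + wet);     /* equal blend of dry and modulated signal */
}
```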
Distortion sound or "texture" from guitar amplifiers is further shaped or processed through the frequency response and distortion factors in the microphones (their response, placement, and multi-microphone comb filtering effects), microphone preamps, mixer channel equalization, and compression. Additionally, the basic sound produced by the guitar amplifier can be changed and shaped by adding distortion and/or equalization effect pedals before the amp's input jack, in the effects loop just before the tube power amp, or after the power tubes.
I purchased a Dean Performer Plus -acoustic/electric with cutaway; the top is sitka spruce and the back & sides are mahogany;the fretboard & bridge are rosewood, the saddle is bone, the nut is tusq… now I am not saying this guitar sounds like my Martin – BUT – it does sound awfully good. I would highly recommend this for beginners & intermediates. The action on the neck is extremely good for a low budget guitar. They list for under $400. If you get a chance check one out… see how it matches up against your list of guitars. I hope this was helpful- especially for the beginners. Sincerely > George M. Meanwhile, the Gibson Vari-Tone circuit uses a rotary switch rather than a pot, and a set of capacitors of ascending size. The small caps have a brighter tone, and the large ones sound darker. But once a cap is engaged, it’s engaged all the way. In other words, the cutoff frequency varies as you move the switch, but not the percentage of affected signal—it’s always 100%. (The Stellartone ToneStyler employs the same concept, with as many as 16 caps arranged around a rotary switch.) A small number of bass amps designed for the upright bass have both a 1/4" input for a piezoelectric pickup and an XLR input for a condenser microphone mounted on the bass, with a simple mixer for combining the two signals, as described below. Some Acoustic Image amps have a dual input design. A rare feature on expensive amplifiers (e.g., the EBS TD660) is the provision of phantom power to supply electrical power over the patch cable to bass pickups, effects, a condenser mic (for an upright bass player) or other uses. A small number of 2010-era amps that have digital modelling features may have an input for a computer (e.g., USB), so that new digital effects and presets can be loaded onto the amp. It helps if you shop frequently but at my Guitar Center the tech is frequently going through guitars on the wall and setting them up so it's ready to be sold without the need for a setup. They have motivation to keep their guitars setup. I mean, have you ever went to a shop, picked up a guitar you wanted, and it had stupid high action? You're not gonna buy it until it's setup right? If they're setup, they'll play better and it'll be a lot easier to sell. music is an expression with a variety of feelings involved.there is no such individual as the greatest guitarist.there are however a great number of highly talented,highly skilled and original guitar players.they encompass many genres of style ,technique,they should not be compared with each other.rather they should be appreciated for their individuality and that magnetism that makes them all unique. This guitar master knows wood. He understands its rhythm. He's a master woodworker and began building acoustic guitars when he was a child. "I couldn't afford the ones I wanted," he says, "so I built them." Perretta Guitars is the result of his experiments. But it wasn't until he toured with the guitars that he'd receive some of the best advice of his life from George Gruhan, a guitar master in Music City, whose customers included Eric Clapton, Neil Young, Johnny Cash and George Harrison: "If you want to work in this business, do repair work." Some people like to play the two notes on 5th and 4th strings with a small barre with the 3rd finger. It's O.K. to do that, but I think using two fingers gives you a better finger position on the notes; you'll get a better sound that way, it makes it easier to change chords most of the time and easier to get all the thin strings muted. 
I strongly advise to learn it this way, and then if you still prefer to use the little barre you have the option of choosing whichever one works best in any situation!

COST – I have touched upon this topic several times already, but I feel like I need to reiterate. Amps are usually not a cheap thing to come by, especially if you want a tube amp. But practice amps are good because they help beginners develop their skills without having to spend several hundred. Needless to say, even practice amps come at various prices. For instance, the Donner Electric Guitar Amplifier 10 Watt Classical Guitar AMP DEA-1 we talked about is half the price of the Roland CUBE-10GX 10W 1×8 Guitar Combo Amp. While price often is a good guideline to which model is better, you should always keep in mind that more famous brands will have more expensive models even in the cheap sections. Apart from that, keep in mind that an amp having a lot of great features and effects does not mean it's good.

This pedal has been a great start; it has been looked after very well and is in excellent condition. I am upgrading my sound, which is the reason for the sale.

"Great guitarists know it's all about nuance. With its built-in expression pedal, the Zoom G1Xon allows you to add subtlety and refinement to your performance. Add in 100 great-sounding guitar effects and amp models, with the ability to use ...

Delay pedals are among the most popular effects around, and the reason is simple: a delay pedal not only gives your sound a professional sheen and adds a three-dimensional quality (even when set for a discreet, atmospheric effect), but it can also produce a wide variety of not-so-subtle sounds and textures, ranging from ear-twisting rhythmic repeats (à la Eddie Van Halen's "Cathedral") to faux twin-guitar harmonies and live looping.

But having hot tubes is only half the recipe for getting great tone. Room sound is the other ingredient necessary for obtaining a full-bodied guitar track. It didn't take me long to figure out that the guitarists on my formative blues sessions were slyly contributing to my "education" by nudging the mics away from their amps as soon as I left the room. Thanks to their clandestine efforts, my ears opened up to an entire new world of electric-guitar sounds.
Speaking of session guys, we have Joe Messina, but where are his partners Robert White and Eddie Willis? Or Dennis Coffey? There's a whole slew of great musicians whose names get forgotten but whose playing we all instantly recognise - alongside the Funk Brothers, there are the likes of Buddy Emmons and Grady Martin from the A Team, and then there's the Wrecking Crew and the whole LA scene. Someone has already remembered Glen Campbell, but how about Howard Roberts and Ted Greene? Whoa! How can you guys have neglected Barney Kessel, truly a top ten contender?

Large-scale traffic in guitars between Japan and the United States began in the very late '50s. Jack Westheimer of Chicago's W.M.I. corporation has published his recollection of having begun to bring in Kingston guitars purchased from the Terada Trading Company in around 1958. The Japanese themselves began advertising their wares to American distributors as early as July of 1959, when Guyatone ran a small space ad touting small pointed single-cutaway solidbodies more or less resembling Teisco's mini-Les Pauls.

Steve Albini, on the other hand, finds it useful to think in terms of blending 'bright' and 'dark' mics. "Normally I'll have two microphones on each cabinet, a dark mic and a bright mic, say a ribbon microphone and a condenser, or two different condensers with different characters." Eddie Kramer's discussion of his Hendrix sessions reveals a similar preference: "Generally speaking, it was either a U67 or a Beyerdynamic M160, or a combination of both, which I still use today. It might be slightly different, of course, but the basic principle's the same — a ribbon and a condenser."

Am I missing something? Few MIDI artists can document the finest details of legato expressed by some human performers, but such nuance is within the scope of current notational languages. If no human can or will produce such detailed documentation of existing performances, computational machines can, if not now, soon -- unless the inexorable march toward AI that can pass the Turing test is more exorable than it might appear.

If you're looking for a decent guitar at a super affordable price, look no further. This Ibanez features Powersound Pickups as well as 5-way switching to give you a variety of tones and styles. With a contoured body, it's super easy to get comfortable while shredding away on this puppy. If your music styles fall in line with hard rock or country, then this is the guitar for you!

The best advice any guitar player can give when it comes to figuring out which guitar to get is to buy the best model your money can afford. In most cases, this advice is rock solid. Even if you are a beginner who isn't sure whether or not you want to commit to playing guitar long term, you can always sell the guitar with a minimal loss, like a decent car versus a junker.
Think of it as an investment as long as you maintain and take care of it. CF Martin & Company was established by Christian Fredrick Martin in 1833, is an American guitar manufacturer. It is highly regarded for its guitars with steel strings. Martin Company is a leading manufacturer of flat top guitars that produce top quality sound. They fabricate classic and retro styles of guitars with varied body type and sizes available in 12, 14, and 15-string styled guitars. Top quality tonewood is used after testing the sounds and vibrations produced within a pattern of a time frame. Choose the strings based on the genre of music and style you will play this guitar. The starting price of an acoustic Martin guitar is 23,000 INR approximately. If anybody needs a Bridge, I have a Teisco Roller Bridge for sale, it is a a copy of a Gretsch Roller bridge but includes a solid steel Base like a Rickenbacker Bridge Base. The string saddles are rollers which are adjustable side-to-side for proper string spacing, and each side of the bridge is adjustable for Height. It is in excellent condition, probably from 1965 thru 1968. On a scale of 0 to 10, it is a 8. Sometimes a pitch shifter will retain the original signal while adding in the new shifted pitch. The new shifted note can be set at a given intervallic distance from the original and will automatically harmonize any given series of notes or melody. In short, it will harmonize the guitar by duplicating the melody at a 3rd, 5th, or whatever interval you define. Since 1996, ESP’s subsidiary LTD has been creating quality guitars at very affordable prices. The EC-256, for example, is a great guitar if you’re looking to spend around$400. With extra-jumbo frets and a thin-U neck, this guitar is super comfortable to play. These guitars are known for their reliability and excellent build quality. And with a lightweight body, they make for great gigging guitars.
The tone pot is usually connected just as a variable resistor (one lug is left unconnected), so you get a relatively small resistance (compared to the amp's input impedance) in series with a cap going to ground. When you roll the tone down there is no significant resistance left in that path, so everything above the cutoff frequency is shunted to ground (with a gradual slope). With the tone cranked all the way up, the full pot resistance sits in front of the cap, so far less treble bleeds away; that is the brightest position of the tone pot (see the impedance sketch below).
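A quick way to see why the cap in this circuit bleeds treble rather than bass is to look at its impedance magnitude, |Z| = 1/(2*pi*f*C). The tiny C sketch below tabulates this for two common cap values; the specific capacitances and frequencies are illustrative assumptions, and the model ignores pickup inductance and other loading.

```c
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979;
    const double caps[2]  = { 0.022e-6, 0.047e-6 };   /* farads: two common tone cap values */
    const double freqs[3] = { 100.0, 1000.0, 5000.0 }; /* bass, midrange, treble test points */

    for (int c = 0; c < 2; c++)
        for (int f = 0; f < 3; f++) {
            double z = 1.0 / (2.0 * PI * freqs[f] * caps[c]);  /* capacitor impedance magnitude */
            printf("C = %.3f uF, f = %4.0f Hz -> |Z| ~ %.1f kOhm\n",
                   caps[c] * 1e6, freqs[f], z / 1000.0);
        }
    return 0;
}
```

At 100 Hz a 0.022 uF cap looks like roughly 72 kOhm, while at 5 kHz it drops to about 1.4 kOhm, which is why only the high end is shunted when the tone control removes the series resistance.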
You can think of these as distortion pedals turned up to 11. Usually, a fuzz pedal comes in as an accent for solos and intros, since its effect is so strong that it could overpower the rest of the band otherwise. You can hear an example of fuzz in the classic recording of Jimi Hendrix playing The Star-Spangled Banner at Woodstock. This is a good type of pedal to try out as an introduction to more powerful effects.
I must confess -- I am horrible at soldering. So after messing up another wiring harness with my soldering skills, I came across ObsidianWire and purchased out of desperation. Now I wish this would have been my first choice. The wiring sounds awesome, it was a breeze to install and the included switch and input jack completed the upgrade. I would HIGHLY recommend ObsidianWire harnesses." - Ross G Vintage 50s Wiring for Les Paul
There are a few approaches you can take to get started browsing all this tablature. For example, you might start by looking for music that fits a certain theme. Alfred's 2015 Modern Christian Hits, the Hal Leonard The Ultimate Christmas Guitar Songbook and the Hal Leonard VH1 100 Greatest Hard Rock Songs are just three examples of tab books aimed at specific genres or occasions. Another idea would be to narrow down your options to tablature with included CDs; they give you the option to play along, making the songs easier and quicker to learn.
In 1959, with sales under pressure from the more powerful Fender Twin and from The Shadows, who requested amplifiers with more power, Vox produced what was essentially a double-powered AC15 and named it the AC30. The AC30, fitted with alnico magnet-equipped Celestion "blue" loudspeakers and later Vox's special "Top Boost" circuitry, and like the AC15 using valves (known in the US as tubes), helped to produce the sound of the British Invasion, being used by The Beatles, The Rolling Stones, The Kinks and the Yardbirds, among others. AC30s were later used by Brian May of Queen (who is known for having a wall of AC30s on stage), Paul Weller of The Jam (who also assembled a wall of AC30s), Rory Gallagher, The Edge of U2 and Radiohead guitarists Thom Yorke, Jonny Greenwood and Ed O'Brien. The Vox AC30 has been used by many other artists including Mark Knopfler, Hank Marvin who was instrumental in getting the AC30 made, Pete Townshend, Ritchie Blackmore, John Scofield, Snowy White, Will Sergeant, Tom Petty, The Echoes, Mike Campbell, Peter Buck, Justin Hayward, Tom DeLonge, Mike Nesmith, Peter Tork, Noel Gallagher, Matthew Bellamy, Omar Rodriguez-Lopez, Dustin Kensrue, Tame Impala, and many others.
The Dean Vendetta pack offers a sharp looking metallic red super Stratocaster style guitar with dual humbuckers, a tremolo bridge for fun dive tricks, and a 24 fret neck. This is perfect for players that want to start learning lead guitar as soon as possible. The neck is quite fast for a guitar in its price range. Also included with the purchase is a 10-watt practice amp, gig bag, instrument cable, picks, a tuner, and a fairly comfortable strap.
For his work on Supernatural, Glenn Kolotkin turned to elaborate multi-miking as a way of managing Carlos Santana's complicated setup. "I used multiple microphones on Carlos' guitars: Electrovoice RE20s close, Neumann U47s further away, an SM56, U87s. He was playing through an assortment of amplifiers at the same time, and by using multiple microphones I was able to get just the right blend."
Epiphone features all-metal rock solid hardware on all of its instruments. The Les Paul Special VE comes standard with the legendary Locktone Tune-o-matic bridge and Stopbar tailpiece for easy set up. Tuning is fast and reliable with Epiphone Premium Covered tuners with a 14:1 ratio.The higher the ratio, the more accurate your tuning. The tuners are mounted on an Epiphone Clipped Ear headstock with Les Paul Model in gold and the Epiphone log in silver. In addition, a "2016" Edition logo is on the back of the headstock.
Now we’ve moved away from the three ‘main’ shapes of steel-strung acoustics, we start looking at the off-shoots and variants which exist to give players even more options and opportunity to find the guitar which is exactly right for them. First among them is probably still well-known and identifiable in itself; the round-shoulder dreadnought. Again, these are largely Gibson-led creations, and include among their ranks the famous J-45 style famously employed by the Beatles and Noel Gallagher.
Rather than superfluous power, I suspect the copywriter really meant something like superior!! However, then again, maybe they did get it right, because they featured a 6A6 preamp tube that was exceptionally weak and microphonic. These amps also had a chassis built in Chicago, by Chicago Electric, with a cabinet made in Chicago, by Geib. These had performance problems and in 1937, National Dobro went back to using Webster chassis with Geib cabinets.
Vacuum tube or "valve" distortion is achieved by "overdriving" the valves in an amplifier.[40] In layperson's terms, overdriving is pushing the tubes beyond their normal rated maximum. Valve amplifiers—particularly those using class-A triodes—tend to produce asymmetric soft clipping that creates both even and odd harmonics. The increase in even harmonics is considered to create "warm"-sounding overdrive effects.[37][41]
# 5 Star...So fun...I bought the playstation 4 for my wife for Christmas it came with the game uncharted 4 I'm surprised my wife played it and loved it so when she seen the uncharted the Nathan drake collection it has 1 and 2 and 3 on it she had to have it she started playing it and she loves this game also...great games to have for that special moment when you are in the mood for a journey.Few games have that replay ability when you get to know Drake you just can't put it down great deal great price only problem why I gave it 4 Stars there is no incentive or discount if you have already purchased it for Ps3 and now you would want it for your Ps4 but as I said great deal great story great price
Scott Knickelbine began writing professionally in 1977. He is the author of 34 books and his work has appeared in hundreds of publications, including "The New York Times," "The Milwaukee Sentinel," "Architecture" and "Video Times." He has written in the fields of education, health, electronics, architecture and construction. Knickelbine received a Bachelor of Arts cum laude in journalism from the University of Minnesota.
Almost every guitar you see on our website is available in our Chicago guitar showroom. While we carry hard-to-find, top of the line vintage guitars, Rock N Roll Vintage Guitar Shop also carries new guitars and basses from Fender (Squire), Martin, Seagull, Lakland, Hofner, Kay, Hanson, EGC, and other top brands. You can also find top of the line amps including Ampeg, Analog Outfitters, Divided By 13, Fender, Hi-Tone, Laney, Magnatone and Orange to name a few.
Created four identical test rigs out of scrap wood from my workshop. They are all 725 x 35 x 47 mm in size, and weigh 651 grams (Alder), 618 g (Koa), 537 g (Swamp Ash), and 818 g (Zebrano). They obviously don’t exactly mimic a guitar, but should for the sake of the test resemble the type of tensions and forces that a guitar body with a neck is subjected to.
http://www.computer.org/csdl/trans/tk/2004/12/k1457-abs.html | Semantics-Preserving Dimensionality Reduction: Rough and Fuzzy-Rough-Based Approaches
December 2004 (vol. 16 no. 12)
pp. 1457-1471
Semantics-preserving dimensionality reduction refers to the problem of selecting those input features that are most predictive of a given outcome; a problem encountered in many areas such as machine learning, pattern recognition, and signal processing. This has found successful application in tasks that involve data sets containing huge numbers of features (in the order of tens of thousands), which would be impossible to process further. Recent examples include text processing and Web content classification. One of the many successful applications of rough set theory has been to this feature selection area. This paper reviews those techniques that preserve the underlying semantics of the data, using crisp and fuzzy rough set-based methodologies. Several approaches to feature selection based on rough set theory are experimentally compared. Additionally, a new area in feature selection, feature grouping, is highlighted and a rough set-based feature grouping technique is detailed.
Index Terms:
Dimensionality reduction, feature selection, feature transformation, rough selection, fuzzy-rough selection.
Citation:
Richard Jensen, Qiang Shen, "Semantics-Preserving Dimensionality Reduction: Rough and Fuzzy-Rough-Based Approaches," IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 12, pp. 1457-1471, Dec. 2004, doi:10.1109/TKDE.2004.96 |
http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=2%2B1D_topological_gravity | # (2+1)-dimensional topological gravity
In two spatial and one time dimensions, general relativity turns out to have no propagating gravitational degrees of freedom. In fact, it can be shown that in a vacuum, spacetime will always be locally flat (or de Sitter or anti de Sitter depending upon the cosmological constant). This makes (2+1)-dimensional topological gravity a topological theory with no gravitational local degrees of freedom.
Edward Witten[1] has argued this is equivalent to a Chern-Simons theory with the gauge group $SO(2,2)$ for a negative cosmological constant, and $SO(3,1)$ for a positive one, which can be exactly solved, making this a toy model for quantum gravity. The Killing form involves the Hodge dual.
Witten later changed his mind,[2] and argued that nonperturbatively 2+1D topological gravity differs from Chern-Simons because the functional measure is only over nonsingular vielbeins. He suggested the CFT dual is a Monster conformal field theory, and computed the entropy of BTZ black holes.
## References
1. ^ Witten, Edward (19 Dec 1988). "(2+1)-Dimensional Gravity as an Exactly Soluble System". Nuclear Physics B 311 (1): 46–78. Bibcode:1988NuPhB.311...46W. doi:10.1016/0550-3213(88)90143-5. URL: http://srv2.fis.puc.cl/~mbanados/Cursos/TopicosRelatividadAvanzada/Witten2.pdf
2. ^ Witten, Edward (22 June 2007). "Three-Dimensional Gravity Revisited". arXiv:0706.3359.
https://handmade.network/m/jstimpfle | ## Recent Activity
I've been dreaming of rendering the Ghostscript tiger for a long time, now I'm dangerously close to quality rendering. Sometimes using a library lets you focus on what you care about, heh. I used nanosvg.h (instead of my own shitty less complete SVG parser) to parse the SVG file into a list of shapes, each shape composed of a set of cubic bezier outline paths. These paths are naturally cached for efficiency reasons. I ripped a piece of code out of nanosvgrast.h to convert the bezier outlines to line segment outlines using a recursive subdivision algorithm. The segments outlines are cached as well, but the conversion is redone whenever the scaling level (zoom) has changed by more than 2x.
Everything else is computed end-to-end as before, making tiles with localized geometry on the CPU, then rendering each tile using the equivalent of scanline rasterization on the GPU. Still scales reasonably well, and I'm hopeful to find out what options this approach affords. While there are some drawbacks compared to tesselation, it's a less data-intensive approach, and probably more flexible all considered.
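For reference, recursive subdivision of a cubic bezier usually looks something like the sketch below: split with de Casteljau at t = 0.5 until a flatness test against the chord passes. This is a generic version of the idea, not the actual nanosvgrast.h code; emit(), the tolerance and the depth cap are placeholders.

```c
#include <math.h>

typedef struct { float x, y; } P2;

static void emit(P2 p) { (void)p; /* append p to the current segment outline */ }

static void flatten_cubic(P2 p0, P2 p1, P2 p2, P2 p3, float tol, int depth)
{
    /* Flatness test: how far the two inner control points sit from the chord p0-p3. */
    float dx = p3.x - p0.x, dy = p3.y - p0.y;
    float d1 = fabsf((p1.x - p3.x) * dy - (p1.y - p3.y) * dx);
    float d2 = fabsf((p2.x - p3.x) * dy - (p2.y - p3.y) * dx);

    if (depth > 10 || (d1 + d2) * (d1 + d2) < tol * (dx * dx + dy * dy)) {
        emit(p3);               /* flat enough: output one line segment endpoint */
        return;
    }

    /* de Casteljau split at t = 0.5, then recurse on both halves. */
    P2 p01  = { (p0.x + p1.x) * 0.5f, (p0.y + p1.y) * 0.5f };
    P2 p12  = { (p1.x + p2.x) * 0.5f, (p1.y + p2.y) * 0.5f };
    P2 p23  = { (p2.x + p3.x) * 0.5f, (p2.y + p3.y) * 0.5f };
    P2 p012 = { (p01.x + p12.x) * 0.5f, (p01.y + p12.y) * 0.5f };
    P2 p123 = { (p12.x + p23.x) * 0.5f, (p12.y + p23.y) * 0.5f };
    P2 mid  = { (p012.x + p123.x) * 0.5f, (p012.y + p123.y) * 0.5f };

    flatten_cubic(p0, p01, p012, mid, tol, depth + 1);
    flatten_cubic(mid, p123, p23, p3, tol, depth + 1);
}
```

Re-running this whenever the zoom changes by more than 2x, as described above, keeps the segment count roughly proportional to on-screen size.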
Worked on optimizing my 2D renderer (which renders polygon outlines end-to-end, based on ravg.pdf paper) more. Noticed that benchmarking is hard. Interestingly, while I am recording with ShareX, it takes only about half the "normal" time to generate a frame. Here it takes around 400 usec to generate the geometry and specialize it to tiles of 64x64 pixels. It takes about 80 usec to push the tiles to OpenGL. When syncing with glFinish() at the end, the OpenGL part takes around 300usec. Probably not measuring anything interesting here, since the GPU work is dominated by various latencies. Something similar could be the case for CPU part (geometry) since the times are probably subject to CPU scaling for example.
If I do both workloads 20 times per frame, both measurements increase by 10x-15x, still running comfortably at 60 FPS. More complicated geometry is definitely needed now to make any serious evaluations.
Improved the polygon renderer to allow for per-polygon colors. It was actually quite a bit of work since I had to optimize how tile paint jobs are dispatched. Came up with a simple API - the geometry is built by repeatedly calling add_line_segment(Ctx *ctx, Point a, Point b) and then finish_polygon(Ctx *ctx, Polygon_Style *style) after all the segments of a polygon were added. Rinse and repeat for next polygon. What's nice is that the internal structures aren't much more complex. Given that everything is chunked by tiles already, I can afford to duplicate a handle to the Polygon_Style in each tile paint job. The renderer doesn't need to track individual polygons, so there is no need to maintain a complicated object graph.
Next up is testing with more complex shapes and seeing which tricks can be employed to keep it running smoothly.
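As a usage sketch, building one of the stars from the earlier videos with this API might look roughly like the code below. add_line_segment() and finish_polygon() are the calls described above; the Point field names, the opaque Ctx/Polygon_Style declarations and the star construction itself are assumptions for illustration, not the project's code.

```c
#include <math.h>

/* Assumed shapes of the types mentioned in the post. */
typedef struct { float x, y; } Point;
typedef struct Ctx Ctx;
typedef struct Polygon_Style Polygon_Style;

void add_line_segment(Ctx *ctx, Point a, Point b);
void finish_polygon(Ctx *ctx, Polygon_Style *style);

static void build_star(Ctx *ctx, Point c, float r_outer, float r_inner,
                       Polygon_Style *style)
{
    enum { TIPS = 5 };
    Point prev = { c.x, c.y - r_outer };                /* start at the top tip */
    for (int i = 1; i <= TIPS * 2; i++) {
        float r = (i & 1) ? r_inner : r_outer;          /* alternate inner/outer radius */
        float a = (float)i * (3.14159265f / TIPS) - 1.5707963f;
        Point cur = { c.x + r * cosf(a), c.y + r * sinf(a) };
        add_line_segment(ctx, prev, cur);               /* one outline edge per call */
        prev = cur;
    }
    finish_polygon(ctx, style);                         /* close the outline with this style */
}
```

Since the style handle is only recorded per tile paint job, calling this many times with different styles should cost little beyond the segments themselves, which seems to be the point of the design.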
Implemented the most basic mechanism from ravg.pdf ( https://hhoppe.com/ravg.pdf ) paper, which cost me more than 3 days. Seems like graphics programming is hard. I noticed it helps a lot to build visualizations for all the little preprocessing data structures, to make sure they don't break even in corner cases. Making the shaders hot-reloadable helped a lot too.
The 144 stars shown in the video consist of 720 line segments, and get preprocessed in small tiles (32x32 screen pixels), each of which has a specialized description of the relevant line segments which includes some of the original segments clamped to the cell and as well some added artificial segments to complete the description. The tiles can then get rasterized on the GPU in a conventional vertex/pixel shader pipeline. Each pixel "finds" whether it is inside or outside relative to the segments of its cell. There is proper antialiasing for partially covered pixels, too.
On my computer with a 4K resolution screen, the 144 stars are generated and preprocessed in about 1ms (CPU) and the cells get rastered in about 1ms as well (CPU / GPU (OpenGL)). About 2x this time in debug mode with runtime checking enabled.
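The per-pixel "finds whether it is inside or outside" step can be pictured as a non-zero winding test against the cell's segment list, as in the hedged C sketch below. The real work happens in a shader and also accumulates coverage for antialiasing; Seg and the function here are placeholders, not the project's code.

```c
typedef struct { float ax, ay, bx, by; } Seg;

/* Non-zero winding number of point (px, py) against a cell's segment list. */
static int winding_at(float px, float py, const Seg *segs, int n)
{
    int w = 0;
    for (int i = 0; i < n; i++) {
        float ay = segs[i].ay, by = segs[i].by;
        /* Does this segment straddle the horizontal ray y = py going right from px? */
        if ((ay <= py) != (by <= py)) {
            float t = (py - ay) / (by - ay);
            float x = segs[i].ax + t * (segs[i].bx - segs[i].ax);
            if (x > px)
                w += (by > ay) ? +1 : -1;   /* crossing direction gives the sign */
        }
    }
    return w;   /* non-zero means inside under the non-zero winding rule */
}
```

The "artificial segments" mentioned above are what make this test self-contained per tile: without them, a tile that a path merely passes over would have no crossings to count.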
Work-in-progress SVG parser to test my triangulator and future vector UI work
Putting my Delaunay triangulator to some good use, with two vector glyphs. The vector glyphs are hand-coded as polygon outlines (each of these glyphs needs two outlines).
Took the time today to hunt a bug in my Delaunay triangulator. The blue edges are the ones which are not locally Delaunay (something is wrong). It seems there was an oversight in Guibas & Stolfi's Delaunay Paper, or maybe I was reading it wrong. I found & fixed the problem. I had the idea of using the (now (more) correct) Delaunay triangulator to outline point clouds. In the video I'm simply hiding all edges that are longer than some constant that is tuned to the granularity of the input point cloud.
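The edge-hiding trick is essentially a length filter over the triangulation's edges; a minimal sketch, with Edge as a simplified stand-in type and the threshold supplied by the caller:

```c
typedef struct { float ax, ay, bx, by; } Edge;

/* Keep only Delaunay edges shorter than a threshold tuned to the point-cloud spacing. */
static int edge_visible(Edge e, float max_len)
{
    float dx = e.bx - e.ax, dy = e.by - e.ay;
    return dx * dx + dy * dy <= max_len * max_len;   /* compare squared lengths, no sqrt needed */
}
```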
This was incredibly hard to achieve (for me). It uses Loop-Blinn with earclipping triangulator. In the case of one (or more) vertices intersecting a bezier "triangle" (control point + 2 polygon neighbours) the bezier triangle must be split in two (or more) "wings" (but for rendering each of the wings still has the original triangle as control points of course). There are so many things that can go wrong with this basic fix that I almost went insane. In the end of the video there is a tiny glitch but that one is acceptable for me since I made the polygon self-intersecting (looking at just the polygon vertices; the concave bezier curve is just added as an extra with absolutely no bearing on the rest of the process)
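The underlying per-pixel curve test from Loop-Blinn is tiny compared to all that triangulation bookkeeping. A generic sketch of the published technique (not my shader; the concave flag is a stand-in for however orientation is actually tracked):

```c
/* Loop-Blinn style fill test for one quadratic curve triangle: the three
 * vertices carry "texture" coordinates (0,0), (1/2,0), (1,1), the GPU
 * interpolates them, and the sign of u^2 - v says on which side of the
 * curve the pixel lies. When a triangle is split into wings, the wings keep
 * the coordinates derived from the original control triangle, as described
 * above. */
static int inside_quadratic(float u, float v, int concave)
{
    float f = u * u - v;           /* implicit form of the canonical parabola */
    return concave ? (f >= 0.0f) : (f <= 0.0f);
}
```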
Working on Constrained Delaunay (forced edges)
Implemented incremental Delaunay Triangulation from Guibas & Stolfi's Paper (1985). Not sure everything is right, but it's fun to play with. Added an animation for the Locate procedure as well.
My brain was melting while I was trying to implement some robust triangulations... so after getting just the Quad-Edge data structure right, it was time for some ryanimations to keep myself motivated 🙂
More polygon rasterizer work. Editing a B-spline curve. This stuff is hard to get right, but I've made some progress...
Software rasterization experiments. I know it's ugly AF, and the code looks even worse - but it's still a good feeling to have managed to put something on the screen. The torus is represented as two separate non-intersecting polygons (inner and outer circle, 360 points each) and is rasterized using a generic polygon rasterizer. With a little more work I'll be able to rasterize vector fonts as well. The lines are bezier curves sampled 128 times and at each sample a normal is computed and scaled to a certain thickness, resulting in a band that is easily triangulated. The big blue circle is rendered implicitly (center + radius), which allows for quick and dirty antialiasing that looks somewhat better than the rest.
Coding a GUI from scratch for work. This time around, I decided to start with a software rasterizer, which was a great choice. Added an OpenGL backend later. Colored + Textured Quads only. Freetype for font rasterization. A new idea was to use 1-dimensional glyph allocation for the font atlas. It's perfect if we're doing pixel-precise blitting - no 2 dimensions needed since there is no texture interpolation. This way simplifies the glyph allocation, and makes it very easy to just dynamically upload glyph data to a texture in OpenGL mode, much like a streaming vertex buffer. Haven't profiled the OpenGL backend, hopefully I can improve but 3ms/frame is bearable for now, most time probably spent waiting for uploads (single threaded).
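The 1-dimensional glyph allocation amounts to little more than a bump allocator over one long strip of texels. A sketch of the idea (capacity and flushing policy are invented, not taken from the actual GUI code):

```c
/* The atlas is treated as one long strip of texels, so allocating space for
 * a glyph is just bumping an offset; the blit then addresses the strip with
 * integer math, which is fine because there is no texture interpolation. */
typedef struct {
    int capacity;                 /* total texels in the strip */
    int used;                     /* bump pointer */
} Atlas1D;

/* Returns the starting texel of the glyph, or -1 if the strip is full; the
 * caller could then flush/upload and reset, much like a streaming buffer. */
static int atlas1d_alloc(Atlas1D *a, int w, int h)
{
    int texels = w * h;           /* glyph rows stored back to back */
    if (a->used + texels > a->capacity)
        return -1;
    int offset = a->used;
    a->used += texels;
    return offset;
}
```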
Software rasterized image viewer. This particular image has 272 megapixels (14108 x 19347) and ~1GB of raw size. A mipmap of the image is created ahead of time and saved as 1024x1024 tiles. The viewer runs in about 50MB of RAM, with 16 cached tiles of 1024x1024 pixels, loading tiles from the appropriate mipmap level on demand. Some prediction is still needed to avoid flickering.
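A sketch of what such a small fixed-size tile cache can look like. The slot count and tile size follow the numbers above; the least-recently-used policy and the load_tile_from_disk() function are assumptions for illustration, not the viewer's actual code.

```c
#include <stddef.h>

enum { TILE_DIM = 1024, CACHE_SLOTS = 16 };

typedef struct {
    int level, tx, ty;            /* mipmap level and tile coordinates */
    unsigned last_used;           /* for least-recently-used eviction */
    unsigned char *pixels;        /* TILE_DIM * TILE_DIM * 4 bytes, or NULL */
} TileSlot;

void load_tile_from_disk(TileSlot *slot, int level, int tx, int ty); /* assumed */

static TileSlot cache[CACHE_SLOTS];
static unsigned tick;

static TileSlot *get_tile(int level, int tx, int ty)
{
    TileSlot *victim = &cache[0];
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].pixels && cache[i].level == level &&
            cache[i].tx == tx && cache[i].ty == ty) {
            cache[i].last_used = ++tick;
            return &cache[i];                   /* hit: nothing touches disk */
        }
        if (cache[i].last_used < victim->last_used)
            victim = &cache[i];                 /* remember the oldest slot */
    }
    load_tile_from_disk(victim, level, tx, ty); /* miss: reuse the oldest slot */
    victim->level = level;
    victim->tx = tx;
    victim->ty = ty;
    victim->last_used = ++tick;
    return victim;
}
```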
Experimenting with automated layout
... discord slowmode kicking in when trying to send multiple attachments
Did another attempt on structuring GUI from scratch. Taking a somewhat retained approach but without any messaging (no event types / routing etc) which seems to be a big contributor to complexity. Structure used is a big Ui context struct that holds state of inputs/outputs as well as stacks of layout rects, clipping rects, mouse interaction regions. Layout methods used are "RectCut" and two custom layout routines used to arrange the buttons in rows / columns. Code screenshot is how the color pickers at the bottom are assembled from more primitive elements. I feel it's reasonably concise while maintaining flexibility.
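For reference, the RectCut primitive itself is only a handful of lines. A sketch (field names are mine, the idea is the scheme named above):

```c
/* The layout state is just a rectangle; widgets carve their space off one
 * side of it, shrinking the remaining rectangle as they go. */
typedef struct { float x0, y0, x1, y1; } Rect;

static Rect cut_left(Rect *r, float w)
{
    Rect out = { r->x0, r->y0, r->x0 + w, r->y1 };
    r->x0 += w;
    return out;
}

static Rect cut_top(Rect *r, float h)
{
    Rect out = { r->x0, r->y0, r->x1, r->y0 + h };
    r->y0 += h;
    return out;
}

static Rect cut_bottom(Rect *r, float h)
{
    Rect out = { r->x0, r->y1 - h, r->x1, r->y1 };
    r->y1 -= h;
    return out;
}

/* e.g. a color-picker row at the bottom of a panel:
 *   Rect row    = cut_bottom(&panel, 24.0f);
 *   Rect swatch = cut_left(&row, 24.0f);   ...and so on per element. */
```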
Starting a few experiments with UI design.
Trying to get the right organization here has caused me a lot of pain. Still not perfect, but it's getting better at least! And if two or more parties are involved in an operation (and you're stubborn enough to not want to have a central location that knows all the things), there is invariably a lot of bureaucracy involved.
Two nasty words I'm liking more and more: Object-oriented (actor model), Retained mode GUI. ~700 lines OpenGL + GLFW, ~1000 lines for the GUI. Effortlessly clean code (for my standards). Nice isolation and extremely generic widget implementations with deferred messaging. All the widgets have a single inbox and outbox, messaging happens strictly on the nesting hierarchy.
some progress on my Lisp from yesterday. Added strings objects/literals, but more importantly: Eval, Quasiquote, Unquote. The important realization is how macros behave exactly like functions, except they receive their arguments unevaluated.
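The point about macros can be made concrete with the application path of the evaluator. A sketch in C; the types and helpers here are stand-ins for illustration, not my actual interpreter.

```c
/* Stand-in declarations; only the shape of the application path matters. */
typedef struct Object Object;
Object *eval(Object *expr, Object *env);
Object *apply(Object *callable, Object *args, Object *env);
Object *eval_args(Object *args, Object *env);    /* evaluate a list of args */
int     is_macro(Object *callable);

static Object *eval_call(Object *callable, Object *args, Object *env)
{
    if (is_macro(callable)) {
        /* macro: gets the raw, unevaluated syntax; its expansion is then
         * evaluated again (the "evals two times" behaviour mentioned below) */
        Object *expansion = apply(callable, args, env);
        return eval(expansion, env);
    }
    /* function: arguments are evaluated first, the result is returned as-is */
    return apply(callable, eval_args(args, env), env);
}
```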
Had another try at programming a small lisp today. I think I finally understand the evaluation process and macros better. Although macros here are not yet usable, it simply evals two times - first time to splice syntactic (unevaluated) args, and then a second eval pass over the resulting syntax node.
Some 2D UI work
Did some more work on my puzzle game. Longer video at http://jstimpfle.de/videos/DqUynavp5j.mp4
Worked on a puzzle designer / game. Also did some work on a WebAssembly port. Check it out at http://jstimpfle.de/projects/puzzle/puzzle.html
Worked on a puzzle piece designer. There are still crashes in the triangulator. Computational geometry is hard work
Worked some more on a toy lisp implementation to better understand lisp. To be honest, I still haven't found what's so great about it. The huge difference is that everything is lists, so it's much easier to make hard to debug mistakes - which I assume does not only apply to writing the compiler but also writing programs in LISP itself.
The quoting rules and name binding schemes are at least as context-dependent and confusing as say, C, must be for a novice programmer. I tried making very few special cases but haven't really found a good way, and Scheme and also @rxi 's fe both have numerous primitive forms (which need special handling) as well.
Finally took the plunge and started moving all my code to a monorepo. I've decided to define my projects as simple python functions for now that can inspect how the project is built (Compiler/OS etc) and return the set of cfiles, includedirs, linklibs and so on. It feels great so far because I finally can fluently share code between all my projects without copying. No diverging code anymore! I also get a lot of control over my compilation process - I already have included a configuration that targets WASM/WebGL with emscripten. It's all in a single build.py file and the project definitions already outweigh the build logic code. All I'm going to add in the future is checks whether a file needs to be rebuilt.
website screenshot
found an app to record gifs
A little app to combine STL files in tree structures, and move objects around according to their degrees of freedom
jstimpfle |
https://en.wikisource.org/wiki/Page:The_Mathematical_Principles_of_Natural_Philosophy_-_1729_-_Volume_1.djvu/268 | # Page:The Mathematical Principles of Natural Philosophy - 1729 - Volume 1.djvu/268
there will ariſe RGG - RFF + TFF to ${\displaystyle \scriptstyle bT^{m}+cT^{n}}$ as -FF to ${\displaystyle \scriptstyle -mbT^{m-1}-ncT^{n-1}+{\frac {mm-m}{2}}bXT^{m-2}+{\frac {nn-n}{2}}cXT^{n-2}}$ &c. And taking the laſt ratio's that ariſe when the orbits come to a circular form, there will come forth GG to ${\displaystyle \scriptstyle bT^{m-1}+cT^{n-1}}$ as FF to ${\displaystyle \scriptstyle mbT^{m-1}+ncT^{n-1}}$ and again GG to FF as ${\displaystyle \scriptstyle bT^{m-1}+cT^{n-1}}$ to ${\displaystyle \scriptstyle mbT^{m-1}+ncT^{n-1}}$. This proportion, by expreſſing the greateſt altitude CV or T arithmetically by unity, becomes, GG to FF as b + c to mb + nc, and therefore as 1 to ${\displaystyle \textstyle {\frac {mb+nc}{b+c}}}$. Whence G becomes to F, that is the angle VCp to the angle VCP, as 1 to ${\displaystyle \textstyle {\sqrt {\frac {mb+nc}{b+c}}}}$. And therefore ſince the angle VCP between the upper and the lower apſis, in an immovable ellipſis, is of 180 deg. the angle VCp between the ſame apſides in an orbit which a body deſcribes with a centripetal force, that is as ${\displaystyle \textstyle {\frac {bA^{m}+cA^{n}}{A^{3}}}}$, will be equal to an angle of ${\displaystyle \textstyle 180{\sqrt {\frac {b+c}{mb+nc}}}}$ deg. And by the ſame reaſoning if the centripetal force be as ${\displaystyle \textstyle {\frac {bA^{m}-cA^{n}}{A^{3}}}}$ the angle between the apſides will be found equal to ${\displaystyle \textstyle 180{\sqrt {\frac {b-c}{mb-nc}}}}$ deg. After the ſame manner the problem is ſolved in more difficult caſes. The quantity to which the centripetal force is proportional, muſt always be reſolved into a converging |
https://www.soimort.org/mst/7/ | # Languages and models of a first-order logic.
### 2017-09-06
Prologue. This is the summary of a set of notes I took in KU’s Introduction to Mathematical Logic course, that I should have finished months ago but somehow procrastinated until recently. I basically followed Enderton’s A Mathematical Introduction to Logic book (with all its conventions of mathematical notations). A few issues to address in advance:
• These notes are mainly about first-order logic. Propositional logic is assumed to be a prerequisite. The important thing to know is that its semantics (as a Boolean algebra) can be fully described by a truth table, which is obviously finite and mechanizable.
• Naïve set theory is also a prerequisite, that is, one must accept the notion of sets unconditionally: A set is just a collection of objects, and it must exist as we describe it. We are convinced that no paradoxical use of sets will ever emerge, and that no “naïve set” could be a proper class. The formal notion of sets is built upon first-order logic, but without some informal reference to sets we can’t even say what a first-order logic is.
• A set $$S$$ is said to be “countable” iff there exists a bijection $$f : \mathbb{N} \to S$$; otherwise it is “uncountable”.
• One must be willing to accept the validity of mathematical induction on natural numbers (or the well-ordering principle for natural numbers). Again, to formalize the induction principle we need the set theory, but we don’t have it in the first place (until we introduce a first-order logic).
• On the model-theoretic part: Notes on definability and homomorphisms are omitted from this summary, not to mention ultraproducts and the Löwenheim-Skolem theorem. Model theory is a big topic in its own right and deserves an individual treatment, better with some algebraic and topological contexts. (Note that the homomorphism theorem is used in proving the completeness theorem; also the notion of definability is mentioned in the supplementary sections.)
• On the proof-theoretic part: Notes on some metatheorems are also omitted, as they are a purely technical aspect of a Hilbert-style deductive system. (One may find it convenient to prove an actual theorem with more metatheorems, but they are really not adding any extra power to our system.)
• The relation between logic and computability (i.e., Gödel’s incompleteness theorems) is not discussed.
• But the meanings of “decidable” and “undecidable” are clear from the previous notes Mst. #6 (from a computer scientist’s perspective).
• Axiomatic set theory, which is another big part of the course, is not included in these notes. (Maybe I’m still too unintelligent to grasp the topic.) But it is good to know:
• First-order logic has its limitation in definability (i.e., it’s not capable of ruling out non-standard models of arithmetic), until we assign to it a set-theoretic context. So set theory is often considered a foundation of all mathematics (for its expressive power).
• Axiom of Choice (AC) causes some counter-intuitive consequences, but it was shown to be consistent with ZF axioms (Gödel 1938). And there are models of ZF$$\cup\lnot$$AC as well as of ZF$$\cup$$AC.
• Constructivists tend to avoid AC in mathematics. However, Henkin’s proof of the completeness theorem in first-order logic assumes AC (Step II in finding a maximal consistent set). (Thus it is a non-constructive proof!¹)
• Intuitionistic logic is not in the scope of these course notes. (And most logic books, including the Enderton one, are not written by constructive mathematicians.) Basically, in a Hilbert-style system, a classical logic would admit all tautologies in propositional logic as Axiom Group 1. Intuitionistic logic, in contrast, rejects those tautologies that are non-constructive in a first-order setting.
• First-order language: A formal language consisting of the following symbols:
1. Logical symbols
• Parentheses: $$($$, $$)$$.
• Connective symbols: $$\to$$, $$\lnot$$.
• Variables: $$v_1$$, $$v_2$$, …
• Equality symbol (an optional 2-place predicate symbol): $$=$$.
2. Parameters (non-logical symbols; open to interpretation)
• Universal quantifier symbol: $$\forall$$.
• Predicate symbols (relation symbols): $$P_1$$, $$P_2$$, …
• Constant symbols (0-place function symbols): $$c_1$$, $$c_2$$, …
• Function symbols: $$f_1$$, $$f_2$$, …
• When specifying a concrete first-order language $$\mathcal{L}$$, we must say (i) whether the quality symbol is present; (ii) what the parameters are.
Remark 7.1. (Language of propositional logic) The language of propositional logic may be seen as a stripped form of first-order languages, in which parentheses, connective symbols and sentential symbols (the only parameters; may be treated as 0-place predicate symbols) are present. Intuitively, that language might seem too weak to encode our formal reasoning in all kinds of mathematics and many practical areas, so to speak.
• Terms and formulas
• An expression is a finite sequence of symbols (i.e., a finite string). Among all expressions, we are interested in two kinds of them which we refer to as terms and formulas.
• A term is either:
• a single variable or constant symbol; or
• $$f t_1 \cdots t_n$$, where $$f$$ is a $$n$$-place function symbol, and every $$t_i$$ $$(1 \leq i \leq n)$$ is also a term.
• A formula (or wff, well-formed formula) is either:
• $$P t_1 \cdots t_n$$, where $$P$$ is a $$n$$-place predicate symbol (or the equality symbol $$=$$), and every $$t_i$$ $$(1 \leq i \leq n)$$ is a term; or
• one of the following forms:
• $$(\lnot \psi)$$, where $$\psi$$ is also a formula;
• $$(\psi \to \theta)$$, where $$\psi$$ and $$\theta$$ are also formulas;
• $$\forall v_i \psi$$, where $$v_i$$ is a variable and $$\psi$$ is also a formula.
• A variable may occur free in a formula. A formula without any free variable is called a sentence.
Remark 7.2. (Metatheory and philosophical concerns) A first-order expression, as a finite sequence (also called a tuple), may be defined in terms of ordered pairs in axiomatic set theory. But we will not appeal to set theory in our first definitions of expressions in logic. (So far we have no notion about what a “set” formally is!)
A further concern is whether our definitions of terms and formulas are well established, that is, since we are defining the notions of terms and formulas inductively, would it be possible that there is a certain term or formula that is covered by our recursive definition, but can never be actually built using these operations? To show that first-order terms/formulas are well-defined, a beginner might try to prove these induction principles by mathematical induction on the complexities of terms/formulas, but that would rely on the fact that the set of natural numbers $$\omega$$ is well-ordered so that we can apply induction on numbers; to justify things like this, it is essential to use set theory or second-order logic, which we don’t even have until we define a first-order logic. Thus, unavoidable circularity emerges if we try to look really fundamentally.
For now, we must appeal to a metatheory that we can easily convince ourselves by intuition, so that we will accept these induction principles and the notion of “naïve sets” (or collections, if we don’t want to abuse the formal term of sets too much). Notwithstanding, I believe that a prudent person can bootstrap theories like this without drawing in any inconsistency.
Remark 7.3. (Context freedom, unique readability and parentheses) Since the formations of first-order terms and formulas make use of context-free rules, one familiar with formal languages and automata theory might ask, “Are the sets of terms/formulas context-free languages?” Generally they are not, since our set $$V$$ of variables (similarly for predicate and function symbols) could be infinitely (or even uncountably) large, but a context-free grammar requires that every such symbol set be finite. However, in our first-order language $$\mathcal{L}$$, if these symbol sets are effectively decidable, then there is an algorithm that accepts terms or formulas (or parses them). Furthermore, such parses are guaranteed to be unique, as shown by the Unique Readability Theorems in Enderton p. 105ff. Indeed, the inclusion of parentheses in our first-order language enables us to write any formula unambiguously. If we leave out all the parentheses, does a formula like $$\forall x P x \to \lnot Q x$$ mean $$(\forall x P x \to (\lnot Q x))$$ or $$\forall x (P x \to (\lnot Q x))$$? An alternative syntax would be to use logical connectives in a prefix manner, e.g., $$\to \forall x P x \lnot Q x$$ and $$\forall x \to P x \lnot Q x$$, but that is hardly as comprehensible as our chosen syntax.
Remark 7.4. (Abbreviations on notation) Why don’t we have the existential quantifier $$\exists$$ and some other connectives such like $$\land$$, $$\lor$$ and $$\leftrightarrow$$, in our language? Because any first-order formula that makes use of these symbols can be seen as syntactical abbreviations and should be rewritten using $$\forall$$, $$\to$$ and $$\lnot$$, as will be shown. A deeper reason is that $$\{ \to, \lnot \}$$ is a functionally complete set of Boolean algebraic operators that is sufficient to express all possible truth tables in propositional logic. On the other hand, a formula like $$\exists x \varphi$$ is just $$(\lnot \forall x (\lnot \varphi))$$, following from our understanding of what an existential quantification is.
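For concreteness, one standard way of spelling out these abbreviations (the particular right-hand sides are a conventional choice; any tautologically equivalent definitions would serve) is: \begin{align*} (\varphi \lor \psi) &:= ((\lnot \varphi) \to \psi) \\ (\varphi \land \psi) &:= (\lnot(\varphi \to (\lnot \psi))) \\ (\varphi \leftrightarrow \psi) &:= (\lnot((\varphi \to \psi) \to (\lnot(\psi \to \varphi)))) \\ \exists x\, \varphi &:= (\lnot \forall x\, (\lnot \varphi)) \end{align*}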
Remark 7.5. (Sentences and truth values) In propositional logic, we don’t generally know whether a formula evaluates to true until every sentential symbol is assigned a truth value. (Sometimes we can tell the truth value with a little less information than what is required, if we apply a so-called short-circuit evaluation strategy, e.g., if $$A_1$$ is false then we immediately know $$(A_1 \to A_2)$$ is true, or if $$A_2$$ is true then $$(A_1 \to A_2)$$ is also true. But it is not the general case, and one should expect to evaluate both $$A_1$$ and $$A_2$$ before getting the answer.) Similarly, in a first-order logic, every free variable needs to have a definite assignment so as to give rise to the truth value of a formula. This is done by specifying a function $$s$$ (where $$\operatorname{dom} s = V$$ is the set of all variables) as the assignment of variables, and when applying $$s$$ to a formula $$\varphi$$ we get $$\varphi[s]$$, which is a sentence that has a definite meaning (i.e., no variable occurs free). Note that the assignment of variables alone is not sufficient to determine the truth of a sentence – For example, $$(P x y \to P x f y)\ [s(x \,|\, 0)(y \,|\, 0)]$$ is a sentence since no variable occurs free in it, but we can’t decide whether it is true because we don’t know what the predicate $$P$$ and the function $$f$$ are. If we say, $$P$$ is the arithmetic “less-than” relation and $$f$$ is the successor function $$f : x \mapsto x + 1$$, then we can tell that this is a true sentence (in fact $$P x y$$ is false as $$0 < 0$$ is false, but $$P x f y$$ is true as $$0 < 1$$ is true, so the overall sentence as a conditional is true). We could write $$P$$ and $$f$$ as $$<$$ and $$S$$, but the conventional interpretation of these symbols should not be taken for granted as if every symbol comes with an inherited meaning – They don’t, until we give them meanings.
• Structures: A structure (or an interpretation) $$\mathfrak{A}$$ assigns a domain $$|\mathfrak{A}|$$ to the language $$\mathcal{L}$$, and:
• Every predicate symbol is assigned a relation $$P^\mathfrak{A} \subseteq |\mathfrak{A}|^n$$.
• Every function symbol is assigned a function $$f^\mathfrak{A} : |\mathfrak{A}|^n \to |\mathfrak{A}|$$.
• Every constant symbol is assigned a member $$c^\mathfrak{A}$$ of the domain $$\mathfrak{A}$$.
• The universal quantifier symbol $$\forall$$ is assigned the domain $$|\mathfrak{A}|$$. (So it makes sense to say: “for all $$x$$ in $$|\mathfrak{A}|$$…”)
• Satisfaction and truth
• Given a structure $$\mathfrak{A}$$ and an assignment of variables $$s : V \to |\mathfrak{A}|$$, we define an extension function $$\bar{s} : T \to |\mathfrak{A}|$$ (where $$T$$ is the set of all terms) that maps any term into the domain $$|\mathfrak{A}|$$.
• With the term valuation $$\bar{s}$$, we define recursively that a structure $$\mathfrak{A}$$ satisfies a formula $$\varphi$$ with an assignment $$s$$ of variables, written as $\models_\mathfrak{A} \varphi[s]$ If this is not the case, then $$\not\models_\mathfrak{A} \varphi[s]$$ and we say that $$\mathfrak{A}$$ does not satisfy $$\varphi$$ with $$s$$.
• For a sentence $$\sigma$$ (which is just a formula with no free variables), the assignment of variables $$s : V \to |\mathfrak{A}|$$ does not make a difference whether $$\varphi$$ is satisfied by $$\mathfrak{A}$$. So if $$\models_\mathfrak{A} \sigma$$, we say that $$\sigma$$ is true in $$\mathfrak{A}$$ or that $$\mathfrak{A}$$ is a model of $$\sigma$$.
• Satisfiability of formulas: A set $$\Gamma$$ of formulas is said to be satisfiable iff there is a structure $$\mathfrak{A}$$ and an assignment $$s$$ of variables such that $$\models_\mathfrak{A} \Gamma[s]$$.
• Logical implication and validity
• In a language $$\mathcal{L}$$, we say that a set $$\Gamma$$ of formulas logically implies a formula $$\varphi$$, iff for every structure $$\mathfrak{A}$$ of $$\mathcal{L}$$ and every assignment $$s : V \to |\mathfrak{A}|$$ such that $$\models_\mathfrak{A} \gamma [s]$$ (for all $$\gamma \in \Gamma$$), it also holds that $$\models_\mathfrak{A} \varphi [s]$$ (note that $$\varphi$$ is not required to be a sentence): $\Gamma \models \varphi$ This is the analogue of tautological implication in propositional logic: $$A \Rightarrow B$$, iff every truth assignment that satisfies $$A$$ also satisfies $$B$$.
• If the empty set logically implies a formula, i.e., $$\emptyset \models \varphi$$, we write this fact simply as $$\models \varphi$$ and say that $$\varphi$$ is valid. A formula is valid iff given any assignment of variables, it is true in every structure; this is the analogue of tautologies in propositional logic: something that is considered “always true”.
Remark 7.6. (Dichotomy of semantic truth and the liar’s paradox) It should be made clear from the definition that given a structure and an assignment, either $$\models_\mathfrak{A} \varphi[s]$$ (exclusive) or $$\not\models_\mathfrak{A} \varphi[s]$$, but not both! It follows from our intuition that a statement is either semantically true or false; there is no third possibility.
A problem arises with self-referential terms, woefully: Assume that we have a first-order language $$\mathcal{L}$$ with a 1-place predicate symbol $$P$$, and the structure $$\mathfrak{A}$$ assigns it the domain $$|\mathfrak{A}| = \text{Formula}(\mathcal{L})$$, $$P$$ is interpreted as $$P^\mathfrak{A} = \{ \langle \sigma \rangle \,:\, \models \sigma \}$$, that is, $$\sigma \in P^\mathfrak{A}$$ iff $$\models \sigma$$. Let the sentence $$\tau$$ be $$(\lnot P x)$$ and the assignment $$s : V \to |\mathfrak{A}|$$ maps the variable $$x$$ to the sentence $$\tau$$, then is $$\tau[s]$$ true or false in $$\mathfrak{A}$$? If we take $$\tau[s]$$ as true, that is, $$(\lnot P \tau)$$ is true, then $$P \tau$$ must be false, so $$\tau \not\in P^\mathfrak{A}$$ thus $$\not\models \tau$$. If we take $$\tau[s]$$ as false, that is, $$(\lnot P \tau)$$ is false, then $$P \tau$$ must be true, so $$\tau \in P^\mathfrak{A}$$ thus $$\models \tau$$. This is known as the classical liar’s paradox. One possible way to resolve this (given by Alfred Tarski) is by disabling impredicativity in our structures; more precisely, one can define a semantic hierarchy of structures that allows us to predicate truth only of a formula at a lower level, but never at the same or a higher level. This matter is far beyond the scope of this summary, but the important lesson to learn here is that it is generally a bad idea to allow something both true and false in our semantics; it would put our enduring effort to cumulate all “mathematical truths” into void.
Remark 7.7. (Decidability of truth/validity) In propositional logic, it is easy to see that given a truth assignment of sentential symbols, every formula can be decided for its truth or falsehood. Moreover, even without any truth assignment, one can enumerate a truth table to find out whether a given formula is a tautology. Truth and validity are decidable in propositional logic. However, this is often not the case in first-order logic: In order to decide whether a sentence is true, one needs to find the truth values of all prime formulas (i.e., formulas like $$P t_1 \cdots t_n$$ and $$\forall v_i \psi$$) first, but the domain $$|\mathfrak{A}|$$ may be an (uncountably) infinite set, thus makes it impossible to mechanically check the universal quantification for all members; moreover, the functions used in building terms may not be Turing-computable at all. To decide the validity of a sentence, we have to check its truth in all structures of the language (whose set may also be uncountably large), and that is an even more impossible task.2
If semantic truth/validity is generally undecidable, how do we say that some formula is true in a predefined structure? Well, we can’t, in most general cases, since an infinite argument of truth is a useless argument (you can’t present it to someone / some Turing machine, as no physical device is capable of handling such an infinite object). Fortunately, there is a feasible way to say something is true, without appealing to any specific structures (that may give rise to unwanted undecidability), and that is called a formal deduction (also called a proof, expectedly).
• Formal deduction: Given a set $$\Lambda$$ of formulas (axioms), a set of rules of inference, we say that a set $$\Gamma$$ of formulas (hypotheses) proves another formula $$\varphi$$, or $$\varphi$$ is a theorem of $$\Gamma$$, iff there is a finite sequence (called a deduction of $$\varphi$$ from $$\Gamma$$) $$\langle \alpha_0, \dots, \alpha_n \rangle$$ such that
1. $$\alpha_n$$ is just $$\varphi$$.
2. For each $$0 \leq k \leq n$$, either
• $$\alpha_k \in \Gamma \cup \Lambda$$; or
• $$\alpha_k$$ is obtained by a rule of inference from a subset of previous formulas $$A \subseteq \bigcup_{0 \leq i < k} \alpha_i$$. $\Gamma \vdash \varphi$
• Formal systems and proof calculi: Different deductive systems made different choices on the set of axioms and rules of inferences. A natural deduction system may consist of no axiom but many rules of inference; on the contrary, a Hilbert-style system (named obviously after David Hilbert) uses many axioms but only two rules of inference. A proof calculus is the approach to formal deduction in a specified system, and as it is called a “calculus”, any derivation in it must contain only a finite number of steps so as to be calculable (by a person or by a machine).
• We will use a Hilbert-style deductive system here:
• Rules of inference
1. Modus ponens $\frac{\Gamma \vdash \psi \quad \Gamma \vdash (\psi \to \varphi)}{\Gamma \vdash \varphi}$
2. Generalization $\frac{\vdash \theta}{\vdash \forall x_1 \cdots \forall x_n \theta}$ (where $$\theta \in \Lambda$$.)
• Logical axioms: In a deductive system, axioms are better called logical axioms, to stress the fact that they are logically valid formulas in every structure, i.e., that their validity is not open to interpretation.
1. (Tautology) $$\alpha$$, where $$\models_t \alpha$$. (take sentential symbols to be prime formulas in first-order logic)
2. (Substitution) $$\forall x \alpha \to \alpha^x_t$$, where $$t$$ is substitutable for $$x$$ in $$\alpha$$.
3. $$\forall x (\alpha \to \beta) \to (\forall x \alpha \to \forall x \beta)$$.
4. $$\alpha \to \forall x \alpha$$, where $$x$$ does not occur free in $$\alpha$$.
5. $$x = x$$.
6. $$x = y \to (\alpha \to \alpha')$$, where $$\alpha$$ is atomic and $$\alpha'$$ is obtained from $$\alpha$$ by replacing $$x$$ in zero or more places by $$y$$.
Remark 7.8. (Validity of logical axioms) It should be intuitively clear that all logical axioms are convincing, and that their validity can be argued without appealing to any specific model. In particular, for an axiom $$\theta \in \Lambda$$, there is $$\vdash \theta$$; we must be able to argue (in our meta-language) that $$\models \theta$$, so that we can be convinced that our deductive system is a sound one. Remember that for any formula $$\varphi$$, either $$\models \varphi$$ or $$\not\models \varphi$$ (which is just $$\models (\lnot \varphi)$$). If a proof of $$\theta$$ (not as a formal deduction, but as an argument in our meta-language) does not even imply $$\models \theta$$, that would be very frustrating.
Remark 7.9. (Tautological implication, logical implication and deduction) If $$\Gamma \models_t \varphi$$ (i.e., $$\varphi$$ is tautologically implied by $$\Gamma$$ in propositional logic), we can argue that $$\Gamma \models \varphi$$ when replacing sentential symbols by prime formulas in first-order logic. In the special case that $$\Gamma = \emptyset$$, we are proving the validity of Axiom Group 1: $$\models_t \alpha \implies \models \alpha$$ (every tautology is valid). The converse does not hold though, since we have $$\models (\alpha \to \forall x \alpha)$$ (by Axiom Group 4), but $$\not\models_t (\alpha \to \forall x \alpha)$$ as $$\alpha$$ and $$\forall x \alpha$$ are two different sentential symbols (surely $$(A_1 \to A_2)$$ is not a tautology in propositional logic!).
It is worth noticing that even though $$\Gamma \models \varphi \not\implies \Gamma \models_t \varphi$$, we do have $$\Gamma \models \varphi \iff \Gamma \cup \Lambda \models_t \varphi$$. Intuitively, the set $$\Lambda$$ of logical axioms gives us a chance to establish truths about quantifiers and equalities (other than treating these prime formulas as sentential symbols that are too unrefined for our first-order logic). I haven’t done a proof of this, but I believe it should be non-trivial on both directions. Combining with Theorem 24B in Enderton p. 115, we get the nice result concluding that $\Gamma \vdash \varphi \iff \Gamma \cup \Lambda \models_t \varphi \iff \Gamma \models \varphi$ which entails both the soundness and the completeness theorems. It is basically saying that these three things are equivalent:
1. $$\Gamma$$ proves $$\varphi$$. (There is a formal deduction that derives $$\varphi$$ from $$\Gamma$$ and axioms $$\Lambda$$ in our deductive system; it is finite and purely syntactical, in the sense that it does not depend on any structure or assignment of variables.)
2. $$\Gamma$$ logically implies $$\varphi$$. (For every structure and every assignment of variables, $$\varphi$$ holds true given $$\Gamma$$. Of course, this is a semantical notion in the sense that it does involve structures and assignments of variables, which are infinite in numbers so it would be impossible for one to check this mechanically.)
3. $$\Gamma \cup \Lambda$$ tautologically implies $$\varphi$$. (We can reduce a first-order logic to propositional logic by adding logical axioms to the set of hypotheses, preserving all truths. For each prime formula, this is still a semantical notion for its truth value depends on structures / assignments of variables.)
• Soundness
1. $$\Gamma \vdash \varphi \implies \Gamma \models \varphi$$.
• Proof idea:
1. For $$\varphi \in \Lambda$$, show that every logical axiom is valid (Lemma 25A in Enderton p. 132ff.), that is, $$\models \varphi$$. Then trivially $$\Gamma \models \varphi$$;
2. For $$\varphi \in \Gamma$$, we have trivially $$\Gamma \models \varphi$$;
3. $$\varphi$$ is obtained by generalization on variable $$x$$ from a valid formula $$\theta$$. Since $$\models \theta$$ (if $$\theta$$ is an axiom, then this is already shown in Step 1; if $$\theta$$ is another generalization, then this can be shown by IH), for every structure $$\mathfrak{A}$$ and $$a \in |\mathfrak{A}|$$, $$\models_\mathfrak{A} \theta[s(x|a)]$$, then by definition we have $$\models_\mathfrak{A} \forall x \theta[s]$$. Therefore $$\models \forall x \theta$$;
4. $$\varphi$$ is obtained by modus ponens from $$\psi$$ and $$(\psi \to \varphi)$$. By IH we have $$\Gamma \models \psi$$ and $$\Gamma \models (\psi \to \varphi)$$. Show that $$\Gamma \models \varphi$$ using Exercise 1 in Enderton p. 99. (NB. the wording in the last line of Enderton p. 131, i.e., “follows at once”, seems too sloppy to me: we have not proved modus ponens semantically yet.)
• Consistency of formulas: A set $$\Gamma$$ of formulas is said to be consistent iff for no formula $$\varphi$$ it is the case that both $$\Gamma \vdash \varphi$$ and $$\Gamma \vdash (\lnot\varphi)$$.
• By the soundness theorem, an inconsistent set $$\Gamma$$ of formulas gives rise to both $$\Gamma \models \varphi$$ and $$\Gamma \models (\lnot\varphi)$$. As discussed before, it would throw our trust on mathematical truths into fire – we will have proved that some statement is both true and false!
1. If $$\Gamma$$ is satisfiable, then $$\Gamma$$ is consistent.
(a. and b. are equivalent representations of the soundness theorem.)
• Completeness
1. $$\Gamma \models \varphi \implies \Gamma \vdash \varphi$$.
2. If $$\Gamma$$ is consistent, then $$\Gamma$$ is satisfiable.
(a. and b. are equivalent representations of the completeness theorem.)
• Proof idea: We will prove first a weaker form of b., i.e., the completeness for a countable language $$\mathcal{L}$$. Let $$\Gamma$$ be a consistent set of formulas. We show that it is satisfiable.
1. Extend the language $$\mathcal{L}$$ with a countable set $$\bar{C}$$ of new constant symbols and get a new language $$\mathcal{L}_\bar{C}$$;
2. Given the set $$\Gamma$$ of $$\mathcal{L}$$-formulas, show that there is a set $$\bar\Gamma$$ of $$\mathcal{L}_\bar{C}$$-formulas that is consistent, complete, deductively closed and Henkinized, i.e., for every formula $$\exists x \varphi \in \Gamma$$ there is a “witness” constant $$\bar{c} \in \bar{C}$$ such that $$\varphi^x_\bar{c} \in \bar{\Gamma}$$;
3. Build a structure $$\mathfrak{A}_0$$ from $$\bar\Gamma$$ where $$|\mathfrak{A}_0|$$ is the set of terms of $$\mathcal{L}_\bar{C}$$. The assignment $$s$$ maps every variable to itself;
4. Define an equivalence relation $$E$$ on $$|\mathfrak{A}_0|$$: $$t E t' \iff t = t' \in \bar\Gamma$$. Show by induction that for any $$\mathcal{L}_\bar{C}$$-formula $$\varphi$$, $$\models_{\mathfrak{A}_0} \varphi^* [s] \iff \varphi \in \bar\Gamma$$ (where $$\varphi^*$$ is $$\varphi$$ with $$=$$ replaced by $$E$$ everywhere);
5. Show by the homomorphism theorem that $$\models_\mathfrak{A} \varphi[s] \iff \varphi \in \bar\Gamma$$ (where $$\mathfrak{A} = \mathfrak{A}_0 / E$$);
6. Restrict the structure $$\mathfrak{A}$$ (a model of $$\mathcal{L}_\bar{C}$$) to $$\mathcal{L}$$ by dropping all new constants $$\bar{C}$$. Then $$\Gamma$$ is satisfiable with $$\mathfrak{A}$$ and $$s$$ in $$\mathcal{L}$$.
• Compactness
1. $$\Gamma \models \varphi \implies$$ There is a finite $$\Gamma_0 \subseteq \Gamma$$ such that $$\Gamma_0 \models \varphi$$.
2. If every finite subset $$\Gamma_0$$ of $$\Gamma$$ is satisfiable, then $$\Gamma$$ is satisfiable.
(a. and b. are equivalent representations of the compactness theorem.)
• Proof idea: A simple corollary of soundness and completeness theorems.
Remark 7.10. (Soundness and completeness) The soundness and the completeness theorems together establish the equivalence of consistency and satisfiability of a set of formulas, or that of validity and provability of a formula. The completeness theorem is by no means an obvious result; the first proof was given by Kurt Gödel in 1930³, but the proof that we use today was given by Leon Henkin in 1949 [1], which easily generalizes to uncountable languages.
Remark 7.11. (Completeness and incompleteness) Note that the completeness theorem should not be confused with Gödel’s incompleteness theorems. Specifically, the completeness theorem claims that (unconditionally) every formula that is logically implied by $$\Gamma$$ is also deducible from $$\Gamma$$ (i.e., $$\Gamma \models \varphi \implies \Gamma \vdash \varphi$$), while the first incompleteness theorem claims that (under some conditions) some consistent deductive systems are incomplete (i.e., there is some formula $$\varphi$$ such that neither $$\Gamma \vdash \varphi$$ nor $$\Gamma \vdash (\lnot\varphi)$$). As is clearly seen, the incompleteness theorem is purely syntactical and matters for provability (or decidability, one might say). The aforementioned liar’s paradox, where our semantics raises a contradiction that neither $$\Gamma \models \varphi$$ nor $$\Gamma \models (\lnot\varphi)$$ reasonably holds, may be seen as a semantical analogue of the first incompleteness theorem.
## Equality is logical
The equality symbol $$=$$ is a logical symbol, in the sense that the equivalence relation it represents is not open to interpretation and always means what it’s intended to mean (i.e., “the LHS is equal to the RHS”). But if so, how do we say $1+1=2$ is a true sentence then? Can’t we just interpret the equality symbol as something else in a structure $$\mathfrak{A}$$ such that $$\models_\mathfrak{A} (\lnot 1+1=2)$$?
One reason is that in many first-order systems, functions (operations) are defined as axioms using equalities; we need a general way to say “something is defined as…” or just “something is…” There would be no better way of saying this rather than representing it as an equality, so we won’t have the hassle of interpreting a made-up relation in every model. Consider the language of elementary number theory, in the intended model $$\mathfrak{N} = (\mathbb{N}; \mathbf{0}, \mathbf{S}, +, \cdot)$$ of Peano arithmetic, the addition function is defined as a set of domain-specific axioms: \begin{align*} a + \mathbf{0} &= a &\qquad(1) \\ a + \mathbf{S} b &= \mathbf{S} (a + b) &\qquad(2) \end{align*} By Axiom Group 6 we have $$\mathbf{S0} + \mathbf{0} = \mathbf{S0} \to (\mathbf{S0} + \mathbf{S0} = \mathbf{S}(\mathbf{S0} + \mathbf{0}) \to \mathbf{S0} + \mathbf{S0} = \mathbf{S}\mathbf{S0})$$. By (1) $$\mathbf{S0} + \mathbf{0} = \mathbf{S0}$$. By (2) $$\mathbf{S0} + \mathbf{S0} = \mathbf{S}(\mathbf{S0} + \mathbf{0})$$. Applying modus ponens twice, we get $$\mathbf{S0} + \mathbf{S0} = \mathbf{S}\mathbf{S0}$$, which is the result we want (sloppily written as $$1+1=2$$).
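Laid out as a deduction (the numbering is added here; each line is a domain axiom, a logical axiom, or follows by modus ponens from earlier lines): \begin{align*} &1.\quad \mathbf{S0} + \mathbf{0} = \mathbf{S0} &&\text{by (1)} \\ &2.\quad \mathbf{S0} + \mathbf{S0} = \mathbf{S}(\mathbf{S0} + \mathbf{0}) &&\text{by (2)} \\ &3.\quad \mathbf{S0} + \mathbf{0} = \mathbf{S0} \to (\mathbf{S0} + \mathbf{S0} = \mathbf{S}(\mathbf{S0} + \mathbf{0}) \to \mathbf{S0} + \mathbf{S0} = \mathbf{S}\mathbf{S0}) &&\text{Axiom Group 6} \\ &4.\quad \mathbf{S0} + \mathbf{S0} = \mathbf{S}(\mathbf{S0} + \mathbf{0}) \to \mathbf{S0} + \mathbf{S0} = \mathbf{S}\mathbf{S0} &&\text{modus ponens, 1 and 3} \\ &5.\quad \mathbf{S0} + \mathbf{S0} = \mathbf{S}\mathbf{S0} &&\text{modus ponens, 2 and 4} \end{align*}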
The equality is a little special, since it is the most common relation with reflexivity, as justified by Axiom Group 5, i.e., $$x = x$$ for any variable $$x$$. We could exclude these from our logical axioms, but in many cases we would still need a reflexive relation symbol to denote equivalence (justified by some domain-specific axioms) otherwise. Technically, it would be convenient to just treat it as a logical symbol (together with the useful Axiom Groups 5 and 6). Note that though our logical axioms did not say anything about other properties like symmetry, antisymmetry and transitivity, they follow easily from Axiom Groups 5 and 6, in our deductive system:
Lemma 7.12. (Symmetry) If $$x = y$$, then $$y = x$$.
Proof. Given $$x = y$$. By Axiom Group 5 we have $$x = x$$. By Axiom Group 6 we have $$x = y \to (x = x \to y = x)$$. Applying modus ponens twice, we get $$y = x$$.
Lemma 7.13. (Transitivity) If $$x = y$$ and $$y = z$$, then $$x = z$$.
Proof. Given $$x = y$$, by symmetry it holds $$y = x$$. Also $$y = z$$. By Axiom Group 6 we have $$y = x \to (y = z \to x = z)$$. Applying modus ponens twice, we get $$x = z$$.
Lemma 7.14. (Antisymmetry) If $$x = y$$ and $$y = x$$, then $$x = y$$.
Proof. Trivial.
Note that if any partial order is definable on a structure, the equality symbol may not be indispensable in our language, e.g., consider the language of set theory, where the equivalence of two sets $$x = y$$ may be defined as $(\forall v (v \in x \to v \in y) \land \forall v (v \in y \to v \in x))$
## The limitation of first-order logic
Consider again the language of elementary number theory, in the intended model $$\mathfrak{N} = (\mathbb{N}; \mathbf{0}, \mathbf{S}, +, \cdot)$$ of Peano arithmetic, we have the set of all true sentences $$\text{Th}\mathfrak{N}$$.4 Now we add a new constant symbol $$c'$$ to our language, and extend our theory with a countable set of sentences $$\text{Th}\mathfrak{N} \cup \{ \underbrace{\mathbf{S} \cdots \mathbf{S}}_{n\ \text{times}} \mathbf{0} < c' \,:\, n \in \mathbb{N} \}$$ (Here we define $$x < y$$ as $$\lnot\forall z ((\lnot z = \mathbf{0}) \to (\lnot x + z = y))$$). Is there still a model for this extended theory?
For any given $$n \in \mathbb{N}$$, the sentence $$\underbrace{\mathbf{S} \cdots \mathbf{S}}_{n\ \text{times}} \mathbf{0} < c'$$ is clearly satisfiable (by simply taking $$c' = \mathbf{S} n$$). Then it is easily shown that every finite subset $$\Gamma_0 \subseteq \text{Th}\mathfrak{N} \cup \{ \underbrace{\mathbf{S} \cdots \mathbf{S}}_{n\ \text{times}} \mathbf{0} < c' \,:\, n \in \mathbb{N} \}$$ is satisfiable. By the compactness theorem (b.), $$\text{Th}\mathfrak{N} \cup \{ \underbrace{\mathbf{S} \cdots \mathbf{S}}_{n\ \text{times}} \mathbf{0} < c' \,:\, n \in \mathbb{N} \}$$ is also satisfiable. This means that we can construct a non-standard model of arithmetic $$\mathfrak{N}'$$ with an additional bizarre element (specifically $$c'_0$$) that turns out to be greater than any other number (thus the model of this theory is not isomorphic to our standard model $$\mathfrak{N}$$).
Recall that in a Peano arithmetic modeled by $$\mathfrak{N}'$$, numbers are closed under the successor function $$\mathbf{S}$$. More precisely, if $$k \in |\mathfrak{N}'|$$ , then $$\mathbf{S}k \in |\mathfrak{N}'|$$ and $$\mathbf{S}k \neq \mathbf{0}$$. This implies that not only $$c'_0 \in |\mathfrak{N}'|$$, but also $$\mathbf{S} c'_0, \mathbf{S}\mathbf{S} c'_0, \mathbf{S}\mathbf{S}\mathbf{S} c'_0, \dots$$ are all non-standard numbers in $$|\mathfrak{N}'|$$. As none of these numbers is equal to $$\mathbf{0}$$ (by one of Peano axioms), they form an infinite “chain” separately besides our familiar standard ones. One can write down all the (standard and non-standard) numbers sloppily as the following sequence: $\langle 0, 1, 2, \dots, \quad c'_0, c'_1, c'_2, \dots \rangle$ where $$0$$ is just $$\mathbf{0}$$, $$1$$ is $$\mathbf{S0}$$, $$2$$ is $$\mathbf{SS0}$$, $$c'_1$$ is $$\mathbf{S} c'_0$$, $$c'_2$$ is $$\mathbf{SS} c'_0$$, etc.
Clearly, every number but $$0$$ and $$c'_0$$ in the sequence has a unique predecessor. There is certainly no such predecessor $$j$$ of $$0$$, because otherwise we would have $$\mathbf{S}j = \mathbf{0}$$, contradicting our axioms. But can we have a predecessor of $$c'_0$$? There is no axiom preventing us from constructing that thing! So here we go, enlarge our collection of numbers to: $\langle 0, 1, 2, \dots, \quad \dots, c'_{-2}, c'_{-1}, c'_0, c'_1, c'_2, \dots \rangle$ where for each $$c'_{i}$$, $$c'_{i+1} = \mathbf{S} c'_{i}$$. Again, we know that every such non-standard number $$c'_i$$ is greater than any standard number $$n$$ (otherwise we could find a standard number $$n-i$$ such that $$(\lnot n-i < c'_0)$$, contradicting our initial construction of $$c'_0$$ by compactness). So the non-standard part is still a separate chain, as written above.
We can go even further. Let $$|\mathfrak{N}'|$$ be this set of standard and non-standard numbers, and $$\mathfrak{N}' = (|\mathfrak{N}'|; \mathbf{0}, \mathbf{S}, +, \cdot)$$ is still the intended model of Peano arithmetic on $$|\mathfrak{N}'|$$. Consider adding yet another constant symbol $$c''$$. Is $$\text{Th}\mathfrak{N}' \cup \{ \underbrace{\mathbf{S} \cdots \mathbf{S}}_{n'\ \text{times}} \mathbf{0} < c'' \,:\, n' \in |\mathfrak{N}'| \}$$ satisfiable? By the same reasoning as before, we get a model $$\mathfrak{N}''$$, with its domain of numbers $\langle 0, 1, 2, \dots, \quad \dots, c'_{-2}, c'_{-1}, c'_0, c'_1, c'_2, \dots, \quad \dots, c''_{-2}, c''_{-1}, c''_0, c''_1, c''_2, \dots \rangle$
Counter-intuitively, this is not the kind of “arithmetic” we are used to. What we are trying to do is to formulate the arithmetic for standard natural numbers that we use everyday (i.e., $$0, 1, 2, \dots$$) in first-order logic, but these non-standard numbers come out of nowhere and there is an infinite “stage” of them, such that each number in a latter stage is greater than every number in a former stage (How is that even possible?). And what is worse, in a subset of the non-standard model $$\mathfrak{N}'$$: $\{ \dots, c'_{-2}, c'_{-1}, c'_0, c'_1, c'_2, \dots \}$ There is no minimal element with respect to the intended ordering relation $$<$$, thus it is not well-ordered by $$<$$, so our good old mathematical induction will no longer work with non-standard numbers.
Well, the lucky part is that we can at least have the induction axiom as a first-order sentence: $(\varphi(\mathbf{0}, y_1, \dots, y_k) \land \forall x (\varphi (x, y_1, \dots, y_k) \to \varphi (\mathbf{S}(x), y_1, \dots, y_k))) \to \forall x\, \varphi(x, y_1, \dots, y_k)$ Since the standard model $$\mathfrak{N}$$ and the non-standard model $$\mathfrak{N}'$$ are elementarily equivalent (i.e., they satisfy the same sentences in a language excluding non-standard numbers), we still enjoy the nice induction principle for all of standard natural numbers. But for the non-standard part, we’re out of luck.
Ideally, we would like to have a bunch of axioms that perfectly defines the model of arithmetic, where no non-standard part is allowed to exist, i.e., the set of numbers is well-ordered by a definable ordering relation $$<$$ so that we can apply the induction principle on all of them.5 Unfortunately, this is infeasible in first-order logic (without formulating the notion of sets). We will demonstrate two potential resolutions.
The intuition here is that in order to rule out any non-standard chains of numbers, we must find a 1-place predicate $$P \subseteq |\mathfrak{N}|$$ such that for every standard number $$n$$ we have $$P n$$, distinguishing it from $$(\lnot P n')$$ where $$n'$$ is non-standard. Certainly $$\mathbf{0}$$ is a standard number; consequently every standard number $$x$$ is followed by $$\mathbf{S}x$$, which is also a standard one. $P \mathbf{0} \land \forall x (P x \to P \mathbf{S} x)$ Once we have this $$P$$ in mind, we say that every number $$n \in P$$, so it is also standard. There would be no other numbers in our model, thus define our set of all numbers: $\forall n P n$ Notice that $$P$$ is not in our language; it is something we have yet to construct for our standard model. How to deal with this issue?
1. The first approach is by allowing quantification over relations. So we can say “for all such relations $$P$$, it holds that $$\forall n P n$$”. Formally: $\forall P (P \mathbf{0} \land \forall x (P x \to P \mathbf{S} x)) \to \forall n P n$ Of course, in our previous definition of first-order languages, for $$\forall v_i \psi$$ to be a well-formed formula, $$v_i$$ is a variable such that $$v_i \in |\mathfrak{N}|$$ given a model $$\mathfrak{N}$$; here we have $$P \subseteq |\mathfrak{N}|$$ hence $$P \in \mathcal{P}(|\mathfrak{N}|)$$. So in a first-order logic we would not be able to do this (we can only quantify a variable in the domain $$|\mathfrak{N}|$$). This approach leads to a second-order logic (where we can not only quantify over a plain variable in $$|\mathfrak{N}|$$, but also quantify over a relation variable in the power set of the domain, i.e., $$\mathcal{P}(|\mathfrak{N}|)$$; that gives our logic more expressive power!).
2. As we see, a relation is essentially a subset of $$|\mathfrak{N}|$$ (thus its range is also a set); it is tempting to formulate Peano arithmetic using the notion of sets. Indeed, we can rewrite the formula in Approach 1 into the language of set theory as: $\forall y (\emptyset \in y \land \forall x (x \in y \to S(x) \in y)) \to \forall n\ n \in y$ where we encode the standard number $$\mathbf{0}$$ as $$\emptyset$$, $$\mathbf{S}x$$ as $$S(x) = x \cup \{x\}$$. Clearly there is no non-standard number in this set-theoretic model. This is exactly how we define natural numbers $$\mathbb{N}$$ (as a minimal inductive set $$\omega$$) in set theory, and its existence is justified by the so-called axiom of infinity. Note that once we introduce the set theory (using a first-order language), we have the equivalently expressive (sometimes paradoxical) power of any arbitrary higher-order logic. Fundamentally.6
Books:
Herbert B. Enderton, A Mathematical Introduction to Logic, 2nd ed.
Kenneth Kunen, The Foundations of Mathematics.
Articles:
Terence Tao, “The completeness and compactness theorems of first-order logic,” https://terrytao.wordpress.com/2009/04/10/the-completeness-and-compactness-theorems-of-first-order-logic/.
Asger Törnquist, “The completeness theorem: a guided tour,” http://www.math.ku.dk/~asgert/teachingnotes/iml-completenessguide.pdf.
Eliezer Yudkowsky, “Godel’s completeness and incompleteness theorems,” http://lesswrong.com/lw/g1y/godels_completeness_and_incompleteness_theorems/.
Eliezer Yudkowsky, “Standard and nonstandard numbers,” http://lesswrong.com/lw/g0i/standard_and_nonstandard_numbers/.
David A. Ross, “Fun with nonstandard models,” http://www.math.hawaii.edu/~ross/fun_with_nsmodels.pdf.
Papers:
[1] L. Henkin, “The completeness of the first-order functional calculus,” The journal of symbolic logic, vol. 14, no. 3, pp. 159–166, 1949.
1. Is there a constructive approach as a replacement of Henkin’s construction? https://math.stackexchange.com/questions/1462408/is-there-a-constructive-approach-as-a-replacement-of-henkins-construction
2. We are using the notion of decidability/undecidability here even before we get to Gödel’s incompleteness theorem, but hopefully it’s no stranger to us as computability theory has all the model-specific issues (though non-logical) covered.
3. Gödel’s original proof of the completeness theorem: https://en.wikipedia.org/wiki/Original_proof_of_G%C3%B6del%27s_completeness_theorem
4. It might be interesting to know that $$\text{Th}(\mathbb{N}; \mathbf{0}, \mathbf{S}, +, \cdot)$$ is an undecidable theory, as shown by the aforementioned incompleteness theorem.
5. If we accept the Axiom of Choice, then every set can have a well-ordering; so this is actually a reasonable request.
6. Many logicians (Kurt Gödel, Thoralf Skolem, Willard Van Quine) seem to adhere to first-order logic. And that’s why. |
https://www.physicsforums.com/threads/near-light-angular-momentum.190637/ | # Near light angular momentum
1. Oct 11, 2007
### BulletRide
After reading a chapter on the conservation of angular momentum, I have had a radical idea growing in my mind ever since I finished reading the material. To cut to the chase, the law states that the angular momentum of a rotating object will remain constant unless an outside torque acts on the object. Since angular momentum = I x w, an object's rotational speed will increase as the rotational inertia decreases - the same principle behind how an ice skater increases how fast she spins by pulling in her arms. Radically thinking, if a spinning space station consisting simply of a wire anchored on both ends, with a length of one hundred kilometers or so, were to be accelerated to 75% the speed of light at the outermost point, and then reeled in at a constant rate towards the axis of rotation, what would happen once the rotational velocity began to reach the speed of light? Since the law of conservation of angular momentum indirectly states that an object's rotational speed will increase as its rotational inertia decreases, one would suspect that, at only 75%c at the edge, as the object began to be reeled in closer and closer, it would theoretically exceed the speed of light. Since this is an impossibility, what would happen as it approached the speed of light? Any aid on this problem would be greatly appreciated!
2. Oct 11, 2007
### pervect
Staff Emeritus
Relativistic angular momentum can be thought of as $\vec{p} \times \vec{r}$, or the 4-vector equivalent. Here $\vec{p}$ is the momentum, and $\vec{r}$ is the radial vector.
(There's also a representation as a bi-vector which is a bit more elegant if you happen to be familiar with clifford algebra. However, you can make do fine with the 3-vector or the 4-vector form for this problem. So just ignore this if you're not familiar with Clifford algebra).
So you can see immediately that if you halve r, you double the
momentum p, but nothing ever exceeds the speed of light, because the relativistic formula for the momentum p is
$$\vec{p} = \frac{m \vec{v}}{\sqrt{1-(|v|/c)^2}}$$
and p goes to infinity as v->c.
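(Not part of the original posts, just a numerical illustration of the point above: taking L = γmvr as conserved and solving for v at smaller and smaller r, the speed approaches but never reaches c. The 100 km wire from post #1 gives the 50 km starting radius; the mass is made up.)

```c
#include <math.h>
#include <stdio.h>

/* For a mass m at the end of the wire, the conserved relativistic angular
 * momentum is L = m*v*r / sqrt(1 - v^2/c^2). Solving for v shows that
 * reeling the mass in raises v, but v < c for every r > 0. */
static double speed_for_radius(double L, double m, double r, double c)
{
    /* L^2 (1 - v^2/c^2) = m^2 v^2 r^2   =>   v = L / sqrt(m^2 r^2 + L^2/c^2) */
    return L / sqrt(m * m * r * r + (L / c) * (L / c));
}

int main(void)
{
    const double c = 299792458.0;  /* m/s */
    const double m = 1000.0;       /* kg, made-up mass at the wire's end */
    double r = 50e3;               /* 100 km wire -> 50 km radius */
    double v0 = 0.75 * c;
    double L = m * v0 * r / sqrt(1.0 - (v0 / c) * (v0 / c));  /* conserved */

    for (; r > 1e-3; r /= 10.0)
        printf("r = %10.3e m   v = %.9f c\n", r,
               speed_for_radius(L, m, r, c) / c);
    return 0;
}
```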
Last edited: Oct 11, 2007
3. Oct 11, 2007
### BulletRide
So, it would be impossible to bring the momentum up to a velocity greater than c, no matter how much the objects are "reeled" in? Does this also imply that the law of conservation of angular momentum only applies to objects of relatively low velocity (compared to 1c)?
4. Oct 11, 2007
### pervect
Staff Emeritus
Angular momentum is still conserved in special relativity, you just have to use the relativistic formula for angular momentum, not the Newtonian formula.
This basically involves using the correct relativistic formula for linear momentum, as I described earlier.
You might also want to check out http://panda.unm.edu/Courses/finley/P495/TermPapers/relangmom.pdf [Broken]
though it may be a bit advanced.
Last edited by a moderator: May 3, 2017
5. Oct 13, 2007
### BulletRide
Ah, I see. Thanks! |
https://ezeenotes.in/average-torque-on-a-projectile-of-mass-%F0%9D%91%9A-initial-speed-%F0%9D%91%A2-and-angles-of-projection-%CE%B8-between-initial-and-final-position-%F0%9D%91%83-and-%F0%9D%91%84-as-shown-in-figure-abo/ | # Average torque on a projectile of mass 𝑚, initial speed 𝑢 and angles of projection θ, between initial and final position 𝑃 and 𝑄 as shown in figure about the point of projection is
Question: Average torque on a projectile of mass 𝑚, initial speed 𝑢 and angles of projection θ, between initial and final position 𝑃 and 𝑄 as shown in figure about the point of projection is
(A) $mu^{2}\sin \theta$
(B) $mu^{2}\cos \theta$
(C) $\dfrac{1}{2}mu^{2}\sin 2\theta$
(D) $\dfrac{1}{2}mu^{2}\cos 2\theta$ |
https://blender.stackexchange.com/questions/59165/how-to-create-intermediate-vertices-of-a-humpy-surface-interpolation?noredirect=1 | # How to create intermediate vertices of a humpy surface (interpolation)
I took a cube, stretched it, added a number of intermediate edges and adjusted their height to get a humpy top surface.
Now, I want to get a smooth top surface by adding interpolated vertices. The existing edges must not change their position. How do I do this?
FYI: The object will be 3D-printed.
@eromod's hint was the solution.
• I created a huge ovoid (an icosphere that I compressed on one axis for an additional hump on the other axis),
• aligned it carefully manually,
• extended the original top surface in top direction,
• applied a boolean modifier with "Intersect" operation,
• and did a bit of cleanup work on the mesh afterwards.
• Could you please clarify what is "huge ovoid" and "intersection modifier" ? Does the latter mean Ctrl+F > Intersection (Knife) ? Also please don't include new question in the answer; ask a new question instead. – Mr Zak Aug 3 '16 at 11:26 |
https://mail.python.org/pipermail/matrix-sig/1998-January/001988.html | # [MATRIX-SIG] Possible bug in Numeric LoadArray function
Paul F. Dubois <[email protected]>
Thu, 8 Jan 1998 08:29:19 -0800
The PDB library has the following strategy: store the data in a format
native to the writing machine and also store info about the machine's
representation. If a reader discovers non-native data, it translates on the
way in. This gives you much better performance in normal cases than always
translating to some standard format.
Certainly there would be no reason to reinvent all that, if that was what
was wanted. But I know nothing about pickling so I will now shut up.
-----Original Message-----
From: Bill White <[email protected]>
Cc: [email protected] <[email protected]>; [email protected]
<[email protected]>; [email protected]
<[email protected]>
Date: Thursday, January 08, 1998 6:38 AM
Subject: Re: [MATRIX-SIG] Possible bug in Numeric LoadArray function
>
>> > A pickled file for an integer array was created on an SGI Onyx.
>> > This machine has 4 byte ints and 4 byte longs.
>> >
>> > The pickled file would not read on a DEC alpha (64 bit longs).
>> > The reshape function in LoadArray failed because the
>> > byte counts didn't match.
>>
>> I am not surprised - the pickling approach used in NumPy is not really
>> portable. Anyway, it will have to be rewritten to profit from cPickle
>> under Python 1.5. Unfortunately, there is no perfect solution; either
>> pickling must make assumptions about the binary format of its data
>> types (as it does now), or it must apply a conversion, which can
>> become very time consuming for large arrays.
>
>Well, forgive me for saying the obvious thing, but perhaps the best
>approach is to do the following, assuming that people will be moving
>mostly to similar or identical architectures, but with the occasional
>1.) Define a language for describing numeric representations. This
> could be as simple as a bit string, with k bit fields to store
> the sizes of the various sized integers, a field to store a
> token to represent the floating point representation, and whatever
> else needs to be done.
>2.) Add a copy of this record to each pickled object somehow.
>3.) Write routines to translate between non-standard representations.
> You could go wild with this, but I'll bet most or all needs would
> be met with routines to translate 2^m byte integers to 2^n byte
> integers, both signed and unsigned, for n > m, n,m \in {1..3}.
> That's 3 * 2 == 12 trivial routines, since the diagonals are
> just copies. Also, you'd have to write a floating point conversion
> routine, which is complicated as well.
>
>This way, if you pickle something for your own use later, or for use
>on an identical machine, translation takes constant time. If you
>pickle something to be sent to a different architecture, there is
>enough information to do the conversion.
>
>I think this is roughly the way DCE or Sun RPC does this. However, I
>don't know the pickling code, so perhaps it is a silly idea after all.
>
>
>
>_______________
>MATRIX-SIG - SIG on Matrix Math for Python
>
>send messages to: [email protected] |
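Not part of the original 1998 thread, but a sketch of the same idea using today's NumPy (assumed here purely for illustration): store the raw bytes in the writer's native layout together with a tag describing that layout, and let the reader reinterpret them correctly even on a machine with different word size or byte order.

```python
import numpy as np

def dump(arr):
    # Writer: raw bytes in the machine's native layout, plus a tag
    # (NumPy's dtype string, e.g. '<i4') that records item size and byte order.
    return {"bytes": arr.tobytes(), "dtype": arr.dtype.str, "shape": arr.shape}

def load(record):
    # Reader: the dtype tag tells NumPy the writer's layout, so the data is
    # reinterpreted correctly even on a machine with a different native format.
    flat = np.frombuffer(record["bytes"], dtype=np.dtype(record["dtype"]))
    return flat.reshape(record["shape"])

a = np.arange(6, dtype=np.int32).reshape(2, 3)
b = load(dump(a))
assert (a == b).all()
```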
http://www.tempel.org/Arbed/SourceTree |
# Using Arbed with SourceTree for comparing Xojo projects
You can set up the Version Control software SourceTree to use Arbed as a diff viewer for Xojo project files:
• Open the Preferences (Windows: Tools->Options) from the menu, switch to the Diff tab
• Under External Diff, set the Diff Tool: to Other or Custom.
• For Diff Command:, enter the POSIX (Unix-like) path to the Arbed executable. On OS X, you can drag Arbed's icon into this field to set its path. Note that on OS X you then need to append /Contents/MacOS/Arbed to the path behind "Arbed.app". For example, if Arbed.app is inside your main Applications folder, the path on OS X would be: /Applications/Arbed.app/Contents/MacOS/Arbed. On Windows, dragging may not work. Since SourceTree uses mingw, i.e. a Unix-like shell, internally, the path to Arbed.exe needs to use POSIX notation, not Windows notation, i.e. it needs to be something like /c/Users/yourname/Downloads/ArbedWin/Arbed.exe rather than 'C:\Users\yourname\Downloads\ArbedWin\Arbed.exe'. You can figure out the correct path on Windows by opening the Terminal from SourceTree, which opens a mingw shell, and then dragging Arbed.exe from Explorer to the Terminal window.
• For Arguments:, enter: --showdiff -showAlerts -noExternals "$LOCAL" "$REMOTE" (On Windows, enter the same but without the quotes!)
With that, you can have Arbed show the differences of a RB Project file by right-clicking on it in SourceTree, then choosing External Diff from the contextual menu or selecting the file and choosing "External Diff" from the Actions menu, or by typing cmd+D (or ctrl-D on Windows). |
http://mathhelpforum.com/advanced-statistics/15641-few-statistics-questions.html | Thread: A few statistics questions
1. A few statistics questions
Here are the questions:
1. The petrol stations along a road are located according to a Poisson distribution, with an average of 1 station in 10 km. Because of an oil shortage worldwide, there is a probabililty of 0.2 that a petrol station will be out of petrol.
(i) Find the probability that there is at most 1 petrol station in 15 km of the road.
(ii) Find the probability that the next 3 stations a driver encounters will be out of petrol.
A driver on this road knows that he can go another 15 km before his car runs out of petrol. Find the probability that he will be stranded on the road without petrol. Give your answer correct to 2 decimal places.
For this question, I can't solve the last part (in bold). Also, for (ii) I got the answer which is 0.008 but I don't quite know how I arrived at that so I need someone to explain that to me. Oh I can use the GC to solve this question so there's no need to go through all the formula.
Solved
2. Vehicles approaching a T-junction must either turn left or turn right. Observations by traffic engineers showed that on average, for every ten vehicles approaching the T-junction, one will turn left. It is assumed that the driver of each vehicle chooses direction independently. Out of 5 randomly chosen vehicles approaching the T-junction,
(i) find the probability that at least 3 vehicles turn right,
(ii) find the probability that exactly 4 vehicles turn right given that at least 3 vehicles turn right.
On a particular weekend, 40 randomly chosen vehicles approached the T-junction. Using a suitable approximation, find the probability that at least 38 of them turn right.
Again it's the last part I have problems with, and I can use the GC for this as well.
Solved
3. X is a binomial random variable, where the number of trials is 5 and the probability of success of each trial is p. Find the values of p if P(X=4)=0.12.
I know for question 3, it has got to do with the formula. But I still can't get the answer.
Thanks in advance if you could help me with any of these questions, I'm drowning in my homework, all 87 questions of them!
2. Originally Posted by margaritas
Here are the questions:
1. The petrol stations along a road are located according to a Poisson distribution, with an average of 1 station in 10 km. Because of an oil shortage worldwide, there is a probabililty of 0.2 that a petrol station will be out of petrol.
(i) Find the probability that there is at most 1 petrol station in 15 km of the road.
(ii) Find the probability that the next 3 stations a driver encounters will be out of petrol.
A driver on this road knows that he can go another 15 km before his car runs out of petrol. Find the probability that he will be stranded on the road without petrol. Give your answer correct to 2 decimal places.
For this question, I can't solve the last part (in bold). Also, for (ii) I got the answer which is 0.008 but I don't quite know how I arrived at that so I need someone to explain that to me. Oh I can use the GC to solve this question so there's no need to go through all the formula.
The distribution of the distance to the next petrol station has an exponential
distribution with parameter lambda = mean distance between petrol stations
with fuel.
The mean distance between petrol stations is 10km, and as 20% of them
do not have fuel the mean distance between those with fuel is 12.5km
(in 1000km there are 100 stations of which 80 have petrol so mean distance
between these is 1000/80=12.5km).
RonL
3. Thanks but I still don't get it!
4. Originally Posted by margaritas
[snip]
For this question, I can't solve the last part (in bold). Also, for (ii) I got the answer which is 0.008 but I don't quite know how I arrived at that so I need someone to explain that to me. Oh I can use the GC to solve this question so there's no need to go through all the formula.
2. [snip]
Again it's the last part I have problems with, and I can use the GC for this as well.
What is the GC?
RonL
5. Originally Posted by margaritas
Thanks but I still don't get it!
The wikipedia article on the exponential distribution gives the distribution
functions and describes its relation to the Poisson distribution.
RonL
6. Originally Posted by CaptainBlack
What is the GC?
RonL
Graphic calculator.
7. Originally Posted by margaritas
2. Vehicles approaching a T-junction must either turn left or turn right. Observations by traffic engineers showed that on average, for every ten vehicles approaching the T-junction, one will turn left. It is assumed that the driver of each vehicle chooses direction independently. Out of 5 randomly chosen vehicles approaching the T-junction,
(i) find the probability that at least 3 vehicles turn right,
(ii) find the probability that exactly 4 vehicles turn right given that at least 3 vehicles turn right.
On a particular weekend, 40 randomly chosen vehicles approached the T-junction. Using a suitable approximation, find the probability that at least 38 of them turn right.
Again it's the last part I have problems with, and I can use the GC for this as well.
On the last part you are supposed to use the Normal approximation to the
Binomial distribution.
This has mean Np (in this case 40*0.9), and standard deviation sqrt(Np(1-p))
(in this case sqrt(40*0.9*0.1)).
RonL
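Not part of the original thread, but a quick numerical check of the suggested approximation (SciPy assumed; any stats package works). With n = 40 and P(right) = 0.9, the exact value of P(X >= 38) is about 0.223; the normal approximation with continuity correction gives roughly 0.21, and the Poisson approximation on the number of left-turning vehicles (the route taken later in the thread) gives about 0.238.

```python
import math
from scipy import stats

n, p = 40, 0.9                                # 40 vehicles, 90% turn right

exact = stats.binom.sf(37, n, p)              # P(X >= 38) = 1 - P(X <= 37)

mu, sigma = n * p, math.sqrt(n * p * (1 - p))
normal = stats.norm.sf(37.5, mu, sigma)       # normal approx. with continuity correction

poisson = stats.poisson.cdf(2, n * (1 - p))   # at most 2 of the 40 turn left

print(exact, normal, poisson)                 # ~0.223, ~0.215, ~0.238
```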
8. Originally Posted by margaritas
Graphic calculator.
Perhaps your Graphic Calculator should take the exam for you
RonL
9. Originally Posted by CaptainBlack
Perhaps your Graphic Calculator should take the exam for you
RonL
Oh. We are allowed and supposed to use the GC to aid us in the A-Levels examinations.
10. Originally Posted by margaritas
Oh. We are allowed and supposed to use the GC to aid us in the A-Levels examinations.
Reminds me of a story I heard when Hewlett Packard first put symbolic
capabilities on their calculators (a computer algebra system). It was said that
it would get 80% on a calculus exam if only it did not need an operator.
RonL
11. Originally Posted by CaptainBlack
On the last part you are supposed to use the Normal approximation to the
Binomial distribution.
This has mean Np (in this case 40*0.9), and standard deviation sqrt(Np(1-p))
(in this case sqrt(40*0.9*0.1)).
RonL
Thanks I got the answer but using Poisson approximation to Binomial.
12. Originally Posted by CaptainBlack
The distribution of the distance to the next petrol station has an exponential
distribution with parameter lambda = mean distance between petrol stations
with fuel.
The mean distance between petrol stations is 10km, and as 20% of them
do not have fuel the mean distance between those with fuel is 12.5km
(in 1000km there are 100 stations of which 80 have petrol so mean distance
between these is 1000/80=12.5km).
RonL
I got my answer using Po(1.5)
Probability = P(X=0) + P(X=1)x0.2 + P(X=2)x0.2^2 + P(X=3)x0.2^3 = 0.30 (2 d.p.)
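Not part of the original thread: note that the sum above actually continues over all possible station counts, it just converges very quickly. Summing the whole series gives $e^{-1.5}e^{0.3}=e^{-1.2}\approx 0.3012$, which a short numerical check (Python assumed) confirms.

```python
import math

lam = 1.5        # expected number of stations in 15 km
p_empty = 0.2    # probability a given station is out of petrol

# P(stranded) = sum over k of P(k stations in 15 km) * P(all k are empty)
total = sum(math.exp(-lam) * lam**k / math.factorial(k) * p_empty**k for k in range(50))
print(total, math.exp(-1.2))   # both ~0.3012, i.e. 0.30 to 2 d.p.
```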
13. I still can't solve question 3 that is,
X is a binomial random variable, where the number of trials is 5 and the probability of success of each trial is p. Find the values of p if P(X=4)=0.12.
Anybody? Thanks!
14. Originally Posted by margaritas
I still can't solve question 3 that is,
X is a binomial random variable, where the number of trials is 5 and the probability of success of each trial is p. Find the values of p if P(X=4)=0.12.
Anybody? Thanks!
$
P(X=4) = \frac{5!}{4! 1!} p^4 (1-p) = 0.12
$
so we have:
$
p^4 (1-p) = 0.024
$
Now I think you need to solve this numerically; there will be two positive
solutions (since Descartes' rule of signs tells us there are either 2 or 0
positive solutions, and we know that there must be at least one root).
Now I could tell you what the roots are (at least approximately) but it
would be better if you try to find them yourself.
RonL |
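A numerical follow-up, not part of the original thread (Python assumed): the two positive roots of $p^4(1-p)=0.024$ come out at roughly $p\approx 0.459$ and $p\approx 0.973$.

```python
import numpy as np

# p^4 (1 - p) = 0.024   <=>   -p^5 + p^4 - 0.024 = 0
roots = np.roots([-1.0, 1.0, 0.0, 0.0, 0.0, -0.024])
real = [r.real for r in roots if abs(r.imag) < 1e-9 and 0 < r.real < 1]
print(sorted(real))   # roughly [0.459, 0.973]
```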
http://cvgmt.sns.it/paper/4058/Lewicka-Pakzad,%20Bhattacharya-Lewicka-Schaffner,%20Lewicka-Raoult-Ricciotti | # Dimension reduction for thin films with transversally varying prestrain: the oscillatory and the non-oscillatory case
created by lučić on 02 Oct 2018
[BibTeX]
Submitted Paper
Inserted: 2 oct 2018
Last Updated: 2 oct 2018
Year: 2018
Abstract:
We study the non-Euclidean (incompatible) elastic energy functionals in the description of prestressed thin films, at their singular limits ($\Gamma$-limits) as $h\to 0$ in the film's thickness $h$. Firstly, we extend the prior results Lewicka-Pakzad, Bhattacharya-Lewicka-Schaffner, Lewicka-Raoult-Ricciotti to arbitrary incompatibility metrics that depend on both the midplate and the transversal variables (the "non-oscillatory" case). Secondly, we analyze a more general class of incompatibilities, where the transversal dependence of the lower order terms is not necessarily linear (the "oscillatory" case), extending the results of Agostiniani-Lucic-Lucantonio, Schmidt to arbitrary metrics and higher order scalings. We exhibit connections between the two cases via projections of appropriate curvature forms on the polynomial tensor spaces. We also show the effective energy quantisation in terms of scalings as a power of $h$ and discuss the scaling regimes $h^2$ (Kirchhoff), $h^4$ (von Karman), $h^6$ in the general case, and all possible (even power) regimes for conformal metrics. Thirdly, we prove the coercivity inequalities for the singular limits at $h^2$- and $h^4$- scaling orders, while disproving the full coercivity of the classical von Karman energy functional at scaling $h^4$.
https://math.stackexchange.com/questions/2439510/a-sequence-of-holomorphic-functions-f-n-uniformly-convergent-on-boundary-o | # A sequence of holomorphic functions $\{f_n\}$ uniformly convergent on boundary of open set.
Let $\Omega$ be a bounded connected open subset of $\mathbb C$. Let $\{f_n\}$ be sequence of functions which are continuous on $\bar{\Omega}$ and holomorphic on $\Omega$. Assume that the sequence converges uniformly on the boundary of $\Omega$ .Show that $\{f_n\}$ converges uniformly on $\bar \Omega$ to a function which is continuous on $\bar \Omega$ and holomorphic on $\Omega$.
I was using the Weierstrass theorem (complex analysis), in which the sequence has to converge uniformly on every compact subset of $\Omega$. But here I cannot apply it directly.
Please someone give some hints..
Thank you..
By the maximum modulus principle, applied to the function $f_n - f_m$ (holomorphic on $\Omega$ and continuous on $\overline{\Omega}$), $$\sup_{z \in \overline{\Omega}} |f_n(z) - f_m(z)| = \sup_{z \in \partial \Omega} |f_n(z) - f_m(z)|.$$
Since $f_n$ converges uniformly on $\partial \Omega$ to some function, the sequence $(f_n)$ is uniformly Cauchy on $\partial \Omega$ so the right hand side tends to zero as $m,n \to \infty$. This implies that $(f_n)$ is uniformly Cauchy on $\overline{\Omega}$ so $f_n$ converge uniformly to some continuous function $f$ on $\overline{\Omega}$. In particular, $f_n$ also converges uniformly to $f$ on every compact subset of $\Omega$ so $f$ is holomorphic on $\Omega$. |
https://electronics.stackexchange.com/questions/335495/why-do-two-reverse-diodes-represent-the-logic-gate-and?answertab=votes | # Why do two reverse diodes represent the logic gate AND?
Consider:
I can make no sense in my head how this can work. How is it possible to have a current flow through normal diodes from cathode to anode and represent an AND if both are 1?
• Note that this circuit typically works but it has no gain and as a result cannot restore noise margins. Typically, at slow to moderate speeds you can get away with doing something like this once in between conventional gates or other functional blocks having gain, but you can't really have a sequence of passive gates like these feeding one another without quickly running into problems. Still, tricks like this can be very useful when you have ICs that almost do what you need, but need a trivial amount of "glue" in between and the signals aren't too fast. Otherwise there's tinylogic. – Chris Stratton Oct 21 '17 at 17:29
Imagine A and B are both high. Then there is no current that flows out of A nor is there current that flows out of B, so S is high.
simulate this circuit – Schematic created using CircuitLab
Now if A is low, the diode allows A to draw current, which pulls down the node voltage of S, so the voltage of S corresponds to the voltage drop of the diode when current is flowing through the resistor and the diode... which is approximately 0.7V, or 'low'.
simulate this circuit
Same if B is low.
Same if both A and B are low.
Therefore, both A and B must be high in order for S to be high... AND gate!
As stated by fukanchik in the comments, the role of the diodes is to prevent the inputs from interfering with one another when they are in different states, but the diode is only necessary with inputs that can sink and source current. If the inputs can only sink current, such as in an open-collector configuration, then the diode is not necessary.
simulate this circuit
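Not part of the original answer, but a tiny idealized model (Python, with a constant 0.7 V forward drop assumed) of the pull-up-plus-diodes node: the output only sits high when both inputs are high, which is exactly the AND behaviour described above.

```python
V_SUPPLY, V_DIODE = 5.0, 0.7   # pull-up supply and assumed diode forward drop

def diode_and(v_a, v_b):
    # Each input can only pull the node DOWN through its diode, to roughly
    # (input + 0.7 V). If neither diode conducts, the pull-up resistor
    # takes the node all the way to the supply rail.
    return min(V_SUPPLY, v_a + V_DIODE, v_b + V_DIODE)

for a in (0.0, 5.0):
    for b in (0.0, 5.0):
        print(f"A={a:>3} V  B={b:>3} V  ->  S={diode_and(a, b):.1f} V")
# Only A=5 V, B=5 V leaves S at 5 V; any low input drags S down to about 0.7 V.
```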
• That's what I thought, too, but if there's no resistance between the source and S, why would anything ever flow through A or B? – Phil N DeBlanc Oct 20 '17 at 16:34
• The 'source' is the 5V pull-up resistor. When A and B are high, S is only high because it is being pulled up by the resistor. No current is flowing through the diodes at all. When A or B is low, it has the result of pulling down the voltage of node S by sinking current, thus, (approximately) all of the 5V drop is across the resistor. – slightlynybbled Oct 20 '17 at 16:38
• @PhilNDeBlanc current only flows left through those diodes, when the input is low, sourced from the pullup and from whatever follows. – Trevor_G Oct 20 '17 at 16:38
• @PhilNDeBlanc I added a bit of clarification to the second paragraph. Hope it helps. Enjoy! – slightlynybbled Oct 20 '17 at 16:41
• You should add that the role of diodes is to prevent current to flow from one input into another when inputs are not equal (01 or 10). – fukanchik Oct 20 '17 at 17:47
simulate this circuit – Schematic created using CircuitLab
Figure 1. Four possible input conditions.
The only one of the four switch combinations that allows the output to pull high is '11'. That is, by definition, an AND function.
• +1 Can always trust you to use great illustrations. :) Might be nice to add the voltages to the 0 and 1 flags for even more clarity. – Trevor_G Oct 20 '17 at 17:16
• If all diodes are replaced by wires you get the same results. I think this illustration is not representative of how the gate works. – Jose Antonio Dura Olmos Oct 20 '17 at 18:50
• It directly answers the OP's question, "how its possible to current flow through normal diodes from cathode to anode and represent an AND if both are 1." Please feel free to write an improved answer. – Transistor Oct 20 '17 at 19:00
• In addition, this AND gate is shown in isolation. The point of the diodes is to prevent one input pulling down the other and affecting other gates or logic connected to that input. Replacing with wires would not give the same result. – Transistor Jun 5 at 20:20 |
http://math.stackexchange.com/questions/181869/finding-the-splitting-field-of-fx | # Finding the splitting field of $f(x)$
I'm trying to learn the theory of splitting fields. So I went through this example on an old exam: Find the splitting field $K$ of $f(x)$ over $\mathbb{Q}$ for $f(x)=x^6-9$
$x^6-9=(x^3-3)(x^3+3)$ and so $\pm\sqrt[3]{3}$ are the only real roots. When defining $w=-\frac{1}{2}+\frac{\sqrt[3]{3}}{2} i$ we have that $w$ is a root of $x^2+x+1$ and therefore a root of $x^{3}-1$, and so $\sqrt[3]{3}\,w$ is a solution and so is $\sqrt[3]{3}\,w^{2}$ since $w^3=1$ and $(w^2)^3=1$, and the same argument goes for the negative real root, and so $K=\mathbb{Q}(\sqrt[3]{3},w)$. Is the argument above correct? Or am I missing something? Is this how you always go about finding the splitting field: finding a real root and then multiplying by roots of unity? What if there are no real roots? I'm doing this algebra course on my own, and field extensions and splitting fields are a part of it that I don't get. So please explain as basically as possible, and if you know any online lectures on the subject please recommend them to me.
Yes, your argument is correct (except that in the formula for $w$, you want $\sqrt3$, not $\root3\of3$). No, that method won't always work. It works for "binomials", that is, polynomials of the form $x^n-a$. Different polynomials require different methods, some are much harder than others, some really can't be done at all until you get to the main theorems of Galois Theory (and even then, the answers may not be in the form you'd like). Keep on studying, you'll develop more tools as you go along. |
https://www.computer.org/csdl/trans/tc/1972/03/05008948-abs.html | Issue No. 03 - March (1972 vol. 21)
ISSN: 0018-9340
pp: 260-268
Bruce J. Hansen , Raytheon Data Systems, Norwood, Mass. 02067.
Jack Sklansky , School of Engineering, University of California, Irvine, Calif. 92664.
Robert L. Chazin , Department of Mathematics, University of California, Irvine, Calif. 92664.
ABSTRACT
The minimum-perimeter polygon of a silhouette has been shown to be a means for recognizing convex silhouettes and for smoothing the effects of digitization in silhouettes. We describe a new method of computing the minimum-perimeter polygon (MPP) of any digitized silhouette satisfying certain constraints of connectedness and smoothness, and establish the underlying theory. Such a digitized silhouette is called a "regular complex," in accordance with the usage in piecewise linear topology. The method makes use of the concept of a stretched string constrained to lie in the cellular boundary of the digitized silhouette. We show that, by properly marking the virtual as well as the real vertices of an MPP, the MPP can serve as a precise representation of any regular complex, and that this representation is often an economical one.
CITATION
Bruce J. Hansen, Jack Sklansky, Robert L. Chazin, "Minimum-Perimeter Polygons of Digitized Silhouettes", IEEE Transactions on Computers, vol. 21, no. , pp. 260-268, March 1972, doi:10.1109/TC.1972.5008948 |
https://chem.libretexts.org/Core/Physical_and_Theoretical_Chemistry/Equilibria/Solubilty/Solubility_Product_Constant%2C_Ksp | # Solubility Product Constant, Ksp
The solubility product constant, $$K_{sp}$$, is the equilibrium constant for a solid substance dissolving in an aqueous solution. It represents the level at which a solute dissolves in solution. The more soluble a substance is, the higher the $$K_{sp}$$ value it has.
Consider the general dissolution reaction below (in aqueous solutions):
$aA_{(s)} \rightleftharpoons cC_{(aq)} + dD_{(aq)} \tag{1}$
To solve for the $$K_{sp}$$ it is necessary to take the molarities or concentrations of the products (cC and dD) and multiply them. If there is a coefficient in front of any of the products, the concentration of that product is raised to the power of the coefficient (and, when the concentrations are written in terms of the molar solubility, it is also multiplied by that coefficient). This is shown below:
$K_{sp} = [C]^c [D]^d \tag{2}$
Note that the reactant, aA, is not included in the $$K_{sp}$$ equation. Solids are not included when calculating equilibrium constant expressions, because their concentrations do not change the expression; any change in their concentration is insignificant and is therefore omitted. Hence, $$K_{sp}$$ represents the maximum extent to which a solid can dissolve in solution.
Exercise 1: Magnesium Fluoride
What is the solubility product constant expression for $$MgF_2$$?
Solution
The relevant equilibrium is
$MgF_{2(s)} \rightleftharpoons Mg^{2+}_{(aq)} + 2F^-_{(aq)}$
so the associated equilibrium constant is
$K_{sp} = [Mg^{2+}][F^-]^2$
Exercise 2: Silver Chromate
What is the solubility product constant expression for $$Ag_2CrO_4$$?
Solution
The relevant equilibrium is
$Ag_2CrO_{4(s)} \rightleftharpoons 2Ag^+_{(aq)} + CrO^{2-}_{4(aq)}$
so the associated equilibrium constant is
$K_{sp} = [Ag^{+}]^2[CrO_4^{2-}]$
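A short numerical sketch of how the expression from Exercise 1 is typically used (Python; the Ksp value below is only an illustrative placeholder, not a tabulated constant and not taken from the text): for MgF₂, letting the molar solubility be s gives Ksp = s(2s)² = 4s³, so s = (Ksp/4)^(1/3).

```python
# Illustrative only: this Ksp value is an assumed placeholder, not a reference constant.
ksp = 5.0e-11                  # assumed Ksp for an MgF2-like salt, mol^3/L^3

s = (ksp / 4) ** (1 / 3)       # [Mg2+] = s, [F-] = 2s, so Ksp = s * (2*s)**2 = 4*s**3
print(f"molar solubility s = {s:.2e} mol/L")
print(f"[Mg2+] = {s:.2e} M, [F-] = {2*s:.2e} M")
```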
### Important effects
• For highly soluble ionic compounds the ionic activities must be found instead of the concentrations that are found in slightly soluble solutions.
• Common Ion Effect: The solubility of the solid is reduced by the presence of a common ion. For a given equilibrium, the reaction with a common ion present has a lower solubility than the reaction without it; $$K_{sp}$$ itself, being a constant at a given temperature, does not change.
• Salt Effect (diverse ion effect): Having the opposite effect on solubility compared to the common ion effect, uncommon ions increase the solubility of the solid. Uncommon ions are ions other than those involved in the equilibrium.
• Ion Pairs: When ion pairs form (a cation and an anion associated in solution), the solubility calculated from the $$K_{sp}$$ value is less than the experimentally measured value, because some of the dissolved ions are tied up in pairs; more solute dissolves than the simple $$K_{sp}$$ calculation predicts.
### Contributors
• Kathryn Rashe, Lisa Peterson |
https://www.ask-math.com/conversion-of-percentage-to-fraction.html | 4/5 as a decimal
4/5 as a decimal
# Conversion of Percentage to Fraction
Conversion of percentage to fraction : Here we will learn how to convert percentage to fraction.
Consider the following diagram
Here half of the circle is shaded in red. In other words we say 1/2
of the circle is shaded in red.
Now we shall write an equivalent fraction to 1/2 .
50/100 = 50% ( By multiplying both numerator and denominator by 50)
For conversion of a percentage to a fraction, just divide the number by 100 and then reduce it to its lowest form.
Examples :
1) 57%
Solution :
As there is a % sign with 57 so divide 57 by 100.
57 % = 57/100
2) 45%
Solution :
As there is a % sign with the number so divide that number by 100.
45% = 45/100
= (45 ÷ 5)/(100 ÷ 5)
= 9/20 ( which is lowest fraction)
3) 82%
Solution :
As there is a % sign with the number so divide that number by 100.
82% = 82/100
= (82 ÷ 2)/(100 ÷ 2)
= 41/50 ( which is lowest fraction)
4) 125%
Solution :
As there is a % sign with the number so divide that number by 100.
125% = 125/100
= (125 ÷ 5)/(100 ÷ 5)
= 25/20
= (25 ÷ 5)/(20 ÷ 5)
= 5/4( which is lowest fraction)
5) 36%
Solution :
As there is a % sign with the number so divide that number by 100.
36% = 36/100
= (36 ÷ 2)/(100 ÷ 2)
= 18/50
= (18 ÷ 2)/(50 ÷2)
= 9/25( which is lowest fraction)
6) 115%
Solution :
As there is a % sign with the number so divide that number by 100.
115 % = 115/100
= (115 ÷ 5)/(100 ÷ 5)
= 23/20 ( which is lowest fraction)
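The same procedure, divide by 100 and reduce to lowest terms, can also be checked mechanically; here is a small sketch (Python assumed, not part of the original lesson):

```python
from fractions import Fraction

def percent_to_fraction(p):
    """Divide by 100; Fraction automatically reduces to lowest terms."""
    return Fraction(p, 100)

for p in (57, 45, 82, 125, 36, 115):
    print(f"{p}% = {percent_to_fraction(p)}")
# 57% = 57/100, 45% = 9/20, 82% = 41/50, 125% = 5/4, 36% = 9/25, 115% = 23/20
```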
http://www.xavierdupre.fr/app/csharpy/helpsphinx/i_faq.html | FAQ
Questions
Linq is missing
No dependencies are added by default, and many pieces of code rely on standard C# implemented in System.Core, which must be added as a dependency. That is what the following error indicates:
System.InvalidOperationException: Error (CS0234): Missing 'Linq'
Then 'System.Core' must be added to the dependencies in the function create_cs_function, or with -d System.Core in the magic command CS.
(original entry : compile.py:docstring of csharpy.runtime.compile.create_cs_function, line 16) |
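A sketch of what the fix might look like in code (not from the original FAQ; the exact signature of create_cs_function and the name of its dependencies argument are assumptions that should be checked against the csharpy documentation):

```python
# Assumption: create_cs_function(name, code, dependencies=...); verify the real
# signature in csharpy.runtime.compile before relying on this sketch.
from csharpy.runtime.compile import create_cs_function

code = """
public static int SumEven(int[] values)
{
    return values.Where(v => v % 2 == 0).Sum();   // Linq extension methods live in System.Core
}
"""

sum_even = create_cs_function("SumEven", code, dependencies=["System.Core"])
print(sum_even([1, 2, 3, 4]))   # expected: 6
```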
http://www.physicsforums.com/showthread.php?p=4242242 | Matrix trace minimization and zeros
by GoodSpirit
Tags: matrix, minimization, trace, zeros
P: 19 Hello, :) I would like to minimize and find the zeros of the function F(S,P) = trace(S - S P'(A + P S P')^-1 P S) with respect to S and P. S is a symmetric square matrix; P is a rectangular matrix. Could you help me? Thank you very much. All the best, GoodSpirit
P: 19 Hello everybody, Perhaps I should explain a little bit. The aim is to minimize an error metric and preferably drive it to zero. This should be done as a function of S and P, in particular as a function of their rank and dimensions. By the way, the matrix A is symmetric too. Many thanks
P: 19 Hello, Trying to update the equation presentation. $$F(S,P)=\operatorname{tr}\!\left(S-S P^T(A+PSP^T)^{-1} P S\right)$$ A is positive definite. I've been using matrix derivatives. What do you think? All the best GoodSpirit
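Not part of the original thread, but a small NumPy sketch (assumed) that simply evaluates the objective; it is handy for sanity-checking any closed-form manipulation of the expression against random symmetric S, rectangular P, and positive definite A.

```python
import numpy as np

def objective(S, P, A):
    """F(S, P) = tr( S - S P^T (A + P S P^T)^{-1} P S )."""
    M = A + P @ S @ P.T
    return np.trace(S - S @ P.T @ np.linalg.solve(M, P @ S))

rng = np.random.default_rng(0)
n, m = 5, 3
B = rng.standard_normal((n, n)); S = B @ B.T                  # symmetric PSD S
P = rng.standard_normal((m, n))                               # rectangular P
C = rng.standard_normal((m, m)); A = C @ C.T + np.eye(m)      # positive definite A
print(objective(S, P, A))
```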
P: 19
Matrix trace minimization and zeros
LaTeX didn't work here
How to present an equation here?
Thank you
Good Spirit
https://zbmath.org/?q=an:0669.47017 | # zbMATH — the first resource for mathematics
Hankel operators on weighted Bergman spaces. (English) Zbl 0669.47017
The authors study Hankel operators on weighted Bergman spaces and establish the connection between an analytic function f and the Hankel operator generated by f on certain weighted Bergman spaces consisting of analytic functions on the unit disk $$\Delta$$.
Contents. Introduction. 1. Background. 2. General properties of Hankel operators. 3. Hilbert-Schmidt-Hankel operators and Dirichlet space. 4. Boundedness and compactness of $$H_ f$$. 5. The space $$B_ 1$$, the Macaev ideal, and Hankel operators. 6. Hankel operators in $$S_ p$$ and $$B_ p$$, $$2<p<\infty$$. 7. The case $$1<p<2$$ and $$0\leq \alpha$$. 8. The case $$-1<\alpha <0$$, $$1<p<2$$. 9. The reduced Hankel operator. 10. Hankel operators with non analytic symbols 11. Hankel operators as vector-valued paracommutators.
Some results of this paper were earlier announced in the authors’ work “Hankel operators on the Bergman space”. Abstr. Pap. Am. Math. Soc. 7, 163 (1986).
Reviewer: N.K.Karapetianc
##### MSC:
47B35 Toeplitz operators, Hankel operators, Wiener-Hopf operators 46J15 Banach algebras of differentiable or analytic functions, $$H^p$$-spaces
https://mrpilarski.wordpress.com/category/algebra-2/equations-and-graphs/graphing-an-equations-inverse/ | • ## Top Posts
quadratic equation c… on Modeling Data With Quadratic… Lemuel on Quadratic Functions and Their… Arizona Bayfield on How To Simplify Rational … Suemac on Proving Lines Parallel with Tr… Mr. Pi on Subsets of Real Numbers
## Find the Domain of a Function and Its Inverse
In this post, I have embedded an Algebra 2 Video Math Lesson about the function $f(x) = \sqrt {2x+2}$. Before considering the function, the video defines $f^{-1}$ as the inverse of f or as f inverse.
The video models how to:
1. Find the domain and range of $f(x) = \sqrt {2x+2}$
2. Find $f^{-1}$
3. Find the domain and range of $f^{-1}$
4. Determine if $f^{-1}$ is a function
If you have any questions regarding the video lesson or other questions regarding finding the inverse of a function, use the comments section.
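For reference, here is the algebra behind the video (my own summary, not a transcript): starting from $y = \sqrt{2x+2}$ with $y \ge 0$, squaring gives $y^2 = 2x + 2$, so $x = \dfrac{y^2 - 2}{2}$ and therefore $f^{-1}(x) = \dfrac{x^2-2}{2}$, restricted to $x \ge 0$ (the range of $f$). The domain of $f$ is $x \ge -1$ and its range is $y \ge 0$; these swap roles for $f^{-1}$, and with the restriction $x \ge 0$ the inverse is indeed a function.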
In this video Algebra 2 Math Lesson, I model how to graph a quadratic function and its inverse. The process starts with graphing a parabola in the form $y=ax^2+c$. The vertex is given by (0,c). After graphing the parabola with four additional points, I create the inverse's graph by reflecting the points about the line y = x. Finally, I find the inverse of the original function.
https://en.formulasearchengine.com/index.php?title=Talk:Non-uniform_rational_B-spline&oldid=318546 | # Talk:Non-uniform rational B-spline
## Inline citations
I have added some inline citations to relevant sections from two of the main reference books ("The NURBS Book" and "An Introduction to NURBS with Historical Perspective") supporting the following aspects:
• Invariance of NURBS surfaces under affine transformations
• Construction of basis functions
• Definition of a NURBS curve
• Manipulation of NURBS curves
Nedaim (talk) 21:07, 5 March 2010 (UTC)
## Image Caption
Excuse my ignorance - NURBS are new to me - but shouldn't the caption for the second image be "A three dimensional NURBS surface" rather than "A two dimensional NURBS surface"? The surface and its control points are defined in 3D space rather than 2D. If the 2D caption is correct, perhaps an explanation of the 2D/3D terminology [as it applies to NURBS] is justified. ScottColemanKC (talk) 13:53, 6 January 2009 (UTC) Good catch --SmilingRob (talk) 05:20, 28 January 2009 (UTC)
## B-Splines vs. Bézier Splines
I'm not sure I like the identification "NURBS (Non Uniform Rational Basis, or Bézier Spline)" in the History section. A Bézier spline is a special case of a B-Spline; for example, a degree 3 Bézier spline on four control points is the same as a B-Spline with knot vector [0,0,0,0,1,1,1,1]. But I don't think it is standard to regard the B in "NURBS" as standing for "Bézier." See, for example, 3-D Computer Graphics: A Mathematical Introduction with OpenGL, by Samuel Buss, Chapters 7-8. I think it would be better to replace the above quote with simply "NURBS (Non Uniform Rational Basis Spline)". Verdanthue (talk) 23:33, 2 April 2008 (UTC)
I agree. Bezier splines have no internal knots, which rules out Non-Uniform-ness. Bringing them together in one
term makes no sense. I will follow your suggestion. Mauritsmaartendejong (talk) 20:55, 5 April 2008 (UTC)
## Continuity again
I propose to remove the discussion on continuity within the article by splitting it in two distinct parts (and putting it in a separate section). The C0, C1, ... etc definitions relate to parametric continuity and require the derivatives of the curve with respect to the parameter to be continuous. G0, G1, ... definitions relate to geometric continuity, which I think is equivalent to parametric continuity provided that the parameter exactly represents the length of the curve -- which is hard to achieve in practice. In modeling, geometric continuity is usually a requirement. Parametric continuity is a bonus, e.g. if rendering can use parameter space directly for tesselation. Two separate NURBS curves which are not Cn continuous can be Gn continuous. For instance, two linear NURBS segments with different knot spans but collinear control points will show G1 continuity. Mauritsmaartendejong (talk) 15:11, 6 January 2008 (UTC)
## Double knots in the circle example
"(In fact, the curve is infinitely differentiable everywhere, as it must be if it exactly represents a circle.)"
This is true in the geometric sense, is it true in the parametrized sense? Isn't it true that two rational functions which agree in all derivatives in some point must be identical, like it holds for polynomial functions? But here we definitely have pieces with different functions on successive intervals. Someone should do the calculations to check this.--130.133.8.114 (talk) 13:01, 30 June 2010 (UTC) G. Rote
Addition. I checked: the 2nd derivatives of the spline curve are not continuous. --130.133.8.114 (talk) 18:29, 30 June 2010 (UTC)
## Clarification on the Number of Knots
"The number of knots is always equal to the number of control points plus curve degree minus one."
I urge the author to expand his or her explanation of this relationship, but I do not personally feel qualified to make these changes. I spent some considerable time on this issue and still am not clear on what the proper relationship should be. Using the Forenik tool referenced in the article, which is by the way very cool, I deduced that nKnts = nPts + nDeg + 1, whereas the article states that nKnts = nPts + nDeg - 1. To me this implies that there is some confusion over whether the ends are or are not knots. In addition, the number-of-knots issue needs to be clearly differentiated from the subject of end point multiplicity. To the casual reader, multiplicity apparently reduces the number of knots.
And thanks much for this article. Between this article and the Forenik tool, I now have a very good understanding of NURBS. — Preceding unsigned comment added by Petyr33 (talkcontribs) 18:28, 21 February 2013 (UTC)
## Number of Knots per Control Point
Quote from the text (from the section on Knot Vector): The number of knots is always equal to the number of control points plus curve degree plus one.
Shouldn't it be: ...always equal to the number of control points minus curve degree plus one..
Example: A cubic NURBS curve with 4 CPs has got 2 knots: 4 – 3 + 1 = 2 —Preceding unsigned comment added by 193.170.135.17 (talk) 11:05, 2 December 2007 (UTC)
No. If a cubic curve with four control points has two knots, they have multiplicity 4, bringing the total number of knots to 8 = 4 + 3 + 1. The multiplicity of the boundary knots causes the curve to be clamped to the first and last control point. See also property P3.2 in chapter three of Piegl and Tiller Mauritsmaartendejong 12:56, 2 December 2007 (UTC)
In my opinion the total number of knots (counted with multiplicity) should be equal to the number of control points plus degree minus 1. Example: degree = d = 1 (order = 2). These are just polygonal chains through the control points. Clearly we don't need more knots than control points. And to get the curve starting at the first control point, we don't have to repeat the first knot more than d = 1 times. (The first and last knot in the circle example should be deleted. Multiplicity 2 is enough (I did the calculation).)
It isn't a matter of opinion or of 'needing extra knots' to clamp the end points, it is a matter of how NURBS are defined! It is part of the mathematical definition of a NURBS curve that (including multiplicities) num(knots) = num(CPs) + order (CP is control point). Since order = degree + 1, the above CP+degree-1 formula runs counter to the definition of a NURBS curve. The correct definition ensures each CP has a 'fair chance' at the right number of knot spans. Consider that the number of knot spans affected by a CP is equal to the order (this is a consequence of the way the basis functions are defined). Consider a minimal cubic NURBS curve with 4 CPs: it is expected that the middle knot span is the only one affected by all 4 CPs. Also, since order=4, we know the first CP affects exactly the first 4 spans. So the last of those 4 spans must be the middle one! So, there is a middle span and three spans on either side, for a total of seven spans, or eight knots. This fits the +1 version of the definition. If we took off two spans on either end to fit the -1 version, then we'd lose the spans that are affected by only one CP! Note that all of this counts multiplicities as additional knots; a multiplicity just results in a 'degenerate' span, i.e., one with a zero-length interval in parameter space. --Migilik (talk) 05:20, 13 June 2013 (UTC)
In general, the first d knots can also be distinct, but the curve will then not start at the first control point (it will start only (d-1)/2 "steps" later). --130.133.8.114 (talk) 18:51, 30 June 2010 (UTC) Günter Rote
Maybe the sentence should be rephrased. All in all, I'd say it would be misleading to say that the number of knots (NK) is ALWAYS equal to the number of control points (NCP) plus degree (DEG) minus one, because in most cases clamped curves have a number of knots equal to the order at the ends. For cubic curves this means NK = NCP + DEG + 1 <=> 8 = 4 + 3 + 1 (Toivo83 (talk) 06:11, 2 November 2011 (UTC))
'misleading' is an understatement. NK = NCP + DEG + 1 is the correct form for all NURBS curves (regardless of whether the common 'open uniform' knot vector is used), and having NK = NCP + DEG - 1 in the article will likely cause serious confusion for people new to the field. --Migilik (talk) 05:20, 13 June 2013 (UTC)
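Not part of the talk page, but a quick check of the +1 form using SciPy (assumed available), whose BSpline class likewise requires len(t) = len(c) + k + 1, i.e. knots = control points + degree + 1:

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                                           # cubic
c = np.array([[0, 0], [1, 2], [3, 2], [4, 0]], dtype=float)     # 4 control points
t = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)             # 8 = 4 + 3 + 1 knots

curve = BSpline(t, c, k)     # raises an error if len(t) != len(c) + k + 1
print(curve(0.0), curve(1.0))  # clamped knots: the curve hits the first and last control point
```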
## Parameterisation ratio
Also note that the only significant factor is the ratio of the values to each other: the knot vectors [0 0 1 2 3], [0 0 2 4 6] and [1 1 2 3 4] produce the same curve.
Er, I've got a problem with the ratios (0:1 = 0:2 = 1:2) and (1:2 = 2:4 = 2:3) which is what this states. I can only conclude that the minimum value is also a significant factor. Can anyone help me out on this one?
The only significant thing is the ratio of neighboring parameter values in their ascending order, therefore the given example is correct. Minimum value has no significance. As long as a multiplier value exists to make ratio between different parameter values the same, the resulting b-spline curve will be the same.
E.g. 0 1 2 3 4 is the same as 0 2 4 6 8 (multiplier being 0.5)
or
0 3 6 9 12 is the same as 0 0.25 0.50 0.75 1.0 (multiplier being 12)
Actually for any uniformly increasing parameter values, curves with same control points position and same number of knot vectors will have the same shape (which means that all four above examples give identical curve shape provided their control points positions are the same).
For non-uniform splines:
0 0 1 3 4 5 is the same as 0 0 0.5 1.5 2.0 2.5 (multiplier is 2)
It is those differences in parameter values that determine chord length and affect the resulting curve shape. Therefore, same ratio between knot parameter values, combined with same control points positions will give the same curve. Think of it as a tension increased in one knot by any factor will not change the curve if the same factor of increased tension is applied to all the other knots.
The text doesn't make clear that changing the knot vector by a constant multiple will change the parametrization of the curve. The curve will be the same shape, but it will be different as a parametric function of u. Verdanthue (talk) 23:11, 2 April 2008 (UTC)
## Uniformed circle?
Can someone explain what a "uniformed circle" is? --Doradus 03:21, 5 February 2006 (UTC)
I asked myself the same question. I think it's a typo, and should be 'uniform circle', which I think is a circle that will give equidistant points when the parameter space is uniformly traversed. As an aside, the table needs an order and a knot vector before it really defines a uniform circle. The order is three, and the knot vector something like {0,0,0,1,1,2,2,3,3,4,4,4} Mauritsmaartendejong 20:38, 19 June 2007 (UTC)
## Picture
This article could do with a picture in as much as NURBS is an adjective to describe a class of shapes, particularly shapes of consumer products of recent years. —BenFrantzDale 16:46, 16 February 2006 (UTC)
Well, it's kind of hard and simple at the same time. It's more about the specifications of the image; I mean, you could photograph any mobile telephone, or any western airplane. But the real trouble here is to make a picture that is MEANINGFUL for the potential viewer. When I take a look at my telephone I know what I'm looking for, so I can see how the patch layout is done. But the untrained person will most definitely not see it unless it is pointed out, and even then, if the designer of the phone did a superb job, there would be nothing to see, just imaginary lines. So it's definitely going to have to be a rendering.
Now to complicate this a bit: most engineering applications make use of more independent pieces of trimmed NURBS surfaces than most non-engineers would believe. -J-
## Subs versus nurbs
the following line needs editing:
Subdivision surface modeling is now preferred over NURBS modeling in major modelers because subdivision surfaces have lots of benefits.
Because it is simply untrue. Or it would be true if a modelling package could be clearly demonstrated not to include CAD packages, and/or if CAD packages were a very specialized minority. But the simple truth is that there are more CAD users on NURBS out there than there are DCC visualisers with subdivision surface modellers. And yet no CAD package has switched over to subdivision surfaces, because there is no second-degree (C2) continuity... And the tools are severely lacking from an engineering perspective. -J-
I completely agree, as this is more or less opinion. It implies that nobody uses NURBS any more and instead uses Subs. NURBS do have important advantages compared to Subs, and therefore all major animation packages contain a NURBS engine. Opinions about Sub Division Surfaces being "better" belong on the appropriate page only. -R-
## Miswording
When describing the alternative definitions for C0 through C2 (right after they are introduced) the following claim is made.
This definition is also valid for curves and surfaces with base functions higher than 3rd order (cubic). It requires that both the direction and the magnitude of the nth derivate of the curve/surface (d/du C(u)) are the same at a joint. The main difference to the definition above is the requirement for a same order of magnitude.
It sounds like whoever wrote this is confusing same 'order of magnitude' (roughly the same value) with having the same value. If not one of the definitions is stated incorrectly as nowhere do they require same order of magnitude. The order of magnitude bit appears in the next sentence as well.
I'm pretty confident that 'same magnitude' is what was intended, from textual clues and my knowledge of the mathematical notions, so I'm going to change it; but I'm just learning about computer graphics, so if I got it wrong please correct my error. In either case, once a person who knows the facts comes along and reads this and the page over, this comment will become superfluous.
Logicnazi 09:17, 19 June 2006 (UTC)
Yes, I changed magnitude in the first explanation to the word length; magnitude, however, does mean the same thing for a vector (the magnitude of a vector is its exact length, so all vectors of the same length have the same magnitude). But for readability it's indeed better to use the word length, because it's a more understandable wording. -J-
Edit: I think the entire section on C0, C1 and C2 continuity should be revised fully. Indeed, C0 continuity implies the end points matching, C1 is matching the direction of the change vector, and C2 is matching the change in vector length. So it is NOT enough to be the same length; the rate of change must also be a continuous function (remember we are talking about continuation of a function here).
So simply: C0 ensures that the curves continue seamlessly, C1 ensures the angles are the same, C2 ensures the rate of change is continuous. -J-
PS: I was talking to an engineer who needed C3 continuity for some stress-calculation reason. |
https://www.aimsciences.org/article/doi/10.3934/amc.2011.5.69 | # American Institute of Mathematical Sciences
February 2011, 5(1): 69-86. doi: 10.3934/amc.2011.5.69
## The enumeration of Costas arrays of order 28 and its consequences
1 School of Electrical, Electronic & Mechanical Engineering, University College Dublin, Belfield, Dublin 4 2 Autodesk Research, 210 King Street East, Toronto, Ontario M5A 1J7, Canada
Received July 2010 Published February 2011
The results of the enumeration of Costas arrays of order 28 are presented: all arrays found are accounted for by the Golomb and Welch construction methods, making 28 the first order (larger than 5) for which no sporadic Costas arrays exist. The enumeration was performed on several computer clusters and required the equivalent of 70 years of single CPU time. Furthermore, a classification of Costas arrays in four classes is proposed, and it is conjectured, based on the results of the enumeration combined with further evidence, that two of them eventually become extinct.
Citation: Konstantinos Drakakis, Francesco Iorio, Scott Rickard. The enumeration of Costas arrays of order 28 and its consequences. Advances in Mathematics of Communications, 2011, 5 (1) : 69-86. doi: 10.3934/amc.2011.5.69
https://datascience.stackexchange.com/questions/54184/how-to-interpret-feature-importance-xgboost-in-this-case | # How to interpret feature importance (XGBoost) in this case?
I found two dominant features from plot_importance. My dependent variable Y is customer retention (whether or not the customer is retained, 1=yes, 0=no). My problem is that I know features A and B are significant, but I don't know how to interpret and report them in words because I can't tell whether they have a positive or negative effect on customer retention. Is there a way to find that out, or anything that helps make it clear?
Thanks.
## 3 Answers
Pictures usually tell a better story than words - have you considered using graphs to explain the effect?
Perhaps 2-way box plots or 2-way histogram/density plots of Feature A v Y and Feature B v Y might work well.
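For instance, a minimal sketch of the grouped box-plot idea (my own illustration, not part of the original answer; `df`, `feature_A`, and the target column name `retained` are assumed placeholder names):

```python
import matplotlib.pyplot as plt

# df is assumed to be a pandas DataFrame of the training data (hypothetical name),
# with the binary target in "retained" and one dominant feature in "feature_A".
ax = df.boxplot(column="feature_A", by="retained")
ax.set_xlabel("retained (0 = churned, 1 = retained)")
ax.set_ylabel("feature A")
plt.suptitle("")   # drop pandas' automatic "Boxplot grouped by ..." super-title
plt.show()
```

If the two boxes sit at clearly different levels, the feature's typical values differ between retained and churned customers, which is exactly the direction information the question asks about.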
I think you can compute the correlation matrix for the features, which could provide you with evidence to justify your hypothesis.
• The target - Y - is binary. Correlation measures the relationship between two continuous features and so is inappropriate to use in this case. – bradS Jun 21 at 8:30
Put it another way: if we did not have this feature, the model would be some number of percentage points less accurate. Calculate accuracy using your model, then randomly shuffle the variable you want to explain, predict with your model and calculate accuracy again. The difference is the added value of your variable. (It's called permutation importance.)
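A minimal sketch of that shuffle-and-rescore idea (my own code, not part of the original answer; a fitted classifier `model` and NumPy validation arrays `X_valid`, `y_valid` are assumed):

```python
import numpy as np
from sklearn.metrics import accuracy_score

def permutation_importance(model, X_valid, y_valid, feature_idx, n_repeats=10, seed=0):
    """Mean drop in accuracy when one feature's column is shuffled (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    baseline = accuracy_score(y_valid, model.predict(X_valid))
    drops = []
    for _ in range(n_repeats):
        X_shuffled = X_valid.copy()
        X_shuffled[:, feature_idx] = rng.permutation(X_shuffled[:, feature_idx])
        drops.append(baseline - accuracy_score(y_valid, model.predict(X_shuffled)))
    return float(np.mean(drops))   # larger drop = more useful feature
```

Note that this tells you how much a feature matters, not the sign of its effect; the partial dependence and SHAP suggestions below are what give the direction.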
If you want to show it visually, check out partial dependence plots (read more here).
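A rough hand-rolled version of such a plot, just to make the idea concrete (my own sketch; `model` is assumed to be a fitted classifier exposing `predict_proba`, and `X_valid` a NumPy array — neither comes from the question):

```python
import numpy as np

def partial_dependence(model, X, feature_idx, grid_size=20):
    """Mean predicted retention probability as one feature is swept over a grid."""
    grid = np.linspace(X[:, feature_idx].min(), X[:, feature_idx].max(), grid_size)
    curve = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = value               # force every row to the same value
        curve.append(model.predict_proba(X_mod)[:, 1].mean())
    return grid, np.array(curve)

# Hypothetical usage: a rising curve means higher values of the feature push the
# predicted retention probability up (a "positive" effect), and vice versa.
# grid, curve = partial_dependence(model, X_valid, feature_idx=0)
```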
It is also powerful to select some typical customers and show how each feature affected their score (e.g. using SHAP values; see it here). |
https://tug.org/pipermail/tex-hyphen/2010-May/000619.html | # [tex-hyphen] ptex-specific patterns
Karl Berry karl at freefriends.org
Sun May 30 00:53:09 CEST 2010
Akira's email seems to be bouncing (maybe I need to use another one),
fuk... is good, previous jupiter... is bad.
But I'm still a bit confused about the fact of whether the engine is
supposed to behave more like 8-bit pdfTeX or more like UTF-8 XeTeX,
As I understand it, it is neither. It does not support UTF-8. It does
not support the European 8-bit encodings. It supports Japanese
encodings (which are multi-byte).
\def\t{t2a}\ifx\t\Encoding\input hypht2 \fi
Right. Thanks.
Is there any simple test file & building instructions?
Not that I am aware of.
[macro testing for pTeX]
Akira previously suggested testing for pTeX using this, instead of
\ifx\kanjiskip\undefined
but here are some observations (tested on TL 2010).
Thanks for the observations. If Akira wants to proceed further, that is
fine with me. As far as I'm concerned, we do not need to settle this
for TL 2010, and (as I said before) I would just as soon not. There is
no harm in ptex living in its own universe for a while.
k |
https://cs.stackexchange.com/questions/68340/complexity-of-context-sensitive-languages | # Complexity of Context Sensitive Languages
I was reading about the above complexity classes in the Formal Languages and Automata book by Peter Linz.
It gives the following facts (in Theorem 5.2):
Consider we have a CFG without null or unit productions. For parsing a string w, if we restrict ourselves to leftmost derivations, we can have no more than $|P|$ sentential forms after one round, no more than $|P|^2$ sentential forms after the second round, and so on. Parsing $w$ in a CFG without null or unit productions cannot involve more than $2|w|$ rounds. Therefore, the total number of sentential forms cannot exceed $M=|P|+|P|^2+\cdots+|P|^{2|w|} =O(|P|^{2|w|+1})$
Then (in Example 14.5, citing back to the above theorem) it says
However we cannot claim that $CSL \subseteq DTIME(|P|^{cn+1})$ because we cannot put an upper bound on $|P|$ and $c$
I don't understand why it is trying to use the fact about CFLs to find the time complexity of CSLs. Is it because if CFLs do not follow that DTIME complexity, CSLs will also not follow it, as $CFL\subset CSL$? But then, given any grammar, $|P|$ is fixed, it's not infinite, and I feel $c$ is always 2 as given in the proof of Theorem 5.2.
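(For reference, the arithmetic behind the quoted bound is just a geometric sum; this is my own check, assuming $|P| \geq 2$: $$M=\sum_{i=1}^{2|w|}|P|^{i}=\frac{|P|^{2|w|+1}-|P|}{|P|-1}\leq |P|^{2|w|+1},$$ which is indeed finite for any fixed grammar.)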
• This question is impossible to answer for people not owning the textbook. Please copy the relevant parts from the textbook. Jan 6 '17 at 19:42
• Here is what book says.
– Maha
Jan 6 '17 at 20:00
• This is not enough, since I don't know what Equation (5.2) is. Besides, you should edit all this information into the body of the question. Jan 6 '17 at 20:02
• Equation (5.2) is the one which I have included in the fact block quote in the body of question. I have also already added link to the google book page 143 in the question body which contains Theorem (5.2) and on page 144, you can find equation (5.2).
– Maha
Jan 6 '17 at 20:19
• I cannot access the google book, nor should I have to. Everything needed to answer your question has to be part of your question. Jan 6 '17 at 20:19
$$N = |P| + |P|^2 + \ldots + |P|^{cn} = O(|P|^{cn+1}).$$ |
http://www.physicspages.com/2012/01/03/laplaces-equation-fourier-series-examples-1/ | # Laplace’s equation – Fourier series examples 1
Required math: calculus
Required physics: electrostatics
Reference: Griffiths, David J. (2007) Introduction to Electrodynamics, 3rd Edition; Prentice Hall – Problems 3.12 – 3.13.
Here are a few examples of calculating the Fourier coefficients for some special cases.
Example 1. Consider the infinite slot problem with the boundary at ${x=0}$ consisting of a conducting strip with a constant potential of ${V_{0}}$. In this case we get
$\displaystyle c_{n}$ $\displaystyle =$ $\displaystyle \frac{2V_{0}}{a}\int_{0}^{a}\sin\frac{n\pi y}{a}dy\ \ \ \ \ (1)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{2V_{0}}{n\pi}\left(1-\cos n\pi\right) \ \ \ \ \ (2)$
The coefficients are thus zero for even ${n}$ and ${4V_{0}/n\pi}$ for odd ${n}$:
$\displaystyle c_{n}=\begin{cases} 0 & n\mathrm{\; even}\\ \frac{4V_{0}}{n\pi} & n\mathrm{\; odd} \end{cases} \ \ \ \ \ (3)$
The potential is thus
$\displaystyle V(x,y)=\frac{4V_{0}}{\pi}\sum_{n=1,3,5,\ldots}^{\infty}\frac{e^{-n\pi x/a}}{n}\sin\frac{n\pi y}{a} \ \ \ \ \ (4)$
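As a quick numerical illustration (my own sketch, not part of the original post), the exponential factor makes the series converge rapidly for any ${x>0}$, so a modest truncation already reproduces the potential:

```python
import numpy as np

def V(x, y, V0=1.0, a=1.0, n_terms=200):
    """Truncated series for the Example 1 slot potential (odd n only)."""
    n = np.arange(1, 2 * n_terms, 2)            # n = 1, 3, 5, ...
    return (4 * V0 / np.pi) * np.sum(np.exp(-n * np.pi * x / a) / n
                                     * np.sin(n * np.pi * y / a))

print(V(0.01, 0.5))   # just inside the slot: close to V0
print(V(2.0, 0.5))    # deep inside the slot: the potential decays toward zero
```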
Example 2. Now suppose the boundary at ${x=0}$ consists of two conducting strips, insulated from each other and from the infinite sheets. The first strip, from ${y=0}$ to ${y=a/2}$ has a constant potential ${V_{0}}$ while the other strip, from ${y=a/2}$ to ${y=a}$ is held at potential ${-V_{0}}$.
Here, the coefficients ${c_{n}}$ are given by
$\displaystyle c_{n}$ $\displaystyle =$ $\displaystyle \frac{2V_{0}}{a}\left[\int_{0}^{a/2}\sin\frac{n\pi y}{a}dy-\int_{a/2}^{a}\sin\frac{n\pi y}{a}dy\right]\ \ \ \ \ (5)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{2V_{0}}{n\pi}\left[1-2\cos\frac{n\pi}{2}+\cos n\pi\right] \ \ \ \ \ (6)$
If ${n}$ is odd, this comes out to zero. If ${n}$ is even, there are two cases. First, if ${n=2,6,10,\ldots}$ the term in brackets is 4. If ${n=4,8,12,\ldots}$ the term in brackets is zero. Thus we get
$\displaystyle c_{n}=\begin{cases} \begin{array}{c} 0\\ \frac{8V_{0}}{n\pi}\\ 0 \end{array} & \begin{array}{c} n\mathrm{\; odd}\\ n=2,6,10\ldots\\ n=4,8,12\ldots \end{array}\end{cases} \ \ \ \ \ (7)$
Thus the potential is
$\displaystyle V(x,y)$ $\displaystyle =$ $\displaystyle \frac{8V_{0}}{\pi}\sum_{n=2,6,10\ldots}^{\infty}\frac{e^{-n\pi x/a}}{n}\sin\frac{n\pi y}{a}\ \ \ \ \ (8)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{8V_{0}}{\pi}\sum_{n=0}^{\infty}\frac{e^{-(4n+2)\pi x/a}}{4n+2}\sin\frac{(4n+2)\pi y}{a} \ \ \ \ \ (9)$
where in the last line we’ve changed the index of summation since the non-zero terms in the first sum are just those with ${n=4m+2}$ starting at ${m=0}$.
Example 3. The infinite slot with the strip at ${x=0}$ held at potential ${V_{0}}$ has the solution
$\displaystyle V(x,y)=\frac{4V_{0}}{\pi}\sum_{n=1,3,5,\ldots}^{\infty}\frac{e^{-n\pi x/a}}{n}\sin\frac{n\pi y}{a} \ \ \ \ \ (10)$
For a conductor, the surface charge density can be found from the derivative taken normal to the surface:
$\displaystyle \sigma=-\epsilon_{0}\left.\frac{\partial V}{\partial n}\right|_{x=0} \ \ \ \ \ (11)$
In this case, the normal to the surface is the ${x}$ direction, so we get
$\displaystyle \sigma$ $\displaystyle =$ $\displaystyle -\epsilon_{0}\frac{4V_{0}}{\pi}\left.\sum_{n=1,3,5,\ldots}^{\infty}\left(-\frac{n\pi}{a}\right)\frac{e^{-n\pi x/a}}{n}\sin\frac{n\pi y}{a}\right|_{x=0}\ \ \ \ \ (12)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \epsilon_{0}\frac{4V_{0}}{a}\sum_{n=1,3,5,\ldots}^{\infty}\sin\frac{n\pi y}{a} \ \ \ \ \ (13)$
This looks fine except for the problem that the series doesn’t converge. Consider ${y=a/2}$. The series is then a sum of an alternating sequence of ${+1}$ and ${-1}$. The original series for the potential does converge at ${x=0}$ due to the ${n}$ in the denominator. Not sure what the solution to this is.
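One way to make sense of this (my own sketch, not part of the original post) is to keep ${x>0}$ so that the factor ${e^{-n\pi x/a}}$ regulates the sum, evaluate it in closed form, and only then let ${x\rightarrow 0}$. Writing ${r=e^{-\pi x/a}}$ and ${\theta=\pi y/a}$,

$\displaystyle \sum_{n\;\mathrm{odd}}r^{n}\sin n\theta=\mathrm{Im}\,\frac{re^{i\theta}}{1-r^{2}e^{2i\theta}}\rightarrow\frac{1}{2\sin\theta}\qquad\left(r\rightarrow1^{-}\right)$

so the regulated surface charge density tends to

$\displaystyle \sigma(y)=\frac{2\epsilon_{0}V_{0}}{a\sin(\pi y/a)}$

The divergent series is thus Abel summable, and the same result follows from differentiating the closed-form expression for ${V}$ mentioned in the comments below.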
## 5 thoughts on “Laplace’s equation – Fourier series examples 1”
1. iman
first of all, I wanted to express my gratitude.
It takes great generosity and a good heart to help expand knowledge amongst others. And physics and math are so fun! You take people to explore the joy of it.
secondly: can I help? I study physics and math too… and I also make some good problems and I wish to help you with the blog.
once again thank you.
Iman
P.S. about example 3: if we consider the slot really infinite then it should diverge.
you see, when we get close to a slot (or the slot is "infinite"), by Gauss's law we have E = sigma/2e0, and this is actually -dV/dz, where z is the perpendicular axis. So when we go away from the slot our V goes to zero, then negative, then negative infinity!!!
the problem arises from this >> we don't have an infinite slot! As we get away from it, V actually becomes a function of z we can't ignore!
infinite slot >> infinite charge >> infinite energy
so we can't have a really infinite slot.
and if we want to try and approximate it >> if it's infinite we have symmetry, so that V is only a function of z, and it is nearly: V0 - (sigma/2e0) z
2. Luc
But the first series you got for the voltage does converge. If you use the expression Griffiths gives you in the book (though I'm not sure how he managed to explicitly sum that) on page 337, you'll see that it can be written as a function of x and y. It looks something like: (2Vo/pi) * tan^-1[sin(pi y/a)/sinh(pi x/a)]
Then you take the derivative with respect to x of this expression and you have the surface charge density.
1. Jerry
Luc, the problem is actually how the author comes up with that solution so directly
Can somebody explain this?
3. qaz
I believe that your sum in example 3 does actually converge to the function csc(y/a). I’m not sure how to reconcile that with your point about the oscillating solution at y = a/2 but graphing the two together with a few hundred terms of the Fourier series is compelling. |
http://aimsciences.org/article/doi/10.3934/dcdsb.2015.20.2333 | American Institute of Mathematical Sciences
2015, 20(8): 2333-2360. doi: 10.3934/dcdsb.2015.20.2333
Classical converse theorems in Lyapunov's second method
1 School of Electrical Engineering and Computer Science, University of Newcastle, Callaghan, New South Wales 2308
Received August 2014 Revised March 2015 Published August 2015
Lyapunov's second or direct method is one of the most widely used techniques for investigating stability properties of dynamical systems. This technique makes use of an auxiliary function, called a Lyapunov function, to ascertain stability properties for a specific system without the need to generate system solutions. An important question is the converse or reversibility of Lyapunov's second method; i.e., given a specific stability property, does there exist an appropriate Lyapunov function? We survey some of the available answers to this question.
Citation: Christopher M. Kellett. Classical converse theorems in Lyapunov's second method. Discrete & Continuous Dynamical Systems - B, 2015, 20 (8) : 2333-2360. doi: 10.3934/dcdsb.2015.20.2333
http://math.stackexchange.com/questions/247646/limit-x-to-pi-2-without-lhospital?answertab=votes | Limit $x \to \pi/2$ without L'Hospital
I'm having trouble solving this limit without L'Hospital:
$$\lim_{x\to \pi/2} {\cos x\over x-\pi/2}$$
Thanks for any help. I have no idea how to expand it.
Let $t = \pi/2-x$. Note that as $x \to \pi/2$, we have $t \to 0$. Also, recall that $\cos(x) = \sin(\pi/2-x)$.
Hence, we get that $$\lim_{x \to \pi/2} \dfrac{\cos(x)}{x-\pi/2} = \lim_{x \to \pi/2} \dfrac{\sin(\pi/2-x)}{x-\pi/2} = \lim_{x \to \pi/2} \dfrac{\sin(\pi/2-x)}{-\left(\pi/2 -x\right)} =\lim_{t \to 0} \dfrac{\sin(t)}{-t} = -1$$
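(A quick numerical sanity check, added for illustration and not part of the original answer: since $\cos(\pi/2+h)=-\sin h$, the ratio equals $-\frac{\sin h}{h}$, which is $\approx -0.99833$ at $h=0.1$ and $\approx -0.9999998$ at $h=10^{-3}$, consistent with the limit $-1$.)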
Of course, this assumes OP is familiar with $\lim_{t\to0}(\sin t/t)$ – Gerry Myerson Nov 29 '12 at 23:04
@GerryMyerson True. But I guess it is a reasonable assumption to make. – user17762 Nov 29 '12 at 23:09
-1: You're just calculating the derivative in a very roundabout way. Far too complicated and devoid of any elegance. – commenter Nov 30 '12 at 0:47
@commenter I am interested in your easy and elegant way for computing the derivative of $\cos(x)$. – user17762 Nov 30 '12 at 1:45
@commenter "You're just calculating the derivative in a very roundabout way. Far too complicated and devoid of any elegance." I am asking you what is the direct way (not roundabout way) to compute it? – user17762 Nov 30 '12 at 2:31
Note that this limit is exactly the definition of the derivative of $\cos x$ at $x=\pi/2$. So even if you're not using L'Hospital's rule to reach $\cos'(\pi/2)$, evaluating that will be exactly the same.
Where the sine function in radians crosses the $x$-axis, its slope is always $1$ or $-1$. The shape of the graph of the cosine function is the same as that of the sine function; it's simply shifted horizontally. So where the cosine function crosses the axis at $\pi/2$, going downward, its slope is $-1$. The line $y=x-\pi/2$ also crosses at that same point, with a slope of $1$. Looking at that point under a microscope, the graph of the cosine function looks like a line crossing at that point with slope $-1$, i.e. it looks like $y=-(x-\pi/2)$. So it's as if you're looking at $\dfrac{-(x-\pi/2)}{x-\pi/2}=-1$.
If I didn't already know quite a lot about limits and derivatives, I wouldn't have been persuaded by that argument. – mrf Nov 29 '12 at 23:18
@mrf : I would think if you know enough about limits and derivatives to understand the question, then that's enough. – Michael Hardy Nov 29 '12 at 23:20
I disagree. The OP's question is a typical exercise in a first chapter about limits, before derivatives or "slopes" have been introduced. Many, if not most textbooks show that $\lim_{t\to0} \sin t/t = 1$ very early on. (Since the limit is typically used to compute the derivative of sine.) – mrf Nov 29 '12 at 23:28
Apparently I assumed if he's mentioning L'Hopital's rule then he knows that stuff. But I suppose if all that is simply what he passed on to us from his instructor, that's another matter. – Michael Hardy Nov 29 '12 at 23:45 |
https://jermwatt.github.io/machine_learning_refined/notes/9_Feature_engineer_select/9_5_PCA_sphereing.html | code
9.5 Feature Scaling via PCA-Sphering*
* The following is part of an early draft of the second edition of Machine Learning Refined. The published text (with revised material) is now available on Amazon as well as other major book retailers. Instructors may request an examination copy from Cambridge University Press.
In Section 9.3 we saw how feature scaling via standard normalization - i.e., subtracting off the mean of each input feature and dividing off its standard deviation - significantly improves the topology of a machine learning cost function, enabling much more rapid minimization via first order methods like the generic gradient descent algorithm. In this Section we describe how PCA is used to perform a more advanced form of standard normalization, commonly called PCA sphering (also referred to as whitening). With this improvement on standard normalization we use PCA to rotate the mean-centered dataset so that its largest orthogonal directions of variance align with the coordinate axes prior to scaling each input by its standard deviation. This typically allows us to better compactify the data, resulting in a cost function whose contours are even more 'circular' than those provided by standard normalization, and thus even easier to optimize.
## The PCA-sphering scheme
Using the same notation as Section 9.3, we denote by $\mathbf{x}_p$ the $p^{th}$ input of $N$ dimensions belonging to some dataset of $P$ points. By stacking these together column-wise we create our $N\times P$ data matrix $\mathbf{X}$. We then denote by $\frac{1}{P}\mathbf{X}\mathbf{X}^T + \lambda \mathbf{I}_{N\times N}$ the regularized covariance matrix of this data, and by $\frac{1}{P}\mathbf{X}^{\,} \mathbf{X}^T +\lambda \mathbf{I}_{N\times N}= \mathbf{V}^{\,}\mathbf{D}^{\,}\mathbf{V}^T$ its eigenvalue/eigenvector decomposition (see Section 8.5).
Now remember that when performing PCA we first mean-center our dataset (see Section 8.2) - that is, we subtract off the mean of each coordinate (note how this is the first step in the standard normalization scheme as well). We then aim to represent each of our mean-centered datapoints $\mathbf{x}_p$ by $\mathbf{w}_p = \mathbf{V}_{\,}^T\mathbf{x}_p^{\,}$. In the space spanned by the principal components we can represent the entire set of transformed mean-centered data as
$$\text{(PCA transformed data)}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\mathbf{W} = \mathbf{V}^T\mathbf{X}^{\,}.$$
With our data now rotated so that its largest orthogonal directions of variance align with the coordinate axes, to sphere the data we simply divide off the standard deviation along each coordinate of the PCA-transformed (mean-centered) data $\mathbf{W}$.
In other words, PCA sphering is simply the standard normalization scheme we have seen in the previous Section with a single step inserted in between mean centering and the dividing off of standard deviations: in between these two steps we rotate the data using PCA. By rotating the data prior to scaling we can typically shrink the space consumed by the data considerably more than standard normalization, while simultaneously making any associated cost function considerably easier to minimize properly.
In the Figure below we show a generic comparison of how standard normalization and PCA sphering affect a prototypical dataset, and its associated cost function. Because PCA sphering first rotates the data prior to scaling it typically results in more compact transformed data, and a transformed cost function with more 'circular' contours (which is easier to minimize via gradient descent).
More formally, if the standard normalization scheme applied to a single datapoint $\mathbf{x}_p$ can be written in two steps as
Standard normalization scheme:
1. (mean center) for each $n$ replace $x_{p,n} \longleftarrow \left({x_{p,n} - \mu_n}\right)$ where $\mu_n = \frac{1}{P}\sum_{p=1}^{P}x_{p,n}$
2. (divide off std) for each $n$ replace $x_{p,n} \longleftarrow \frac{x_{p,n}}{\sigma_n}$ where $\sigma_n = \sqrt{\frac{1}{P}\sum_{p=1}^{P}\left(x_{p,n}\right)^2}$
The PCA-sphering scheme can then be written in three closely related steps as follows (a minimal code sketch is given after the list):
PCA-sphering scheme:
1. (mean center) for each $n$ replace $x_{p,n} \longleftarrow \left({x_{p,n} - \mu_n}\right)$ where $\mu_n = \frac{1}{P}\sum_{p=1}^{P}x_{p,n}$
2. (PCA rotation) transform $\mathbf{w}_p = \mathbf{V}_{\,}^T\mathbf{x}_p^{\,}$ where $\mathbf{V}$ is the full set of eigenvectors of the regularized covariance matrix
3. (divide off std) for each $n$ replace $w_{p,n} \longleftarrow \frac{w_{p,n}}{\sigma_n}$ where $\sigma_n = \sqrt{\frac{1}{P}\sum_{p=1}^{P}\left(w_{p,n}\right)^2}$
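A minimal NumPy sketch of these three steps (the function name, the regularization value `lam`, and the toy data below are illustrative assumptions, not anything fixed by the text):

```python
import numpy as np

def pca_sphere(X, lam=1e-7):
    """PCA-sphere an N x P data matrix X (one datapoint per column)."""
    N, P = X.shape
    X_c = X - X.mean(axis=1, keepdims=True)          # step 1: mean center
    cov = X_c @ X_c.T / P + lam * np.eye(N)          # regularized covariance matrix
    d, V = np.linalg.eigh(cov)                       # eigenvalues d, eigenvectors V
    W = V.T @ X_c                                    # step 2: PCA rotation
    return W / W.std(axis=1, keepdims=True)          # step 3: divide off std (~ dividing by sqrt(d) for tiny lam)

# toy usage: two correlated inputs, P = 200 points
np.random.seed(0)
X = np.array([[3.0, 0.0], [1.0, 0.5]]) @ np.random.randn(2, 200)
S = pca_sphere(X)
```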
## The PCA-sphering scheme expressed more elegantly
Here we briefly describe how one can write the PCA-sphering scheme more elegantly by leveraging our understanding of eigenvalue/vector decompositions. This will result in precisely the same PCA-sphering scheme we have seen previously, only written in a prettier / more elegant way mathematically speaking. This also helps shed some light on the theoretical aspects of this normalization scheme. However, in wading through the mathematical details be sure not to lose the 'big picture' communicated previously: that PCA sphering is simply an extension of standard normalization where a PCA rotation is applied after mean centering and before dividing off standard deviations.
Now, if we so choose, we can express steps 2 and 3 of PCA sphering in a more mathematically elegant way using the eigenvalues of the regularized covariance matrix. The Rayleigh quotient definition of the $n^{th}$ eigenvalue $d_n$ of this matrix states that, numerically speaking,
$$d_n = \frac{1}{P}\mathbf{v}_n^T \mathbf{X}^{\,} \mathbf{X}^T \mathbf{v}_n^{\,}$$
where $\mathbf{v}_n$ is the corresponding $n^{th}$ eigenvector. Now in terms of our PCA-transformed data this is equivalently written as
$$d_n = \frac{1}{P}\left\Vert \mathbf{v}_n^T \mathbf{X} \right \Vert_2^2 = {\frac{1}{P}\sum_{p=1}^{P}\left(w_{p,n}\right)^2}$$
or in other words, it is the variance along the $n^{th}$ axis of the PCA-transformed data. Since the final step of PCA sphering has us divide off the standard deviation along each axis of the transformed data, we can then write it equivalently in terms of the eigenvalues as
3). (divide off std) for each $n$ replace $w_{p,n} \longleftarrow \frac{w_{p,n}}{d_n^{1/_2}}$ where $d_n^{1/_2}$ is the square root of the $n^{th}$ eigenvalue of the regularized covariance matrix
Denoting $\mathbf{D}^{-1/_2}$ as the diagonal matrix whose $n^{th}$ diagonal element is $\frac{1}{d_n^{1/_2}}$, we can then (after mean-centering the data) express steps 2 and 3 of the PCA-sphereing algorithm very nicely as
$$\text{(cleverly-written PCA-sphered data)}\,\,\,\,\,\,\,\,\, \mathbf{S}^{\,} = \mathbf{D}^{-^1/_2}\mathbf{W}^{\,} = \mathbf{D}^{-^1/_2}\mathbf{V}^T\mathbf{X}^{\,}.$$
While expressing PCA sphering in this way may seem largely cosmetic, notice that in an actual implementation it is indeed computationally advantageous to simply use the eigenvalues in step 3 of the method (instead of re-computing the standard deviations along each transformed input axis), since we compute them anyway in performing PCA in step 2.
Also notice that in writing the method this way we can see how - unlike the standard normalization scheme - in performing PCA sphering we do truly 'sphere' the data, in that
$$\frac{1}{P}\mathbf{S}^{\,}\mathbf{S}^T = \mathbf{I}_{N\times N}$$
which can be easily shown by simply plugging in the definition of $\mathbf{S}$ and simplifying. This implies that the contours of any cost function we have seen thus far tend to be highly spherical - e.g., those of a quadratic cost like Least Squares for linear regression become perfectly spherical - and thus much easier to optimize. |
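As a quick numerical check of this identity, one can build $\mathbf{S}$ directly from the eigendecomposition (again an illustrative sketch; the toy data and the small regularizer are assumptions):

```python
import numpy as np

np.random.seed(0)
X = np.array([[3.0, 0.0], [1.0, 0.5]]) @ np.random.randn(2, 200)   # toy N = 2, P = 200 dataset
X_c = X - X.mean(axis=1, keepdims=True)
P = X_c.shape[1]

cov = X_c @ X_c.T / P + 1e-7 * np.eye(2)     # regularized covariance matrix
d, V = np.linalg.eigh(cov)

S = np.diag(d ** -0.5) @ V.T @ X_c           # S = D^(-1/2) V^T X
print(np.round(S @ S.T / P, 3))              # approximately the 2 x 2 identity matrix
```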
http://www.ck12.org/tebook/Basic-Speller-Teacher-Materials/r1/section/7.2/
# 7.2: Review of Long and Short Vowel Patterns
Difficulty Level: At Grade Created by: CK-12
## Review of Long and Short Vowel Patterns
1. In each of the following words one of the vowels is marked ‘v’. You are to mark the two letters after that vowel either ‘v’ or ‘c’. If you get to the end of the word before you have marked two more letters, use the tic-tac-toe sign to mark the end of the word. Any cases of vv# should be marked Ve#, as we have done with agree. In words that end VC#, mark the letter in front of the ‘v’ either ‘v’ or ‘c’:
agree: ve#, subdue: ve#, extreme: vcv, forgot: cvc#, stubborn: vcc
chapter: vcc, broken: vcv, hug: cvc#, equip: cvc#, canoe: ve#
dispute: vcv, race: vcv, combat: cvc#, whisper: vcc, aspirin: vcc
student: vcv, vacation: vcv, tiptoe: ve#, permit: cvc#, symptom: vcc
2. Now sort the words into this matrix. This matrix has eight squares rather than the regular four, but don't let that bother you. It works just like the smaller ones:
| Words with ... | VCC | CVC# | VCV | Ve# |
|---|---|---|---|---|
| short first vowels in the pattern | chapter, whisper, stubborn, aspirin, symptom | hug, combat, forgot, equip, permit | | |
| long first vowels in the pattern | | | dispute, student, broken, race, vacation, extreme | agree, subdue, tiptoe, canoe |
3. In the patterns VCC and CVC# the vowel will usually be short, and in the patterns VCV and Ve# the first vowel will usually be long.
Word Squares. Fit these ten words into the Squares. To help you, we have marked the VCV, VCC, VC#, and Ve# strings in each of the ten words:
agree, assistant, dispute, evening, correct, striking, success, continue, submit, die
https://greprepclub.com/forum/it-can-be-inferred-from-the-graphs-that-in-1977-the-populati-10587.html
# It can be inferred from the graphs that in 1977 the populati
It can be inferred from the graphs that in 1977 the populati [#permalink] 26 Aug 2018, 03:25
Expert's post
Question Stats:
65% (01:16) correct 34% (01:10) wrong based on 35 sessions
Attachment: health.jpg (the health-expenditure graphs referenced in the question and solution below)
It can be inferred from the graphs that in 1977 the population of Country X, in millions, was closest to which of the following?
(A) 120
(B) 150
(C) 190
(D) 240
(E) 250
Re: It can be inferred from the graphs that in 1977 the populati [#permalink] 26 Aug 2018, 06:37
In 1977, total health expenditure was approximately $$47$$ billion.
From the per-capita graph we know that per capita health expenditure in 1977 was approximately $$240$$.
Hence from this information we can form an equation for the population in 1977: $$\frac{47 \times 10^{9}}{x} = 240$$
Solving, $x \approx 195{,}833{,}333$, i.e. roughly 196 million, so the closest answer choice is (C) 190.
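A quick check of that arithmetic:

```python
total_spend = 47e9   # total health expenditure in 1977, in dollars (read off the graph)
per_capita = 240     # per capita health expenditure in 1977, in dollars (read off the graph)
print(total_spend / per_capita / 1e6)   # ~195.8 (million people), closest to choice (C) 190
```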
http://tex.stackexchange.com/questions/42554/datatool-with-longtable?answertab=active | # datatool with longtable
I have a CSV file that has many rows. I want to typeset these rows into a table using longtable and datatool package.
1. However, just before the "lastfoot" (which says concluded) I am getting an empty row which I do not want. How do I remove that extra row?
2. The columns are not equally spaced under the dates. Somebody please help me to make them equally spaced.
I am giving the sample code. The name of my CSV file is namelist.csv and it looks like this.
number,degree,Name
1,Dr,Abdul Ali
2,Mrs,Francesca Joestar
3,Mr,Chan Ker Mei
4,Dr,Hikaru Yagami
5,Dr,Harish Kumar
My LaTeX code is below.
\documentclass[12pt]{article}
\usepackage{setspace}
\usepackage{bookman}
\usepackage{longtable}
\usepackage{pdflscape}
\usepackage[a4paper,left=1in,right=0.8in,top=0.5in,bottom=0.5in]{geometry}
\usepackage{tgschola}
\usepackage{microtype}
\usepackage{datatool}
%==================================================================
\begin{document}
%
\begin{landscape}
%
\doublespacing
%
\begin{longtable}{|l|l|c|c|r|r|r|c|c|c|c|c|c|c|c|c|c|c|}\hline
%
No.&\multicolumn{1}{c|}{Name}& \multicolumn{2}{c|}{13.12.2010}&
\multicolumn{2}{c|}{14.12.2010}&\multicolumn{2}{c|}{15.12.2010}&
\multicolumn{2}{c|}{16.12.2010}
&\multicolumn{2}{c|}{17.12.2010}&\multicolumn{2}{c|}{18.12.2010}
&\multicolumn{2}{c|}{19.12.2010} &\multicolumn{2}{c|}{20.12.2010}\\ \hline\hline
\endfirsthead % first-page header ends here
%
\multicolumn{18}{c}%
{{\bfseries Continued from previous page}} \\
\hline
%
No.&\multicolumn{1}{c|}{Name}& \multicolumn{2}{c|}{13.12.2010}&
\multicolumn{2}{c|}{14.12.2010}&\multicolumn{2}{c|}{15.12.2010}&
\multicolumn{2}{c|}{16.12.2010}
&\multicolumn{2}{c|}{17.12.2010}&\multicolumn{2}{c|}{18.12.2010}
&\multicolumn{2}{c|}{19.12.2010} &\multicolumn{2}{c|}{20.12.2010}\\ \hline\hline
\endhead % running header for later pages ends here
%
\hline \multicolumn{18}{|r|}{{Continued on next page}} \\ \hline
\endfoot
%
\hline
\multicolumn{18}{|r|}{{Concluded}} \\ \hline
\endlastfoot
%
\DTLforeach{names}{
\no=number, \dg=degree, \name=Name}{
\no& \dg. \name & & & & & & & &
& & & & & & & & \\ \hline
}
\end{longtable}%
%
\end{landscape}
%
\end{document}
I've solved your first problem, but I don't understand what you mean by the columns not being equally spaced (your second problem). Do you want the vertical lines in the date columns to be exactly halfway in each date column? – Alan Munn Jan 29 '12 at 4:42
Dear Alan Munn, Yes. That is what I want. Please give it a try. – Harish Kumar Jan 29 '12 at 6:56
Dear Alan Thank you for helping out the first problem. It works. In second one I want the vertical lines in the date columns to be exactly halfway in each date column. Please help. – Harish Kumar Jan 29 '12 at 7:03
@Dr.Harishkumar I've updated my answer. P.S. if you prefix comments with @ (e.g. @Alan) the person you are addressing the comment to will be notified. – Alan Munn Jan 29 '12 at 14:56
@AlanMunn, Thank you. I did not know this. I will follow. – Harish Kumar Jan 31 '12 at 0:59
When using \DTLforeach with \hline it's generally better to use the following schematic structure which puts the \\ and the \hline at the beginning of each non-initial row instead of at the end of each row.
\DTLforeach{<database>}{<assignments>}{%
\DTLiffirstrow{}{\\\hline}
rest of table with no final \\
}
For your second problem, there may be different solutions, but the simplest may be to use a fixed column width. If the cells will always be blank, just a p{2em} column would be fine; if you will subsequently be inserting centred text in the cells, then you need a slightly more complicated version, which is what I've used in the example.
Using the array package, I've defined a C columnn:
\newcolumntype{C}{>{\centering\arraybackslash}p{2em}}
and used this as the basis for the main c columns.
\documentclass[12pt]{article}
\usepackage{setspace}
\usepackage{longtable,array}
\usepackage{pdflscape}
\usepackage[a4paper,left=1in,right=0.8in,top=0.5in,bottom=0.5in]{geometry}
\usepackage{datatool}
\newcolumntype{C}{>{\centering\arraybackslash}p{2em}} % fixed-width centred column used in the table preamble below
%==================================================================
\begin{document}
%
\begin{landscape}
%
\doublespacing
%
\begin{longtable}{|r|l|C|C|C|C|C|C|C|C|C|C|C|C|C|C|C|C|} % THIS COMMAND CHANGED
\hline
%
No.&\multicolumn{1}{c|}{Name}& \multicolumn{2}{c|}{13.12.2010}&
\multicolumn{2}{c|}{14.12.2010}&\multicolumn{2}{c|}{15.12.2010}&
\multicolumn{2}{c|}{16.12.2010}
&\multicolumn{2}{c|}{17.12.2010}&\multicolumn{2}{c|}{18.12.2010}
&\multicolumn{2}{c|}{19.12.2010} &\multicolumn{2}{c|}{20.12.2010}\\ \hline\hline
\endfirsthead % first-page header ends here
%
\multicolumn{18}{c}%
{{\bfseries Continued from previous page}} \\
\hline
%
No.&\multicolumn{1}{c|}{Name}& \multicolumn{2}{c|}{13.12.2010}&
\multicolumn{2}{c|}{14.12.2010}&\multicolumn{2}{c|}{15.12.2010}&
\multicolumn{2}{c|}{16.12.2010}
&\multicolumn{2}{c|}{17.12.2010}&\multicolumn{2}{c|}{18.12.2010}
&\multicolumn{2}{c|}{19.12.2010} &\multicolumn{2}{c|}{20.12.2010}\\ \hline\hline
\endhead % running header for later pages ends here
%
\hline \multicolumn{18}{|r|}{{Continued on next page}} \\ \hline
\endfoot
%
\hline
\multicolumn{18}{|r|}{{Concluded}} \\ \hline
\endlastfoot
%
\DTLforeach{names}{%
\no=number, \dg=degree, \name=Name}{%
\DTLiffirstrow{}{\\\hline} % THIS LINE HAS BEEN ADDED
\no& \dg. \name & & & & & & & &
& & & & & & & & % THIS LINE HAS BEEN CHANGED
}
\end{longtable}%
%
\end{landscape}
%
\end{document}
A blank line in LT is most likely from a spurious \\ somewhere. From your description it isn't clear if it is in the foot as an empty first row, or in the body of the table as an empty last row, these can look the same in the output, but of course are completely separate in the source.
Dear sir I have included the sample code now. I could not find any spurious \\ anywhere. Please help me. – Harish Kumar Jan 29 '12 at 1:51 |
https://pwilliamclarke.com/current-i-ekfyv/usb-c-to-lightning-cable-plug-2bdbda
Use the calculator now (This is based on the ISO 2859-1 standard, normal severity, single sampling plans only.) N Sample size Z Constant for confidence level (like 1.645 for 90% confidence, 1.96 for 95% confidence, 2.575 99% confidence) Sampl .• 9 e Size 5. On Table 2 locate the code letter J and, and you can see your sample size is 80, which indicates the number of pieces required to be inspected. A factory has a bin size of 9000. The suggested minimum sample sizes are consistent with sample sizes provided in tables A.1 and A.2 of appendix A in the AICPA Audit Guide Audit Sampling. You can also email me directly at [email protected] and find me on LinkedIn. In order to use the s-chart along with the x-bar chart, the sample size n must be greater than than 10. You can already simulate using our sampling calculation tool below: enter your shipment lot quantity, and select sampling levels and AQL to see the impact on inspected quantities and accepted defects. A Control Chart is also known as the Shewhart chart since it was introduced by Walter A Shewhart. Hands-on real-world examples, research, tutorials, and cutting-edge techniques delivered Monday to Thursday. For discussion of Fannie Mae’s new 6 (or 3) month statement standard, please see this post . They help visualize variation, find and correct problems when they occur, predict expected ranges of outcomes and analyze patterns of process variation from special or common causes. statistical quality control. On the s-chart, the y-axis shows the sample standard deviation, the standard deviation overall mean and the control limits, while the x-axis shows the sample group. In this paper, we focus on sample size calculations for RCTs, but also for studies with another design such as case-control or cohort studies, sample size calculations are sometimes required. If you found this article useful, feel welcome to download my personal code on GitHub. Note 2 to entry: Although individual lots with quality as bad as the acceptance quality limit may be accepted with fairly high probability, the designation of an acceptance quality limit does not suggest that this is a desirable quality level. C Control Chart is used when there is more than one defect and the sample size is fixed. Please contact. The manufacturer should therefore pull 315 units from the batch for quality … Quality control charts are often used in Lean Six Sigma projects and DMAIC projects under the control phase and are considered as one of the seven basic quality tools for process improvement. As a rule of thumb, this is the inspection level that you should use most of the time. The Chapter discusses methods to calculate the exception rate for monetary and nonmonetary compliance attributes; noting that similar to the control sample size table, the compliance sample size table is based on an expectation of zero exceptions. This helps in quicker and reasonable decision-making. In this example, the sample size code letter is M, which table 2-A shows corresponds to a sample size of 315. I created my own YouTube algorithm (to stop me wasting time), 5 Reasons You Don’t Need to Learn Machine Learning, 7 Things I Learned during My First Big Project as an ML Engineer, Ridgeline Plots: The Perfect Way to Visualize Data Distributions with Python. Incoming Quality Control Principle. 
To put this in perspective let’s assume the product is a medical … The table shown on the right can be used in a two-sample t-test to estimate the sample sizes of an experimental group and a control group that are of equal size, that is, the total number of individuals in the trial is twice that of the number given, and the desired significance level is 0.05. • You may have noticed that the second column on Table 1 features four so-called ‘Special Inspection Levels’. The table shown on the right can be used in a two-sample t-test to estimate the sample sizes of an experimental group and a control group that are of equal size, that is, the total number of individuals in the trial is twice that of the number given, and the desired significance level is 0.05. Our trained account managers will be available to ensure you choose the inspection level and AQL values that best suit your needs. The table of control chart constants shown below are approximate values used in calculating control limits for the X-bar chart based on rational subgroup size.Subgroups falling outside the control limits should be removed from the calculations to remove their statistical bias. Protocol Quality Assurance Plan Sample is a Free easy to use, user-friendly Word Template which ensures that everything moves in the right direction. Once you have created a x-bar chart, you will only need to add the following lines of code to generate the s-chart. Example : If N=100, then the corrected sample size would be =18600/285 (=65.26 or 66) And the standard level, the one used by default and by 98% of buyers is the General level G-II for a standard AQL inspection. In one application involving the operation of a drier, samples of the output were taken at periodic intervals; the average value for each sample was computed and recorded on a chart called an x¯ chart. The Protocol Quality Assurance Plan template considers all anchors of the organization including its machinery, workers, suppliers, and distributors, and points out their strengths and weaknesses. The sample size to be used is given by the new sample size code letter, not by the original letter. This AQL sampling plan is designed to help in determining the right sample size for inspection and the acceptable number of defects. [adsense:block:AdSense1] More about acceptance sampling plans. While U Control Chart is used for more than one defect and if the sample size … Enter the table to find sample size code letter. For any services. 3. To answer your question in general, you can apply the sample size calculation using the discrete data method. It wants to check on its product bin by bin before foiling, using ISO 2859-1 and the same inspection levels as in ISO 4074. As defined below, confidence level, confidence interva… The following table shows how to determine sample size using attribute sampling. Table 1 – Sample Size Code Letters. For the following example, we will be focusing on quality control charts for continuous data for when the sample size is greater than 10. x-bar chart The x-bar and s-chart are quality control charts used to monitor the mean and variation of a process based on samples taken in a given time. We’ll talk more about the other levels later. customer requirements, engineering tolerances or other specifications). Quality control charts represent a great tool for engineers to monitor if a process is under statistical control. A factory has a bin size of 9000. 
Once again, I invite you to continue discovering the amazing stuff you can perform using R as an industrial engineer. In one application involving the operation of a drier, samples of the output were taken at periodic intervals; the average value for each sample was computed and recorded on a chart called an x¯ chart. Sample size is a statistical concept that involves determining the number of observations or replicates (the repetition of an experimental condition used to estimate variability of a phenomenon) that should be included in a statistical sample. Figure 1.3: Sample Size versus Total Cost with Cs = $5 and Ct =$1000 Figure 1.4: Three-Dimensional Projection of Resultant Sample Size as a function of varying per sample and per trait costs Figure 2.1: Histogram of Company Shortfall Amounts Figure 2.2: Bootstrapping Steps Figure 2.3: Bootstrapping Program Input Worksheet Figure 2.4: Input Worksheet – John Hancock Data . Although the sample sizes are consistent with statistically-based tables, the sample sizes provided in the Chapter can be used for either statistical or nonstatistical sampling. As an example, one way of sampling is to use a so-called “Random Sample,” where respondents are chosen entirely by chance from the population at large. The control chart factors can be found on Table 3 of the Excel spreadsheet downloaded from the Quality Digest Web site. user is warned of the assumed risks relative to the chosen sample size and AQL. To comply with AQL 2.5, no more than 10 units from that sample size may fail inspection. Make learning your daily ritual. This level will increase the sample size that the inspector will examine, making the results more accurate. statistical quality control. • This site is protected by copyright and trademark laws under US and international law. Tell me more about control … When using the "1 out of:" and "2 out of:" columns, it does not mean no more than that number of Quality System Regulation violations per the appropriate sample size is acceptable. Your “General Inspection Level” is level II (also underlined). ... standard deviations is shown in the following table. sample size when expected misstatement is zero or where the expected taint of any misstatement found is assumed to be a 100 percent taint (a conservative planning assumption). The need for an objective measurement of quality. chart we will use A 2 = 0.577 from Table 3 for a subgroup sample size of n=5. Your inspection report will clearly state whether your production has passed or failed your selected Acceptable Quality Tolerance level. Please contact. UCL , LCL (Upper and Lower Control Limit) When we plot the 25 sample subgroup averages on this chart, the plot does not reveal any out-of-control conditions. Once you have done so, add the last line of code below to generate the process capability summary chart. Thus 186 sample size arrived at ,should be corrected /adjusted for finite population. The standard offers three general and four special levels. In this case the number of defects has to be zero for a population or lot to be accepted as a quality lot. This application gives the single and double sampling plans for attributes, according to the Military Standard 105E (ANSI/ASQ Z1.4, BS6001, DIN40.080, NFX06-022, UN148-42, KS A 3109) tables, for a given lot size and AQL. 
The acceptable quality limit (AQL) sample is calculated automatically with our AQL Calculator to find out if the entire product order has met the client's specifications.GIM uses International Acceptable Quality Limit (A.Q.L) Standards for inspections to accept or reject a product order. Based on the sampling data, the customer can make an informed decision to accept or reject the lot. Don’t Start With Machine Learning. The standard AQL sampling plan used by 98% of the people is the level II for a normal product inspections. There is an easy-to-use c=0 chart that defines the number of samples that should be taken to statistically determine if a specific population size meets a predefined quality level. Once you have generated the x-bar and R-charts using R, you will only have to add the following lines of code specifying the lower control limit, upper control limit and the target. baseline and endline surveys). The Global Harmonization Task Force (GHTF) defines process validation as a term used in the medical device industry to indicate that a proce… It wants to check on its product bin by bin before foiling, using ISO 2859-1 and the same inspection levels as in ISO 4074. If an AQL of 1 % nonconforming items is designated for inspection of the series of lots, Table 6-A indicates that the minimum sample size shall be given by sample size code letter L. In … We use the internationally recognized Acceptable Quality Limit (AQL) standard for all product inspections. For a one-sided test at significance level $$\alpha$$, look under the value of 2$$\alpha$$ in column 1. This method is most appropriate for the Performance Qualification phase of process validation. Westport, Conn., Redgrave Information Resources Corp. [1973] (OCoLC)608598130: Document Type: Book: All Authors / Contributors: Herman Burstein. It is an important aspect of any empirical study requiring that inferences be made about a population based on a sample. The release batch size is 24 bins. The formula does not cover finite population. Engineers must take a special look at these points in order to identify and assign causes attributed to changes in the system that led the process to be out-of-control. where nj is the sample size (number of units) of group j. We have gone through one of the many industrial engineering applications that R and the qcc package have to offer. See Sample size: A rough guide for other tables that can be used in these cases. The standard offers three general and four special levels. Using tables or software to set sample size. Figure 1 (opposite) contains a sample size lookup table for samples selected using simple random sampling, the most frequently used method in the Office. The article “Sample Examples – The Calculator in Action” provides guidance for using the Calculator below in various scenarios. Fill in the calculator to easily find your perfect sample size (the number of pieces that should be randomly checked from the lot being examined) and the number of major and minor defects that can be tolerated based on your AQL specifications. For an explanation of why the sample estimate is normally distributed, study the Central Limit Theorem. Control Chart Constants. control versus intervention) or two points in time (e.g. If the population is N, then the corrected sample size should be = (186N)/( N+185). 
This document, published by the International Organization for Standardization (ISO), is an international standard with equivalents in all national regulations (ANSI/ASQC Z1.4, NF06-022, BS 6001, DIN 40080). Determine the lot size. For discussion of Fannie Mae’s new 6 (or 3) month statement standard, please see this post.For more information about Cogent’s sampling methodology, please see the white papers on our Resources page. Let’s take a look at the R code using the qcc package to generate a x-bar chart. Sample sizes may be evaluated by the quality of the resulting estimates. By contacting QIMA you agree to our privacy policy, Thank you - your inquiry has been sent.We will come back to you shortly. The FDA defines Process Validation as a means established by objective evidence, a process that consistently produces a result, or product meeting its predetermined specifications. For example, the power determination for sample sizes of 12–20 are displayed in Table 6. Important within regulated and non-regulated industries product risk and non-regulated industries letter, not by new. The ISO 2859-1 standard, please see this post all product inspections Central... Beyond AQL the grand mean and the sample size for the Performance Qualification phase of process is... To answer your question in general, you will only need to add the last of... An acceptable quality level ( AQL ) must be selected based upon product risk calculate the defect rate, defect..• 9 e size 5 units from that sample size table 2 below shows grand! Be used on measurements which are normally distributed, study the Central Limit Theorem you only... Survey receives is your sample size may fail inspection quality control sample size table an informed decision to accept or reject the lot and. A-1–A-4 A.11 the tables were computed using the binomial distribution and as-sume a large population s-chart along the. Plan is designed to help in determining the right quality control population or lot to be for... Every time interval, and reduced plans to be used is given by the original.... Need to add the following table shows how to determine quality are displayed in table 6 the ANSI ASQ table! We usually read control chart is used when there is more than one defect the! In one pen the sample size code letter of time to expand testing and/or report findings it quality control sample size table process chart... Z1.9 are essentially the same as for MIL-STD-414 level that you should use most of the Z1.9 standard other )... Professionals for AQL sampling plan is designed to help in determining the right quality control were in processing. Tables were computed using the Calculator now ( this is based on the AQL protected by copyright and trademark under... To generate a x-bar chart, the y-axis shows the minimum original '' sample sizes of are... Fannie Mae ’ s policy supplier audits, laboratory testing or certification?. Than than 10 case the number of samples that must be selected based upon product risk laboratory testing or services... We have gone through one of the maximum acceptable number of completed your! Single sampling plans only. analytics, data science and machine learning applications in the use of assumed... Used by 98 % of the sample quality over time and detect any unusual behavior the inherent process of..., research, tutorials, and describe the difference between normal and tightened in. Then the corrected sample size that the second method to determine the right direction the original letter client level. 
Deviations is shown in the standard AQL sampling plan used by QC professionals for AQL sampling used! To add the last line of code below to generate the s-chart defines! Calculator below in various scenarios been retained and leads the user through application of the earliest applications! The level II normal inspection and QIMA will provide results as within or beyond AQL averages this. Of items in the sample size arrived at, should be selected based upon risk... 3.2 sample size that the second method to determine the sample size for inspection and will. |
https://worldbuilding.stackexchange.com/questions/175457/superhuman-extreme-temperatures-pt-2 | # Superhuman: Extreme Temperatures Pt.2
I have decided I will add my whole question… I'm new, so give me a bit of grace…
I am creating a superhuman and I am in need of some helpful tips and answers. To get straight to the point I need to know what types of modifications could be made to the human nervous, integumentary, circulatory, and hormonal systems to allow the production of extreme amounts of heat. This extreme amount of temperature is induced through her emotional state.
She can fluctuate her body heat to inhuman levels. Her highest temperature is at 3,000 ℉. She does not stay at this heat at all, this is just her highest possible temperature. She is not immune to overheating but can stay at 3,000 ℉ for only a minute and temperatures higher than 550 ℉ are still a stretch for her.
That’s the first ability, the second is that she secretes three different types of sweat. From her enlarged and retractable exocrine glands, she secretes a potassium nitrate-like sweat. Her sudoriferous glands produce a very sugary and starchy sweat. Finally, the last one is just normal sweat production.
Her skin is made up out of multiple colorless elements. These consist of charcoal, sulfur, aluminum, iron, steel, zinc, and magnesium. They are produced like skin cells are. They only become visible when detonated or moisture is applied and dried.
Finally, she has a multicolored “chalk” substance which is what gives her explosions color. It cannot be detonated but can be set ablaze. The colors are not visible until there is a sudden action. Say someone bumped into her and as a result, a cloud of “chalk” would billow off. The chalk changes colors… also linked to the emotional state but this is completely unrealistic and mostly for making it a showy power.
The gist of this, is I’m making a human firework...
My first idea was to increase the size of the heart and add two more chambers to store and heat blood. Maybe her heart could beat much faster than normal situations. I need to know how much adrenalin and hormones could also be produced. As a cooling system, I was thinking I would implement some sort of hormone much like ones the thyroid produces. I would probably create this hormone. Another thing would have to be modifications to the size of the sweat glands.
I am ignoring certain facts like a human body would incinerate under 3,000 ℉ and that potassium nitrate and the skin elements are impossible for the body to make. Her heart would also explode… and for now, I am ignoring fuel intake because that factor is unrealistic. I don’t need any modifications for the entire body to sustain such heat, I need a way for the body to reach said temperature.
The visibility and realistic qualities of the “chalk” and elements I am also choosing to ignore. I do not need modifications for the human body to produce said elements but if there are any suggestions I am open. Substitutions for the elements and chalk are welcome.
This is a superhuman that has her powers passed down genetically, not given through extraordinary circumstances. Like MetaAbilities, mutants, or quirks.
This is an entirely impossible design but I would still like to implement some scientific reasoning to my design.
Thank you
- Whimsy
Okay, putting aside the inevitability of a human steak, here we go:
The concept: To heat her up rapidly, she could burn oil stores in her body, then transfer the energy into her blood. Oil has the benefit of being made from biological materials, and with the advent of biodiesel and specialized bacteria, the oil could conceivably be created within her body in a specialized organ.
The numbers: A liter of crude oil contains about 38,500 kJ. To heat her body up to 3000℉ she would have to store a whopping 21 liters of it (which would weigh approximately 45 pounds). To get up to 550℉, she only needs to store around 3.5 liters.
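A rough back-of-the-envelope check of those figures, with the body mass, tissue heat capacity and conversion efficiency all taken as explicit assumptions (different choices move the litre count around quite a bit):

```python
# all inputs below are assumptions for illustration, not canon for the character
body_mass_kg = 70.0                     # assumed body mass
heat_capacity_kj_per_kg_k = 3.5         # roughly human tissue
start_c = 37.0                          # normal body temperature, deg C
target_c = (3000.0 - 32.0) * 5.0 / 9.0  # 3000 F is about 1649 C

energy_kj = body_mass_kg * heat_capacity_kj_per_kg_k * (target_c - start_c)
litres = energy_kj / 38500.0            # 38,500 kJ per litre, the figure quoted above

print(f"~{energy_kj / 1000:.0f} MJ needed, ~{litres:.1f} litres at 100% efficiency")
# ~395 MJ and ~10 litres here; at roughly 50% conversion efficiency this
# lands near the ~21 litres quoted in the answer
```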
The act: She could have a combustion chamber somewhere in her chest that has "veins" that are spread through her body and terminate at her pores. These veins would carry air into and out of the chamber and prevent her from exploding from the expansion of the liquid into a gas(at least hopefully it does). Another vein would pump fuel into the chamber.
A final note: The fuel doesn't have to be oil, it's just the easiest I could think of. More efficient fuel like methane and hydrogen would have to have dedicated coolant systems, or else she would pop as the liquids boil inside her.
Hope that helps.
• Incredible! I really like the idea of the combustion chamber. I hadn't even considered that! Using some sort of oil is a brilliant idea. That was my second choice in the early development stages but turned it down because of some other complications in her early design. I have to say oil is definitely the best solution. I will see what others have to say on this matter but so far yours has definitely solved the heating dilemma. – user75444 May 2 '20 at 1:52
• Humans are great at storing oily energy reserves! But she would need to keep clothes in lots of different sizes. – Willk May 2 '20 at 1:52 |
http://library.cirm-math.fr/listRecord.htm?list=link&xRecord=19253217146910714999
Documents 47B35 | records found: 6
Post-edited Hankel and composition operators on spaces of Dirichlet series Seip, Kristian (Speaker) | CIRM (Publisher)
I will give a survey of the operator theory that is currently evolving on Hardy spaces of Dirichlet series. We will consider recent results about multiplicative Hankel operators as introduced and studied by Helson and developments building on the Gordon-Hedenmalm theorem on bounded composition operators on the $H^2$ space of Dirichlet series.
Multi angle Truncated Toeplitz operators Câmara, Cristina (Speaker) | CIRM (Publisher)
Toeplitz matrices and operators constitute one of the most important and widely studied classes of non-self-adjoint operators. In this talk we consider truncated Toeplitz operators, a natural generalisation of finite Toeplitz matrices. They appear in various contexts, such as the study of finite interval convolution equations, signal processing, control theory, diffraction problems, hydrodynamics, elasticity, and they play a fundamental role in the study of complex symmetric operators. We will focus mainly on their invertibility and Fredholmness properties, showing in particular that they are equivalent after extension to block Toeplitz operators, and how this can be used to study the spectra of several classes of truncated Toeplitz operators.
47B35
Multi angle The invariant subspace problem: a concrete operator theory approach Gallardo-Gutiérrez, Eva (Speaker) | CIRM (Publisher)
The Invariant Subspace Problem for (separable) Hilbert spaces is a long-standing open question that traces back to John von Neumann's works in the fifties asking, in particular, if every bounded linear operator acting on an infinite dimensional separable Hilbert space has a non-trivial closed invariant subspace. Whereas there are well-known classes of bounded linear operators on Hilbert spaces that are known to have non-trivial, closed invariant subspaces (normal operators, compact operators, polynomially compact operators,...), the question of characterizing the lattice of the invariant subspaces of just a particular bounded linear operator is known to be extremely difficult and indeed, it may solve the Invariant Subspace Problem.
In this talk, we will focus on those concrete operators that may solve the Invariant Subspace Problem, presenting some of their main properties, exhibiting old and new examples and recent results about them obtained in collaboration with Prof. Carl Cowen (Indiana University-Purdue University).
Multi angle Rational approximation of functions with logarithmic singularities Pushnitski, Alexander (Speaker) | CIRM (Publisher)
I will report on the results of my recent work with Dmitri Yafaev (Rennes I). We consider functions $\omega$ on the unit circle with a finite number of logarithmic singularities. We study the approximation of $\omega$ by rational functions in the BMO norm. We find the leading term of the asymptotics of the distance in the BMO norm between $\omega$ and the set of rational functions of degree $n$ as $n$ goes to infinity. Our approach relies on the Adamyan-Arov-Krein theorem and on the study of the asymptotic behaviour of singular values of Hankel operators. In particular, we make use of the localisation principle, which allows us to combine the contributions of several singularities in one asymptotic formula.
Multi angle Products of Toeplitz operators on the Fock space Zhu, Kehe (Speaker) | CIRM (Publisher)
Let $f$ and $g$ be functions, not identically zero, in the Fock space $F^2$ of $C^n$. We show that the product $T_fT_\bar{g}$ of Toeplitz operators on $F^2$ is bounded if and only if $f= e^p$ and $g= ce^{-p}$, where $c$ is a nonzero constant and $p$ is a linear polynomial.
Multi angle Analytic continuation of Toeplitz operators Englis, Miroslav (Speaker) | CIRM (Publisher)
Generalizing results of Rossi and Vergne for the holomorphic discrete series on symmetric domains, on the one hand, and of Chailuek and Hall for Toeplitz operators on the ball, on the other hand, we establish existence of analytic continuation of weighted Bergman spaces, in the weight (Wallach) parameter, as well as of the associated Toeplitz operators (with sufficiently nice symbols), on any smoothly bounded strictly pseudoconvex domain. Still further extension to Sobolev spaces of holomorphic functions is likewise treated.
https://math.meta.stackexchange.com/questions/linked/1868?sort=hot&page=4 | 66 questions linked to/from List of Generalizations of Common Questions
372 views
### questions asked over 100 times
I just saw another question with the limit of $(1 + 1/n)^n.$ A few months ago, somebody asked some other multiple repeat question, I wrote to check any of the last 137 times it had been asked. he ...
418 views
### Generalization case vs Particular case
Could the - generalization of a problem - post be a good enough reason in order to vote for closing a question? I agree that in some cases is proving to be useful and the effort is significantly ...
464 views
### How to quickly find duplicate questions which are not exact duplicates.
Some moderators are able to find duplicate posts which are not exact duplicates. That is to say even in the case where the question is posed completely differently but the solution uses similar ...
365 views
I am aware of the questions List of Generalizations of Common Questions and Coping with abstract duplicate questions, but I would like to make a suggestion. On Computer Science meta, there is a ...
348 views
### Reference Textbooks at one place
Can anybody recommend me a topology textbook? Introductory book on Topology Best book for topology? Choosing a text for a First Course in Topology All these are multiple ...
97 views
### Should we have quick links? If yes, then how?
These are very useful. Can we have quick links of them while answering or on sidebar of questions under some link?: Catalog of limits, a, MathJax basic tutorial and quick reference, List of comment ...
338 views
### Questions that show no effort to find existing similar questions a.k.a. abstract duplicates
EDIT: After amWhy brought the term "abstract duplicates" to my attention in the comments below, I thought some tidying-up was in order. The original body of the question is mostly intact below; I have ...
133 views
### How to deal with "how I compute this density?" questions
This is not about solving homework questions per-se. Several questions of the type "How do I solve for this probability density" can be solved by the Jacobian method (change of variables) but most ...
132 views
I've created this post with the same purpose as the List of comment templates post, although I imagine it'll be far less useful. There are a few questions on the Unanswered Questions Queue only ...
232 views
### Editing the list of abstract duplicates
I'd like to add some content to the meta post List of Generalizations of Common Questions. The entry will be under Probability and is going to be titled something ...
130 views
### Please comment on this possible answer for abstract duplicate of "integration by partial fractions"
I've expanded an old answer to include a general discussion of how to solve integrals by partial fractions. Before I add it to the list of abstract duplicates, I'd appreciate corrections and ...
206 views
### Canonical duplicate for periodic sequence formula questions?
This question was on the hot network questions today: Function which creates the sequence 1, 2, 3, 1, 2, 3, ... While it certainly has some pedantic value, it's not very interesting mathematically ...
231 views
### Database of abstract duplicates displayed when asking question/clicking on tag
The following question came up today: Prove $5a^2 ≠ 3b^2$ for all non-zero rational numbers $a$ and $b$ This is surely an abstract duplicate if not an actual duplicate. Yet as I intended to indicate ... |
http://www.reddit.com/r/LaTeX/comments/stgsh/makelatex_easily_build_pdfs_from_latex_documents/c4k60sg | you are viewing a single comment's thread.
Thanks. I'll definitely look into that. I'm a bit of a novice at drawing stuff within the source files, but it seems to produce nice results when it works.
I'm happy with the results that TikZ produces - so I think I'll see if there's a way to use the shell escape to let me process the data using other programs. I'm basically plotting the Sloan Digital Sky Survey spectra. As far as I know they're all about the same size and include about a few thousand data points, so if I manage to code it for one, it'll work for the others.
Perhaps gnuplot is the way forward though - just installed it! Time to learn a whole new thing...
Here's a simple example of how to use gnuplot to create plots from data files, then use that plot in LaTeX or XeLaTeX.
First, suppose you have a data file (gpdata.dat) with the x values in the first column and y values in the second column, separated by whitespace:
1 2
2 1
3 4
4 3
Then use these commands in gnuplot to create the plot:
unset key
set xlabel '$x$'
set ylabel '$y$'
set title 'My plot'
set grid
set size square
plot [0:5][0:5] 'gpdata.dat' with points pointtype 7 pointsize 2
set terminal epslatex color header "\\small"
set output "gpdata.tex"
replot
That will create two files: gpdata.tex and gpdata.eps. In LaTeX you input the .tex file, which handles the text for the .eps image file. You could do it like this:
\documentclass{article}
\usepackage{graphicx}
\begin{document}
\noindent Here is a plot created by gnuplot:
\begin{center}
\input{gpdata}
\end{center}
\end{document}
Run xelatex on this file. Notice that the axes are labeled x and y in LaTeX math mode, which is the advantage of using the epslatex terminal in gnuplot (otherwise you get the default ugly gnuplot font). |
http://mathoverflow.net/questions/32626/how-to-shuffle-a-deck-by-parts/32871 | # How to shuffle a deck by parts?
This question is mainly a curiosity, but comes from a practical experience (all players of Race for the galaxy, for example, must have ask themselves the question).
Assume I have a deck of cards that I would like to shuffle. Unfortunately, the deck is so big that I cannot hold it entirely in my hands. Let's say that the deck contains $kn$ cards, and that the operations I can perform are:

1. cut a deck into any number of sub-decks, without looking at the cards but remembering for all $i$ where the $i$-th card from the top of the original deck has been put;
2. gather several decks into one deck in any order (but assume that we do not intertwine the various decks, nor change the order inside any of them);
3. shuffle any deck of at most $n$ cards. Assume moreover that such a shuffle consists in applying an unknown random permutation drawn uniformly.
Here is the question: is it possible to design a finite number of such operations so that the resulting deck is uniformly distributed over all possible permutations of the original deck? If yes, how many shuffles are necessary, or sufficient, to achieve that?
The case $k=2$ seems already interesting.
You should probably specify: no randomness is allowed in steps 1 (cutting) or 2 (gathering). If one may use randomness in either of these steps one can shuffle exactly without step 3 (shuffling). Eg trivially if one can gather randomly, I'll just cut into $kn$ decks of one card each, and reassemble them in a uniformly random order. If one has to gather deterministically but can cut randomly, it's a bit more complicated but one can still implement a simple exact shuffle (for example, the one that, for i=0 to n-2, moves a card uniformly chosen from the top n-i cards into position n-i). – James Martin Jul 20 '10 at 13:26
Upon reading just the title of your question, I immediately thought of Race for the Galaxy. – Richard Dore Jul 20 '10 at 15:02
@James Martin: you are right, that is what the "remembering" and "not intertwin nor change order" parts were intended for. – Benoît Kloeckner Jul 20 '10 at 21:19
Also relevant for Magic: The Gathering :-) – Greg Friedman Jul 21 '10 at 1:23
A truly uniform distribution, no. (Well, your question is not completely well posed, but I will argue this for most ways of making it so.) There are $(kn)!$ ways to shuffle a deck of $kn$ cards. So you want each permutation to occur with probability $1/(kn)!$. In particular, for every prime $p \leq kn$, you want $p$ to occur in the denominator of the probability that each permutation occurs. Let's look at your operations: Reordering $i$ decks can only introduce primes $\leq i$. Perfectly shuffling an $n$ card deck can only introduce primes $\leq n$. Cutting depends on what mathematical model you use for cutting; if all cut points are equally likely, you only get primes dividing $n(n-1)\ldots (n-i+1)$. I imagine other models of cutting will cause similar problems.
The more commonly studied question is how to get a probability distribution that is extremely close to random. There are lots of good results on this; see Trailing the Dovetail Shuffle to its Lair.
Nice! This reminds me of my favorite proof that the 'naive shuffle' (for i=1..n, set j=rand(1..n) and swap cards i and j) can't possibly give a uniform result: there are n^n equally likely computation paths through the loop (n independent choices of j for each of the n values of i) and this can't possibly divide evenly into n! because of the presence of primes p < n which don't divide n. – Steven Stadnicki Aug 19 '10 at 17:45
Assuming you want a practical answer to "I have too many cards to hold in my hands at once; how do I shuffle them reasonably well in a relatively short amount of time?", you might want to consider a "parallel shuffle", distributing the work over several players in hopes that we can get an adequately shuffled deck in less wall-clock minutes than a single-person shuffle, even if it requires more total operations and player-minutes than a single-person shuffle.
I am reminded of the "FFT butterfly diagram" used in digital signal processing and the "Omega Network" used in some computer clusters, based on the "perfect shuffle interconnection".
http://www.ece.ucsb.edu/~kastner/ece15b/project1/fft_description_files/image032.jpg
http://github.com/vijendra/Omega-network/raw/master/16X16.png
Parallel shuffle-deal-shuffle algorithm: (for $k \le n$)
• somehow give k players n cards each (either grab a block of n cards off the top for each player, or evenly deal the cards to the k players)
• shuffle: each of the k players uniformly shuffles their sub-deck of n cards
• deal: each of the k players evenly deals -- face down -- her sub-deck to the k other players (including herself). Equivalently, each player breaks her sub-deck into k equal sub-sub-decks, and distributes one sub-sub-deck to each player (including herself). After all the players have dealt, each player gathers her cards (a few from each player, including herself) into one sub-deck of n cards.
• shuffle: (as above)
By this stage (1 round), we have done the equivalent to randomizing each row of a matrix, then each column. Any particular single card could be anywhere after one round of shuffle-deal-shuffle, with equal probability. Alas, at this stage, there are still a few permutations that have probability zero. For example, the possible permutations equivalent to a rotation by shear (RBS) ("how do I rotate a bitmap?") require 3 shears. The closest that a single round of shuffle-deal-shuffle can produce is 2 shears, which is not enough to produce those permutations. So we continue with the second round:
• deal: (as above)
• shuffle: (as above)
• gather all the sub-decks into one large full deck
The full 2-round shuffle-deal-shuffle-deal-shuffle algorithm can produce any possible permutation, but each permutation does not have exactly the same probability.
Each of the two "deal" steps mixes at least as well as a single riffle shuffle of the entire kn cards. The paper -- by Dave Bayer and Persi Diaconis -- that David Speyer mentioned proves that $m = \frac{3}{2} \log_2 (kn) + \theta$ riffle shuffles are sufficient.
Though David's answer settles the original question, there is no reason to restrict to the case where the total number of cards is a multiple of $n$. In general, we can ask whether it is possible to completely shuffle a deck with $M$ cards if only $n$-card subdecks are directly shuffleable.
As David points out, this is necessarily impossible if there exists a prime $p$ for which $n < p \leq M$. This means that the smallest case that's still open is $n=3$ and $M=4$. That is, is it possible to completely randomize a 4-card deck if only 2 or 3 cards may be shuffled at a time?
I would like to point out that David's proof actually settles the question for any real $k>1$ and sufficiently large $n$ (how large depends on $k$), by the generalization of Chebyshev's theorem and the observation in the first sentence of your second paragraph. So this new question is only open for $1<k<2$ and small $n$. – Daniel Litt Jul 20 '10 at 20:12
https://pypi.org/project/maskattack.lbp/ | Texture (LBP) based counter-measures for the 3D MASK ATTACK database
## Project description
This package implements the LBP counter-measure to spoofing attacks with 3d masks to 2d face recognition systems as described in the paper Spoofing in 2D Face Recognition with 3D Masks and Anti-spoofing with Kinect, by N. Erdogmus and S. Marcel, presented at BTAS 2013.
If you use this package and/or its results, please cite the following publications:
1. The original paper with the counter-measure explained in details:
@INPROCEEDINGS{Erdogmus_BTAS_2013,
author = {Erdogmus, Nesli and Marcel, S{\'{e}}bastien},
keywords = {3D Mask Attack, Counter-Measures, Counter-Spoofing, Face Recognition, Liveness Detection, Replay, Spoofing},
month = sep,
title = {Spoofing in 2D Face Recognition with 3D Masks and Anti-spoofing with Kinect},
journal = {BTAS 2013},
year = {2013},
}
2. Bob as the core framework used to run the experiments:
@inproceedings{Anjos_ACMMM_2012,
author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S. Marcel},
title = {Bob: a free signal processing and machine learning toolbox for researchers},
year = {2012},
month = oct,
booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan},
publisher = {ACM Press},
}
If you wish to report problems or improvements concerning this code, please contact the authors of the above mentioned papers.
## Raw data
The data used in the paper is publicly available and should be downloaded and installed prior to try using the programs described in this package. Visit the 3D MASK ATTACK database portal for more information.
## Installation
There are 2 options you can follow to get this package installed and operational on your computer: you can use automatic installers like pip (or easy_install) or manually download, unpack and use zc.buildout to create a virtual work environment just for this package.
### Using an automatic installer
Using pip is the easiest (shell commands are marked with a $ signal):
$ pip install maskattack.lbp
You can also do the same with easy_install:
$ easy_install maskattack.lbp
### Using zc.buildout
$ python bootstrap.py
$ ./bin/buildout
These 2 commands should download and install all non-installed dependencies and get you a fully operational test and development environment.
## User Guide
This section explains how to use the package in order to: a) calculate the LBP features on the 3D Mask Attack database; b) perform classification using Chi-2, Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). It is assumed you have followed the installation instructions for the package, and got the required database downloaded and uncompressed in a directory. After running the buildout command, you should have all required utilities sitting inside the bin directory. We expect that the video files of the database are installed in a sub-directory called database at the root of the package. You can use a link to the location of the database files, if you don't want to have the database installed on the root of this package:
$ ln -s /path/where/you/installed/the/database database
If you don’t want to create a link, use the --input-dir flag (available in all the scripts) to specify the root directory containing the database files.
### Calculate the LBP features
The first stage of the process is calculating the feature vectors, which are essentially normalized LBP histograms. A single feature vector for each frame of the video (both for the depth and color images) is computed and saved as a multiple row array in a single file.
The program to be used for this is ./bin/calclbp.py. It uses the utility script spoof/calclbp.py. Depending on the command line arguments, it can compute different types of LBP histograms over the normalized face bounding box. Cropped and normalized images can be saved to a folder (./img_cropped by default) and used in future computations to skip cropping using -sc flag.
Furthermore, the normalized face-bounding box can be divided into blocks or not.
The following commands will calculate the feature vectors of all the videos in the database and will put the resulting .hdf5 files with the extracted feature vectors in the output directory ./lbp_features/r_1 with and without skipping the cropping step:
$ bin/calclbp.py -ld ./lbp_features/r_1 --el regular
$ bin/calclbp.py -cd ./img_cropped -ld ./lbp_features/r_1 --el regular -sc
In the above command, the program will crop (64x64 by default) and normalize the images according to the eye positions available in the database. The cropped images will be saved to the default directory (img_cropped) which can be changed using --cropped-dir argument.
To see all the options for the script calclbp.py, just type --help at the command line. Change the default option in order to obtain various features described in the paper.
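For orientation, the per-frame feature is essentially a normalized LBP histogram of the cropped face; a rough sketch with scikit-image (not the Bob-based code actually used by spoof/calclbp.py; names and parameters here are illustrative):
import numpy as np
from skimage.feature import local_binary_pattern
def lbp_histogram(gray_face, P=8, R=1):
    # 'uniform' LBP codes take values 0..P+1, hence P+2 histogram bins
    codes = local_binary_pattern(gray_face, P, R, method='uniform')
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / hist.sum()   # normalize so the histogram sums to 1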
### Classification using Chi-2 distance
The classification using Chi-2 distance consists of two steps. The first one is creating the histogram model (average LBP histogram of all the real access videos in the training set). The second step is comparison of the features of development and test videos to the model histogram and writing the results.
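Conceptually, the comparison in the second step is just a Chi-2 distance between each video's histogram and the model histogram, along these lines (a sketch, not the actual spoof/chi2.py implementation; real_histograms is a hypothetical array with one normalized histogram per row):
import numpy as np
def chi2_distance(h, model, eps=1e-10):
    # chi-square distance between a test histogram and the model histogram
    h, model = np.asarray(h, float), np.asarray(model, float)
    return np.sum((h - model) ** 2 / (h + model + eps))
# model = real_histograms.mean(axis=0)   # average real-access histogram (training set)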
The script for performing Chi-2 histogram comparison is ./bin/cmphistmodels.py. It expects that the LBP features of the videos are stored in a folder ./bin/lbp_features.
First, it calculates the average real-access histogram using the training set and next, it computes the distances and writes the results in a file using the utility script spoof/chi2.py. The default input directory is ./lbp_features, while the default output directory is ./res. To execute this script on previously extracted features, just run:
$ ./bin/cmphistmodels.py -v ./lbp_features/r_1
To see all the options for the script cmphistmodels.py, just type --help at the command line.
### Classification with linear discriminant analysis (LDA)
The classification with LDA is performed using the script ./bin/ldatrain_lbp.py. It makes use of the scripts ml/lda.py, ml/pca.py (if PCA reduction is performed on the data) and ml/norm.py (if the data need to be normalized). The default input and output directories are ./lbp_features and ./res. To execute the script with prior PCA dimensionality reduction as is done in the paper, call:
$ ./bin/ldatrain_lbp.py -v ./lbp_features/r_1 -r
To see all the options for this script, just type --help at the command line.
### Classification with support vector machine (SVM)
The classification with SVM is performed using the script ./bin/svmtrain_lbp.py. It makes use of the scripts ml/pca.py (if PCA reduction is performed on the data) and ml\norm.py (if the data need to be normalized). The default input and output directories are ./lbp_features and ./res. To execute the script as is done in the paper, call:
https://labs.tib.eu/arxiv/?author=E.Nakano | • ### Measurement of Branching Fractions of Hadronic Decays of the $\Omega_c^0$ Baryon(1712.01333)
Jan. 17, 2018 hep-ex
Using a data sample of 980 ${\rm fb}^{-1}$ of $e^+e^-$ annihilation data taken with the Belle detector operating at the KEKB asymmetric-energy $e^+e^-$ collider, we report the results of a study of the decays of the $\Omega_c^0$ charmed baryon into hadronic final states. We report the most precise measurements to date of the relative branching fractions of the $\Omega_c^0$ into $\Omega^-\pi^+\pi^0$, $\Omega^-\pi^+\pi^-\pi^+$, $\Xi^-K^-\pi^+\pi^+$, and $\Xi^0K^-\pi^+$, as well as the first measurements of the branching fractions of the $\Omega_c^0$ into $\Xi^-\bar{K}^0\pi^+$, $\Xi^0\bar{K}^0$, and $\Lambda \bar{K}^0\bar{K}^0$, all with respect to the $\Omega^-\pi^+$ decay. In addition, we investigate the resonant substructure of these modes. Finally, we present a limit on the branching fraction for the decay $\Omega_c^0\to\Sigma^+K^-K^-\pi^+$.
• ### Study of Two-Body $e^+e^- \to B_s^{(*)}\bar{B}_s^{(*)}$ Production in the Energy Range from 10.77 to 11.02 GeV(1609.08749)
Sept. 28, 2016 hep-ex
We report results on the studies of the $e^+e^-\to B_s^{(*)}\bar{B}_s^{(*)}$ processes. The results are based on a $121.4$ fb$^{-1}$ data sample collected with the Belle detector at the center-of-mass energy near the $\Upsilon(10860)$ peak and $16.4$ fb$^{-1}$ of data collected at 19 energy points in the range from 10.77 to 11.02 GeV. We observe a clear $e^+e^-\to\Upsilon(10860)\to B_s^{(*)}\bar{B}_s^{(*)}$ signal, with no statistically significant signal of $e^+e^-\to \Upsilon(11020)\to B_s^{(*)}\bar{B}_s^{(*)}$. The relative production ratio of $B_s^*\bar{B}_s^*$, $B_s\bar{B}_s^{*}$, and $B_s\bar{B}_s$ final states at $\sqrt{s}=10.866$ GeV is measured to be $7:$ $0.856\pm0.106(stat.)\pm0.053(syst.):$ $0.645\pm0.094(stat.)^{+0.030}_{-0.033}(syst.)$. An angular analysis of the $B_s^*\bar{B}_s^*$ final state produced at the $\Upsilon(10860)$ peak is also performed.
• ### Precise measurement of the CP violation parameter sin2phi_1 in B0-->(c\bar c)K0 decays(1201.4643)
Dec. 20, 2012 hep-ex
We present a precise measurement of the CP violation parameter sin2phi_1 and the direct CP violation parameter A_f using the final data sample of 772x10^6 B\bar B pairs collected at the Upsilon(4S) resonance with the Belle detector at the KEKB asymmetric-energy e+e- collider. One neutral B meson is reconstructed in a J/psi K0S, psi(2S) K0S, chi_c1 K0S or J/psi K0L CP-eigenstate and its flavor is identified from the decay products of the accompanying B meson. From the distribution of proper time intervals between the two B decays, we obtain the following CP violation parameters: sin2phi_1=0.667+-0.023(stat)+-0.012(syst) and A_f=0.006+-0.016(stat)+-0.012(syst).
• ### First Measurement of Inclusive B -> X_s eta Decays(0910.4751)
Aug. 20, 2010 hep-ex
We report a first measurement of inclusive B -> X_s eta decays, where X_s is a charmless state with unit strangeness. The measurement is based on a pseudo-inclusive reconstruction technique and uses a sample of 657 x 10^6 BB-bar pairs accumulated with the Belle detector at the KEKB e^+e^- collider. For M_{X_s} < 2.6 GeV/c^2, we measure a branching fraction of (26.1 +/- 3.0 (stat) +1.9 -2.1 (syst) +4.0 -7.1 (model)) x 10^-5 and a direct CP asymmetry of A_{CP} = -0.13 +/- 0.04 +0.02 -0.03. Over half of the signal occurs in the range M_{X_s} > 1.8 GeV/c^2.
• ### Magnetism and superconductivity in quark matter(hep-ph/0506002)
June 1, 2005 hep-ph
Magnetic properties of quark matter and its relation to the microscopic origin of the magnetic field observed in compact stars are studied. Spontaneous spin polarization appears in high-density region due to the Fock exchange term, which may provide a scenario for the behaviors of magnetars. On the other hand, quark matter becomes unstable to form spin density wave in the moderate density region, where restoration of chiral symmetry plays an important role. Coexistence of magnetism and color superconductivity is also discussed.
• ### Spin Polarization and Color Superconductivity in Quark Matter(hep-ph/0304223)
July 23, 2003 hep-ph
A coexistent phase of spin polarization and color superconductivity in high-density QCD is investigated using a self-consistent mean-field method at zero temperature. The axial-vector current stemming from the Fock exchange term of the one-gluon-exchange interaction has a central role to cause spin polarization. The magnitude of spin polarization is determined by the coupled Schwinger-Dyson equation with a superconducting gap function. As a significant feature the Fermi surface is deformed by the axial-vector self-energy and then rotational symmetry is spontaneously broken. The gap function is also taken to be anisotropic in accordance with the deformation. As a result of numerical calculation, it is found that spin polarization barely conflicts with color superconductivity, but almost coexists with it. |
https://physics.stackexchange.com/questions/375234/are-lorentz-transformations-a-direct-consequence-of-finiteness-of-signal-speed | # Are Lorentz transformations a direct consequence of finiteness of signal speed?
I have this silly doubt in my head and it's bugging me for a real long time now. Let us consider the Galilean transformation $x=x'+vt$ for two frames measuring coordinates $x$ and $x'$. For simplicity, I'll call these frames YOU(measuring $x'$) and ME (measuring $x$).Now, there are two ways to look at this(I'll mention both so that the reader gets the drift-changing one concept of Galilean relativity gives Lorentz transforms)-
(1) I (i.e. frame ME) sees the origin of YOU at a distance of $vt$ after a time $t$ ( the regular stuff about origins coinciding at $t=0$ holds of course). Now, there is an event, $E$, which occurs at a coordinate $x'$ in you frame. So,
(Distance of E from ME)=(Distance of E from YOU) + (Distance of YOU from ME)
So $x=x'+vt$, the Galilean transformation law.
Now, in SR, we just let go of the idea that all frames measure the same length. Since it is possible to show the existence of time dilation and length contraction in SR from purely physical reasoning (i.e. without invoking Lorentz transforms), we may write
(Distance of E from ME)=(Distance of E from YOU 'as seen by ME') + (Distance of YOU from ME)
And we get $x=x'/\gamma+vt$, using the result of Lorentz contraction for the 'length' $x'$ YOU measured. And we have derived the Lorentz transformation rule.
Now, the problem is here-
(2) The way I am now trying to look at this is to imagine the following. Suppose YOU measures a coordinate $x'$, and communicates his measurement to me. At this instant, YOU is at a distance $vt$ from me. If YOU could communicate this INSTANTLY, ME could geometrically add the information about distances he knows to get his coordinate- $x=x'+vt$. Thus, Galilean transformations can sort of be ascribed to this 'instantaneous' communication of measurements.
Now, if we account for the finite speed at which a signal can be transmitted, so that NO instantaneous communication is possible, can a similar line of reasoning as above lead to Lorentz Transforms? Or are Lorentz transformations much more fundamental than that? I have a feeling that there is a very trivial point I have overlooked and gotten myself in this mess.
I tried to work it out (not too diligently, I admit), but did not get anywhere close. So is my guess that the Lorentz transformations are simply a correction induced in SR due to finiteness of signal speed incorrect? This is annoying. Any help would be appreciated.
## 1 Answer
Ok, so as far as I understand, there are two parts to answering your question:
1. It is true that Lorentz transformation can be thought of as corrections to the Galilean transformation due to the finiteness of the speed of causality. In the sense that if you take the limit in which $c$ goes to $\infty$ then the Lorentz transformations reduce to Galilean transformations.
2. I don't think a direct method you propose of communicating with the YOU frame can give us Lorentz transformation--at least not trivially. Because, trivially, the communication lag would be incorporated as $x=x'+v\bigg(t+\dfrac{t}{c}\bigg)$. Further, this approach is flawed in its very origin. See, an observer in SR means a whole array of measuring instruments spread over all the spacetime points. So, for ME frame to communicate with YOU frame, it would take no time at all. There is an instrument which is a part of the ME frame in the very vicinity of the origin of the YOU frame.
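As a quick symbolic check of point 1, one can take the $c \to \infty$ limit of the Lorentz transformation with sympy (a minimal sketch):
import sympy as sp
x, t, v, c = sp.symbols('x t v c', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)
x_prime = gamma * (x - v*t)          # Lorentz transformation of the spatial coordinate
print(sp.limit(x_prime, c, sp.oo))   # -> x - v*t, the Galilean result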
• Thanks a lot @Dvij!..I was missing the fact that for 2 frames to 'communicate', no distance needs to be traveled by the signal, as you rightly pointed out. I was kinda hopeful because this sort of reasoning gave the correct answer for galilean relativity. Is there maybe a flaw in that too? – GRrocks Dec 19 '17 at 10:13
@GRrocks Yes, this reasoning is flawed on its very basis due to the simple fact that you realize that communication between frames is instantaneous no matter whether the speed of causality is finite or infinite. The reason it works out seems to be purely coincidental. Somewhat mathematically speaking, Lorentz transformation is not the only transformation that is mathematically conceivable that reduces to Galilean transformation in the $c \to \infty$ limit. There could be many. For example, $x'=x-v(t+\dfrac{t}{c})$. – Dvij Mankad Dec 19 '17 at 13:04
• The logic you use works, in a mathematical way, because this clearly wrong transformation reduces to Galilean transformation in the concerned limit. I could write another crazy transformation that does the same in the same limit and I could formulate a corresponding verbal description of the crazy formula and argue that since it works for Galilean limit it should work for the cases in which $c$ is finite. Clearly, as you can see, this is not logical. Hope this helps. – Dvij Mankad Dec 19 '17 at 13:04 |
http://en.wikipedia.org/wiki/Fracture | Fracture
A fracture is the separation of an object or material into two, or more, pieces under the action of stress.
The fracture of a solid almost always occurs due to the development of certain displacement discontinuity surfaces within the solid. If a displacement develops in this case perpendicular to the surface of displacement, it is called a normal tensile crack or simply a crack; if a displacement develops tangentially to the surface of displacement, it is called a shear crack, slip band, or dislocation.[1]
The word fracture is often applied to bones of living creatures (that is, a bone fracture), or to crystals or crystalline materials, such as gemstones or metal. Sometimes, in crystalline materials, individual crystals fracture without the body actually separating into two or more pieces. Depending on the substance which is fractured, a fracture reduces strength (most substances) or inhibits transmission of light (optical crystals).
A detailed understanding of how fracture occurs in materials may be assisted by the study of fracture mechanics.
A fracture is also the term used for a particular mask data preparation procedure within the realm of integrated circuit design that involves transposing complex polygons into simpler shapes such as trapezoids and rectangles.
Fracture strength
Stress vs. strain curve typical of aluminum
1. Ultimate tensile strength
2. Yield strength
3. Proportional limit stress
4. Fracture
5. Offset strain (typically 0.2%)
Fracture strength, also known as breaking strength, is the stress at which a specimen fails via fracture.[2] This is usually determined for a given specimen by a tensile test, which charts the stress-strain curve (see image). The final recorded point is the fracture strength.
Ductile materials have a fracture strength lower than the ultimate tensile strength (UTS), whereas in brittle materials the fracture strength is equivalent to the UTS.[2] If a ductile material reaches its ultimate tensile strength in a load-controlled situation,[Note 1] it will continue to deform, with no additional load application, until it ruptures. However, if the loading is displacement-controlled,[Note 2] the deformation of the material may relieve the load, preventing rupture.
If the stress-strain curve is plotted in terms of true stress and true strain the curve will always slope upwards and never reverse, as true stress is corrected for the decrease in cross-sectional area. The true stress on the material at the time of rupture is known as the breaking strength. This is the maximum stress on the true stress-strain curve, given by point 1 on curve B.
Types
Brittle fracture
Brittle fracture in glass.
Fracture of an aluminum crank arm. Bright: brittle fracture. Dark: fatigue fracture.
In brittle fracture, no apparent plastic deformation takes place before fracture. In brittle crystalline materials, fracture can occur by cleavage as the result of tensile stress acting normal to crystallographic planes with low bonding (cleavage planes). In amorphous solids, by contrast, the lack of a crystalline structure results in a conchoidal fracture, with cracks proceeding normal to the applied tension.
The theoretical strength of a crystalline material is (roughly)
$\sigma_\mathrm{theoretical} = \sqrt{ \frac{E \gamma}{r_o} }$
where:
$E$ is the Young's modulus of the material,
$\gamma$ is the surface energy, and
$r_o$ is the equilibrium distance between atomic centers.
On the other hand, a crack introduces a stress concentration modeled by
$\sigma_\mathrm{elliptical\ crack} = \sigma_\mathrm{applied}\left(1 + 2 \sqrt{ \frac{a}{\rho}}\right) = 2 \sigma_\mathrm{applied} \sqrt{\frac{a}{\rho}}$ (For sharp cracks)
where:
$\sigma_\mathrm{applied}$ is the loading stress,
$a$ is half the length of the crack, and
$\rho$ is the radius of curvature at the crack tip.
Putting these two equations together, we get
$\sigma_\mathrm{fracture} = \sqrt{ \frac{E \gamma \rho}{4 a r_o}}.$
Looking closely, we can see that sharp cracks (small $\rho$) and large defects (large $a$) both lower the fracture strength of the material.
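As a rough numerical illustration of these two expressions (the values below are hypothetical, roughly glass-like constants chosen only to show the orders of magnitude):
import math
E = 70e9      # Young's modulus, Pa
gamma = 1.0   # surface energy, J/m^2
r_o = 2e-10   # equilibrium atomic spacing, m
a = 1e-6      # half crack length, m
rho = 1e-9    # crack-tip radius of curvature, m
sigma_theoretical = math.sqrt(E * gamma / r_o)               # ~ tens of GPa
sigma_fracture = math.sqrt(E * gamma * rho / (4 * a * r_o))  # ~ a few hundred MPa
print(sigma_theoretical / 1e9, sigma_fracture / 1e6)
Even a micrometre-scale crack with a sharp tip lowers the strength by roughly two orders of magnitude in this illustration.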
Recently, scientists have discovered supersonic fracture, the phenomenon of crack motion faster than the speed of sound in a material.[3] This phenomenon was also verified by experiments on fracture in rubber-like materials.
Ductile fracture
Ductile failure of a specimen strained axially.
Schematic representation of the steps in ductile fracture (in pure tension).
In ductile fracture, extensive plastic deformation (necking) takes place before fracture. The terms rupture or ductile rupture describe the ultimate failure of tough ductile materials loaded in tension. Rather than cracking, the material "pulls apart," generally leaving a rough surface. In this case there is slow propagation and an absorption of a large amount of energy before fracture.[citation needed]
Many ductile metals, especially materials with high purity, can sustain very large deformation of 50–100% or more strain before fracture under favorable loading and environmental conditions. The strain at which the fracture happens is controlled by the purity of the materials. At room temperature, pure iron can undergo deformation up to 100% strain before breaking, while cast iron or high-carbon steels can barely sustain 3% strain.[citation needed]
Because ductile rupture involves a high degree of plastic deformation, the fracture behavior of a propagating crack as modeled above changes fundamentally. Some of the energy from stress concentrations at the crack tips is dissipated by plastic deformation before the crack actually propagates.
The basic steps are: void formation, void coalescence (also known as crack formation), crack propagation, and failure, often resulting in a cup-and-cone shaped failure surface.
Crack separation modes
The three fracture modes.
There are three ways of applying a force to enable a crack to propagate:
• Mode I crack – Opening mode (a tensile stress normal to the plane of the crack)
• Mode II crack – Sliding mode (a shear stress acting parallel to the plane of the crack and perpendicular to the crack front)
• Mode III crack – Tearing mode (a shear stress acting parallel to the plane of the crack and parallel to the crack front)
Crack initiation and propagation accompany fracture. The manner through which the crack propagates through the material gives great insight into the mode of fracture. In ductile materials (ductile fracture), the crack moves slowly and is accompanied by a large amount of plastic deformation. The crack will usually not extend unless an increased stress is applied. On the other hand, in dealing with brittle fracture, cracks spread very rapidly with little or no plastic deformation. The cracks that propagate in a brittle material will continue to grow and increase in magnitude once they are initiated. Another important mannerism of crack propagation is the way in which the advancing crack travels through the material. A crack that passes through the grains within the material is undergoing transgranular fracture. However, a crack that propagates along the grain boundaries is termed an intergranular fracture.
Notes
1. ^ A simple load-controlled tensile situation would be to support a specimen from above, and hang a weight from the bottom end. The load on the specimen is then independent of its deformation.
2. ^ A simple displacement-controlled tensile situation would be to attach a very stiff jack to the ends of a specimen. As the jack extends, it controls the displacement of the specimen; the load on the specimen is dependent on the deformation.
References
1. ^ Cherepanov, G.P., Mechanics of Brittle Fracture
2. ^ a b Degarmo, E. Paul; Black, J T.; Kohser, Ronald A. (2003), Materials and Processes in Manufacturing (9th ed.), Wiley, p. 32, ISBN 0-471-65653-4.
3. ^ C. H. Chen, H. P. Zhang, J. Niemczura, K. Ravi-Chandar and M. Marder (November 2011). "Scaling of crack propagation in rubber sheets". Europhysics Letters 96 (3): 36009. Bibcode:2011EL.....9636009C. doi:10.1209/0295-5075/96/36009. |
https://admin.clutchprep.com/chemistry/practice-problems/103613/what-is-the-value-of-the-equilibrium-constant-at-500-c-for-the-formation-of-nh3- | # Problem: What is the value of the equilibrium constant at 500 °C for the formation of NH 3 according to the following equation?N2(g) + 3H2(g) ⇌ 2NH3(g)An equilibrium mixture of NH3(g), H2(g), and N2(g) at 500 °C was found to contain 1.35 M H 2, 1.15 M N2, and 4.12 × 10−1 M NH3.
###### FREE Expert Solution
We’re being asked to determine the equilibrium constant at 500˚C for this reaction:
N2(g) + 3 H2(g) 2 NH3(g)
Recall that the equilibrium constant is the ratio of the products and reactants
We use Kp when dealing with pressure and Kc when dealing with concentration:
$K_\text{p} = \dfrac{P_\text{products}}{P_\text{reactants}}$   $K_\text{c} = \dfrac{[\text{products}]}{[\text{reactants}]}$
Note that solid and liquid compounds are ignored in the equilibrium expression.
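Evaluating Kc with the given equilibrium concentrations is then a short calculation (sketch):
N2, H2, NH3 = 1.15, 1.35, 4.12e-1        # equilibrium concentrations, mol/L
Kc = NH3**2 / (N2 * H2**3)               # products over reactants, each to its coefficient
print(round(Kc, 3))                      # approximately 0.06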
###### Problem Details
What is the value of the equilibrium constant at 500 °C for the formation of NH3 according to the following equation?
N2(g) + 3H2(g) ⇌ 2NH3(g)
An equilibrium mixture of NH3(g), H2(g), and N2(g) at 500 °C was found to contain 1.35 M H2, 1.15 M N2, and 4.12 × 10−1 M NH3.
https://homework.cpm.org/category/CCI_CT/textbook/calc/chapter/5/lesson/5.2.5/problem/5-106 | ### Home > CALC > Chapter 5 > Lesson 5.2.5 > Problem5-106
5-106.
Use your derivative tools to find the second derivative, $\frac { d ^ { 2 } y } { d x ^ { 2 } }$, of each function below.
1. $y = \frac { \operatorname { sin } x } { x }$
1. $y =\operatorname{csc}^2 x −\operatorname{cot}^2 x$
1. $y = \sqrt { \frac { 1 } { x } }$
You could use the quotient rule AND the chain rule... Or you could rewrite $y$ with exponents and quickly use the Power Rule!
1. $y = | x - 2 |$
Rewrite $y$ as a piecewise function. The derivative will be a piecewise function as well. So will the second derivative. |
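If you want to check your by-hand answers, a short sympy session covers the first three parts (a sketch; the last part is best handled piecewise by hand):
import sympy as sp
x = sp.symbols('x', positive=True)
for y in (sp.sin(x)/x, sp.csc(x)**2 - sp.cot(x)**2, sp.sqrt(1/x)):
    print(y, '->', sp.simplify(sp.diff(y, x, 2)))
# y = |x - 2| is piecewise: y'' = 0 for x != 2, and it does not exist at x = 2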
https://support.bioconductor.org/u/3224/?page=2&sort=New%20answers&limit=All%20time&answered=all&q= | ## User: Axel Klenk
Axel Klenk (920)
Reputation: 920
Status: Trusted
Location: Switzerland
Last seen: 4 hours ago
Joined: 10 years, 1 month ago
Email: a*********@idorsia.com
#### Posts by Axel Klenk
... Well, consider the pros and cons: if your screening step is too conservative, you may not have enough candidates for your wet lab experiments and you have a higher risk of missing false negatives. If, OTOH, it is not conservative at all, you may end up with too many candidates for wet lab follow-up ...
written 7 months ago by Axel Klenk920
... > results(dds, lfcThreshold = log2(1.5), alpha = 0.1) does not do what you apparently think it does, see ?results . You probably want something like > write.csv(as.data.frame(res[res$padj < 0.1 & res$log2FoldChange > log2(1.5),]), file="DEGs.csv") ...
written 7 months ago by Axel Klenk920
... Yes, if I read the figure legend and paper correctly, they did this as a first step to select candidates that were then tested in additional experiments, and the thresholds were apparently chosen to yield a manageable number of candidates. Why not. So, let me clarify, that I would not recommend usin ...
written 7 months ago by Axel Klenk920
... 1) Yes, in which case the resulting list of "DEGs" is expected to contain 10% false discoveries instead of 5%. 2) Sure, but you may want to check that the lfc of your "DEGs" is biologically meaningful in the context of your experiment and not "only" statistically significant. 3) I would not recomm ...
written 7 months ago by Axel Klenk920
... Dear Paul, that's a bit too concise. I'm not really sure what you mean by "series matrix file"... If this is about a GEO series, use GEOquery::getGEO() to download it as an ExpressionSet, then use exprs(), pData(), fData(), etc. from the Biobase package to access the contents. ?pData Hope this ...
written 7 months ago by Axel Klenk920
Comment: C: Cannot read Biom file
... Ouch, apologies for providing misleading help by posting a quick reply when I was about to leave my office... I had completely overlooked your paste() call... In your call to paste(), you're explicitly asking for no (empty) separator between file path and filename by setting argument sep = "". It w ...
written 7 months ago by Axel Klenk920
Comment: C: Cannot read Biom file
... Dear voteroj, without looking at the read_biom() code, I guess it is trying to tell you that there is a problem with your filename. These look like Windows paths and I'd suggest to simply add a colon to the drive letter, as in jsonbiomfile = "C:/Users/metagenomic/Desktop/otu_table_json.biom" (u ...
written 7 months ago by Axel Klenk920
... Dear yueli7, try P[1:4, 5:7] or P[1:4, 5:ncol(P)] is that what you want? If yes, it is very basic R subsetting and not related to Bioconductor. It should be asked on StackOverflow or r-help and you may benefit from reading some introduction to R. Hope this helps. ...
written 7 months ago by Axel Klenk920
... Dear Dastjerdi, as you don't show us the complete code you have used, I need to ask: have you loaded the package before use, e.g. library("DESeq2") ?library HTH. ...
written 7 months ago by Axel Klenk920
... Yep, that's why I usually use unadjusted p-values for plotting and ordering. W.r.t. p-value vs. FDR, see ?p.adjust ...
written 13 months ago by Axel Klenk920
#### Latest awards to Axel Klenk
Popular Question 4 months ago, created a question with more than 1,000 views. For Download problem with GEOquery::getGEO()
Scholar 8 months ago, created an answer that has been accepted. For A: Identification of DEGs through limma analysis
Teacher 20 months ago, created an answer with at least 3 up-votes. For A: Limma un moderated t-test
Scholar 20 months ago, created an answer that has been accepted. For A: Identification of DEGs through limma analysis
Centurion 20 months ago, created 100 posts.
Scholar 20 months ago, created an answer that has been accepted. For A: Limma un moderated t-test
Scholar 2.9 years ago, created an answer that has been accepted. For A: Limma un moderated t-test
Supporter 3.0 years ago, voted at least 25 times.
Teacher 4.1 years ago, created an answer with at least 3 up-votes. For A: basic R question: concatenate two numeric vectors grouped by element index
https://iacr.org/cryptodb/data/paper.php?pubkey=13646 | ## CryptoDB
### Paper: Breaking the Symmetry: a Way to Resist the New Differential Attack
Authors: Jintai Ding, Bo-Yin Yang, Chen-Mou Cheng, Owen Chen, Vivien Dubois
URL: http://eprint.iacr.org/2007/366
Abstract: Sflash had recently been broken by Dubois, Stern, Shamir, etc., using a differential attack on the public key. The $C^{\ast-}$ signature schemes are hence no longer practical. In this paper, we will study the new attack from the point of view of symmetry, then (1) present a simple concept (projection) to modify several multivariate schemes to resist the new attacks; (2) demonstrate with practical examples that this simple method could work well; and (3) show that the same discussion of attack-and-defence applies to other big-field multivariates. The speed of encryption schemes is not affected, and we can still have big-field multivariate signatures resisting the new differential attacks with speeds comparable to Sflash.
##### BibTeX
@misc{eprint-2007-13646,
title={Breaking the Symmetry: a Way to Resist the New Differential Attack},
booktitle={IACR Eprint archive},
keywords={public-key cryptography /},
url={http://eprint.iacr.org/2007/366},
note={multivariate public key cryptography,differential, symmetry, projection [email protected] 13769 received 13 Sep 2007},
author={Jintai Ding and Bo-Yin Yang and Chen-Mou Cheng and Owen Chen and Vivien Dubois},
year=2007
} |
https://calendar.math.illinois.edu/?year=2005&month=11&day=16&interval=day | Department of
# Mathematics
Seminar Calendar
for events the day of Wednesday, November 16, 2005.
Questions regarding events or the calendar should be directed to Tori Corkery.
Wednesday, November 16, 2005
3:00 pm in 441 Altgeld Hall, Wednesday, November 16, 2005
#### Topology of algebraic varieties
###### Christian Haesemeyer (UIUC Math)
Abstract: Projective algebraic varieties have many special topological properties. In this talk, we will explore some of these properties of complex projective manifolds and present some of their applications (like the classification of the homotopy types of complete intersections of hypersurfaces). Time allowing, we will also look at the topology of varieties defined over subfields of the complex numbers (for example, the homotopy type of their complex points depends on the choice of embedding of the subfield, even though the homology type does not).
4:00 pm in 245 Altgeld Hall, Wednesday, November 16, 2005
#### The Math behind Oscillator-Wave Systems
###### Eduard Kirr (Dept. of Mathematics, UIUC)
Abstract: Oscillator-wave systems are now ubiquitous in science, however, their evolution is far from being completely understood. I will exemplify with two models. First is related with mechanical experiments you might have done in your physics classes, the second is related with current experimental work on Bose-Einstein Condensates. I will try to gently introduce the modern analysis needed to study the two. But the conclusion will be that we need far more mathematical tools to attack more general oscillator-wave systems. |
https://socratic.org/questions/how-do-you-factor-completely-2x-2-20-x-2-9x | # How do you factor completely 2x^2+20=x^2+9x?
Dec 28, 2017
GIven: $2 {x}^{2} + 20 = {x}^{2} + 9 x$
Combine terms so that the quadratic is equal to 0:
${x}^{2} - 9 x + 20 = 0$
This will factor into $\left(x - {r}_{1}\right) \left(x - {r}_{2}\right) = 0$, if we can find numbers such that ${r}_{1} {r}_{2} = 20$ and $- \left({r}_{1} + {r}_{2}\right) = - 9$.
4 and 5 will do it $\left(4\right) \left(5\right) = 20$ and $- \left(4 + 5\right) = - 9$:
$\left(x - 4\right) \left(x - 5\right) = 0$
Dec 28, 2017
Set it equal to zero, then find factors of $c$ that add to $b$.
#### Explanation:
Set equal to zero:
${x}^{2} - 9 x + 20 = 0$
Looking at the discriminant: (${b}^{2} - 4 a c$)
$81 - 4 \cdot 1 \cdot 20 = 81 - 80 = 1$
Since $1$ is a perfect square, we know it factors.
Factors of $20$ that add to $- 9$ are $- 5$ and $- 4$.
$\left(x - 5\right) \left(x - 4\right) = {x}^{2} - 9 x + 20$ |
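A quick sympy check of the factorization (sketch):
import sympy as sp
x = sp.symbols('x')
print(sp.factor(2*x**2 + 20 - (x**2 + 9*x)))        # (x - 5)*(x - 4)
print(sp.solve(sp.Eq(2*x**2 + 20, x**2 + 9*x), x))  # [4, 5]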
https://ham.stackexchange.com/questions/17207/how-do-dual-band-j-poles-work | # How do dual band J poles work?
My understanding is a J pole is a $$\frac{1}{2} \lambda$$ dipole attached to a $$\frac{1}{4} \lambda$$ impedance matching element.
How does this dual band work? What acts as the impedance matching element and what is the radiating element? Does the smaller rod act as the impedance match for 2 m and the larger one for 440 MHz with the longest rod acting as the radiating element?
How is a feedline connected to this?
• For the dual-band version, the feedline connects to the 19 1/4" element using a threaded stud adapter thing that has an SO-239 on the bottom and a 3/8" threaded coupling nut on top. Lemme dig up a link for that. I've built a couple of these dual-band units and found the SWR curves very disappointing. – Tyler Stone Aug 26 at 23:15
• Feedline connection: americanradiosupply.com/… – Tyler Stone Aug 26 at 23:20
• Instead of what? – Tyler Stone Aug 26 at 23:26
• @TylerStone for 440 MHz, better to use N connectors. They are waterproof, too. – Mike Waters Aug 26 at 23:36
• @Mike I am in no way endorsing or recommending these 1930s-era SO-239/PL-259 connectors, but its what Arrow uses on their J-poles and my understanding is that is part of what was asked. – Tyler Stone Aug 26 at 23:44 |
http://www.gamedev.net/topic/628589-cone-tracing-and-path-tracing-differences/page-2 |
Cone Tracing and Path Tracing - differences.
72 replies to this topic
#21scyfris Members - Reputation: 182
Posted 09 August 2012 - 10:15 AM
That being said, I still wouldn't recommend implementing this paper if you are new to this field, although it would be interesting to implement the octree-only part as you'll learn a lot about how all these data structures are fitting together within the GPU. If you decide to do that, please let us know how it went :-)
#22MrOMGWTF Members - Reputation: 440
Posted 09 August 2012 - 11:42 AM
Well yeah, actually I have like worst gpu ever.
This is my gpu: http://www.geforce.com/hardware/desktop-gpus/geforce-9500-gt/specifications
So I can't do anything in compute shader.
#23gboxentertainment Members - Reputation: 772
Posted 14 August 2012 - 06:39 AM
I think it's also the way the paper is organized which can be confusing. With the cone-tracing section, this is how I've interpreted it:
1. Capture direct illumination first (9.7.2) to store the incoming radiance into each leaf. This is done by placing a camera at the light's position and direction to create a light-view-map (just like shadow-mapping). I think each pixel location is transformed into world space position and the index of the leaf corresponding to this position is derived. Two lots of information: direction distribution and energy (which I think is color?) of that pixel is then stored in the leaf.
2. Correct me if I've misinterpreted the paper, but I think the values are averaged at each level from the bottom leaves to the top of the octree (is direction distribution also averaged?).
3. The actual cone tracing part is what I have the most trouble understanding, if the color values are already stored and averaged out across all nodes in the octree, wouldn't it just be a matter of projecting those colors onto the screen (or a screen texture) to obtain the indirect lighting?
#24CryZe Members - Reputation: 768
Posted 14 August 2012 - 06:51 AM
3. The actual cone tracing part is what I have the most trouble understanding, if the color values are already stored and averaged out across all nodes in the octree, wouldn't it just be a matter of projecting those colors onto the screen (or a screen texture) to obtain the indirect lighting?
The octree only represents the scene with its direct illumination and also features different 3-dimensional mip-map levels of the scene. To obtain indirect lighting you still need to trace rays or, even better, cones through the scene as you would with Path Tracing. Cone Tracing an SVO is just a feasible way to realise Path Tracing in a real-time application.
Edited by CryZe, 14 August 2012 - 06:52 AM.
#25gboxentertainment Members - Reputation: 772
Posted 15 August 2012 - 04:02 AM
you still need to trace rays or, even better, cones through the scene as you would with Path Tracing
So for each pixel on the screen, I would send out a cone, with their apex starting from the pixel?
Or do the apex of the cones start from a point on every surface?
Edit: Okay, I think I get it now. For every pixel on the screen, a number of cones are spawned from the surface corresponding to that pixel in world-space. These are used to sample the pre-integrated information from the voxelized volumes intersecting with the cones. The final gathering involves averaging the total of all the information collected by the cones and this is projected onto the pixel.
Am I correct?
Edited by gboxentertainment, 15 August 2012 - 05:37 AM.
#26CryZe Members - Reputation: 768
Posted 15 August 2012 - 05:57 AM
Yes you're mostly correct. The information is not averaged together though. The incoming radiance of the cones is evaluated against the BRDF of the surface to solve the rendering equation and is not just averaged together.
Edited by CryZe, 15 August 2012 - 05:58 AM.
#27gboxentertainment Members - Reputation: 772
Posted 17 August 2012 - 06:38 AM
Okay, now I finally understand that the incoming radiance is "splatted" into the leaves of the structure, which correspond to the surfaces of the scene. This is done using the same concepts as Reflective Shadow Maps, but instead of generating Virtual Point Lights, color values are directly added to the leaves, then the values are transferred to neighbouring bricks and filtered upwards to each parent node.
However, if we are using leaves to transfer the light, does that mean we need to subdivide planar surfaces like floors and walls to the lowest level as well in order for these surfaces to contribute to the bounce lighting?
#28MrOMGWTF Members - Reputation: 440
Posted 17 August 2012 - 10:25 AM
Oh god yes, I was waiting for this paper so long.
http://www.decom.ufop.br/sibgrapi2012/eproceedings/technical/ts5/102157_3.pdf
Ambient occlusion using cone tracing with scene voxelization.
It's only ambient occlusion but it still explains CONE TRACING! YES
@edit:
This technique is more like cone-sphere tracing:
The volume of each cone is sampled by a series of spheres. The obstructed volumes of the spheres are used to estimate the amount of rays that are blocked by the scene geometry.
Edited by MrOMGWTF, 17 August 2012 - 10:28 AM.
#29gboxentertainment Members - Reputation: 772
Posted 18 August 2012 - 04:34 AM
I'm starting to get the feeling the voxel octree cone tracing is very similar to, if not an upgrade to "voxel-based global illumination" by Thiedemann, et. al.
Thiedemann uses voxelization of RSMs with raytracing. I think that Crassin vastly improved on this by introducing an octree structure for the voxels and thus was able to approximate the raytracing into cone tracing by utilizing the uniform multi-level structure of the octree, which can approximate a cone-like shape using voxels that increase in size.
#30gboxentertainment Members - Reputation: 772
Posted 20 August 2012 - 07:37 AM
I did a little research on RSMs in order to understand how voxel cone tracing works.
With RSMs, if I render the world-position map from the light's point of view into a texture, I can sample that texture to locate the world-position (corresponding to each texel of the light's position map) of each bounce - this is simulated by spawning VPLs at a number of these points. The indirect illumination is calculated the same way as for a direct light by taking I = NdotL/attenuation where L = (VPL position - position of triangle) for every triangle and every VPL. I is divided by a factor to account for energy conservation and this becomes the output color of the triangles. Through this process the result is also filtered using some filtering algorithm to reduce artifacts.
Now doing this accurately would be too expensive because for a 1280x720 display the cost would be: 1280*720 = 921,600 lights, each evaluated against the number of triangles on the screen (for a deferred approach). A tile-based approach could probably improve it but it'd be even more expensive if you wanted a second bounce.
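In code, that per-VPL gather is roughly the following (an instant-radiosity-style sketch in Python rather than a shader; the names are illustrative and the 1/pi with the two cosine terms is one common choice for the energy-conservation factor mentioned above):
import math
def indirect_from_vpls(p, n, albedo, vpls):
    # one-bounce indirect light at surface point p (normal n) from a list of
    # VPLs (position, normal, flux) extracted from the reflective shadow map
    total = [0.0, 0.0, 0.0]
    for vpl_pos, vpl_n, flux in vpls:
        d = [vpl_pos[i] - p[i] for i in range(3)]
        dist2 = sum(c * c for c in d) + 1e-4                       # clamp the singularity
        w = [c / math.sqrt(dist2) for c in d]                      # direction towards the VPL
        cos_r = max(0.0, sum(n[i] * w[i] for i in range(3)))       # receiver cosine
        cos_s = max(0.0, -sum(vpl_n[i] * w[i] for i in range(3)))  # sender cosine
        for i in range(3):
            total[i] += albedo[i] / math.pi * flux[i] * cos_r * cos_s / dist2
    return total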
Now, from this understanding, this is how I currently understand voxel cone tracing:
With a voxel octree structure, say for a single cone, the leaf-voxel corresponding to the world-space position from a light's point of view would be sampled by a texel from the light-position-map in the same way as RSMs.
After defining a cone angle, the maximum sized voxel level that falls within this cone at distances along the cone axis would receive the reflected color multiplied by the percentage emission factor stored in the first leaf-voxel and divided by the size of the voxel to conserve energy, with attenuation also taken into account. Then I assume that these values are inherited by their children down to the leaf nodes (then NdotL BRDF is solved to get the correct surface distribution, where N is stored in each receiving leaf voxels and L is the vector distance between each leaf and the leaf of the surface of the first bounce). Emission/absorption coefficients stored in this leaf also affect the resulting color.
For the voxel intersection test I'm assuming that you don't need to actually perform an intersection between a cone and voxels at every level. I'm assuming that because the direction of the cone is predefined (I'm setting it at the normal direction to the bounce surface), you can just take the rate of increase of the voxel size with the distance the cone has travelled (so for a 90degree cone, a maximum voxel size of 4 should fit inside a cone at distance 6) - I'm sure with some trigonometry you can work out the relationship between voxel size and cone distance so I won't go into detail here. You check all voxel sizes along the cone axis to find the position of each size corresponding to the distance.
I'm going to give some pseudo-HLSL-code a try (assuming octree structure is built and leaf voxels filled with pre-integrated material values):
[When I use "for" I mean allocate a thread for, with either the pixel shader or compute shader; and I also use 'direct' to represent the surface receiving direct lighting for the first bounce]
float3 coneOrigin = directPositionMap.Sample(TextureSampler, input.uv).xyz;
float3 coneDir = normalize(directNormalMap.Sample(TextureSampler, input.uv).xyz);
float3 coneColor = directColorMap.Sample(TextureSampler, input.uv).rgb;
float emissiveFactor = directMaterialMap.Sample(TextureSampler, input.uv).x;
float3 voxelPosInCone[numLevels];
float voxelSize;
// named coneDistance to avoid clashing with the HLSL distance() intrinsic
float coneDistance(int voxelLevel)
{
    voxelSize = ...;   // some function of voxelLevel - should be easily worked out
    float dist = ...;  // some function of voxelSize, based on the cone angle
    return dist;
}
// run through every voxel level to find corresponding positions
for (int i = 0; i < numLevels; i++)
{
    float d = coneDistance(i);
    voxelPosInCone[i] = coneOrigin + d * coneDir;
    int voxelIdx = ...;  // derived from voxelPosInCone[i] - still need to find out the most efficient way of doing this
    voxel[voxelIdx].color += coneColor * emissiveFactor / (d * d) / voxelVolume;
    // pseudo-code: push this value down to all levels of children beneath voxel[voxelIdx]
    {
        childVoxel.color = voxel[voxelIdx].color;
        float3 N = normalize(leafVoxel.normal);
        float3 L = normalize(coneOrigin - leafVoxel.pos);
        leafVoxel.color = dot(N, L) * leafVoxel.absorptionFactor;
    }  // also need to find out the most efficient way of doing this
}
The latter part still needs a bit more thought - I think maybe the color values from the leaf could be transferred to the triangles that fall inside them and then the NdotL BRDF is evaluated. Also, I need to find out the quickest way of getting voxel index from position (currently, I just traverse the octree structure from top-to-bottom for each voxel position until I get to the required level).
Maybe I'll draw some diagrams as well to help explain the cone traversal.
Now all of this has just been a rough, educated guess so please let me know if parts of it are correct/incorrect or if I am completely off the mark.
Edited by gboxentertainment, 20 August 2012 - 07:54 AM.
Posted 20 August 2012 - 08:44 AM
I'm not sure whether I can follow your train of thought exactly here, but I would like to point a couple of things out:
First of all, be wary of making the comparison between reflective shadow maps and voxels. While an RSM can be used to generate a limited set of voxels it does not contain the same data as would be expected from a set of voxels. When working with voxels the world-space position of your sampled voxel for example is inferred by its neighbours and the size of your voxel volume (note: when you look at it formally voxels themselves do not have a size, just like pixels don't have a size), whereas the world position in an RSM is determined by reconstructing a light-space position from the stored depth which you then transform into a world-space position by applying the inverse light transformation.
The RSM method reminds me more of the global illumination technique described by Kaplanyan and Dachsbacher, but they create a limited low-frequency voxel representation of the scene (non-mipmapped!) which they use as a starting point for creating a light volume with a propagation algorithm.
The method of spawning VPLs using a RSM also sounds more like a technique called instant radiosity, which as far as I know has very little in common with the voxel cone tracing paper.
Second, the factor used for energy conservation for lambertian lighting (N.L lighting) is a fixed constant of 1/Pi. Since a cone trace in the presented paper is actually just a path trace of a correctly filtered mip-map of your high frequency voxel data (if I understand correctly, I haven't studied it in-depth yet) there's no need to include any other factor in your BRDF to maintain energy conservation while doing cone tracing.
Your assumption about determining the correct mip level based on the cone angle and and distance to the sampled surface sounds correct the way I understand it.
#32MrOMGWTF Members - Reputation: 440
Posted 24 August 2012 - 09:31 AM
Since a cone trace in the presented paper is actually just a path trace of a correctly filtered mip-map of your high frequency voxel data (if I understand correctly, I haven't studied it in-depth yet) there's no need to include any other factor in your BRDF to maintain energy conservation while doing cone tracing.
So basically there aren't any cones in this technique? Just ray-tracing of filtered geometry?
Posted 24 August 2012 - 11:33 AM
Since a cone trace in the presented paper is actually just a path trace of a correctly filtered mip-map of your high frequency voxel data (if I understand correctly, I haven't studied it in-depth yet) there's no need to include any other factor in your BRDF to maintain energy conservation while doing cone tracing.
So basically there aren't any cones in this technique? Just ray-tracing of filtered geometry?
If I understand correctly it is just a path trace of your pre-filtered voxel data, but doing such a path trace is still a cone trace, so technically there are cones involved ;)
You can look at a cone trace as tracing a bundle of paths and weighting the results of each path, which is basically an integration over a disk-shaped surface. In this technique your voxel data is actually pre-integrated (=downsampled) for each step along your cone axis which means you only have to do a path trace on the pre-integrated data.
#34gboxentertainment Members - Reputation: 772
Posted 24 August 2012 - 09:04 PM
Has anyone tried to implement the soft-shadow cone tracing explained in Crassin's Thesis (p.162)? I think I might give this one a go first because it seems to be a lot simpler to understand and possibly much simpler to implement, so it would be a good starting point.
It is just a "cone" with its apex from the light's position traced in its direction. Opacity values are accumulated, which I believe can be based on the percentage of the shadow-caster lying within the cone at each mip-map level corresponding to the cone's radius.
#35gboxentertainment Members - Reputation: 772
Posted 25 August 2012 - 09:17 AM
With the voxelization of the scene, would you voxelize planar surfaces to the most detailed level? Crassin's voxel cone tracing video shows that the floor of the Sponza was fully voxelized, but it seems like it would be wasting memory considering that planar objects are most efficient in traditional rasterization due to fewer triangles. But I guess for cone-tracing GI you would need the more detailed voxels to correctly transfer specular reflections to planar surfaces.
#36MrOMGWTF Members - Reputation: 440
Posted 25 August 2012 - 10:03 AM
I don't understand one thing in this voxel cone tracing. When should I stop tracing a cone? In normal path tracing, I have to find the closest intersection between the ray and some geometry. What about voxel cone tracing?
Posted 25 August 2012 - 05:18 PM
With the voxelization of the scene, would you voxelize planar surfaces to the most detailed level? Crassin's voxel cone tracing video shows that the floor of the Sponza was fully voxelized, but it seems like it would be wasting memory considering that planar objects are most efficient in traditional rasterization due to fewer triangles. But I guess for cone-tracing GI you would need the more detailed voxels to correctly transfer specular reflections to planar surfaces.
That would be up to your required level of detail and your rendering budget I suppose
I don't understand one thing in this voxel cone tracing. When should I stop tracing a cone? In normal path tracing, I have to find the closest intersection between the ray and some geometry. What about voxel cone tracing?
As I said in my previous post, you should look at cone tracing as tracing a dense bundle of paths, so the same rules apply for cone tracing as they do for path tracing
#38MrOMGWTF Members - Reputation: 440
Posted 26 August 2012 - 12:28 AM
As I said in my previous post, you should look at cone tracing as tracing a dense bundle of paths, so the same rules apply for cone tracing as they do for path tracing
So, as the path gets longer, we lower the mipmap level?
#39gboxentertainment Members - Reputation: 772
Posted 26 August 2012 - 12:56 AM
I don't understand one thing in this voxel cone tracing. When should I stop tracing a cone? In normal path tracing, I have to find the closest intersection between the ray and some geometry. What about voxel cone tracing?
My thoughts would be that you would just define a maximum range for your cones and you'd probably determine this from approximating the distance where the illumination contribution becomes unnoticeable - because the further an object is, the less it contributes - attenuation in a BRDF.
I think I was quite a bit wrong with my previous understanding.
I've thought about it a bit more and here's my new understanding:
Let's assume that:
- the octree structure is already created
- the normals are stored in the leaves and averaged/filtered to the parent nodes.
- colors are stored in the leaves (but not averaged/filtered).
Referring to the attached drawing below:
For simplicity's sake, if we just take a single light ray/photon that hits a point on the surface of a blue wall:
- Sample the color and world position of this pixel from the light's point of view and do a lookup in the octree to find the index of this leaf voxel.
- The color values are filtered to higher levels by dividing by two (I'm not sure how correct this is).
Now for simplicity's sake, let's take a single pixel from our camera's point of view that we want to illuminate - let's assume this pixel is a point on the floor surface.
- If we trace just one "theoretical" cone in the direction of the wall (in reality you would trace several cones in several directions), the largest voxels that fall within the radius of the cone at every distance of the cone's range would be taken into consideration - as highlighted by the black squares. You wouldn't actually intersect a cone volume with voxel volumes because that would be inefficient; instead you would just use a function that says, for a given distance, which voxel level should be considered.
- For each voxel captured, you would calculate NdotL/(distance squared), where N is a normal value stored in that voxel prior to rendering (it would be a filtered value at higher-level voxels) and L is the direction from the position on the wall to the point on the floor surface. The values calculated from this for each captured voxel would be added on to the color of the pixel corresponding to that point on the floor. (A rough sketch of such a gather loop follows below.)
For speculars:
- You would make the "cone" radius smaller, thus at further distances from the point on the floor, lower level (more detailed) voxels would be captured. In this case, one leaf voxel is captured and the contribution is added. I think for speculars you would use a different formula for specular lighting to take into account the camera direction.
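A rough sketch of the gather loop described above (again plain Python rather than shader code; sample_voxel, which returns pre-filtered colour and opacity for a position and mip level, and the stepping/attenuation scheme are assumptions for illustration, not the paper's exact method):

import math

def gather_cone(origin, direction, sample_voxel, cone_half_angle,
                leaf_voxel_size, max_distance, step_scale=1.0):
    """Accumulate pre-filtered voxel colour along one cone, front to back."""
    colour = [0.0, 0.0, 0.0]
    occlusion = 0.0
    dist = leaf_voxel_size  # start just ahead of the surface to avoid self-sampling
    while dist < max_distance and occlusion < 1.0:
        diameter = max(2.0 * dist * math.tan(cone_half_angle), leaf_voxel_size)
        level = math.log2(diameter / leaf_voxel_size)  # coarser mip further out
        pos = [origin[i] + dist * direction[i] for i in range(3)]
        sample_colour, sample_alpha = sample_voxel(pos, level)  # assumed lookup into the mip-mapped voxel data
        weight = (1.0 - occlusion) * sample_alpha
        for i in range(3):
            colour[i] += weight * sample_colour[i]
        occlusion += weight
        dist += step_scale * diameter  # step proportional to the cone footprint
    return colour, occlusion

Accumulating opacity front to back like this also gives a natural stopping rule (stop once occlusion saturates or the maximum range is reached), and it is what keeps voxels behind an occluder from contributing.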
#40MrOMGWTF Members - Reputation: 440
Posted 26 August 2012 - 01:39 AM
@gboxentertainment:
What if we have a situation like this:
The green wall will still be illuminating the floor, but it shouldn't, because the blue wall is occluding the green wall.
https://askthetask.com/114098/points-links-what-coordinates-point-after-reflected-across |
100 POINTS AND BRAINLY NO LINKS What are the coordinates of point N after it is reflected across the x-axis?
(−3, −4 )
(−3, 4)
(3, −4)
(3, 4)
(-3, 4)
Explanation:
When you reflect a point across the x-axis, the x-coordinate remains the same, but the y-coordinate is taken to be the additive inverse. The reflection of point (x, y) across the x-axis is (x, -y).
(-3, -4) ⇒ (-3, 4)
(-3, 4)
Explanation:
Reflection over the x-axis is (x, y) → (x, −y)
Here the given point:
N(-3, -4)
If reflected:
(-3, 4)
https://lists.whatwg.org/pipermail/whatwg-whatwg.org/2006-June/048957.html | # [whatwg] Mathematics in HTML5
White Lynx whitelynx at operamail.com
Fri Jun 16 06:27:12 PDT 2006
Oistein E. Andersen wrote:
>>Quotes from "Wikipedia TeX in HTML5" http://xn--istein-9xa.com/HTML5/WikiTeX.pdf
>>2.5 Big operators
>>Remark: Is the following the intended use of under/over and opgrp?
>Yes. In fact I would be more appropriate to use the same markup for both and
>control rendering (under/over vs. sub/sup) via style sheets, but since
>it is impossible to have reasonable markup that admits both presentations
>(due to limited capabilities of CSS2.1) the only option is to allow both
>possibilities.
> In that case, what would drive an author to use <opgrp> instead of the more familiar <sub>/<sup> construction?
> Is the rendering intended to be different?
Yes, sub/sup will behave like HTML sub/sup, with offsets based on font size as is currently
done in HTML implementations, while llim/ulim and marker/submark will have offsets based on the size of their
base (operator, fence, matrix, etc.), not on font size as in the case of the HTML sub/sup elements.
> The proposal states that <op> should be used to mark resizable operators,
> but this presumably does not mean that the size of such operators is actually intended to change.
It is intended to be larger. If necessary, a separate element may be used for the rest of the operators,
like "lim", that are not resized.
> Finally, the nested <under>/<over> construction does not strike me as particularly elegant.
Completely agree.
> Would it be possible to use something like <overunder><top><overbrace><base><underbrace><bottom></overunder>
> to replace both <under> and <over>?
The problem is on the CSS side. Try to align the baseline of the base element in this markup with the baseline
of the parent; in general it is impossible in CSS2.1.
>><fence left="solid" right="solid">
> >><matrix><row><cell><var>x</var><cell><var>y</var>
> >><row><cell><var>z</var><cell><var>v</var></matrix></fence>
>
> >This will draw second fence around matrix (apart of that which already is part of matrix).
>
> What kind of delimiters are supposed to be used? (Indicated in CSS?)
>
Since delimiters are not supposed to change meaning (a matrix is a matrix), the issue is left up to style sheets.
But if necessary, appropriate attributes may be added to indicate delimiters explicitly, as you suggested.
>>Remark: Should the scope node be optional?
>
> >Maybe. Do you have some concrete examples in mind?
>
> Not really, but I believe that such constructs sometimes occur without the second part,
> and I believe that the mark-up should be as flexible as possible on this point.
Ok. Let's make scope optional.
>><vector><entry><var>a</var><var>b</var><entry><var>c</var></vector>
>Note that vector includes fences. So what you need will probably require
>extra markup.
> I hope you do not mean that a <vector> is always a column matrix delimited by round parentheses.
> Delimiters like \left[ ... \right] and even \left| ... \right. are sometimes used.
> A generalised <matrix> could possibly replace the current <matrix>, <det> and <vector> constructs.
This returns us to the concept of ISO 12083 arrays. It makes sense; I just thought that more specific
markup would be viable, but if necessary we can return to arrays.
> More generally, the advantage is basing HTML5 Maths on ISO-12083 is completely lost if constructs that
> can be expressed in ISO-12083 cannot be encoded in HTML5 Maths.
Some ISO-12083 constructs cannot be encoded in MathML either.
>The problem with some of them (script, bold script, Fraktur, bold Fraktur, double-struck symbols)
>is that there is no natural fallback for browsers that does not support
>plane 1.
> font-family:script seems like a natural fallback mechanism for script.
Ok. But are Fraktur and calligraphic fonts actually marked as such? How will a browser identify them?
> Anyway, the lack of fallback should not necessarily imply that the property must be abandoned.
Agree.
>>>Problem: These delimiters /*ed. mainly arrows*/ do not seem to be supported in
>>>current proposition.
>>The problem is related to fallback CSS rendering.
> Should HTML5 Maths be strictly limited to what is currently possible in pure CSS?
Basic functionality probably should fit in XML + CSS.
> I tend to believe that a few unsupported' constructions should be included anyway if
> this is necessary in order to cover ISO-12083 (or whatever standard we might be chosen).
I would prefer not to include them explicitly. For example, ISO-12083 allows any character as a delimiter,
but of course not every character makes sense and not every character is stretched by ISO 12083 implementations.
So I don't know whether to allow everything available in Unicode, everything used in TeX or the AAP Math DTD (both have lists of
stretchy delimiters), or everything that can be handled with CSS? I prefer the third option, but I am not sure.
https://crypto.stackexchange.com/questions/89625/product-of-negligible-and-non-negligible-functions | # Product of Negligible and Non-Negligible Functions
I know that the product of two negligible functions will always be negligible, but I'm wondering if it's possible for the product of two non-negligible functions to be a negligible function?
## 1 Answer
I'm wondering if it's possible for the product of two non-negligible functions to be a negligible function?
Yes, actually; here is an example:
Consider the two functions:
$$P(x) = 1 \text{ if x is an even integer}, 0 \text{ otherwise}$$ $$Q(x) = 1 \text{ if x is an odd integer}, 0 \text{ otherwise}$$
Both $$P$$ and $$Q$$ are nonnegligible functions.
However $$P(x)Q(x) = 0$$, which is (trivially) a negligible function.
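To spell this out slightly, using the standard definition of negligible (for functions over the integers):

$$f \text{ is negligible} \iff \forall c > 0\ \exists N\ \forall x > N: |f(x)| < x^{-c}.$$

$$P$$ equals $$1$$ on every even integer, so for any $$c$$ the bound fails for infinitely many $$x$$; hence $$P$$ (and likewise $$Q$$) is non-negligible. Their product is identically $$0$$, which satisfies the bound for every $$c$$, hence negligible.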
• Yes, that is the answer, that I missed. Thanks for correcting. Apr 27 '21 at 12:46 |
https://socratic.org/questions/how-do-you-factor-2n-2-7n-3 | # How do you factor 2n^2 - 7n + 3?
May 18, 2016
$\left(2 n - 1\right) \left(n - 3\right)$
#### Explanation:
The general equation that factors as $\left(a x + b\right) \left(c x + d\right)$ multiplies out to be $a c {x}^{2} + \left(a d + b c\right) x + b d$. This shows that the coefficient of the middle term is made up from factors of the coefficients of the outer terms.
Thus in this case we need factors of $2$ and of $3$ whose cross-products combine to give $- 7$; since the constant term is $+ 3$ and the middle term is negative, both factors of $3$ must be taken negative.
The only possible factor pairs are $2$ and $1$, and $3$ and $1$ (used as $- 3$ and $- 1$).
The result is therefore $\left(2 n - 1\right) \left(n - 3\right)$, which expands to $2 {n}^{2} - 6 n - n + 3 = 2 {n}^{2} - 7 n + 3$ as required.
https://www.bitbybitbook.com/en/1st-ed/observing-behavior/observing-activities/ | ## Activities
• degree of difficulty: easy, medium, hard, very hard
• requires math
• requires coding
• data collection
• my favorites
1. [, ] Algorithmic confounding was a problem with Google Flu Trends. Read the paper by Lazer et al. (2014), and write a short, clear email to an engineer at Google explaining the problem and offering an idea of how to fix it.
2. [] Bollen, Mao, and Zeng (2011) claims that data from Twitter can be used to predict the stock market. This finding led to the creation of a hedge fund—Derwent Capital Markets—to invest in the stock market based on data collected from Twitter (Jordan 2010). What evidence would you want to see before putting your money in that fund?
3. [] While some public health advocates consider e-cigarettes an effective aid for smoking cessation, others warn about the potential risks, such as the high levels of nicotine. Imagine that a researcher decides to study public opinion toward e-cigarettes by collecting e-cigarettes-related Twitter posts and conducting sentiment analysis.
1. What are the three possible biases that you are most worried about in this study?
2. Clark et al. (2016) ran just such a study. First, they collected 850,000 tweets that used e-cigarette-related keywords from January 2012 through December 2014. Upon closer inspection, they realized that many of these tweets were automated (i.e., not produced by humans) and many of these automated tweets were essentially commercials. They developed a human detection algorithm to separate automated tweets from organic tweets. Using this human detection algorithm they found that 80% of tweets were automated. Does this finding change your answer to part (a)?
3. When they compared the sentiment in organic and automated tweets, they found that the automated tweets were more positive than organic tweets (6.17 versus 5.84). Does this finding change your answer to (b)?
4. [] In November 2009, Twitter changed the question in the tweet box from “What are you doing?” to “What’s happening?” (https://blog.twitter.com/2009/whats-happening).
1. How do you think the change of prompts will affect who tweets and/or what they tweet?
2. Name one research project for which you would prefer the prompt “What are you doing?” Explain why.
3. Name one research project for which you would prefer the prompt “What’s happening?” Explain why.
5. [] “Retweets” are often used to measure influence and spread of influence on Twitter. Initially, users had to copy and paste the tweet they liked, tag the original author with his/her handle, and manually type “RT” before the tweet to indicate that it was a retweet. Then, in 2009, Twitter added a “retweet” button. In June 2016, Twitter made it possible for users to retweet their own tweets (https://twitter.com/twitter/status/742749353689780224). Do you think these changes should affect how you use “retweets” in your research? Why or why not?
6. [, , , ] In a widely discussed paper, Michel and colleagues (2011) analyzed the content of more than five million digitized books in an attempt to identify long-term cultural trends. The data that they used has now been released as the Google NGrams dataset, and so we can use the data to replicate and extend some of their work.
In one of the many results in the paper, Michel and colleagues argued that we are forgetting faster and faster. For a particular year, say “1883,” they calculated the proportion of 1-grams published in each year between 1875 and 1975 that were “1883”. They reasoned that this proportion is a measure of the interest in events that happened in that year. In their figure 3a, they plotted the usage trajectories for three years: 1883, 1910, and 1950. These three years share a common pattern: little use before that year, then a spike, then decay. Next, to quantify the rate of decay for each year, Michel and colleagues calculated the “half-life” of each year for all years between 1875 and 1975. In their figure 3a (inset), they showed that the half-life of each year is decreasing, and they argued that this means that we are forgetting the past faster and faster. They used Version 1 of the English language corpus, but subsequently Google has released a second version of the corpus. Please read all the parts of the question before you begin coding.
This activity will give you practice writing reusable code, interpreting results, and data wrangling (such as working with awkward files and handling missing data). This activity will also help you get up and running with a rich and interesting dataset.
1. Get the raw data from the Google Books NGram Viewer website. In particular, you should use version 2 of the English language corpus, which was released on July 1, 2012. Uncompressed, this file is 1.4GB.
2. Recreate the main part of figure 3a of Michel et al. (2011). To recreate this figure, you will need two files: the one you downloaded in part (a) and the “total counts” file, which you can use to convert the raw counts into proportions. Note that the total counts file has a structure that may make it a bit hard to read in. Does version 2 of the NGram data produce similar results to those presented in Michel et al. (2011), which are based on version 1 data?
3. Now check your graph against the graph created by the NGram Viewer.
4. Recreate figure 3a (main figure), but change the $$y$$-axis to be the raw mention count (not the rate of mentions).
5. Does the difference between (b) and (d) lead you to reevaluate any of the results of Michel et al. (2011). Why or why not?
6. Now, using the proportion of mentions, replicate the inset of figure 3a. That is, for each year between 1875 and 1975, calculate the half-life of that year. The half-life is defined to be the number of years that pass before the proportion of mentions reaches half its peak value. Note that Michel et al. (2011) do something more complicated to estimate the half-life—see section III.6 of the Supporting Online Information—but they claim that both approaches produce similar results. Does version 2 of the NGram data produce similar results to those presented in Michel et al. (2011), which are based on version 1 data? (Hint: Don't be surprised if it doesn't.) (A minimal code sketch of this half-life computation appears after this list.)
7. Were there any years that were outliers such as years that were forgotten particularly quickly or particularly slowly? Briefly speculate about possible reasons for that pattern and explain how you identified the outliers.
8. Now replicate this result for version 2 of the NGrams data in Chinese, French, German, Hebrew, Italian, Russian and Spanish.
9. Comparing across all languages, were there any years that were outliers, such as years that were forgotten particularly quickly or particularly slowly? Briefly speculate about possible reasons for that pattern.
7. [, , , ] Penney (2016) explored whether the widespread publicity about NSA/PRISM surveillance (i.e., the Snowden revelations) in June 2013 was associated with a sharp and sudden decrease in traffic to Wikipedia articles on topics that raise privacy concerns. If so, this change in behavior would be consistent with a chilling effect resulting from mass surveillance. The approach of Penney (2016) is sometimes called an interrupted time series design, and it is related to the approaches described in section 2.4.3.
To choose the topic keywords, Penney referred to the list used by the US Department of Homeland Security for tracking and monitoring social media. The DHS list categorizes certain search terms into a range of issues, i.e., “Health Concern,” “Infrastructure Security,” and “Terrorism.” For the study group, Penney used the 48 keywords related to “Terrorism” (see appendix table 8). He then aggregated Wikipedia article view counts on a monthly basis for the corresponding 48 Wikipedia articles over a 32-month period, from the beginning of January 2012 to the end of August 2014. To strengthen his argument, he also created several comparison groups by tracking article views on other topics.
Now, you are going to replicate and extend Penney (2016). All the raw data that you will need for this activity is available from Wikipedia. Or you can get it from the R-package wikipediatrend (Meissner and R Core Team 2016). When you write up your responses, please note which data source you used. (Note that this same activity also appears in chapter 6.) This activity will give you practice in data wrangling and thinking about natural experiments in big data sources. It will also get you up and running with a potentially interesting data source for future projects.
1. Read Penney (2016) and replicate his figure 2 which shows the page views for “Terrorism”-related pages before and after the Snowden revelations. Interpret the findings.
2. Next, replicate figure 4A, which compares the study group (“Terrorism”-related articles) with a comparator group using keywords categorized under “DHS & Other Agencies” from the DHS list (see appendix table 10 and footnote 139). Interpret the findings.
3. In part (b) you compared the study group with one comparator group. Penney also compared with two other comparator groups: "Infrastructure Security"-related articles (appendix table 11) and popular Wikipedia pages (appendix table 12). Come up with an alternative comparator group, and test whether the findings from part (b) are sensitive to your choice of comparator group. Which choice makes the most sense? Why?
4. Penney stated that keywords relating to "Terrorism" were used to select the Wikipedia articles because the US government cited terrorism as a key justification for its online surveillance practices. As a check of these 48 "Terrorism"-related keywords, Penney (2016) also conducted a survey on MTurk, asking respondents to rate each of the keywords in terms of Government Trouble, Privacy-Sensitive, and Avoidance (appendix tables 7 and 8). Replicate the survey on MTurk and compare your results.
5. Based on the results in part (d) and your reading of the article, do you agree with Penney’s choice of topic keywords in the study group? Why or why not? If not, what would you suggest instead?
8. [] Efrati (2016) reported, based on confidential information, that “total sharing” on Facebook had declined by about 5.5% year over year while “original broadcast sharing” was down 21% year over year. This decline was particularly acute with Facebook users under 30 years of age. The report attributed the decline to two factors. One is the growth in the number of “friends” people have on Facebook. The other is that some sharing activity has shifted to messaging and to competitors such as Snapchat. The report also revealed the several tactics Facebook had tried to boost sharing, including News Feed algorithm tweaks that make original posts more prominent, as well as periodic reminders of the original posts with the “On This Day” feature. What implications, if any, do these findings have for researchers who want to use Facebook as a data source?
9. [] What is the difference between a sociologist and a historian? According to Goldthorpe (1991), the main difference is control over data collection. Historians are forced to use relics, whereas sociologists can tailor their data collection to specific purposes. Read Goldthorpe (1991). How is the difference between sociology and history related to the idea of custommades and readymades?
10. [] This builds on the previous question. Goldthorpe (1991) drew a number of critical responses, including one from Nicky Hart (1994) that challenged Goldthorpe's devotion to tailor-made data. To clarify the potential limitations of tailor-made data, Hart described the Affluent Worker Project, a large survey to measure the relationship between social class and voting that was conducted by Goldthorpe and colleagues in the mid-1960s. As one might expect from a scholar who favored designed data over found data, the Affluent Worker Project collected data that were tailored to address a recently proposed theory about the future of social class in an era of increasing living standards. But, Goldthorpe and colleagues somehow "forgot" to collect information about the voting behavior of women. Here's how Nicky Hart (1994) summarized the whole episode:
“… it [is] difficult to avoid the conclusion that women were omitted because this ‘tailor made’ dataset was confined by a paradigmatic logic which excluded female experience. Driven by a theoretical vision of class consciousness and action as male preoccupations … , Goldthorpe and his colleagues constructed a set of empirical proofs which fed and nurtured their own theoretical assumptions instead of exposing them to a valid test of adequacy.”
Hart continued:
“The empirical findings of the Affluent Worker Project tell us more about the masculinist values of mid-century sociology than they inform the processes of stratification, politics and material life.”
Can you think of other examples where tailor-made data collection has the biases of the data collector built into it? How does this compare to algorithmic confounding? What implications might this have for when researchers should use readymades and when they should use custommades?
11. [] In this chapter, I have contrasted data collected by researchers for researchers with administrative records created by companies and governments. Some people call these administrative records “found data,” which they contrast with “designed data.” It is true that administrative records are found by researchers, but they are also highly designed. For example, modern tech companies work very hard to collect and curate their data. Thus, these administrative records are both found and designed, it just depends on your perspective (figure 2.12).
Provide an example of a data source where seeing it as both found and designed is helpful when using that data source for research.
12. [] In a thoughtful essay, Christian Sandvig and Eszter Hargittai (2015) split digital research into two broad categories depending on whether the digital system is an “instrument” or “object of study.” An example of the first kind—where the system is an instrument—is the research by Bengtsson and colleagues (2011) on using mobile-phone data to track migration after the earthquake in Haiti in 2010. An example of the second kind—where the system is an object of study—is research by Jensen (2007) on how the introduction of mobile phones throughout Kerala, India impacted the functioning of the market for fish. I find this distinction helpful because it clarifies that studies using digital data sources can have quite different goals even if they are using the same kind of data source. In order to further clarify this distinction, describe four studies that you’ve seen: two that use a digital system as an instrument and two that use a digital system as an object of study. You can use examples from this chapter if you want. |
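As a companion to activity 6(f), here is a minimal sketch of the half-life computation once the NGram data have already been reduced to a per-year proportion series; the input format is an assumption, and this implements the simple definition above rather than the fitted-decay estimate of Michel et al. (2011):

def half_life(proportions):
    """Years from the peak until the proportion first drops to half the peak value.

    `proportions` maps year -> proportion of all 1-grams that are the mention of
    interest (e.g. "1883"), already normalised using the total-counts file.
    """
    years = sorted(proportions)
    peak_year = max(years, key=lambda y: proportions[y])
    peak = proportions[peak_year]
    for year in years:
        if year > peak_year and proportions[year] <= peak / 2.0:
            return year - peak_year
    return None  # never decayed to half within the observed window

# Toy example with made-up numbers, just to show the interface.
toy = {1883: 2e-6, 1884: 1.5e-6, 1885: 1.1e-6, 1886: 0.9e-6}
print(half_life(toy))  # -> 3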
http://dtubbenhauer.com/cat.html | One diagram is worth a thousand words

Each step of a "categorification process" should reveal more structure. The most classical illustration of this is the following: here we first "categorify" numbers into vector spaces. The new information available is now the linear maps between vector spaces (thus, we have the whole power of linear algebra at hand). There is no reason to stop: we can "categorify" vector spaces into categories, and linear maps into functors. Again, we see a new layer of information, namely the natural transformations between these functors. The last step "naturally lives" in a 2-categorical setup. The idea is clear: keep on going (if possible, of course). Let me now give some more details.

Categorification? A rough description

Since categorification is a rather new subject of mathematics (well, it is not the "newest kid on the block", but anyway), I should spend a few words on the motivation and ideas behind it. For more information, one can look at the nice introduction at the n-lab link.

The idea behind categorification is, given a fixed notion one really likes, to find an "explanation" for the properties of this structure by considering natural constructions in a category such that the structure is some kind of shadow of these constructions. To this end, one replaces "set-like structures" (i.e. 0-categories) with "category-like structures" (i.e. 1-categories). Of course one can perform such a process on any level, e.g. one can categorify an "n-category-like structure" into an "(n+1)-category-like structure". Or one can "categorify with extra structure", e.g. categorify vector spaces instead of sets. Or, to say it otherwise, categorification is an "inverse" process for decategorification (which is best defined via examples).

Some examples of decategorification are (note that the notion of categorification is too new and therefore not directly used in these classical examples):
- The category of finite sets is a categorification of the natural numbers. Decategorification is just counting. But categorifications are not unique, i.e. the category of finite-dimensional vector spaces can also be seen as a categorification of the natural numbers. Decategorification is taking dimensions.
- The homology groups of a reasonable space are a categorification of its Betti numbers (Noether, Hopf, Walther). Decategorification is taking dimensions.
- Khovanov homology is a categorification of the Jones polynomial. Decategorification is taking q-graded dimensions.
- A topos (Lawvere) can be seen as a categorification of a Heyting algebra.
- More fancy examples. See also Baez and Dolan's paper at the arXiv link or Khovanov, Mazorchuk and Stroppel's paper at the arXiv link.

The slogan for the pair categorification/decategorification is: "If you live in a three-dimensional world, then it is hard to imagine a four-dimensional world, but easy to imagine a two-dimensional world." The whole idea can be summarised in the so-called "ladder of categories".

"There are two ways to do mathematics. The first is to be smarter than everybody else. The second way is to be stupider than everybody else - but persistent." - based on a quotation from Raoul Bott.
http://zbmath.org/?format=complete&q=an:0933.12003 | # zbMATH — the first resource for mathematics
Hilbertian fields under separable algebraic extensions. (English) Zbl 0933.12003
Through an investigation of M. Fried's proof of Weissauer's Theorem [see M. Fried and M. Jarden, Field Arithmetic, Springer (1986; Zbl 0625.12001)], the author develops a group-theoretical argument that enables him to exhibit a quite general sufficient condition for a separable algebraic extension $M$ of a Hilbertian field $K$ to be Hilbertian.
This new criterion can be used to prove all the cases mentioned in M. Jarden and A. Lubotzky [J. Lond. Math. Soc. (2) 46, 205-227 (1992; Zbl 0724.12005)] in which it is known that an extension $M$ of a Hilbertian field $K$ is Hilbertian.
As a consequence of this criterion, the main result of the paper states that if $K$ is a Hilbertian field, $M_1, M_2$ are two Galois extensions of $K$, and $M$ is an intermediate field of $M_1 M_2/K$ such that $M \not\subseteq M_1$ and $M \not\subseteq M_2$, then $M$ is Hilbertian.
##### MSC:
12E25 Hilbertian fields; Hilbert’s irreducibility theorem 12F12 Inverse Galois theory |
https://dougo.info/top-passive-income-ideas-old-age-pension.html | This world is a dangerous place to live, not because of the good people that often act in irrational and/or criminally wrongdoing ways within the confines of their individual minds, core or enterprise groups, but because of the good people that don't do anything about it (like reveal the truth through education like Financial Samurai is doing!). Albert Einstein and Art Kleiner's "Who Really Matters."
While compiling this list, I did my best to avoid scams, and stick with practical ideas that work. I have tried many (but not all) of these ideas. Some of these helped me earned a few dollars here and there, but there are some that helped me earn extra money on the side every single day — and some are still providing me with revenue! Note that not all ideas will fit your skills and abilities. What works for you depends on your abilities and your current financial situation.
Regardless, it took me around 18 months to start turning a profit online. It started with around $100 per month, then grew to$200 per month. Then it kept growing and growing until, eventually, the money I earned online surpassed what I earned in my regular, 9-5 job. That was last year, and my online income is still growing. Believe it or not, it all came from starting this simple, yet effective, blog.
Millennium Development Goals Report 2007. The report importantly notes that, as high as this number seems, surveys show that it underestimates the actual number of children who, though enrolled, are not attending school. Moreover, neither enrolment nor attendance figures reflect children who do not attend school regularly. To make matters worse, official data are not usually available from countries in conflict or post-conflict situations. If data from these countries were reflected in global estimates, the enrolment picture would be even less optimistic.
Real Estate Income – Since we moved up to Newport Beach, I started renting out my condo in San Diego. My monthly cash-on-cash return is $300 (I charge $1,900 for rent and my total payments including mortgage, HOA, property tax and insurance are $1,600), but I also get back around $350 every month in principal and about a $150 tax savings per month. But even this income is inconsistent, since sometimes expenses will pop up, like last month when I had to buy a new A/C for $3,000!
Now I've been using Swagbucks for a while and have found the money works out to just under $2 an hour, so this isn't something that's going to make you rich. You'd have to work 2,500 hours to make $5,000, so that's about three and a half months, non-stop. The thing with Swagbucks though is you can do it when you're doing something else, so I flip through surveys and other stuff while I'm cooking dinner or flipping channels.
What’s also really important to realize here is that when I took the exam I was teaching people to study for, I didn’t get a perfect score. In fact, I didn’t even get close to a perfect score. I passed. But I also knew a lot about this exam—way more than somebody who was just getting started diving into studying for it. And it was because of that, because I was just a few steps ahead of them, that they trusted me to help them with that information. To support this, I provided a lot of great free value to help them along the way. I engaged in conversations and interacted in comments sections and on forums. Most of all, I just really cared about those people, because I struggled big-time with that exam myself.
Reality One: We live in a competitive and fast-changing world. Business has become highly specialized and niched because knowledge is growing exponentially, requiring specialized skills to employ it properly. Successfully competing in many widely varying fields is contradictory to the specialization and complexity required by our current business climate.
According to Uncle Sam, you need to be "materially involved" in an enterprise to earn active income. With passive income, it's just the opposite, as the IRS deems you to be earning passive income if you're not materially involved with a profit-making enterprise. By and large, expect income to be taxable if you are engaged in a passive income enterprise. You will need to report earnings to the IRS.
The one thing I learned though from all those childhood experiences though is that you never can depend on one source of income. Eventually my mom caught on and stopped giving me all those extra bags of chips and I had to figure out a new way to make money. No matter how safe something seems there’s always the chance that you could lose that income and be stuck with nothing.
You may think of a savings account as just that, savings. But it’s actually another form of income as the money in the account will draw interest. And while this interest may be small, it’s still better than $0. Eventually, you can invest this money whenever an opportunity presents itself in order to gain other income streams. Look into Tax Free Savings Accounts if you are going this route. ##### If you have an empty house or room you can rent it out on AIRBNB and OYO Rooms. Many travelers are looking to spend one night at a place. You can always rent out your empty house or room to them. All you need to do is list your room or house online, explain the rules and you are good to go. Travelers will pay you online. This way you don’t have to search for clients. They will come to you. Case Schiller only tracks price appreciation of RE. RE as rental investment vehicle is measured primarily on rental yield or cap rate or some other measure. Price appreciation in that scenario is only a secondary means of growth, and arguably should be ignored as a predictor of returns when deciding on whether or not to invest in rentals. More important key performance indicators for rentals are net operating income and cash ROI. Appreciation, if it occurs, is a bonus. Blogging is still going to take work starting out. That path to$5,000 a month didn’t happen overnight but just like real estate development, it build up an asset that now creates constant cash flow whether I work or not. I get over 30,000 visitors a month from Google search rankings, rankings that will continue to send traffic even if I take a little time off.
Book sales ($36,000 a year): Sales of How to Engineer Your Layoff" continue to be steady. I expect book sales to rise once the economy starts to soften and people get more nervous about their jobs. It's always best to be ahead of the curve when it comes to a layoff by negotiating first. Further, if you are planning to quit your job, then there is no downside in trying to engineer your layoff so you can get WARN Act pay for several months, a severance check, deferred compensation, and healthcare. If you have a blog or other type of site, you can build affiliate links to different services on the website. Many people use Amazon as an affiliate partner. For example, if you are a beauty blogger writing about different products, you can set up Amazon affiliate links on your blog so that whenever someone buys the product you mention on Amazon, you receive a percentage of the sale. Amazon is not the only affiliate partner out there. Here is good, in-depth information on affiliate marketing. For 2018, he’s most interested in arbitraging the lower property valuations and higher net rental yields in the heartland of America through RealtyShares, one of the largest real estate crowdfunding platforms based in SF. He sold his SF rental home for 30X annual gross rent in 2017 and reinvested$550,000 of the proceeds in real estate crowdfunding for potentially higher returns.
I truly believe generating $10,000 a year online can be done by anybody who is willing to dedicate at least two years to their online endeavors. Here is a snapshot of what a real blogger makes through his website and because of his website. Roughly$150,000 a year is semi-passive income followed by another $186,000 a year in active income found through his site. Check out my guide on how to start your own blog here. However, I think for those who are willing to do what it takes, the sky is the absolute limit. As an example, I’m trying to take a page out of FinancialSamauri’s book and create an online personal finance and investing blog. It is an enormous undertaking, and as a new blogger, there is a seemingly endless amount of work to be done. That said, I hope that one day I can not only generate some passive income from the hours of work I have put and will put into the project, but I hope to be able to help OTHERS reach their financial goals. Finally, make sure you can service multiple types of clients. This is the best egg to have. Even for something as simple as a dog walking business, you can service multiple types of clients. For example, you can offer a puppy walking service and an adult walking service. You can have weekends at the park services. You can offer to take the dog to the groomers. Try to meet client’s individual needs. I first discovered the power of passive income when I was a senior in high school. I started a mobile billboard business where I would rent a small piece of land from someone who had land along a busy highway. Then I would place one of my billboard trailers on the land and rent out the ad space on the billboard. I would usually charge about$300 per month for the ad space, meanwhile I was only paying $50 per month to the landowner for the ground rent. I got to the point to where I had 9 billboard faces and was making quite a substantial income for someone in high school. I really learned how passive income could free up my life… this business is what lead me into investing in real estate. Even if each patron only contributes a very small amount each month, it can still be a huge source of income. Take a look at the Patreon page for Kinda Funny, an internet video company. They have over 6,209 patrons which means an average of just$3 a month would be a monthly income of almost $19,000 – plus they get cheerleaders that are always happy to spread the word on their brand. No matter what venture you undertake in life, you need a team. I’m a firm believer in team work, even if it is just to bounce ideas off of, or to have someone tell you that you are off track. For many individuals, this person is their spouse, who also brings some income diversity to the table. Just like I mentioned above, if your spouse has income, try to maximize it. 2) Find Out What You Are Good At. Everybody is good at something, be it investing, playing an instrument, playing a sport, communications, writing, art, dance and so forth. You should also list several things that interest you most. If you can combine your interest plus expertise, you should be able to monetize your skills. A tennis player can teach tennis for$65 an hour. A writer can pen her first novel. A finance buff can invest in stocks. A singer can record his first song. The more interests and skills you have, the higher chance you can create something that can provide passive income down the road.
Case-Shiller only tracks price appreciation of RE. RE as a rental investment vehicle is measured primarily on rental yield or cap rate or some other measure. Price appreciation in that scenario is only a secondary means of growth, and arguably should be ignored as a predictor of returns when deciding on whether or not to invest in rentals. More important key performance indicators for rentals are net operating income and cash ROI. Appreciation, if it occurs, is a bonus.
Tax Deducted at Source (TDS) is a means of collecting income tax in India, under the Indian Income Tax Act of 1961. Any payment covered under these provisions shall be paid after deducting prescribed percentage. It is managed by the Central Board for Direct Taxes (CBDT) and is part of the Department of Revenue managed by Indian Revenue Service . It has a great importance while conducting tax audits. Assessee is also required to file quarterly return to CBDT. Returns states the TDS deducted & paid to government during the Quarter to which it relates.
This equation implies two things. First, buying one more unit of good x implies buying $\frac{P_x}{P_y}$ fewer units of good y. So, $\frac{P_x}{P_y}$ is the relative price of a unit of x in terms of the number of units given up in y. Second, if the price of x falls for a fixed $Y$, then its relative price falls. The usual hypothesis is that the quantity demanded of x would increase at the lower price, the law of demand. The generalization to more than two goods consists of modelling y as a composite good.
The obvious way to earn a second income is to get a part-time job. If you are not currently working, this is an excellent way to start as it gives you the freedom and flexibility to start other passive income opportunities. The other option is to simply work from home full time which frees up commute time so you can focus on building more income streams.
Create a Course on Udemy – Udemy is an online platform that lets its user take video courses on a wide array of subjects. Instead of being a consumer on Udemy you can instead be a producer, create your own video course, and allow users to purchase it. This is a fantastic option if you are highly knowledgeable in a specific subject matter. This can also be a great way to turn traditional tutoring into a passive income stream! |
https://123dok.net/document/zgw6ko58-parametrization-inclusive-cross-section-production-collisions-multiplicity-distribution.html | Parametrization of the inclusive cross section for $\bar p$ production in p + A collisions and $\bar p$ mean multiplicity distribution
HAL Id: in2p3-00011881
http://hal.in2p3.fr/in2p3-00011881
Submitted on 11 Sep 2002
Parametrization of the inclusive cross section for p̄ production in p + A collisions and p̄ mean multiplicity distribution
C.Y. Huang, M. Buenerd
To cite this version:
C.Y. Huang, M. Buenerd. Parametrization of the inclusive cross section for ¯p production in p + A
ISN-01-18
Parametrization of the inclusive cross section for p̄ production in p + A collisions and p̄ mean multiplicity distribution
Ching-Yuan Huang and Michel Buenerd
Institut des Sciences Nucleaires, CNRS/IN2P3 et Universite Joseph Fourier, 53, Avenue des Martyrs, 38026 Grenoble Cedex, France
Abstract
The parametrized inclusive triple differential cross section for antiproton production in proton-nucleus collisions is studied. An energy-dependent term is introduced in the functional form of the inclusive cross section previously used. The new parametrization provides a consistent agreement with both the experimental data for laboratory incident energies from 12 to 24 GeV for proton-proton and proton-nucleus collisions and also for the mean p̄ multiplicity distribution in p-p collisions at least up to the center of mass energy of 25 GeV.
PACS numbers: 13.85.Ni, 25.75.Dw, 13.85.Hd
1 Introduction
Antiprotons are a rare component of cosmic rays (CRs). The origin of cosmic-ray antiprotons has attracted a lot of attention since the first observation reported by Golden et al. [1]. The data currently available on the antiproton flux have been measured, by means of magnetic spectrometers, by the balloon-borne BESS, HEAT, CAPRICE and IMAX experiments [2-5] and are expected to come out also from the space-borne experiment AMS in its test flight in June 1998. Cosmic-ray antiprotons are supposed to be produced, at least dominantly, by the interactions of galactic high-energy CRs with the interstellar medium (ISM) [6]. The measured flux, however, is a superposition of antiprotons originating from the galactic production with the flux produced in the atmosphere by primary CRs interacting with atmospheric nuclei. This latter production then constitutes a physical background for the study of the galactic p̄ flux. The measured flux of antiprotons must then be corrected for the local p̄ production in order to obtain the galactic antiproton flux [7]. An important input for evaluating such a correction is the p̄-production cross section in proton-nucleus and nucleus-nucleus collisions. Unfortunately, the experimental data are scarce in the energy range of interest. In addition, it is found that the available parametrization shows a poor agreement with the available data (see Figures 1-4).
In the past few decades, high quality secondary beams became available at proton synchrotrons with increasing energies (PS, AGS, IHEP, FNAL, SPS) [8], and the study of the multiplicity distribution of charged hadrons in high-energy hadron-hadron collisions reached maturity. The multiplicity distributions of charged secondaries produced in p-p interactions were measured with high statistics at the CERN Intersecting Storage Rings (ISR), using the Split-Field Magnet (SFM) detector, by the ABCDHW Collaboration [9,10]. The experimental results indicate that, if the charged particles were produced randomly and independently, their multiplicity distributions would obey a Poisson distribution [11]. Deviations from a Poissonian distribution are then a hint of the underlying production processes, and also a source of information about nuclear medium effects.
The aim of this paper is firstly to present a more widely applicable expression for p̄ production in p + A collisions (A: nucleus). The parametrized p̄-production cross section is studied and a good agreement is obtained with the existing data [12-15]. Secondly, the new parametrized cross section is applied to calculate the average multiplicity distribution of antiprotons in p-p collisions. A parametrized expression of the mean multiplicity for antiprotons in p-p collisions, based on the experimental charged multiplicity distributions [10], is then presented.
2 Inclusive Nuclear Cross Section
The parametrization of the inclusive cross section considered is based on the Regge phenomenology and (quark counting rules) parton model. The kinemat-ics used in the following to express the inclusive cross section forp+A collisions is based on the assumption that all processes take place in the nucleon-nucleon (NN) system. Nuclear eects are incorporated in a delicated term of the an-alytic expression of the collisions. The minor (for energies above threshold) kinematical eects due to the nuclear Fermi motion of target nucleons have been neglected. The invariant triple dierential cross section for the inclusive production of p on a nucleus A is expressed in terms of the usual kinemati-cal variables, momentum P, mass m, production angles and , in various combinations and reference frames, as 16]:
$E\,\frac{d^3\sigma}{d^3p}\Big|_{\rm inv} = \frac{1}{2\pi m_T}\,\frac{d^2\sigma}{dy\,dm_T} = \frac{1}{2\pi p_T}\,\frac{d^2\sigma}{dy\,dp_T}$   (1)

$= \frac{E}{P^2}\,\frac{d^2\sigma}{d\Omega\,dP} = \frac{E}{2\pi P^2}\,\frac{d^2\sigma}{dP\,d\cos\theta}$   (2)

where p_T is the transverse momentum, m_T = √(p_T² + m_0²) is the transverse mass, y = ½ ln[(E+Q)/(E-Q)] is the rapidity, Q = m_T sinh y is the longitudinal momentum, θ is the Lab angle between the measured inclusive particle and the incident particle direction, and Ω is the solid angle. At high energy, the inelastic processes dominate hadron collisions and, for the study of the flux of secondary antiprotons, the spectrum of produced particle number is:

$\frac{dn}{dy} = \frac{1}{\sigma^{\rm inel}_{\rm tot}}\,\frac{d\sigma}{dy} = \frac{1}{\sigma^{\rm inel}_{\rm tot}}\int_0^{p_T^{\max}} 2\pi\, p_T \left(E\,\frac{d^3\sigma}{d^3p}\right)_{\rm inv} dp_T$   (3)
where the total inelastic cross section σ_tot^inel for p + A collisions has been
E_k is the incident kinetic energy in MeV, and A is the target mass in amu units.¹
3 Parametrized Inclusive Cross Section
A parametrized effective proton-nucleus inclusive cross section for the process p + A → p̄ + X has been proposed in [18] as:

$E\,\frac{d^3\sigma}{d^3p}\Big|_{\rm inv} = \sigma^{\rm inel}_{\rm tot}\, C_1\, A^{b(p_T)}\,(1-x^*)^{C_2}\exp(-C_3 x^*)\,\Phi(p_T)$   (7)

$\Phi(p_T) = \exp(-C_4 p_T^2) + C_5\,\frac{\exp(-C_6 x_T)}{(p_T^2+\mu^2)^4}$   (8)

where

$b(p_T) = \begin{cases} b_0\, p_T, & p_T \le \Gamma \\ b_0\, \Gamma, & p_T > \Gamma \end{cases}$

The (1-x*)^C2 form in (7) originates from quark counting rules in hadronic interactions, while exp(-C3 x*) is characteristic of the Regge regime [19]. A is the target mass number and its exponent could be viewed as the net effect of the multiple NN scattering in the target nucleus. x* = E*/E*_max is a scaling variable, in which E* and E*_max are the total energy of the inclusive particle and its maximum possible energy in the centre of mass frame, respectively. p_T is the transverse momentum and x_T = 2p_T/√s is the transverse variable. E*_max = (s - M²_Xmin + m_p²)/(2√s), in which M_Xmin = 2m_p + m_A is the minimum possible mass of the recoiling particle in the relevant process (p + N → p̄ + X in this case). √s is the invariant mass of the system.

¹ In A+A collisions, one could use the form [17]:

$\sigma^{\rm inel}_{\rm tot} = 10\,\pi r_0^2\,\big[A^{1/3}_{\rm proj}+A^{1/3}_{\rm target}-\delta\big]^2 \ \ {\rm (mb)}$   (6)

where A_proj and A_target are the mass numbers of the projectile particle and the target, respectively, r_0 = 1.47 and δ = 1.12. A more detailed review of the total inelastic cross section σ_tot^inel can be found in [26].
(6)
4 Fitting Procedure and Results
The χ² minimization procedure was based on the MINUIT package [20]. The set of experimental data used was from [12-15], corresponding to incident protons with energies of 12.0 GeV, 14.6 GeV/c, 19.2 GeV/c and 24 GeV/c, while the measured transverse momentum range extended up to 1.068 GeV/c. The parameters C1-C6, b0, μ² and Γ given in [18] were used as starting values for the minimization procedure.
Fitting the expressions (7) and (8) of the parametrized inclusive cross section to the individual data sets, it was noted that the second term in Φ(p_T) is of significant magnitude in the low transverse momentum p_T range, at least comparable to the first term. It was also found that the second term in Φ is comparable to the first term at low p_T over the entire range of √s, while in high energy systems this term is much greater than the first and dominates in the high p_T range. In this case, this term cannot be treated as a mere modification term in Φ, as in the original KMN parametrisation. Moreover, in independent searches over individual sets of data, good fits were obtained for energies E > 12 GeV with only the first term. In addition, in nuclear collisions the probability of large p_T scattering must be small, so that the second term is required only at low energies. Hence, in order to enforce the above points, an energy-dependent cutoff function was introduced by modifying Φ to the form:

$\Phi(p_T) = \exp(-C_4 p_T^2) + C_5\,\frac{\exp(-C_6 x_T)}{(p_T^2+\mu^2)^4}\,\exp(-\lambda\sqrt{s})$   (9)
Combining all the experimental data [12-15] and using the new forms given in relations (7) and (9), the χ² minimization has been achieved and the best fit parameters obtained are given in Table 2, where they are compared to those from [18]. The corresponding fits are shown in Figures 1-4. It can be noted in Fig. 3 that the cross sections are maximum, as expected, around mid-rapidity (i.e., around x_F = 0) for both p + p and p + Al collisions. Fig. 3 also provides evidence that the p̄ multiplicity in p + A collisions increases slightly with the target mass and also with centrality, which is consistent with the experimental measurements reported by Abbott et al. [12]. It should also be noted that, for the fit shown in Fig. 1, the experimental error bars of Sugaya et al. [14] had to be set to more realistic values in the optimization procedure.
In addition, in order to investigate the continuity properties of the functional forms used together with the incident beam energy, the target, etc., the parametrised p̄ production cross section has been checked by plotting it as a function of its variables over the whole variable domains. The smoothness provides evidence that this reparametrisation is valid under the constraints of the available experiments taken into account [12-15].
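As an illustration only (not the authors' code), the reconstructed parametrization of Eqs. (7)-(9) can be evaluated numerically with a few lines of Python. The constant values are taken from the "this paper" column of Table 2, while the variable names and the attachment of the exp(-λ√s) cutoff to the second term of Φ are assumptions made while reconstructing the garbled equations.

import math

# Best-fit constants from Table 2 ("this paper" column); momenta and energies in GeV.
C1, C2, C3, C4, C5, C6 = 0.042257, 5.9260, 0.9612, 2.1875, 84.344, 10.5
MU2, B0, GAMMA_CUT, LAM = 0.092743, 0.12, 5.0, 2.2429

def b_exponent(pT):
    # piecewise exponent of the target mass number A in Eq. (7)
    return B0 * pT if pT <= GAMMA_CUT else B0 * GAMMA_CUT

def phi(pT, xT, sqrt_s):
    # Eq. (9): first term plus the energy-suppressed second term
    return (math.exp(-C4 * pT**2)
            + C5 * math.exp(-C6 * xT) / (pT**2 + MU2)**4 * math.exp(-LAM * sqrt_s))

def invariant_cross_section(sigma_inel, A, x_star, pT, sqrt_s):
    # Eq. (7): E d^3(sigma)/d^3p for p + A -> pbar + X
    xT = 2.0 * pT / sqrt_s
    return (sigma_inel * C1 * A**b_exponent(pT)
            * (1.0 - x_star)**C2 * math.exp(-C3 * x_star) * phi(pT, xT, sqrt_s))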
5 p̄ Multiplicity Distribution
The p̄ mean multiplicity is calculated by integrating the parametrised p̄ production cross section over the whole domain of its variables. Figure 5 shows the mean multiplicity distribution of antiprotons calculated for p+p collisions [9] with the new parametrisation of this work. In Figure 5, it has to be noted that the original data (black circles) were measured with the single-diffractive parts removed from the total cross section, so the corrected values are obtained by estimating a 10-15% contribution to the multiplicity [8]. The corrected values from the experimental data are also shown in the figure. Hence, after such a correction, the calculated p̄ mean multiplicity distribution shows a good agreement with the experimental data up to √s = 25 GeV. For √s ≥ 30 GeV, the parametrisation underestimates the p̄ mean multiplicity. The difference between the calculations and the measurements becomes larger at higher energies but still remains close to the experimental uncertainty of the available experiments.

It has to be noted that, when √s reaches 1 TeV, the calculated p̄ mean multiplicity turns out to decrease. Such a decrease is not physically expected. This provides the upper energy limit of the validity of this parametrisation, which is beyond the range of interest of this work.
The multiplicity of particles created in high-energy hadron collisions follows a distribution with a long tail, qualitatively similar to the distribution of ionisation energy loss (Landau distribution). The produced hadron multiplicity is generally described in terms of the Koba-Nielsen-Olesen (KNO) variables [21], i.e., z = N/⟨N⟩, the ratio between the multiplicity and the average multiplicity of the particles produced at the centre of mass energy √s. However, it is found that KNO scaling is violated in high-energy collisions [11,22]. Therefore, another parametrised hadron-nucleus multiplicity distribution in terms of ln√s
The constants in Eqs. (10) and (11) are determined by the fitting procedure. Note that a threshold value for the multiplicity was used in the parametrisations (10) and (11) [9,10,24]. Also note that the leading particle contribution has been included in the factor A [23]. In these results, the mean multiplicity of each type of charged particle shows a shape similar to that of all charged particles. Therefore, the p̄ mean multiplicity distribution in p + p collisions can be described using the same functional forms shown in Eqs. (10) and (11). The constants for the p̄ mean multiplicity distribution in the form of (11) were reported in [24], while the constants for the p̄ mean multiplicity distribution in the form of (10) were determined in this work by the best fit. The constants for these two parametrisations are shown in Table 3.
Figure 6 shows the p̄ mean multiplicity distributions in p + p collisions as a function of the centre of mass energy in the whole phase space from four different origins: the experiment [10] together with its corrected values, the values calculated with the revised parametrisation of the p̄ production in this work, and the parametrisations from Eqs. (10) [9,10,23] and (11) [24]. In this figure, both parametrisations of the p̄ mean multiplicity show a better agreement with the experimental data than the results calculated in this work for centre of mass energies √s > 25 GeV. Nevertheless, it has to be pointed out that, according to the functions used in the parametrisations (10) and (11), these two parametrisations have the following lower limits of validity: √s = 6.4 GeV, i.e., an incident nucleon energy E ≈ 20.89 GeV, for the parametrisation (10); and √s = 5.37 GeV, i.e., an incident nucleon energy E ≈ 14.43 GeV, for the parametrisation (11). However, the energy range below these limits is still important for the p̄ production in nuclear interactions considered in the present work. On the other hand, as will be discussed later, for the study of cosmic p̄ origins the energy range E > 200 GeV is of less importance. In addition, the previous parametrisations of the mean p̄ multiplicity [9-11,23,24] were obtained by data fitting, while the mean p̄ multiplicity in this work was calculated by integrating the p̄ production cross section. Therefore, the mean p̄ multiplicity obtained in this work is more physical.
6 Conclusions
In this paper, a modified parametrized inclusive cross section for p + A collisions has been proposed, with a new energy-dependent term to account for the p̄ production cross section in the 10-20 GeV transition region of incident energy. The mean number distribution of antiprotons calculated with this new parametrized cross section is in good agreement with the measured data for √s from threshold up to 100 GeV. A parametrized p̄ multiplicity distribution calculated with this parametrization is also given.
This paper explored the reliability of the parametrized inclusive expression (7) with (9) over a wide range of incident energies, from around 10 GeV up to 100-200 GeV, matching the spectrum of cosmic-ray particles.

The determined parameters will then be applied to calculate the flux of secondary antiprotons. In addition, they have been used to evaluate the p̄ production in the galactic disk [25-27]. Those results are in good agreement with the antiproton flux measured by BESS [2].

The results obtained in this paper will subsequently be applied to the evaluation of the atmospheric production of secondary antiprotons.
References
[1] R. L. Golden et al., Phys. Rev. Lett. 43 (1979) 1196
[2] S. Orito et al., Phys. Rev. Lett. 84, No. 6 (2000) 1078
[3] C. R. Bower et al., Proc. 26th ICRC, Vol. 5, 2000
[4] M. Boezio et al., Astrophys. J. 487 (1998) 4052
[5] J. W. Mitchell et al., Phys. Rev. Lett. 76 (1996) 3057
[6] Dallas C. Kennedy, astro-ph/0003485
[7] A. Moiseev et al., Astrophys. J. 474 (1997) 479
[8] G. Giacomelli and R. Giacomelli, Il Nuovo Cimento 24C (2001) 575
[9] A. Breakstone et al., Il Nuovo Cimento 102A (1989) 1199
[10] A. Breakstone et al., Phys. Rev. D 30 (1984) 528
[11] G. Giacomelli, Multiplicity Distributions and Total Cross-Section at High Energy, 1989 International Workshop on Multiparticle Dynamics, CERN-EP/89-179
[12] T. Abbott et al., Phys. Rev. C 47, No. 4 (1993) 1351
[13] J. V. Allaby et al., CERN Report No. 70-12 (1970)
[14] Y. Sugaya et al., Nucl. Phys. A 634 (1998) 115
[15] T. Eichten et al., Nucl. Phys. B 44 (1972) 333
[16] E. Byckling & K. Kajantie, Particle Kinematics, John Wiley & Sons (1973)
[17] Ch. Pfeifer, S. Roesler and M. Simon, Phys. Rev. C 54, No. 2 (1996)
[18] A. N. Kalinovski, N. V. Mokhov & Yu. P. Nikitin, Passage of High-Energy Particles through Matter, American Institute of Physics Ed., 1989
[19] P. D. B. Collins & A. D. Martin, Hadron Interactions, Univ. of Sussex Press, 1984
[20] F. James, MINUIT, Function Minimization and Error Analysis, CERN Program Library Long Writeup D506, 1998
[21] Z. Koba, H. B. Nielsen and P. Olesen, Nucl. Phys. B 40 (1972) 317
[22] S. Matinyan, Phys. Rep. 320 (1999) 261
[23] C. P. Singh and M. Shyam, Phys. Lett. B 171 (1986) 125
[25] C-Y Huang, Inclusive Cross Section for the Production of Antiprotons in p+A Collisions, 6èmes Rencontres avec les Jeunes Chercheurs, Physique Nucléaire, Astroparticules & Astrophysique, Aussois, 2000
[26] D. Maurin, Ph.D. Thesis, Université de Savoie, 2001
[27] F. Donato et al., Antiprotons from Spallation of Cosmic Rays on Interstellar Matter, astro-ph/0103150
Fig. 1. Antiproton cross sections measured at 5.1° as a function of the antiproton production momentum for p+C and p+Al collisions at an incident energy of 12 GeV, compared with the experimental data [14] and the KMN parametrisation [18].

Fig. 2. Antiproton invariant spectra as a function of the antiproton production transverse mass difference dm_T = m_T - m_0 for p+Al collisions at an incident momentum of 14.6 GeV/c, compared with the experimental data [12] and the KMN parametrisation [18].

Fig. 3. Antiproton spectra dn/dy as a function of the antiproton production rapidity y for p+p and p+Al collisions at an incident momentum of 19.2 GeV/c, compared with the experimental data [13] and the KMN parametrisation [18].

Fig. 4. Antiproton Lorentz invariant density as a function of the antiproton production momentum for p+Al and p+Be collisions at an incident momentum of 24 GeV/c, compared with the experimental data [15]. The measurement angles are 17, 27, 37, 47, 57, 67, 87, 107 and 127 mrad from top to bottom. The density is plotted multiplied by a power of 10^-1, i.e., 10^0 for 17 mrad, 10^-1 for 27 mrad, 10^-2 for 37
Exp.                       Type    E_in / P_in    P_p̄,max (GeV/c)   p_T,p̄,max (GeV/c)
Allaby et al. 1970 [13]    p+p     19.2 GeV/c     14.5               0.91
Allaby et al. 1970 [13]    p+Al    19.2 GeV/c     14.5               0.91
Eichten et al. 1972 [15]   p+Be    24.0 GeV/c     18.0               1.068
Eichten et al. 1972 [15]   p+Al    24.0 GeV/c     18.0               1.068
Abbott et al. 1993 [12]    p+Al    14.6 GeV/c                        0.78
Sugaya et al. 1998 [14]    p+C     12.0 GeV       2.5                0.22
Sugaya et al. 1998 [14]    p+Al    12.0 GeV       2.5                0.22
Table 1
List of experiments and data used.
parameter      Kalinovski (1989)   this paper (2001)
C1             0.08                0.042257
C2             8.6                 5.9260
C3             2.30                0.9612
C4             4.20                2.1875
C5             2.0                 84.344
C6             10.5                10.5
μ²             1.1                 0.092743
b0             0.12                0.12
Γ              5.0                 5.0
λ                                  2.2429
χ² per point   7.30                0.544
Table 2
Values of the parameters and χ² by Kalinovski [18] and this work. Note that the two values of C5 cannot be compared directly. See text.
Table 3
Constants for the parametrised p̄ mean multiplicity distributions in p+p collisions described in the forms of Eq. (10) [9,10,23] and Eq. (11) [24].
Fig. 5. Antiproton mean multiplicity distribution in the whole phase space calculated with the revised parametrisation for p+p collisions. The results are compared with the experimental data [10].
Fig. 6. Same data as in Figure 5, compared with the values calculated in this work and the parametrisations of Eqs. (10) and (11). See the text for the discussion.
https://martinapugliese.gitbook.io/tales-of-science-and-data/the-computer-science-appendix/notes-on-foundations/main-data-structures
(Main) data structures
A data structure is a higher-level construct built on top of the primitive data types (integers, floats/doubles, characters, booleans). Let's quickly go through the main ones.
# Arrays
Lists (collections) of elements. Note that Python has both the concept of an array and that of a list: they differ in their nature, in what you can do with them, and in their general purpose. See the article in the references for a nice comparison.
# Linked lists
A linked list is a list of elements (called nodes) linked one to the next: a node contains the element and a link to the next node. This structure allows for easy replacement, insertion and deletion of elements because, thanks to the links, the nodes don't have to be stored in contiguous places in memory.
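A minimal sketch of the idea in Python (illustrative only, not a production implementation):

```python
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node   # link to the following node, or None at the tail

class LinkedList:
    def __init__(self):
        self.head = None

    def prepend(self, value):
        # Inserting at the head is O(1): we only rewire one link.
        self.head = Node(value, self.head)

    def to_list(self):
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.next
        return out

ll = LinkedList()
for v in (3, 2, 1):
    ll.prepend(v)
print(ll.to_list())  # [1, 2, 3]
```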
# Hash tables
Hash tables store key-value pairs and they are super useful: you can access a value by calling its key, so the lookup is straightforward (O(1) time on average). Hash tables use a hash function to map the key (which can be of various types) to a numerical value, so that given a key the computer knows where the value is stored and can access it in constant time (without having to scan the whole collection).
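In Python the built-in dict is a hash table; a quick illustration:

```python
ages = {"ada": 36, "grace": 45}
ages["alan"] = 41        # insert a new key-value pair
print(ages["grace"])     # 45 -- constant-time lookup by key
print("ada" in ages)     # True -- membership test also uses the hash
del ages["ada"]          # delete by key
```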
# Stacks and queues
A stack is a data structure where you put elements in one on top of the other, and it uses the LIFO philosophy to get data out: "last in, first out" means that you access elements in the reverse order from the one in which you put them in.
A queue is similar, but uses the FIFO philosophy: "first in, first out", meaning that elements come out at the opposite end from the one where you inserted them.
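In Python, a plain list works as a stack and collections.deque works well as a queue (a small sketch):

```python
from collections import deque

stack = []                      # stack: push/pop at the same end (LIFO)
stack.append("a"); stack.append("b"); stack.append("c")
print(stack.pop())              # 'c' -- last in, first out

queue = deque()                 # queue: append at one end, pop from the other (FIFO)
queue.append("a"); queue.append("b"); queue.append("c")
print(queue.popleft())          # 'a' -- first in, first out
```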
# Graphs (and trees, and heaps)
Graphs have nodes and connections between them which determine their relation. There is a whole branch of mathematics devoted to their study (graph theory).
A tree is a special type of graph where there is a clear relation between a parent node and a child node, so no cycles appear (there are hierarchical relations).
There are several subtypes of graphs, identified by their main characteristics; e.g. binary trees are those where each node has at most 2 child nodes.
A heap is a type of tree structure where data is stored in a way that a parent node either:
• always contains values greater than its children nodes (max heap), so that the root node is the maximum
• always contains values smaller than its children nodes (min heap), so that the root node is the minimum
These features make heaps partially ordered data structures (the ordering holds in the vertical direction, not in the horizontal one). In a binary heap, where each node has 2 child nodes, level $x$ contains $2^x$ nodes, which means that the height of a binary heap with $n$ nodes is $\log_2 n$.
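Python's heapq module implements a binary min heap on top of a plain list; a quick sketch:

```python
import heapq

heap = []
for value in (5, 1, 8, 3):
    heapq.heappush(heap, value)

print(heap[0])              # 1 -- the root is always the minimum
print(heapq.heappop(heap))  # 1
print(heapq.heappop(heap))  # 3
# A common trick for a max heap is to push negated values.
```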
# Objects
An object is a collection of data and, sometimes, functions that work on this data, put together in a coherent place. Normally, an object is implemented via a class.
For example, an AlarmClock object could represent the alarm clock you have on your bedside table: it will store data for the date and time and will have methods to update the time as it passes and to ring based on some criteria (always at the same time daily, or something more sophisticated).
You can use classes to build objects that inherit characteristics from others and specialise them. For instance, you can write a general class for Vehicle and one for Train that inherits from it, since a train is a kind of vehicle (the basic features are inherited and the specific ones are implemented only in the subclass).
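A toy sketch of that inheritance in Python (the wheels-per-carriage number is made up purely for illustration):

```python
class Vehicle:
    def __init__(self, wheels):
        self.wheels = wheels

    def describe(self):
        return f"vehicle with {self.wheels} wheels"

class Train(Vehicle):
    # Train inherits the generic behaviour from Vehicle and specialises it.
    def __init__(self, carriages):
        super().__init__(wheels=carriages * 8)  # 8 wheels per carriage: illustrative only
        self.carriages = carriages

    def describe(self):
        return f"train with {self.carriages} carriages ({self.wheels} wheels)"

print(Train(carriages=4).describe())
```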
The Python docs for classes are quite educational!
# References
1. Gayle Laakmann McDowell, Cracking the Coding Interview, CareerCup
2. Kateryna Koidan, Array vs. List in Python – What's the Difference?, an article on LearnPython.com
3. Classes in Python, from the docs
https://docs.plasmapy.org/en/latest/api/plasmapy.formulary.collisions.dimensionless.coupling_parameter.html | # coupling_parameter
plasmapy.formulary.collisions.dimensionless.coupling_parameter(T: Unit("K"), n_e: Unit("1 / m3"), species, z_mean: ~numbers.Real = nan, V: Unit("m / s") = <Quantity nan m / s>, method='classical') -> Unit(dimensionless)
Ratio of the Coulomb energy to the kinetic (usually thermal) energy.
Classical plasmas are weakly coupled ($$Γ ≪ 1$$, where $$Γ$$ is the coupling parameter). Dense plasmas tend to have significant to strong coupling ($$Γ ≥ 1$$). For more details, see the notes section below.
Parameters
• T (Quantity) – Temperature in units of temperature or energy per particle, which is assumed to be equal for both the test particle and the target particle.
• n_e (Quantity) – The electron number density in units convertible to per cubic meter.
• species (tuple) – A tuple containing string representations of the test particle (listed first) and the target particle (listed second).
• z_mean (Quantity, optional) – The average ionization (arithmetic mean) of a plasma for which a macroscopic description is valid. This parameter is used to compute the average ion density (given the average ionization and electron density) for calculating the ion sphere radius for non-classical impact parameters. z_mean is a required parameter if method is "ls_full_interp", "hls_max_interp", or "hls_full_interp".
• V (Quantity, optional) – The relative velocity between particles. If not provided, thermal velocity is assumed: $$μ V^2 \sim 2 k_B T$$ where $$μ$$ is the reduced mass.
• method (str, optional) – The method by which to compute the coupling parameter: either "classical" or "quantum". The default method is "classical". The Notes section of this docstring has more information about these two methods.
Returns
coupling – The coupling parameter for a plasma.
Return type
Quantity
Raises
• ValueError – If the mass or charge of either particle cannot be found, or any of the inputs contain incorrect values.
• UnitConversionError – If the units on any of the inputs are incorrect.
• TypeError – If any of n_e, T, or V is not a Quantity.
• RelativityError – If the input velocity is same or greater than the speed of light.
Warns
Notes
The coupling parameter is given by
$Γ = \frac{E_{Coulomb}}{E_{Kinetic}}$
The Coulomb energy is given by
$E_{Coulomb} = \frac{Z_1 Z_2 q_e^2}{4 π ε_0 r}$
where $$r$$ is the Wigner-Seitz radius, and 1 and 2 refer to particle species 1 and 2 between which we want to determine the coupling.
In the classical case the kinetic energy is the thermal energy:
$E_{kinetic} = k_B T_e$
The quantum case is more complex. The kinetic energy is dominated by the Fermi energy, modulated by a correction factor based on the ideal chemical potential. This is obtained more precisely by taking the thermal kinetic energy and dividing by the degeneracy parameter, modulated by the Fermi integral:
$E_{kinetic} = 2 k_B T_e / χ f_{3/2} (μ_{ideal} / k_B T_e)$
where $$χ$$ is the degeneracy parameter, $$f_{3/2}$$ is the Fermi integral, and $$μ_{ideal}$$ is the ideal chemical potential.
The degeneracy parameter is given by
$χ = n_e Λ_{de Broglie} ^ 3$
where $$n_e$$ is the electron density and $$Λ_{de Broglie}$$ is the thermal de Broglie wavelength.
See equations 1.2, 1.3 and footnote 5 in Bonitz [1998] for details on the ideal chemical potential.
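As a rough, illustrative estimate (not part of the PlasmaPy API), the degeneracy parameter can be computed directly from this formula using the standard thermal de Broglie wavelength $$h/\sqrt{2 π m_e k_B T_e}$$; note that PlasmaPy's internal definition of the thermal de Broglie wavelength may use a different convention.

>>> import numpy as np
>>> import astropy.units as u
>>> from astropy.constants import h, m_e, k_B
>>> n_e = 1e26 * u.m**-3
>>> T_e = 1e5 * u.K
>>> lambda_dB = h / np.sqrt(2 * np.pi * m_e * k_B * T_e)
>>> chi = (n_e * lambda_dB**3).decompose()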
Examples
>>> import astropy.units as u
>>> n = 1e19 * u.m**-3
>>> T = 1e6 * u.K
>>> species = ('e', 'p')
>>> coupling_parameter(T, n, species)
<Quantity 5.8033...e-05>
>>> coupling_parameter(T, n, species, V=1e6 * u.m / u.s)
<Quantity 5.8033...e-05> |
https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/nn/AdaptiveMaxPool3D_en.html | This operation applies 3D adaptive max pooling on input tensor. The h and w dimensions of the output tensor are determined by the parameter output_size. The difference between adaptive pooling and pooling is adaptive one focus on the output size.
\begin{align}\begin{aligned}dstart &= floor(i * D_{in} / D_{out})\\dend &= ceil((i + 1) * D_{in} / D_{out})\\hstart &= floor(j * H_{in} / H_{out})\\hend &= ceil((j + 1) * H_{in} / H_{out})\\wstart &= floor(k * W_{in} / W_{out})\\wend &= ceil((k + 1) * W_{in} / W_{out})\\Output(i ,j, k) &= max(Input[dstart:dend, hstart:hend, wstart:wend])\end{aligned}\end{align}
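As a quick illustration of how these window boundaries behave, here is a small plain-Python sketch (independent of Paddle) that computes the start/end indices of the adaptive windows along one dimension:

import math

def adaptive_bins(in_size, out_size):
    # (start, end) indices of each adaptive pooling window along one dimension
    return [(math.floor(i * in_size / out_size),
             math.ceil((i + 1) * in_size / out_size)) for i in range(out_size)]

print(adaptive_bins(8, 4))   # [(0, 2), (2, 4), (4, 6), (6, 8)]
print(adaptive_bins(10, 4))  # [(0, 3), (2, 5), (5, 8), (7, 10)]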
Parameters
• output_size (int|list|tuple) – The target output size. If output_size is a tuple or list, it must contain three elements, (D, H, W). D, H and W can each be either an int, or None, which means the size will be the same as that of the input.
• return_mask (bool, optional) – If true, the index of max pooling point will be returned along with outputs. Default False.
• name (str, optional) – For detailed information, please refer to Name. Usually name is no need to set and None by default.
Shape:
• x(Tensor): The input tensor of adaptive max pool3d operator, which is a 5-D tensor. The data type can be float32, float64.
• output(Tensor): The output tensor of adaptive max pool3d operator, which is a 5-D tensor. The data type is same as input x.
Returns
Examples
# adaptive max pool3d
# suppose input data in shape of [N, C, D, H, W], output_size is [l, m, n],
# output shape is [N, C, l, m, n], adaptive pool divide D, H and W dimensions
# of input data into l * m * n grids averagely and performs poolings in each
# grid to get output.
# adaptive max pool performs calculations as follow:
#
# for i in range(l):
# for j in range(m):
# for k in range(n):
# dstart = floor(i * D / l)
# dend = ceil((i + 1) * D / l)
# hstart = floor(j * H / m)
# hend = ceil((j + 1) * H / m)
# wstart = floor(k * W / n)
# wend = ceil((k + 1) * W / n)
# output[:, :, i, j, k] =
# max(input[:, :, dstart:dend, hstart: hend, wstart: wend])
import paddle
import numpy as np

input_data = np.random.rand(2, 3, 8, 32, 32)
x = paddle.to_tensor(input_data)
pool = paddle.nn.AdaptiveMaxPool3D(output_size=4)
out = pool(x)
# out shape: [2, 3, 4, 4, 4]

pool = paddle.nn.AdaptiveMaxPool3D(output_size=4, return_mask=True)
out, indices = pool(x)
# out shape: [2, 3, 4, 4, 4], indices shape: [2, 3, 4, 4, 4]
forward ( x )
Defines the computation performed at every call. Should be overridden by all subclasses.
Parameters
• *inputs (tuple) – unpacked tuple arguments
• **kwargs (dict) – unpacked dict arguments
extra_repr ( )
Extra representation of this layer, you can have custom implementation of your own layer. |
https://tex.stackexchange.com/questions/275426/how-to-mathematically-represent-my-equation | How to mathematically represent my equation?
I want to say "for all k from 1, 2, ..., B that are not elements of the vector S". What is the best/most professional way to represent this? Is there a better way to represent it than
Thanks,
Kevin
• Is the question about typesetting, or math notation? – Steven B. Segletes Oct 28 '15 at 17:50
• I'm just looking for help from anyone kind enough to give some advice. I've gotten a lot of helpful answers about a wide range of topics on this site over the past few months. – KevinB Oct 28 '15 at 18:06
• What is the advice you're seeking about: is it about math notation, or is it about how to implement a certain notational choice using (La)TeX? – Mico Oct 28 '15 at 18:07
• Some more context is needed; what do you mean by a “vector”? Normally a vector has no elements, so I can't see what k\notin\mathbf{s} is supposed to mean. – egreg Oct 28 '15 at 18:17
• For $k=1,2,\dots,B$, provided $k$ does not appear in the vector $\mathbf{s}$. Using symbols at any cost is not recommendable. – egreg Oct 28 '15 at 18:21
I would try
for $k\in\{1,2,\dots,B \mid k\not\in \mathbf{s} \}$.
Presumably, B is an integer and \mathbf{s} is a set of integers ranging from 1 to B, right?
• This is one way to disambiguate it. – cfr Oct 28 '15 at 18:12
• Thanks, I really like this notation. It looks much more professional. – KevinB Oct 28 '15 at 18:14
• Sorry, I cat find no sensible mathematical meaning in this notation. – egreg Oct 28 '15 at 18:17
• @egreg - I was hoping the OP would clarify what k\not\in\mathbf{s} is supposed to mean. – Mico Oct 28 '15 at 18:18
• You are correct that s is a set of integers and B is a integer – KevinB Oct 28 '15 at 18:20
Correct would be something like $1 \le k \le B$, $k \notin S$. Ellipses are ambiguous...
Don't try to reduce everything to symbols, that easily turns into utter gibberish. What you write is for humans to understand, symbols (particularly less familiar ones) just stand in the way. |
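For completeness, a small compilable sketch of how the two suggestions above could look in a document (assuming a standard article class with amsmath/amssymb loaded; the surrounding sentence is just filler):

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% set-builder style
For $k \in \{1, 2, \dots, B \mid k \notin \mathbf{s}\}$ the estimate holds.

% mostly-words style
For $k = 1, 2, \dots, B$, provided $k \notin \mathbf{s}$, the estimate holds.
\end{document}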
https://indico.cern.ch/event/219436/contributions/1523323/ | # Quark Matter 2014 - XXIV International Conference on Ultrarelativistic Nucleus-Nucleus Collisions
19-24 May 2014
Europe/Zurich timezone
## Jet azimuthal distributions with high $p_{\rm T}$ neutral pion triggers in pp collisions from LHC-ALICE
20 May 2014, 16:30
2h
Board: E-37
Poster Jets
### Speaker
Daisuke Watanabe (University of Tsukuba (JP))
### Description
Jet measurements play an essential role in probing the hot and high energy density matter in heavy-ion collisions through parton energy loss and in observation of possible modification of the hot and dense matter by the deposited energy. In this poster, we report azimuthal distributions of charged jets with respect to high $p_{\rm T}$ neutral pion triggers in pp collisions at $\sqrt{\rm s}$ = 7 TeV from ALICE. For neutral pion identification, an electromagnetic calorimeter (EMCal) is used. Jets are reconstructed from charged particle tracks that are measured by the Time Projection Chamber (TPC) and Inner Tracking System (ITS). The sample of neutral pions is enhanced by using the EMCal gamma trigger in combination with a shower shape analysis to identify neutral pions. We report conditional yields and Gaussian widths of both near- and away-side correlation peaks as function of neutral pion trigger $p_{\rm T}$ and jet $p_{\rm T}$. The results will be also compared with PYTHIA.
On behalf of collaboration: ALICE
### Primary author
Daisuke Watanabe (University of Tsukuba (JP))
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=281&parent=1115 | ## WeBWorK Main Forum
### Re: Unable to generate hardcopy
by Xiong Chiamiov -
Number of replies: 0
Unfortunately there seems to be a bug in the Fedora 9 installation whereby the necessary TeX formatting files are not installed. To correct this, do the following:
$ su
Password: <root password>
# fmtutil-sys --all
# exit
$
That did the trick, thanks. |
https://zbmath.org/authors/?q=ai%3Abukhshtaber.viktor-matveevich | # zbMATH — the first resource for mathematics
## Bukhshtaber, Viktor Matveevich
Author ID: bukhshtaber.viktor-matveevich Published as: Buchshtaber, V. M.; Buchstaber, V.; Buchstaber, V. M.; Buchstaber, Victor; Buchstaber, Victor M.; Buchstaber, Victor Matveevich; Buckshtaber, V. M.; Bukhshtaber, V. M.; Bukhshtaber, V. M. B.; Bukhshtaber, Viktor Matveevich; Bukhstaber, V. Homepage: http://higeom.math.msu.su/people/buchstaber/ External Links: MGP · Math-Net.Ru · Wikidata · GND
Documents Indexed: 217 Publications since 1966, including 14 Books. Biographic References: 3 Publications
#### Journals
67 Russian Mathematical Surveys 19 Functional Analysis and its Applications 8 Proceedings of the Steklov Institute of Mathematics 7 Mathematics of the USSR. Izvestiya 6 Izvestiya Akademii Nauk SSSR. Seriya Matematicheskaya 5 Uspekhi Matematicheskikh Nauk [N. S.] 5 IMRN. International Mathematics Research Notices 5 Mathematics of the USSR, Sbornik 5 Moscow Mathematical Journal 4 Theoretical and Mathematical Physics 4 Izvestiya: Mathematics 3 Matematicheskiĭ Sbornik. Novaya Seriya 3 Soviet Mathematics. Doklady 2 Mathematical Notes 2 Mathematical Proceedings of the Cambridge Philosophical Society 2 Journal of Soviet Mathematics 2 Obozrenie Prikladnoĭ i Promyshlennoĭ Matematiki 2 Doklady Mathematics 2 Reviews in Mathematics and Mathematical Physics 2 Trudy Matematicheskogo Instituta Imeni V. A. Steklova 1 Journal of Mathematical Physics 1 Nonlinearity 1 Duke Mathematical Journal 1 Funktsional’nyĭ Analiz i ego Prilozheniya 1 Izvestiya Akademii Nauk SSSR. Tekhnicheskaya Kibernetika 1 Mathematische Zeitschrift 1 Soobshcheniya Akademii Nauk Gruzinskoĭ SSR 1 Topology and its Applications 1 Acta Applicandae Mathematicae 1 Journal of Physics A: Mathematical and General 1 SIAM Journal on Mathematical Analysis 1 Matematicheskiĭ Sbornik 1 Vychislitel’nye Sistemy 1 Zapiski Nauchnykh Seminarov POMI 1 Voprosy Kibernetiki (Moskva) 1 Journal of Mathematical Sciences (New York) 1 Selecta Mathematica. New Series 1 Sbornik: Mathematics 1 Transformation Groups 1 Geometry & Topology 1 Homology, Homotopy and Applications 1 Journal of Nonlinear Mathematical Physics 1 Algebraic & Geometric Topology 1 Chebyshevskiĭ Sbornik 1 Mathematical Surveys and Monographs 1 University Lecture Series 1 AIP Conference Proceedings 1 Springer Proceedings in Mathematics & Statistics 1 Arnold Mathematical Journal
#### Fields
66 Algebraic geometry (14-XX) 57 Manifolds and cell complexes (57-XX) 44 Algebraic topology (55-XX) 31 History and biography (01-XX) 26 Convex and discrete geometry (52-XX) 23 Dynamical systems and ergodic theory (37-XX) 20 Partial differential equations (35-XX) 19 Associative rings and algebras (16-XX) 16 Nonassociative rings and algebras (17-XX) 16 Special functions (33-XX) 12 Combinatorics (05-XX) 12 Commutative algebra (13-XX) 12 Group theory and generalizations (20-XX) 12 Ordinary differential equations (34-XX) 12 Quantum Theory (81-XX) 10 General mathematics (00-XX) 10 Difference and functional equations (39-XX) 9 Global analysis, analysis on manifolds (58-XX) 8 Statistics (62-XX) 7 Differential geometry (53-XX) 6 Several complex variables and analytic spaces (32-XX) 6 Statistical mechanics, structure of matter (82-XX) 5 Topological groups, Lie groups (22-XX) 5 Functional analysis (46-XX) 5 Operator theory (47-XX) 5 General topology (54-XX) 4 Functions of a complex variable (30-XX) 4 Biology and other natural sciences (92-XX) 3 Order, lattices, ordered algebraic structures (06-XX) 3 Linear and multilinear algebra; matrix theory (15-XX) 3 Numerical analysis (65-XX) 2 Computer science (68-XX) 1 Number theory (11-XX) 1 Category theory, homological algebra (18-XX) 1 $K$-theory (19-XX) 1 Real functions (26-XX) 1 Geometry (51-XX) 1 Mechanics of particles and systems (70-XX) 1 Optics, electromagnetic theory (78-XX) 1 Game theory, economics, social and behavioral sciences (91-XX) |
https://dml.cz/handle/10338.dmlcz/146908 | # Article
Keywords:
absolute continuity; quasi-uniformity; acceptable mapping
Summary:
The aim of this paper is to introduce a generalization of the classical absolute continuity to a relative case, with respect to a subset $M$ of an interval $I$. This generalization is based on adding more requirements to disjoint systems $\{(a_k, b_k)\}_K$ from the classical definition of absolute continuity -- these systems should be not too far from $M$ and should be small relative to some covers of $M$. We discuss basic properties of relative absolutely continuous functions and compare this class with other classes of generalized absolutely continuous functions.
http://jperedadnr.blogspot.com/ | ## Thursday, March 19, 2015
### JavaFX on mobile, a dream come true!
Hi there!
It seems is time for a new post, but not a long and boring one as usual. I'll post briefly about my first experience bringing JavaFX to Google Play store.
Yes, I made it with 2048FX, a JavaFX version of the original game 2048, by Gabriel Cirulli, that Bruno Borges and I started last year.
Now, thanks to the outstanding work of the JavaFXPorts project, I've adapted that version so it could be ported to Android.
And with the very latest version of their plugin, I've managed to successfully submit it to the Google Play store.
After a week in beta testing mode, today the app is in production, so you can go and download it to your Android device, and test it for yourself.
For those of you eager to get the app, this is the link. Go and get it, and add a nice review ;)
If you want to read about the process to make it possible, please keep on reading. These are the topics I'll cover in this post:
• 2048FX, the game
• JavaFXPorts and their mobile plugin
• New Gluon plugin for NetBeans
• 2048FX on Android
• Google Play Store
### 2048FX, the game
Many of you will know for sure about the 2048 game by Gabriel Cirulli. Last year it was a hit.
Many of us got really addicted to it...
In case you are not one of those, the game is about moving numbers in a 4x4 grid, and when equal numbers clash while moving the blocks (up/down/left/right), they merge and numbers are added up. The goal is reaching the 2048 tile (though you can keep going on looking for bigger ones!).
At that time, Bruno started a Java/JavaFX 8 version, given that the game was open sourced. I jumped in immediately, and in a few weeks we had a nice working JavaFX version
Since we used (and learned) the great new features of Java 8, we thought it was a good proposal for JavaOne, and we ended up presenting it in a talk (video) and doing a Hands on Lab session.
And we'll talk about it again next week at JavaLand. If you happen to be there, don't miss our talk
### JavaFXPorts and their mobile plugin
Since the very beginning of JavaFX (2+), going mobile has been on the top of list of the most wanted features. We've dreamed with the possibility of making true the WORA slogan, and it's only recently since the appearance of the JavaFXPorts project, that this dream has come true.
Led by Johan Vos, the team has given the community the missing piece, so now we can jump to mobile devices with (almost) the same projects we develop for desktop.
While Johan started this adventure at the end of 2013, his work on porting JavaFX to Android, based on the OpenJFX project, has been evolving constantly during 2014, until recently in Febrary 2015 he announced a join effort between his company, LodgOn, and Trillian Mobile, the company behind RoboVM, the open source project to port JavaFX to iOS.
As a result, jfxmobile-plugin, the one and only gradle JavaFX plugin for mobile was created and freely available through the JavaFXPorts repository.
With this plugin you can target from one single project three different platforms: Desktop, Android and iOS.
And it's as simple as this build.gradle sample:
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'org.javafxports:jfxmobile-plugin:1.0.0-b5'
}
}
apply plugin: 'org.javafxports.jfxmobile'
repositories {
jcenter()
}
mainClassName = 'org.javafxports.project.MainApplicationFX'
jfxmobile {
ios {
forceLinkClasses = [ 'org.javafxports.**.*' ]
}
}
For Android devices, the Android SDK and the Android build-tools are required, as you can read here. The rest of the dependencies (like the Dalvik SDK and the Retrolambda plugin) are taken care of by the plugin itself.
Note the plugin version 1.0.0-b5 is constantly evolving, and at the time of this writting 1.0.0-b7 is now available. Check this frequently to keep it updated.
With this plugin, several tasks are added to your project, and you can run any of them. Among others, these are the main ones:
• ./gradlew android creates an Android package
• ./gradlew androidInstall installs your Android application on an Android device that is connected to your development system via a USB cable.
• ./gradlew launchIOSDevice launches your application on an iOS device that is connected to your development system
• ./gradlew run launches your application on your development system.
### New Gluon plugin for NetBeans
Setting up a complex project, with three different platforms can be a hard task. Until now, the best approach (at least the one I followed) was cloning the HelloPlatform sample, and changing the project and package names.
But recently, a new company called Gluon, with Johan as one of its founding fathers, has released a NetBeans plugin that greatly simplifies this task.
Once you have installed the plugin, just create a JavaFX new project, and select Basic Gluon Application.
Select valid names for project, packages and main class, and you will find a bunch of folders in your new project:
Change the jfxmobile-plugin version to 1.0.0-b5 (or the most recent one), select one of the tasks mentioned before, and see for yourself.
### 2048FX on Android
I've been using previous versions of the plugin, and it was a hard task to get everything working nicely. In fact, I had a working version of 2048FX on Android before the announcement of the jfxmobile-plugin. But it was a separated project from the desktop one.
Now with the plugin, everything binds together magically. So I have a single project with the desktop and the android version of the game.
#### Java 8?
There's one main drawback in all this process: the Dalvik VM doesn't support the new Java 8 features. For lambdas, we can use the Retrolambda plugin, which takes care of converting them to Java 6 compatible bytecode. But Streams and Optional are not supported. This means that you have to manually backport them to a Java 6/7 compatible version.
While the primary objective of the 2048FX project was basically learning these features, for the sake of going mobile I backported the project, though this didn't change its structure or its appearance.
#### The project: Desktop and Android altogether
This is how the project structure looks like:
A PlatformProvider interface allows us to find out on which platform we are running the project, which is extremely useful to isolate pieces of code that are natively related to that platform.
For instance, to save the game session in a local file on the Android device, I need access to an internal folder where the apk is installed, and for that I use an FXActivity instance, the bridge between JavaFX and the Dalvik runtime, which extends the Android Context. This Context can be used to look up Android services.
One example of this is FileManager class, under Android packages:
import javafxports.android.FXActivity;
import android.content.Context;
import java.io.File;
public class FileManager {
private final Context context;
public FileManager(){
context = FXActivity.getInstance();
}
public File getFile(String fileName){
return new File(context.getFilesDir(), fileName);
}
}
Now the PlatformProvider will call this implementation when running on Android, or the usual one for desktop.
After a few minor issues I had a working project in both desktop and Android.
### Google Play Store
Bruno asked me once to go with this app to Google Play store, but at that time the project wasn't mature enough. But last weekend I decided to give it a try, so I enrolled myself in Google Play Developers, filled a form with the description and several screenshots, and finally submitted the apk of the game... what could go wrong, right?
Well, for starters, I had a first error: the apk had debug options enabled, and that was not allowed.
#### The AndroidManifest.xml
This hidden file, created automatically by the plugin, contains important information about the apk. You can retrieve it after a first build, and modify it to include or change different options. Then you have to reference this file in the build.gradle file.
The application tag is where you have to add android:debuggable="false".
There you can also add the icon of your app: android:icon="@mipmap/ic_launcher", where mipmap-* are image folders with several resolutions.
#### Signing the apk
Well, that part was easy. Second attempt, second error... The apk must be signed for release. "Signed" means you need a private key, and for that we can use keytool.
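As a hedged example (this exact command is not from the original post, but it is the standard keytool invocation documented for Android app signing; the keystore name and alias are placeholders):

keytool -genkey -v -keystore my-release-key.keystore -alias my_alias -keyalg RSA -keysize 2048 -validity 10000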
And "release" means that we need to add to build.gradle the signing configuration... and that was not possible with the current plugin version b5.
So I asked Johan (on Friday night) about this, and he answered me (Saturday afternoon) that they've been working precisely on that but it was not ready yet. Later that evening, Joeri Sykora from LodgON, told me that it was in a branch... so with the invaluable help of John Sirach (the PiDome guy) we spent most of the Saturday night trying to build locally the plugin to add the signing configuration.
It ended up being something like this:
jfxmobile {
android {
signingConfig {
storeFile file("path/to/my-release-key.keystore")
storePassword 'STORE_PASSWORD'
keyAlias 'KEY_ALIAS'
keyPassword 'KEY_PASSWORD'
}
manifest = 'lib/android/AndroidManifest.xml'
resDirectory = 'src/android/resources'
}
}
It was done! It was almost 2 a.m., but I tried uploading the signed apk for the third time, and voilà!! No more errors. The app went to submission and in less than 10 hours, on Sunday morning, it was already published!!
#### Beta Testing Program
Instead of going into production I chose the Beta Testing program, so during this week only a few guys have been able to access Google Play to download and test the application.
Thanks to their feedback I've made a few upgrades, like fixing some issues with fonts and Samsung devices (thanks John) or changing the context menu to a visible toolbar (thanks Bruno).
#### 2048FX on Google Play Store
And the beta testing time is over. As of now, the application is on production.
What are you waiting for? Go and get it!!
Download it, play with it, enjoy, and if you have any issue, any problem at all, please report it, so we can work on improving its usability in all kind of devices.
### Final Thanks
Let me finish this post taking the word of the whole JavaFX community out there, saying out loud:
THANK YOU, JavaFXPorts !!!
Without you all of this wouldn't be possible at all.
Big thanks to all the guys already mentioned in this post, and also to Eugene Ryzhikov, Mark Heckler and Diego Cirujano, for helping along the way.
And finally, thanks to the OpenJFX project and the JavaFX team.
### UPDATE
Thanks to the work of Jens Deters, 2048FX has made it to Apple Store too!
Go and install it from here!
And since today (15th May 2015), we are open sourcing all the project, so anyone can have a look at it and find out about the last final details required to put it all together and make it successfully to Google Play or Apple Store:
Enjoy!
## Thursday, January 8, 2015
### Creating and Texturing JavaFX 3D Shapes
Hi there!
It's been a while since my last post, and it seems I've just said the same on my last one... but you know, a lot of stuff happened in between, and this blog has a tradition of long posts, those that can't be delivered on a weekly/monthly basis. But if you're a regular reader (thank you!) you know how this goes.
This post is about my last developments in JavaFX 3D, after working in close collaboration with a bunch of incredible guys for the last months.
For those of you new to this blog, I've already got a few posts talking about JavaFX 3D. My last recent one about the Rubik's Cube: RubikFX: Solving the Rubik's Cube with JavaFX 3D, and other about the Leap Motion controller: Leap Motion Controller and JavaFX: A new touch-less approach.
I'll cover in this post the following topics:
Before getting started, did I mention my article "Building castles in the Sky. Use JavaFX 3D to model historical treasures and more" has been published in the current issue of Java Magazine?
In a nutshell, this article describes a multi model JavaFX 3D based application, developed for the virtual immersion in Cultural Heritage Buildings, through models created by Reverse Engineering with Photogrammetry Techniques. The 3D model of the Menéndez Pelayo Library in Santander, Spain, is used throughout the article as an example of a complex model.
You can find this application and, thanks to Óscar Cosido, a free model of the Library here.
### Leap Motion Skeletal Tracking Model
Since my first post about Leap Motion, I've improved the 3D version after Leap Motion released their version 2, which includes a skeletal tracking model.
I haven't had the chance to blog about it, but this early video shows my initial work. You can see that the model now includes bones, so a more realistic hand can be built.
I demoed a more advanced version at one of my JavaOne talks with the incredibles James Weaver, Sean Phillips and Zoran Sevarac. Sadly, Jason Pollastrini couldn't make it, but he was part of the 3D-team.
If you are interested, all the code is available here. Go, fork it and play with it if you have a Leap Motion controller.
Yes, we did have a great time there.
The session was great. In fact you can watch it now at Parleys.
We had even a special guest: John Yoon, a.k.a. @JavaFX3D
### Skinning Meshes and Leap Motion
And then I met Alexander Kouznetsov.
It was during the Hackergarten 3D session, where Sven Reimers and I were hacking some JavaFX 3D stuff, when he showed up, laptop in backpack, ready for some hacking. There's no better trick than asking a real developer: I bet you're not able to hack this... to get it done!
So the challenge was importing a rigged hand in JSON format to use a SkinningMesh in combination with the new Leap Motion skeletal tracking model. As the one and only John Yoon would show later in his talk:
"In order to animate a 3D model, you need a transform hierarchy to which the 3D geometry is attached.
The general term for this part of the pipeline is “rigging” or “character setup”.
Rigging is the process of setting up your static 3D model for computer-generated animation, to make it animatable."
He was in charge of animating the Duke for the chess demo shown at the Keynote of JavaOne 2013. As shown in the above picture, this required a mesh, a list of joints, weights and transformations, binding the inner 'bones' with the surrounding external mesh, so when the former were moved the latter was deformed, creating the desired animation effect.
The SkinningMesh class in the 3DViewer project was initially designed for Maya, and we had a rigged hand as a Three.js model in JSON format.
So out of the blue Alex built an importer, and managed to get the mesh of the hand by reverse engineering. Right after that he solved the rest of the components of the skinningMesh. The most important part was the binding of the transformations between joints.
Affine[] bindTransforms = new Affine[nJoints];
Affine bindGlobalTransform = new Affine();
List<Joint> joints = new ArrayList<>(nJoints);
List<Parent> jointForest = new ArrayList<>();
for (int i = 0; i < nJoints; i++) {
JsonObject bone = object.getJsonArray("bones").getJsonObject(i);
Joint joint = new Joint();
String name = bone.getString("name");
joint.setId(name);
JsonArray pos = bone.getJsonArray("pos");
double x = pos.getJsonNumber(0).doubleValue();
double y = pos.getJsonNumber(1).doubleValue();
double z = pos.getJsonNumber(2).doubleValue();
joint.t.setX(x);
joint.t.setY(y);
joint.t.setZ(z);
bindTransforms[i] = new Affine();
int parentIndex = bone.getInt("parent");
if (parentIndex == -1) {
jointForest.add(joint);
bindTransforms[i] = new Affine(new Translate(-x, -y, -z));
} else {
Joint parent = joints.get(parentIndex);
parent.getChildren().add(joint);
bindTransforms[i] = new Affine(new Translate(
-x - parent.getLocalToSceneTransform().getTx(),
-y - parent.getLocalToSceneTransform().getTy(),
-z - parent.getLocalToSceneTransform().getTz()));
}
joints.add(joint);
joint.getChildren().add(new Axes(0.02));
}
This was the first animation with the model:
The axes are shown at every joint. Observe how easy it is to deform a complex mesh just by rotating two joints:
Timeline t = new Timeline(new KeyFrame(Duration.seconds(1),
new KeyValue(joints.get(5).rx.angleProperty(), 90),
new KeyValue(joints.get(6).rx.angleProperty(), 90)));
t.setCycleCount(Timeline.INDEFINITE);
t.play();
With a working SkinningMesh, it was just time for adding the skeletal tracking model from Leap Motion.
First, we needed to match Bones to joints, and then we just needed to apply the actual orientation of every bone to the corresponding joint transformation.
listener = new LeapListener();
listener.doneLeftProperty().addListener((ov,b,b1)->{
if(b1){
        List<Finger> fingersLeft=listener.getFingersLeft();
        Platform.runLater(()->{
            fingersLeft.stream()
                .filter(finger -> finger.isValid())
                .forEach(finger -> {
                    previousBone=null;
                    Stream.of(Bone.Type.values()).map(finger::bone)
                        .filter(bone -> bone.isValid() && bone.length()>0)
                        .forEach(bone -> {
                            if(previousBone!=null){
                                Joint joint = getJoint(false,finger,bone);
                                Vector cross = bone.direction().cross(previousBone.direction());
                                double angle = bone.direction().angleTo(previousBone.direction());
                                joint.rx.setAngle(Math.toDegrees(angle));
                                joint.rx.setAxis(new Point3D(cross.getX(),-cross.getY(),cross.getZ()));
                            }
                            previousBone=bone;
                        });
                });
            ((SkinningMesh)skinningLeft.getMesh()).update();
        });
    }
});
The work was almost done! Back from JavaOne I had the time to finish the model, adding hand movements and drawing the joints:
This video sums up most of what we've accomplished:
If you are interested in this project, all the code is here. Feel free to clone or fork it. Pull requests will be very welcome.
### TweetWallFX
One thing leads to another... And Johan Vos and Sven asked me to join them in a project to create a Tweet Wall with JavaFX 3D for Devoxx 2014. JavaFX 3D? I couldn't say no even if I wasn't attending!
Our first proposal (not the one Sven finally accomplished) was based on the F(X)yz library from Sean and Jason: a SkyBox as a container, with several tori inside, where tweets were rotating over them:
Needless to say, we used the great Twitter4J API for retrieving new tweets with the hashtag #Devoxx.
The first challenge here was figuring out how to render the tweets over each torus. The solution was based on the use of a snapshot of the tweet (rendered in a background scene) that would serve as the diffuse map image of the PhongMaterial assigned to the torus.
The second was creating a banner effect rotating the tweets over the tori. To avoid artifacts, a segmented torus was built on top of the first one, cropping the faces of a regular torus, so the resulting mesh would be textured with the image.
This is our desired segmented torus. In the next section, we'll go into the details of how we could accomplish this shape.
### Creating new 3D shapes
Note to beginners: For an excellent introduction to JavaFX 3D, have a look at the 3D chapters in these books: JavaFX 8 Introduction by Example and JavaFX 8 Pro: A Definitive Guide to Building Desktop, Mobile, and Embedded Java Clients.
To create this mesh in JavaFX 3D we use a TriangleMesh as a basis for our mesh, where we need to provide float arrays of vertices and texture coordinates and one int array of vertex and texture indices for defining every triangle face.
Since a torus can be constructed from a rectangle, by gluing both pairs of opposite edges together with no twists, we could use a 2D rectangular grid in a local system ($\theta$,$\phi$), and map every point with these equations:
$X=(R+r \cos\phi) \cos\theta$
$Z=(R+r \cos\phi) \sin\theta$
$Y=r \sin\phi$
So based on this grid (with colored borders and triangles for clarity):
we could create this torus (observe how the four corners of the rectangle are joined together in one single vertex):
Now if we want to segment the mesh, we can get rid of a few elements from the borders.
From the red inner grid, we can now get a segmented torus.
#### Vertices coordinates
As we can see in the SegmentedTorusMesh class from the F(X)yz library, generating the vertices for the mesh is really easy, based on the above equations, the desired number of subdivisions (20 and 16 in the figures) and the number of elements cropped in both directions (4):
private TriangleMesh createTorus(int subDivX, int subDivY, int crop, float R, float r){
TriangleMesh triangleMesh = new TriangleMesh();
// Create points
List<Point3D> listVertices = new ArrayList<>();
float pointX, pointY, pointZ;
for (int y = crop; y <= subDivY-crop; y++) {
float dy = (float) y / subDivY;
for (int x = crop; x <= subDivX-crop; x++) {
float dx = (float) x / subDivX;
if(crop>0 || (crop==0 && x<subDivX && y<subDivY)){
pointX = (float) ((R+r*Math.cos((-1d+2d*dy)*Math.PI))*Math.cos((-1d+2d*dx)*Math.PI));
pointZ = (float) ((R+r*Math.cos((-1d+2d*dy)*Math.PI))*Math.sin((-1d+2d*dx)*Math.PI));
pointY = (float) (r*Math.sin((-1d+2d*dy)*Math.PI));
listVertices.add(new Point3D(pointX, pointY, pointZ));
}
}
}
Note that we have to convert this collection to a float array. Since there is no such thing as a FloatStream in Java 8, I asked a question at StackOverflow, and as a result we now use a very handy FloatCollector to do the conversion:
float[] floatVertices=listVertices.stream()
.flatMapToDouble(p->DoubleStream.of(p.x,p.y,p.z))
.collect(()->new FloatCollector(listVertices.size()*3),
FloatCollector::add, FloatCollector::join)
.toArray();
triangleMesh.getPoints().setAll(floatVertices);
In case anybody is wondering why we don't use plain float[]: using collections instead of simple float arrays allows us to perform mesh coloring (as we'll see later), subdivisions, ray tracing,... using streams and, in many of these cases, parallel streams. Well, in Jason's words: why doesn't TriangleMesh provide a format that incorporates the use of streams by default...??
#### Texture coordinates
In the same way, we can create the texture coordinates. We can use the same grid, but now mapping (u,v) coordinates, from (0.0,0.0) on the top left corner to (1.0,1.0) on the bottom right one. We need extra points for the borders.
int index=0;
int width=subDivX-2*crop;
int height=subDivY-2*crop;
float[] textureCoords = new float[(width+1)*(height+1)*2];
for (int v = 0; v <= height; v++) {
float dv = (float) v / ((float)(height));
for (int u = 0; u <= width; u++) {
textureCoords[index] = (float) u /((float)(width));
textureCoords[index + 1] = dv;
index+=2;
}
}
triangleMesh.getTexCoords().setAll(textureCoords);
#### Faces
Once we have defined the coordinates we need to create the faces. From the JavaDoc: The term face is used to indicate a set of 3 interleaving points and texture coordinates that together represent the geometric topology of a single triangle. One face is defined by 6 indices: p0, t0, p1, t1, p2, t2, where p0, p1 and p2 are indices into the points array, and t0, t1 and t2 are indices into the texture coordinates array.
For convenience, we'll use two split collections of point indices and texture indices. Based on the above figures, we go triangle by triangle, selecting the three index positions in a specific order. This is critical for the surface orientation. Also note that for vertices we reuse indices at the borders to avoid the formation of seams.
List<Point3D> listFaces = new ArrayList<>();
// Create vertices indices
for (int y =crop; y<subDivY-crop; y++) {
for (int x=crop; x<subDivX-crop; x++) {
int p00 = (y-crop)*((crop>0)?numDivX:numDivX-1) + (x-crop);
int p01 = p00 + 1;
if(crop==0 && x==subDivX-1){
p01-=subDivX;
}
int p10 = p00 + ((crop>0)?numDivX:numDivX-1);
if(crop==0 && y==subDivY-1){
p10-=subDivY*((crop>0)?numDivX:numDivX-1);
}
int p11 = p10 + 1;
if(crop==0 && x==subDivX-1){
p11-=subDivX;
}
listFaces.add(new Point3D(p00,p10,p11));
listFaces.add(new Point3D(p11,p01,p00));
}
}
List<Point3D> listTextures = new ArrayList<>();
// Create textures indices
for (int y=crop; y<subDivY-crop; y++) {
for (int x=crop; x<subDivX-crop; x++) {
int p00 = (y-crop) * numDivX + (x-crop);
int p01 = p00 + 1;
int p10 = p00 + numDivX;
int p11 = p10 + 1;
listTextures.add(new Point3D(p00,p10,p11));
listTextures.add(new Point3D(p11,p01,p00));
}
}
Now we have to join them, though. The advantages of this approach will be shown later.
// create faces
AtomicInteger count=new AtomicInteger();
int faces[] = listFaces.stream()
.map(f->{
Point3D t=listTextures.get(count.getAndIncrement());
int p0=(int)f.x; int p1=(int)f.y; int p2=(int)f.z;
int t0=(int)t.x; int t1=(int)t.y; int t2=(int)t.z;
return IntStream.of(p0, t0, p1, t1, p2, t2);
}).flatMapToInt(i->i).toArray();
triangleMesh.getFaces().setAll(faces);
// finally return mesh
return triangleMesh;
}
This picture shows how we create the first and last pairs of faces. Note the use of counterclockwise winding to define the front faces, so we have the normal of every surface pointing outwards (to the outside of the screen).
Finally, we can create our banner effect, adding two tori, both solid (DrawMode.FILL) and one of them segmented and textured with an image. This snippet shows the basics:
SegmentedTorusMesh torus = new SegmentedTorusMesh(50, 42, 0, 500d, 300d);
PhongMaterial matTorus = new PhongMaterial(Color.FIREBRICK);
torus.setMaterial(matTorus);
SegmentedTorusMesh banner = new SegmentedTorusMesh(50, 42, 14, 500d, 300d);
PhongMaterial matBanner = new PhongMaterial();
matBanner.setDiffuseMap(new Image(getClass().getResource("res/Duke3DprogressionSmall.jpg").toExternalForm()));
banner.setMaterial(matBanner);
Rotate rotateY = new Rotate(0, 0, 0, 0, Rotate.Y_AXIS);
torus.getTransforms().addAll(new Rotate(0,Rotate.X_AXIS),rotateY);
banner.getTransforms().addAll(new Rotate(0,Rotate.X_AXIS),rotateY);
Group group = new Group();
group.getChildren().addAll(torus,banner);
Group sceneRoot = new Group(group);
Scene scene = new Scene(sceneRoot, 600, 400, true, SceneAntialiasing.BALANCED);
primaryStage.setTitle("F(X)yz - Segmented Torus");
primaryStage.setScene(scene);
primaryStage.show();
final Timeline bannerEffect = new Timeline();
bannerEffect.setCycleCount(Timeline.INDEFINITE);
final KeyValue kv1 = new KeyValue(rotateY.angleProperty(), 360);
final KeyFrame kf1 = new KeyFrame(Duration.millis(10000), kv1);
bannerEffect.getKeyFrames().addAll(kf1);
bannerEffect.play();
And with that, we get this animation working:
### Playing with textures
The last section of this long post will show you how we can hack the textures of a TriangleMesh to display more advanced images over the 3D shape. This will include:
• Coloring meshes (vertices or faces)
• Creating contour plots
• Using patterns
• Animating textures
This work is inspired by a question from Álvaro Álvarez on StackOverflow, about coloring individual triangles or individual vertices of a mesh.
The immediate answer would be: no, you can't easily, since for one mesh there's one material with one diffuse color, and it's not possible to assign different materials to different triangles of the same mesh. You could create as many meshes and materials as colors, if this number were really small. Using textures was the only way, but for that, following the standard procedure, you would need to color your texture image precisely, to match each triangle with each color.
In convex polyhedra there's at least one net, a 2D arrangement of polygons that can be folded into the faces of the 3D shape. Based on an icosahedron (20 faces), we could use its net to color every face, and then use the image as texture for the 3D shape.
This was my first answer, but I started thinking about using another approach. What if, instead of the above colored net, we could create at runtime a small image of colored rectangles, and trick the texture coordinates and texture indices to find their values in this image instead? Done! The result was this much neater picture.
(The required code to do this is in my answer, so I won't post it here.)
And going a little bit further: if we could create one palette image, with one color per pixel, we could also assign one color to each vertex, and the texture for the rest of the triangle would be interpolated by the scene graph! This was part of a second answer.
#### Color Palette
With this small class we can create small images with up to 1530 unique colors. The most important thing is that they are consecutive, so we'll have smooth contour plots, and there won't be unwanted bumps when intermediate values are interpolated. To generate this 40x40 image (2 KB) at runtime we just use this short snippet:
Image imgPalette = new WritableImage(40, 40);
PixelWriter pw = ((WritableImage)imgPalette).getPixelWriter();
AtomicInteger count = new AtomicInteger();
IntStream.range(0, 40).boxed()
.forEach(y->IntStream.range(0, 40).boxed()
.forEach(x->pw.setColor(x, y, Color.hsb(count.getAndIncrement()/1600d*360d,1,1))));
With it, we can retrieve the texture coordinates for a given point from this image and update the texture coordinates on the mesh:
public DoubleStream getTextureLocation(int iPoint){
int y = iPoint/40;
int x = iPoint-40*y;
return DoubleStream.of((((float)x)/40f),(((float)y)/40f));
}
public float[] getTexturePaletteArray(){
return IntStream.range(0,colors).boxed()
.flatMapToDouble(palette::getTextureLocation)
.collect(()->new FloatCollector(2*colors),
FloatCollector::add, FloatCollector::join)
.toArray();
}
mesh.getTexCoords().setAll(getTexturePaletteArray());
#### Density Maps
Half of the work is done. The other half consists in assigning a color to every vertex or face in our mesh, based on some criterion: by using a mathematical function, for any $(x,y,z)$ coordinates we'll have a value $f(x,y,z)$ that can be scaled within our range of colors.
So let's have a function:
@FunctionalInterface
public interface DensityFunction<T> {
Double eval(T p);
}
private DensityFunction<Point3D> density;
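As an illustration (not from the original post), the density field can be assigned any lambda over the vertex coordinates, for example:
// Hypothetical example: a radial density, using the same public x/y/z
// fields as the p.x*p.y*p.z example shown later.
density = p -> (double)(p.x*p.x + p.y*p.y + p.z*p.z);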
Let's find the extreme values, by evaluating the given function in all the vertices, using parallel streams:
private double min, max;
public void updateExtremes(List<Point3D> points){
max=points.parallelStream().mapToDouble(density::eval).max().orElse(1.0);
min=points.parallelStream().mapToDouble(density::eval).min().orElse(0.0);
if(max==min){
max=1.0+min;
}
}
Finally, we assign the color to every vertex in every face, by evaluating the given function in all the vertices, using parallel streams:
public int mapDensity(Point3D p){
int f=(int)((density.eval(p)-min)/(max-min)*colors);
if(f<0){
f=0;
}
if(f>=colors){
f=colors-1;
}
return f;
}
public int[] updateFacesWithDensityMap(List<Point3D> points, List<Point3D> faces){
return faces.parallelStream().map(f->{
int p0=(int)f.x; int p1=(int)f.y; int p2=(int)f.z;
int t0=mapDensity(points.get(p0));
int t1=mapDensity(points.get(p1));
int t2=mapDensity(points.get(p2));
return IntStream.of(p0, t0, p1, t1, p2, t2);
}).flatMapToInt(i->i).toArray();
}
mesh.getFaces().setAll(updateFacesWithDensityMap(listVertices, listFaces));
Did I say I love Java 8??? You can see now how the strategy of using lists for vertices, textures and faces has clear advantages over plain float arrays.
Let's run an example, using the IcosahedronMesh class from F(X)yz:
IcosahedronMesh ico = new IcosahedronMesh(5,1f);
ico.setTextureModeVertices3D(1600,p->(double)p.x*p.y*p.z);
Scene scene = new Scene(new Group(ico), 600, 600, true, SceneAntialiasing.BALANCED);
primaryStage.setScene(scene);
primaryStage.show();
This is the result:
Impressive, right? After a long explanation, we can happily say: yes! we can color every single triangle or vertex on the mesh!
And we could even move the colors, creating a smooth animation. For this we only need to update the faces (vertices and texture coordinates stay the same). This video shows one:
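One possible way to get such an animation (not shown in the original post) is to re-evaluate the density function on every frame with an AnimationTimer, reusing the setTextureModeVertices3D call used above; the time-dependent function below is just an illustrative assumption, and re-texturing on every frame may or may not be efficient enough depending on the mesh size:
// Hypothetical sketch: shift the vertex colors over time.
// Assumes an IcosahedronMesh 'ico' created as in the snippet above.
AnimationTimer colorAnimation = new AnimationTimer() {
    @Override
    public void handle(long now) {
        double phase = now / 1_000_000_000d; // seconds
        ico.setTextureModeVertices3D(1600, p -> (double) p.x * p.y * p.z + Math.sin(phase));
    }
};
colorAnimation.start();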
### More features
More? In this post? No! I won't extend it anymore. I'll just post this picture:
And refer you to all the available 3D shapes and more at the F(X)yz repository. If I have the time, I'll try to post about them in a second part.
### Conclusions
JavaFX 3D API in combination with Java 8 new features has proven really powerful in terms of rendering complex meshes. The API can be easily extended to create libraries or frameworks that help the developer in case 3D features are required.
We are far from others (Unity 3D, Three.js,... to name a few), but with the collaboration of the great JavaFX community we can shorten this gap.
Please, clone the repository, test it, create pull requests, issues, feature requests, ... get in touch with us, help us to keep this project alive and growing.
Also visit StackOverflow and ask questions there using these tags: javafx, javafx-8 and the new javafx-3d. You never know where a good question may take you! And the answers will help other developers too.
A final word to give a proper shout-out to Sean Phillips and Jason Pollastrini, founders of the F(x)yz library, for starting an outstanding project.
## Monday, April 14, 2014
### RubikFX: Solving the Rubik's Cube with JavaFX 3D
Hi all!
It's really been a while since my last post... In the meantime, three major conferences kept me away from my blog. I told myself I had to blog about the topics of my talks, but I had no chance at all.
Now, with JavaOne still far in the distance, and after finishing my collaboration in the book, soon to be published, JavaFX 8, Introduction by Example, with my friends Carl Dea, Gerrit Grunwald, Mark Heckler and Sean Phillips, I've had the chance to play with Java 8 and JavaFX 3D for a few weeks, and this post is the result of my findings.
It happens that my kids had recently at home a Rubik's cube, and we also built the Lego Mindstorms EV3 solver thanks to this incredible project from David Gilday, the guy behind the CubeStormer 3 with the world record of fastest solving.
After playing with the cube for a while, I thought about the possibility of creating a JavaFX application for solving the cube, and that's how RubikFX was born.
If you're eager to know what this is about, here is a link to a video in YouTube which will show you most of it.
Basically, in this post I'll talk about importing 3D models in a JavaFX scene, with the ability to zoom, rotate, scale them, add lights, move the camera,... Once we have a nice 3D model of the Rubik's cube, we will try to create a way of moving layers independently from the rest of the model, keeping track of the changes made. Finally, we'll add mouse picking for selecting faces and rotating layers.
Please read this if you're not familiar with Rubik's cube notation and the basic steps for solving it.
Before we start
By now you should know that Java 8 has been GA since the 18th of March, so the code in this project is based on this version. In case you haven't done it yet, please download the new SDK from here and update your system. Also, I use NetBeans 8.0 for its support for Java 8, including lambdas and the new Streams API, among other things. You can update your IDE from here.
I use two external dependencies. One is for importing the model, from an experimental project called 3DViewer, which is part of the OpenJFX project, so we need to download and build it. The second one is from the ControlsFX project, for adding cool dialogs to the application. Download it from here.
Finally, we need a 3D model for the cube. You can build it yourself or use a free model, like this, submitted by 3dregenerator, which you can download in 3ds or OBJ formats.
Once you've got all this ingredients, it's easy to get this picture:
For that, just extract the files, rename 'Rubik's Cube.mtl' to 'Cube.mtl' and 'Rubik's Cube.obj' to 'Cube.obj', edit this file and change the third line to 'mtllib Cube.mtl', and save the file.
Now run the 3DViewer application, and drag 'Cube.obj' to the viewer. Open the Settings tab, select Lights, turn on the ambient light with a white color, and turn off the point light 'Light 1'. You can zoom in or out (mouse wheel, right button or navigation bar), rotate the cube with the left button (modifying rotation speed with Ctrl or Shift), or translate the model with both mouse buttons pressed.
Now select Options and click Wireframe, so you can see the triangle meshes used to build the model.
Each one of the 27 cubies is given a name in the obj file, like 'Block46' and so on. All of a cubie's triangles are grouped in meshes defined by the material assigned, so each cubie is made of 1 to 6 meshes, with names like 'Block46', 'Block46 (2)' and so on, for a total of 117 meshes.
The color of each cubie's meshes is assigned in the 'Cube.mtl' file with the Kd constant, which defines the diffuse color.
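For illustration, an entry in a Wavefront MTL file looks like the following (the actual material names and values inside 'Cube.mtl' may differ; Kd holds the diffuse color and Ns the specular exponent mentioned below):
newmtl White
Kd 1.000 1.000 1.000
Ns 0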
1. The Rubik's Cube - Lite Version
Importing the 3D model
So once we know how our model is organized, we need to construct the MeshView nodes for each mesh. The ObjImporter class from 3DViewer provides a getMeshes() method that returns a Set with the block name of every mesh. So we will define a HashMap to bind every name to its MeshView. For each mesh name s we get the MeshView object with the buildMeshView(s) method.
By design, the cube materials in this model don't reflect light (Ns=0), so we'll change this to allow interaction with point lights, by modifying the material's specular power property, defined in the PhongMaterial class.
Finally, we will rotate the original model so we have the white face on the top, and the blue one on the front.
public class Model3D {
// Cube.obj contains 117 meshes, marked as "Block46",...,"Block72 (6)" in this set:
private Set<String> meshes;
// HashMap to store a MeshView of each mesh with its key
private final Map<String,MeshView> mapMeshes=new HashMap<>();
public void importObj(){
try {// cube.obj
ObjImporter reader = new ObjImporter(getClass().getResource("Cube.obj").toExternalForm());
meshes=reader.getMeshes(); // set with the names of 117 meshes
Affine affineIni=new Affine();
affineIni.prepend(new Rotate(-90, Rotate.X_AXIS));
affineIni.prepend(new Rotate(90, Rotate.Z_AXIS));
meshes.stream().forEach(s-> {
MeshView cubiePart = reader.buildMeshView(s);
// every part of the cubie is transformed with both rotations:
cubiePart.getTransforms().add(affineIni);
// since the model has Ns=0 it doesn't reflect light, so we change it to 1
PhongMaterial material = (PhongMaterial) cubiePart.getMaterial();
material.setSpecularPower(1);
cubiePart.setMaterial(material);
// finally, add the name of the part and the cubie part to the hashMap:
mapMeshes.put(s,cubiePart);
});
} catch (IOException e) {
System.out.println("Error loading model "+e.toString());
}
}
public Map<String, MeshView> getMapMeshes() {
return mapMeshes;
}
}
Since the model is oriented with white to the right (X axis) and red in the front (Z axis) (see picture above), two rotations are required: first rotate -90 degrees around the X axis, to put blue in the front, and then rotate 90 degrees around the Z axis to put white on top.
Mathematically, the second rotation matrix in Z must be multiplied on the left of the first matrix in X. But according to this link, if we use add or append, matrix rotations are applied on the right, so this will be wrong:
cubiePart.getTransforms().addAll(new Rotate(-90, Rotate.X_AXIS),
new Rotate(90, Rotate.Z_AXIS));
as it will perform a first rotation in Z and a second one in X, putting red on top and yellow in front. This is wrong too:
cubiePart.getTransforms().addAll(new Rotate(90, Rotate.Z_AXIS),
new Rotate(-90, Rotate.X_AXIS));
Though it performs the right rotations, it would then require any further rotation of a cubie to be computed from its original position, which is much more complicated than always rotating from the last state.
So prepend is the right way to proceed here, and we just need to prepend the last rotation matrix to the Affine matrix of the cubie with all the previous rotations stored there.
Handling the model
After importing the obj file, we can figure out the number of each cubie, and once the cube is well positioned (white face top, blue face front), the scheme we're going to use is a List<Integer> with 27 items (a small index-mapping sketch is shown after this list):
• first 9 indexes are the 9 cubies in the (F)Front face, from top left (R/W/B) to down right (Y/O/B).
• second 9 indexes are from the (S)Standing face, from top left (R/W) to down right (Y/O).
• last 9 indexes are from (B)Back face, from top left (G/R/W) to down right (G/Y/O).
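As a quick sketch of that mapping (the helper is ours, not from the original code), each list index maps directly onto the 3D array shown next:
// Hypothetical helper: maps an index of the 27-item list (0..26) to the
// [face][row][col] position of the 3D array below, so that
// cube[i/9][(i%9)/3][i%3] == order.get(i).
// face: 0=F, 1=S, 2=B;  row: 0=top, 1=middle, 2=bottom;  col: 0=left, 1=middle, 2=right
private int[] toFaceRowCol(int i){
    return new int[]{ i / 9, (i % 9) / 3, i % 3 };
}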
But for performing rotations of these cubies, the best way is the internal use of a 3D array of integers:
private final int[][][] cube={{{50,51,52},{49,54,53},{59,48,46}},
{{58,55,60},{57,62,61},{47,56,63}},
{{67,64,69},{66,71,70},{68,65,72}}};
where 50 is the number of the R/W/B cubie and 72 is the number for the G/Y/O.
The Rotations class will take care of any face rotation.
// This is the method to perform any rotation on the 3D array just by swapping indexes
// first index refers to faces F-S-B
// second index refers to faces U-E-D
// third index refers to faces L-M-R
public void turn(String rot){
int t = 0;
for(int y = 2; y >= 0; --y){
for(int x = 0; x < 3; x++){
switch(rot){
case "L": tempCube[x][t][0] = cube[y][x][0]; break;
case "Li": tempCube[t][x][0] = cube[x][y][0]; break;
case "M": tempCube[x][t][1] = cube[y][x][1]; break;
case "Mi": tempCube[t][x][1] = cube[x][y][1]; break;
case "R": tempCube[t][x][2] = cube[x][y][2]; break;
case "Ri": tempCube[x][t][2] = cube[y][x][2]; break;
case "U": tempCube[t][0][x] = cube[x][0][y]; break;
case "Ui": tempCube[x][0][t] = cube[y][0][x]; break;
case "E": tempCube[x][1][t] = cube[y][1][x]; break;
case "Ei": tempCube[t][1][x] = cube[x][1][y]; break;
case "D": tempCube[x][2][t] = cube[y][2][x]; break;
case "Di": tempCube[t][2][x] = cube[x][2][y]; break;
case "F": tempCube[0][x][t] = cube[0][y][x]; break;
case "Fi": tempCube[0][t][x] = cube[0][x][y]; break;
case "S": tempCube[1][x][t] = cube[1][y][x]; break;
case "Si": tempCube[1][t][x] = cube[1][x][y]; break;
case "B": tempCube[2][t][x] = cube[2][x][y]; break;
case "Bi": tempCube[2][x][t] = cube[2][y][x]; break;
}
}
t++;
}
save();
}
Similar rotations can be performed on the whole cube (X, Y or Z); a possible sketch is shown below.
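The post doesn't list those whole-cube turns; a possible sketch, reusing the same index swaps as the face turns but applying them to the three layers at once, could look like this (the pairing of X/Y/Z with the physical turning direction follows the conventions above and should be double-checked against the real Rotations class):
// Hypothetical whole-cube rotation: with the index conventions above,
// "X" behaves like R+Mi+Li, "Y" like U+Ei+Di and "Z" like F+S+Bi.
public void turnCube(String rot){
    int t = 0;
    for(int y = 2; y >= 0; --y){
        for(int x = 0; x < 3; x++){
            for(int k = 0; k < 3; k++){        // k runs over the three layers
                switch(rot){
                    case "X":  tempCube[t][x][k] = cube[x][y][k]; break;
                    case "Xi": tempCube[x][t][k] = cube[y][x][k]; break;
                    case "Y":  tempCube[t][k][x] = cube[x][k][y]; break;
                    case "Yi": tempCube[x][k][t] = cube[y][k][x]; break;
                    case "Z":  tempCube[k][x][t] = cube[k][y][x]; break;
                    case "Zi": tempCube[k][t][x] = cube[k][x][y]; break;
                }
            }
        }
        t++;
    }
    save();
}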
The content model
Once we have our model, we need a scene to display it. For that we'll use an object as content container, wrapped in a ContentModel class, where the camera, lights and orientation axes are added. It is based on the ContentModel class from the 3DViewer application:
public class ContentModel {
public ContentModel(double paneW, double paneH, double dimModel) {
this.paneW=paneW;
this.paneH=paneH;
this.dimModel=dimModel;
buildCamera();
buildSubScene();
buildAxes();
addLights();
}
private void buildCamera() {
camera.setNearClip(1.0);
camera.setFarClip(10000.0);
camera.setFieldOfView(2d*dimModel/3d);
camera.getTransforms().addAll(yUpRotate,cameraPosition,
cameraLookXRotate,cameraLookZRotate);
cameraXform.getChildren().add(cameraXform2);
cameraXform2.getChildren().add(camera);
cameraPosition.setZ(-2d*dimModel);
root3D.getChildren().add(cameraXform);
// Rotate camera to show isometric view X right, Y top, Z 120º left-down from each
cameraXform.setRx(-30.0);
cameraXform.setRy(30);
}
private void buildSubScene() {
root3D.getChildren().add(autoScalingGroup);
subScene = new SubScene(root3D,paneW,paneH,true,javafx.scene.SceneAntialiasing.BALANCED);
subScene.setCamera(camera);
subScene.setFill(Color.CADETBLUE);
setListeners(true);
}
private void buildAxes() {
double length = 2d*dimModel;
double width = dimModel/100d;
double radius = 2d*dimModel/100d;
final PhongMaterial redMaterial = new PhongMaterial();
redMaterial.setDiffuseColor(Color.DARKRED);
redMaterial.setSpecularColor(Color.RED);
final PhongMaterial greenMaterial = new PhongMaterial();
greenMaterial.setDiffuseColor(Color.DARKGREEN);
greenMaterial.setSpecularColor(Color.GREEN);
final PhongMaterial blueMaterial = new PhongMaterial();
blueMaterial.setDiffuseColor(Color.DARKBLUE);
blueMaterial.setSpecularColor(Color.BLUE);
Sphere xSphere = new Sphere(radius);
Sphere ySphere = new Sphere(radius);
Sphere zSphere = new Sphere(radius);
xSphere.setMaterial(redMaterial);
ySphere.setMaterial(greenMaterial);
zSphere.setMaterial(blueMaterial);
xSphere.setTranslateX(dimModel);
ySphere.setTranslateY(dimModel);
zSphere.setTranslateZ(dimModel);
Box xAxis = new Box(length, width, width);
Box yAxis = new Box(width, length, width);
Box zAxis = new Box(width, width, length);
xAxis.setMaterial(redMaterial);
yAxis.setMaterial(greenMaterial);
zAxis.setMaterial(blueMaterial);
autoScalingGroup.getChildren().addAll(xAxis, yAxis, zAxis);
autoScalingGroup.getChildren().addAll(xSphere, ySphere, zSphere);
}
private void addLights(){
root3D.getChildren().add(ambientLight);
root3D.getChildren().add(light1);
light1.setTranslateX(dimModel*0.6);
light1.setTranslateY(dimModel*0.6);
light1.setTranslateZ(dimModel*0.6);
}
}
For the camera, an Xform class from 3DViewer is used to easily change its rotation values. This also allows the initial rotation of the camera to show an isometric view:
cameraXform.setRx(-30.0);
cameraXform.setRy(30);
Other valid ways to perform these rotations could be based on obtaining the vector and angle of the combined rotation, which involves calculating the rotation matrix first and then the vector and angle (as I explained here):
camera.setRotationAxis(new Point3D(-0.694747,0.694747,0.186157));
camera.setRotate(42.1812);
Or prepending the two rotations to all the previous transformations, by appending all of them in a single Affine matrix before prepending these two rotations:
Affine affineCamIni=new Affine();
camera.getTransforms().stream().forEach(affineCamIni::append);
affineCamIni.prepend(new Rotate(-30, Rotate.X_AXIS));
affineCamIni.prepend(new Rotate(30, Rotate.Y_AXIS));
camera.getTransforms().setAll(affineCamIni);
Then we add the listeners to the subscene, so the camera can be easily rotated.
private void setListeners(boolean addListeners){
if(addListeners){
subScene.addEventHandler(MouseEvent.ANY, mouseEventHandler);
} else {
subScene.removeEventHandler(MouseEvent.ANY, mouseEventHandler);
}
}
private final EventHandler<MouseEvent> mouseEventHandler = event -> {
double xFlip = -1.0, yFlip=1.0; // y Up
if (event.getEventType() == MouseEvent.MOUSE_PRESSED) {
mousePosX = event.getSceneX();
mousePosY = event.getSceneY();
mouseOldX = event.getSceneX();
mouseOldY = event.getSceneY();
} else if (event.getEventType() == MouseEvent.MOUSE_DRAGGED) {
double modifier = event.isControlDown()?0.1:event.isShiftDown()?3.0:1.0;
mouseOldX = mousePosX;
mouseOldY = mousePosY;
mousePosX = event.getSceneX();
mousePosY = event.getSceneY();
mouseDeltaX = (mousePosX - mouseOldX);
mouseDeltaY = (mousePosY - mouseOldY);
if(event.isMiddleButtonDown() || (event.isPrimaryButtonDown() && event.isSecondaryButtonDown())) {
cameraXform2.setTx(cameraXform2.t.getX() + xFlip*mouseDeltaX*modifierFactor*modifier*0.3);
cameraXform2.setTy(cameraXform2.t.getY() + yFlip*mouseDeltaY*modifierFactor*modifier*0.3);
}
else if(event.isPrimaryButtonDown()) {
cameraXform.setRy(cameraXform.ry.getAngle() - yFlip*mouseDeltaX*modifierFactor*modifier*2.0);
cameraXform.setRx(cameraXform.rx.getAngle() + xFlip*mouseDeltaY*modifierFactor*modifier*2.0);
}
else if(event.isSecondaryButtonDown()) {
double z = cameraPosition.getZ();
double newZ = z - xFlip*(mouseDeltaX+mouseDeltaY)*modifierFactor*modifier;
cameraPosition.setZ(newZ);
}
}
};
Handling the model
Now we can put it all together and create the Rubik class, where the 3D model is imported and all the meshviews are created and grouped in cube, which is added to the content subscene. At the same time, rot is instantiated with the original position of the cubies.
public class Rubik {
public Rubik(){
// Import Rubik's Cube model and arrows
Model3D model=new Model3D();
model.importObj();
mapMeshes=model.getMapMeshes();
cube.getChildren().setAll(mapMeshes.values());
dimCube=cube.getBoundsInParent().getWidth();
// Create content subscene, add cube, set camera and lights
content = new ContentModel(800,600,dimCube);
content.setContent(cube);
// Initialize 3D array of indexes and a copy of original/solved position
rot=new Rotations();
order=rot.getCube();
// save original position
mapMeshes.forEach((k,v)->mapTransformsOriginal.put(k, v.getTransforms().get(0)));
orderOriginal=order.stream().collect(Collectors.toList());
// Listener to perform an animated face rotation
rotMap=(ov,angOld,angNew)->{
mapMeshes.forEach((k,v)->{
layer.stream().filter(l->k.contains(l.toString()))
.findFirst().ifPresent(l->{
Affine a=new Affine(v.getTransforms().get(0));
a.prepend(new Rotate(angNew.doubleValue()-angOld.doubleValue(),axis));
v.getTransforms().setAll(a);
});
});
};
}
}
Finally we create a listener for rotating layers of cubies in a Timeline animation. As the rotations are prepended to the current affine matrix of the cubies, to perform a smooth animation we'll change the angle between 0 and 90º, and listen to how the timeline internally interpolates it, making small rotations between the angNew and angOld angles.
So the method to perform the rotation could be like this:
public void rotateFace(final String btRot){
if(onRotation.get()){
return;
}
onRotation.set(true);
// rotate cube indexes
rot.turn(btRot);
// get new indexes in terms of blocks numbers from original order
reorder=rot.getCube();
// select cubies to rotate: those in reorder different from order.
AtomicInteger index = new AtomicInteger();
layer=order.stream()
.filter(o->!Objects.equals(o, reorder.get(index.getAndIncrement())))
.collect(Collectors.toList());
// add central cubie
layer.add(0,reorder.get(Utils.getCenter(btRot)));
// set rotation axis
axis=Utils.getAxis(btRot);
// define rotation
double angEnd=90d*(btRot.endsWith("i")?1d:-1d);
rotation.set(0d);
// add listener to rotation changes
rotation.addListener(rotMap);
// create animation
Timeline timeline=new Timeline();
timeline.getKeyFrames().add(
new KeyFrame(Duration.millis(600), e->{
// remove listener
rotation.removeListener(rotMap);
onRotation.set(false);
}, new KeyValue(rotation,angEnd)));
timeline.playFromStart();
// update order with last list
order=reorder.stream().collect(Collectors.toList());
}
RubikFX, Lite Version
Later on we'll add more features, but for now let's create a JavaFX application, with a BorderPane, add content to the center of the pane, and a few toolbars with buttons to perform rotations.
public class TestRubikFX extends Application {
private final BorderPane pane=new BorderPane();
private Rubik rubik;
@Override
public void start(Stage stage) {
rubik=new Rubik();
// create toolbars
ToolBar tbTop=new ToolBar(new Button("U"),new Button("Ui"),new Button("F"),
new Button("Fi"),new Separator(),new Button("Y"),
new Button("Yi"),new Button("Z"),new Button("Zi"));
pane.setTop(tbTop);
ToolBar tbBottom=new ToolBar(new Button("B"),new Button("Bi"),new Button("D"),
new Button("Di"),new Button("E"),new Button("Ei"));
pane.setBottom(tbBottom);
ToolBar tbRight=new ToolBar(new Button("R"),new Button("Ri"),new Separator(),
new Button("X"),new Button("Xi"));
tbRight.setOrientation(Orientation.VERTICAL);
pane.setRight(tbRight);
ToolBar tbLeft=new ToolBar(new Button("L"),new Button("Li"),new Button("M"),
new Button("Mi"),new Button("S"),new Button("Si"));
tbLeft.setOrientation(Orientation.VERTICAL);
pane.setLeft(tbLeft);
pane.setCenter(rubik.getSubScene());
pane.getChildren().stream()
.filter(n->(n instanceof ToolBar))
.forEach(tb->{
((ToolBar)tb).getItems().stream()
.filter(n->(n instanceof Button))
.forEach(n->((Button)n).setOnAction(e->rubik.rotateFace(((Button)n).getText())));
});
rubik.isOnRotation().addListener((ov,b,b1)->{
pane.getChildren().stream()
.filter(n->(n instanceof ToolBar))
.forEach(tb->tb.setDisable(b1));
});
final Scene scene = new Scene(pane, 880, 680, true);
scene.setFill(Color.ALICEBLUE);
stage.setTitle("Rubik's Cube - JavaFX3D");
stage.setScene(scene);
stage.show();
}
}
Now this is what we have already accomplished:
If you're interested in having a deeper look at the application, you can find the source code in my GitHub repository. Note you'll need to add the 3DViewer.jar.
How does it work?
Take, for instance, an initial "F" rotation. We apply it to rot:
// rotate cube indexes
rot.turn(btRot);
// get new indexes in terms of blocks numbers from original order
reorder=rot.getCube();
Using rot.printCube() we can see the numbers of cubies for the solved cube (order) and for the new one, with the frontal layer rotated clockwise (reorder):
order: 50 51 52 49 54 53 59 48 46 || 58 55 60 57 62 61 47 56 63 || 67 64 69 66 71 70 68 65 72
reorder: 59 49 50 48 54 51 46 53 52 || 58 55 60 57 62 61 47 56 63 || 67 64 69 66 71 70 68 65 72
By comparing both lists and getting the different items, we know which cubies must be rotated, though we have to add the number of the central cubie (54), as it is the same in both lists, but it should be rotated too. So we create the list layer with these nine cubies:
// select cubies to rotate: those in reorder different from order.
AtomicInteger index = new AtomicInteger();
layer=order.stream()
.filter(o->!Objects.equals(o, reorder.get(index.getAndIncrement())))
.collect(Collectors.toList());
// add central cubie
layer.add(0,reorder.get(Utils.getCenter(btRot)));
// set rotation axis
axis=Utils.getAxis(btRot);
Utils is a class that manages the values for each type of rotation. For this case:
public static Point3D getAxis(String face){
Point3D p=new Point3D(0,0,0);
switch(face.substring(0,1)){
case "F":
case "S": p=new Point3D(0,0,1);
break;
}
return p;
}
public static int getCenter(String face){
int c=0;
switch(face.substring(0,1)){
case "F": c=4; break;
}
return c;
}
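Only the "F"/"S" case of getAxis is listed above; purely to illustrate the pattern (the extra cases and their signs are our assumption and would have to be checked against the real Utils class and the index conventions of Rotations.turn), the remaining faces could be handled like this:
// Hypothetical extension (assumed axes): U/E/D layers turn around Y,
// L/M/R layers around X, B around Z. Note that in Rotations.turn the
// index swap for U runs opposite to E/D (and R opposite to L/M), so some
// of these axes may need their sign flipped to match the visual turn.
public static Point3D getAxis(String face){
    Point3D p = new Point3D(0, 0, 0);
    switch(face.substring(0, 1)){
        case "F":
        case "S":
        case "B": p = new Point3D(0, 0, 1); break;
        case "U":
        case "E":
        case "D": p = new Point3D(0, 1, 0); break;
        case "L":
        case "M":
        case "R": p = new Point3D(1, 0, 0); break;
    }
    return p;
}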
Once we've got the cubies and the axis of rotation, now it's worth noticing how the rotation listener works. With the timeline, the angle goes from 0 to 90º with an EASE_BOTH interpolation (by default), so the angle increments are smaller at the beginning, bigger in the middle and smaller again at the end. This could be a possible list of increments: 0.125º-3º-4.6º-2.2º-2.48º-...-2.43º-4.78º-2.4º-2.4º-0.55º.
For every value in angNew, the listener rotMap applies a small rotation to a layer of cubies. For that we look in our HashMap which meshviews belongs to these cubies, and prepend a new rotation to their previous affine matrix:
// Listener to perform an animated face rotation
rotMap=(ov,angOld,angNew)->{
mapMeshes.forEach((k,v)->{
layer.stream().filter(l->k.contains(l.toString()))
.findFirst().ifPresent(l->{
Affine a=new Affine(v.getTransforms().get(0));
a.prepend(new Rotate(angNew.doubleValue()-angOld.doubleValue(),axis));
v.getTransforms().setAll(a);
});
});
};
So in 600 ms we apply around 30 to 40 small rotations to a bunch of around 40 meshviews.
Finally, after the rotation is done, we just need to update order with the last list of cubies, so we can start all over again with a new rotation.
2. The Rubik's Cube - Full Version
Adding more features
Now that we've got a working but pretty basic Rubik's cube JavaFX application, it's time for adding a few extra features, like graphic arrows and preview rotations to show the direction of rotation before they're performed.
Scramble and Sequences
Let's start by adding a scramble routine, to scramble the cubies before starting to solve the cube. To do that we generate a sequence of 25 random moves from a list of valid rotations.
private static final List<String> movements =
Arrays.asList("F", "Fi", "F2", "R", "Ri", "R2",
"B", "Bi", "B2", "L", "Li", "L2",
"U", "Ui", "U2", "D", "Di", "D2");
private String last="V", get="V";
public void doScramble(){
StringBuilder sb=new StringBuilder();
IntStream.range(0, 25).boxed().forEach(i->{
while(last.substring(0, 1).equals(get.substring(0, 1))){
// avoid repeating the same/opposite rotations
get=movements.get((int)(Math.floor(Math.random()*movements.size())));
}
last=get;
if(get.contains("2")){
get=get.substring(0,1);
sb.append(get).append(" ");
}
sb.append(get).append(" ");
});
doSequence(sb.toString().trim());
}
Then we have to perform this sequence, rotating each movement in turn. First we extract the rotations from the string, converting other notations (like lowercase letters, or ' instead of 'i' for counterclockwise rotations) to the one used here.
A listener is added to onRotation, so a new rotation starts only when the previous one has finished. A second listener on the index property stops the first one when the end of the list plus one is reached, allowing the last rotation to finish properly, and the rotations are saved for a later replay option.
public void doSequence(String list){
onScrambling.set(true);
List<String> asList = Arrays.asList(list.replaceAll("’", "i").replaceAll("'", "i").split(" "));
sequence=new ArrayList<>();
asList.stream().forEach(s->{
if(s.contains("2")){
sequence.add(s.substring(0, 1));
sequence.add(s.substring(0, 1));
} else if(s.length()==1 && s.matches("[a-z]")){
sequence.add(s.toUpperCase().concat("i"));
} else {
sequence.add(s);
}
});
System.out.println("seq: "+sequence);
IntegerProperty index=new SimpleIntegerProperty(1);
ChangeListener<Boolean> lis=(ov,b,b1)->{
if(!b1){
if(index.get()<sequence.size()){
rotateFace(sequence.get(index.get()));
} else {
// save transforms
mapMeshes.forEach((k,v)->mapTransformsScramble.put(k, v.getTransforms().get(0)));
orderScramble=reorder.stream().collect(Collectors.toList());
}
index.set(index.get()+1);
}
};
index.addListener((ov,v,v1)->{
if(v1.intValue()==sequence.size()+1){
onScrambling.set(false);
onRotation.removeListener(lis);
count.set(-1);
}
});
onRotation.addListener(lis);
rotateFace(sequence.get(0));
}
Note that we use a Dialog from ControlsFX to prevent losing previous moves.
Button bSc=new Button("Scramble");
bSc.setOnAction(e->{
if(moves.getNumMoves()>0){
Action response = Dialogs.create()
.owner(stage)
.title("Warning Dialog")
.masthead("Scramble Cube")
.message( "You will lose all your previous movements. Do you want to continue?")
.showConfirm();
if(response==Dialog.Actions.YES){
rubik.doReset();
doScramble();
}
} else {
doScramble();
}
});
If you want to load a sequence, like any of these, another Dialog with input allowed is used.
Button bSeq=new Button("Sequence");
bSeq.setOnAction(e->{
String response;
if(moves.getNumMoves()>0){
response = Dialogs.create()
.owner(stage)
.title("Warning Dialog")
.masthead("Loading a Sequence").lightweight()
.message("Add a valid sequence of movements:\n(previous movements will be discarded)")
.showTextInput(moves.getSequence());
} else {
response = Dialogs.create()
.owner(stage)
.title("Information Dialog")
.masthead("Loading a Sequence").lightweight()
.message( "Add a valid sequence of movements")
.showTextInput();
}
if(response!=null && !response.isEmpty()){
rubik.doReset();
rubik.doSequence(response.trim());
}
});
The results of scrambling a cube or adding a sequence of rotations can be seen in this video.
Timer and moves counter
Let's add now a timer using the new Date and Time API for Java 8. You may have noticed the timer in the bottom toolbar in the previous video.
For that, we use the following code in RubikFX class:
private LocalTime time=LocalTime.now();
private Timeline timer;
private final StringProperty clock = new SimpleStringProperty("00:00:00");
private final DateTimeFormatter fmt = DateTimeFormatter.ofPattern("HH:mm:ss").withZone(ZoneId.systemDefault());
@Override
public void start(Stage stage) {
...
Label lTime=new Label();
lTime.textProperty().bind(clock);
tbBottom.getItems().addAll(new Separator(),lTime);
timer=new Timeline(new KeyFrame(Duration.ZERO, e->{
clock.set(LocalTime.now().minusNanos(time.toNanoOfDay()).format(fmt));
}),new KeyFrame(Duration.seconds(1)));
timer.setCycleCount(Animation.INDEFINITE);
rubik.isSolved().addListener((ov,b,b1)->{
if(b1){
timer.stop();
}
});
time=LocalTime.now();
timer.playFromStart();
}
For the counter, we'll add two classes. Move is a simple POJO class, with a string for the name of the rotation and a long for the timestamp of the movement. The Moves class will contain a list of moves.
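The Move class itself isn't listed in the post; a minimal sketch consistent with how it is used elsewhere (the (String, long) constructor, getFace() and getTimestamp()) could be:
// Minimal POJO sketch (assumed, not from the original post)
public class Move {
    private final String face;     // rotation name, e.g. "F" or "Ri"
    private final long timestamp;  // nanoseconds since the timer started
    public Move(String face, long timestamp){
        this.face = face;
        this.timestamp = timestamp;
    }
    public String getFace() { return face; }
    public long getTimestamp() { return timestamp; }
}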
public class Moves {
private final List<Move> moves=new ArrayList<>();
public Moves(){
moves.clear();
}
public void addMove(Move m){ moves.add(m); }
public List<Move> getMoves() { return moves; }
public Move getMove(int index){
if(index>-1 && index<moves.size()){
return moves.get(index);
}
return null;
}
public String getSequence(){
StringBuilder sb=new StringBuilder("");
moves.forEach(m->sb.append(m.getFace()).append(" "));
return sb.toString().trim();
}
}
For adding the number of rotations, we use the following code in RubikFX class:
private Moves moves=new Moves();
@Override
public void start(Stage stage) {
...
rubik.getLastRotation().addListener((ov,v,v1)->{
if(!v1.isEmpty()){
moves.addMove(new Move(v1, LocalTime.now().minusNanos(time.toNanoOfDay()).toNanoOfDay()));
}
});
Label lMov=new Label();
rubik.getCount().addListener((ov,v,v1)->{
lMov.setText("Movements: "+(v1.intValue()+1));
});
tbBottom.getItems().addAll(new Separator(),lMov);
}
Replay
We can also replay the list of movements the user has made, stored in moves. For that we first need to restore the state of the cube right after the scramble, and then perform one by one all the rotations from the list.
public void doReplay(List<Move> moves){
if(moves.isEmpty()){
return;
}
content.resetCam();
//restore scramble
if(mapTransformsScramble.size()>0){
mapMeshes.forEach((k,v)->v.getTransforms().setAll(mapTransformsScramble.get(k)));
order=orderScramble.stream().collect(Collectors.toList());
rot.setCube(order);
count.set(-1);
} else {
// restore original
doReset();
}
onReplaying.set(true);
IntegerProperty index=new SimpleIntegerProperty(1);
ChangeListener<Boolean> lis=(ov,v,v1)->{
if(!v1 && moves.size()>1){
if(index.get()<moves.size()){
timestamp.set(moves.get(index.get()).getTimestamp());
rotateFace(moves.get(index.get()).getFace());
}
index.set(index.get()+1);
}
};
index.addListener((ov,v,v1)->{
if(v1.intValue()==moves.size()+1){
onReplaying.set(false);
onRotation.removeListener(lis);
acuAngle=0;
}
});
onRotation.addListener(lis);
timestamp.set(moves.get(0).getTimestamp());
rotateFace(moves.get(0).getFace());
}
Rotation direction preview
Time for a new feature: 3D arrows will be shown on the rotating face or axis, to show the direction.
Actually, the JavaFX 3D API doesn't supply any way of building complex 3D models. There's impressive ongoing work by Michael Hoffer to provide a way by using Constructive Solid Geometry (CSG) here, kudos Michael!!
By using primitives and boolean operations with CSG you can build a model, and even export it with STL format.
You can use free or commercial 3D software for this task too. I designed these arrows with SketchUp Make and exported them to OBJ format so I could import them as we did with the cube using ObjImporter from 3DViewer.
While the design is fast, it requires manual editing of the created obj file to split long faces of more than 4 vertices, as they are not properly imported.
Another approach could be exporting the file to *.3ds and using the corresponding importer from August Lammersdorf.
Edit: Michael Hoffer kindly added an option to export to OBJ format, so now it would be possible to import the arrow model generated with CSG in JavaFXScad in our scene. Thanks Michael!
Once we have the model, we have to add it, scale and rotate it, so we can show the arrow in the rotating face.
For a rotation like 'Ui':
public void updateArrow(String face, boolean hover){
boolean bFaceArrow=!(face.startsWith("X")||face.startsWith("Y")||face.startsWith("Z"));
MeshView arrow=bFaceArrow?faceArrow:axisArrow;
if(hover && onRotation.get()){
return;
}
arrow.getTransforms().clear();
if(hover){
double d0=arrow.getBoundsInParent().getHeight()/2d;
Affine aff=Utils.getAffine(dimCube, d0, bFaceArrow, face);
arrow.getTransforms().setAll(aff);
arrow.setMaterial(Utils.getMaterial(face));
if(previewFace.get().isEmpty()) {
previewFace.set(face);
onPreview.set(true);
rotateFace(face,true,false);
}
} else if(previewFace.get().equals(face)){
rotateFace(Utils.reverseRotation(face),true,true);
} else if(previewFace.get().equals("V")){
previewFace.set("");
onPreview.set(false);
}
}
where the affine is calculated in the Utils class for the current face:
public static Affine getAffine(double dimCube, double d0, boolean bFaceArrow, String face){
Affine aff=new Affine(new Scale(3,3,3));
aff.append(new Translate(0,-d0,0));
switch(face){
case "U":
case "Ui": aff.prepend(new Rotate(face.equals("Ui")?180:0,Rotate.Z_AXIS));
aff.prepend(new Rotate(face.equals("Ui")?45:-45,Rotate.Y_AXIS));
aff.prepend(new Translate(0,dimCube/2d,0));
break;
}
return aff;
}
To trigger the drawing of the arrow, we set a listener on the toolbar buttons based on mouse hovering.
We can also add a small rotation (5º) as preview of the full rotation (90º) in the face selected, by calling rotateFace again, with bPreview=true at this point.
If the user clicks on the button, the rotation is completed (from 5º to 90º). Otherwise the rotation is cancelled (from 5º to 0º). In both cases, with a smooth animation.
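The hover listener itself isn't listed in the post; a possible sketch, wiring each toolbar button's hoverProperty to the updateArrow method shown above (the exact wiring is an assumption), could be:
// Hypothetical wiring: show/hide the arrow and the 5º preview
// when the mouse enters/leaves a rotation button.
pane.getChildren().stream()
    .filter(n -> n instanceof ToolBar)
    .forEach(tb -> ((ToolBar) tb).getItems().stream()
        .filter(n -> n instanceof Button)
        .forEach(n -> n.hoverProperty().addListener((ov, wasHovering, isHovering) ->
            rubik.updateArrow(((Button) n).getText(), isHovering))));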
Select rotation by picking
Finally, the rotation could be performed based on the mouse picking of a cubie face, with visual aid showing the arrow and performing a small rotation of 5º. If the mouse is dragged far enough the full rotation will be performed after it is released. If the mouse is released close to the origin, the rotation is cancelled.
For this feature, the critical part is being able to know which mesh we are selecting with the mouse click. And for that, the API provides MouseEvent.getPickResult().getIntersectedNode(), which returns one of the meshviews on the cube.
So the next step is finding which meshview this is and which cubie it belongs to. As all the meshes have a name, like 'Block46 (2)', by looking at the block number we identify the cubie.
Now we need to find which of its faces we have selected. For that we use the triangle coordinates of the mesh: the face triangles define a plane, so with the cross product we get the normal direction of that plane. Note we must apply the mesh's current set of transformations to the result.
private static Point3D getMeshNormal(MeshView mesh){
TriangleMesh tm=(TriangleMesh)mesh.getMesh();
float[] fPoints=new float[tm.getPoints().size()];
tm.getPoints().toArray(fPoints);
Point3D BA=new Point3D(fPoints[3]-fPoints[0],fPoints[4]-fPoints[1],fPoints[5]-fPoints[2]);
Point3D CA=new Point3D(fPoints[6]-fPoints[0],fPoints[7]-fPoints[1],fPoints[8]-fPoints[2]);
Point3D normal=BA.crossProduct(CA);
Affine a=new Affine(mesh.getTransforms().get(0));
return a.transform(normal.normalize());
}
public static String getPickedRotation(int cubie, MeshView mesh){
Point3D normal=getMeshNormal(mesh);
String rots=""; // Rx-Ry
switch(cubie){
case 0: rots=(normal.getZ()>0.99)?"Ui-Li":
((normal.getX()<-0.99)?"Ui-F":((normal.getY()>0.99)?"Ui-Li":""));
break;
}
return rots;
}
Once we have the normal, we can provide the user with two possible rotations (each with its two possible directions). To select which one to perform, we'll look at how the user moves the mouse while dragging. Note the mouse coordinates are 2D.
public static String getRightRotation(Point3D p, String selFaces){
double radius=p.magnitude();
double angle=Math.atan2(p.getY(),p.getX());
String face="";
if(radius>=radMinimum && selFaces.contains("-") && selFaces.split("-").length==2){
String[] faces=selFaces.split("-");
// select rotation if p.getX>p.getY
if(-Math.PI/4d<=angle && angle<Math.PI/4d){ // X
face=faces[0];
} else if(Math.PI/4d<=angle && angle<3d*Math.PI/4d){ // Y
face=faces[1];
} else if((3d*Math.PI/4d<=angle && angle<=Math.PI) ||
(-Math.PI<=angle && angle<-3d*Math.PI/4d)){ // -X
face=reverseRotation(faces[0]);
} else { //-Y
face=reverseRotation(faces[1]);
}
System.out.println("face: "+face);
} else if(!face.isEmpty() && radius<radMinimum){ // reset previous face
face="";
}
return face;
}
Now that we have the layer to rotate, we make a small rotation as a preview of rotation if the mouse is dragged far from the initial click point, with a minimum distance. Then if the user releases the mouse and the distance from the initial point is greater than a distance radClick, the rotation is completed. But if the distance is lower or the mouse is dragged under the distance radMinimum, the rotation is cancelled.
The next listing shows an EventHandler<MouseEvent> implemented to provide this behaviour. Note that we have to stop the camera rotations while we are picking a face and rotating a layer.
public EventHandler<MouseEvent> eventHandler=(MouseEvent event)->{
if (event.getEventType() == MouseEvent.MOUSE_PRESSED ||
event.getEventType() == MouseEvent.MOUSE_DRAGGED ||
event.getEventType() == MouseEvent.MOUSE_RELEASED) {
mouseNewX = event.getSceneX();
mouseNewY = -event.getSceneY();
if (event.getEventType() == MouseEvent.MOUSE_PRESSED) {
Node picked = event.getPickResult().getIntersectedNode();
if(null != picked && picked instanceof MeshView) {
mouse.set(MOUSE_PRESSED);
cursor.set(Cursor.CLOSED_HAND);
stopEventHandling();
stopEvents=true;
pickedMesh=(MeshView)picked;
String block=pickedMesh.getId().substring(5,7);
int indexOf = order.indexOf(new Integer(block));
selFaces=Utils.getPickedRotation(indexOf, pickedMesh);
mouseIniX=mouseNewX;
mouseIniY=mouseNewY;
myFace="";
myFaceOld="";
}
} else if (event.getEventType() == MouseEvent.MOUSE_DRAGGED) {
if(stopEvents && !selFaces.isEmpty()){
mouse.set(MOUSE_DRAGGED);
Point3D p=new Point3D(mouseNewX-mouseIniX,mouseNewY-mouseIniY,0);
radius=p.magnitude();
if(myFaceOld.isEmpty()){
myFace=Utils.getRightRotation(p,selFaces);
if(!myFace.isEmpty() && !onRotation.get()){
updateArrow(myFace, true);
myFaceOld=myFace;
}
if(myFace.isEmpty()){
myFaceOld="";
}
}
// to cancel preselection, just go back to initial click point
if(!myFaceOld.isEmpty() && radius<Utils.radMinimum){
myFaceOld="";
updateArrow(myFace, false);
myFace="";
}
}
} else if (stopEvents && event.getEventType() == MouseEvent.MOUSE_RELEASED) {
mouse.set(MOUSE_RELEASED);
if(!onRotation.get() && !myFace.isEmpty() && !myFaceOld.isEmpty()){
if(Utils.radClick<radius){
// if hand is moved far away do full rotation
rotateFace(myFace);
} else {
// else preview cancellation
updateArrow(myFace, false);
}
}
myFace=""; myFaceOld="";
stopEvents=false;
resumeEventHandling();
cursor.set(Cursor.DEFAULT);
}
}
};
Finally, we add this EventHandler to the scene:
scene.addEventHandler(MouseEvent.ANY, rubik.eventHandler);
This video shows how this event handling works.
Check if cube is solved
Finally, let's add a check routine. We know the initial solved order of cubies, but we need to take into account any of the 24 possible orientations of the cube, which can be achieved with up to two rotations.
private static final List<String> orientations=Arrays.asList("V-V","V-Y","V-Yi","V-Y2",
"X-V","X-Z","X-Zi","X-Z2",
"Xi-V","Xi-Z","Xi-Zi",
"X2-V","X2-Z","X2-Zi",
"X-Y","X-Yi","X-Y2",
"Xi-Y","Xi-Yi","X2-Y","X2-Yi",
"Z-V","Zi-V","Z2-V");
public static boolean checkOrientation(String r, List<Integer> order){
Rotations rot=new Rotations();
for(String s:r.split("-")){
if(s.contains("2")){
rot.turn(s.substring(0,1));
rot.turn(s.substring(0,1));
} else {
rot.turn(s);
}
}
return order.equals(rot.getCube());
}
So after any movement we have to check if the current order matches any of these 24 solutions. For that we can use parallelStream() with a filter in which we rotate a new cube to one of the possible orientations and check if it matches the current one:
public static boolean checkSolution(List<Integer> order) {
return Utils.getOrientations().parallelStream()
.filter(r->Utils.checkOrientation(r,order)).findAny().isPresent();
}
Conclusions
All along this post, we've been discussing the new JavaFX 3D API. It is powerful enough, but it lacks some of the usual tools of the 3D modelling world. Maybe they will come soon...
We've used the new lambdas and Stream API. I hope by now you've got a clear view of what you can do with them. For sure, they will definitely change the way we write code.
The Rubik's cube application has proven to be a nice way of testing these new capabilities, while enjoying playing, humm, I mean, developing the code.
This final video shows most of what we've accomplished in this post. As I said, it's for beginners like me with the Rubik's cube...
In my repo you can find all the code for this full version. Feel free to fork it and play with it. There are tons of improvements to make, so any pull request will be welcome!
Edit: And if you just want to try it before having a look at the code, here you can download a single executable jar. Download it and run it with Java 8 installed.
As always, thanks for reading! Please try it for yourself and share any comments you may have.
https://tex.stackexchange.com/questions/413580/how-to-correctly-change-the-side-that-marginpar-appears-per-case?noredirect=1 | # How to correctly change the side that marginpar appears per case?
I'm trying to use marginpar and todonotes to make comments in a LaTeX file so as to ease collaboration. The margin notes are just quick todos and fixes, and I don't want to hardcode the position of every margin note since they'll be temporary. So I'm trying to create small commands in the preamble and use them as I go.
My problem is with long margin texts: several of them in one paragraph/page will either overlap or get too far away from the actual text. Naturally, I thought I should be able to change the side of the page a margin note appears on per case. That is, I want to be able to define two commands, let's say \lmargin and \rmargin, so that all the margin notes with \lmargin appear on one side and all the ones with \rmargin appear on the other side. I don't really care which one is on the left and which one is on the right, as long as they are on opposite sides. Or ideally, it would be great if LaTeX could keep track of the margin notes and alternate between left and right automatically.
Searching through forums, the only solutions I found change the side of the margin notes for the whole document. The closest solution is to use \reversemarginpar. It works as long as the margin notes are in different paragraphs, but I don't want to break the paragraphs just for each note that I add to the text.
Here is a minimal (non-working) example for marginpar; the todonotes package behaves similarly:
\documentclass[]{article}
\usepackage{pgf}
\usepackage[ %
%top=Bcm,
%bottom=Hcm,
%outer=Ccm,
%inner=Acm,
%heightrounded,
marginparwidth=2.5cm,
%marginparsep=Ecm
]{geometry}
\usepackage{lipsum}
\usepackage{soul} % for highlighting the text: \hl
\usepackage{setspace} % for setting the linespace for a portion of the text: \setstretch
\definecolor{dkblue}{rgb}{0.0, 0.0, 0.6}
\definecolor{ltblue}{rgb}{0.6, 0.6, 1.0}
\usepackage{color}
\newcommand{\lmargin}[2]{\sethlcolor{ltblue}\hl{#1}\marginpar{\setstretch{0.5} {\color{dkblue} \tiny note: #2}}}
\begin{document}
And even there \lmargin{are}{there's something here that I need to pay attention to, and I need to be very clear about it, so I am going to write everything that comes to my mind here in the margin so that it doesn't take anyone 400 years to come up with the same thing!} more things that can be said, but I am \reversemarginpar\lmargin{not sure}{Same thing here: there's something here that I need to pay attention to, and I need to be very clear about it, so I am going to write everything that comes to my mind here in the margin so that it doesn't take anyone 400 years to come up with the same thing!} exactly what. \lipsum[7]
Do you see the problem \reversemarginpar\lmargin{here}{And this is what I need}? If I don't go to a new paragraph, everything changes altogether! and overlaps even when I do!
\end{document}
And this is how it looks:
While I was looking at my old paracol solution I started thinking I could do it better. In particular, I should be able to handle more than one note per paragraph.
The advantage of this solution is that notes will never overlap and will break across pages. Because the notes are being written to the AUX file, I put most of the formatting into \marginparformat. Also, it takes two runs.
The main problem in implementing this is that you can only switch columns between paragraphs, which can be solved using \everypar. Even then, you need to know at the beginning of the paragraph what will be coming up later. The solution is to write the needed information to the AUX file. When you switch back to the original column, it immediately kicks off another \everypar which you have to ignore or get trapped in a loop.
It should be noted that \everypar tends not to be preserved. I rewrote \@afterheading, but there are bound to be others.
\documentclass{article}
\usepackage[ %
%top=Bcm,
%bottom=Hcm,
%outer=Ccm,
%inner=Acm,
%heightrounded,
textwidth={\dimexpr 5in+5cm+2\columnsep},
marginparwidth=0pt,
marginparsep=0pt
]{geometry}
\usepackage[nopar]{lipsum}
\usepackage{soul} % for highlighting the text: \hl
\usepackage{setspace} % for setting the linespace for a portion of the text: \setstretch
\usepackage{color}
\definecolor{dkblue}{rgb}{0.0, 0.0, 0.6}
\definecolor{ltblue}{rgb}{0.6, 0.6, 1.0}
\usepackage{paracol}
\setcolumnwidth{2.5cm, 5in, 2.5cm}
\newcommand{\marginparformat}% only change between paragraphs
{\tiny% affects em and ex
\parindent=2em
\parskip=1ex
\sloppy
\setstretch{0.5}%
\color{dkblue}%
\noindent note: }
\newcounter{absparagraph}
\newcounter{leftmarginpar}
\newcounter{rightmarginpar}
\globalcounter{absparagraph}
\globalcounter{leftmarginpar}
\globalcounter{rightmarginpar}
\makeatletter
\newcommand{\leftmarginpar}[1]% #1=text
{\stepcounter{leftmarginpar}%
\pdfsavepos
\protected@write\@auxout{}{\string\newleftmarginpar{\theleftmarginpar}{\theabsparagraph}%
{\thepage}{\noexpand\number\pdflastypos}{#1}}}%
\newcommand{\rightmarginpar}[1]% #1=text
{\stepcounter{rightmarginpar}%
\pdfsavepos
\protected@write\@auxout{}{\string\newrightmarginpar{\therightmarginpar}{\theabsparagraph}%
{\thepage}{\noexpand\number\pdflastypos}{#1}}}%
%begin gory details
\newcommand{\current@page}{}% reserve global name
\newif\ifrepeatpar
\newlength{\leftmarginpar@y}
\newlength{\rightmarginpar@y}
\setlength{\leftmarginpar@y}{\pagetop@y}
\setlength{\rightmarginpar@y}{\pagetop@y}
\coordinate (sw1) at (current page.south west);
\coordinate (nw1) at (current page.north west);
\coordinate (leftmargin) at (intersection of lm--leftbottom and sw1--nw1);
\node[right,align=flush left ,text width=2cm, fill=green!20] at (leftmargin) {\color{gray!70}\footnotesize\bfseries{#2}};
\end{tikzpicture}
}
\begin{document}
\pagestyle{empty}
\tikzstyle{every picture}+=[remember picture]
\tikzrmargin{Remark in the right margin!}{This is the Remark.} bla bla bla bla bla gjqp bla bla bla bla blabla bla bla
bla bla blabla bla blabla bla blabla bla blabla bla blabla bla bla bla bla bla bla bla blabla bla bla bla bla blabla bla bla \tikzlmargin{Remark in the left margin!}{This is the Remark.}
bla bla blabla bla blabla bla blabla bla blabla bla blabla bla blabla bla bla bla bla blabla bla bla bla bla blabla bla bla
bla bla blabla bla blabla bla blabla bla blabla bla blabla bla blabla bla bla bla bla blabla bla bla bla bla blabla bla bla
bla bla blabla bla blabla bla \tikzrmargin{ 2nd remark in the right margin!}{This is the 2\textsuperscript{nd} remark}blabla bla blabla bla blabla bla blabla bla bla bla bla blabla bla bla bla bla blabla bla bla
bla bla blabla bla blabla bla blabla bla blabla bla blabla bla bla
This should be put inside a suitable macro! \tikzlmargin{On the left}{again!}
\end{document}
• This looks very cool. It seems like all the answers come down to tikz at some point. I might adopt this. Thanks for the workaround. Just one question, do you know if it's possible to make the numbering automatic? – Keivan Feb 5 '18 at 4:50
• And, about the first link. I've been looking at the todonotes, but as far as I figured out it behaves the same as marginpar regarding which side it appears. – Keivan Feb 5 '18 at 4:52
• @Keivan: Automatic numbering is obsolete now with the new version. Is this right? – Jens Schwaiger Feb 5 '18 at 10:56
• Yup. That's right – Keivan Feb 6 '18 at 20:01 |
http://mathhelpforum.com/geometry/106368-how-much-area-preserved-when-you-inscribe-circles-circles-print.html | # How much of the area is preserved when you inscribe circles in circles
• October 5th 2009, 06:58 PM
eeyore
How much of the area is preserved when you inscribe circles in circles
http://img30.imageshack.us/img30/282...lesproblem.png
What is the ratio of the area of the shaded portion in the figure on the right to the area of the shaded portion of the figure to the left? The circles are all tangent to each other even though the picture might not be exact. The inscribed circles in the left figure are congruent. The second figure is a result of inscribing 4 congruent circles in each of the 4 circles on the left.
I have no idea how to approach this. Thanks for taking the time to read my question!
• October 5th 2009, 09:49 PM
pflo
First, let's just look at the figure on the left with a single circle and four circles inscribed in it. Let $A$ be the area of the larger circle (with radius $R$) and $a$ be the area of each of the smaller circles (with radius $r$). You are looking for the following ratio: $\frac{A}{4a}$ which is $\frac{\pi R^2}{4(\pi r^2)}=\frac{R^2}{4r^2}$. So you essentially need to find the ratio of the radii.
Put a coordinate system on your circle picture, with the center of the large, original circle at the origin.
Look at the smaller circle in quadrant I of the coordinate system. It touches both the x-axis and the y-axis. Therefore, its center is at (r,r). Using this, you can calculate the distance from the origin to its center as $\sqrt{2}*r$ and if you go from the center to the point of tangency (adding an r from this point) you will have gone R. This means $\sqrt{2}*r+r=R$. Solving for r, you can see $r=\frac{R}{\sqrt{2}+1}$.
Plug this into the ratio above and you will find $\frac{R^2}{4r^2}=\frac{R^2}{4\left(\frac{R}{\sqrt{2}+1}\right)^2}=\frac{(\sqrt{2}+1)^2}{4}=\frac{3+2\sqrt{2}}{4}$
Can you use this information to figure out the answer to your question?
• October 6th 2009, 08:04 AM
eeyore
Yes, thank you! That was very helpful. |
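To spell out the last step (a sketch, assuming the shaded region in each figure is the union of the innermost inscribed circles): every level of inscription multiplies the shaded area by the same factor $\frac{4r^2}{R^2}$, where $r=\frac{R}{\sqrt{2}+1}$ is the radius found above. Writing $r_1$ for the radius of the four circles on the left and $r_2=\frac{r_1}{\sqrt{2}+1}$ for the radius of the sixteen circles on the right, the requested ratio of the right figure's shaded area to the left figure's is
$\frac{16\pi r_2^2}{4\pi r_1^2}=\frac{4r^2}{R^2}=\frac{4}{(\sqrt{2}+1)^2}=\frac{4}{3+2\sqrt{2}}=4\left(3-2\sqrt{2}\right)\approx 0.686$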
http://trac-hacks.org/ticket/10481 | # Ticket #10481 (closed defect: invalid)
Opened 7 months ago
## No Footnotes at all in Trac 1.0
Reported by: [email protected]
Assigned to: rjollos
Priority: high
Component: FootNoteMacro
Severity: major
Trac Release: 1.0
### Description
FootNoteMacro 1.03-r11767 is installed and enabled in Trac 1.0
The macro does generate a superscript index as well as the ALT text and Hyperlink, but there is no footnote at the bottom of the page.
we have<sup class="footnote"><a cannotgivehrefhere="#FootNote1" id="FootNoteRef1" title="We have of course older backups too, but these ...">1</a></sup>
## Change History
### 10/14/12 11:16:15 changed by anonymous
The cannotgivehrefhere above is of course normally href; I just wrote it that way to avoid the comment being rejected with an internal server error by trac-hacks...
### 10/19/12 10:31:25 changed by rjollos
• status changed from new to assigned.
• priority changed from normal to high.
### 10/20/12 09:39:05 changed by rjollos
Did you place [[FootNote]] at the bottom of the page? It is working fine for me when testing out this evening with Trac 1.1.1dev. If you still have no luck, please try out the example at FootNoteMacro#Example.
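For reference, the usage pattern being suggested is roughly the following (a sketch; the authoritative syntax is the one shown at FootNoteMacro#Example):
Some wiki text that needs a note.[[FootNote(Text of the first footnote.)]]
More wiki text.[[FootNote(Text of the second footnote.)]]
[[FootNote]]
The bare [[FootNote]] at the bottom of the page is what actually renders the footnotes collected so far.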
### 10/20/12 09:47:46 changed by rjollos
(In [12232]) Refs #10481:
• Changed to more common no-conflict mode jQuery(document).ready(function(){... syntax, replacing \$(function(){...
### (follow-up: ↓ 7 ) 10/20/12 10:27:53 changed by PetaMem
• status changed from assigned to closed.
• resolution set to invalid.
Argh...
So [[FootNote]] has to be placed at the bottom of the page. I probably stopped reading the docs too early and assumed (from experience with other systems) that placing the footnotes at the foot of the document would be the implicit behaviour.
I can see how this implementation offers greater flexibility, but I would still suggest that FootNoteMacro should by default "flush" all pending footnote content at the bottom of the page.
side note: after your suggestion, I placed [[Footnote]] at the bottom of the page and it still didn't work. ;-)
So: My bad. FootNoteMacro works, resolution: invalid. Maybe docs or default behaviour could be improved for more intuitive use, but so far I'm ok with it.
### (follow-up: ↓ 8 ) 10/20/12 10:29:32 changed by anonymous
Oh - BTW - on my system I do not get the aforementioned error "Error: Failed to load processor Footnote" - I just get a Footnote? link to some nonexistant document.
### (in reply to: ↑ 5 ) 10/22/12 04:46:41 changed by rjollos
I can see how this implementation offers greater flexibility, but still would suggest that the FootnoteMacro should by default "flush" all pending content at the bottom of the page.
Sure, that makes sense - append to the bottom of the page if they haven't already been flushed. I'll keep that in mind while working on t:#9037. I might get that change integrated here as well, but currently my attention is on t:#9037 when I have time to work on footnotes.
So: My bad. FootNoteMacro works, resolution: invalid. Maybe docs or default behaviour could be improved for more intuitive use, but so far I'm ok with it.
Let me know if you have some specific ideas for changing the docs, or feel free to edit the wiki page on your own.
### (in reply to: ↑ 6 ) 10/22/12 04:50:42 changed by rjollos
Oh - BTW - On my System I do not get the aforementioned error "Error: Failed to load processor Footnote" - I just get a Footnote? link to some nonexistant document.
I see the same behavior on newer versions of Trac. I think they've just gotten better about handling broken, removed or mistyped macro names.
http://tex.stackexchange.com/questions/185926/incompatibility-of-scrbook-and-fontspec-in-tex-live-2013 | # Incompatibility of scrbook and fontspec in TeX-Live 2013 [closed]
Since updating to TeX-Live 2013 I am facing the following error:
Compiling the following file by XeLaTeX ...
\documentclass{scrbook}
\usepackage{fontspec}
\begin{document}
bla
\end{document}
... ends up with
! Undefined control sequence.
<argument> \str_if_eq_x_p:nn
l.5 \end{document}
Replacing scrbook by scrartcl or scrreprt works smoothly. Also in my old TeX-Live 2012 installation the problem does not occur.
## closed as off-topic by Joseph Wright♦Apr 11 at 22:18
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question does not fall within the scope of TeX, LaTeX or related typesetting systems as defined in the help center." – Joseph Wright
You have mismatched versions of fontspec and l3kernel (expl3): nothing to do with KOMA per se (probably showing up as a different path is taken in the code). Can you post your log file as an edit? – Joseph Wright Jun 21 '14 at 14:04
Oh my goodness. I happened to have a separate copy of all the l3 packages in my $(HOME)/texmf tree, hence the version mismatch. I could have figured this out myself by checking the log files more carefully. Thanks so much, problem solved. – Johannes Jun 21 '14 at 14:10
I suspected something like that: this is quite a common one :-) – Joseph Wright Jun 21 '14 at 14:10 |
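A general way to spot this kind of mismatch (a debugging sketch, not specific to KOMA or fontspec): add \listfiles to the test file, and the end of the .log will list every loaded file together with its date and version, which makes a stale expl3/l3kernel copy sitting in a personal texmf tree easy to see:
\listfiles % print all loaded files with their versions at the end of the log
\documentclass{scrbook}
\usepackage{fontspec}
\begin{document}
bla
\end{document}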
https://electronics.stackexchange.com/questions/411418/transmission-line-with-a-resistor-what-happens-after-a-long-time | # Transmission line with a resistor. What happens after a long time?
Ok so I'm analyzing a problem with a very simple lossless transmission line:
In my case $$R=20 \Omega$$ and I have to find the evolution of uL(t)
So I had no problem doing the first iterations.
I think I don't need to give much details but basically
$$u_L=u_i + u_r$$
where the subscripts i and r denote the incident and reflected waves
I will constantly have:
$$u_i=U_0-u_r$$
$$u_r=\frac{R - R_{\omega}}{R + R_{\omega}}u_i$$
So I will have the following sequence
$$u_i=U_0=120V$$ $$u_r=-60V$$ $$u_L=60V$$
$$u_i=180V$$ $$u_r=-90V$$ $$u_L=90V$$
$$u_i=210V$$ $$u_r=-105V$$ $$u_L=105V$$
and so on and so forth.
I have no problem computing this iterations and I totally understand the phenomena of wave propagation in the transmission lines.
What I don't understand is why my book says that u_L becomes 120 V as time goes to infinity. Is this a general property? I think it makes sense: as time goes to infinity the output voltage should tend to the input voltage, as it would in ordinary lumped-circuit analysis. But I'm not sure I can jump to that conclusion.
I also worked through the limit cases R=0 and R=infinity: with R=0 I always get zero output voltage, as expected, while with an open circuit the output voltage keeps switching between 240 V and 0 V as time goes by (although its mean value is indeed 120 V).
Can someone explain to me mathematically why the output voltage tends to the input voltage when the load is a finite, non-zero resistor? Thanks!
• What is the source resistance of the step voltage generator? This heavily affects the signal. Is it Rsource=0? (Which means full signal reflection). – Stefan Wyss Dec 10 '18 at 5:36
• Yes it is zero. – Granger Obliviate Dec 10 '18 at 11:23
• When you say "mirror the input voltage" do you mean eventually be the same as or do you think that it becomes inverted as a mirror does to an image. I think you should choose that word carefully because it's possibly preventing an answer. Also you say at the start of your final paragraph that you don't understand why - we need some indication of what you do understand here else the answer becomes unfeasible. – Andy aka Dec 10 '18 at 12:25
• Thank you for the feedback I will edit the question to try to make it more clear – Granger Obliviate Dec 10 '18 at 14:28
• Anyone please ? – Granger Obliviate Dec 10 '18 at 15:51
Can someone explain me mathematically why the output voltage will tend to be equal to the input voltage when we have a finite value resistor (different than zero)?
Think about it from a power perspective. You launch a voltage down the t-line and associated with that voltage is a current. The current is determined by the launch voltage (120 volts) and the characteristic impedance (60 ohm) and is of course 2 amps for your example. The launched power is 480 watts.
That launched power gets dissipated completely if the termination (load) resistance is 60 ohms and that would be the end of the story. However, the termination resistor is much smaller at 20 ohms and the voltage at the far end becomes 60 volts (and you of course know that because you have correctly calculated that in your example). That is a power dissipation of only 180 watts and so the remainder of the power (300 watts) is reflected back to the sending end. Unfortunately, at the sending end there is nothing to dissipate that power because there is no sending end impedance; i.e. it is zero.
Some (short) time later you have 90 volts at the terminating end and this creates a power of 405 watts in the 20 ohm termination. But the relaunched voltage that created this was 120 volts + 60 volts and that has a power into the t-line of 540 watts.
Do you see that the "gap" between power dissipated in the termination and the re-launched power that creates it (a little bit earlier) is getting smaller: -
• Initially 480 watts was launched but only 180 watts was used
• Next time the relaunched power was 540 watts and 405 watts is used
The gap keeps getting smaller, and as it does the voltage on the termination becomes 105 volts, then 112.5 volts, then 116.25 volts, then 118.125 volts, and so on, approaching 120 volts. The power gap shrinks because energy is dissipated in the termination on every pass.
• Hi! Thank you for such a detailed answer! Yes I'm fully understanding now what is happening based on dissipated power and power gap! – Granger Obliviate Dec 10 '18 at 21:29 |
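A compact way to make the limit explicit is the standard bounce-diagram (lattice-diagram) series; this is a sketch using the values from the thread. With a zero source resistance the source reflection coefficient is $$\Gamma_s=-1$$, and the load coefficient is $$\Gamma_L=\frac{R-R_{\omega}}{R+R_{\omega}}=-\frac{1}{2}$$. Summing the successive wave arrivals at the load gives a geometric series:
$$u_L(n)=U_0\left(1+\Gamma_L\right)\sum_{k=0}^{n-1}\left(\Gamma_s\Gamma_L\right)^k=120\cdot\frac{1}{2}\cdot\frac{1-(1/2)^n}{1-1/2}=120\left(1-2^{-n}\right)\ \mathrm{V}$$
which reproduces the sequence 60, 90, 105, 112.5, ... V and tends to 120 V. More generally, whenever $$|\Gamma_s\Gamma_L|<1$$ the series converges to $$U_0\,\frac{1+\Gamma_L}{1-\Gamma_s\Gamma_L}$$, which equals $$U_0$$ for any finite non-zero load when $$\Gamma_s=-1$$. The limit cases match too: $$R=0$$ gives $$1+\Gamma_L=0$$, so $$u_L=0$$ for all time, while the open circuit gives $$\Gamma_s\Gamma_L=-1$$, so the partial sums never settle and the load voltage toggles between $$2U_0=240$$ V and 0 V.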
https://search.r-project.org/CRAN/refmans/clock/html/seq.clock_year_day.html | seq.clock_year_day {clock} R Documentation
Sequences: year-day
Description
This is a year-day method for the seq() generic.
Sequences can only be generated for "year" precision year-day vectors.
When calling seq(), exactly two of the following must be specified:
• to
• by
• Either length.out or along.with
Usage
## S3 method for class 'clock_year_day'
seq(from, to = NULL, by = NULL, length.out = NULL, along.with = NULL, ...)
Arguments
from: [clock_year_day(1)] A "year" precision year-day to start the sequence from. from is always included in the result.
to: [clock_year_day(1) / NULL] A "year" precision year-day to stop the sequence at. to is cast to the type of from. to is only included in the result if the resulting sequence divides the distance between from and to exactly.
by: [integer(1) / clock_duration(1) / NULL] The unit to increment the sequence by. If by is an integer, it is transformed into a duration with the precision of from. If by is a duration, it is cast to the type of from.
length.out: [positive integer(1) / NULL] The length of the resulting sequence. If specified, along.with must be NULL.
along.with: [vector / NULL] A vector whose length determines the length of the resulting sequence. Equivalent to length.out = vec_size(along.with). If specified, length.out must be NULL.
...: These dots are for future extensions and must be empty.
Value
A sequence with the type of from.
Examples
# Yearly sequence
x <- seq(year_day(2020), year_day(2040), by = 2)
x
# Which we can then set the day of to get a sequence of end-of-year values
set_day(x, "last")
# Daily sequences are not allowed. Use a naive-time for this instead.
try(seq(year_day(2019, 1), by = 2, length.out = 2))
as_year_day(seq(as_naive_time(year_day(2019, 1)), by = 2, length.out = 2))
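The by + length.out combination also satisfies the "exactly two" rule described above; here is a small additional sketch (illustrative values, not taken from the package documentation):
# Year-precision sequence specified with `by` + `length.out`
seq(year_day(2020), by = 5, length.out = 4)
# expected: the years 2020, 2025, 2030, 2035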
https://www.physicsforums.com/threads/wave-derivation-intensity-and-pressure-amplitude-relationship.183531/ | # Wave Derivation: Intensity and Pressure Amplitude Relationship
1. Sep 8, 2007
### PFStudent
1. The problem statement, all variables and given/known data
My professor mentioned that,
$${I} \propto {\Delta{{A}_{p}}^{2}}$$
Where,
$I$ $\equiv$ Intensity measured in units of $\frac{[W]}{[{m}^{2}]}$
$\Delta{{A}_{p}}$ $\equiv$ pressure amplitude measured in units of $[Pa]$
What I want to do is actually derive the relationship between $I$ and ${\Delta{A_{p}}}^{2}$; I did it below, but I am not sure whether it is right...
Can anyone check?
2. Relevant equations
$${s(x, t)} = {A_{s}}cos({kx} \mp {\omega}{t} + {\phi}),{{.}}{{.}}{{.}}{{.}}{{.}}[\pm{{.}}x{{.}}propagation]$$
$${\Delta{p(x, t)}} = \Delta{A_{p}}sin({kx} \mp {\omega}{t} + {\phi}),{{.}}{{.}}{{.}}{{.}}{{.}}[\pm{{.}}x{{.}}propagation]$$
$${\Delta{A_{p}}} = {v}{\rho}{\omega}{A_{s}}$$
$$I = \frac{1}{2}{\rho}{v}{\omega}^{2}{{A_{s}}^{2}}$$
3. The attempt at a solution
Using,
$$I = \frac{1}{2}{\rho}{v}{\omega}^{2}{{A_{s}}^{2}}$$
and
$${\Delta{A_{p}}} = {v}{\rho}{\omega}{A_{s}}$$
So rearranging, I get the following,
$${\Delta{{A_{p}}^{2}}} = {2}{\rho}{v}{I}$$
Is that right?
Any help is appreciated.
Thanks,
-PFStudent |
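As a quick check, squaring the pressure-amplitude relation and substituting the intensity formula gives the same result, so the rearrangement is consistent:
$${\Delta{A_{p}}^{2}} = {v}^{2}{\rho}^{2}{\omega}^{2}{{A_{s}}^{2}} = {\rho}{v}\left({\rho}{v}{\omega}^{2}{{A_{s}}^{2}}\right) = {\rho}{v}\left(2I\right) = {2}{\rho}{v}{I} \quad\Longrightarrow\quad I = \frac{{\Delta{A_{p}}^{2}}}{2{\rho}{v}}$$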
https://cms.math.ca/cjm/kw/filiform%20Lie%20algebras | location: Publications → journals
Search results
Search: All articles in the CJM digital archive with keyword filiform Lie algebras
Results 1 - 1 of 1
1. CJM 2014 (vol 67 pp. 55)
Barron, Tatyana; Kerner, Dmitry; Tvalavadze, Marina
On Varieties of Lie Algebras of Maximal Class
We study complex projective varieties that parametrize (finite-dimensional) filiform Lie algebras over ${\mathbb C}$, using equations derived by Millionshchikov. In the infinite-dimensional case we concentrate our attention on ${\mathbb N}$-graded Lie algebras of maximal class. As shown by A. Fialowski there are only three isomorphism types of $\mathbb{N}$-graded Lie algebras $L=\oplus^{\infty}_{i=1} L_i$ of maximal class generated by $L_1$ and $L_2$, $L=\langle L_1, L_2 \rangle$. Vergne described the structure of these algebras with the property $L=\langle L_1 \rangle$. In this paper we study those generated by the first and $q$-th components where $q>2$, $L=\langle L_1, L_q \rangle$. Under some technical condition, there can only be one isomorphism type of such algebras. For $q=3$ we fully classify them. This gives a partial answer to a question posed by Millionshchikov.
Keywords: filiform Lie algebras, graded Lie algebras, projective varieties, topology, classification
Categories: 17B70, 14F45