shmoe

  1. You've answered something that isn't quite equivalent. (phi(1)+...+phi(N))/(1+...+N) is the probability that a pair selected from A={(n,m) | 1<=m<=n<=N} has gcd(n,m)=1. This isn't the same as selecting from the set B={(n,m) | 1<=m<=N, 1<=n<=N}, as I originally asked for. The difference is the diagonal, the points where n=m. The map f from B to A that orders the pair (e.g. f(1,2)=(2,1)) is 1-1 on the diagonal but 2-1 everywhere else (f(1,2)=f(2,1)). Essentially, you're giving the diagonal elements more weight, and your probability of a relatively prime pair is smaller for every N. Since there are only N elements on the diagonal out of N^2 choices overall, this doesn't affect the limit, though. About the appearance of 1/zeta(2) in the solution: this is no accident, of course. Using the relation between phi and the Möbius function you can prove, for the real part of s > 2, [math]\sum_{n=1}^\infty\frac{\phi(n)}{n^s}=\frac{\zeta(s-1)}{\zeta(s)}[/math] If you look up Perron's formula, you will find an analytic approach to approximating the summatory function of the coefficients of a Dirichlet series like this.
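If a numerical illustration helps, here's a rough Python sketch (the function names are just mine) comparing the two sampling models; both ratios tend to 6/pi^2 = 1/zeta(2), with the triangular one sitting slightly lower for finite N because of the diagonal:

[code]
from math import gcd, pi

def coprime_ratio_triangular(N):
    """Pairs (n, m) with 1 <= m <= n <= N (the set A): the diagonal carries
    relatively more weight here."""
    hits = sum(1 for n in range(1, N + 1) for m in range(1, n + 1) if gcd(n, m) == 1)
    return hits / (N * (N + 1) // 2)

def coprime_ratio_square(N):
    """Ordered pairs (n, m) with 1 <= n, m <= N (the set B), as originally asked."""
    hits = sum(1 for n in range(1, N + 1) for m in range(1, N + 1) if gcd(n, m) == 1)
    return hits / N**2

N = 500
print(coprime_ratio_triangular(N))   # slightly smaller...
print(coprime_ratio_square(N))       # ...than this
print(6 / pi**2)                     # the common limit, 1/zeta(2)
[/code]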
  2. That's essentially it woelen; here's how you can deal with the error terms. I'll use [x] to denote the floor of a real number x. Note that [x] is always within 1 of x, so [x]=x+O(1). This gives [x]^2=x^2+O(x) and [x]=O(x).

[math]\sum_{k\leq N}\phi(k)=\sum_{k\leq N}k\sum_{dm=k}\frac{\mu(d)}{d}[/math]

This is the sum over all d and m with d*m<=N. Changing it to a sum over d and m (we can pull the d out of the inner sum) and using the formula for the sum of the first n integers gives:

[math]=\sum_{d\leq N}\frac{\mu(d)}{d}\sum_{m\leq [N/d]}md=\sum_{d\leq N}\frac{\mu(d)}{d}d\left(\frac{[N/d]^2}{2}+\frac{[N/d]}{2}\right)[/math]

Replacing [N/d] with N/d gives:

[math]=\frac{N^2}{2}\sum_{d\leq N}\frac{\mu(d)}{d^2}+O\left(\sum_{d\leq N}\frac{N}{d}\right)=\frac{N^2}{2}\sum_{d\leq N}\frac{\mu(d)}{d^2}+O\left(N\log{N}\right)[/math]

since [math]\sum_{d\leq N}d^{-1}=O(\log{N})[/math]; just bound the sum by the corresponding integral. Also bounding by an integral, you can show [math]\sum_{d> N}\mu(d)d^{-2}=O(N^{-1})[/math], which gives [math]\sum_{d\leq N}\mu(d)d^{-2}=\zeta(2)^{-1}+O(N^{-1})[/math]. Inserting this into the above, we get:

[math]\sum_{k\leq N}\phi(k)=\frac{N^2}{2\zeta(2)}+O\left(N+N\log{N}\right)=\frac{N^2}{2\zeta(2)}+O\left(N\log{N}\right)=\frac{3}{\pi^2}N^2+O(N\log{N})[/math]

which proves your limit. Now, if P(N) is the probability that two randomly selected integers less than or equal to N are relatively prime, what is the limit of P(N) as N -> infinity? You can likewise show that [math]\sum_{k\leq N}\frac{\phi(k)}{k}=\frac{6}{\pi^2}N+O(\log{N})[/math] using the same approach (this will be slightly simpler), or you can use partial summation to get from one result to the other (partial summation is the 'integration by parts' of summation; it's actually integration by parts for Riemann-Stieltjes integrals). This leads to the conclusion that the 'average value' of phi(n)/n is 6/pi^2.
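If you want to see the asymptotics numerically, here's a quick sketch in Python (the sieve is just one standard way to tabulate phi, not part of the argument above):

[code]
from math import pi

def totients_up_to(N):
    """Tabulate phi(1), ..., phi(N) with a simple sieve: for each prime p,
    scale every multiple of p by (1 - 1/p)."""
    phi = list(range(N + 1))
    for p in range(2, N + 1):
        if phi[p] == p:                      # p is prime (untouched so far)
            for m in range(p, N + 1, p):
                phi[m] -= phi[m] // p
    return phi

N = 10000
phi = totients_up_to(N)
S = sum(phi[1:])                             # summatory totient
print(S / N**2, 3 / pi**2)                   # agree to roughly O(log N / N)
print(sum(phi[k] / k for k in range(1, N + 1)) / N, 6 / pi**2)
[/code]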
  3. You are using O(n) in a non-standard way, or at least not one I'm familiar with, if you think saying phi(n) is on average of order O(n) is anything but a trivial statement. I guess you mean to say phi(n) is "close" to n for "lots" of values of n? This, or a precise statement of it, is essentially equivalent to proving your limit is something nonzero. We can find the 'average' value of phi(n)/n (average in an asymptotic kind of sense). I'll give some details once you've resolved your limit. I'm telling you it is the latter case! Everything you need is in my last post. I can fill out the details if you like, but I'll wait until you ask for them.
  4. Ahh, so you don't have the answer. I had thought you did, for the same reason matt did, when you said "This limit is defined and has a very nice and surprising value". You can compute values as high as you like; it does not show this limit exists! And no, I wouldn't consider some as-yet-unidentified randomish-looking string of digits 'nice' in any way. From your last post: phi(n) is O(n) trivially, since phi(n)<n. To do this problem, use [math]\phi(n)=n\sum_{d|n}\frac{\mu(d)}{d}[/math] where mu(n) is the Möbius function (the sum is over divisors d of n). Change the order of summation. You'll also need to know that [math]1/\zeta(s)=\sum_{n=1}^{\infty}\mu(n)n^{-s}[/math] when the real part of s>1. The limit is indeed a 'nice' value, as matt was guessing (expressible in terms of naturally occurring, well-known constants), and it's not unrelated to the question he posed earlier (another reason I thought you might have known the answer!). I don't have the proof about the Taylor polynomials of e^x, but you should be able to find it with his name and the year given. Did you try MathSciNet?
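In case a concrete check is useful, here's a small Python sketch (my own helper names) of that Möbius identity; it reproduces the familiar totient values:

[code]
from math import isqrt

def mobius(n):
    """Möbius function by trial division: 0 if n has a squared prime factor,
    otherwise (-1) raised to the number of prime factors."""
    result, p = 1, 2
    while p <= isqrt(n):
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def phi_from_mobius(n):
    """phi(n) = n * sum over divisors d of n of mu(d)/d, kept as an exact integer."""
    return sum(mobius(d) * (n // d) for d in range(1, n + 1) if n % d == 0)

print([phi_from_mobius(n) for n in range(1, 13)])
# [1, 1, 2, 2, 4, 2, 6, 4, 6, 4, 10, 4]
[/code]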
  5. Why would you bother with the computation? At any rate, if my memory of error terms serves*, S(N)/N^2 should be within 1/N of the actual limit (the error is O(1/N); I think 1 worked as a constant, but I'm not positive). *Edit: my memory of error terms failed me. O(log(N)/N) is easy enough to prove; I had thought the log could be removed with more work. It can be reduced a little, but apparently not to anything less than logloglog(N), so a constant is right out. See http://www.mai.liu.se/~halun/complex/taylor/
  6. Consider what adding the digits together does to your number mod 9. (For your pattern, it should be mentioned that you repeat this 'adding the digits together' operation until you get a single digit)
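If you want to play with it, a few lines of Python (the helper name is mine) show the repeated digit sum landing on n mod 9, with 9 standing in for 0:

[code]
def digital_root(n):
    """Repeatedly add the decimal digits of n until a single digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

for n in (7, 18, 256, 9999, 123456789):
    print(n, digital_root(n), n % 9)
[/code]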
  7. This isn't what x^x^x... represents. It's the limit of the sequence x^x, x^(x^x), x^(x^(x^x)), ... What you are doing on your calculator appears to be x^x, (x^x)^(x^x), ((x^x)^(x^x))^((x^x)^(x^x)), ..., which is something different. With x=sqrt(2) you'll get: x=1.414213562... x^x=1.632526919... x^(x^x)=1.760839556... You can keep going; it will approach 2, but of course computing a finite number of terms proves nothing at all. You should prove (when x=sqrt(2)) that a) this sequence is increasing, and b) this sequence is bounded above by 2 (induction for both). This will prove the limit does in fact exist; you can then do some manipulations (combined with continuity of the exponential) to show this limit is 2.
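For the record, here's roughly what iterating the correct sequence looks like in Python; the values creep up toward 2, but as said above the numerics prove nothing, you still need the monotone-and-bounded argument:

[code]
from math import sqrt

x = sqrt(2)
t = x                     # the sequence x, x^x, x^(x^x), ...
for i in range(1, 11):
    print(i, t)
    t = x ** t            # exponentiate from the top down, not (x^x)^(x^x)
[/code]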
  8. Do you mean: [MATH]\lim_{n\rightarrow \infty}\frac{n}{\sqrt{\sum_{k=1}^{n} k^2}}[/MATH] As you had it, it doesn't make much sense. It's not the case that [MATH]\sum_{k=1}^{n} k^2[/MATH] is equal to [MATH]\int_{1}^{n} t^2 dt[/MATH] either, but you can use integrals to bound the sum: [MATH]\int_{0}^{n} t^2 dt\leq\sum_{k=1}^{n} k^2\leq\int_{1}^{n+1} t^2 dt[/MATH] Note the endpoints carefully (draw a graph of these). Your (n^3-1)/3 will be a lower bound also, good enough for this problem (maybe you just meant to bound it, but you had said "rewrite", which was troubling). See http://mathworld.wolfram.com/PowerSum.html for more on power sums; equations (23) and (33) are the relevant ones here.
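If it helps, those two integral bounds are easy to check numerically; a quick Python sketch:

[code]
def power_sum(n):
    """Sum of k^2 for k = 1, ..., n."""
    return sum(k**2 for k in range(1, n + 1))

for n in (5, 50, 500):
    lower = n**3 / 3                 # integral of t^2 from 0 to n
    upper = ((n + 1)**3 - 1) / 3     # integral of t^2 from 1 to n+1
    print(lower <= power_sum(n) <= upper, lower, power_sum(n), upper)
[/code]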
  9. No, you can't skip out on the definitions. If you do, you will literally have no idea what you are talking about: the symbols have no inherent meaning, only the meaning mathematicians assign to them. In my experience, the majority of the people who have problems with 0.99...=1 can't even tell you what the symbol "0.99..." means, and yet they feel qualified to make statements about it. This gets annoying fast to anyone who has actually taken the time to understand what the real numbers are (something you don't do in high school, or even in your typical intro calculus class), and is why you'll see short replies suggesting they go and do some much-needed reading. If they aren't willing to dive into the details, they will never be qualified to have an opinion worth more than dirt. You linked to a woefully inadequate definition of the real numbers. I would suggest you avoid using what appears to be a glossary on a philosophy website for mathematical definitions. If you want to learn about the reals, go crack open some textbooks. Any intro real analysis book will probably do (or something like Spivak's Calculus); look for something that gives a construction of the reals from the more familiar rationals (though there's probably plenty of stuff online). This is admittedly lazy on my part, but it's a much better use of my time for you to go away, read, and come back with questions than for me to build up a detailed theory from scratch when it already exists in many places.
  10. Go anywhere beyond intro calculus texts and "ln" has largely vanished, though even in those texts it's not a given that you'll see "ln".
  11. Absolutely. This sort of reduction is a good lead in to uses of Fermat/Euler though. Alas, they probably weren't capable of dividing polynomials either. I honestly don't know what they make them do in high school anymore. When I first learned about the Euclidean algorithm to write the gcd as a linear combination I remember thinking how bloody cool division was and wondering why this wasn't taught in grade school. It's such a simple idea, yet so very clever.
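For anyone following along, here's a minimal sketch of that idea in Python (the function name is mine): the extended Euclidean algorithm returns the gcd together with the coefficients of the linear combination:

[code]
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g, built from
    the same divide-with-remainder step as the plain Euclidean algorithm."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(240, 46)
print(g, x, y, 240 * x + 46 * y)    # 2 -9 47 2
[/code]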
  12. In addition to not being able to do the arithmetic, they don't like equivalence relations. This is also surprising, since they've been implicitly using equivalence relations on the rational numbers since I don't know how early. @the tree: If you've just been introduced to modular arithmetic, then I would suggest doing everything by hand until it's mind-numbingly easy. It is grade school stuff, like matt has said, so this shouldn't take long to come back. 2^31+1 mod 7... you'll learn Fermat's Little Theorem (and Euler's) later and it will perhaps be more obvious. That's overkill for numbers this small, though. What is 2^3 mod 7, and how does this help find 2^31?
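(For what it's worth, the bookkeeping is easy to confirm in Python, where pow with a third argument does modular exponentiation; but doing it by hand first is the whole point.)

[code]
# 2^3 = 8 is congruent to 1 mod 7, so 2^31 = (2^3)^10 * 2 is congruent to 2,
# and therefore 2^31 + 1 is congruent to 3 mod 7.
print(pow(2, 3, 7))       # 1
print(pow(2, 31, 7))      # 2
print((2**31 + 1) % 7)    # 3
[/code]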
  13. Now you know better. Go to the library and browse through some advanced math texts and see for yourself (anything beyond elementary calculus). If you like, you can write the authors and let them know your calculator says their notation is wrong. Bring a lot of stamps though, you'll need them.
  14. It should not be necessary. However, if you are doing many calculations involving large numbers, then mechanical means can be handy to save time. I would say that if the division algorithm isn't an utterly trivial thing, then it should be done by hand until it is, before turning to a calculator for help. In retrospect, though, I do regret posting a way of doing it with a calculator. I should have demanded the tree come up with one on his own; it's really not difficult with even a basic understanding of mod. I was feeling lazy, I suppose.
  15. Only if you are unfortunate enough to be trapped in a first-year calculus class. "log" denotes the natural logarithm in higher maths. Sometimes it will mean base 2 if you're trapped in a computer science class (sometimes "lg" is used there). Universal meaning in higher maths: "log" without an explicitly mentioned base means "whatever base makes this statement true" if it matters, and "whatever the reader's favorite base is" if it doesn't.