Everything posted by John
-
These questions aren't entirely invalid. "Line" and "point" are usually taken as primitive objects in geometry, and the properties of a line or point in any given geometry depend on the axioms used to describe them. Usually we define a line as containing at least two distinct points, so in that sense shrinking a line to length zero doesn't really work. You might enjoy reading about the concept of duality in projective geometry, in which we interchange the roles of points and lines in a projective plane while preserving incidence. However, this isn't directly related to what you seem to be describing. Shrinking a circle such that r = 0 produces an example of a degenerate conic. As for negative lengths (or negative sizes in general), in measure theory there is the concept of signed measure (measure is, in a sense, a generalization of the concept of size). The study and application of these concepts involve some fairly high-level mathematics (namely, it seems: geometry, algebraic geometry, and measure theory, of which I've studied only the first and a little of the third, so I can't provide too much information or insight), but if they're more than just passing points of curiosity for you, they're there for the learning.
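As a quick illustration of the degenerate case (this example is mine, not from your post): the circle of radius [math]r[/math] centered at the origin is [math]x^2 + y^2 = r^2[/math], and setting [math]r = 0[/math] leaves [math]x^2 + y^2 = 0[/math], whose only real solution is the single point [math](0, 0)[/math].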
-
'n-1' versus 'n' in sampling variance
John replied to DylsexicChciken's topic in Applied Mathematics
Thanks for letting me know. The forum software is a bit weird with links containing apostrophes. Since I can no longer edit my post, here is the corrected link: Bessel's correction. -
'n-1' versus 'n' in sampling variance
John replied to DylsexicChciken's topic in Applied Mathematics
This is called Bessel's correction. The associated Wikipedia article has a section explaining the source of the bias when dividing by n, as well as three proofs, the third of which includes a subsection dealing with the intuition behind the proof.
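If a numerical illustration helps, here is a minimal simulation sketch (the population, sample size, and trial count are arbitrary choices of mine, not from the thread):
[code]
import random
import statistics

# a large "population" with a known variance to estimate
population = [random.gauss(0, 1) for _ in range(100_000)]
true_var = statistics.pvariance(population)

n, trials = 5, 20_000
biased = unbiased = 0.0
for _ in range(trials):
    sample = random.sample(population, n)
    m = sum(sample) / n
    ss = sum((x - m) ** 2 for x in sample)
    biased += ss / n          # divide by n
    unbiased += ss / (n - 1)  # Bessel's correction: divide by n - 1

print(true_var)            # population variance, close to 1
print(biased / trials)     # comes out low, roughly (n-1)/n of the truth
print(unbiased / trials)   # close to the population variance
[/code] -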
Help with Permutation Problem (Grade 11 )
John replied to CasualKilla's topic in Linear Algebra and Group Theory
imatfaal's reasoning is correct. A good rule of thumb with these sorts of problems is to start with the most restricted positions, which in this case are the first and fourth. We know the first character must be a vowel, so there are five possibilities. This leaves four possibilities for the fourth character. Regardless of which two vowels occupy these spaces, we then have 24 and 23 possible characters for the second and third positions, respectively. The fifth and sixth characters can, of course, be treated separately, as everyone here seems to have done. Thus the number of valid passwords is 5(4)(24)(23)(9)(8), as indicated. To work it out more laboriously, we can go through the cases in sequence:

VVVVNN: 5(4)(3)(2)(9)(8)
VCVVNN: 5(21)(4)(3)(9)(8)
VVCVNN: 5(4)(21)(3)(9)(8)
VCCVNN: 5(21)(20)(4)(9)(8)

5(4)(3)(2)(9)(8) + 5(21)(4)(3)(9)(8) + 5(4)(21)(3)(9)(8) + 5(21)(20)(4)(9)(8) = 794880 = 5(4)(24)(23)(9)(8)

Edit: I somehow didn't really notice that the latter process is exactly what fiveworlds and imatfaal mentioned in their posts. Blame it on a long day at work.

Edit 2: Another way to see how and why the first method works is to mentally picture the set of all valid passwords. Separate them into blocks depending on which vowels are in the first and fourth positions, so there will be 5(4) blocks in total (one for A__E__, one for A__I__, etc.). For each of these, there are 24(23) ways to fill the second and third positions with the remaining letters. This is the reasoning encoded by the product 5(4)(24)(23)(9)(8).
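For anyone who wants to double-check the count, here is a brute-force sketch, assuming the format discussed above (four distinct letters with vowels in the first and fourth positions, followed by two distinct non-zero digits; the digit rule is my reading of the 9 and 8 factors):
[code]
from itertools import permutations

vowels = set("AEIOU")
letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

# count 4-letter prefixes: distinct letters, vowels in positions 1 and 4
prefix_count = sum(
    1 for p in permutations(letters, 4)
    if p[0] in vowels and p[3] in vowels
)

# positions 5 and 6: two distinct non-zero digits, hence the 9 * 8 factor
print(prefix_count * 9 * 8)  # 794880
[/code] -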
An exception to what? Also, I think you're mixing two different meanings of "invertible" here, going by your WolframAlpha link. What we've been talking about previously has been finding the inverse of a polynomial function, which is essentially finding the polynomial function that "undoes" the given polynomial function, returning x if we compose the two. This is different from the meaning referred to in the link, which is in the context of polynomial rings and refers to the unique polynomial which, multiplied by the given polynomial, returns the identity "1". Using this second context, we lose even more polynomials when we talk about inverses, since the only invertible elements in R[x] are the constant polynomials besides 0.
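To see why, here is the standard degree argument (a sketch, assuming the coefficients form a field such as the reals): if [math]p(x)q(x) = 1[/math], then [math]\deg p + \deg q = \deg 1 = 0[/math], which forces [math]\deg p = \deg q = 0[/math]. So both must be constants, and any non-zero constant [math]c[/math] is indeed invertible with inverse [math]\frac{1}{c}[/math].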
-
I'm not sure what you're doing here, heh. We showed in your previous thread that de must equal 1, else the "modified derivative" will not exist. Thus the method detailed in the first post will not work for the vast majority of linear functions (and as a note, for polynomials in general, "most" are not invertible, thus the method will not work most of the time). I'm not sure exactly what the limit presented in your second post is supposed to mean, as the limit of C(x) as it's usually defined will be infinity as x goes to infinity. Where did you find it? What's the context? For your third post, perhaps I'm misunderstanding something, but I do have several questions:

1. Isn't a hailstone sequence, in this context, simply a sequence of numbers hit by successive iterations of the Collatz function starting from a given integer?
2. What exactly do you mean when you say "Collatz number," and why is only every other element of a given hailstone sequence a Collatz number?
3. Why does it matter whether half a given Collatz number is a whole number, when the Collatz number in question will still be a whole number?

Also, unless I'm missing something, the generalization you've presented seems to simply restate the original conjecture in slightly more complex terms.
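For reference, here is a minimal sketch of the Collatz function and a hailstone sequence as I understand them from this thread (the function names are my own):
[code]
def collatz(n):
    # one step of the Collatz function: halve if even, else 3n + 1
    return n // 2 if n % 2 == 0 else 3 * n + 1

def hailstone(n):
    # the hailstone sequence starting from n, stopping at 1
    # (assuming, as the conjecture asserts, that it reaches 1)
    seq = [n]
    while n != 1:
        n = collatz(n)
        seq.append(n)
    return seq

print(hailstone(7))
# [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
[/code]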
-
This is sad to hear, but not entirely unexpected. Confirmation bias is a pretty strong effect, and if you've landed on the anti-vaccination side, then it's doubtful you'll easily be persuaded by the evidence available. But to reiterate, at the end of the day, the data we have available indicate that vaccination is safer than non-vaccination, for both your family and your community. It can be difficult, as a new parent, to decide what's best for your child, and in many cases, the best option isn't very clear. But here, the data is simply unequivocal. Vaccination is the better option. The rate of complications due to vaccination is much lower than the rate of disease and morbidity/mortality from the diseases these vaccines are designed to prevent. Take this as you will, but your reasoning here seems similar to concluding that drunk driving is safer than sober driving if you've known (as I have) more people who've gotten into wrecks sober than people who've gotten into wrecks drunk. Some may differ in their opinions, but I wouldn't go as far as to say you're a bad person for choosing not to vaccinate. However, you are taking an unnecessary risk, and parents who do see the benefits of vaccination may react more strongly. In any case, hopefully your child will grow up without experiencing mumps, measles, etc., and without infecting other children.
-
If you think about this geometrically, then the result makes fairly good sense. Consider that when taking the standard limit to find the derivative, we're looking at the value of the slope of a line between two points, f(x) and some other point f(x + h), as the two points are brought closer and closer together along the curve in question. Thus, in the limit, we find the slope when the two points are "the same," i.e. the slope of the tangent line at that point. Now look at the modifications you've introduced to the function arguments in the numerator. Since dih still goes to zero, f(x + dih) still approaches f(x). However, if de is anything other than 1, then f(dex) will not in general be equal to f(x). Thus the numerator f(x + dih) - f(dex) approaches the non-zero constant f(x) - f(dex). Since h in the denominator still goes to zero, the result is "infinity," i.e. the limit does not exist. Perhaps you've already thought along these lines, but just in case you hadn't, I thought I'd post.
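In symbols (writing your dih and dex as [math]d_i h[/math] and [math]d_e x[/math]): in [math]\lim_{h \to 0} \frac{f(x + d_i h) - f(d_e x)}{h}[/math], the numerator tends to the constant [math]f(x) - f(d_e x) \neq 0[/math] while the denominator tends to 0, so the quotient grows without bound and the limit does not exist.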
-
Not quite. Chebyshev's inequality, in terms of values within [math]k\sigma[/math] of [math]\mu[/math], is equivalent to the following: [math]P(|X - \mu| < k\sigma) \geq 1 - \frac{1}{k^2}[/math]. It may seem strange at first, but if you think of dividing up the interval [0, 1] (since probabilities must be between 0 and 1), then Chebyshev's inequality states that *at most* a [math]\frac{1}{k^2}[/math] portion of it is taken up by the probability of values outside [math]\mu \pm k\sigma[/math], which leaves *at least* [math]1 - \frac{1}{k^2}[/math] for the probability of values inside [math]\mu \pm k\sigma[/math]. I don't know if that clears things up extremely well, but there you have it. I think this addresses the rest of your post, too. Was this a test question you answered that was marked wrong, or is it simply the answer in the back of your book? The reason I ask is that, in the latter case, it's likely just a matter of convention. If it's the former, then it may be a case of the instructor wanting the tightest interval for which the conditions of the exercise are satisfied. That is to say, the way the question is worded in your original post, [math](-\infty, \infty)[/math] is also technically correct, and so is the answer you provided, but the same interval without inclusion of the boundaries is also correct and is in some sense the "smallest" valid interval.
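For a concrete example (the value of k is my own pick): with [math]k = 2[/math], Chebyshev gives [math]P(|X - \mu| < 2\sigma) \geq 1 - \frac{1}{4} = 0.75[/math], so at least 75% of the probability lies within two standard deviations of the mean, regardless of the distribution.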
-
Chebyshev's inequality states that given a random variable [math]X[/math] with expected value [math]\mu[/math] and standard deviation [math]\sigma[/math], we have [math]P(|X-\mu| \geq k\sigma) \leq \frac{1}{k^2}[/math], where [math]k[/math] is a real number greater than 0. That is, we're looking at the case where [math]|X-\mu|[/math] is greater than *or equal to* [math]k\sigma[/math], so the boundaries are included. Thus, when looking at the values within [math]k\sigma[/math] of [math]\mu[/math], we're looking at [math]P(|X-\mu| < k\sigma)[/math], so the boundaries are not included. timo is also correct in saying that the inclusion of boundaries doesn't matter in general with a continuous interval. And 0.000... = 0 for sure. If you intended to mean something like "infinitely many zeroes with a 1 at the end," then such a number doesn't exist: the notation describes a sequence with no end, so there is no "end" for the 1 to occupy.
-
Here is the relevant page: http://www.macfound.org/fellows/927/ Of course, congratulations are in order for all of the 2014 Fellows, but Dr. Zhang's story in particular is pretty awesome. Edit: Just to elaborate without forcing a read-through of the first link, Zhang had a difficult time of things after earning his Ph.D. After years of working in various non-academic jobs (e.g. as an accountant, delivery man, and Subway sandwich artist), he landed a position at the University of New Hampshire in 1999. In 2013 he achieved fame in the mathematics community for proving that there exist infinitely many pairs of prime numbers differing by at most 70,000,000. This was the first proof that infinitely many pairs of primes lie within some finite bound of each other, a major result in number theory and possibly a step towards proving the twin prime conjecture. In related (but older) news, thanks to the Polymath8 project proposed by Terence Tao, the original bound of 70,000,000 has been reduced to just 246; and assuming the Elliott-Halberstam conjecture holds, the bound is as low as 6.
-
This isn't really a paradox. The difficulty arises from treating "A is friends with B" and "B is friends with A" as two separate events, whereas due to the way Facebook works, the two are really the single event "A and B are friends."
-
It works the same as any other substitution. You're introducing a new variable [math]t[/math] and making [math]x[/math] a function of [math]t[/math], with the goal of working with a simpler integrand. The function is probably intended to be a real-valued function, as introducing complex numbers leads into various shenanigans generally beyond the scope of introductory calculus. Thus [math]x[/math] in this case can only take values in the interval [-3, 3]. Notice that since [math]x = 3\sin t[/math], we preserve this range of values for [math]x[/math], i.e. since [math]\sin t[/math] ranges from -1 to 1, [math]3\sin t[/math] ranges from -3 to 3. If you'd like to see the reasoning behind why integration by substitution works, then check out this ProofWiki article. Dave's quick guide to using LaTeX on the forum is the first stickied thread in the main Mathematics forum, but here is a link anyway.
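To make this concrete, here is a worked sketch, assuming the integral in question is the usual one for this substitution (if yours differs, the mechanics are the same): to evaluate [math]\int \sqrt{9 - x^2}\, dx[/math], let [math]x = 3\sin t[/math] with [math]t \in [-\frac{\pi}{2}, \frac{\pi}{2}][/math], so [math]dx = 3\cos t\, dt[/math] and [math]\sqrt{9 - 9\sin^2 t} = 3\cos t[/math]. Then [math]\int \sqrt{9 - x^2}\, dx = \int 9\cos^2 t\, dt = \frac{9}{2}\left(t + \sin t \cos t\right) + C = \frac{9}{2}\arcsin\frac{x}{3} + \frac{x}{2}\sqrt{9 - x^2} + C[/math].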
-
It seems all the sets are supposed to be subsets of the integers. This is what I take from the opening line, anyway.
-
He's simply pointing out that the direction of rotation doesn't matter, i.e. an object with rotational symmetry has the same symmetry rotating clockwise as it has rotating counterclockwise. At least in Euclidean geometry, the direction certainly doesn't matter. I suppose there may be other contexts in which rotational symmetry does depend on "direction" in some sense, but by the time a student gets that far, he should be past the point of being confused by minor details like this. Perhaps my professors have been abnormal, though, because in all the real discussions I recall having regarding rotational symmetry (specifically, in geometry and algebra), we used the counterclockwise convention anyway, or at least specified which direction we meant in a particular instance.
-
If you're asking about the sum Bignose presented, then no, there isn't a simple formula for calculating the sum. However, we can approximate it fairly well. [math]\sum_{i=1}^{\infty} \frac{1}{i}[/math] is called the harmonic series, and the partial sum [math]\sum_{i=1}^{n} \frac{1}{i}[/math] for some finite n is called the nth harmonic number, denoted [math]H_n[/math]. It turns out that as n approaches infinity, the difference between [math]H_n[/math] and ln n approaches a limit, which we call the Euler-Mascheroni constant. Thus, as n increases, the approximation [math]\ln {n} + \gamma[/math] (where [math]\gamma[/math] is the aforementioned constant) becomes more and more accurate. The Calculation section of Wikipedia's article on harmonic numbers gives the first few terms of an asymptotic expansion for [math]H_n[/math], and including these dramatically improves the approximation. For comparison, check WolframAlpha's results for [math]H_{142}[/math], [math]\ln(142) + \gamma[/math], and [math]\ln(142) + \gamma + \frac{1}{2(142)} - \frac{1}{12(142^2)} + \frac{1}{120(142^4)}[/math].
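If anyone wants to reproduce the comparison without WolframAlpha, here's a minimal Python sketch (the hard-coded value of [math]\gamma[/math] is just its first several digits):
[code]
from math import log

# Euler-Mascheroni constant (first several digits, hard-coded for this sketch)
gamma = 0.5772156649015329

def H(n):
    # nth harmonic number by direct summation
    return sum(1.0 / i for i in range(1, n + 1))

n = 142
approx = log(n) + gamma
better = approx + 1/(2*n) - 1/(12*n**2) + 1/(120*n**4)

print(H(n))    # direct sum, about 5.5366
print(approx)  # ln(n) + gamma, about 5.5330
print(better)  # with correction terms, matches the direct sum to many digits
[/code]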
-
I think this is more a philosophical discussion than a mathematical one, really. While infinity doesn't actually exist (at least for all practical purposes) in our universe, we are able to reason about it in a logically sound way in mathematics. There is an entire school in the philosophy of mathematics called finitism, which rejects the idea of infinite objects. But it sounds like you do accept infinity, but simply disagree with some of the consequences of its existence. As for Cantor, you're certainly not the only person to have ever doubted the validity of his work, but it's pretty well accepted by the mathematical community at this point.
-
It seems like you're treating infinity as simply a very large integer, whereas infinity is in fact a concept defined as being larger than any number. While we do run into problems using f(n) = 2n on finite subsets of the integers like [0 .. 6] in the sense of your post, we will never run into similar problems using the entire set of non-negative integers, for instance, since there is no highest integer beyond which we'll run out of outputs. I'm not sure what your meaning is with regards to mathematical induction. The principle of mathematical induction (PMI) simply states that given some natural number n and property P, if P(n) holds, and if P(k) implies P(k + 1) for every natural number k greater than or equal to n, then P holds for all natural numbers greater than or equal to n. This is taken as an axiom in Peano arithmetic, but can also be proven from the well-ordering principle (and indeed, the WOP can also be proven from the PMI, i.e. the WOP and PMI are logically equivalent). The validity of these statements is somewhat built into our definition and concept of the natural numbers, though I'm sure there are some logicians out there who question their validity.
-
Very well. If I read too much into what you said, then I apologize. Disregard my abandonment of the thread. The original point of contention was that Acme declared the primes to be randomly distributed, and Unity+ responded by saying they could very easily appear random without actually being so, i.e. the distribution of the primes could easily be pseudorandom, and we have no proof for or against that idea. What I linked above certainly doesn't assert that the primes are distributed randomly. Really, the best (and only, barring some major advances no one's yet mentioned and we don't know about) answer to this entire "debate" is simply that we don't know for sure, but it's quite possible (and useful in some ways to believe) that there is ultimately some pattern we may uncover. Edit: And again, all I said about the material linked in my first post was that Terence Tao had given a presentation on the subject some of us might find interesting to read. I made no claims as to the content of the presentation, though admittedly the wording of my post may have implied otherwise.
-
Dial it back a bit, thank you. I have no dog in this fight, and the slides were meant simply as interesting reading, not as a support for my limited understanding (which I've been reading this thread in an effort to improve). Perhaps I should have said my understanding "has been" rather than "is." Keep in mind that the distribution of primes does have some interesting structure. Is "random" exactly the right word in this case? And as for reading the slides themselves, towards the end there is a slide which says, and I quote, "Of course, the primes are a deterministic set of integers, not a random one, so the predictions given by random models are not rigorous." Perhaps Tao is using multiple meanings of the word "deterministic" here, or something. In any case, given the tone of this thread, I'm out. Take care. Edit: I thought I'd add, there is a cleaned up copy of Tao's presentation available as a PDF from Springer here: http://www.springer.com/cda/content/document/cda_downloaddocument/9783642195327-c1.pdf?SGWID=0-0-45-1140839-p174105361 The general idea does seem to be that the primes are probably pseudorandom, though no proof has been found.
-
With the caveat that I'm nowhere near a number theorist, my understanding is that the distribution of the primes is pseudorandom, i.e. it appears random but is actually deterministic. Terence Tao actually gave a talk on this subject a few years ago, and his slides (which might be worth reading for anyone interested in this topic) can be found here: http://terrytao.files.wordpress.com/2009/07/primes1.pdf Of course, that's Terry Tao, and other mathematicians may take a different view.
-
Well, we can rearrange and expand, giving us the following (starting from the second-to-last step in my post above): [math]\begin{array}{rcl} \left(\frac{(n-2)!}{(k-1)!(n-k-1)!}\right) \left(\frac{k!(n-k)!}{n!}\right) & = & \left(\frac{(n-2)!}{n!}\right) \left(\frac{k!}{(k-1)!}\right) \left(\frac{(n-k)!}{((n-k)-1)!}\right) \\ & = & \left(\frac{(n-2)(n-3)...(2)(1)}{n(n-1)(n-2)(n-3)...(2)(1)}\right) \left(\frac{k(k-1)(k-2)...(2)(1)}{(k-1)(k-2)...(2)(1)}\right) \left(\frac{(n-k)(n-k-1)(n-k-2)...(2)(1)}{(n-k-1)(n-k-2)...(2)(1)}\right) \\ & = & \left(\frac{1}{n(n-1)}\right) \left(\frac{k}{1}\right) \left(\frac{n-k}{1}\right) \\ & = & \frac{k(n-k)}{n(n-1)} \end{array}[/math] This last fraction also follows from the other method. Assume we're testing n tubes, and let k be the number of bad tubes. Then n - k is the number of good tubes. The probability that the first tube is good is (n - k)/n, and in this case, what remains are k bad tubes and n - 1 tubes total, so the probability that the second tube is bad is k/(n - 1). Multiplying, we arrive at [math]\left(\frac{n-k}{n}\right) \left(\frac{k}{n-1}\right)[/math]. Now, the probability that the first tube is bad is k/n, and in this case, what remains are n - k good tubes and n - 1 total tubes, so multiplying, we arrive at [math]\left(\frac{k}{n}\right) \left(\frac{n-k}{n-1}\right)[/math].
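If a sanity check of the algebra is useful, here's a minimal Monte Carlo sketch (the values of n and k are arbitrary picks of mine):
[code]
import random

n, k = 10, 3  # total tubes, bad tubes (arbitrary example values)
trials = 100_000
hits = 0
for _ in range(trials):
    tubes = ['B'] * k + ['G'] * (n - k)
    random.shuffle(tubes)
    # count trials where the first tube is good and the second is bad
    if tubes[0] == 'G' and tubes[1] == 'B':
        hits += 1

print(hits / trials)                 # simulated, about 0.2333
print(k * (n - k) / (n * (n - 1)))   # exact: k(n-k)/(n(n-1)) = 21/90
[/code]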
-
Well, keep in mind that the probability for either GB or BG in isolation is simply [math]\frac{{{n-2}\choose{k-1}}}{{{n}\choose{k}}}[/math]. There's no need to double it unless you're wanting to know the probability of either sequence happening. As for simplification, factorials simplify in pleasant ways. Consider n! and (n - 1)! for instance. Now, n! is n(n - 1)(n - 2)...(2)(1) while (n - 1)! is (n - 1)(n - 2)(n - 3)...(2)(1), so for example (and assuming n > 0), [math]\frac{n!}{(n-1)!} = \frac{n(n-1)(n-2)...(2)(1)}{(n-1)(n-2)(n-3)...(2)(1)} = n[/math]. Looking at our probability, then, we have [math]\frac{{{n-2}\choose{k-1}}}{{{n}\choose{k}}} = \frac{\frac{(n-2)!}{(k-1)!((n-2)-(k-1))!}}{\frac{n!}{k!(n-k)!}} = \left(\frac{(n-2)!}{(k-1)!(n-k-1)!}\right) \left(\frac{k!(n-k)!}{n!}\right) = \frac{k(n-k)}{n(n-1)}[/math]. Don't worry too much about the combinatorics here. It's a handy way to look at certain (most?) discrete probability problems, but there's nothing wrong with the method you were using originally, once the appropriate variables are set up and the appropriate calculations carried out.
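If it's of interest, the simplification can also be spot-checked numerically with Python's math.comb (the values below are arbitrary picks of mine):
[code]
from math import comb

n, k = 12, 5  # arbitrary example values
lhs = comb(n - 2, k - 1) / comb(n, k)   # C(n-2, k-1) / C(n, k)
rhs = k * (n - k) / (n * (n - 1))       # the simplified form
print(lhs, rhs)  # both about 0.26515
[/code]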
-
The notation is read "n choose x," and it denotes a combination. Given n tubes, x of which are bad, there are n choose x ways to arrange the tubes in sequence. The symbolic definition is [math]{{n}\choose{x}} = \frac{n!}{x!(n-x)!}[/math]. Conceptually, here it means that given the n spots we have for our tubes, there are n choose x ways to choose x spots to hold the x bad tubes.
-
Let [math]n[/math] be the total number of tubes, and let [math]x[/math] be the number of bad tubes. Then there are [math]{{n}\choose{x}}[/math] possible arrangements of good and bad tubes. The number of arrangements in which tube 1 is good and tube 2 is bad is [math]{{n-2}\choose{x-1}}[/math], as is the number of arrangements in which tube 1 is bad and tube 2 is good. Thus the probabilities are equal. Essentially, if we consider our entire testing sequence to be a series of empty slots to be filled with tubes, then in either case we have one good tube and one bad tube in our first two slots, and so our remaining tube arrangements contain the same members, ordered in various ways. Consider similar questions regarding the first three tests instead of the first two. Letting G be good and B be bad, the probabilities of GBB, BGB and BBG should be equal, whereas the probabilities of GGB and BBG should not (unless, of course, there are equal numbers of good and bad tubes).