Everything posted by John
-
Your step 3 reads like you're assuming what you're trying to prove. You've assumed 3^k > k * 2^k for some k, and you must show how this implies that the inequality holds for k + 1. But your step 3 starts out stating the latter as fact. You're allowed to use the assumption as part of your proof, yes. No, you can't just plug in the base value. The base value and k aren't the same thing.

The way induction works is as follows:
1. We show the statement holds for some base value.
2. Under the assumption that the statement holds for some value k (which is not necessarily the base value), we show that it holds for k + 1.
3. Therefore, the statement holds for all k greater than or equal to the base value.

In your problem here, you first demonstrate that the inequality holds for 3. From that, you are to show that if it holds for any k >= 3, then it must hold for k + 1. Thus you will have shown that since the inequality holds for 3, it must also hold for 4, and thus also for 5, and for 6, etc.
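In case it helps, here's one way the inductive step could go (a sketch of my own, not necessarily the route your book intends): assuming [math]3^k > k \cdot 2^k[/math] for some [math]k \geq 3[/math], we get [math]3^{k+1} = 3 \cdot 3^k > 3k \cdot 2^k \geq (2k + 2) \cdot 2^k = (k + 1) \cdot 2^{k+1}[/math], where the middle inequality holds because [math]3k \geq 2k + 2[/math] whenever [math]k \geq 2[/math].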
-
A system of equations may have no solution, one solution, or many solutions. I'll have to really dig through this thread some time to get a handle on what you're doing.
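To illustrate that first sentence with quick examples of my own: [math]x + y = 1[/math] and [math]x + y = 2[/math] have no common solution; [math]x + y = 1[/math] and [math]x - y = 1[/math] have exactly one ([math]x = 1, y = 0[/math]); and [math]x + y = 1[/math] and [math]2x + 2y = 2[/math] have infinitely many.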
-
It seems like you just need to plug in the value they give you for each letter and simplify. For instance, if they give you something like 9 + a and tell you a = 3, then you'll plug in 3 for a, giving you 9 + 3, which equals 12. Try a few, maybe post your results, and if you're still having trouble, then we'll see what we can do.
-
You should be more specific about the requirements of the assignment and what you're struggling with. Then we'll be more inclined and better able to help you. Based on a Flash version I just played to see what the "cannibals vs monks game" even is, I think I see how I'd go about it--but then, I'm not really a programmer, so my way may be clunky and inefficient.
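For what it's worth, here's a rough sketch of the approach I'd take, in Python: treat every configuration of people and boat as a state, and search for a safe path from the start to the goal. The names and the breadth-first strategy are my own choices, not necessarily what your assignment expects:

[code]
from collections import deque

def solve(monks=3, cannibals=3, boat_capacity=2):
    # A state is (monks on the left bank, cannibals on the left bank,
    # boat on the left bank?). Everyone starts on the left.
    start = (monks, cannibals, True)
    goal = (0, 0, False)

    def safe(m, c):
        # Monks must never be outnumbered on a bank they occupy.
        left_ok = m == 0 or m >= c
        right_ok = (monks - m) == 0 or (monks - m) >= (cannibals - c)
        return left_ok and right_ok

    # All ways to load the boat with at least one person aboard.
    moves = [(dm, dc)
             for dm in range(boat_capacity + 1)
             for dc in range(boat_capacity + 1)
             if 1 <= dm + dc <= boat_capacity]

    # Breadth-first search finds a shortest sequence of crossings.
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (m, c, boat_left), path = queue.popleft()
        sign = -1 if boat_left else 1
        for dm, dc in moves:
            nm, nc = m + sign * dm, c + sign * dc
            state = (nm, nc, not boat_left)
            if 0 <= nm <= monks and 0 <= nc <= cannibals \
                    and safe(nm, nc) and state not in seen:
                seen.add(state)
                if state == goal:
                    return path + [state]
                queue.append((state, path + [state]))
    return None  # no safe crossing exists for these parameters

if __name__ == "__main__":
    for step in solve():
        print(step)
[/code]

Calling solve() with the defaults should return a shortest sequence of states from everyone on the starting bank to everyone across. Like I said, though, I'm not really a programmer, so there may well be a cleaner way.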
-
You're dealing with a fraction containing [math]i[/math], an imaginary number, in the denominator. You rewrite [math]i[/math] in the form [math](-1)^{\frac{1}{2}}[/math], but at the end of the day, you're still applying a power identity in a context where it isn't necessarily valid. But even if I'm misreading or wrong about that, we know your result is "manifestly wrong" because, as shown in previous posts, it results in a logical contradiction. The fact that [math]a + bi \neq -a + bi[/math] for nonzero [math]a[/math] is extremely easy to demonstrate, and I'm not sure why you're clinging to this supposed "proof" of the contrary.
-
You didn't read the entire page. See near the bottom: Note that one of said "manifestly wrong results" is the idea that [math]a + bi = -a + bi[/math] for nonzero [math]a[/math].
-
To amend my earlier post (with apologies--my excuse is I'm tired), it's probably better to say [math]\sqrt{a^{2}} = |a|[/math]. Also, another problem is that the power identity [math]\frac{a^{x}}{b^{x}} = \left(\frac{a}{b}\right)^{x}[/math] is valid for [math]a,b \in \mathbb{R}_{+}[/math] and [math]x \in \mathbb{R}[/math], but not necessarily for complex [math]a[/math] or [math]b[/math] unless [math]x[/math] is an integer.
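To see concretely why the restriction matters, take [math]a = 1[/math], [math]b = -1[/math], [math]x = \frac{1}{2}[/math] (an example of my own): the left side gives [math]\frac{1^{1/2}}{(-1)^{1/2}} = \frac{1}{i} = -i[/math], while the right side gives [math]\left(\frac{1}{-1}\right)^{1/2} = (-1)^{1/2} = i[/math], and [math]-i \neq i[/math].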
-
The ambiguity lies in steps where you claim that [math]a = \sqrt{a^{2}}[/math]. In fact, [math]\sqrt{a^{2}} = \pm a[/math]. Anyway, to be clear, [math]\frac{a}{i} = \frac{a + 0i}{0 + i} = \left(\frac{a + 0i}{0 + i}\right) \left(\frac{0 - i}{0 - i}\right) = \frac{-ai}{-i^{2}} = \frac{-ai}{-(-1)} = -ai[/math], as ydoaPs said. Here's a proof that [math]a + bi \neq -a + bi[/math] or [math]a = 0[/math]: Assume [math]a + bi = -a + bi[/math] and [math]a \neq 0[/math]. Then [math]a = -a \implies a + a = 0 \implies 2a = 0 \implies a = 0[/math]. Contradiction!
-
(-1)^(1/x) = i*sin(pi/x) + cos(pi/x) where x>=4 is this equation new?
John replied to Semjase's topic in Mathematics
You'd probably have to ask Google about that. As a particular example, Euler's identity itself is the case where x in your equation equals 1, and 1 < 4.
-
(-1)^(1/x) = i*sin(pi/x) + cos(pi/x) where x>=4 is this equation new?
John replied to Semjase's topic in Mathematics
Well, if we consider Euler's formula and identity, [math]e^{i\theta} = \cos \theta + i \sin \theta[/math] and [math]e^{i\pi} + 1 = 0 \implies e^{i\pi} = -1[/math], then your result follows pretty directly, since we have [math]i \sin \frac{\pi}{x} + \cos \frac{\pi}{x} = e^{\frac{i\pi}{x}} = \left(e^{i\pi}\right)^{\frac{1}{x}} = (-1)^{\frac{1}{x}}[/math].
-
I don't think we understand the workings of the human mind well enough yet to really answer that question, but you might enjoy reading about mathematical psychology. There is also the idea of the quantum mind to consider, though I don't think it's really a mainstream hypothesis. Also, if it turns out that we can create an artificial intelligence as capable as a human brain, then perhaps we'll be able to run simulations quickly enough to gather much more data than would be possible in human experiments, thereby improving our ability to predict how human subjects will act in various situations. However, even if we show that classical mechanics can be used to predict human behavior, I wouldn't expect 100% accuracy to be attainable (though I'm by no means an expert). We might get close, but perfection is a tall order.
-
Well, what you're doing with the sum is counting k-permutations for k = 1 to k = n, e.g. in your example, the sum is equal to [math]{_4P_1}+{_4P_2}+{_4P_3}+{_4P_4} = \frac{4!}{(4-1)!}+\frac{4!}{(4-2)!}+\frac{4!}{(4-3)!}+\frac{4!}{(4-4)!}[/math]. I don't know of a special function that does this, but you can express the sum using sigma notation of course, i.e. given [math]n[/math] points, you have [math]\sum\limits_{k=1}^n {_nP_k} = \sum\limits_{k=1}^n \frac{n!}{(n-k)!}[/math]. Having said that, I just did a search and found this discussion, which you might find interesting: http://math.stackexchange.com/questions/161314/what-is-the-sum-of-following-permutation-series-np0-np1-np2-cdots-npn
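If you just want the number for a given [math]n[/math], here's a quick way to compute it (a minimal Python sketch; note that math.perm requires Python 3.8 or later):

[code]
from math import perm

def sum_of_permutations(n):
    # Sum of nPk for k = 1..n, i.e. n!/(n-1)! + n!/(n-2)! + ... + n!/0!
    return sum(perm(n, k) for k in range(1, n + 1))

print(sum_of_permutations(4))  # 4 + 12 + 24 + 24 = 64
[/code]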
-
"Shocking" video of comet ISON causes "panic"?!?
John replied to sevenseas's topic in Astronomy and Cosmology
I agree with much of your post (though we don't know for certain whether any proposed scheme will work in practice), but for the record, we've never had enough nukes to destroy the entire Earth. We may have enough now to ruin most of the land area (if we assume reported numbers don't account for all nuclear weapons actually in existence, and depending on the average yield), and maybe at one point had enough to ruin nearly all of it, but we've never come even remotely close to having the power to blow up the planet. If you meant "destroy much of the surface" (as in making it unsuitable for life or modern civilization), then I apologize, but in context it looks like you were talking about actually blowing up the planet.
-
There is a proof already, yes, given by Andrew Wiles. However, it involves some pretty advanced and modern mathematics. Fermat wrote in the margin of a copy of a book called Arithmetica that he had a "marvelous proof" of the conjecture that was too large to fit in the margin, but no one knows what it could have been. It would almost certainly have involved what today would be considered pretty elementary mathematics. It may be that the proof Fermat thought he'd discovered contained an error. But if not, then whoever discovers a simpler proof (assuming anyone ever does) using techniques that Fermat likely had access to will probably achieve some fame, at least in the mathematical community.
-
Your proof for Problem IV is almost fine. You should remove "[math]x^3-y^3 =[/math]" from the first line, as [math]x^3-y^3 = (x - y)(x^2 + xy + y^2)[/math] is what you're trying to prove in the first place.

You may be overthinking Problem V just a bit. If you look at your proof for Problem IV, what you've done is show [math](x - y)(x^2 + xy + y^2) = x^3-y^3[/math], which also proves [math]x^3-y^3 = (x - y)(x^2 + xy + y^2)[/math]. The same strategy can be used to solve Problem V. Don't be thrown off by the "[math]+...+[/math]". Just multiply the terms you see, and you may notice something about which terms cancel in the result. From there, all that's left is to simplify, and you're done.

Keep in mind that plugging in specific values isn't sufficient to prove an idea. You might try three values for [math]n[/math] or 100,000,000,000 values for [math]n[/math], and even if they all check out, that isn't enough to prove the statement. (Edit: This statement has obvious exceptions, so take it with a grain of salt. For instance, if I asked you to prove that some property holds for all natural numbers [math]n[/math] less than or equal to 100,000,000,000, then you could just go through 1 to 100,000,000,000, trying each one, to see whether the property holds.)

To answer your more general question about proofs, knowing what steps are necessary comes down to practice and knowing one's audience. For instance, you can say something like, "Since [math]x^2[/math] is even, then [math]x[/math] must be even," without having to prove that, unless you think your reader isn't familiar with that result and doesn't know how to prove it himself. Even for a more mathematically experienced audience, though, sometimes more detail is preferable to less. For instance, an expert in complex analysis might not be aware of certain advanced or recent results in number theory. If you use the number theory result in some proof, it might be a good idea to at least specify (either inline or in a footnote) which lemma, theorem, or other result you're using. It can be difficult to strike a proper balance between conciseness and clarity.

If you want to see many examples of how proofs can be presented, you might enjoy browsing ProofWiki (if you're not aware of it already).
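Just to illustrate the cancellation I mean, using the cubic case you've already proven: [math](x - y)(x^2 + xy + y^2) = x^3 + x^2y + xy^2 - x^2y - xy^2 - y^3 = x^3 - y^3[/math]. The middle terms cancel in pairs, and the same thing happens with the longer product in Problem V.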
-
I apologize for the thread necromancy, but this thread is still on the first page of its section at least. Recently, I found out about this: http://people.math.gatech.edu/~cain/textbooks/onlinebooks.html It's a collection of links to math textbooks freely available online. Obviously I haven't read most of them, but at least a few that I briefly checked out seem decent. A few of the links are broken, but in general it's a pretty nice list.
-
The discussion's kind of morphed over time, and I'm honestly not sure what the argument is about at this point. However, I think everyone here agrees that philosophy is important in the history and foundations of science.
-
Would it perhaps be more accurate to say that there is a distinction but the border is fuzzy, with the intersection of philosophy and science being non-empty but science not being a subset of philosophy? This seems similar in some ways to a discussion of whether engineering is science, though I think scientists are often more than willing to say it isn't.
-
The bigger problem I see with PeterJ's "decent scientist" comment is that any scientist who does oppose ydoaPs' view could easily be dismissed as not a "decent scientist."

Feynman did say something like, "Philosophy is as useful to scientists as ornithology is to birds." Of course, I'm not sure it actually counts, since I don't know how useful he considered ornithology to be for birds. Hawking said philosophy is "dead," though as I recall it was more in the context of (at least some, maybe most) modern philosophers failing to keep up with the latest advances in scientific theory. Here is a fun discussion of what he said. Those are two examples. I'm sure there are others to find. Some may simply be taken out of context, but I'd wager there are actually many "decent scientists" who hold the view that philosophical study is not extremely important for scientists in modern practice.

Is everything that makes use of concepts grounded in philosophy reducible to philosophy? When a chef decides on ingredients to use in a dish, does the fact that he's making some use of aesthetics, chemistry and mathematics (whether he really realizes it or not) mean that chefs are essentially doing philosophy? Is everyone doing philosophy every day? If so, then are we saying that everyone who isn't well-versed in philosophy isn't *really* doing whatever it is they think they're doing? And if not, where is the line drawn?

That last bit may be a silly line of questioning, and feel free to tell me if you think so; but I'm curious. Otherwise, please carry on.
-
This looks fine to me.

This is also correct, but it can be shortened by a couple of lines. Notice that since [math]y = x + k[/math], then [math]k = y - x[/math]. Thus, you can substitute [math]y - x[/math] for [math]k[/math] in the fifth line of your proof to arrive at your conclusion more quickly.

Sometimes the proofs that give a person the most trouble (especially someone new to proof-writing) are the simplest. Your proof is correct. Yes, the reverse is a bit more involved, but as mentioned earlier, making the appropriate substitution simplifies things a bit.

Assuming I'm understanding you correctly, don't worry. Your proof is fine, and yes, while "elegance" is appreciated in a proof, a proof is fine so long as each step is justified. The basic notions of what a proper proof entails can be taught, but being good at actually seeing and writing proofs takes time and practice. Keep at it, and you'll probably find the process gets easier.

Good job. Spivak is a well-written book and very thorough. I don't know if there are really "better" books, but the other two usual recommendations are Tom Apostol and Richard Courant. While Spivak's Calculus covers only single-variable, Apostol's and Courant's calculus series both go through multivariable as well. Apostol is a bit more terse and technical than Spivak, while Courant is perhaps more focused on applications. All three are, by all accounts, excellent. And of course, while Spivak, Apostol, and Courant are the ones I've seen recommended the most, there are countless other calculus books. If Spivak works well for you, then great. If not, then it's just a matter of trying to find a text that does. Best of luck to you in your studies.
-
At the risk of putting words in certain people's mouths, I think the claim is more accurately expressed as, "Science has adopted the methods from philosophy that provide a firm grounding for scientific investigation, and is now self-sustaining in that regard." This is not the same as ignoring the contributions philosophy has made or will continue to make.

Despite the constant miscommunication and creeping anger, I for one am enjoying this discussion immensely. I hope you all won't get totally frustrated and give up on it just yet. It's a shame that these discussions so easily put all involved on the defensive. It leads to misunderstandings and rather personal jabs that detract from an otherwise interesting debate.

As for my own take on the OP's question, I don't see how anyone can claim that philosophy is "crap" after learning much about the field. However, the layman's understanding seems to be that philosophy essentially amounts to arbitrary claims about the world that hold barely more water than religion, and if philosophy were just that (and let me reiterate that it quite clearly is not), then calling it "crap" would probably be justified. But then, I'm just a math student. What do I know?
-
Yeah, the Flynn effect popped into my mind as well, though the Wikipedia article notes that, in developed nations at least, the effect seems to have leveled off and declines have been seen recently in a few instances: http://en.wikipedia.org/wiki/Flynn_effect#Possible_end_of_progression It's all just natural selection in the end. We value high intelligence, but nature probably doesn't care one way or another.
-
This is possibly true, though I would say (assuming I'm understanding you correctly) that formal education provides the proper cultivation, and those who would like to make meaningful contributions should pursue that path.

Yes. It's kind of what I meant when I listed "youth," just one of a few instances of overloaded phrasing in my previous post. My only excuse is that I'm "at work" and therefore somewhat rushed (though the excuse fails a bit because I could easily have waited until the end of my workday to say anything at all).

Edit: I should clarify again, though, that it's only "a shame" when someone presents an idea, can't defend it from criticism, and proceeds to get angry and defensive rather than reconsidering the idea given the responses. I don't fault anyone for not knowing, regardless of the reason, but I do fault those who stubbornly refuse to accept valid criticism or to reconsider their ideas in the face of such criticism.