
ydoaPs

Moderators
  • Posts: 10567
  • Joined
  • Last visited
  • Days Won: 2

Everything posted by ydoaPs

  1. Yeah, the book was pretty bad. It's just that the OP smacks of those Amazon reviews written by people with axes to grind who obviously haven't read the book. Like I said, it's just a gut feeling. As far as Dawkins goes, I'd not even bother with anything he does on religion. His biology works are another matter entirely. I particularly like The Blind Watchmaker.
  2. ! Moderator Note If you'd prefer to not have a vacation soon, you probably shouldn't take threads off topic solely to troll the staff.
  3. Excuse me? The balls are charged; they provide the field. You know the mass, which gives you the gravity, and you know all the sides, which gives you the angles. You just need to figure out the individual forces, given that the electrostatic force cancels the gravitational force when they're 4 cm apart. From there it's just F=ma and basic electrostatics.
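A minimal numeric sketch of that kind of force balance, in the simplest configuration where Coulomb repulsion directly cancels the weight. All the numbers are made-up example values; the thread's actual mass and geometry aren't quoted here:

```python
import math

# All numbers below are assumed example values; the original problem's
# figures aren't quoted in the post.
k = 8.9875e9   # Coulomb constant, N*m^2/C^2
g = 9.81       # gravitational acceleration, m/s^2
m = 0.010      # assumed ball mass: 10 g
r = 0.04       # separation at which the forces cancel: 4 cm

# Force balance k*q^2/r^2 = m*g, solved for the charge q on each ball:
q = r * math.sqrt(m * g / k)

F_coulomb = k * q**2 / r**2
F_gravity = m * g
print(q, F_coulomb, F_gravity)
```

For the angled version with strings, the same balance-then-solve pattern applies; you just equate force components along each direction instead.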
  4. I neither contradicted myself nor did I say that. I said it's not the acceleration because you STILL get the interesting result when you match the accelerations.
  5. I'd say my thinking split for English/German is 80/20. Then again, my housemate speaks German a lot around the house so that could cause me to kick into German mode more often than I otherwise would.
  6. That's worthless semantics and not strictly speaking true as thought experiments are notoriously malleable and have a history of having several substantial variations under the same name. Having the grounded twin pacing hardly deserves another name. But, as I said, it's pointless to argue about the name of the scenario. The question is more general than that. It's about why one twin ages less than the other when you send one away and back. It is not the acceleration. We know this, because we can ever so slightly alter the experiment to make the accelerations match and get the same result.
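The "it's not the acceleration" point can be illustrated with a quick proper-time calculation. Along a worldline of piecewise-constant speed, elapsed proper time is the sum of dt·sqrt(1 − v²/c²) over the segments. The speeds and durations below are illustrative, not from the thread:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def proper_time(segments):
    """Proper time along a worldline of (duration, speed) segments."""
    return sum(dt * math.sqrt(1.0 - (v / C) ** 2) for dt, v in segments)

T = 10.0 * 365.25 * 86400.0  # 10 years of coordinate time, in seconds

# Traveling twin: out at 0.8c, turn around, back at 0.8c.
traveler = [(T / 2, 0.8 * C), (T / 2, 0.8 * C)]

# "Stationary" twin paces back and forth at walking speed, reversing
# direction (i.e. accelerating) just as often as the traveler does.
pacer = [(T / 2, 1.5), (T / 2, 1.5)]

print(proper_time(traveler) / T)  # ~0.6: ages about 6 of the 10 years
print(proper_time(pacer) / T)     # ~1.0: ages essentially all 10 years
```

Both twins undergo turnarounds, yet only the fast one ages noticeably less: the differential aging tracks the path through spacetime, not the acceleration per se.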
  7. I've got English and German, a fair bit of Mexican Spanish, and a very tiny bit of Arabic. I start learning Mandarin next semester. After most of a semester of Arabic I started dreaming in Arabic, but I don't remember ever dreaming in German or Spanish.
  8. But you get the same result which shows that the result isn't strictly speaking about the acceleration.
  9. Science deals with universally quantified statements, and as such, you're correct that it relies on induction. No one data point can give you a universal. In order to verify a universal, you'd need to verify every single instance ever, and that's just not doable in the overwhelming majority of cases. What is easier (and is sometimes inaccurately portrayed as the only thing [thanks mainly to Karl Popper]) is to show which universal statements are false. Science doesn't care about how you come up with an idea, but rather how you justify it. In philosophy terms, it's about the justification side of the justification-discovery distinction. While what is and is not science has changed dramatically since science first started, modern science is more or less defined by Ruse's criteria:

     1. A scientific theory makes predictions
     2. A scientific theory is testable [which requires (1)]
     3. A scientific theory is tentative [it is open to being overturned]

     That's not exactly how he presented them, but many of the criteria collapse into each other. For example, (1) and (2) are very nearly the same thing.

     The scientific method is inductive, as mentioned earlier, but it is also deductive. When you test a hypothesis, two things happen. The solely inductive bit is confirmation: if the hypothesis passes the test, its probability of being true goes up. The other is a mix of deductive and inductive. Falsification, ideally, is a simple example of Modus Tollens:

     If p, then q.
     It is not the case that q.
     Therefore, it is not the case that p.

     The problem with this (called the Quine-Duhem problem) is that you can almost never test p by itself. Any test is going to rely on multiple theoretical aspects and some environmental factors. So, the Modus Tollens becomes:

     If l and m and n and o and p, then q.
     It is not the case that q.
     Therefore, it is not the case that (l and m and n and o and p).

     The thing is, that just tells us that they can't all be true. It doesn't tell us which one [or ones] is false.
That's where the induction comes in. We can use the probability calculus to tell which bits are less likely to be true after a falsifying test. As far as proof goes, they are both ironclad, but you have to be careful to make sure you know what you're proving. Falsification proves that at least one of the entangled hypotheses and assumptions is false. Confirmation proves that a theory is more likely to be correct.
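The confirmation side of this can be sketched as a one-step Bayes update. The likelihood numbers here are invented for illustration, not taken from the post:

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(h|e) from Bayes' theorem for a binary hypothesis."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.5
# Assumed likelihoods for a hypothetical test: h strongly predicts a
# pass, while its negation only weakly does.
post_pass = update(prior, 0.9, 0.3)  # confirmation: ~0.75 > prior
post_fail = update(prior, 0.1, 0.7)  # disconfirmation: ~0.125 < prior
print(post_pass, post_fail)
```

Passing the test raises the hypothesis's probability and failing lowers it, which is exactly the asymmetric-but-probabilistic picture described above.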
  10. arjundeepakshriram has been banned as a particularly unclever sockpuppet of Arjun Deepak Shriram.
  11. Try it with the "stationary" twin pacing back and forth so that the acceleration matches the "moving" twin.
  12. He in no uncertain terms completely failed to overcome the Quine-Duhem thesis. You cannot pull theories apart like that. It doesn't work. That's talking about simple existential statements. Of course you can verify simple existential statements, but that's stamp collecting, not science. The Logic of Scientific Discovery was about theoretical statements rather than simple existential statements. He spends a great portion of the book railing against verification/induction; for example, that's how he starts out the very first chapter. AND he goes so far as to dedicate a whole chapter (and a few additional appendices, if you have the right edition) to a failed attempt to attack probability as a method for validating confirmation. Contrary to your assertion, it is the Popper apologists who mischaracterize his views by quotemining him and forcing the quotes into contexts where they don't belong.
  13. Anti-realism isn't about reality; it's about how good a theory is at predicting the position of a needle. It's not saying science is a description of reality. I can assure you that I did not get them nor the rest of the paragraph flipped around. Which is precisely contrary to when you said "Both realism and anti-realism consist of finding the 'least wrong description'". Not to the Constructive Empiricist and the other anti-realists. They're wrong, and I've proven it mathematically. There is no defensible coherence theory of truth. None. Not one. There are two main things philosophy does when coming up with a theory of a concept. One is conceptual analysis (teasing out the intricacies of everyday concepts) and the other is explication (creating more precise counterparts to everyday concepts which are, hopefully most of the time if not all of the time, able to replace the ordinary concept). Any attempt at a coherence theory of truth fails on both counts, as it is a necessary condition of the everyday concept of truth that a true statement describes reality as it is. That's completely counter to any coherence theory, and as such a coherence theory is not a conceptual analysis of the ordinary concept and cannot be interchanged with the ordinary concept, which means it is also not an explication of the ordinary concept. Coherence theories of how we can be justified in believing something is true, however, are easily justifiable.
  14. Bell's inequalities are not considered an interpretation of QM, AFAICT. And once they can be, we won't consider them interpretations anymore. But the point is it will require experiment to do so. I've rearranged your quotes of me in your quote for flowingnessocity concerns. I never claimed that Bell Inequalities are interpretations of QM. What I claimed is that, since different interpretations take different stances on the choice Bell's Theorem gives us, they do in fact make testable predictions which can distinguish them experimentally. This, however, is tangential to my overall point, which is that experiment isn't the only game in town for figuring out what is or is not true. For example, I'm about to conclusively prove you wrong about a statement without outlining a single experiment to show that you're wrong. That statement being: I'm not even going to touch the incoherence of claiming models aren't representations.

     Your view sounds a lot like a version of scientific anti-realism called Constructive Empiricism. Constructive Empiricism, in the words of its founder, is the view that "Science aims to give us theories which are empirically adequate; and acceptance of a theory involves as belief only that it is empirically adequate". This is "Shut up and calculate" taken to the extreme. It is also epistemically bankrupt. A theory is empirically adequate iff it explains the data better than its negation. That is, P(e|h)>P(e|~h). Let's see where that takes us.
That is, let's shut up and calculate:

     [latex]P(e|h)>P(e|{\sim}h)[/latex]
     [latex]P(e|h){\times}P({\sim}h)>P(e|{\sim}h){\times}P({\sim}h)[/latex]
     [latex]P(e|h){\times}(1-P(h))>P(e|{\sim}h){\times}P({\sim}h)[/latex]
     [latex]P(e|h)-P(e|h){\times}P(h)>P(e|{\sim}h){\times}P({\sim}h)[/latex]
     [latex]P(e|h)>P(e|h){\times}P(h)+P(e|{\sim}h){\times}P({\sim}h)[/latex]
     [latex]P(e|h)>P(e)[/latex]
     [latex]P(e|h){\times}P(h)>P(e){\times}P(h)[/latex]
     [latex]\frac{P(e|h){\times}P(h)}{P(e)}>P(h)[/latex]
     [latex]P(h|e)>P(h)[/latex]

     If [latex]P(e|h)>P(e|{\sim}h)[/latex], then [latex]P(h|e)>P(h)[/latex].

     The same steps run in reverse:

     [latex]P(h|e)>P(h)[/latex]
     [latex]\frac{P(e|h){\times}P(h)}{P(e)}>P(h)[/latex]
     [latex]P(e|h){\times}P(h)>P(e){\times}P(h)[/latex]
     [latex]P(e|h)>P(e)[/latex]
     [latex]P(e|h)>P(e|h){\times}P(h)+P(e|{\sim}h){\times}P({\sim}h)[/latex]
     [latex]P(e|h)-P(e|h){\times}P(h)>P(e|{\sim}h){\times}P({\sim}h)[/latex]
     [latex]P(e|h){\times}(1-P(h))>P(e|{\sim}h){\times}P({\sim}h)[/latex]
     [latex]P(e|h){\times}P({\sim}h)>P(e|{\sim}h){\times}P({\sim}h)[/latex]
     [latex]P(e|h)>P(e|{\sim}h)[/latex]

     If [latex]P(h|e)>P(h)[/latex], then [latex]P(e|h)>P(e|{\sim}h)[/latex].

     So, [latex]P(e|h)>P(e|{\sim}h)[/latex] iff [latex]P(h|e)>P(h)[/latex].

     So, we see that the more empirically adequate a theory is, the more likely it is to be true. That is, the better it explains the data and the more accurately it predicts new data, the more likely it is to be true. By any sensible definition of truth, that means that the more empirically adequate a theory is, the more likely it is that the things it talks about exist and that what it says about those things is accurate. You cannot rationally stop at the theory predicting with great success the positions of needles on gauges. Just as I could show you to be wrong without a single experiment, we can distinguish things which explain the data equally well in terms of how likely they are to be true without experiment.
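The biconditional can also be checked numerically by brute force, sampling random probability assignments. This is a sanity check of the algebra, not part of the original post:

```python
import random

random.seed(0)

def posterior(p_h, p_e_h, p_e_nh):
    """P(h|e) via Bayes, with total probability in the denominator."""
    p_e = p_e_h * p_h + p_e_nh * (1 - p_h)
    return p_e_h * p_h / p_e

# Check: P(e|h) > P(e|~h)  iff  P(h|e) > P(h), on random assignments.
for _ in range(10_000):
    p_h = random.uniform(0.01, 0.99)
    p_e_h = random.uniform(0.01, 0.99)
    p_e_nh = random.uniform(0.01, 0.99)
    assert (p_e_h > p_e_nh) == (posterior(p_h, p_e_h, p_e_nh) > p_h)

print("biconditional holds on all sampled assignments")
```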
[latex]P(h_1|e)>P(h_2|e)[/latex] and [latex]P(e|h_1)=P(e|h_2)[/latex]
     [latex]\frac{P(e|h_1){\times}P(h_1)}{P(e)}>\frac{P(e|h_2){\times}P(h_2)}{P(e)}[/latex]
     [latex]P(e|h_1){\times}P(h_1)>P(e|h_2){\times}P(h_2)[/latex]
     [latex]P(h_1)>P(h_2)[/latex]

     If [latex]P(h_1|e)>P(h_2|e)[/latex] and [latex]P(e|h_1)=P(e|h_2)[/latex], then [latex]P(h_1)>P(h_2)[/latex].

     So, we see that if two hypotheses explain the data equally well and we want to know which is more likely to be true, we need to look at the priors. Again, we're not dealing with tests. We're talking about empirically indistinguishable things like, say, Copenhagen and MWH, which are currently empirically indistinguishable. So, they explain all of the data equally well: for every observation e, P(e|h1)=P(e|h2). This means we look at the intrinsic priors. AFAIK, all of the things that influence the intrinsic priors reduce to two things: simplicity and coherence. We know that P(A&B)<P(A) and P(A&B)<P(B) for all A and B which are not 1 or 0. So, the more things a theory says, the lower the intrinsic probability. The Copenhagen interpretation postulates only one universe, but the MWH postulates uncountably many universes. This means Copenhagen is intrinsically more likely to be true, and since they explain all of the data equally well, it comes out on top.

     No, not in any reasonable taxonomy. Science is about the nature of nature, so it's a subset of Metaphysics. It's just that its epistemological (yep, more philosophy) rules are more strict than those of metaphysics at large. Any way you cut it, there will always be metaphysics that you can't excise from your physics. My previous example, on which you didn't comment: If it can't be determined by experiment, it's not science (according to you), right? Give me an experiment to show that the one-way speed of light is constant. Remember, you can't assume that it is in setting up the experiment for things like syncing your distant clocks.
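The "equal likelihoods, so compare priors" point boils down to P(e) cancelling in the posterior ratio. A tiny numeric sketch, with invented numbers standing in for the two interpretations:

```python
# Two empirically indistinguishable hypotheses: same likelihood for
# every observation, so P(e) is shared and cancels in the ratio of
# posteriors. The numbers are invented for illustration.
likelihood = 0.8             # P(e|h1) == P(e|h2)
p_h1, p_h2 = 0.3, 0.1        # intrinsic priors: h1 is the simpler one

posterior_ratio = (likelihood * p_h1) / (likelihood * p_h2)
prior_ratio = p_h1 / p_h2
print(posterior_ratio, prior_ratio)  # equal: the priors decide
```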
There comes a point in the 'experimentalist' "shut up and calculate" ideology that one can cease being a scientist and become merely lab equipment.
  15. Let's look at the very first prior, before ALL evidence. We call this the 'intrinsic probability'. Swinburne's criteria for intrinsic probabilities are basically that the more a hypothesis says and the more specific it is, the lower the intrinsic probability. This should be pretty obvious since P(a&b)<P(a) and P(a&b)<P(b).

     So, how do we figure out the intrinsic probabilities? Well, we can talk some metaphysics. There are two main ways of grounding reality. You can be like Bishop Berkeley and be what is called a Source Idealist. That means you believe all of the physical stuff comes from mind stuff. The other side of the coin is Source Physicalism. That means you believe all of the mind stuff comes from physical stuff. Those, by Swinburne's criteria, are equally intrinsically probable. There is, though, a third option that is less probable because it posits a third kind of stuff.

     Now, Theists (well, the ones I'm talking about, anyway) believe that there is a mind that created all of the physical stuff, so they're Source Idealists. However, theism doesn't take up the whole possibility space of Source Idealism, since you can have New Agey atheistic Source Idealist belief systems. That means Theism is a proper subset of Source Idealism. Atheism, on the other hand, is a superset of the other two kinds of metaphysics and has overlap with the Source Idealism part of the probability space. If we are to draw out the probability space, it looks something like this. So, Theism is FAR less likely than its negation intrinsically. And that's bare-bones theism. The area of the possibility space shrinks more and more with each attribute the god in question has. Want this mind to be omnipotent? Then you have to shrink the area. Want it to be omniscient? Then you have to shrink the area. Want it to even care about humans? Then you have to shrink the area. Want it to be omnibenevolent? Then you have to shrink the area. What does this mean?
Well, it means Arguments to the Best Explanation have absolutely no place as theistic arguments. It also means it's time for Sagan's Slogan! For those that don't remember, here's "Extraordinary Claims Require Extraordinary Evidence":

     For two competing hypotheses h1 and h2, let P(h1|e)=P(h2|e) and let P(h1)>P(h2):

     (P(e|h1)xP(h1))/P(e) = (P(e|h2)xP(h2))/P(e)
     P(e|h1)xP(h1) = P(e|h2)xP(h2)
     P(h1) = P(h2)x(P(e|h2)/P(e|h1))
     P(h2)x(P(e|h2)/P(e|h1)) > P(h2)
     P(h2)xP(e|h2) > P(h2)xP(e|h1)
     P(e|h2) > P(e|h1)

     ((P(h1|e)=P(h2|e)) & (P(h1)>P(h2))) ⊃ (P(e|h2)>P(e|h1))

     So, the lower the prior, the more evidence is needed for a hypothesis to reach any given value. And we've seen that even bare-bones Deistic Theism is far less likely intrinsically than its negation. And making the god more specific lowers the intrinsic probability farther and farther. So, YHWH is indeed an extraordinary claim which requires extraordinary evidence. The evidence, however, just isn't there. Then there's a whole host of other problems. I can provide some deductive arguments if you'd like.

     I may just be a cynic, but I have a sneaking suspicion that (s)he's just a dishonest theist.
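The slogan can be put numerically: holding the posteriors equal, the hypothesis with the lower prior must be backed by a proportionally larger likelihood. The numbers below are invented for illustration:

```python
# Hold the posteriors equal: P(e|h1)*P(h1) == P(e|h2)*P(h2).
# The claim with the lower prior then needs a likelihood that is larger
# by exactly the ratio of the priors. Numbers are invented.
p_h1, p_h2 = 0.4, 0.05   # h2 is the "extraordinary" claim
p_e_h1 = 0.1

p_e_h2 = p_e_h1 * p_h1 / p_h2
print(p_e_h2)  # ~0.8: the evidence must favor h2 eight times as strongly
```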
  16. And yet a driver may know nothing about engineering but is still somehow able to drive a car. Doesn't that say something about the value of engineering? You mean philosophy of science and epistemology? My gods! I take it then that you're in the "science is philosophy" camp. So studying science necessarily means studying philosophy. By definition. Ok, then. Nothing to discuss. No, what he's saying is that science doesn't give you the epistemological tools. Philosophy gave the tools; science just uses them. "You didn't build that". Give me an experiment to show that the one-way speed of light is constant. Remember, you can't assume that it is in setting up the experiment for things like syncing your distant clocks. It was when it was an issue. Indeed, it's a symbiotic relationship. They work off of each other. Much of my work relies upon QM and the Modern Synthesis. And there's far more metaphysics hanging around in the foundations of physics than people like to believe, especially when it comes to things involving light and/or relativity. The different interpretations of QM *do* make testable predictions. Consider Bell Inequalities. For those reading along that don't know, Bell's Theorem tells us that QM cannot be both a local theory and a hidden variable (read: deterministic) theory. There are explicitly non-local hidden variable interpretations like Bohmian Mechanics. As we both know, standard QFT has SR built right in, so we know from the get-go that Bohmian interpretations make different predictions than QFT. And both Copenhagen and MWH take the local non-deterministic route. This means telling them apart experimentally will be a bit harder. As of now, I don't know of any experiment that could tell those two apart, but that's not to say they can't in principle be distinguished experimentally. As someone who likes to point out that "this is a science forum ..."
when discussing speculations, I'd think you'd have a higher opinion of philosophy, since it is philosophy of science, not science itself, which tells us what is and is not science. There is no scienceometer to read how many kiloscienceons a conjecture is giving off. It's conceptual analysis and methodology which determine what is and is not science. And that's firmly in the domain of non-science philosophy. Remember that science is a subset of philosophy both historically and taxonomically. It is just that this subset has gradually defined more and more strict rules governing what is acceptable. There's no clear-cut line in history between Aristotle and Einstein where one part of the endeavor was 'scientific' and the other was not. What is 'scientific' has gradually changed over time, but the demarcation for modern science is now pretty much settled with Ruse's criteria.

     It does matter when constructing new theories. There's a lot of metaphysics involved in, say, making a TOE. And experimentalists in general can lose sight of what a theory is about. They'll get caught up in Lorentz Transforms and co-ordinates and lose sight of the fact that the theory isn't about them and doesn't need them. SR is about the geometry of spacetimes and can be formulated with nothing but simple spacetime diagrams and the Minkowski 'metric' (which really isn't a metric, btw). When you're dealing with the theory, the theory and its underlying metaphysics are important. If you're dealing with experiments, all you need worry about is what the needles on the gauges say. While the math is indeed extremely important and it is the math that tells us what we need to know, sometimes people take the "shut up and calculate" line a bit too far and don't consider what it is the math is saying about reality. It's like learning German grammar and skimming paragraphs to find answers to questions without learning what any of the words mean. If you want the full understanding, you need to slow down a bit.
And sometimes this overuse of the "shut up and calculate" mentality leads people to the untenable position of scientific anti-realism. I'm fairly sure it is. That's such a mischaracterization of philosophy that I'm a little offended. The days of walking around in a toga talking about how your underpants are made of fire are over. Welcome to the days of mathematical and experimental philosophy. I'll let you in on a hint: there's this thing called 'intrinsic probability' which, by definition, is not based at all in the evidence. This is not only an objective measure of which interpretation of QM is correct, but of scientific theory choice in general. For any given data set, there are an infinite number of theories which explain the data equally well. For example, why do you go with Special Relativity to explain time dilation and length contraction rather than Neo-Lorentzian 'Relativity'? The Lorentzian route postulates more ontic entities than SR, so it is inherently less likely. Similarly, the MWH postulates an uncountably infinite number of additional ontic entities compared to the Copenhagen interpretation, so it is inherently less likely. So, until we find something else like the Bell Inequalities to tell them apart experimentally, we've got good reason to choose Copenhagen rather than MWH and to choose SR rather than NL. You seem to be confusing not being able to know something yet with not being able to know something in principle. Just because we can't test something now in no way tells us that we won't be able to test it later. And, as I said above, there are ways of telling what is more likely to be true even before you look at any evidence. The evidence narrows things down unimaginably, but not having evidence in no way means you can just choose whatever and still be rational in doing so. So, Logical Positivism was science rather than philosophy? There will be statements in self-consistent mathematics that are true that you will never be able to prove. No, there's not.
There will be statements in self-consistent mathematics that are true in the system that can never be proven in the system, but that in no way means that they can never be proven. You can always step outside into a higher system. Science uses math as a tool, but it does not incorporate all of the math there is, so it's quite possible that none of the undecidable theorems have any use in science. Like, "Are fields ontic entities or mathematical constructions?".
  17. ! Moderator Note Remember to "Be civil" per the rules. Talk about the ideas; not the people.
  18. There are a few (such as Poisson's Spot, when people were wondering about the nature of light). The thing is, when metaphysics is done right, it's hard to distinguish from straight-up higher order multi-modal logic and/or straight-up physics (depending on what you're doing). Hierarchical Temporal Memory models are doing fairly well. I've got something I'm working on now that will fill in some of the gaps, but I've not published it yet. Patterns of patterns. No. A thousand times no. Intuition is complete hogwash and is wrong most of the time. Intuition says that the Earth is flat. Intuition says that if you swing a ball around your head on a rope and cut the rope mid-swing, the ball will go off in a curved horizontal trajectory. Intuition says human memory works well. Intuition says that objects in motion will eventually stop without external forces being applied. All of those intuitions are wrong. And then there's the fact that intuition is completely dependent on factors that are not truth-tracking, such as culture, the order in which questions are asked, a person's mood, how clean their local environment is, etc. Experimental philosophy cannot be ignored. When people like you ignore it, you get nonsense. Magic tricks have absolutely nothing to do with metaphysics. ! Moderator Note As such, I'm splitting the posts about Criss Angel to another thread.
  19. ! Moderator Note You are not a doctor. Do not give medical advice. Medical crackpottery has no place here. Thread closed.
  20. ! Moderator Note Since we don't have a "Conspiracy: Tinfoil Hat Required" section, I've moved this to Speculations.
  21. Przemyslaw.Gruchala has been banned for abusive behaviour and persistent thread hijacking.
  22. So, pwagen and imatfaal, what'd you think overall of the book?
  23. Aside from the whole not meeting the requirement of probability theory that zero is a possible value for a probability, it still doesn't follow from the probability calculus at all. P(B|A) = P(B'|A') is wrong. P(B|A) = P(B'|A') = 1 - P(B'|A') would mean that all of those formulas must take the value 1/2. So, since, according to the OP, P(B|A) = P(B'|A') = 1 - P(B'|A') = 1 - P(B|A') = (1 + v2/c)/2, we get 1/2 = (1 + v2/c)/2. This means 1 = 1 + v2/c. Obviously, this means v2/c = 0. c is a constant, so v2 = 0. v3 = (v1 + v2)/(1 + v1v2/c^2), so v3 = v1. Using the information in the OP (along with the derived result that v3 = v1) and Bayes's Theorem, we get P(B) = 1/2. So, P(A) = P(A|B)x(P(B)/P(B|A)) = P(A|B), showing A and B are independent, which kind of blows tying them into relativity. Going back to v3 = v1: v1 + (v1^2)v2/c^2 = v1 + v2, which gives v1 = c. So, our velocities that are completely independent have fixed values of v1 = c, v2 = 0, and v3 = c. That doesn't sound much like relativity.
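The v2 = 0 collapse is easy to verify against the relativistic velocity-addition formula (working in units where c = 1; the sample speeds are arbitrary):

```python
C = 1.0  # units where c = 1

def add_velocities(v1, v2):
    """Relativistic velocity addition: (v1 + v2) / (1 + v1*v2/c^2)."""
    return (v1 + v2) / (1 + v1 * v2 / C**2)

# With v2 = 0, the sum collapses to v1 for every v1, as derived above:
# the "combined" velocity carries no information beyond v1 itself.
for v1 in (0.1, 0.5, 0.9):
    assert add_velocities(v1, 0.0) == v1
print("v3 == v1 whenever v2 == 0")
```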
  24. Like Galilean velocity addition at very high speeds: the formula still gives answers, but the answers are wrong.
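For comparison, here is Galilean versus relativistic addition at everyday and at relativistic speeds, showing where the Galilean answers go wrong (the speeds are chosen arbitrarily for illustration):

```python
C = 299_792_458.0  # speed of light, m/s

def galilean(v1, v2):
    return v1 + v2

def relativistic(v1, v2):
    return (v1 + v2) / (1 + v1 * v2 / C**2)

low = (10.0, 20.0)          # everyday speeds, m/s (arbitrary)
high = (0.8 * C, 0.8 * C)   # relativistic speeds

# At everyday speeds the two formulas agree to many decimal places...
print(galilean(*low), relativistic(*low))
# ...but at 0.8c + 0.8c, Galilean addition says 1.6c, while the
# relativistic sum stays below c (about 0.9756c).
print(galilean(*high) / C, relativistic(*high) / C)
```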