ydoaPs Posted February 27, 2013

This was originally going to be a response to another thread, but it would have taken that thread sufficiently off topic (and now the thread is closed). As I went on writing, it got really long, so I'm just turning it into a very short introduction to the philosophy of science. BTW, Bignose stated that he's only interested in predictions. That isn't quite correct, because I could predict the coming of the Messiah. What you need is falsifiable tests: i.e., practically testable predictions. (You know: Popper.)

Now, Popper was on the right track, but he was off by quite a bit. Popper's naïve falsificationism is essentially just a modus tollens:

T⊃O
~O
∴~T

If the theory is true, we have a predicted observation (within a certain amount of uncertainty). When we measure something outside of that range for the predicted observation, we need to throw out the theory wholesale. Think about that. Any time we have a falsifying observation, per Popper, the whole thing goes out the window.

So, let's take the recent measurement of superluminal neutrinos. Do we then throw out all of relativity? But wait, we're not just testing relativity. No theory is an island: Special Relativity is a deductive consequence of electrodynamics (in fact, Einstein's paper was called "Zur Elektrodynamik bewegter Körper", translated as "On the Electrodynamics of Moving Bodies") and of the relativity principle, which goes back to Galileo. This is called the Duhem problem (sometimes the Quine-Duhem thesis). It makes Popper's wholesale trashing upon falsification an even bigger deal. Do we throw out ALL of the related theories that are tangled up in this experiment? If not, which do we toss and which do we keep?

(T1&T2&...&Tn)⊃O
~O
∴~(T1&T2&...&Tn)

So, because of this insight, upon falsification we know the theories can't all be right, but we don't know which, if any, are right.
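If you want to see the Duhem point concretely, here's a quick Python sketch (illustrative only, and mine rather than anything from the logic literature): given the premise (T1 & T2) ⊃ O and the falsifying observation ~O, brute-force which truth assignments to T1 and T2 survive.

```python
from itertools import product

def implies(p, q):
    # material conditional: p -> q is equivalent to (~p) or q
    return (not p) or q

# The falsifying observation: O turned out to be false.
observed_o = False

# Which truth assignments to the two conjoined theories are still consistent
# with the premise (T1 & T2) -> O?
survivors = [(t1, t2) for t1, t2 in product([True, False], repeat=2)
             if implies(t1 and t2, observed_o)]

print(survivors)  # only the case where BOTH theories are true is eliminated
```

Three of the four assignments remain: falsification tells us the conjunction fails, but not which conjunct to blame.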
Here comes Lakatos, who made a more sophisticated version of falsificationism in an attempt to get rid of the Duhem problem. It's a fairly ingenious thing to do. He broke a theory up into what he called a "research programme", which consists of the theoretical "hard core" and the ancillary assumptions composing what he calls the "protective belt". With this framework, if a research programme is falsified, you go from an indefinite number of inconsistent theories to two inconsistent classes. If a programme is falsified, check your protective belt of ancillary assumptions and do the test again.

This, it turns out, is what science actually does. Going back to the observation of superluminal neutrinos: that falsifies a huge swath of physics, given the Duhem problem. But with Lakatos's research programme formulation, we should check our ancillary assumptions first. After checking all of our ancillary assumptions, we found one that was wrong. We assumed that all of our cables were connected properly. This assumption, however, was wrong. It turns out that the neutrinos weren't superluminal after all!

On the face of it (or "prima facie", to use fancy philosophy terms), this is a very elegant solution. There is a problem, though. Lakatos's solution doesn't give us any way to know what goes in the "hard core" and what goes in the "protective belt". It also doesn't tell us why to check the protective belt first. So, after a bit of thinking about it, Lakatos's solution seems a bit ad hoc, and that's generally not a good thing.

At this point, a guy named Dorling comes along wielding one of the most powerful tools in existence. With it, he showed that Lakatos was correct, and he answered the things Lakatos's approach couldn't answer, putting Lakatos's more sophisticated falsificationism on firm epistemological ground. What tool was it that he used? If you've read many philosophy posts by me, you can probably guess.
It's the equation that pretty much rules the world--Bayes's Theorem:

[math]P_{f}(h_1)=P_{0}(h_1|e_i)=\frac{P(e_i|h_1){\times}P_{0}(h_1)}{\sum^n_{j=1}{P(e_i|h_j){\times}P(h_j)}}[/math]

where P(h|e) is how likely the hypothesis is given the evidence in question, P(e|h) is how likely the evidence in question is given that the hypothesis in question is true, and P(h) is how likely the hypothesis is without considering the evidence in question.

I actually prefer Howson and Urbach's example of Dorling's approach to the example Dorling used in his own paper. The example they use is that of William Prout (a chemist and medical practitioner from the early nineteenth century). Prout had a hypothesis (which Howson and Urbach label "t") that was almost universally accepted at the time. His hypothesis was that all chemical elements were made of hydrogen, and thus they all have atomic weights that are integer multiples of the atomic weight of hydrogen. At the time, almost all recorded atomic weights were close enough to integer multiples of the atomic weight of hydrogen within the limits of error. Then we measured one that breaks the pattern. We'll call that measurement the evidence, "e".

Now, this experiment had a hypothesis entangled with t: the accuracy of measurement, the purity of samples, etc. Howson and Urbach call that "a". Now e falsifies a&t, so P(a&t|e)=0 and, by a simple corollary, P(e|a&t)=0. The hypotheses t and a are independent, so P(a|t)=P(a), P(~a|t)=P(~a), P(t|a)=P(t), and P(~t|a)=P(~t) (with similar results from negating the thing upon which the probability is conditional). What we're interested in finding out is which hypothesis (or hypotheses) we should reject. So, what we want to know is Pf(t)=P(t|e) and Pf(a)=P(a|e).
From Bayes's Theorem (above):

[math]P(t|e)=\frac{P_{0}(t){\times}P(e|t)}{P(e)}[/math]

and

[math]P(a|e)=\frac{P_{0}(a){\times}P(e|a)}{P(e)}[/math]

Since P(e|t)=P(e|t&a)xP(a|t)+P(e|t&~a)xP(~a|t) (by the Total Probability Theorem) and given the results mentioned above, P(e|t)=P(e|t&~a)xP(~a), P(e|~t)=P(e|~t&a)xP(a)+P(e|~t&~a)xP(~a), and P(e|a)=P(e|a&~t)xP(~t).

At the time, the only real competitor to Prout's theory was random distribution. The details of that theory and of the actual measurement of e allow calculation of P(e|~t&a), P(e|~t&~a), and P(e|t&~a): P(e|~t&a)=0.01, P(e|~t&~a)=0.01, and P(e|t&~a)=0.02. The historical values of P(a) and P(t) have been estimated at P(a)=0.6 and P(t)=0.9. This gives:

P(e|~t)=(0.01)x(0.6)+(0.01)x(0.4)=0.01
P(e|t)=(0.02)x(0.4)=0.008
P(e|a)=(0.01)x(0.1)=0.001

The Total Probability Theorem tells us that P(e)=P(e|t)xP(t)+P(e|~t)xP(~t), so P(e)=(0.008)x(0.9)+(0.01)x(0.1)=0.0082.

[math]P(t|e)=\frac{(0.9){\times}(0.008)}{0.0082}=0.878[/math]

which is still pretty darn high (compare to the original value of 0.9), and

[math]P(a|e)=\frac{(0.6){\times}(0.001)}{0.0082}=0.073[/math]

which is a dramatic decrease from 0.6 (the original value). As we can see, this one piece of evidence that falsifies the conjunction doesn't automatically falsify everything (though it makes everything less likely). So, through a Dorling-type Bayesian approach, we get a good semi-Lakatosian view of falsification which tells us exactly how each part of the programme being tested is affected, via the final probability. This also vindicates Lakatos's distinction between "progressive" and "degenerative" research programmes. A progressive research programme is one that is repeatedly confirmed, and a degenerative research programme is one that is repeatedly disconfirmed.
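The arithmetic above is easy to check for yourself. Here's a short Python script that reproduces it (the variable names are just my own shorthand for the quantities in the text):

```python
# Sanity check of the Prout numbers (Dorling via Howson & Urbach).
p_t = 0.9          # prior for Prout's hypothesis t
p_a = 0.6          # prior for the auxiliary assumptions a
p_e_nt_a = 0.01    # P(e|~t&a)
p_e_nt_na = 0.01   # P(e|~t&~a)
p_e_t_na = 0.02    # P(e|t&~a)

# By total probability, using P(e|t&a)=0 (e falsifies t&a) and independence:
p_e_t = p_e_t_na * (1 - p_a)                     # P(e|t)
p_e_nt = p_e_nt_a * p_a + p_e_nt_na * (1 - p_a)  # P(e|~t)
p_e_a = p_e_nt_a * (1 - p_t)                     # P(e|a)

p_e = p_e_t * p_t + p_e_nt * (1 - p_t)           # P(e)

p_t_e = p_t * p_e_t / p_e   # posterior for t
p_a_e = p_a * p_e_a / p_e   # posterior for a

print(round(p_t_e, 3), round(p_a_e, 3))  # ≈ 0.878 and 0.073, as in the text
```

Running it gives the same posteriors as above: t barely budges from 0.9 while a collapses from 0.6.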
A progressive research programme can stand up to some data that doesn't quite fit the programme (which is why we didn't throw General Relativity out the window when we found out about the Pioneer Anomaly). Popper would have absolutely no truck with this, as he was very anti-inductive. Confirmation was a dirty word for him. For Popper, if a test didn't falsify your hypothesis, the only value of the experiment was getting rid of other hypotheses from the pool of competing hypotheses. But, as you can see from Bayes's Theorem, even an experiment that doesn't falsify your hypothesis can in fact give partial confirmation to your hypothesis by raising its probability. It's all about what P(e|h) is.

Bayes's Theorem also tells us that Carl Sagan's mantra "Extraordinary claims require extraordinary evidence" is a mathematical fact:

[math]P_{f}(h_1)=P_{0}(h_1|e_i)=\frac{P(e_i|h_1){\times}P_{0}(h_1)}{P(e)}[/math]

P(e) is a function of all competing hypotheses, so it will remain constant in the evaluation here. This makes P(h|e)=kP(e|h)xP(h). So, let's set a threshold for what we can rationally believe (a lower bound for P(h|e)); this will be a constant. We can divide one constant by the other to get another constant, C=P(e|h)xP(h), so it is blindingly clear that the lower P(h) is, the higher P(e|h) needs to be to reach the threshold of rational belief. Contrary to Popper, we need both falsification (a version far more sophisticated than his, using tools he wouldn't like) and partial confirmation (which he really wouldn't like).
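You can see the Sagan point numerically using the odds form of Bayes's Theorem (equivalent to the equation above; the numbers here are made up purely for illustration):

```python
def posterior(prior, bayes_factor):
    # Odds form of Bayes's theorem: posterior odds = Bayes factor x prior odds,
    # where the Bayes factor is P(e|h)/P(e|~h).
    odds = bayes_factor * prior / (1 - prior)
    return odds / (1 + odds)

# The same piece of evidence (Bayes factor of 20) nearly settles the matter
# for a hypothesis with a modest prior...
print(posterior(0.5, 20))    # ≈ 0.952

# ...but barely dents an extraordinary claim with a tiny prior:
print(posterior(0.001, 20))  # ≈ 0.0196
```

The extraordinary (low-prior) claim needs a far larger Bayes factor, i.e. far more extraordinary evidence, to reach the same threshold of rational belief.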
DevilSolution Posted February 27, 2013

You lost me on superluminal neutrinos lol (I have a basic concept of them, but you use them in equations I have never seen), and once you lose track it's like reading another language. You obviously know your stuff, but you don't simplify the concepts being discussed. Have you ever tried drawing up your own equations??
Ringer Posted February 28, 2013 (edited)

[quoting DevilSolution's post above]

The equations aren't actually physics equations; they're probability equations. P() represents the overall probability of an event, with the event specified in the (). The P( | ) notation represents a conditional probability. So P(a|b) would represent the probability of a given b. So his P(h|e) in the first equation is the overall probability, P(), of the hypothesis, h, given the evidence from the experiment, e. This is necessary because evidence will raise or lower the probability of the hypothesis. The second part is just simplifying and using it to find the probability of the findings by assuming the hypothesis is true. This helps solve problems of whether it is more likely that the hypothesis is false or the evidence is false. Then he goes on to use the same method to find the probability that a competing hypothesis is true given the probability of a different hypothesis. Using that, you can find which one is more likely, just like you can find probabilities of the evidence vs. hypothesis truth.

[edit] I'm sure ydoaPs will correct any errors I made, but do keep in mind I haven't done these kinds of stats in a while, so I may have some things mixed up. I think the overall idea is correct, though.[/edit]

Edited February 28, 2013 by Ringer
kristalris Posted March 8, 2013 (edited)

[quoting the OP in full]
Rather funny this, continuing a discussion on a locked thread in a pinned one that is a continuation of it. And getting it wrong. Hilarious even.
This thread had best be named Effing Popper & Statistics. Popper dixit (http://en.wikipedia.org/wiki/Karl_Popper#Criticism): Popper wrote, several decades before Gray's criticism, in reply to a critical essay by Imre Lakatos:

It is true that I have used the terms "elimination", and even "rejection" when discussing "refutation". But it is clear from my main discussion that these terms mean, when applied to a scientific theory, that it is eliminated as a contender for the truth- that is, refuted, but not necessarily abandoned. Moreover, I have often pointed out that any such refutation is fallible. It is a typical matter of conjecture and of risk-taking whether or not we accept a refutation and, furthermore, of whether we "abandon" a theory or, say, only modify it, or even stick to it, and try to find some alternative, and methodologically acceptable, way round the problem involved. That I do not conflate even admitted falsity with the need to abandon a theory may be seen from the fact that I have frequently pointed out, that Einstein regarded general relativity as false, yet as a better approximation to the truth than Newton's gravitational theory. He certainly did not "abandon" it. But he worked to the end of his life in an attempt to improve upon it by way of a further generalization.[63]

So you performed a strawman on Popper. Furthermore, you are clearly not quite up to scratch in thinking there is a useful dispute between frequentist and Bayesian statisticians, the one excluding the other. The ones that go into such discussions are effing statistics. These are simply two tools in the toolbox of a statistician, both of which should render the same result when used correctly. Which one is best used depends (primarily) on the amount of available data, whereby ("intuitive") Bayesian statistics has a broader application than a frequentist approach, the latter being a more exact approach needing more data. So you're wrong there as well.
Now the problem is I've a cartoon as well that I drew myself. Alas I can't figure out how to post the damn thing here. Edited March 8, 2013 by kristalris
imatfaal Posted March 8, 2013 Posted March 8, 2013 We really do prefer it if you put quotes around a source and reference it. Wikipedia dixit would possibly suffice - but a link to the page you lifted the quote from would be better. This allows members to see the quote from Popper and the original authors' take on the quote and the context. Whilst you did leave in the hyperlink and the wiki-reference - it would be more acceptable if you were to use the quote facility (it's the 13th from the LHS bottom row) that looks like a speech bubble and provide a link to the source.
ydoaPs Posted March 8, 2013 (Author)

Rather funny this, continuing a discussion on a locked thread in a pinned one that is a continuation of it.

This pinned topic is not a continuation of a locked thread. It began as a reply to an off-topic part of a thread (and, thus, likely would have been moved into its own thread anyway). Furthermore, as it explicitly says in the OP, it turned into more than that. Now, let's look at why you're wrong.....again.

And getting it wrong. Hilarious even.

I'm actually not wrong at all. The OP is simply a historical survey of the evolution of thought in Philosophy of Science. Each view is accurately represented, and that includes Popper's, despite your misinterpretation of what he said. Your quote: "It is true that I have used the terms 'elimination', and even 'rejection' when discussing 'refutation'. But it is clear from my main discussion that these terms mean, when applied to a scientific theory, THAT IT IS ELIMINATED AS A CONTENDER FOR THE TRUTH--THAT IS, REFUTED, but not necessarily abandoned." (emphasis mine)

You're confusing epistemological acceptance with pragmatic acceptance. That's a rookie mistake one can make when they learn about people's positions via Wikipedia instead of their actual works. Karl Popper most definitely held that a theory is proven to be wrong and should be rejected wholesale "as a contender for the truth" upon falsification. His view is completely wrong, as shown by the Duhem problem and the more correct version of the Lakatosian Research Programme as put forth by Dorling and Redhead, like I said.

So you performed a strawman on Popper.

I did no such thing.

And further more you are clearly not quite up to scratch in thinking there is a useful discussion between frequentist and Bayesian statisticians the one excluding the other. The ones that go into such discussions are effing statistics.
These are simply two tools in the toolbox of a statistician, that both should render the same result when used correctly. Again, you show a shallow understanding of the issue. It's true that the Bayesian approach includes frequentism within it. However, if you'd have learned about it via the literature rather than via Wikipedia, you'd know that in the philosophy of statistics debate, the frequentists overwhelmingly tend to reject Bayesianism wholesale because they don't think of probability the same way.
kristalris Posted March 8, 2013 (edited)

[quoting imatfaal's post above about using the quote facility]

Oops, sorry, I thought I actually did provide the link to the page. I've just edited it; here it is again: http://en.wikipedia.org/wiki/Karl_Popper#Criticism I'll try to use the quote box in future.

[quoting ydoaPs: "This pinned topic is not a continuation of a locked thread..."]

Well, it's a bit difficult to ascertain what your pinned topic is: I guess (via an on-topic historic continuation, on an inherently scientific non-speculative topic that covers this discussion as well) you mean to say, in an incomprehensibly elaborate way, that there is no room for Bayes in physics? Is that your topic? There is something strange going on with the quote boxes; they seem to disappear suddenly at random.

Quote ydoaPs: I'm actually not wrong at all. The OP is simply a historical survey of the evolution of thought in Philosophy of Science. Each view is accurately represented, and that includes Popper's despite your misinterpretation of what he said. End Quote

I gave a direct quote of Popper on the issue that actually falsifies your position on what you say Popper stated. I didn't interpret anything.
Quote YdoaPs: Your quote: "It is true that I have used the terms 'elimination', and even 'rejection' when discussing 'refutation'. But it is clear from my main discussion that these terms mean, when applied to a scientific theory, THAT IT IS ELIMINATED AS A CONTENDER FOR THE TRUTH--THAT IS, REFUTED, but not necessarily abandoned." (emphasis mine) End Quote

You are taking what Popper said out of the context that he himself gave on the issue. Simply take the whole quote I gave of what he stated on this issue. Again, a clear strawman on your part of Popper.

Quote YdoaPs: You're confusing epistemological acceptance with pragmatic acceptance. That's a rookie mistake one can make when they learn about people's position via Wikipedia instead of their actual works. Karl Popper most definitely held that a theory is proven to be wrong and should be rejected wholesale "as a contender for the truth" upon falsification. His view is completely wrong, as shown by the Duhem problem and the more correct version of the Lakatosian Research Programme as put forth by Dorling and Redhead, like I said. End Quote

Argument from authority on your part. I've read more on Popper than you might think, old boy, and not just via Wikipedia. Though convenient to refute your position. BTW, I don't say, or have ever taken the position, that I fully agree with everything Popper said. I only state that science isn't only about predictions but about making falsifiable predictions. (Whether it stems from Popper or not is immaterial.) Now you clearly dispute this. That is incomprehensible. Got to go; will react further later on.

[quoting ydoaPs's reply above about frequentists rejecting Bayesianism wholesale]
Ah well, again an argument from authority on your part. What the overwhelming part of frequentists within philosophy think, i.e. that they can wholesale reject Bayes, just shows that they know little about statistics; and that is then the issue, not philosophy. Any good statistician could set them straight on this. BTW, I learned about this via much more than just Wikipedia and had it checked by the highest authority available (and no, I'm not going to prove the latter; I just counter your argument from authority ditto). No one in his right mind rejects the correctness of Bayes's theorem. The actual issue is stating something on anything with too little data. Can you do so in science? Yes: you use Bayes. Period. You are pseudo-scientific if you don't know this. Any frequentist/Bayes discussion is only done by those who don't know statistics. Again, in the overlap they should both render the same result, so it's a point of personal preference. When sufficient data are available, the frequentist (or even deterministic Rutherford) approach is of course to be used instead of Bayes. Look into the Lucia de B court case http://en.wikipedia.org/wiki/Lucia_de_Berk to see where a frequentist approach went horribly wrong. Had the first mathematician (professor Elfers) used Bayes instead, he would have had a direct pointer in the right direction: a priori, do you think that nurses often, if at all, kill patients? In my world I would guess not, and thus require a lot of extra evidence to prove this on a given norm. Bayes: the mathematics of common sense. And yes, also to be applied to physics questions such as: is there pressure in the system of the cosmos as we observe? At last got it: well, this cracked pot picture shows where you use Bayes to fill in the picture, and how you thus falsify other positions that are improbable on the question of where to start looking. Edited March 8, 2013 by kristalris
Dekan Posted April 11, 2013

The above posts are disturbing. Some of us come on here, expecting a rational discussion of scientific questions.
imatfaal Posted April 12, 2013

! Moderator Note spam post deleted - you can always report a spam post which will bring it quickly to the attention of a moderator
Iggy Posted April 28, 2013

[quoting the OP's passage on the superluminal neutrino measurement, the Duhem problem, and Lakatos]

The Logic of Scientific Discovery actually did a very good job of describing theories. He, for example, addressed the problem you are introducing in section 16:

In a theory thus axiomatized it is possible to investigate the mutual dependence of various parts of the system. For example, we may investigate whether a certain part of the theory is derivable from some part of the axioms. Investigations of this kind (of which more will be said in sections 63, 64, and 75 to 77) have an important bearing on the problem of falsifiability. They make it clear why the falsification of a logically deduced statement may sometimes not affect the whole system but only some part of it, which may then be regarded as falsified. -The Logic of Scientific Discovery, s16, Popper

In regards to the Neutrino observation, I believe the answer is yes.
Had the observation been verified, the consequences are well documented:

"Furthermore, the predictions of general relativity are fixed; the theory contains no adjustable constants so nothing can be changed. Thus every test of the theory is either a potentially deadly test or a possible probe for new physics. Although it is remarkable that this theory, born 90 years ago out of almost pure thought, has managed to survive every test, the possibility of finding a discrepancy will continue to drive experiments for years to come." -The Confrontation Between General Relativity and Experiment, s7, Conclusions

Don't get me wrong, I have my issues with naive falsificationism too. Just gotta give Popper his props.

Quoting the OP: "And Popper would have absolutely no truck with this as he was very anti-inductive. Confirmation was a dirty word for him. For Popper, if a test didn't falsify your hypothesis, the only value of the experiment was getting rid of other hypotheses from the pool of competing hypotheses."

Another quote from The Logic of Scientific Discovery should handle that too:

"Even purely existential assertions have sometimes proved suggestive and even fruitful in the history of science even if they never became part of it. Indeed, few metaphysical theories exerted a greater influence upon the development of science than the purely metaphysical one: 'There exists a substance which can turn base metals into gold (that is, a philosopher's stone)', although it is non-falsifiable, was never verified, and is now believed by nobody." -Popper

You seem to have mischaracterized his method pretty badly. He valued verification as highly as falsification (you learned something either way), just not as a demarcation for scientific laws.
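The group modus tollens that both posters are arguing over can be machine-checked. A minimal sketch in Lean (the propositions T₁, T₂, O are placeholders, and using two conjuncts stands in for the general n-conjunct case): falsifying the prediction refutes only the conjunction, and nothing follows about either conjunct alone.

```lean
-- Duhem-style modus tollens: from (T₁ ∧ T₂) → O and ¬O we may conclude
-- ¬(T₁ ∧ T₂), but no proof of ¬T₁ or ¬T₂ individually is available.
example (T₁ T₂ O : Prop) (h : T₁ ∧ T₂ → O) (no : ¬O) : ¬(T₁ ∧ T₂) :=
  fun ht => no (h ht)
```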
kristalris Posted May 2, 2013 Agree with Iggy. And on statistics I came across an old blog post http://vserver1.cscs.lsa.umich.edu/~crshalizi/weblog/297.html I don't think much has changed.
ydoaPs (Author) Posted June 2, 2013

Quote: "The Logic of Scientific Discovery actually did a very good job of describing theories. He, for example, addressed the problem you are introducing in section 16: 'In a theory thus axiomatized it is possible to investigate the mutual dependence of various parts of the system. For example, we may investigate whether a certain part of the theory is derivable from some part of the axioms. Investigations of this kind (of which more will be said in sections 63, 64, and 75 to 77) have an important bearing on the problem of falsifiability. They make it clear why the falsification of a logically deduced statement may sometimes not affect the whole system but only some part of it, which may then be regarded as falsified.' -The Logic of Scientific Discovery, s16, Popper"

He in no uncertain terms completely failed to overcome the Quine-Duhem thesis. You cannot pull theories apart like that. It doesn't work.

Quote: "You seem to have mischaracterized his method pretty badly. He valued verification as highly as falsification (you learned something either way) just not as a demarcation for scientific laws."

That's talking about simple existential statements. Of course you can verify simple existential statements, but that's stamp collecting, not science. The Logic of Scientific Discovery was about theoretical statements rather than simple existential statements. He spends a great portion of the book railing against verification/induction; that's how he starts out the very first chapter, and he goes so far as to dedicate a whole chapter (and a few additional appendices, if you have the right edition) to a failed attempt to attack probability as a method for validating confirmation. Contrary to your assertion, it is the Popper apologists who mischaracterize his views, quote-mining him and forcing the quotes into contexts where they don't belong.
Iggy Posted June 2, 2013

Quote: "He in no uncertain terms completely failed to overcome the Quine-Duhem thesis. You cannot pull theories apart like that. It doesn't work."

It can't be pulled apart like that? Hum... According to you, Popper doesn't pull theories apart; he trashes them wholesale. You said this: "If the theory is true, we have a predicted observation (within a certain amount of uncertainty). When we measure something outside of that range for that predicted observation, we need to throw out the theory wholesale. Think about that. Anytime we have a falsifying observation, per Popper, the whole thing goes out the window."

I've read at least two of his books, so I knew that wasn't true. Rather than just saying "that isn't true", I found a relevant chapter and pulled a salient quote:

"In a theory thus axiomatized it is possible to investigate the mutual dependence of various parts of the system. For example, we may investigate whether a certain part of the theory is derivable from some part of the axioms. Investigations of this kind (of which more will be said in sections 63, 64, and 75 to 77) have an important bearing on the problem of falsifiability. They make it clear why the falsification of a logically deduced statement may sometimes not affect the whole system but only some part of it, which may then be regarded as falsified." -The Logic of Scientific Discovery, s16 (Theories), Popper

Your statement, "we need to throw out the theory wholesale... Anytime we have a falsifying observation, per Popper, the whole thing goes out the window", is a mischaracterization of his system: "In a theory thus axiomatized... the falsification of a logically deduced statement may sometimes not affect the whole system but only some part of it." "Throw out the theory wholesale" is actually the opposite of "not affect the whole system but only some part, which may then be regarded as falsified."
Since the first few paragraphs of your OP are based on this premise of trashing theories wholesale, I feel like it should be acknowledged before...

Quote: "You cannot pull theories apart like that."

Can I identify the mistaken aspect of the postulates of Newtonian mechanics and derive its domain of validity as a result?
jajrussel Posted October 4, 2019 To lighten the mood implied by the question of how science works, I would say: try taking a crowbar to it. But I gather from the thread's content that science isn't always about mechanics.
HallsofIvy Posted July 3, 2020 If they had offered "Effing Science" when I was in school, I certainly would have taken it! That is, if "Effing" means what I think it does.
Agent Smith Posted February 16, 2022 In line with the OP's observations, the only way I can eff science is by saying science isn't about the correspondence theory of truth; it's more about the coherence theory of truth. In short, all of science could be one big fat lie!
TheVat Posted February 16, 2022 Unless one rejects the notion that the truth of propositions can consist in other propositions, which is the core of coherence theory. Scientific realism holds that some coherence may reside in a web of beliefs about the world but that the truth condition of propositions must always rest on objective features of the world. IOW, theoretic interpretations can be coherent or not, insofar as they fit into a web of other propositions, but their truth can only be determined empirically. For example, saying that both Saturn and the Sun orbited a stationary Earth once seemed coherent and consistent with other propositions, but ultimately the proposition collapsed as empirical techniques were greatly improved.
Agent Smith Posted June 21, 2022 Science is all about hypothesis construction. Hypotheses can never be proven true, for the simple reason that more than one will square with the observational data. In other words, science ain't about truth! That's how I would eff science.
AIkonoklazt Posted November 30, 2023 Posted November 30, 2023 What Science Does And Does Not Do Let's say that we witness two things, A and B. B comes after A, and as far as we can tell, B is "caused" by A. A ---> B We confirm this by doing stuff so that whenever A happens, B seems to always happen after A. Do this a lot of times (more than a few), and this is become somewhat of a "law." A ---> B A ---> B A ---> B A ---> B A ---> B A ---> B A ---> B We soon have better ways of figuring things out (technological advances), and pretty soon we start to see that there are more "steps" that go between A and B A ---> A1 ---> B A ---> A1 ---> A2 ---> A3 ---> B A ---> A1a-->A1b-->A1c-->A2a-->A2b-->A2c-->A3a-->A3b-->A3c-->B Actually, this division can go on downwards pretty much forever, until you hit a "wall of non-explanation"... "We have the protons and neutrons of an atom which are made up of even smaller subatomic particles, but what is holding those together? The nuclear forces should be pushing those apart... All the atoms in the universe should all be flying apart by now... Wait, we have detected some evidence that something must be holding them together. Since it sort of "glues" the subatomic particles together we'll call it GLUONS..." (hmm ok, so what makes up these "gluons?" Let's play "dissect a gluon" and see what happens next, and next... ad infinitum) or "At the base of evolution is genetic mutation, which is caused by some kind of gene damage via the collision of high-energy particles to the DNA or carcinogenic (or otherwise disruptive/distabilizing) substances to the same..." 
(...which goes back to "what made up those chemicals" and "what produces that high-energy radiation", and the question ultimately ends up going back to the sort of stuff you see in physics, like I described earlier)

So basically, science describes things in smaller and smaller steps, and predicts how things will repeat on a more and more accurate basis, but never actually manages to explain exactly how or why any of these "smallest steps" have to happen at all. Why does anything change at all? Divide things long enough, and you get a really small piece that you don't have a good explanation for, other than "it always seems to go to this next step if we have this other step before."

So what? Science describes and predicts phenomena. It never "explains" any of it. It doesn't have to. That's actually not what science is for. Science is about knowledge of the physical universe. Scientists don't do metaphysics**, and they don't do "metaphysical experiments", because there's no such thing as a "metaphysical experiment".

**metaphysics: the philosophical investigation of the overall nature of reality

...Also, there isn't such a thing as a "complete and correct model." This goes back to Duhem and Quine, which was mentioned in the opening post.
Underdetermination entails that no exhaustive modeling of underdetermined systems (such as the brain) is possible, as explained by the following passage from the SEP (emphasis mine): https://plato.stanford.edu/entries/scientific-underdetermination/

"…when Newton's celestial mechanics failed to correctly predict the orbit of Uranus, scientists at the time did not simply abandon the theory but protected it from refutation…

"…This strategy bore fruit, notwithstanding the falsity of Newton's theory…

"…But the very same strategy failed when used to try to explain the advance of the perihelion in Mercury's orbit by postulating the existence of "Vulcan", an additional planet…

"…Duhem was right to suggest not only that hypotheses must be tested as a group or a collection, but also that it is by no means a foregone conclusion which member of such a collection should be abandoned or revised in response to a failed empirical test or false implication."

There are related adages to this, such as:

"All models are wrong, some are useful"
"Correlation does not imply causation"
"The map is not the territory"

Etc.
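The underdetermination point can be made concrete with a toy sketch: two hypotheses that agree on every observation collected so far, yet disagree everywhere else, so the data alone cannot decide between them. The data points and both model forms below are invented purely for illustration.

```python
# Toy illustration of underdetermination: two hypotheses that fit every
# observation collected so far, yet make different predictions elsewhere.
# The data and model forms are made up for illustration.

observations = [(0, 0.0), (1, 1.0), (2, 2.0)]  # (x, measured y) pairs

def model_a(x):
    # Hypothesis A: a simple linear law.
    return float(x)

def model_b(x):
    # Hypothesis B: agrees with A at every observed x, diverges elsewhere.
    return float(x) + x * (x - 1) * (x - 2)

# Both hypotheses fit the existing data perfectly...
assert all(model_a(x) == y for x, y in observations)
assert all(model_b(x) == y for x, y in observations)

# ...so the observations alone cannot choose between them; only a new
# measurement at an unobserved point (e.g. x = 3) could.
print(model_a(3), model_b(3))  # 3.0 9.0
```

Any finite set of observations leaves infinitely many such rivals standing, which is exactly why "more than one will square with observational data."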
CharonY Posted November 30, 2023

2 hours ago, AIkonoklazt said: "We confirm this by arranging things so that whenever A happens, B seems to always happen after A. Do this a lot of times (more than a few), and this becomes somewhat of a 'law.'"

At that stage we would still think of it as correlation. At minimum, we need a model to explain why A causes B.
swansont Posted November 30, 2023

5 hours ago, AIkonoklazt said: "We confirm this by arranging things so that whenever A happens, B seems to always happen after A. Do this a lot of times (more than a few), and this becomes somewhat of a 'law.'"

Laws are, or can be made to be, mathematical statements. We have to do more than see that B always happens after A. We have to know that B doesn't happen unless A does, and that there is not some hidden causal factor involved. You might find that shark attacks correlate with ice cream sales, but buying ice cream doesn't cause shark attacks.
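The hidden-causal-factor point can be sketched numerically. In the sketch below, hot weather drives both ice cream sales and beach attendance (and hence shark encounters); all the numbers, the functional forms, and the `pearson` helper are invented for illustration.

```python
# Toy illustration of a hidden common cause ("lurking variable"): temperature
# drives both effects, which therefore correlate without either causing the
# other. All numbers and functional forms here are made up.

temperatures = [15, 18, 22, 26, 30, 33]  # daily highs, arbitrary units

ice_cream_sales = [2 * t + 5 for t in temperatures]  # driven by temperature
shark_attacks = [t // 10 for t in temperatures]      # also driven by temperature

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The two effects are strongly correlated with each other...
print(round(pearson(ice_cream_sales, shark_attacks), 2))  # 0.96

# ...but only because each separately tracks temperature; removing the
# common cause (holding temperature fixed) would destroy the correlation.
```

A real analysis would control for the confounder (e.g. compute the correlation within each temperature band), but the sketch shows why correlation alone cannot establish the A ---> B arrow.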
StringJunky Posted November 30, 2023

9 minutes ago, swansont said: "Laws are, or can be made to be, mathematical statements. We have to do more than see that B always happens after A. We have to know that B doesn't happen unless A does, and there is not some hidden causal factor involved. You might find that shark attacks correlate with ice cream sales, but buying ice cream doesn't cause shark attacks."
AIkonoklazt Posted November 30, 2023 Posted November 30, 2023 5 hours ago, CharonY said: At that stage we would still think of it as correlation. We need to have a model first to explain why A causes B at minimum. 1 hour ago, swansont said: Laws are, or can be made to be, mathematical statements. We have to do more than see that B always happens after A. We have to know that B doesn’t happen unless A does, and there is not some hidden causal factor involved. You might find that shark attacks correlate with ice cream sales, but buying ice cream doesn’t cause shark attacks. What both of you said are true. However, models have this "permanently temporary" status of sorts. You have models that are more or less reliable and useful than others, but ultimately there's no completion. 1 hour ago, StringJunky said: Nahp, this classical example is a bit more impressive: https://www.investopedia.com/terms/s/superbowlindicator.asp
StringJunky Posted November 30, 2023

10 minutes ago, AIkonoklazt said: "What both of you said is true. However, models have this 'permanently temporary' status of sorts. You have models that are more or less reliable and useful than others, but ultimately there's no completion. Nahp, this classic example is a bit more impressive: https://www.investopedia.com/terms/s/superbowlindicator.asp"

If you prefer to use hundreds of words when a simple few seconds of video will do... fill your boots.
AIkonoklazt Posted November 30, 2023 Posted November 30, 2023 4 minutes ago, StringJunky said: If you prefer to use hundreds of words when a simple few seconds of a video will do ... fill your boots. Tell that to the original poster. Meanwhile, here's a coupla thousand words for you: https://plato.stanford.edu/entries/scientific-underdetermination/