Trurl Posted Wednesday at 12:44 AM

Ever meet someone who says, "If I watch the game, they will lose"? They believe this. But imagine they watched a game and the team won. Would that end the streak of always seeing them lose? Scientifically, there is no way to prove or disprove this theory of losing. Obviously, as a mere spectator you have no influence on the game, or do you? What if, by watching the game, you are subconsciously predicting the odds of who will win? If you watch this game, you may have wanted to break the cycle and thought the team had a chance of winning.

I am reading a book that talks about random samples. But the question is: what is random? Isn't it that a random sample can look like anything? We are looking for a bell curve where the rare cases are small in number. Everything starts with the average somehow being random. But suppose you are in gym class, testing how many free throws you can make out of 20. Five to ten would be average, but what about the skilled player who practices all the time and makes 18 out of 20? He has just broken the average, statistically and on the bell curve, compared to the others.

My question is: if random can look like anything, and we can't achieve it by selecting a sample, then why do we always compare the test group to the average?

I understand the importance of statistics, but we start to compare things that don't compare to each other. We have research statistics that contradict one another. So we use a meta-study to compare statistics that contradict. All to be debated by doctors who disagree on the data. I was surprised by how many medical decisions are judgment calls. I know the scientific method is logical, but the answer is not always clear. Science changes over time. I love the statistics in sports. But as the news sportscaster says, "At the end of the day, you still have to play the game."
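(A minimal sketch of the gym-class scenario in Python, assuming each free throw is an independent trial with a fixed per-player success probability; the 35% and 90% make-rates are invented for illustration, not taken from any real data.)

```python
import random

rng = random.Random(42)  # seeded so the sketch is reproducible

def shoot(p, n=20):
    """Count made free throws out of n, each made with probability p."""
    return sum(rng.random() < p for _ in range(n))

# An "average" class member might make ~35% of throws (assumed figure);
# the practiced player ~90%. An 18/20 score is then not a fluke of
# randomness: it comes from a different underlying probability.
class_scores = sorted(shoot(0.35) for _ in range(30))
star_score = shoot(0.90)

print(class_scores)  # clusters around 20 * 0.35 = 7 makes
print(star_score)    # around 20 * 0.90 = 18 makes
```

The spread you see here is binomial, not "anything", and the skilled player is a draw from a different distribution rather than a rare draw from the average one, which anticipates swansont's point below.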
swansont Posted Wednesday at 01:02 AM

19 minutes ago, Trurl said:
Ever meet someone who says, "If I watch the game, they will lose"? They believe this. But imagine they watched a game and the team won. Would that end the streak of always seeing them lose? Scientifically, there is no way to prove or disprove this theory of losing. Obviously, as a mere spectator you have no influence on the game, or do you? What if, by watching the game, you are subconsciously predicting the odds of who will win? If you watch this game, you may have wanted to break the cycle and thought the team had a chance of winning.

Sure there is. You watch, they win. That disproves it. What you can't do is prove or disprove it by not watching.

19 minutes ago, Trurl said:
I am reading a book that talks about random samples. But the question is: what is random? Isn't it that a random sample can look like anything? We are looking for a bell curve where the rare cases are small in number. Everything starts with the average somehow being random. But suppose you are in gym class, testing how many free throws you can make out of 20. Five to ten would be average, but what about the skilled player who practices all the time and makes 18 out of 20? He has just broken the average, statistically and on the bell curve, compared to the others.

You've mistakenly assumed that making a free throw is random with a 50-50 chance, and it's not. You don't know the average until you have the statistics, and this is possibly confounded by the fact that you can improve with practice.

19 minutes ago, Trurl said:
My question is: if random can look like anything, and we can't achieve it by selecting a sample, then why do we always compare the test group to the average?

You need to be looking at a properly formulated situation, which you have not done.

19 minutes ago, Trurl said:
I understand the importance of statistics, but we start to compare things that don't compare to each other. We have research statistics that contradict one another. So we use a meta-study to compare statistics that contradict. All to be debated by doctors who disagree on the data.

Can't really comment on that without an example; this is too vague. One issue with meta-analysis is that it's not difficult to mess it up by using data sets that don't have identical conditions. This can lead to Simpson's paradox (though it's not really a paradox): https://en.m.wikipedia.org/wiki/Simpson's_paradox

19 minutes ago, Trurl said:
I was surprised by how many medical decisions are judgment calls. I know the scientific method is logical, but the answer is not always clear. Science changes over time. I love the statistics in sports. But as the news sportscaster says, "At the end of the day, you still have to play the game."

Again, you need specific examples rather than vague descriptions.
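(To make the Simpson's paradox warning concrete, here is the kidney-stone example from the linked Wikipedia article, worked in a few lines of Python. The counts come from that article: treatment A wins within each stone-size group, yet B comes out ahead when the groups are pooled, because the two treatments were applied to the groups in very different proportions.)

```python
# (successes, patients) for each (treatment, stone size) cell,
# numbers as given in the Wikipedia article on Simpson's paradox.
data = {
    ("A", "small"): (81, 87),
    ("B", "small"): (234, 270),
    ("A", "large"): (192, 263),
    ("B", "large"): (55, 80),
}

def rate(*cells):
    """Pooled success rate over one or more (successes, patients) cells."""
    made = sum(c[0] for c in cells)
    total = sum(c[1] for c in cells)
    return made / total

for size in ("small", "large"):
    print(f"{size}: A={rate(data[('A', size)]):.0%}  "
          f"B={rate(data[('B', size)]):.0%}")  # A wins in both strata

print(f"pooled: A={rate(data[('A', 'small')], data[('A', 'large')]):.0%}  "
      f"B={rate(data[('B', 'small')], data[('B', 'large')]):.0%}")  # B "wins"
```

Running this prints A ahead in both the small-stone (93% vs 87%) and large-stone (73% vs 69%) groups, but behind in the pooled totals (78% vs 83%), which is exactly the trap a careless meta-analysis can fall into.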
Trurl (Author) Posted Wednesday at 05:34 AM

Well, let me say I am not trying to discredit statistics; I am just finding some idiosyncrasies in them. Are research papers and statistics useful in answering unknown problems in the medical field? I have been reading the book How to Read a Paper, and it seems that unknown answers are based on opinions supported by facts, but are still basically judgment calls. The book even says that, given the same facts, two doctors can have different interpretations. Meta-studies compare the results of many papers. I talked with a med student a few weeks ago. He mentioned meta-studies but had a hard time explaining how such decisions are made. AI probably helps. And obviously, if the problem is unknown and the doctor is looking for the best treatment, the research is the only option. Again, I am not saying research papers have no use, or that a doctor making a judgment call is wrong. That is just how it works. I'm just saying some science is an art form.
swansont Posted Wednesday at 01:12 PM

And I'm saying details matter. Saying doctors can disagree is not saying they will. You don't specify how often this happens, or under what circumstances, which might lead someone to conclude it happens more often than it does. Science strives for precision. Being vague is the enemy.
Trurl (Author) Posted 19 hours ago

I am not a doctor; I don't know how often it happens. I just read the book. But this often occurs when they are dealing with the unknown. We have all heard of people going to see multiple doctors, or of patients being misdiagnosed.

My question is: are the statistics helping or hindering how we judge a credible paper? They give you a way to sum up a paper after the abstract, but using stats to summarize the value of a paper would, imho, lead to disagreement among doctors. Obviously there must be other sorting and cataloging methods, but I am focusing on stats.

For instance, there was a study reported on the local news finding that processed red meat consumption increased the risk of dementia by 13%. That is useful and sounds like a worthwhile study. But can we use these stats to compare it to other works? This is computational science.

The rest of this post is just my opinion and is meant more for discussion than factual science. The red meat study followed 30,000 people for several years. I think data science is being applied to everything in search of the big discovery. Instead we are getting massive amounts of endless data. So we turn to AI to sort it.

"The Sixth Sally, or How Trurl and Klapaucius Created a Demon of the Second Kind to Defeat the Pirate Pugg." This story explains everything.
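(A hedged back-of-envelope aside on that headline number: a relative figure like "13% increased risk" only becomes interpretable next to a base rate, which the news report didn't give. The 10% baseline below is invented purely for illustration.)

```python
baseline_risk = 0.10       # assumed base rate of dementia; purely illustrative
relative_increase = 0.13   # the "13% increased risk" from the news report

elevated_risk = baseline_risk * (1 + relative_increase)
print(f"{baseline_risk:.1%} -> {elevated_risk:.1%} "
      f"(absolute change: {elevated_risk - baseline_risk:.1%})")
# prints: 10.0% -> 11.3% (absolute change: 1.3 percentage points)
```

On a 1% base rate the same "13%" would be an absolute change of about 0.13 percentage points, which is CharonY's point below about percentages quoted without base values.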
CharonY Posted 18 hours ago

58 minutes ago, Trurl said:
My question is: are the statistics helping or hindering how we judge a credible paper? They give you a way to sum up a paper after the abstract, but using stats to summarize the value of a paper would, imho, lead to disagreement among doctors. Obviously there must be other sorting and cataloging methods, but I am focusing on stats.

That is not how it works, though. A paper will outline its statistical method in the methods section. That will tell you (if you know how to read it) a fair bit about things like how strong the observed effects are, and the cohort size and composition can be used to evaluate how specific or universal the data set might be, and so on. For example, a paper doing calculations on only three patients is not going to have the statistical power of a study with a cohort of a few thousand participants. You just don't look at numbers without context.

1 hour ago, Trurl said:
For instance, there was a study reported on the local news finding that processed red meat consumption increased the risk of dementia by 13%. That is useful and sounds like a worthwhile study. But can we use these stats to compare it to other works? This is computational science.

This, for example, is entirely worthless in isolation. One would need to read the paper and look at how they arrived at that number. In particular, a percentage quoted without its base value doesn't tell you much.
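(To put the three-patients-versus-thousands point in rough numbers, a sketch using the textbook normal-approximation interval for a proportion; the approximation is admittedly crude at tiny n, which only strengthens the point.)

```python
import math

def approx_95ci(successes, n):
    """Rough 95% confidence interval for a proportion (normal approximation)."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error shrinks like 1/sqrt(n)
    return p, p - 1.96 * se, p + 1.96 * se

for n in (3, 30, 300, 3000):
    k = round(0.6 * n)  # roughly the same observed rate at every cohort size
    p, lo, hi = approx_95ci(k, n)
    print(f"n={n:>4}: observed {p:.0%}, 95% CI roughly ({lo:.0%}, {hi:.0%})")
```

At n = 3 the interval spans essentially everything (it even exceeds 100%, an artifact of the approximation), while n = 3000 pins the rate down to within a couple of percentage points; that is the statistical power CharonY describes, in rough form.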