Marat
Everything posted by Marat
-
The essential ridiculousness of the FDA is that it requires the makers of various substances on the market to prove claims made about those substances by people other than themselves. This allows products to be banned because they fail to live up to the exaggerated claims made by quacks about their healing capacities. Another absurdity of current policy is that it forgets that the basic idea of America is supposed to be individual liberty, and there can logically be no more personal, private, and essential freedom than the freedom to determine for yourself how you will treat or cure your illness or attempt to save your life. Yet the government somehow supposes that this liberty belongs to itself! The right of each individual to control his own medical fate is surely within the private sphere of personal autonomy recognized by the Supreme Court in such cases as Griswold v. Connecticut and Roe v. Wade.

It is also now generally forgotten that the founding fathers of America were dedicated to individual medical freedom. Thomas Jefferson mocked the fact that in France the government had the nerve to ban the emetic, and Benjamin Rush, a founding father who was also a medical doctor and a vigorous supporter of medical freedom, insisted that there should be a right to medical freedom in the American Constitution. In fact, Americans used to have medical freedom until the beginning of the 20th century, when the medical fraternity enlisted the support of the government for its monopoly control of medicine. Before then, Coca-Cola had cocaine in it, you could buy opium at a grocery store, and poisons were freely available at pharmacies as long as purchasers signed a form with their name and address in case there was a murder in town. Anyone could set himself up as a doctor, and a wide variety of alternative medical schools existed.

Another absurdity of the FDA is its assumption that drugs have to be 'safe and effective' before they can be released to the public. The problem with this notion is that living with a disease is itself not safe, and in many cases no remedy is as yet provably effective. So why not allow people to weigh the risks and benefits themselves: either conservatively living on with an illness that is inadequately treated or not treatable at all by currently approved therapy, or taking a gamble on unproven remedies? That should be up to each individual. This issue was explored in American law in Abigail Alliance v. von Eschenbach, and in the most insane judicial reasoning ever heard in an American courtroom, the court decided that it was better for dying cancer patients who cannot be helped by any conventional medicine simply to die than to have the right to reach out to alternative treatments which might not be safe or effective. By analogy, if you are drowning in quicksand and only one tree branch is in range to pull yourself out, you had better not try it unless the government comes around in time to certify it as safe and effective.
-
More generally, if the possibility of romantic affection being realized fully through marriage is threatened by the prospect of having children with genetic problems, it is best simply to go for love and forget about having children. Because there is a strong cultural (and religious) assumption in favor of having children, the downsides of the experience are simply excluded from serious consideration, but they should be given their proper weight.

First, when couples have children, their own intellectual and emotional development tends to stall, since they begin investing their talents for human development in their offspring rather than in themselves. As a result, their evolution as humans stagnates at the age at which they give birth. Instead of letting their imagination and intellect continue to expand, they contract both in order to focus on imparting the very basics to their children.

Second, anyone over 12 years of age should know that life is generally a hideous experience. Since human life depends on sustaining itself as an organic entity in a world governed by entropy, the natural tendency of existence will be to incline human life towards accidents, disease, decay, death, and the frustration of all its efforts to build order. The breadth of human awareness and the imperative of concentrating more on dangers than on pleasures cause our consciousness to operate as a torture apparatus for ourselves, since it magnifies the bad and turns away from the good as no longer requiring its attention. Our capacity to anticipate misfortunes and dwell on them after they occur further augments their power beyond their natural scope. Also, given the possibility of a variety of extreme horrors overtaking us, from progeria to Huntington's, from pancreatic cancer to Alzheimer's, it seems preferable not to drag any new human entities into this terrible gamble called life.

Third, with middle-class incomes stagnant or falling because of neoliberalism, the only route to prosperity for many couples will be to avoid increasing expenses by deciding not to procreate, thus eliminating a good $300,000 of costs per child.

Fourth, you also help preserve the scarce resources of a stressed environment for the benefit of the rest of humanity. Even if you obsess every day over energy conservation, you will never approach the energy savings achieved by simply not having a child.
-
Of course the basic problem with making social policy is that it is democratic, not scientific, so the general assumptions and beliefs of the people, or of their masters, determine policy rather than rational investigation. A good example is the mass hysteria over Saddam Hussein's weapons of mass destruction supposedly being deliverable to a target in the West 'within 45 minutes.' This was taken so seriously that it was considered impossible to wait just a few more days for Hans Blix and his UN inspectors to complete their more careful, empirical study of the reality of the threat before rushing into action. In countries with constitutionally entrenched human rights enforced by independent courts, there can be some review of government action for its rationality, since any policy that limits rights must prove its warrant by showing a court that it had good rational grounds for those limits. In fact, however, those courts tend to be under the sway of the same hysteria that grips the policy-makers, so no truly rational review takes place.
-
When Ed says that "there is research already being done to correct the autoimmunity of diabetes," this phrase seems a bit vague in its reference. Type 1 diabetes was already thought to be primarily an autoimmune disease in the mid-1960s, and by the mid-1980s efforts were underway with the then-new calcineurin inhibitor class of immunosuppressives to prevent the development of full-blown type 1 diabetes in children at the very earliest stage of the disease. So this is indeed old news. What is truly new and revolutionary is the theory that the same autoimmune processes that destroy the beta cells of the pancreas at the onset of the disease also persist throughout its course. It had been accepted before then that the autoimmunity burns itself out once the beta cells are destroyed, but Dr. Faustman's work in the 1990s on reversing type 1 diabetes in the NOD mouse model revealed that pancreatic beta cells continue to try to regrow, but are continuously destroyed by an autoimmunity which never ceases. This first raised the possibility that the same regrowth is occurring in human type 1 diabetes, but is likewise being stopped by a continuing autoimmunity. If this were so, then interventions to suppress the autoimmune processes, such as are routinely used in other autoimmune diseases like lupus, might allow the regrowth of beta cells to get ahead of the destructive processes.

But with Adams' work and a scattering of similar papers in other areas of diabetology, we are now entering a third phase of our understanding of diabetes and autoimmunity. The truly revolutionary discovery here is that the continuing autoimmunity may not only be destroying the pancreatic beta cells, but may also be causing the late sequelae of the disease -- the vascular and neurological deterioration -- which had always been thought, and is still thought by more than 99% of clinicians and scientists working in this field, to be caused by hyperglycemia. If treatment were now to switch from the usually futile, extremely burdensome, and often lethal effort to control hyperglycemia to using extremely toxic immunosuppressive drugs to control the autoimmune processes really causing diabetic complications, this would truly be a clinical and scientific revolution. It would require scientists in the field to become so convinced that autoimmunity was really the cause of complications that it was worth accepting all the severe downsides of immunosuppressives, such as heightened cancer and infection risks, cataract formation, and atherosclerosis. Adams himself explicitly recognizes that this would be a revolutionary development, since in his abstract he states: "If this [autoimmune hypothesis of the etiology of diabetic complications] is so, the therapeutic implications are immense, involving a switch from ineffectual tight glycemic control to immunotherapy."

As I said above in my discussion of Benfotiamine, there are of course other reasons to treat diabetic hyperglycemia: to avoid hyperosmolar coma, predisposition to infection, vitamin loss through polyuria, and ketoacidosis, and to maintain growth and metabolism. But the control of hyperglycemia in this case would be radically different from the extremely dangerous and burdensome type of control now practised, and could certainly be designed to eliminate neurologically damaging and often lethal hypoglycemia.
I'm not really sure what Edtherian is saying in response to this by commenting that "because of the loss of function of the pancreas the people with diabetes still end up with low blood sugar." Of course he must mean high blood sugar, but if this were treated with insulin not to normalize glucose levels but just to avoid the more extreme conditions reviewed above, low blood sugar could largely be eliminated as the huge clinical problem it is today.

I'm also not sure what is meant by Rocket's comment that medicine is not a science. Medicine is a science directed to practical outcomes, and its response to practical problems is not always scientific, but this does not mean that it is not a science in the development of its theory structure and the testing of that structure against empirical data. Just because people often drive badly and irrationally, and thus in an 'unscientific' way, does not mean that the principles of mechanics that go into designing cars are not a science. There are even MMS degrees -- 'Master of Medical Science' -- given by many universities, so those arguing that medicine is not a science should take up their objections with those august institutions.

With respect to what I am saying about scientific revolutions and the resistance of established explanatory paradigms to recalcitrant evidence, all of this has been so widely known, so commonly accepted, and so non-controversial ever since Thomas Kuhn's epoch-making 'The Structure of Scientific Revolutions' was published in 1962 that I can't believe people are so unfamiliar with it. There is a whole academic industry built around this field, and if you google its common terms of art, such as 'normal science,' 'abnormal science,' 'scientific revolution,' 'paradigm shift,' etc., you will find countless entries. You can dispute whether 50% of the content of every Ph.D. program in the history and philosophy of science in the world is valid, or whether the theory of scientific revolutions makes sense or is empirically well-founded, but my applying it in this context certainly should not be regarded as simply betraying an utter lack of understanding of what science is or how it operates.
-
lowest level of consciousness
Marat replied to Peels's topic in Anatomy, Physiology and Neuroscience
The capacity for self-awareness is a key indicator of a higher level of consciousness. If you put a hat on an ape and put a mirror in front of the animal, it will comprehend that the hat is on its head, which is a measure of self-awareness not found in lower animals, which just see another creature of the same species with something odd on its head. In an important experiment about octopus consciousness, it was found that octopi are sufficiently self-aware that they can visually assess a complex structure and judge whether they can fit inside it and move around within it. This was interpreted to indicate that an octopus is not just present unto the world of external experience as a dimensionless stage, knowing nothing but how it feels and what is in its environment, but that it also knows itself and how it fits into that environment as another object among the objects around it. Most animals can't manage this feat.
-
I suppose one limit on the potential size of biological entities would be the height of the atmosphere and the diminishing concentration of oxygen at higher levels. This would force New Jersey-sized creatures to crawl fairly low to the ground, which would increase friction and so also increase the demand for oxygen to metabolize the calories needed to overcome that friction. This is perhaps part of the reason why whales, the largest animals, live in the oceans, which reduce friction. Another issue I can imagine, if we factor nutrition into the calculation, is that the ratio between the size of the creature and the mass of nutritious material in its environment could not exceed a certain limit, since the calories required to keep moving to find more sources of calories would eventually exceed the nutritional calories that could be gathered by moving. The classic square-cube constraint on skeletal support can also be sketched numerically, as in the example below.
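To make the structural side of this concrete, here is a minimal back-of-envelope Python sketch of the square-cube argument. It assumes an isometrically scaled land animal whose weight grows with the cube of its linear size while the load-bearing cross-section of its bones grows only with the square; the baseline mass, bone area, and bone strength figures are illustrative assumptions, not data from this thread.

```python
# Back-of-envelope square-cube estimate: at what scale factor does an
# isometrically enlarged animal crush its own leg bones?
# All baseline numbers below are rough, illustrative assumptions.

BONE_STRENGTH_PA = 170e6   # approximate compressive strength of bone (Pa), assumed
G = 9.81                   # gravitational acceleration (m/s^2)

base_mass_kg = 5000.0      # elephant-like baseline mass (assumed)
base_bone_area_m2 = 0.04   # combined load-bearing bone cross-section (assumed)

def bone_stress(scale: float) -> float:
    """Stress on the leg bones when every linear dimension is multiplied by
    `scale`. Weight grows as scale**3 but supporting area only as scale**2,
    so stress grows linearly with scale."""
    weight_n = base_mass_kg * scale**3 * G
    area_m2 = base_bone_area_m2 * scale**2
    return weight_n / area_m2

# Step up the scale factor until the bones can no longer bear the load.
scale = 1.0
while bone_stress(scale) < BONE_STRENGTH_PA:
    scale += 0.1

print(f"bones fail near scale = {scale:.1f}x, "
      f"i.e. mass = {base_mass_kg * scale**3 / 1000:.0f} tonnes")
```

Even with these generous numbers the ceiling arrives at a linear scale of roughly a hundred elephants, i.e. a few hundred metres -- far short of New Jersey -- and the oxygen and foraging constraints above would presumably bind well before the bones gave out.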
-
For years now my wife has been interrupting me in the middle of my sentences to finish them for me, and every single time she anticipates incorrectly what I'm about to say, so I have to double back, explain the misconception, and attempt to finish them again, thus wasting a lot of time. But still, she keeps doing it. I can only assume she must be having an affair with the guys who run Google and has had a bad influence on them.
-
It is also important to note how little accurate record-keeping there was at the time when Christ purportedly lived. First, the ancient world did not operate with the strict separation between myth and reality that our more scientific and objective culture uses, so even serious historians like Herodotus and Thucydides invented speeches for their historical characters to move the story along, and every army would be said to be 'ten thousand men strong' with no attempt to determine its actual size. So stories about Christ have to be taken with a grain of salt and not treated as even having been intended to be accurate in the modern sense of the term.

Second, recording things was itself a very difficult process, given how expensive and rare papyrus and other forms of primitive 'paper' were. They generally had to be imported from just two places, Egypt and Phoenicia, where the right kind of reeds grew for making them, so unless you were willing to take the time and effort to carve something onto stone or paint it on a piece of broken pottery, you had to pay a lot to write something down. This is why so much of what was known was recorded in verse, so that it could be preserved by people simply memorizing it, as was done with the works of Homer.

Third, as one poster has already pointed out, Christ was not all that special in his time, since ancient Judea was full of religious fanatics who buttonholed people to harangue them about some new theory, trying to establish their credibility by performing a few magic tricks. Many Jews regarded Christ as primarily announcing the return of the earthly rule of the House of David over Judea, so he was more a political reformer than a magician or saint, much less a son of God. Given all of these factors, Christ wouldn't have seemed sufficiently important during his lifetime to be worth depicting.
-
lowest level of consciousness
Marat replied to Peels's topic in Anatomy, Physiology and Neuroscience
Regardless of the accuracy of the specific example used, the poster's general observation was correct and useful for the point he was making. Languages certainly do reflect their culture's concerns by having a larger and more specific vocabulary to address issues of particular interest to them. For example, the philosophically-inclined Ancient Greeks had a much larger store of abstract concepts than the other ancient languages, so while Hebrew had to get by with using 'breath' as a metaphor for 'soul,' the Greeks could already distinguish between 'psyche' and 'nous.'
-
What sets the theoretical limit to the size of any animal, even if we assume that nutritional requirements are satisfied? Is it the tension between the electrostatic forces holding its tissues together and the gravitational forces crushing it under its own weight? Surely there couldn't be a creature crawling about which was the size of New Jersey!
-
I was talking about the great effort to close down all sorts of alternative forms of medicine with the Flexner Report in the U.S. in 1910 and its spinoffs in other countries. There was nearly complete medical freedom before that, and anyone could call himself a doctor with as much authority as the next person, with it being up to the patient to satisfy himself as to the education, experience, school of medical thought, and intelligence of the individual doctor he was dealing with. Thomsonianism, eclecticism, homeopathy, electric medicine, allopathic medicine, and a host of other branches of the healing arts were all open to unrestrained market forces and the free choice of the public. Medical schools varied in size from five or six students to hundreds, and offered a wide variety of approaches to treating disease. Today, of all these alternative forms of medicine, only osteopathy survives with any real legal authority to educate doctors who can legally sue for recovery of fees and prescribe medicines, and even that status was won only gradually, state by state, across the U.S.

Similarly, the patient's medical freedom was not yet restricted prior to the Flexner era. A patient could buy any chemical from a pharmacist, even poisons, though most poisons were dispensed only if the purchaser signed his name and address first, in case they were used in a crime. Opium was sold in grocery stores, Coca-Cola actually had cocaine in it, and America had its highest number of drug addicts per capita from all the old ladies addicted to the morphine in laudanum.

Doctors prefer a private healthcare system to a public one, since in a private system they 'negotiate' with patients for their fees when the patients are sick and helpless, so they can get what they want, while public systems control their fees. But what they want most of all is for government to come to their aid by blocking all their competitors in the health field, so that allopathic physicians can monopolize the field through government licensing restrictions, and then to leave them alone by not also protecting the public with a public healthcare service. In short, what they want is modern America.
-
I will assume that what was meant was that the entire world would have just one time zone. Actually, before mass travel, the opposite extreme existed, with each individual town keeping its own time, with its clocks set to the local position of the sun. This all had to change in the 19th century, however, with the invention of the railroad, which made it possible to travel at such speeds that frequent resetting of clocks became too burdensome. There are some countries today, like Saudi Arabia, which have kept their own time for religious reasons, so there are sudden time differentials across their borders. Somehow people travelling back and forth across borders in those areas accommodate this idiosyncrasy. But I'm not sure there would be much of a problem if the world were just one gigantic time zone. We would adjust to it by just not all calling the same position of the hands on a clock face 'noon,' 'dusk,' 'day,' or 'night.' So what if noon at Greenwich were 12:00 while solar noon in New York fell at about 17:00? That might prove more convenient than having to guess time zone intervals all the time and readjust watches during international plane flights, especially given the difficulty of dealing with time-set electronic devices.
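To make the arithmetic concrete, here is a minimal Python sketch of where 'noon' would land on a single universal clock: local solar noon shifts by one hour for every 15 degrees of longitude. The city longitudes below are approximate assumptions, and the calculation ignores the equation of time and daylight saving.

```python
# Under a single world-wide clock (Greenwich time everywhere), local solar
# noon shifts by 1 hour per 15 degrees of longitude. West longitudes are
# negative, so their solar noon falls *after* 12:00 on the universal clock.

CITIES = {               # approximate longitudes in degrees (assumed values)
    "Greenwich": 0.0,
    "New York": -74.0,
    "Tokyo": 139.7,
    "Riyadh": 46.7,
}

def solar_noon_universal(longitude_deg: float) -> str:
    """Clock reading at local solar noon, on a single universal clock."""
    hours = (12.0 - longitude_deg / 15.0) % 24.0
    h, m = int(hours), round((hours % 1) * 60)
    return f"{h:02d}:{m:02d}"

for city, lon in CITIES.items():
    print(f"{city:10s} solar noon at {solar_noon_universal(lon)} universal time")
```

For New York at roughly 74 degrees west this gives about 16:56, which is why noon there would read as roughly 17:00 under a Greenwich-anchored world clock.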
-
lowest level of consciousness
Marat replied to Peels's topic in Anatomy, Physiology and Neuroscience
With respect to the earlier discussion about consciousness being just a collection of connected neurons, I think that two qualifications are necessary. First, to avoid the 'myth of cerebral localization,' we should note that human consciousness is always informed by things going on throughout the body: hormones, feedback from the peripheral nervous system, characteristics of the circulation, etc., all influence thinking, so consciousness is in a sense located throughout the body. Second, while our consciousness correlates with having an intact, healthy, and functioning set of connected neurons in our head, that anatomical mass doesn't really 'explain' what consciousness is, any more than sex hormones provide an adequate account of human romance rather than just being its causal ground. Some people now argue that we understand the brain poorly if we view it as an organ shaped by evolution to help us think, since thinking is not its ultimate purpose. The ultimate purpose of the brain is to promote our survival, which might in some cases mean developing a talent not to think rationally about certain things, such as the good reasons we might rationally have to be terrified, in despair, or paralyzed by impending death.
-
I went to a private high school in the U.S. and took AP calculus in my final year, and I also took the basic science courses offered. Friends of mine at other private and public schools in the area seemed also to have about the same math and lab science knowledge that I had. I then went on to study at a German university, where I was shocked to find that many of the new German science students (about 30%) had not yet had calculus, and so had to have certain concepts explained graphically rather than mathematically. I was, however, very unpleasantly surprised at how easy the other students found all the labs (which for me were far too advanced) since, as they explained to me, "they had already done these labs in high school." So my personal experience suggests that U.S. secondary education is adequate in math but defective in laboratory science.
-
Clinical medicine is often practised under constraints that keep it from being fully scientific. Irrational ethical and legal standards derived from the surrounding culture limit the fully rational treatment of the patient; institutional pressures from medical societies further limit rational practice for reasons of promoting the status of the profession, its monopoly over treatment, or its fee schedules. Constraints of time and resources -- since something always has to be done to treat the patient before you now -- also force practice to be less than fully scientific. However, there is also an aspect of medicine commonly called 'medical science,' which is the material found in medical journals, and I think this fits the definition of science. It follows the scientific method; theories are designed to match empirical results and predict new data; falsification of theories is officially cultivated (Popper's criterion); every effort is made to weave the explanation of new facts back into the existing explanatory paradigm, rather than extend it with each new discovery; and theory construction operates on a continuum with sciences such as chemistry, physics, biochemistry, organic chemistry, physiology, statistics, etc.

But like all sciences, even medical science resists new data which pose a serious challenge to established theories. It does this essentially by denying attention to recalcitrant facts, undermining the reputation of those who discover them, inventing theoretical problems with the experimental methods applied, etc., and all of this is done with much more energy than would be applied to new data which confirm existing hypotheses. Some of this can be characterized as just a scrupulous, scientific response to something new and startling, but the tendency simply not to notice, take seriously, publish, or think about data which challenge the existing hypothesis too radically certainly cannot. There was an earlier thread here which exemplified this problem: a scientist at Cornell had recently conducted an excellent study which supported the reality of ESP, and there was considerable resistance to publishing it, not for any flaws in its methodology, but just because the result was too startling to be accepted as true.

Diabetology is now facing a similar refusal of the field even to notice serious challenges to existing hypotheses, such as the clear evidence, now in about 70 scientific journal articles, that the complications of diabetes can be prevented -- not by potentially lethal and extremely difficult efforts to normalize blood sugar -- but by taking a simple pill with no reported side-effects, Benfotiamine. Michael Brownlee published important findings showing this in the February 2003 issue of Nature Medicine, but most endocrinologists and diabetologists in North America have, eight years later, still never even heard of this treatment. The reason is that the Benfotiamine results have never been subjected to large-scale studies, as the strict blood sugar control hypothesis was during the 1990s in the DCCT study. But why not, given that Benfotiamine could revolutionize and greatly simplify treatment, as well as avoid the 4% to 6% of patient deaths from iatrogenic causes? I think it is just a case of the existing paradigm protecting itself by refusing to mobilize its energy to study any results that refute it, so that it can then always claim to persist unchanged on the argument that the contrary hypothesis "has never been adequately proved."
The partisans of Benfotiamine treatment have to concede that the evidence supporting it is insufficient -- not because there is anything questionable about Benfotiamine in itself, but rather because the resources will never be made available to back it up. The fact that you see television ads every five minutes for the billion-dollar blood sugar measurement meter industry, which would be nearly wiped out if Benfotiamine therapy were to show strict blood sugar control to be unimportant, might well have something to do with it, since no one is going to put up comparable funds to test a drug like Benfotiamine which cannot be patent-protected.
-
A few years ago a sarcophagus was discovered among several others in the same grave, with names on them suggesting that they may have been those of Jesus and his family. However, the significance of this find has been challenged on two grounds: first, that the collection of names found together was sufficiently common for it not to be significant; and second, that the names could have been added long after the sarcophagi were made in order to dress them up as significant Christian artifacts. Perhaps someone can find the exact web reference, since it was a major story for a while.
-
You actually said, "something should be shown to be harmless (or less harmful) for it to be acceptable," so there was some ambiguity in your statement, to which I was responding. It seems that you are now making the parenthetical statement the primary one, but I don't want to be captious. I think we can probably agree that the goal should be to develop some metric which sets a standard for the harmfulness of the effects of any exercise of freedom, and that this standard should be sufficiently neutral and objective in design that it can resist the pull of the kind of majoritarian pressures which could utterly negate individual liberty. Society always tends to associate objective harm with anything it doesn't like (the majority of American medical students in a 1959 survey believed that masturbation was physically harmful), so the tests for the objectivity of any harm invoked to deny anyone personal liberty should be strict.
-
There is a Roman-era historical document (is it the record left by the historian Josephus?) which refers to Jesus as having been an actual historical figure. Since the writers of this source would have had no motivation to lie to support a Christian story, there probably was some prominent person among the many itinerant religious preachers of that time and place who was called Jesus. Most likely the Jesus who has come down in the New Testament stories, written many decades to a century after his supposed death, rolled a number of historical figures into one composite figure (as happened with Homer, the supposedly single author of the Odyssey and Iliad), and then acquired some mythological adornments to make a better story.
-
God 3%. Satan 97%. Does God needs a new marketing man?
Marat replied to Greatest I am's topic in Religion
I'm not suggesting that Peter or Doubting Thomas directly denied the divinity of Christ, but rather, that if they had really believed in the divinity of Christ, they would have had no conceivable motivation either to abjure him (as Peter did) or doubt that he could perform any and every miracle (as Thomas implicitly did by his sceptical attitude to the reports of Christ's resurrection). If anyone had actually encountered God honestly incarnated as a human, the effect would have been so overpowering that any response in the witnesses other than perpetual and absolute adoration, belief, and loyalty would have been inconceivable. It is as though Biff, Scooter, Sally, Spanky, and God all submitted displays to a high school science fair, and the judges were not absolutely convinced that God's submission was the best.
-
I think with respect to the diabetes sub-topic there is still some confusion about what I am saying. I am not trying to discuss diabetes per se, considered (in type 1) as a pancreatic insufficiency in which autoimmune processes have destroyed the beta cells so that little or no insulin is produced, a condition for which exogenous insulin is required. That has been the problem since 1921, and insulin treatment ever since then has had its attendant problems of insufficiently normalizing hyperglycemia and causing the unwanted and often lethal side-effect of hypoglycemia. These problems, as you quite rightly state, would to some degree remain even if the late sequelae of the disease, the microvascular and neurological complications, were successfully treated by addressing the continuing autoimmunity, which Adams and a few others are now starting to say not only causes the initial beta cell destruction, but also goes on to cause the vascular and neurological lesions which start to appear around a decade after the beta cell destruction.

But the key point is this. Much of the problem with insulin therapy now is that it is directed toward the scrupulous normalization of erratic blood glucose levels, and this is not only difficult, imperfect, and dangerous, but also seems to be not very effective at preventing the vascular and neurological late sequelae of the disease. If it could be shown, as Adams and others are now doing, that the vascular and neurological late sequelae are in fact not due to the toxicity of hyperglycemia at all, but are really due to the continuing autoimmune disease, then this would be revolutionary. The present demand for strict blood sugar control could be abandoned; blood sugar control could be relaxed so as just to avoid ketoacidosis, maintain weight and energy, and avoid hyperosmolar coma, without courting the lethal hypoglycemia which now accounts for 4% to 6% of all type 1 diabetic deaths. Instead the focus could be on using immunosuppressive drugs to treat the continuing autoimmunity, so as to interrupt the processes which are actually causing the vascular and neurological complications. This is in fact what Adams recommends.

Thus I of course agree with your statement that insulin is not a treatment to correct diabetes but a treatment to manage the symptoms of the disease, at least if we take hyperglycemia as its presenting and most prominent symptom. But notice that you presented that comment as a correction of my statement that "the existing theory has been that the sole cause of the complications of diabetes is the HYPERGLYCEMIA which cannot be safely corrected by insulin... ." So what I said cannot be safely corrected by insulin was not, as you suggest, 'diabetes,' but simply 'hyperglycemia.' Type 1 diabetes cannot be corrected by anything other than an islet or whole pancreas transplant, and both of these work poorly and are rarely applied in patients who do not already require immunosuppression for other reasons.

I think the difference between our views on the wider issue may again just be a miscommunication. I don't dispute that new scientific data do not typically justify a total and radical transformation of existing theories without many years of further testing. I also agree that when science does take data seriously, it should and does apply the scientific method to their evaluation.
But my suggestion is that science often simply refuses to turn its mind in any serious way to data or experimental results which seem too challenging. This is why, for example, Adams' study -- even though if true it would have revolutionary significance -- is still being neglected. I agree that it needs further study and independent confirmation before it could require serious changes in the existing paradigms of diabetology, but my point is rather that it is not attracting those studies, and that there is no good scientific reason why it is not. Instead, the existing explanatory paradigm appears to be protecting itself from this unsettling result by simply refusing to enquire further into it and carrying on as though it had never been published in a peer-reviewed journal. This attitude stands outside of scientific method, as I argued earlier, since it is rather a refusal to engage with the recalcitrant findings.

Refractory findings will always generate conceptual pressure, but whether that pressure is fed against the existing explanatory paradigms or fed away from them does a lot to change how significant the data are allowed to become (cf. W. V. Quine, 'The Web of Belief'). This step is a conceptual decision about what to do with the data, and again is not directly part of the use of the scientific method. To take a few examples from 18th-century science: some chemists identified phlogiston with what we would today recognize as carbon, while others thought of it as weightless. In both cases, experimental results which we might want to say proved that no phlogiston was present could not count against the theory -- not because the scientific method was not being properly applied -- but because the theory structure was designed to feed these conceptual pressures away from challenging the explanatory paradigm. That is: yes, phlogiston is there, but it's in the carbon; or yes, it is there, but it has no weight or negative weight. Similarly, the supposed electrical fluid was thought to be pumped out of the ground by the electrical accumulators of the era, so this theoretical presupposition warped the significance of any data offered in tracing how electricity interacted with other phenomena. The same trick could be pulled with heat fluid by assuming that it was invisible and had either no weight or negative weight, with the result that correct application of the scientific method within that theoretical structure could not disprove the theory. Theories in this sense act almost like biological entities, designed to preserve their own life against unhealthy environmental influences, such as refractory data in the case of scientific paradigms.

Nothing of what I am saying about the often less than rational self-preservation of scientific paradigms is at all new or revolutionary. Thomas Kuhn's 'The Structure of Scientific Revolutions' (1962) already stated these views, and Imre Lakatos in the 1970s offered more empirically developed versions of Kuhn's theory of scientific change.
-
The existentialist philosopher Jean-Paul Sartre once commented that "Hell is other people," and although the formulation is so cryptic as to be unclear, there is something in it which reflects on what you say, in that the lives of other people often seem unacceptable to us, even if they count objectively as good or happy. The 17th-century philosopher Gottfried Leibniz said that if a magician offered to transform him into the Emperor of China he would refuse, since having to be someone else would be no different from being dead, which I guess highlights how alien the existence of other people can seem to us. A final, less academic reflection on this comes from a movie in which Stacy Keach plays a washed-up prize fighter who is trying to cope with his new life without a career. Talking with his manager in an all-night diner, they both look at an ancient, withered Chinese man who works there as the dishwasher, and the manager says, "How would you like to wake up every morning and find that you are him?"

How do people find a way not only to accept, but even apparently to enjoy, their lives when their existence seems so alien and unsatisfactory to us that we would prefer to be dead rather than to be them? If a healthy, vigorous 18-year-old were to suffer some sort of miraculous progeria and wake up one morning to find himself suddenly 90 years old, he would rather be dead and would perhaps commit suicide; but if he lives to be 90, he will want to live to be 94. Did you ever have the uneasy feeling that if you could suddenly settle your consciousness in that of another person, participate in their consciousness, and experience directly their own peculiar atmosphere of thought, it would horrify you with its alienness, even if that person were a close friend or relative? Fiction usually conceives of one's own death as seeing one's own corpse, but perhaps seeing and studying other lives, whose alienness is the negation of our own type of life, is a more dramatic intimation of our own death. That experience might also instruct us in the reasons for suicide: if my life were to become, through injury, disease, guilt, shame, or emotional deprivation, so alien to my sense of my appropriate identity that continuing to live it seemed more like living as a dead version of myself, then suicide might be as imperative for me as resisting the invasion of my life by another person trying to rape or enslave me.
-
I agree with the above. A normal but sometimes alarming appearance in the visual field is muscae volitantes, or 'floaters,' the little greyish pieces that can be seen floating about, sometimes when a person over 40 is especially tired. These are just dead cells detaching from the inside of the eye and have no serious pathological implications. They have to be distinguished from very dark, somewhat larger blobs that squirt across the visual field, which are the result of bursts in the microvasculature of the retina, a symptom of retinopathy. These can also appear as clouds of ink which then seem to coagulate into a dark mesh of cells. This is a serious symptom and requires immediate medical attention. Dull white objects, about the size of normal floaters, can also appear; these are fat cells escaping from deeper layers of the retina, and are also a sign of retinopathy.
-
How Do You Define a Genius and What is One to You?
Marat replied to lamp's topic in Psychiatry and Psychology
Math geniuses are sometimes known for their ability to span different subfields within mathematics and to synthesize new approaches across them. An example who comes to mind is the late Serge Lang, professor of mathematics at Yale.
-
As your post suggests, the ultimate reason why most religions posit the existence of a soul is to contrive an accounting device that guarantees that moral evil will receive its just desert, even though that retribution cannot be demonstrated to occur in our present life. At its most basic, religion is all about pretending that moral value can be supported on real things which validate it, rather than resting on its own merits. Concepts like God, Heaven, Hell, the Soul, miraculous communications of specially-privileged stories or rule books, etc., are all just ontological struts propping up our social values for treating other people empathetically -- as though this kind of decency had insufficient value in itself without these imaginary supports. The soul, Heaven, and the afterlife are thus the religious version of 'your check is in the mail' -- i.e., the accounts will balance in some imaginary neverland. Immanuel Kant's philosophy of religion even says as much: we have to posit the existence of an afterlife of rewards and punishments, since otherwise we would be faced with the incongruity of an infinitely powerful and good God allowing bad people to get away with murder. Thus Heaven and Hell are the ontological projection of a moral value system which is also ontologically guaranteed to be effective.

Judaism seems to be one of the few religions not to show much if any interest in the idea of a soul or the afterlife. This is probably because it was already well-developed before it came into contact with Greek philosophy, which had a fully-articulated notion of a mind or soul potentially existing apart from the human body (psyche/nous). When this Greek notion was overlaid on the early, essentially still Jewish message of Christianity as the doctrine was spread by the intellectually Greek St. Paul through the Eastern Mediterranean, Christianity became wedded to concerns with the soul and its fate after death. Ironically, this preoccupation was essential to Ancient Egyptian religion, where the heart of the deceased was weighed in the balance against a feather to determine its fate in the afterlife. But the Jews who became the first Christians would have found the idea of a soul being judged after death to be disgustingly Egyptian.
-
I think the inverse Mill principle would lead to a society which is too oppressive, since very few things can be proven to be absolutely not harmful. Even water has an LD50, but we would not want to make it illegal along with cocaine. As soon as I decide to drive, I measurably increase every other driver's risk of death, but my driving in a non-negligent way, or my driving without actually causing any damage, is not a tort, though it might cost other people money by fractionally increasing their insurance rates. A problem on point arose in the Canadian Supreme Court case of R. v. Butler (1992), where it was admitted that the scientific evidence was unclear as to whether pornography actually caused any social harm; yet rather than concluding that pornography should therefore be permitted, the court decided that it could be prohibited, since the legislature's arational hysteria was owed a margin of deference. Under an inverse Mill principle we would need some metric to determine how objectively real or how extensive the damage would have to be before an activity would be forbidden, and since this would always be a contested political judgment of arbitrary line-drawing, the law might lack a sure grounding in principles more widely accepted than political judgment.