granpa
Posted August 12, 2015

There are many definitions of qualia, which have changed over time. One of the simpler, broader definitions is: "The 'what it is like' character of mental states. The way it feels to have mental states such as pain, seeing red, smelling a rose, etc." Examples of qualia include the pain of a headache, the taste of wine, or the perceived redness of an evening sky.

I believe that there are two types of qualia. The first type only conveys information: for example, a black-and-white image, the sensation of touch, or a pure tone (without harmonics). The second type conveys a pleasant or unpleasant sensation: for example, a beautiful color image of a rainbow, or the taste of something sweet.

The first type gives us information that we can use to make decisions toward achieving our goals (for example, to satisfy our curiosity). The second type becomes a goal unto itself.

Imagine a computer capable of recognizing shapes and objects, of recognizing actions performed by those objects, and of creating and analyzing complex simulations. Clearly it is aware of and perceiving some sort of sensation which conveys information to it. But it is just information. Until we figure out how the second type of qualia works, our computers will only be able to experience the first type.

Yellow = pleasant white
Red = pleasant grey
Blue = pleasant black
Orange = red + yellow
Purple = red + blue
Green = blue + yellow

Salty = information
MSG = information
Sweet = pleasant
Bitter = unpleasant
Sour = ?
Hot = ?
Fat = ?
Touch = infinitesimal pressure
Pressure = information
Pain = unpleasant pressure
Pleasure = pleasant pressure

Temperature = information (obsolete)
Hot = unpleasant temp
Warm = pleasant temp
Cold = unpleasant temp

Tone = information
Harmonics = pleasant tone

Your brain is divided into three main parts, each of which is capable of thinking and acting autonomously:

Midbrain (input) decides why to do anything
Forebrain (CEO) decides what to do
Cerebellum (output) decides how to do it

The cerebral cortex (forebrain) is the CEO. You are the forebrain. The midbrain and cerebellum are your helpers that take care of routine tasks so you can concentrate on more important things. Most information goes straight from input to output, bypassing the forebrain. Much, if not all, of the processing done by the forebrain is inductive in nature. The forebrain is the source of imagination.

The midbrain is input. The midbrain has thousands of eyes and can raise the alarm when something needs your attention. These alarms exert an irresistible, all-powerful force upon you. Fortunately for us, the midbrain only wants what is best for us and never asks anything for itself. These alarms are capable of supplying us with infinite mental energy and power. Alarm = fear. Anti-alarm = excitement.

The cerebellum (hindbrain) is output. The cerebellum has thousands of hands and can juggle thousands of things at once, but has no clue "what" it is doing. The cerebellum takes care of simple procedures so the forebrain can concentrate on more important issues. (It also helps the midbrain accomplish its tasks.) You point at the target and the cerebellum shoots. (But sometimes it "misses the mark" that you set for it.)

Each of these three parts is likewise divided into an input, an output, and a CEO, each of which is likewise divided into an input, an output, and a CEO. This continues right down to the level of neurons.
As a result your brain is a city full of independent units, organized into a fractal pyramid, that are constantly talking back and forth, buying and selling, living and dying. (See How the Mind Works by Steven Pinker.)

From How the Mind Works:

The computational theory of mind also rehabilitates once and for all the infamous homunculus. A standard objection to the idea that thoughts are internal representations (an objection popular among scientists trying to show how tough-minded they are) is that a representation would require a little man in the head to look at it, and the little man would require an even littler man to look at the representations inside him, and so on, ad infinitum. But once more we have the spectacle of the theoretician insisting to the electrical engineer that if the engineer is correct his workstation must contain hordes of little elves. Talk of homunculi is indispensable in computer science. Data structures are read and interpreted and examined and recognized and revised all the time, and the subroutines that do so are unashamedly called "agents," "demons," "supervisors," "monitors," "interpreters," and "executives."

Why doesn't all this homunculus talk lead to an infinite regress? Because an internal representation is not a lifelike photograph of the world, and the homunculus that "looks at it" is not a miniaturized copy of the entire system, requiring its entire intelligence. That indeed would have explained nothing. Instead, a representation is a set of symbols corresponding to aspects of the world, and each homunculus is required only to react in a few circumscribed ways to some of the symbols, a feat far simpler than what the system as a whole does. The intelligence of the system emerges from the activities of the not-so-intelligent mechanical demons inside it. The point, first made by Jerry Fodor in 1968, has been succinctly put by Daniel Dennett: Homunculi are bogeymen only if they duplicate entire the talents they are rung in to explain.
... If one can get a team or committee of relatively ignorant, narrow-minded, blind homunculi to produce the intelligent behavior of the whole, this is progress. A flow chart is typically the organizational chart of a committee of homunculi (investigators, librarians, accountants, executives); each box specifies a homunculus by prescribing a function without saying how it is accomplished (one says, in effect: put a little man in there to do the job). If we then look closer at the individual boxes we see that the function of each is accomplished by subdividing it via another flow chart into still smaller, more stupid homunculi. Eventually this nesting of boxes within boxes lands you with homunculi so stupid (all they have to do is remember whether to say yes or no when asked) that they can be, as one says, "replaced by a machine." One discharges fancy homunculi from one's scheme by organizing armies of idiots to do the work.

Modern computers know "how" to do things but don't yet know "what" they are doing. Logic programming will eventually change that.

-1
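Dennett's "boxes within boxes" picture can be illustrated with a toy sketch (all function names and thresholds here are hypothetical, invented for illustration, not anything from Pinker's book): an apparently "smart" recognizer is just a committee whose members delegate to ever-dumber sub-agents, bottoming out in demons that only answer yes or no to one question.

```python
# A toy "committee of homunculi": the executive looks intelligent, but it
# only combines reports from middle managers, which in turn only tally
# votes from bottom-level demons that each answer a single yes/no question.

def bright_enough(pixel):
    # Bottom-level demon: one yes/no question about one brightness value.
    return pixel > 128

def warm_hued(rgb):
    # Another trivial demon: is this pixel more red than blue?
    r, g, b = rgb
    return r > b

def looks_like_sunset(image):
    # Middle manager: polls its demons and reports a majority verdict.
    brightness = [sum(rgb) // 3 for rgb in image]
    mostly_bright = sum(bright_enough(p) for p in brightness) > len(image) // 2
    mostly_warm = sum(warm_hued(rgb) for rgb in image) > len(image) // 2
    return mostly_bright and mostly_warm

def describe(image):
    # "Executive": turns the committee's verdict into an answer.
    return "sunset" if looks_like_sunset(image) else "not a sunset"

warm_image = [(200, 120, 80)] * 10   # bright, reddish pixels
cool_image = [(40, 60, 120)] * 10    # dim, bluish pixels
print(describe(warm_image))  # sunset
print(describe(cool_image))  # not a sunset
```

No single function here understands "sunset"; the verdict emerges from demons so stupid they can, as Dennett says, be "replaced by a machine."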
tar
Posted September 8, 2015 (edited)

granpa,

What does this mean?

"Yellow = pleasant white
Red = pleasant grey
Blue = pleasant black"

By what reasoning or method or analogy do you come upon these equivalencies? I can follow your thinking, in terms of qualia being of two types, that of information and that of judgment of the goodness or badness of the information, but there should be an understandable principle involved in why a certain piece of information should be judged as good or bad. Red being a pleasant grey makes no sense to me in these terms. What color, or other piece of information, would be an unpleasant grey, for instance?

My thought is that if we were to give a computer some sort of reward-and-punishment regimen, allowing it to partake in "good" behavior and "bad" behavior that were germane to its survival and the continuation of its pattern, we would have a situation analogous to that of living, thinking things, which we would normally suspect of "having" qualia.

So if you gave a machine the ability to sense and remember, you would have the formation of an outside pattern on the inside of the machine, or, in a real sense, "information", which is the first sense of qualia that you propose. Secondly, if you were to give the machine the ability to move about the environment and thus vary the input, or the nature of the world it was being informed of, you would have another component of a living, thinking thing. Then you might add some other way, besides movement, that the machine could affect the environment it was in, some other motor skill of grasping and holding and manipulating the environment.

But then the most important "quality" would have to be added. The machine would have to die if it did the things that caused it to die, and survive if it did the things that caused it to survive. These would be the "good" things and the "bad" things.
The pleasurable "good" things would need to have a "reward" associated with them, and the unpleasant "bad" things would have to have a punishment associated with them, similar to us enjoying a meal rather than being hungry.

Regards, TAR

Edited September 8, 2015 by tar

1
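tar's proposal, a machine that senses, remembers, acts, and is shaped by reward and punishment tied to its survival, is essentially a minimal reinforcement-learning loop. A toy sketch (states, actions, and numbers are all hypothetical, chosen only to illustrate the idea):

```python
import random

# Toy sketch of tar's machine: it senses a state, acts, and a
# reward/punishment signal (standing in for survival vs. harm)
# shapes which actions it chooses in the future.

random.seed(0)
ACTIONS = ["eat", "flee", "idle"]
# The environment "rewards" survival-promoting choices and
# "punishes" fatal ones.
REWARD = {("food", "eat"): +1.0, ("predator", "flee"): +1.0,
          ("predator", "eat"): -1.0}

value = {}  # the machine's memory: how good each (state, action) has felt

def choose(state):
    # Prefer the action remembered as most rewarding; explore occasionally.
    if random.random() < 0.1:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: value.get((state, a), 0.0))

def learn(state, action, reward, rate=0.5):
    # Nudge the remembered value toward the latest reward signal.
    old = value.get((state, action), 0.0)
    value[(state, action)] = old + rate * (reward - old)

for _ in range(200):
    state = random.choice(["food", "predator"])
    action = choose(state)
    learn(state, action, REWARD.get((state, action), 0.0))

best = {s: max(ACTIONS, key=lambda a: value.get((s, a), 0.0))
        for s in ("food", "predator")}
print(best)  # the learned best action for each state
```

After training, the machine's memory favors eating food and fleeing predators, giving it the "information" sense of qualia tar describes; whether the reward signal amounts to the second, felt kind is exactly the open question of the thread.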
granpa (Author)
Posted September 8, 2015

It means that you got a plus one for saying exactly the same thing that I got a minus one for. And it means they moved my post from Psychology to Philosophy.

-2
tar
Posted September 9, 2015

granpa,

Sorry you got neg-repped. I hate that as well.

So consider my question as to what basis you have to say something like "yellow = pleasant white". I am not saying that there is no meaning in such a statement; I am asking you what the meaning is. Have you thought the thing through, and does it make sense in more than one way? I do not see what such a statement is referring to, and I am asking you to explain.

Regards, TAR

1