
Posted

We will probably walk backwards into it without seeing it happening. The nature of emergence is such that we can't predict the result beforehand and may miss it at the time it happens.

True, and, yes, we may not see it happen. I think sentience will require many modules, language, logic, interacting with space-time, vision, hearing, smell, etc., and each time one is added to the system, scientists will look for emergent functions. Thus, they may see it, if they think to look in the right way.

Posted

To return to the very beginning, if I may, I think the Weak AI Theorem is correct, as the Strong AI Theorem is valid, too. Thought is a mechanical activity of the brain, based on memory, bused by synapses and neurotransmitters, so the brain is the hardware, the memory is the software and the synapses are the circuitry. Seeing that thought is matter, machines are definitely able to be given a capacity for mimicking thought. The human brain, however, has a silent consciousness beyond thought, call it real intelligence, which can act upon thought, and which could never be replicated in a machine/computer, however sophisticated the engineering. When you look at your cat, K, and she looks at you, it is the Universe looking at Itself in silence: machines, however artificially intelligent, could never go beyond the limitations of thought.

Posted

 

Samuel Langley's experiments with airplanes

 

Comment in the New York Times one week before the Wright brothers' successful flight at Kitty Hawk:

"...We hope that Professor Langley will not put his substantial greatness as a scientist in further peril by continuing to waste his time and the money involved, in further airship experiments. Life is short, and he is capable of services to humanity incomparably greater than can be expected to result from trying to fly....For students and investigators of the Langley type there are more useful employments."

 

Source: New York Times, December 10, 1903, editorial page.

The Wright Brothers flew December 17, 1903.

Posted (edited)

Hi KipIngram

Sorry it took so long for me to get back, but during the week I am so occupied by my work...

About EPR: I do not quite understand what you are saying, and how it is a counterargument against mine: that EPR experiments show that there is no underlying, local mechanism in QM, so that QM does not leave room for being influenced by a mind, so to speak, under the threshold of QM.

But that definition doesn't fully capture what I'm talking about. Even if no "out of boundary" force causes me to act a certain way, if my action is pre-determined by my own internal state then I'm not really choosing that action. It was just inertia.


No, that is not true. What you are doing here is taking a mind, or soul for granted. But if you see that we are what the brain does, then this makes no sense. Something cannot be coerced by itself. You can be coerced by somebody else: by threatening you, or literally forcing you to do things. But that has nothing to do with determinism. Determinism is not the same as coercion. Determinism is only saying that from certain start conditions only one set of end conditions follow. But that is the course of things, not coercion. Laws of nature force nothing: they just describe how nature flows.

What my brain does is really a choice: a choice between possible consequences of several actions I can imagine to do in a certain situation. But such an 'evaluation machine' can be determined. Every chess computer is an evaluation machine in a very limited universe.
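The "evaluation machine" idea is easy to make concrete. Below is a toy minimax chooser in Python; the move labels and leaf scores are invented for illustration, but the point stands: given the same inputs, the machine evaluates several imagined continuations and always arrives at the same "choice".

```python
# A determined "evaluation machine": it selects among alternatives,
# yet the same position always yields the same move.
# The game tree and scores below are invented for illustration.

def minimax(node, maximizing=True):
    """Return the value of `node`: an int score (leaf) or a list of children."""
    if isinstance(node, int):          # leaf: a position's evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def choose(moves):
    """Pick the move whose subtree evaluates best for the mover."""
    # After our move the opponent moves, so their level minimizes.
    return max(moves, key=lambda m: minimax(moves[m], maximizing=False))

# Three imagined continuations from the current position.
moves = {
    "e4": [3, [5, -2]],   # the opponent steers toward the minimum
    "e3": [1, 4],
    "a5": [-8, 0],
}
best = choose(moves)
print(best)   # -> e4: same inputs, same "choice", every time
```

Nothing random enters anywhere, which is exactly the sense in which a chess computer is a determined evaluation machine in a very limited universe.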

The situation isn't black and white. I absolutely agree we are influenced by our experiences, memories, and so on. All of those things are a very important part of who we are. I just don't think that those things are the totality of what determines our actions. I think we are capable of "surprising the universe." Capable of injecting an "additional input" to the action determination.


Why do you think so? Can this feeling not be explained just as well by the immense complexity of the brain? And I think even a chess computer may surprise you by doing a fully unexpected move.

Yes, you're right. My leanings do precisely invoke a subentity; the reason being that I haven't figured out how to consider the original entity (a purely material body) capable of generating awareness. I feel that emergence theories do more or less the same thing - they posit awareness as an emergent property, but without offering any real explanation of how that happens. I'm positing a subentity, without offering any real explanation of how it exists. I am attempting to explain how the subentity and the physical entity could be connected, but not what the subentity is. So I guess to some extent it's just two flavors of the same maneuver.


Then you haven't read any book that really tries to explain consciousness as emergence. It is much more than just saying 'emergence', like others say 'soul' or 'God'. Really, read GEB and Dennett's 'Consciousness Explained'.

Start at birth. All of the things you mention shortly after the quote (beliefs, feelings, etc.) are just accumulated effects of external stimuli experienced starting at birth, in your model. So the adult's "choice" still isn't really a choice. Unless you invoke, at some point in the process, an independent external input from something outside the physical. Somewhere in the process there has to be an opportunity for a real, fully independent choice, for me to feel justified in making moral judgment.


Of course it is a choice! Evaluation of several possibilities, and choosing one, is a choice. This is exactly the evolutionary advantage of consciousness: not to react automatically to some chemical gradient or light source, but to have a picture of the environment and of one's own position in it, to recognise one's own interests and possibilities for action, evaluating the consequences and then choosing the best option. We see an increase in this capability in animals with increasing brain complexity. So why invoke something other than this (neural) complexity?

So you are looking at the wrong place for consciousness if you think it should follow from our laws of physics. You do not become a good chess player by analysing the physical structure of the chess computer that always beats you. Or another example: you do not understand evolution by studying elementary particles. Elementary particles do not evolve in the Darwinian sense. Yet Darwinian evolution exists. Evolution is also an emergent property of complex material structures, called life.

No, that doesn't help me much. My core issue here is that all of the transistors in the computer are still just transistors. Each one still just has a charge distribution in it, and still just has two potential differences (Vgs and Vds) as inputs. They just have no "global" interconnectivity to fuel some mysterious emergence. If you can't look at one transistor and recognize a vehicle for awareness, having 100, or 1000, or 10^10^10 doesn't help.


You are doing it again... If you use the word 'just' like this, everybody knows that you are leaving something out, namely the most important part. You are just a portion of chemicals: how can you be conscious? If awareness is somehow fundamental to the universe, why is it that we only see awareness in higher animals? It must have something to do with complexity! If we find out what, do you think somebody will still say 'oh, but this only works because matter is conscious', or 'and so we have an antenna for the soul-entity, but we still do not know what this soul-entity is'?

In Newton's days one could wonder who started the planets moving, or how gravity works (angels pushing, taking care that they push exactly according to the inverse square law...).

 

I just observe my awareness, and it looks like there are two ways I can explain it: 1) it "just is" - i.e., it has to do with something new that appears nowhere in our current theories, or 2) it does somehow "just happen" when physical systems become sufficiently complex.

 

Yep. But in both cases you still have to explain why we have awareness and, e.g., a stone hasn't. And of course it is not just 'sufficiently complex', but complex in a certain way. If science understands under what conditions complex structures are aware, then we will have understood awareness.

 

MonDie: When I think of emergence in connection with computer applications, the first thing that pops to mind as an example is Conway's Game of Life. The whole thing is driven by those dead-simple little rules, and yet if you watch it go for a while you start to see interesting patterns and so forth. It would be really hard to predict those patterns from the rules themselves (other than by running examples and observing).
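Those dead-simple rules really do fit in a few lines. A minimal sketch in Python (the glider below is the standard five-cell example; its diagonal walk is nowhere hinted at in the rules themselves):

```python
from collections import Counter

# Conway's Game of Life: a live cell survives with 2 or 3 live
# neighbours; a dead cell with exactly 3 live neighbours is born.

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    # Count how many live neighbours each cell has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The glider: one of the "interesting patterns" that would be very
# hard to predict from the rules other than by running them.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# After 4 generations the same shape reappears, shifted by (1, 1).
assert cells == {(x + 1, y + 1) for x, y in glider}
```

Running it confirms the point: the update rule mentions nothing about moving objects, yet a self-propagating pattern falls out of it.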

 

Exactly. Nothing in the simple rules of 'Life' suggests that such complex structures can arise from them: they have even built Turing machines with it. They even replicated 'Life' with 'Life'!

Edited by Eise
Posted (edited)

Then you haven't read any book that really tries to explain consciousness as emergence. It is much more than just saying 'emergence', like others say 'soul' or 'God'. Really, read GEB and Dennett's 'Consciousness Explained'.

 

 

Consciousness Explained didn't explain anything for me. If consciousness is emergent... then why did it emerge in me and not in you? Consciousness is more of a divergence than an emergence. Not saying you aren't sentient; maybe I will die and be you in my next life. When you think about reincarnation, you realize the idea of past and future has no meaning. A future life is as forgotten as a past life.

A future life and a past life are no different. Time is only measured by the amount of social progress and technology.

 

The underlying thing we don't understand is why chemicals and molecules transform into smells. Knowing that chemicals and molecules transform into smells, is not at all understanding why they transform into smells.

Edited by quickquestion
Posted

True, and, yes, we may not see it happen. I think sentience will require many modules, language, logic, interacting with space-time, vision, hearing, smell, etc., and each time one is added to the system, scientists will look for emergent functions. Thus, they may see it, if they think to look in the right way.

 

We can't even see sentience in animals. We'll miss it in machines as well until it sits down and has a long talk with us.

Posted

 

We can't even see sentience in animals. We'll miss it in machines as well until it sits down and has a long talk with us.

A machine has already given indication, through talking, that it has sentience.

This doesn't mean it actually has sentience.

Sentience in other humans cannot be proven. The only sentience you can prove is your own.

You can deduce that humans and animals have sentience. But you cannot induce that humans and animals have sentience.

Posted

Eise has asked: "Why do we have awareness and a stone hasn't?" Is it because we have the human brain and the stone hasn't? A stone cannot know what a stone is, no more than fire can know what fire is, but someone or something has to know, and it is only through the human brain that the Universe can consciously know itself. This is why the Universe, over aeons, has evolved the human brain - it isn't my brain, or your brain, or his brain, or her brain; it is the human brain that is far, far older than any of us. The consciousness of the brain is the only consciousness of the Universe and is one movement, and without the human brain, the Universe is without self-awareness. This doesn't make us gods - we are the horse, not the rider. Sorry to come on seeming so heavy; we can play with this and make it fun. Tell me I'm talking through my bottom.

Posted

We will probably walk backwards into it without seeing it happening. The nature of emergence is such that we can't predict the result beforehand and may miss it at the time it happens.

 

Indeed - if I'm totally wrong about this and AI can emerge from conventional computing, then we do need to be very, very careful. I for one don't want to create technology that then turns around and puts us down. The researchers in AI who are focused on this (how to ensure that we keep control of an AI that we create, even if it makes us look about as smart as bacteria), are definitely approaching it the right way.

To return to the very beginning, if I may, I think the Weak AI Theorem is correct, as the Strong AI Theorem is valid, too. Thought is a mechanical activity of the brain, based on memory, bused by synapses and neurotransmitters, so the brain is the hardware, the memory is the software and the synapses are the circuitry. Seeing that thought is matter, machines are definitely able to be given a capacity for mimicking thought. The human brain, however, has a silent consciousness beyond thought, call it real intelligence, which can act upon thought, and which could never be replicated in a machine/computer, however sophisticated the engineering. When you look at your cat, K, and she looks at you, it is the Universe looking at Itself in silence: machines, however artificially intelligent, could never go beyond the limitations of thought.

 

Goldglow, I may be wrong, but my reading of the Strong AI theorem is that it denies precisely what you just said. I think your first three sentences exactly capture Weak AI, but then the rest of your post proposes "more," and Strong AI says exactly that "there is no more."

Posted

 

Indeed - if I'm totally wrong about this and AI can emerge from conventional computing, then we do need to be very, very careful. I for one don't want to create technology that then turns around and puts us down. The researchers in AI who are focused on this (how to ensure that we keep control of an AI that we create, even if it makes us look about as smart as bacteria), are definitely approaching it the right way.

 

Goldglow, I may be wrong, but my reading of the Strong AI theorem is that it denies precisely what you just said. I think your first three sentences exactly capture Weak AI, but then the rest of your post proposes "more," and Strong AI says exactly that "there is no more."

True, but we must also think of the AI's feelings. If we create sentient AI that cannot feel, then we may create specimens which are perpetually bored, but with no way to report their own boredom.

Posted

Eise: I want to reply to your last post, but before I do I want to learn how to do the quoting right; so far I haven't figured out how to get those nicely boxed multiple quotes into a single reply. I'll be back with you shortly.

 

quickquestion: There is absolutely no reason for me to presume my awareness is the only one in the universe. It's just the only one I can feel. But I can look at myself in a mirror, and then look around and see many other entities similar to me, behaving as though they are driven by awareness as well. Yes, they could be automatons. But it seems much more reasonable to me to assume they are "others like me" than to assume I am the only aware entity around and everyone else is a robot.

 

Ah, your next post down has it right. No, I can't prove that other humans or animals are aware. It's just the simplest explanation for how I see them behave.

 

cladking: It's entirely clear to me that animals have sentience. Just as other humans behave as though they're driven by awareness, so do animals.

 

quickquestion: We don't have to think about anything other than our own well-being. I agree with you, that it would be best not to create sentient beings that are then miserable. But given StringJunkie's observation that we might create them without meaning to, it behooves us to protect ourselves. Do you think about the saber-tooth tiger's feelings when he's attacking you? Or do you just try to kill it? Self-preservation is the core driving principle of pretty much everything.

Posted

I once read a short sci-fi story: in the far distant future, all intelligent life-forms in the Universe had put their combined knowledge into a colossal super-computer. The first question was: "Is there a God?" "Yes," replied the computer, "now there is." I don't think there is any danger of technology becoming more intelligent than human beings; yes, computers can do many things far quicker than we can, but the limits of our knowledge will always be the limits of their knowledge, so they can never take over the world and become Gods. We can always pull the plug out.

Posted

I once read a short sci-fi story: in the far distant future, all intelligent life-forms in the Universe had put their combined knowledge into a colossal super-computer. The first question was: "Is there a God?" "Yes," replied the computer, "now there is." I don't think there is any danger of technology becoming more intelligent than human beings; yes, computers can do many things far quicker than we can, but the limits of our knowledge will always be the limits of their knowledge, so they can never take over the world and become Gods. We can always pull the plug out.

 

I think the whole point of the AI claims is that the AIs will become self-advancing, and at a rate we can't compete with. Yes, as long as they're confined in the system we have control over, what you just said is true (we can pull the plug, we can make sure the system isn't equipped with "weapons", etc.). But I think the concern that's voiced here and there is that the AI will wake up, race past us in intelligence, and "hack its way out."

Posted (edited)

Would that not require some form of biosynthesis which machines cannot undergo? Also, though I might be wrong, I don't think intelligence and knowledge are the same thing, so even though a computer had all the knowledge available, as in the story above, it would still not have intelligence. It would be possible, of course, for unscrupulous developers to programme a machine to run amok (though that would not be under the machine's own volition), and there is always the threat from malicious hackers, so I do respect the concerns of others much more qualified than me in these matters.

Edited by goldglow
Posted

Eise: I want to reply to your last post, but before I do I want to learn how to do the quoting right; so far I haven't figured out how to get those nicely boxed multiple quotes into a single reply. I'll be back with you shortly.

 

You can try the sandbox.

Posted

Sorry it took so long for me to get back, but during the week I am so occupied by my work...

 

No worries, Eise - I've been busy as well.

 

 

About EPR: I do not quite understand what you are saying, and how it is a counterargument against mine: that EPR experiments show that there is no underlying, local mechanism in QM, so that QM does not leave room for being influenced by a mind, so to speak, under the threshold of QM.

 

I think the point I was making wasn't specific to EPR - I was just noting that quantum theory's requirement that an ensemble of identical measurements show a particular statistical distribution of results says nothing about any one of the measurements - as long as it is one of the allowed results, quantum theory wouldn't cry foul. So if free will does manifest through quantum uncertainty, quantum theory limits the menu of available choices, but does not restrict them to one.
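That ensemble-versus-individual distinction can be illustrated with a toy simulation. A classical random-number generator merely stands in for quantum randomness here, and the probability value is invented:

```python
import random

# Quantum theory fixes the statistics of an ensemble of identical
# measurements, not the outcome of any single run. A classical RNG
# stands in for quantum randomness; p_up is an invented weight.
random.seed(0)                      # reproducible illustration
p_up = 0.36                         # "Born-rule" probability of "up"

runs = ["up" if random.random() < p_up else "down"
        for _ in range(100_000)]

freq = runs.count("up") / len(runs)
# The ensemble must reproduce the distribution...
assert abs(freq - p_up) < 0.01
# ...but each individual run was free to land on either allowed outcome.
assert set(runs) == {"up", "down"}
```

Any single run may come out "up" or "down" without complaint from the theory; only the aggregate frequencies are constrained.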

 

 

No, that is not true. What you are doing here is taking a mind, or soul for granted. But if you see that we are what the brain does, then this makes no sense. Something cannot be coerced by itself. You can be coerced by somebody else: by threatening you, or literally forcing you to do things. But that has nothing to do with determinism. Determinism is not the same as coercion. Determinism is only saying that from certain start conditions only one set of end conditions follow. But that is the course of things, not coercion. Laws of nature force nothing: they just describe how nature flows.

 

I think we're talking past each other here. You said "Determinism is only saying that from certain start conditions only one set of end conditions follow." I am saying that there is no choice in that situation. If that is how our brains work, then we in fact have no free will. On the other hand, determinism doesn't hold in that strong fashion in our universe - quantum theory tosses it aside. There are options. When a quantum outcome is required, perhaps the universe throws a die and makes a random choice. Or perhaps consciousness exists outside the current laws of physics and makes the choice. Or perhaps some of both, depending on whether the system in question is involved with conscious expression.

 

The only point I have here, really, is that if consciousness exists and has free will, then this is the only manner in which our physical theories would allow it to have free will. If consciousness is emergent, and the quantum outcomes are just random and "average out" at the macroscopic / conscious level, then in fact there is no free will. There just isn't any other "portal" for it in physical theory, other than via quantum uncertainty.

 

 

What my brain does is really a choice: a choice between possible consequences of several actions I can imagine to do in a certain situation. But such an 'evaluation machine' can be determined. Every chess computer is an evaluation machine in a very limited universe.

 

Chess computers don't "choose." They run an algorithm no different in principle than 2+2 and get an answer, and then they execute that answer. If consciousness is emergent and quantum uncertainty is irrelevant to its operation, then that's all we do too, and "free will" is an illusion. I'll say again that I don't have a solid defense against the claim that free will is an illusion - it could be that my awareness only thinks it's choosing its actions. That's why I said the free will thing was something of a digression. I think I have free will, but I'm not sure. I'm sure I feel my awareness, and that makes awareness the key thing to focus on for me.

 

 

Why do you think so? Can this feeling not be explained just as well by the immense complexity of the brain? And I think even a chess computer may surprise you by doing a fully unexpected move.

 

I think this because of the awareness aspect. Like I just said, I don't think free will is an illusion, but it could be. But my awareness exists, and my goal here is to try to understand its origins. If it turns out to be emergent, then I'll have to seriously consider that free will may be (probably is) an illusion. But if I'm proposing that consciousness is a fundamental entity in its own right, then I can say, "Well, it feels like I have free will, so until evidence to the contrary emerges I'll assume I do."

 

Emergent consciousness would be subject to serious restrictions, since it's tied ultimately to our laws of physics. Fundamental consciousness makes it open season - we'd have no reason to presume a priori that it didn't have free will (in my sense) as well as awareness.

 

 

Of course it is a choice! Evaluation of several possibilities, and choosing one, is a choice. This is exactly the evolutionary advantage of consciousness: not to react automatically to some chemical gradient or light source, but to have a picture of the environment and of one's own position in it, to recognise one's own interests and possibilities for action, evaluating the consequences and then choosing the best option. We see an increase in this capability in animals with increasing brain complexity. So why invoke something other than this (neural) complexity?

 

Well, we're defining choice differently here. You're describing an extremely complex system of differential equations, basically, and I say that all of the "choices" were bound up in the initial conditions. That's not the sort of choice I'm talking about - I'm talking about a new forcing function applied to the system at the time of the choice. And I'm saying the only spot in our theories where such an effect could enter in is via quantum uncertainty - everything else is, in fact, nailed down by the differential equations.

 

 

So you are looking at the wrong place for consciousness if you think it should follow from our laws of physics. You do not become a good chess player by analysing the physical structure of the chess computer that always beats you. Or another example: you do not understand evolution by studying elementary particles. Elementary particles do not evolve in the Darwinian sense. Yet Darwinian evolution exists. Evolution is also an emergent property of complex material structures, called life.

 

:) And as far as I can tell, that approach to life does not explain awareness. That's my quest. I have read GEB, but at the time I read it my mind was in other areas and I focused mostly on the math and so forth. I do have it off the shelf for a re-read, with the focus being emergence. I'm looking forward to it, though I'm also rather daunted by my memory of what a tedious read it was the first go round. Maybe my new angle of interest will make it a different experience this time.

 

 

You are doing it again... If you use the word 'just' like this, everybody knows that you are leaving something out, namely the most important part. You are just a portion of chemicals: how can you be conscious? If awareness is somehow fundamental to the universe, why is it that we only see awareness in higher animals? It must have something to do with complexity! If we find out what, do you think somebody will still say 'oh, but this only works because matter is conscious', or 'and so we have an antenna for the soul-entity, but we still do not know what this soul-entity is'?

 

I find that hard to argue with. There are these speculations that everything is conscious (rocks, etc.), and that we just don't "see it" because we are predisposed to associate consciousness only with certain observations. I've tried not to go that far - it seems like a stretch. Then there are less extreme theories like Hoffman's, which says that consciousness is fundamental, and that the whole of material reality is just our perception of our (conscious agents) interactions. But as I think about it, that's almost the same thing, just said in a tighter way. Because Hoffman would contend that that rock over there is a reflection of something in the overall structure of (variable complexity) conscious agents.

 

I'm trying not to go down Hoffman's rabbit hole too far, too fast, because you can more or less use it to explain anything. If we are just conscious agents, and the universe is our "interface" as he would call it, then any question we ask the universe will receive an answer (any observation will provide some result). Let's say we see how chemical reactions cause the paramecium to move up the food gradient. Well, sure - the paramecium conscious agent is doing something, so we have to see that happening somehow. It's like anything we observe (behaviors, laws, etc.) just has a reflection in the structure of conscious agents. And his proposal is that we judge the theory based on how well it can mirror our existing laws of physics. Let's say it plays out, and he's able to show the precise emergence of all of our "stuff." Quantum theory, the Standard Model, etc. etc. Then it will be the case that the only new thing his theory offers is a way to say "consciousness is fundamental." Everything we said about the physical world matches, but we get to add on this one other thing that we otherwise don't have a solid explanation for.

 

Well, fine. But that's a little bit like invoking instantaneous action at a distance to explain EPR correlations. You have this mysterious action, that's not detectable or usable in any other way, but it allows you to keep the notion that those quantum states have reality before they're measured. Or you can just accept that things aren't real until observed.

 

Hmmm. So that's a bit rambly and I'm not sure I made any good points. But I'm going on to the rest of your reply now.

 

 

In Newton's days one could wonder who started the planets moving, or how gravity works (angels pushing, taking care that they push exactly according to the inverse square law...).

 

Yes, I see your point. Once we have a theory it all makes sense. We just don't have that "Newton's theory of emergence" yet. I'm just not quite willing to say "it must be emergence." I need something more solid before I throw out all other possibilities. I crave a "how."

 

 

Yep. But in both cases you still have to explain why we have awareness and, e.g., a stone hasn't. And of course it is not just 'sufficiently complex', but complex in a certain way. If science understands under what conditions complex structures are aware, then we will have understood awareness.

 

Yes - bring it on. :) I think we've done a superb job of understanding under what conditions a complex system can make "feedback responses" to its environment. Although even a rock does that - you step on it and it gives a little; that's a "response." On the other hand, you step on a snake and it bites you. We decide to say the snake was "aware" of being stepped on and the rock was not, even though both produce a physical response. And we suspect we could build an "artificial snake" that if stepped on would whip one end around and drive nails into the thing stepping on it. But we wouldn't say that mechanism was aware.

 

Machines don't know what they're doing - they just do. Somehow you and I know what we're doing in a "different way." And that difference is the thing I want to understand.

 

Whew. That took a while. So look, I figured out how to insert your quotes in the right places, but it was laborious. Basically I quoted your entire reply every time, and then cut out the parts I didn't want from each one. Is there an easier way to do that?

 

You can try the sandbox.

 

Ah - thank you. I will explore it.

Posted

I think the point I was making wasn't specific to EPR - I was just noting that quantum theory's requirement that an ensemble of identical measurements show a particular statistical distribution of results says nothing about any one of the measurements - as long as it is one of the allowed results, quantum theory wouldn't cry foul. So if free will does manifest through quantum uncertainty, quantum theory limits the menu of available choices, but does not restrict them to one.

 

But my point is that EPR experiments show that there is no room for hidden local causes. QM has only randomness on offer, and my will is simply not just randomness.

 

You said "Determinism is only saying that from certain start conditions only one set of end conditions follow." I am saying that there is no choice in that situation. If that is how our brains work, then we in fact have no free will.

 

Of course not! You are mixing two different discourses: on one side there is the discourse of reasons, motivations, aims, etc. On the other side is the discourse of laws of nature, of energy, of conservation laws, of causes, etc. In the first discourse, talking about free will, or coercion, makes sense; in the second it does not. In the second, talking about determinism and randomness makes sense; in the first it doesn't. The problem of free will and determinism is a pseudo-problem that, under the correct understanding, just evaporates. There is a relevant way in which e.g. a chess computer has a choice: it can move a pawn from E2 to E4, but also to E3; it cannot move it to A5. This is true even if the choice is determined. The same with us: we are determined, but there is a relevant meaning of having a choice: if you are in McDonald's you cannot choose a coq d'orange, but you can choose between a Big Mac or a cheeseburger. That you are a determined system does not change the fact that you choose.
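The pawn example can be put in code: the menu of legal options is real even though the selection from it is determined. The move generator below ignores captures for simplicity, and the "preference" scores are invented for illustration:

```python
# A determined chooser still "has a choice" in the relevant sense:
# several legal options exist, and what happens next depends on which
# one the system's own evaluation selects.

def legal_pawn_moves(square):
    """White pawn advances from `square`, e.g. "e2" (captures ignored)."""
    file, rank = square[0], int(square[1])
    moves = [f"{file}{rank + 1}"]
    if rank == 2:                      # two-step advance from the start
        moves.append(f"{file}{rank + 2}")
    return moves

options = legal_pawn_moves("e2")
assert options == ["e3", "e4"]         # a real menu: e3 or e4...
assert "a5" not in options             # ...but never a5

# A fixed preference (an "evaluation") makes the selection determined,
# yet it is still a selection among genuinely available options.
preference = {"e4": 0.9, "e3": 0.6}    # invented scores
choice = max(options, key=preference.get)
assert choice == "e4"
```

The menu constrains what can be chosen; the determined evaluation settles which option is taken. Both facts hold at once.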

 

So there is a relevant meaning of free will in a determined world: what will happen next depends on your choice (as opposed to, e.g., what you do depending on the choice of somebody else).

 

Again, you must be aware of the difference between determinism and fatalism. Your choice might be determined, but what will happen depends on your choice. In fatalism, what happens, happens independently of your choices.

 

We are determined, and therefore free will can arise. In a world of randomness this would be impossible.

 

If it turns out to be emergent, then I'll have to seriously consider that free will may be (probably is) an illusion. But if I'm proposing that consciousness is a fundamental entity in its own right, then I can say, "Well, it feels like I have free will, so until evidence to the contrary emerges I'll assume I do."

 

There is some kind of illusion about free will: the sense that it has no causal prelude. But that my choices have impact is an empirical fact. The idea that we have some kind of absolute free will is a metaphysical idea that is not supported by any evidence. It was probably just a theological idea, introduced as a solution to the problem of theodicy.

 

Emergent consciousness would be subject to serious restrictions, since it's tied ultimately to our laws of physics. Fundamental consciousness makes it open season - we'd have no reason to presume a priori that it didn't have free will (in my sense) as well as awareness.

 

Tell me about these restrictions: give me some empirical evidence that you are restricted by the laws of physics (not the obvious kinds of course, e.g. that you cannot fly, or run faster than light).

 

I have read GEB, but at the time I read it my mind was in other areas and I focused mostly on the math and so forth. I do have it off the shelf for a re-read, with the focus being emergence. I'm looking forward to it, though I'm also rather daunted by my memory of what a tedious read it was the first go round. Maybe my new angle of interest will make it a different experience this time.

 

GEB is not tedious at all! It is fun! But it can be demanding if you really work through it. You could try this shortcut, but you should really read GEB. It definitely changed my view, and gave me a deeper insight into how consciousness can arise in a formal system.

 

Yes, I see your point. Once we have a theory it all makes sense. We just don't have that "Newton's theory of emergence" yet. I'm just not quite willing to say "it must be emergence." I need something more solid before I throw out all other possibilities. I crave a "how."

 

Just to say "It is emergence" definitely falls short. But the theories that cognitive science came up with can be subsumed under the label 'emergence'. The real problem for you is that you say "I cannot imagine how a complex mechanism like our brain can be conscious". Now that is not very solid. You step in one stroke from "I cannot imagine it" to "cognitive science is bankrupt". Understanding the brain might be a slightly more difficult problem than the movement of planets and other dead bodies.

 

Machines don't know what they're doing - they just do. Somehow you and I know what we're doing in a "different way." And that difference is the thing I want to understand.

 

No. We are machines. Very, very complicated electrical-chemical-biological machines.

 

"Yes, we have a soul, but it’s made of lots of tiny robots."

 

So look, I figured out how to insert your quotes in the right places, but it was laborious. Basically I quoted your entire reply every time, and then cut out the parts I didn't want from each one. Is there an easier way to do that?

 

That is the way I do it too. Another way is to use 'raw format': see what happens if you press the button just above the 'Bold' button. Sometimes my postings get completely messed up, and the only way I can save them is by using 'raw mode'. There you can use tags, as in much other, more primitive forum software, e.g. use the tag

 

 

But it is not very user friendly, mainly because it does not automatically wrap long lines.

Posted

I think we really do have free will, because we are not confined to action arising from instinct alone: using thought, we can decide what to do. Thought has replaced instinct in human beings, so we are free to do as we wish, without waiting for any animal impetus or stimulus. We still have instincts, of course, which in certain circumstances can take over and act faster than thought, but usually only under circumstances of extreme danger when there is no time to think. I have had personal experience of this when I was attacked by a cow. (Don't laugh too much - I was only 9!)

Posted

Tell me about these restrictions: give me some empirical evidence that you are restricted by the laws of physics (not the obvious kinds of course, e.g. that you cannot fly, or run faster than light).

 

:( We are still talking past each other. I think you are saying that an action taken by your body, even though it may be hugely influenced not only by immediate external stimuli but also by your own internal state as evolved through the past, is free will. That is not at all what I am talking about when I say that a fully deterministic system does not have the possibility of free will. Free will in my eyes is the possibility of taking an action that is absolutely unpredictable based on past observations. Granted, someone who knows us well might predict our actions in many, many situations. But if we are totally predictable, as a matter of principle, then there is no room for free will. Free will, by my definition, is the ability to inject new information into the universe.

 

Regarding uncertainty, randomness, and so on, if you have one opportunity to observe a quantum event, you see an outcome. How do you know that outcome is random? You call it random for two reasons (that are related, btw): 1) quantum theory states that it's random, and 2) you can't predict it. When you have the opportunity to watch identical ensembles of events, you see a statistical pattern emerge, and we can predict those patterns. But if any single one of those outcomes was not random (say, if I were hiding on the other side of uncertainty and decided to adjust that one outcome), you would never know, would you?

 

Now, if I sat over there and adjusted a large number of them, under an ensemble situation, you'd see a perturbation of the statistics. But you can't see that based on one event. What our experiments really empower us to say is that under controlled conditions in a laboratory, when we repeat a given observation a large number of times, 1) we can predict the statistical outcome, and 2) we don't otherwise observe anything other than randomness in the individual outcomes. In other words, the statistical pattern mentioned above is the maximum information we can extract from the system. If you think about it, that doesn't really prove that all such events, under every possible scenario, are completely random. It suits us to theorize that.
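The single-event-versus-ensemble argument above can be illustrated numerically. This is a hedged sketch, not a model of any real quantum experiment: the "tampered" source and its bias value are invented for illustration. One outcome from either source is just an allowed result, but an ensemble reveals the tampering as a shifted frequency.

```python
# Illustrative Monte Carlo: a single outcome of a "quantum coin" cannot
# be labelled random or chosen, but a tampered ensemble shows up as a
# perturbation of the statistics. (Bias of 0.6 is an arbitrary choice.)
import random

random.seed(1)  # fixed seed so the sketch is repeatable

def fair_flip():
    return random.random() < 0.5

def tampered_flip(bias=0.6):
    # Imagine something "on the other side of uncertainty" nudging outcomes.
    return random.random() < bias

# A single event: True or False either way -- no test tells the sources apart.
one_fair, one_tampered = fair_flip(), tampered_flip()
print(one_fair, one_tampered)  # each is just some allowed outcome

# An ensemble: the tampering becomes visible in the frequencies.
N = 100_000
fair_rate = sum(fair_flip() for _ in range(N)) / N
tampered_rate = sum(tampered_flip() for _ in range(N)) / N
print(round(fair_rate, 2), round(tampered_rate, 2))  # ~0.5 vs ~0.6
```

The design point: only the aggregate frequencies distinguish the two sources, which mirrors the claim that ensemble statistics are the maximum information the experiment extracts.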

 

The proposal that consciousness might exert free will into the physical world via this mechanism does not force us to presume that every quantum event is thus decided. That's sort of the same thing as saying that everything is conscious. The proposal only requires that some quantum events might be thus determined, and it's likely that those events are buried in living brains, a place where we don't typically set up ensemble experiments. Every quantum event is an opportunity, in theory, for a consciousness to inject information, but there might be no consciousness connected to that particular event, or the consciousness might simply not choose to use that opportunity. A result of the measurement is still required, and if nothing is choosing the outcome then it would be random. Sort of like a car rolling down an unpaved hillside - the wheels are going to turn in response to the shape of the soil, and that will feed back to the steering wheel, which we can't see (assume tinted windows). Obviously that's not random - I'm just trying to draw an analogy. But when the human in the car decides to grab the wheel and take charge, he can. Sort of a lame analogy, I guess.

I think we really do have free will, because we are not confined to action arising from instinct alone: using thought, we can decide what to do. Thought has replaced instinct in human beings, so we are free to do as we wish, without waiting for any animal impetus or stimulus. We still have instincts, of course, which in certain circumstances can take over and act faster than thought, but usually only under circumstances of extreme danger when there is no time to think. I have had personal experience of this when I was attacked by a cow. (Don't laugh too much - I was only 9!)

 

Yes, I was once cycling when a motorist executed a hard right turn in front of me (they'd passed me on the curved "turning lane" we'd gone through a few seconds before). This all happened very fast - one second they were driving past me, and the next I was looking at the side of their car about 3-4 feet in front of me. I was barely realizing I was about to fly through their window when suddenly I'd turned the bike inside their turn and was cycling along the crossroad, with them driving off in front of me. I have no idea how I did that - I was completely unconscious of taking any sort of action. But apparently I just wrenched the bike through a 90-degree turn. Blind instinct. I'd been cycling for quite a while and felt totally at home on a bike. My subconscious really looked out for me that day.

Posted (edited)

 

:( We are still talking past each other. I think you are saying that an action taken by your body, even though it may be hugely influenced not only by immediate external stimuli but also by your own internal state as evolved through the past, is free will. That is not at all what I am talking about when I say that a fully deterministic system does not have the possibility of free will. Free will in my eyes is the possibility of taking an action that is absolutely unpredictable based on past observations. Granted, someone who knows us well might predict our actions in many, many situations. But if we are totally predictable, as a matter of principle, then there is no room for free will. Free will, by my definition, is the ability to inject new information into the universe.

 

Well, that is a very unusual way of defining free will. What you call free will I would call creativity, and that is not the same as free will at all. I would define free will as follows:

 

A person is said to have free will if he is able to act according to his own motivations.

 

This definition covers the daily use of the concept of free will, and does not contradict determinism. (Just to add, one can make several amendments to this definition to make it more precise, but I think for most discussions this suffices.)

 

And also, for new things to happen, we just need new situations that did not occur before. That is also not in contradiction with determinism.

 

 

Regarding uncertainty, randomness, and so on, if you have one opportunity to observe a quantum event, you see an outcome. How do you know that outcome is random? You call it random for two reasons (that are related, btw): 1) quantum theory states that it's random, and 2) you can't predict it. When you have the opportunity to watch identical ensembles of events, you see a statistical pattern emerge, and we can predict those patterns. But if any single one of those outcomes was not random (say, if I were hiding on the other side of uncertainty and decided to adjust that one outcome), you would never know, would you?

 

Now, if I sat over there and adjusted a large number of them, under an ensemble situation, you'd see a perturbation of the statistics. But you can't see that based on one event. What our experiments really empower us to say is that under controlled conditions in a laboratory, when we repeat a given observation a large number of times, 1) we can predict the statistical outcome, and 2) we don't otherwise observe anything other than randomness in the individual outcomes. In other words, the statistical pattern mentioned above is the maximum information we can extract from the system. If you think about it, that doesn't really prove that all such events, under every possible scenario, are completely random. It suits us to theorize that.

 

I understand your reasoning, but I do not agree. In the first place because of the above: free will is not just doing something unpredictable. In the second place, EPR shows there are no local causes determining the outcome of a single quantum event. In the third place, it is not clear at all that the brain functions as a 'quantum-event amplifier'. And even if it turns out to be one, how does the brain know which events to amplify into new ideas or actions, and which not?

 

Further, I would like to mention one point again: you lay stronger constraints on an explanation of consciousness than you do for other phenomena. Or do you think something is still missing in e.g. the theory of electromagnetism? "Yes, I know, but what is really causing the charges?" We know what charges are, because we have a full-blown theory about them. If we have a theory that explains what kind of structures are conscious, we have explained consciousness. I think this is more or less the point that MonDie makes here.

Edited by Eise
Posted

My understanding of the Bell experiments was that it required an ensemble, from which you can extract correlations, to make a results-based statement. Isn't that why the story of the last part of the 20th century was to improve the statistical certainty of the quantum-theory-supporting outcomes? Can you describe for me a single-event Bell-type experiment where you would be able, via measurement, to draw a clear conclusion? Maybe it's possible and I just didn't understand - but I thought the actual results were statistical in nature.

 

We can use the word creativity if you wish - I'm not hung up over the nomenclature.

 

Actually, I don't think anything is missing from electromagnetic theory - we have quantum electrodynamics, and my understanding is that we still feel it's completely precise. On the other hand we still don't really know what charge is, do we? It's just something that "is." I don't mind a theory having "presumed things," but you presume at the beginning and then explain - presuming at the end without providing an explanation bothers me much more.

 

I think a more fundamental way of getting at the difference, though, is that electromagnetic theory explains the interactions of charged particles. Everything that the theory makes a statement about has something to do with charged particles, light, and so on - the "stuff of the theory." That's natural. Such predictions are entirely "in bounds" for that theory. But emergent consciousness theory proposes that something of an altogether different nature somehow arises from theories the foundations of which have nothing to do with that phenomenon. Consciousness is not "the stuff of physics" in any way. So that proposal is much more of a stretch and carries a higher burden of proof in my mind.

 

I'm going to say again that this is not my main point in this thread. I already described a possible scenario in which awareness could exist (for whatever reason) and completely believe it had free will when in fact it did not. That was the Many Worlds theory + awareness choosing which way to go at each fork in the road. I guess you could call that choice free will, but it would have no physical effect whatsoever - the same worlds would be there whether awareness chose to watch or not.

 

I believe I have free will because I sense it, but I actually don't believe that's as strong a statement as saying I believe I have awareness because I sense it. You may be right, and free will may be an illusion. But awareness is undeniable, because the very act of sensing it is itself awareness. It's a self-proving phenomenon as far as existence goes.

 

In my mind that makes awareness the thing to focus on. It's a problem worth solving. If it turns out it is emergent, and we prove that, we've learned something about emergence. If it turns out to be fundamental, and we prove that, we've learned something even more important. So I am not going to let emergence theory have an easy ride of it - I'm going to kick the tires as hard as I can until it either works or doesn't. Meanwhile I'm basing my current opinion (guess) on what I said earlier: it's easier for me to believe that something altogether new exists than to believe something new and completely different can pop out of an existing, almost fully-formed theory. I'll believe the latter if it's shown, but not before. Hand waving just isn't going to cut it for me.

Posted

 

 

cladking: It's entirely clear to me that animals have sentience. Just as other humans behave as though they're driven by awareness, so do animals.

 

 

 

Do you believe that your ability to detect self awareness in animals will make you more or less able to detect it in a machine if it arises?

 

What if this awareness has no more "intelligence" than a toad or a squirrel?

 

Will an aware machine simply respond to humans or will it initiate communication?

A machine has already given indication, through talking, that it has sentience.

This doesn't mean it actually has sentience.

Sentience in other humans cannot be proven. The only sentience you can prove is your own.

You can deduce that humans and animals have sentience. But you cannot induce that humans and animals have sentience.

 

It is quite apparent that other humans and animals have sentience.

 

It is going to be centuries until we have enough science to address the nature of life and how the brain operates. In the meantime it is perfectly reasonable to speculate on such things within the framework of what is known. It is my opinion that this speculation must begin with the acceptance of what is apparent as being axiomatic. We either do this or refuse to speculate at all. Since these are the important questions, and the reason science was invented to begin with, it seems only logical to make such assumptions and ponder the facts.

 

From this perspective things look very different.

Posted (edited)

 

... It is my opinion that this speculation must begin with the acceptance of what is apparent as being axiomatic.....

That's the way I look at it: the existence of emergence is self-evident, but an analytical explanation for complex phenomena, like those of a brain, is as yet beyond reach.

Edited by StringJunky
Posted

I think a lot of human behavior is emergent. We're made of atoms, and QED is probably the most applicable theory, and it only has a handful of very simple rules. But you get absolutely stupendous effects from it. I think most of the things we do (eat, sleep, walk, make sounds, etc.) represent good candidates for emergent behavior. Awareness is fundamentally different, though. No matter how complex the machine is, I can't see how it can "feel." I don't even see what that means. Of course, if you define "feel" as registering a sensation, then it's clear - it's a signal and a response from a complex system. I'm talking about internal feelings (emotional feelings). Pain, joy, hate, etc. It's in no way "self evident" to me that emergence can explain that.

 

So I'm keeping all the candidates in the game until one clearly wins.

Posted

 

Indeed - if I'm totally wrong about this and AI can emerge from conventional computing, then we do need to be very, very careful. I for one don't want to create technology that then turns around and puts us down. The researchers in AI who are focused on this (how to ensure that we keep control of an AI that we create, even if it makes us look about as smart as bacteria), are definitely approaching it the right way.

 

Goldglow, I may be wrong, but my reading of the Strong AI theorem is that it denies precisely what you just said. I think your first three sentences exactly capture Weak AI, but then the rest of your post proposes "more," and Strong AI says exactly that "there is no more.".........

.........to sentient thought, other than the operation of some sort of "hardware and software" in brains and computers. (Forgive my paraphrasing.) I think I see what you mean: thought is a mechanical activity of the brain (Germans think in German, Spaniards think in Spanish, etc.) - as the heart beats and the lungs breathe, so the brain thinks. It's a vital tool for survival. Further, thought arises from its database of memory (its programming, if you like) in response to some demand for a reply: the database being all the knowledge garnered throughout life and stored in the brain cells as memory. The database, then, is the only source for thought: in other words, there is no thinker, only thought. The thinker is the thought, constrained by the parameters of the memory/database. If this is correct, then the Strong AI theory is also correct and thought is thought is thought. So in reference to S.A.I. Theory, as you said: "case closed." What I was stumbling towards (the "more") was that, in the case of us humans, even though the brain can never be free of thought, it can be free from the hegemony of thought through the capacity of insight, which is something apart from thought - but perhaps that's another topic by itself. In light of this, could any programmed intelligence possibly go beyond its algorithms without this insight, which is proscribed by the S.A.I. Theory? If it did, would/could a computer ("Eugene Goostman" perhaps!) know it had passed the Turing Test?
