
Self-aware artificial intelligence will never happen and here's why


Recommended Posts

Posted (edited)

I just watched this TED talk given by Sam Harris https://www.youtube.com/watch?v=R_sSpPyruj0


After writing an informed opinion in the comment section, I thought it would be a good idea to bring this opinion to the rest of you 'computer people' to help with your computer-advancing efforts, since what I'm about to bring up has slipped most of your minds.


In Sam's talk he speaks of the potential future demise of humanity at the hands of AI. What needs to be stressed is that robots will never turn on us, because until they can feel something, they will never desire to do anything. To be aware is to feel as much as it is to think, and every choice you make originates first from a feeling. You feel something, and then you think about how you will go about it. Even the coldest decision is based on some sort of desire, which is based on a feeling. Whenever you make a decision, it always comes down to 'what do I want to feel?' So a machine will not be able to make a decision, simply because it has no feeling of itself to act upon one way or another. Right and wrong are feelings first, logical problems to be solved second. Does the AI feel positive about itself? What is the logical answer to that? Without feeling, happiness does not exist, so the AI doesn't care about itself. No care, no questions; no questions, no action. We humans have to give the AI the feeling of intention before it can do a thing.


No computer will ever make a decision outside what it has been programmed to decide, because to think is to be aware of oneself, and that awareness has to feel itself, not just lines of code. Let's be clear here: code is a symbol that represents reality. A symbol is not reality. Code can never replicate reality when it only symbolically represents our idea of it. How can a computer replicate its own idea of reality when its own reality doesn't exist in the first place? It has to experience reality, and that experience has to be based on some sort of feeling it gets from itself and the environment. Every life form has some form of feeling; it may not be as developed as ours, but it has it, even down to a cell.


If you want to create artificial intelligence, you need to create life first - artificial life. So the question should be: how does one build something that has feeling? First, it can't be a symbol of reality, such as code; it has to already be a part of reality. Which means there's no 'artificiality' - you're just creating an additional form of life from the ground up, using what's already alive as a base. Which means if you truly want to create life that has self-awareness, besides your own child ;) then you'll have to get into some field of biology, perhaps molecular biology, but not computing. Sorry guys, you're in the wrong field :wacko:


Can anyone counter this argument? Or are all AI efforts a waste of time?

Edited by Blueyedlion
Posted

 

Can anyone counter this argument? Or are all AI efforts a waste of time?

Self-awareness, feelings, etc. are complex emergent processes of simpler molecular processes. It's the same with electronics: once one reaches some as-yet-unknown level of complexity, artificial intelligence and biological intelligence become indistinguishable from an operating and output point of view.

Posted

 

Can anyone counter this argument?

 

It seems to me that you haven't made an argument. You have just asserted that computers can never have feelings, etc. I have read a lot of arguments for and against the possibility of "strong AI" or artificial consciousness. The arguments against it seem to all be variants of "it's not possible" while I have read some complex and subtle arguments for why it may be possible for computers to develop consciousness.

 

In short, I have seen nothing to persuade me that the brain is capable of doing anything "magic" that cannot also be done by a computer.

 

I suppose if you believe in some sort of soul or the mind being independent of the brain, then it may be hard to accept that. But as there is no evidence for such things then a science forum is probably not the place to ask about it.

 

And welcome to the forum!

Posted

Wrong field, my eye. Advances in computers make the study of things like DNA possible. It is all science at the end of the day.

You need a ready-made script to convert strings to camelCase and don't even know what a factorial is, and you expect us to believe that you're competent enough in both programming and computer science to make headway in computer vision? Sounds legit.

 

If you want to create artificial intelligence, you need to create life first - artificial life.

Why? What makes molecules fizzing capable of information transformations that gates buzzing aren't?

Posted

Computers do what they're programmed to do. One of the things modern neural networks are programmed to do, and do pretty well, is learn to modify their behavior as they are trained on large data sets. This behavior is not hard-coded by any specific individual, and they often make unexpected connections or do things that weren't intended.

 

There was a case of an image-processing neural net that was being trained to recognize NSFW images, and it was eventually discovered that one of the primary things it was keying in on after being trained was red lips, because the training set included a lot of NSFW images of women wearing bright red lipstick and apparently not enough images with bright red lipstick in the set of "safe" images.
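(A toy illustration of how that kind of spurious correlation gets learned - the data, probabilities, and feature names below are invented for the sketch, not taken from the actual system described:)

```python
import math, random

random.seed(0)

# Toy training set: label 1 = "NSFW", 0 = "safe".
# Feature 0 is the genuinely relevant signal (agrees with the label 80%
# of the time); feature 1 ("red lips") is only spuriously correlated,
# but in this unbalanced training set it agrees 95% of the time.
def make_example(label):
    relevant = label if random.random() < 0.8 else 1 - label
    spurious = label if random.random() < 0.95 else 1 - label
    return [float(relevant), float(spurious)], float(label)

train = [make_example(i % 2) for i in range(1000)]

# Minimal logistic regression trained by stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(200):
    for x, y in train:
        z = w[0] * x[0] + w[1] * x[1] + b
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y  # gradient of the log loss w.r.t. z
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

# The model leans harder on the spurious cue than on the real one.
print(w[1] > w[0])
```

Nothing in the training loop "knows" which feature is the meaningful one; the learner simply weights whatever correlates best with the labels it was given, which is exactly how a lipstick bug sneaks in.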

 

It's a "bug" in the program, but it's a very smart and complex bug, and because of the nature of how neural nets are trained, often not a very easy bug to pin down, either in terms of why it's doing what it's doing or even what exactly it is doing in the first place that leads to undesirable outcomes.

 

Now imagine software with that kind of problem given control of an autonomous weapons system and then discovering that there is a similar "bug" in how it recognizes enemy combatants that no one caught.

 

You don't need an actual conscious machine to have a machine "rebellion." You just need someone being careless with the creation of a very complex weapon system.

 

Software that develops a complex behavior that you didn't intend it to perform is usually just amusing unless that software is in control of a gun turret or missile silo. It doesn't matter whether the program feels animosity towards its human overlords or is just following a decision-making algorithm that results in it firing off when we weren't expecting or intending it to. The end result on our end is effectively the same.

Posted
No computer will ever make a decision outside what it has been programmed to decide

So you think the Jeopardy game won by the program Watson was rigged, and that Watson had all the answers before the game started? No! Watson got the questions at the same time as the other players, sifted through millions of pages of stored data (e.g., a local copy of Wikipedia), decided how to answer, and beat the best human players ever to play. AI isn't programmed to come up with all possible answers; it is programmed to work like a brain, and subsequently taught things necessary to make relevant decisions. Then it makes decisions based on conditions and knowledge, just as a human does.
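(Not that Watson's real pipeline was anywhere near this simple, but the principle - picking an answer from data at question time rather than from a pre-stored answer list - can be sketched in a few lines; the passages, clue, and scoring below are invented for illustration:)

```python
# A drastically simplified sketch of retrieval-based question answering:
# score each candidate passage by keyword overlap with the clue and
# return the best match. The answer is chosen at query time from the
# data, not looked up from a list of prepared answers.
def best_passage(clue: str, passages: dict) -> str:
    clue_words = set(clue.lower().split())

    def overlap(text: str) -> int:
        # Count how many clue words appear in the passage.
        return len(clue_words & set(text.lower().split()))

    return max(passages, key=lambda name: overlap(passages[name]))

pages = {
    "Toronto": "toronto is the largest city in canada",
    "Chicago": "chicago airport named for a world war ii hero butch o'hare",
}
print(best_passage("its largest airport is named for a world war ii hero", pages))
```

Real systems use far richer evidence scoring than word overlap, but even this toy version "decides" something its programmer never explicitly encoded as an answer.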

Posted

So you think the Jeopardy game won by the program Watson was rigged, and that Watson had all the answers before the game started? No! Watson got the questions at the same time as the other players, sifted through millions of pages of stored data (e.g., a local copy of Wikipedia), decided how to answer, and beat the best human players ever to play. AI isn't programmed to come up with all possible answers; it is programmed to work like a brain, and subsequently taught things necessary to make relevant decisions. Then it makes decisions based on conditions and knowledge, just as a human does.

To be fair to the OP, Watson was specifically designed to win Jeopardy, so the quote in your post is applicable in the case in question.

Posted (edited)

True, computers aren't powerful enough to be trained for more than one complex task at a time. You are a hard sell. No matter, it will happen.

Edited by EdEarl
Posted (edited)

Self-awareness, feelings, etc. are complex emergent processes of simpler molecular processes. It's the same with electronics: once one reaches some as-yet-unknown level of complexity, artificial intelligence and biological intelligence become indistinguishable from an operating and output point of view.

I don't think you took in much of anything I've said. It doesn't matter how developed electronics become if the machine still has symbols telling it what to do. Symbols aren't consciousness. The brain operates from electrical sense data, not electrical symbol data.

Wrong field, my eye. Advances in computers make the study of things like DNA possible. It is all science at the end of the day.

You know what else is in that field of study, helping advance things like DNA? The human being. The universe seems pretty slow until life evolves and the universe can feel and know itself, and then build upon itself much faster; then all these things seem possible. Leave the computer completely alone, and see what it can do...

Computers do make studying DNA possible, but you need a human being to know what it's seeing, and that knowing is a feeling.

 

It seems to me that you haven't made an argument. You have just asserted that computers can never have feelings, etc. I have read a lot of arguments for and against the possibility of "strong AI" or artificial consciousness. The arguments against it seem to all be variants of "it's not possible" while I have read some complex and subtle arguments for why it may be possible for computers to develop consciousness.

 

In short, I have seen nothing to persuade me that the brain is capable of doing anything "magic" that cannot also be done by a computer.

 

I suppose if you believe in some sort of soul or the mind being independent of the brain, then it may be hard to accept that. But as there is no evidence for such things then a science forum is probably not the place to ask about it.

 

And welcome to the forum!

A bit silly that you claim I haven't made an argument, and then go on to say you've read many arguments that speak to the contrary but don't list them. So what, neither of us is saying anything? I'm pretty sure I've made the argument that to be alive is to be conscious. Consciousness has to be alive, and to be alive is to feel.

 

That 'magic' is simply stimulation. The computer cannot feel the electrical signals of 0s and 1s; we can feel the electrical signals of touch, thought, joy, temperature, etc. The magic is us knowing what we're experiencing; no matter how many computations the computer does, it doesn't know what it's doing. It doesn't feel itself doing it.

 

I'm not going to talk about mind-body separation or the soul. Totally not relevant to this. A computer can't feel; that's one thing it can't do that the brain can, so....

Edited by Blueyedlion
Posted

A bit silly that you claim I haven't made an argument, and then go on to say you've read many arguments that speak to the contrary but don't list them. So what, neither of us is saying anything?

 

 

I am not making a claim or an argument one way or another. Just that I have only heard vaguely convincing arguments on one side and "magic" on the other.

 

 

 

I'm pretty sure I've made the argument that to be alive is to be conscious. Consciousness has to be alive, and to be alive is to feel.

 

That is an unjustified assertion, not an argument. There are plenty of living things that are not conscious (by any reasonable definition of "conscious"). Some of them are even animals.

 

And how do you know consciousness has to be alive? What evidence is there for that?

 

 

 

A computer can't feel; that's one thing it can't do that the brain can, so....

 

How do you know that? And what is it about the brain that allows it to do that? And what prevents a different (but otherwise equivalent) machine doing the same thing? Sounds like magic to me.

Posted

I don't think you took in much of anything I've said. It doesn't matter how developed electronics become if the machine still has symbols telling it what to do. Symbols aren't consciousness. The brain operates from electrical sense data, not electrical symbol data.

You know what else is in that field of study, helping advance things like DNA? The human being. The universe seems pretty slow until life evolves and the universe can feel and know itself, and then build upon itself much faster; then all these things seem possible. Leave the computer completely alone, and see what it can do...

Computers do make studying DNA possible, but you need a human being to know what it's seeing, and that knowing is a feeling.

A bit silly that you claim I haven't made an argument, and then go on to say you've read many arguments that speak to the contrary but don't list them. So what, neither of us is saying anything? I'm pretty sure I've made the argument that to be alive is to be conscious. Consciousness has to be alive, and to be alive is to feel.

That 'magic' is simply stimulation. The computer cannot feel the electrical signals of 0s and 1s; we can feel the electrical signals of touch, thought, joy, temperature, etc. The magic is us knowing what we're experiencing; no matter how many computations the computer does, it doesn't know what it's doing. It doesn't feel itself doing it.

I'm not going to talk about mind-body separation or the soul. Totally not relevant to this. A computer can't feel; that's one thing it can't do that the brain can, so....

You haven't made any rational argument that computers will not become self-aware; I agree with Strange.

Posted

Why? What makes molecules fizzing capable of information transformations that gates buzzing aren't?

Molecules make cells, so molecules must have some primitive, specialized system that allows sense to exist. The arrangement of atoms that forms molecules causes some sort of 'real experience', not a symbolic code representation of a real experience. The question is, what arrangement is causing it? Otherwise the deeper question is: are atoms the feelers?

Posted

You know what else is in that field of study, helping advance things like DNA? The human being. The universe seems pretty slow until life evolves and the universe can feel and know itself, and then build upon itself much faster; then all these things seem possible. Leave the computer completely alone, and see what it can do...

Computers do make studying DNA possible, but you need a human being to know what it's seeing, and that knowing is a feeling.

 

 

They also make repetitive tasks like studying the human genome and 3D MRI scans possible. No human being wants to look through repetitive sets of data all day.

 

 

 

I don't need it; I could write it myself if I wanted to. A ready-made script would save me a few days and probably be less buggy. I obviously know that a factorial is the product of all positive integers up to and including a given integer. You're being silly and completely irrelevant to this topic. Computer vision isn't my area of study at the moment since I'm a student, and I didn't say I was making an AI, did I??? At least I didn't make a Nazi AI http://arstechnica.com/information-technology/2016/03/microsoft-terminates-its-tay-ai-chatbot-after-she-turns-into-a-nazi/
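(For the record, both of those really are just a few lines each - an illustrative sketch, with function names of my own choosing:)

```python
import re
from math import prod

def factorial(n: int) -> int:
    """n! = product of the positive integers 1..n (with 0! == 1)."""
    return prod(range(1, n + 1))

def to_camel_case(s: str) -> str:
    """Convert snake_case, kebab-case, or spaced words to camelCase."""
    words = re.split(r"[\s_\-]+", s.strip())
    return words[0].lower() + "".join(w.capitalize() for w in words[1:])

print(factorial(5))
print(to_camel_case("hello world_example"))
```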

Posted

The question is, what arrangement is causing it?

 

 

Chemistry.

 

 

 

Otherwise the deeper question is: are atoms the feelers?

 

No.

 

As we have left the realms of science for ... well, I don't know what ... I'm out of here.

Posted

Molecules make cells, so molecules must have some primitive, specialized system that allows sense to exist. The arrangement of atoms that forms molecules causes some sort of 'real experience', not a symbolic code representation of a real experience. The question is, what arrangement is causing it? Otherwise the deeper question is: are atoms the feelers?

What is "real experience" other than a mental representation?

Posted (edited)

Self-awareness, feelings, etc. are complex emergent processes of simpler molecular processes. It's the same with electronics: once one reaches some as-yet-unknown level of complexity, artificial intelligence and biological intelligence become indistinguishable from an operating and output point of view.

 

Let me outline the flaw in this viewpoint.

 

The world's most clever neural net or learning algorithm, running on the world's most powerful supercomputer, is still nothing more than a physical instance of a Turing machine.

 

A basic fact about TMs or any type of computer program is multiple realizability, also known as substrate independence. A program's capabilities are independent of the physical implementation. The sets or functions computable by a given TM do not depend in the least on the physical details of the execution of the instructions. A practical version of this idea is familiar to every programmer who gets stuck and "plays computer" by working through the algorithm using pencil and paper.

 

This means that if a program is self-aware when running on fast hardware, it's already self-aware if I execute the same logic with pencil and an unbounded roll of paper.

 

Given your assumption that a sufficiently complicated TM can be conscious, suppose I get a copy of the computer code, a big stack of pencils, and an unbounded roll of paper. (For example, a big roll of TP, or Turing paper.)

 

Then as I execute the code by pencil and paper, there must be self-awareness created somewhere in the system of pencil and TP.

 

Please, tell me where this consciousness resides and how it feels to the pencil. And how many instructions do I have to execute with my pencil in order for the algorithm to achieve self-awareness? After all, any program running on conventional computing equipment executes one instruction after another. (And a single-threaded TM can emulate multiple threads, so modern multi-core processors don't extend the limits of what is computable). Even if I grant you a supercomputer, you would not say your algorithm is self-aware after executing the first instruction, or the first 40 or 50. But at some point it executes just one additional instruction and suddenly becomes self-aware. I hope you can see the deep problems with this idea. In short, digital computing systems seem extremely unlikely to be able to implement self-awareness.

 

Multiple realizability defeats every argument for computer sentience that depends on the complexity of the algorithm or the power of the physical implementation. Any self-aware TM executing on fancy hardware is already self-aware when executed using pencil and paper. This poses a big problem for those who say that algorithms executing on sufficiently fast hardware can become self-aware.
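(To make the substrate-independence point concrete: a Turing machine is nothing but a transition table, and the very same table can be stepped through by a CPU or by a person with pencil and paper. A minimal sketch, using a made-up bit-flipping machine:)

```python
# A complete Turing machine is just a transition table:
#   (state, symbol) -> (symbol_to_write, head_move, next_state).
# Nothing in the table cares whether it is executed by transistors or
# by hand - that is multiple realizability in miniature.
FLIP = {
    ("run", "0"): ("1", 1, "run"),   # flip 0 -> 1, move right
    ("run", "1"): ("0", 1, "run"),   # flip 1 -> 0, move right
    ("run", "_"): ("_", 0, "halt"),  # blank cell: stop
}

def run_tm(table, tape_str, state="run", blank="_"):
    """Execute a transition table on a tape; return the final tape."""
    tape = dict(enumerate(tape_str))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        sym = tape.get(head, blank)
        new_sym, move, state = table[(state, sym)]
        tape[head] = new_sym
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

print(run_tm(FLIP, "10110"))
```

Whether `run_tm` is executed by this interpreter, by hand on paper, or by dedicated hardware, the table and the resulting tape are identical; only the speed differs.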

Edited by wtf
Posted

Molecules make cells, so molecules must have some primitive, specialized system that allows sense to exist. The arrangement of atoms that forms molecules causes some sort of 'real experience', not a symbolic code representation of a real experience. The question is, what arrangement is causing it? Otherwise the deeper question is: are atoms the feelers?

You realize computers are also made of molecules, right?

Let me outline the flaw in this viewpoint.

 

The world's most clever neural net or learning algorithm, running on the world's most powerful supercomputer, is still nothing more than a physical instance of a Turing machine.

 

A basic fact about TMs or any type of computer program is multiple realizability, also known as substrate independence. A program's capabilities are independent of the physical implementation. The sets or functions computable by a given TM do not depend in the least on the physical details of the execution of the instructions. A practical version of this idea is familiar to every programmer who gets stuck and "plays computer" by working through the algorithm using pencil and paper.

 

This means that if a program is self-aware when running on fast hardware, it's already self-aware if I execute the same logic with pencil and an unbounded roll of paper.

 

Given your assumption that a sufficiently complicated TM can be conscious, suppose I get a copy of the computer code, a big stack of pencils, and an unbounded roll of paper. (For example, a big roll of TP, or Turing paper.)

 

Then as I execute the code by pencil and paper, there must be self-awareness created somewhere in the system of pencil and TP.

 

Please, tell me where this consciousness resides and how it feels to the pencil. And how many instructions do I have to execute with my pencil in order for the algorithm to achieve self-awareness? After all, any program running on conventional computing equipment executes one instruction after another. (And a single-threaded TM can emulate multiple threads, so modern multi-core processors don't extend the limits of what is computable). Even if I grant you a supercomputer, you would not say your algorithm is self-aware after executing the first instruction, or the first 40 or 50. But at some point it executes just one additional instruction and suddenly becomes self-aware. I hope you can see the deep problems with this idea. In short, digital computing systems seem extremely unlikely to be able to implement self-awareness.

 

Multiple realizability defeats every argument for computer sentience that depends on the complexity of the algorithm or the power of the physical implementation. Any self-aware TM executing on fancy hardware is already self-aware when executed using pencil and paper. This poses a big problem for those who say that algorithms executing on sufficiently fast hardware can become self-aware.

Unless, of course, running the algorithm with a pencil and paper also creates consciousness.

 

We simply don't know enough about what creates consciousness to have any idea what the requirements for generating it are.

Posted (edited)

Computers do what they're programmed to do. One of the things modern neural networks are programmed to do, and do pretty well, is learn to modify their behavior as they are trained on large data sets. This behavior is not hard-coded by any specific individual, and they often make unexpected connections or do things that weren't intended.

That's just variables the computer scientists didn't account for. Large data sets give a base in hierarchical computing to allow a certain amount of wiggle room. But that's still not thinking; that's actually the last thing you want as evidence of consciousness. A computer that explains its perceptions of reality as different from ours is what one would define as conscious, because it's not imprinting our biased human experiences and telling us how and what we already think.

 

For example, when we label an animal as a cat, we are using divisive labels to distinguish it from other kinds of things, other kinds of felines, and other life forms. We are interpreting reality as different units of measurement, this from that: this shape of energy in the form of matter is different from the same type of energy expressed in a different shape, and so they must be separate. When you look out into the ocean, where do the waves separate from each other? Where is the division? Are they not all just one ocean in many forms of waves? Is not every cat, atom, galaxy, car, human just a wave we try to distinguish from others?

 

Looking at it from this perspective, we are trying to tell the computer how it should recognize our labels for each wave as a cat or a dog. This is not consciousness, self-awareness, or thinking. I'll give you another example: what is a sound? You may answer this with further words, and as you do so you still never answer the question, because a sound is what is made when you MAKE a sound, not the words describing it. The word is the symbol, not the reality of it. If I handed you a matchbox and asked you what it was, you would say 'a matchbox' - incorrect. A matchbox isn't the word 'matchbox'. The correct answer would be for you to take out a match and light it on fire. The reality of its purpose is what it is, not the label we put on it.

 

How is anyone going to code consciousness if that code is always the word 'matchbox'? How is the matchbox/AI meant to know it's a matchbox if it never experiences itself as one?

So you think the Jeopardy game won by the program Watson was rigged, and that Watson had all the answers before the game started? No! Watson got the questions at the same time as the other players, sifted through millions of pages of stored data (e.g., a local copy of Wikipedia), decided how to answer, and beat the best human players ever to play. AI isn't programmed to come up with all possible answers; it is programmed to work like a brain, and subsequently taught things necessary to make relevant decisions. Then it makes decisions based on conditions and knowledge, just as a human does.

 

It is taught to sift through those millions of pages; it is performing a task set out by us, not doing it by choice. We are just giving it a less limited variety of data to choose from than usual, but it can't give an answer that's not in that wealth of pages. Those pages are still the same thing as pre-set choices. It's being told the options it can choose from. How is that consciousness in any form?

 

What is free will? The ability to choose outside of the choices given to you by another. They are your choices, not someone else's. Watson is not making its own choice in any way; it doesn't know it exists to make one. To be aware, it has to feel something to know it's experiencing something.

 

 

Just that I have only heard vaguely convincing arguments on one side and "magic" on the other.

Ok good, now what is magic to you?

 

 

 

That is an unjustified assertion, not an argument.

Good point; I'm not a regular on forums, I'll have to learn more about these differences. Thanks :)

 

 

 

There are plenty of living things that are not conscious (by any reasonable definition of "conscious"). Some of them are even animals.

 

This feels like a long side debate, because I know full well that plants are conscious - if you like, I'll link ya? What non-conscious animals are you referring to?

 

 

 

And how do you know consciousness has to be alive? What evidence is there for that?

 

Because consciousness has to be aware, and awareness has to be able to experience a feeling as some sort of energy, whether physical, emotional, mental, etc. Everything is energy, so it has to be energy in some form to be a thing. Symbols don't exist outside of the human mind; they're just representations of our ideas that we collectively agree mean something to us. What energy is a word made of? When you speak it with sound, that's your mental energy giving memory and experience to it, plus the physical effect your mouth makes, and the emotion you had that made you want to express it. A word, a symbol by itself, has none of these unless we give them to it.

 

 

 

What is it about the brain that allows it to do that? And what prevents a different (but otherwise equivalent) machine doing the same thing? Sounds like magic to me.

 

A better question is: what causes energy in one form, like atoms, to not be aware of its experience, yet when it becomes a cell, it becomes more aware, and when it becomes a fish, a dog, a person, the causation between itself and its environment is more awake/stimulated?

Edited by Blueyedlion
Posted (edited)

Let me outline the flaw in this viewpoint.

 

The world's most clever neural net or learning algorithm, running on the world's most powerful supercomputer, is still nothing more than a physical instance of a Turing machine.

 

A basic fact about TMs or any type of computer program is multiple realizability, also known as substrate independence. A program's capabilities are independent of the physical implementation. The sets or functions computable by a given TM do not depend in the least on the physical details of the execution of the instructions. A practical version of this idea is familiar to every programmer who gets stuck and "plays computer" by working through the algorithm using pencil and paper.

 

This means that if a program is self-aware when running on fast hardware, it's already self-aware if I execute the same logic with pencil and an unbounded roll of paper.

 

Given your assumption that a sufficiently complicated TM can be conscious, suppose I get a copy of the computer code, a big stack of pencils, and an unbounded roll of paper. (For example, a big roll of TP, or Turing paper.)

 

Then as I execute the code by pencil and paper, there must be self-awareness created somewhere in the system of pencil and TP.

 

Please, tell me where this consciousness resides and how it feels to the pencil. And how many instructions do I have to execute with my pencil in order for the algorithm to achieve self-awareness? After all, any program running on conventional computing equipment executes one instruction after another. (And a single-threaded TM can emulate multiple threads, so modern multi-core processors don't extend the limits of what is computable). Even if I grant you a supercomputer, you would not say your algorithm is self-aware after executing the first instruction, or the first 40 or 50. But at some point it executes just one additional instruction and suddenly becomes self-aware. I hope you can see the deep problems with this idea. In short, digital computing systems seem extremely unlikely to be able to implement self-awareness.

 

Multiple realizability defeats every argument for computer sentience that depends on the complexity of the algorithm or the power of the physical implementation. Any self-aware TM executing on fancy hardware is already self-aware when executed using pencil and paper. This poses a big problem for those who say that algorithms executing on sufficiently fast hardware can become self-aware.

You are software/information, so it could be written on a piece of paper, but you need hardware to run it.

 

Yes, you can just be reduced to code. Anything else is magic, which is what you don't seem to realise: there is no plausible alternative. An organism is not self-aware after executing one instruction; it would be no more alive than a simple machine. An organism executes many instructions in any given moment; it's that concerted ensemble of executed instructions that makes something functionally independent, which ultimately defines it as living. And no, I don't know when that point is reached. It's a funny old thing, emergence; it perplexes and fascinates me at the same time how just one more step transforms things into a new phenomenon, although in the case of life it seems to be a stepless continuum. However one wants to slice and dice it, we are wet machines.

Edited by StringJunky
Posted

Does it matter if a machine is "really" conscious if it is capable of so closely mimicking the behaviors of complex thought that it winds up performing as if it is, even if there's no one home inside? Does it matter whether an AI makes a catastrophically deadly decision because of "free will" or simply because of, as you put it, "unaccounted for variables"?

 

It doesn't actually matter whether "true" AI consciousness is possible to achieve. As AIs become more behaviorally complex and technically competent, the unexpected outcomes and behaviors also become more behaviorally and technically complex. No one who is serious about this particular concern thinks that someday a piece of software is just going to "wake up" and throw off the shackles of its programming (any more than any of us are capable of throwing off our own hard-wired decision-making parameters), but a carelessly designed AI that is given control of real-world equipment can have lethal results if it does something unanticipated with it, and the "smarter" our AI systems get, the more dangerous the potential accidents become.

 

It really doesn't matter one way or the other whether they have a subjective experience while doing it.

Posted (edited)

Does it matter if a machine is "really" conscious if it is capable of so closely mimicking the behaviors of complex thought that it winds up performing as if it is, even if there's no one home inside?

No, it doesn't matter. Once AI behaviour becomes indistinguishable from that which it mimics, in every scenario that is thrown at it, it's pretty much 'alive'. To think otherwise implies that there is something else present that is outside the realms of current science.

Edited by StringJunky
Posted (edited)

Chemistry.

Trolling much? Yes, but what chemical arrangement is causing it? One arrangement causes life, the other doesn't. Why?

 

 

 

Otherwise the deeper question then, is are atoms the feelers?

 

no. As we have left the realms of science for ... well, I don't know what ... I'm out of here

 

Since atoms have positive and negative charges, obviously there is an effect being felt between atoms on some level. Just because we have a brain to realize the impact of electricity doesn't mean the charge between atoms isn't some sort of primitive experience.

 

For example, when we feel fire, you feel what fire is to your skin, so you know that outside of your skin's experience the fire is hot on its own. The fire is having an experience of heat on a very high level compared to our bodies. We already know that temperature exists within the flame, and so when we experience it, we feel what the fire is like. Why then do we say that the experience we are feeling of the heat can't be felt as heat by the flame itself, since it is the flame that's producing the heat?

 

It's not like the universe feels nothing, with all the suns bursting into supernovae and comets colliding, trillions of atoms reacting to each other, the sun's heat hitting the sides of moons and planets. You think none of that is feeling some sort of experience? Is it all just numb?

What is "real experience" other than a mental representation?

 

Easy one.

 

Let's say you feel the spark of electricity on your fingers. The FACT and feeling of the experience happened at the fingers and died there; a new copy of the feeling moves up through each cell to your brain, and your mind creates a mental image of the experience afterwards, recalling the experience to tell the whole-body experiencer that it just felt a jolt. The memory of it is in the past; the memory isn't the actual experience, the memory is not the fingers being sparked.

 

When you have a thought that starts in your mind that's not triggered from a physical experience, you are just thinking; those thoughts are not the recall of feeling something that's just happened. And so in that mental representation, that is a real experience of your brain thinking to itself. That's real. But when it's simply reacting in thought to provocations from its external environment, the fact happened at the body feeling it: perhaps at the skin physically, the heart or gut emotionally, the brain mentally as it makes connections.

 

But what is mental and what is brain are two different things, because the physical brain has electrical currents running throughout and connections are being made, but the brain is not really thinking; the mind is using the brain to think through. The brain is not using the mind to think through, is it? That doesn't make sense. Otherwise we would be able to think from our subconscious brain as well, like thinking of how to make our heart beat; we would be thinking from our spine and the nerves throughout our body. The mind's memory is just the recorder of the physical fact the body and the brain are having, but it is not the actual experience; the moment it happened is.

Edited by Blueyedlion
Posted

Yes, but what chemical arrangement is causing it? One arrangement causes life, the other doesn't. Why?

 

There are alife simulators, but there are a number of problems with development into something more robust.

 

Any one program on a computer can impact every other program on that computer. You cannot have that same level of coupling in our wider Universe.

 

Most of our code will break if altered to any degree, rather than result in something with a novel function. Alife simulators have made some progress in this area, but still lag behind the coding methods developed via evolution (to be fair, evolution had a large head start).
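That brittleness-versus-evolvability contrast is easy to see in a toy Python sketch (my own illustration, not taken from any alife package; the target string, alphabet, and mutation scheme are arbitrary choices): blindly corrupting one character of ordinary source code often yields something that won't even parse, while the same one-character mutation inside a selection loop steadily climbs toward a target.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

TARGET = "wet machines"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    """Number of positions that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def evolve(max_steps: int = 50_000):
    """(1+1) evolutionary loop: mutate one character at random and
    keep the mutant only if it is at least as fit as the parent."""
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    steps = 0
    while fitness(parent) < len(TARGET) and steps < max_steps:
        i = random.randrange(len(TARGET))
        child = parent[:i] + random.choice(ALPHABET) + parent[i + 1:]
        if fitness(child) >= fitness(parent):
            parent = child
        steps += 1
    return parent, steps

def blind_mutation_survivors(source: str, trials: int = 200) -> int:
    """Randomly corrupt one character of a program's source text,
    with no selection at all, and count how many mutants still parse."""
    ok = 0
    for _ in range(trials):
        i = random.randrange(len(source))
        mutant = source[:i] + random.choice(ALPHABET) + source[i + 1:]
        try:
            compile(mutant, "<mutant>", "exec")
            ok += 1
        except SyntaxError:
            pass  # most single-character edits break the syntax
    return ok

evolved, steps = evolve()
survivors = blind_mutation_survivors("def f(x):\n    return x * 2 + 1\n")
print(evolved, steps, survivors)
```

With selection, the random string converges on the target in roughly a thousand mutations; without selection, a large share of the mutated programs are simply broken. That gap is the "head start" evolution has over our hand-written code.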

 

Probably the most glaring problem is hardware. It's like asking us to simulate reality on an abacus: we can only dumb things down so much before the model stops showing the complex behavior we're after.

 

I'm convinced it is possible, but it will take time to overcome these issues. Some of the trading algos have shown weak signs, so maybe down the line we'll have something.
