
Posted

Oh my, my, ……. 24 hours kaput and no responses to my above post.

 

Geeeeeze, were my questions too “tuff” for all the resident “experts”?

 

Surely, even if my questions are too “tuff” for them to even offer their opinion, .... they should, at the very least, cite 1 or 2 or 3 URL references that explain, define and/or provide scientifically legitimate “answers”.

 

Iffen no one dares to offer their learned opinion or cited references on/to my above questions ……. then maybe I will hafta repost them as “claims of fact” instead of “questions”.

Posted


Claims of fact require some evidence; whereas, things you believe do not.

 

No responses? There are 75 replies, including one of mine, in which I claim humans are robots.

Posted


 

HA, "dreaming" robots, huh?

 

And just what reason would a robot have for "dreaming" pornographic dreams?

 

And "live action" pornographic dreams, ...... ta boot.

 

OOPS, I shudda said .... "wet dreams", ...... so everyone would know what "pornographic" means.

Posted

 


Survival of the species.

Posted
...no one dares cares to offer their learned opinion

 

 

Fixed that for you.

 

Shur nuff, you just confirmed a literal fact: persons who are completely devoid of any learned knowledge, ideas or intelligent thoughts about the mammalian brain's ability to generate "information content" transmissions within the brain itself, best described as a false "live action video" and referred to as a "dream" or "dreaming", will, more often than not, claim that they don't "care to offer their learned opinion" on the biological fact that "dreaming" is an extremely important function for ensuring the "survival of the fittest members of the species".

 

Humans are not the only mammal, or primate, or hominoid, ....... that is capable of "dreaming".

 

Therefore, ...... "dreaming" is in fact, ........ an inherited survival trait.

Posted

 


 

Dreaming may be important to any number of animals, and may even be an inherent survival trait; but how does a robot dream and how is it "an inherited survival trait" for robots?

Posted

 


https://www.bloomberg.com/news/articles/2016-11-17/google-deepmind-gives-computer-dreams-to-improve-learning

 

Not an inherited trait, but building in a dream-state analog apparently increases the speed at which neural network AIs learn by a significant amount.
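The DeepMind work referred to above uses offline replay of stored experiences, sometimes described as a "dream-state analog". A rough sketch of that idea, as a generic experience-replay buffer (all names and numbers here are illustrative, not DeepMind's actual code):

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience-replay buffer: the agent 'replays' stored
    transitions offline, loosely analogous to a dream state."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest memories fall out

    def store(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Random sampling breaks the correlation between consecutive
        # experiences, which is what speeds up and stabilises learning.
        return random.sample(list(self.buffer),
                             min(batch_size, len(self.buffer)))

# The agent acts online, then "dreams" by replaying stored transitions.
buffer = ReplayBuffer()
for step in range(100):
    buffer.store(state=step, action=step % 4, reward=1.0, next_state=step + 1)

batch = buffer.sample(32)   # offline replay pass fed to the learner
```

In a full system the sampled batch would be used to update a neural network; the point here is only the mechanism of storing and replaying experience.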

Posted

 


 

Sorry, dimreepr, .... I made no claim about robots, human or otherwise. You need to direct your question to the party making the "robot" claim.

Posted

 


Moderator Note

 

I will ask, rhetorically, what you were doing asking questions in somebody else's thread.

 

Stop hijacking the discussion.

 

  • 2 weeks later...
Posted

I can program my computer to do recursive thinking. In fact, if I compile a piece of code, my computer is going to think about the optimal way to compile it so it has to think less while executing the program afterwards.

 

My thermostat prefers a specific room temperature, so by your definition, it is self-aware.

 

If you define fear for me, I can program my computer to experience it. I can e.g. set it up to run regular virus scans to avoid a hard drive crash and take automatic backups to reduce the consequences, if you want to define "fear" as "taking precautions to avoid something undesirable" or as "reducing risk".

I have just returned to this thread after endless late nights at school. However, can you program a computer to think about the past, reflect on it and consider the best path of action for the future? Can you program a computer to have a sense of being on a historical timeline as a unique individual? If you can, then the computer ceases being a computer and you have created a humanoid robot. You chose the definition of fear in quite a clever way. What about fear as a rational or irrational response to past experience which creates the feeling of fear? If the choice is logical or illogical, which one will the robot choose?

 

 

 

We put phrases within phrases because we hold thoughts in memory; thus we have language and a sense of a past self. We are aware that we are thinking about what someone else is thinking; on this awareness we build a sense of self and the ability to be deceptive or to act on shared belief. Recursion gives us the ability to mentally travel in time. It is fundamental to the evolution of technology: Human beings are the only animals that have been observed to use a tool to make a tool. Looking at human language and thought, psychologist Corballis finds recursion within recursion.

http://www.americanscientist.org/issues/page2/the-uniqueness-of-human-recursive-thinking
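Corballis's "phrases within phrases" maps directly onto recursion in the programming sense: a procedure that calls itself. A toy sketch (the sentences are invented for illustration):

```python
def embed(depth):
    """Centre-embed one clause inside another, `depth` times:
    recursion in code mirroring recursion in language."""
    if depth == 0:
        return "the rat ate the malt"          # base case: a simple clause
    # recursive case: wrap the smaller sentence inside a larger one
    return f"the cat that chased [{embed(depth - 1)}] ran away"

print(embed(2))
# the cat that chased [the cat that chased [the rat ate the malt] ran away] ran away
```

Each level of nesting requires holding the outer clause in memory while the inner one is produced, which is exactly the memory demand Corballis describes.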

 

 

I don't understand your assertion; in my experience, other living creatures have some of the extraneous stuff you talk about. Humans might have more of it, but it is a matter of degree and not of type. I would expect a self-aware robot to have it as well...

I agree that there might be a spectrum of extraneous "stuff", but we seem to experience it as part of a Facebook-style timeline that runs through our brains. Animals do learn from past mistakes, but I don't recall, in my limited reading, any animal that considered its place in its unique timeline in the same way as humans. IIRC, a chimpanzee which was asked, by sign language, what it was thinking answered in sign language: "food".

Posted


 

 

I haven't seen a convincing argument against this. But I have seen a few ideas of how it might be possible. (For example, the recursive thought article.)

Posted


Computers don't make "logical decisions" in the way that people mean when they say that. They follow a mathematical logic that runs through the steps they should take according to how they have been programmed or, in the case of something like a neural network, how they have been trained to perform.

 

The ultimate decisions that they come to may or may not be considered logical decisions, and computers very, very frequently make very illogical decisions if the people who program them aren't careful in properly setting things up.

 

You could absolutely program a simulated fear-response analog into a computer, and there is no reason it would decide that response wasn't logical and deviate from it. That's not how computers in general, or AIs specifically, work.

Posted


I am not being flippant, but do computers fear the darkness if they are left for years in the darkness of a computer room? Do they have a history of "self"? It is exactly as you mentioned: they are logical machines. My question involved the possibility that logical machines are better at survival than illogical, emotional machines, if you could program them to behave that way.

Posted


This is not strictly the case. Part of the problem, I think, is that most computers that we encounter are only tasked with relatively straightforward jobs that have (comparatively) simple, algorithmically determinable optimal solutions. And so we think of computers as being "logical."

 

However, just because computers are fast does not mean that they have infinite processing speed or power, and the solution space for some problems is far too large to be searched in a brute-force manner even by the fastest of our machines. You need to figure out how to approximate optimal solutions using shortcuts and simplifications that are easier and faster to compute. Start introducing imperfect information, and this process becomes even more complicated. Many of our most complicated problems are managed by neural nets that are taught how to solve the problem through a series of training sets and trial-and-error learning: they are exposed to the problem and a predetermined optimal solution, and then try to figure out how to rewire their processing to best match the desired output. Given enough such inputs, you'll have a network that is optimized to give very good, sometimes even human-level or better, responses to problems that are not easily solved in a conventional manner, but this method is still imperfect.
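The "trained rather than explicitly programmed" idea can be shown with a toy perceptron learning the AND function by trial-and-error weight updates. This is a deliberately minimal sketch; the learning rate and epoch count are arbitrary choices, not a claim about any system discussed in the thread:

```python
# Training examples: inputs and the predetermined "optimal solution".
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # connection weights, adjusted by training
b = 0.0          # bias term

def predict(x1, x2):
    """The network's current response to an input."""
    return 1 if x1 * w[0] + x2 * w[1] + b > 0 else 0

# Trial-and-error learning: compare output to the desired output and
# nudge the weights toward a better match.
for epoch in range(20):
    for (x1, x2), target in data:
        err = target - predict(x1, x2)
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

results = [predict(x1, x2) for (x1, x2), _ in data]   # [0, 0, 0, 1]
```

Nothing here is "logical reasoning": the behaviour emerges from repeated exposure to examples, and with a bad or biased training set (as in the lipstick and tank anecdotes below) the learned rule can be confidently wrong.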

 

One network that was trained to recognize Not Safe For Work images wound up keying in on red lips as a primary indicator because of disproportionate representation in the training set of red lipstick in the NSFW images. A network meant to recognize the presence of tanks in an image wound up detecting whether the sky in an image was overcast instead because all of the images in each example set were taken on only two days with the tanks being present all on one day and absent from the photographs taken on the other.

 

When you start getting into really complex problem-solving, there is always, always a trade-off between speed and accuracy, because the exact answer to some problems simply can't be computed within the lifetime of the universe, and the time constraint may be very small for decision making even in cases where the problem isn't as large as that.

 

Emotions are, in most cases, shortcuts to workable solutions. They are messy and frequently induce behaviors that are not actually helpful. But they are always much faster than trying to reason through a given problem, and speed is often critical to survival. Getting it right 7/10 times but getting there in time to implement the correct behavior is better than getting the right answer 10/10 times but always too late to implement it, especially in scenarios where getting it wrong results in death.
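That speed/accuracy trade-off can be sketched in a few lines. The option names and payoffs below are invented purely for illustration:

```python
options = [
    {"name": "freeze", "safe": False, "payoff": 1},
    {"name": "flee",   "safe": True,  "payoff": 5},
    {"name": "hide",   "safe": True,  "payoff": 9},
]

def deliberate(opts):
    """Slow path: score every option before acting, return the best."""
    return max(opts, key=lambda o: o["payoff"])

def fear_shortcut(opts):
    """Fast path: grab the first option that avoids the threat,
    even if a better one exists further down the list."""
    for o in opts:
        if o["safe"]:
            return o
    return opts[0]   # nothing safe: act anyway

best = deliberate(options)       # "hide": optimal, but examines all options
quick = fear_shortcut(options)   # "flee": merely good enough, found sooner
```

The heuristic returns a workable answer after inspecting fewer options; scale the option list up and the difference between "right but late" and "good enough in time" becomes the whole story.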

 

It's helpful to realize that emotions are not irrational or imperfections but fairly straightforward shortcuts to making decisions in situations where either speed is paramount, or where game theory means that everyone making optimal, rational decisions is liable to result in an equilibrium state that is less beneficial to you than may be possible if things are mixed up with a disruptive behavior, or where the threat of disruptive behavior is reasonably expected and can therefore be used as leverage in negotiations.

 

There's a meme of humans being illogical/irrational and computers being superior in logic and unemotional, but that ignores the constraints of the problems that need to be dealt with and the reasons why emotions exist in the first place, which are things that computers attempting to operate in a similar environment cannot completely ignore.

Posted


Thank you for that clear and compelling insight into the use of computers and neural networks. The insight about the use of emotion is superb and worth some thought. So it is possible that emotions give humans the edge during conflict or survival-type situations. Great post, Delta.

Posted (edited)


The answer is "yes" to all of those. I defined fear quite randomly, but in a way that was not vague. If you care to define "rational or irrational response" and "feeling of fear" less vaguely, I'll explain how to program a computer with that behaviour.

A "humanoid robot" is a robot with roughly the shape of a human and has nothing to do with this discussion ;).

 


That is even easier, just put in a timer and a light sensor and program it to exhibit "fear" (which you will have to define less vaguely) when the light sensor output is low for a certain amount of time.

They already have a history of self: they know their name and IP-address and have log files.
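The timer-plus-light-sensor suggestion above might look like this in practice. The thresholds and the operational definition of "fear" are arbitrary placeholders, not a description of any real system:

```python
class FearfulMachine:
    """Sketch of the suggestion above: 'fear of the dark' defined
    operationally as raising a flag after prolonged low light."""

    DARK_THRESHOLD = 0.2        # hypothetical sensor reading, 0.0-1.0
    DARK_SECONDS_LIMIT = 3600   # how long in the dark before "fear" sets in

    def __init__(self):
        self.dark_seconds = 0   # the timer

    def tick(self, light_level, seconds=1):
        """Feed in one sensor reading covering `seconds` of elapsed time."""
        if light_level < self.DARK_THRESHOLD:
            self.dark_seconds += seconds
        else:
            self.dark_seconds = 0   # light resets the timer
        return self.is_afraid()

    def is_afraid(self):
        return self.dark_seconds >= self.DARK_SECONDS_LIMIT

machine = FearfulMachine()
machine.tick(0.1, seconds=3600)   # an hour of darkness: "afraid"
machine.tick(0.9)                 # lights back on: timer resets
```

Whether this counts as fear is, of course, exactly the definitional question being argued in the thread; the code only shows that the operational version is trivial to build.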


All of this applies equally to humans (where "the people who program them" are, e.g., educators or parents).

Edited by Bender
Posted

This is not a literary discussion but a question that arises in my mind about the extra "stuff" that makes us humans.

 

Having just read "Hard Times" by Charles Dickens http://www.gutenberg.org/ebooks/786

one of the major characters is a man of reason called Gradgrind, who believes that facts and figures are solely what is required to turn out a well-rounded individual who has reasoning capacities.

 

One of his quotes is as follows:

 

 

 

 

Towards the end of this proto-Socialist exposition, Gradgrind gets a tough reminder that humans cannot live on facts alone from his own daughter, whose life has been ruined by parental insistence on pure reason:

 

 

 

Don't species have other priorities, for example, survival, rather than wasting time with all this other extraneous human stuff?

 

The question is, why did Natural Selection (and Genetic Drift) not cause humans just to function in a reasoned self interest in the same way as a robot?

 

Why do we have feelings of love, of empathy, of compassion of being upset, being happy, being blue?

 


 

There is a theory in psychology called egoism that postulates that everything we do IS done for the sole sake of selfishness; that is, to preserve ourselves or gain something. This theory makes no exceptions! It says that even altruism or philanthropy is done out of selfishness, just so the person doing the alleged good deeds can feel better about themselves.

 

There is a theory in Ethics that posits that a person SHOULD do things only based on what is best for the individual.

 

I am not sure I agree with the egoist theory in psychology, but I do lean toward it being true. As far as my favorite philosophical school goes, I subscribe to the Epicureans.

 

Robots? Some psychiatrists and materialist neurologists will tell you our minds are, at base, no more than a program: a computer-type loop consisting of input and then various outputs that depend on how we have been programmed by past experiences. Our neurotransmitters, brain chemicals, take the place and do the job that electrons do in computing.

 

ALL computers, and their programs, no matter how sophisticated or complex you think they are, work the same way, by manipulating electrons. That's it!! Period. They just filter these electrons through logic gates. Well, many neurologists say we work the same way.

 

So, after a certain point and enough experience has worn very clear neural pathways in our brain, free will is pretty much a myth. We do what we are programmed to do.

 

This all reminds me of a book I read once in undergrad psych by a dude who subscribed to all that. The title was I Am a Strange Loop. LOL. I have to say he made a very compelling case.

  • 4 weeks later...
Posted


 

Can you programme an ECG to be compassionate or genuinely empathetic to an individual that it perceives as "the same?"

Can you programme a supercomputer to show random acts of kindness?

Could you ask a computer to rate the freshness of the air or the gentleness of the rain or to perceive the related nature of all living things?

I would argue that you are expert enough to create a simulacrum of a living being, but that expertise would not be able to show the computer its own place in a "timeline of life", or let it consider its own thought processes, or feel a sense of volition which seems to come from an unseen and subconscious source.

 

I am happy to go along with a definition of fear that is a perception of danger and automatic responses to avoid that danger. Think of a beautiful member of the opposite sex that you want to speak to and consider the fear of rejection if you approach that person. That is what I would consider to be an irrational fear.

 

My problem is that humans appear weak to me. They cannot shut off feelings of compassion and empathy that would slow down their progress. However, psychopaths seem to behave more like robots in my opinion, with pure self interest as one of their primary motivations. It puzzled me that there were not more sociopaths and psychopaths in society as a result of Natural Selection.

 

 

Posted


 

 

Jimmy, what makes you think we are not robots?

Posted

Moontanman,

 

Hope everything is well with you, friend. My only argument for humans not being robots comes down to this strange thing called consciousness, which seems more developed and sensitive in us than in other species, and the ability to think recursively:

 

The Uniqueness of Human Recursive Thinking

The ability to think about thinking may be the critical attribute that distinguishes us from all other species

Michael Corballis


A dog chasing his tail has nothing on the human race. Recursion—a process that calls itself, or calls a similar process—may be a fundamental aspect of what it means to be human. In the human mind, recursion is actually much more complex than the notion of returning to the same place over and over. We put phrases within phrases because we hold thoughts in memory; thus we have language and a sense of a past self. We are aware that we are thinking about what someone else is thinking; on this awareness we build a sense of self and the ability to be deceptive or to act on shared belief. Recursion gives us the ability to mentally travel in time. It is fundamental to the evolution of technology: Human beings are the only animals that have been observed to use a tool to make a tool. Looking at human language and thought, psychologist Corballis finds recursion within recursion. (bold emphasis is mine - Jimmy)

http://www.americanscientist.org/issues/page2/the-uniqueness-of-human-recursive-thinking

 

We also, IMO, have a limited sense of free will - I know you are aware of this argument.

Posted (edited)


 

 

And why shouldn't any other computing device be able to "think recursively"?

Edited by Strange
Posted (edited)

If you properly define things like "compassionate" or "genuinely empathic", I can program a computer to show this behaviour. Same for the "freshness" of air, but this would obviously need some sensors to detect e.g. CO2 levels.

 

I can program a computer to (randomly) avoid interaction if there is a risk of not getting a response (fear of rejection).

 

About psychopaths: what makes you think they are more like robots? They simply have different weighting functions for making decisions. It's not like the rest of us don't weigh our actions.

 

 

On second thought, one could argue that psychopaths are less like robots. After all, most robots are programmed to take the well being of (other) humans into account.

Edited by Bender
