
Posted (edited)

What can AI not do? Give us examples of tasks that AI cannot do (in your opinion), no matter what effort is made or how advanced the technology becomes.

Dr. Yahya Al-Samawi

Computer Science Department

Al-Razi University

Link removed (rule 2.7)

 

Edited by Strange
Link removed
Posted (edited)

AI is limited by the laws of physics - it cannot do God-like things, if this is what you had in mind.

Maybe there are other limitations that we are not aware of (for example, maybe extreme intelligence is not stable in a single personality - maybe personalities spontaneously split. I don't think it is like that, but I don't know of any proof against it.)

Edit: typo

Edited by Danijel Gorupec
Posted

In theory, I think a device and/or program could be created to do anything a human can imagine doing within the laws of physics. That is basically what humans have designed them for. I think there are things it might be more cost-effective, or just generally preferable, to have a human or animal do, but that is a different conversation.

Posted
19 minutes ago, Ten oz said:

In theory, I think a device and/or program could be created to do anything a human can imagine doing within the laws of physics.

No that is false. Newtonian gravity, for example, is not computable. The jury is still out on whether quantum physics is computable. There are many easily human-conceivable problems that can never be computed by an algorithm. The Halting problem is one such, and there are many others. 

Posted
8 minutes ago, wtf said:

No that is false. Newtonian gravity, for example, is not computable. The jury is still out on whether quantum physics is computable. There are many easily human-conceivable problems that can never be computed by an algorithm. The Halting problem is one such, and there are many others. 

I posted "anything a human can imagine doing". The OP references "tasks". Gravity is not a task or something humans do. Likewise, the halting problem isn't something a person does.

Posted
4 minutes ago, Ten oz said:

I posted "anything a human can imagine doing". The OP references "tasks". Gravity is not a task or something humans do. Likewise, the halting problem isn't something a person does.

A human could attempt to solve the halting problem.  That is "doing" something.

18 minutes ago, wtf said:

Newtonian gravity, for example, is not computable.

Can you expand on what you mean by that? Is it a reference to the many-body problem?

Posted
7 minutes ago, Strange said:

A human could attempt to solve the halting problem.  That is "doing" something.

AI could attempt it as well. 

Posted (edited)
35 minutes ago, Ten oz said:

I posted "anything a human can imagine doing". The OP references "tasks". Gravity is not a task or something humans do. Likewise, the halting problem isn't something a person does.

People imagine solving the Halting problem all the time. If we can solve the Halting problem it shows we're not computations. Nobody has succeeded yet but it's an open question. Likewise I can imagine computing the digits of Chaitin's Omega. I just imagined it. But no computer can do it, because it amounts to solving the Halting problem.

There are strict and profound limits on what computations can do. Turing showed that in 1936.
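If it helps, the 1936 argument fits in a few lines of Python-style code. The names `halts` and `paradox` below are made up for illustration; `halts` is the hypothetical decider that the argument shows cannot exist.

```python
def halts(program, argument):
    # Hypothetical total decider: True iff program(argument) terminates.
    # Turing (1936) showed no such function can exist; this stub is here
    # only so the contradiction below can be written down as code.
    raise NotImplementedError("uncomputable")

def paradox(program):
    # Do the opposite of whatever the decider predicts about
    # running 'program' on its own source.
    if halts(program, program):
        while True:      # decider said "halts", so loop forever
            pass
    return               # decider said "loops", so halt immediately

# paradox(paradox) would halt if and only if it does not halt,
# so the assumed halts() cannot exist.
```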

21 minutes ago, Ten oz said:

AI could attempt it as well. 

Ah yes ... At the very least we KNOW a computation can't solve the Halting problem, and we don't know whether humans can. 

31 minutes ago, Strange said:

Can you expand on what you mean by that? Is it a reference to the many-body problem?

Yes. https://en.wikipedia.org/wiki/Stability_of_the_Solar_System. Found interesting thread here ... https://cs.stackexchange.com/questions/43181/is-the-unsolvability-of-the-n-body-problem-equivalent-to-the-halting-problem

Also note that in Newtonian gravity, as the distance between two point-masses goes to zero, their gravitational attraction goes to infinity. I believe that's related, but I don't have references at the moment.

The point about Newtonian gravity is that because of chaos (the accumulated effect of tiny rounding errors), we cannot in principle compute the evolution over time of even a perfectly deterministic system. That is, in Newtonian gravity the motion of every particle is a deterministic function of the position, mass, and momentum of every other particle in the universe; and in principle it can be calculated by God's computer. But it can NOT be computed by a Turing machine. This is a fact that's often missed in philosophical discussions. Determinism does not imply knowability or computability.

By God's computer I mean the universe itself; which arguably is NOT a Turing machine and perhaps not a computation even in an imaginative extension of its current technical definition as a TM. 
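To make the rounding-error point concrete, here is a toy sketch in Python (the logistic map standing in for a real N-body system; that choice of example is mine, not from the references above): two starting points differing by one part in 10^15 end up in completely different places after a few dozen iterations.

```python
# Logistic map x -> 4x(1-x), a standard toy chaotic system.
# Two starting points differing by roughly one double-precision ulp
# diverge after a few dozen steps, the same mechanism that defeats
# long-range N-body prediction.

def logistic(x, steps):
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a, b = 0.3, 0.3 + 1e-15
for n in (10, 30, 60):
    print(n, logistic(a, n), logistic(b, n))
# By n = 60 the two trajectories bear no resemblance to each other.
```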

Computations are limited in what they can do.

21 minutes ago, Ten oz said:

AI could attempt it as well. 

Jeez is this English 101 day on the forum?? LOL

Edited by wtf
Posted

AI is currently at the cradle stage.
If its advancement keeps going, there is no limit to its intelligence. Unless, of course, intelligence has an upper limit in itself, like c (the speed of light) has.

Is there an upper limit of intelligence?

Posted
4 minutes ago, wtf said:

Ah yes ... At the very least we KNOW a computation can't solve the Halting problem, and we don't know whether humans can. 

Sure, but we don't know that a human can either. 

10 minutes ago, wtf said:

People imagine solving the Halting problem all the time. If we can solve the Halting problem it shows we're not computations. Nobody has succeeded yet but it's an open question. Likewise I can imagine computing the digits of Chaitin's Omega. I just imagined it. But no computer can do it, because it amounts to solving the Halting problem.

I can imagine living forever too, but by stating "doing within the laws of physics" I meant something humans can actually do and not merely imagine. So perhaps I phrased it wrong. The Halting problem is something a person can imagine doing, but it isn't something which has been done or is known to be doable. Many things fall into that category, but they're all unknowns, just as the future of AI is unknown. In theory, who is to say computing is limited to our current understanding of it? In the future we may swap out solid-state devices for amphibian parts, eXistenZ style.  :eyebrow:

6 minutes ago, QuantumT said:

Is there an upper limit of intelligence?

If you believe the Universe is finite, there is an upper limit to data. 

Posted
41 minutes ago, Ten oz said:

AI could attempt it as well. 

Yes, but you said it wasn't "doing" anything. That's all I was commenting on. (Unless I misunderstood ...)

Posted (edited)

You know, the mention of "AI" obscures a key fact. An AI, which is just a super-duper-fast implementation of a Turing machine, cannot do anything that a human can't do sitting at a desk with an unlimited supply of pencils and erasers and an unbounded paper tape. It's true an AI is faster; but the set of functions an AI can compute is exactly the same set that a human implementing a Turing-1936 type TM can compute.

Even a quantum computer cannot compute anything that a vanilla TM can't. We know that for some specialized problems there are quantum algorithms that run in polynomial time where the best known classical algorithms take exponential time. That's a significant result. But complexity theory is not the same as computability theory; and in terms of computability, an AI cannot do anything a pencil-and-paper human can't do, if the human is constrained by the rules of a TM.
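To be concrete about one half of that equivalence (the easy half): any ordinary program can step a TM through its transition table. A minimal sketch in Python; the bit-flipping machine below is just a made-up example.

```python
# A tiny Turing machine interpreter. The transition table maps
# (state, symbol) -> (write, move, next_state). The example machine
# flips every bit of its input and halts at the first blank.

def run_tm(table, tape, state="start", blank="_"):
    tape = dict(enumerate(tape))   # sparse tape, unbounded in both directions
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(flip_bits, "10110"))   # -> 01001_
```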

Now if you believe that a human can't do anything more than a TM even when she stands up from the desk and exercises her human capabilities ... then that's your belief. It is in fact an open question.

But when we think of an "AI" as something that transcends the laws of computation as they are currently understood, that is an error. All existing AIs are practical implementations of Turing machines.

AIs are NOT imaginary devices that can transcend the laws of computing.

Nor, it must be emphasized, does running a computation quickly accomplish anything that running the same computation slowly could not. When a supercomputer executes the Euclidean algorithm to find the greatest common divisor of two integers, it performs exactly as well as a human being executing the algorithm by hand out of a number theory text. The supercomputer goes faster. But given unbounded time, as theoretical TMs have, a supercomputer is no better than pencil and paper.
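For reference, here is that algorithm as a few lines of Python; run on a supercomputer or traced by hand, it computes exactly the same function, only at different speeds.

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace the pair (a, b)
    # by (b, a mod b) until the remainder is zero.
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))   # 21, whether computed here or by hand
```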

So the real question here is not "AI" versus conventional computers. Rather, it's between what computations can do, and what they can't. That distinction was made by Turing in 1936 and since that time nobody has had a better idea about the subject.

Edited by wtf
Posted
4 minutes ago, wtf said:

Now if you believe that a human can't do anything more than a TM even when she stands up from the desk and exercises her human capabilities ... then that's your belief. It is in fact an open question.

If humans were capable of doing more than a TM then, presumably, it would be possible to create an artificial intelligence that could also do that.

Posted
1 minute ago, Strange said:

If humans were capable of doing more than a TM then, presumably, it would be possible to create an artificial intelligence that could also do that.

I'm depressed you could say that after what I wrote.

An AI is a computer program. Computer programs are practical instances of TMs. (They're constrained by space and time, whereas TMs aren't).

So if we are not TMs, we can do things TMs can't ... but we could never make an AI that could do what a TM can't ... unless we change the definition of AI to go beyond the limits of Turing machines.

Now it is true that there are theoretical models of computing that go beyond the TM. Turing himself wrote his doctoral thesis on ordinal models of computation, in which you keep adding oracles for noncomputable problems to develop a hierarchy of notions of computation. 

But such models go beyond the laws of known physics, in that they require supertasks: performing infinitely many operations in finite time. 

Without going beyond that boundary, there are things a TM can't do; and if humans CAN do more than a TM, we still could never make a computation go beyond what a TM can do.

The only exception would be new physics. With current physics we're stuck. 

Posted
14 minutes ago, wtf said:

I'm depressed you could say that after what I wrote.

Don't worry, I think I agree with everything you say!

11 minutes ago, wtf said:

An AI is a computer program.

Current AIs are computer programs.

I deliberately wrote "artificial intelligence" to distinguish the artificial recreation of human intelligence (if that is different from TM "intelligence") from what we currently call AI (which is not really very intelligent!)

13 minutes ago, wtf said:

So if we are not TMs, we can do things TMs can't ... but we could never make an AI that could do what a TM can't ... unless we change the definition of AI to go beyond the limits of Turing machines.

Exactly. If we can do things that TMs can't then I can't see any reason why we couldn't create systems that do the same thing.

15 minutes ago, wtf said:

Without going beyond that boundary, there are things a TM can't do; and if humans CAN do more than a TM, we still could never make a computation go beyond what a TM can do.

The only exception would be new physics. With current physics we're stuck. 

All of which seems to be a good argument that humans can't do anything more than a TM.

Posted

You can bring external randomness in to make halting improbable.

Programs likewise don't have to be Turing complete. You do get more flexibility that way.

Posted (edited)
20 hours ago, Yahya Al-Samawi said:

What can AI not do? Give us examples of tasks that AI cannot do (in your opinion), no matter what effort is made or how advanced the technology becomes.

  • Appreciate beauty.
  • Love.
  • Feel emotion.
  • Be creative for its own enjoyment.

The list goes on. Increased computational power doesn't make A.I. conscious. There is much more to consciousness than the mere processing of information; computation is a rather superficial layer that provides the illusion of consciousness. 

Roger Penrose said it best on Joe Rogan: https://www.youtube.com/watch?v=9ReEPCpFWwE

 

Edited by Alex_Krycek
Posted
1 hour ago, Alex_Krycek said:

The list goes on. Increased computational power doesn't make A.I. conscious. There is much more to consciousness than the mere processing of information; computation is a rather superficial layer that provides the illusion of consciousness. 

How do you know that?

How do you know humans are not “mere information processors”?

How does the brain defy the laws of physics in this way?

How do you know a machine could not do the same thing?

ps I have not watched your video (because it is a video) but I have read some of Penrose’s arguments. As far as I can tell they are just arguments from incredulity (“but we are human, not machines”)

Posted
47 minutes ago, Strange said:

ps I have not watched your video (because it is a video) but I have read some of Penrose’s arguments. As far as I can tell they are just arguments from incredulity (“but we are human, not machines”)

I just watched it, quite interesting. He seems to be saying that certain things in quantum physics are not computable - he talks about the Schrodinger equation giving nonsense answers in certain contexts, I assume he means wave function collapse. He also talks about Godel's theorem, and how we can have a statement that we know is true but cannot be computationally proved. If we can prove it even though it's not computable, there must be something beyond computation to consciousness, and that thing must be related to quantum physics because that also contains uncomputable things. 

That's how I understood it anyway.

 

Posted
10 minutes ago, Prometheus said:

I just watched it, quite interesting. He seems to be saying that certain things in quantum physics are not computable - he talks about the Schrodinger equation giving nonsense answers in certain contexts, I assume he means wave function collapse. He also talks about Godel's theorem, and how we can have a statement that we know is true but cannot be computationally proved. If we can prove it even though it's not computable, there must be something beyond computation to consciousness, and that thing must be related to quantum physics because that also contains uncomputable things. 

Ah, yes: "quantum theory is strange; consciousness is strange; so they must be the same thing".

I am constantly amazed at the shallow arguments produced by someone so brilliant, just because he believes humans must be "special".

And the thing about Godel's theorem is nonsense. There aren't things we "know" are true but can't prove. There are things we can't know are true because we can't prove them. It may be common sense or practical to assume they are, but that doesn't mean we are right in these assumptions (common sense is notoriously unreliable).

Posted (edited)
17 hours ago, Strange said:

See into the future.

..analysis of enough data is sometimes indistinguishable from "predicting the future"..

Edited by Sensei
