wtf Posted March 1, 2017 (edited)

I think it's safe to assume that in due time AIs will become increasingly complex, in terms of both computing power and the cleverness of the programmers. At that stage we might start watching for real "intelligence" to emerge.

Every algorithm and programming technique, including machine learning and neural networks, reduces to a Turing machine. By substrate independence (aka multiple realizability) we know that whatever a program computes is independent of the speed or nature of the hardware.

Conclusion: if intelligence is emergent, then it's not computational, and vice versa. Let me say that again, since it's both true and notably absent from virtually every discussion of this topic: if intelligence is emergent, it's not computational; and if it's computational, it's not emergent.

In other words, if a fancy neural network is intelligent, then the same algorithm is intelligent when it's being executed by a clerk following the instructions with pencil and paper. If an algorithm is intelligent, that intelligence is present when it's running one instruction at a time on the world's slowest processor. If it's not intelligent, speeding it up or running it on a future supercomputer cannot make it intelligent.

As a concrete example, consider the Euclidean algorithm for determining the greatest common divisor of two integers. Whether that algorithm is performed by Euclid using a stick to draw numbers in the sand, or whether it's coded up on a supercomputer, it only does that one thing. Running Euclid's algorithm faster doesn't make it suddenly know how to drive a car.

It's been shown that quantum computers have exactly the same computational power as standard Turing machines. So although a quantum computer can do some specialized tasks faster than a conventional computer, the set of problems that can be solved by quantum computers is exactly the same as the set of problems that can be solved by traditional computers.
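In code, Euclid's algorithm is just this, a minimal Python sketch (the function name is mine). The point stands: it's the same fixed sequence of steps whether a clerk runs it on paper or a supercomputer runs it a billion times faster.

```python
def euclid_gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

# The same steps work in sand, on paper, or in silicon;
# only the speed differs, never the capability.
print(euclid_gcd(252, 105))  # → 21
```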
There is much hype in the AI business, and absolutely no results whatsoever in strong AI since the hype got started in the 1960s. Weak AI is driving cars and playing chess. Those are very impressive feats of programming in highly constrained problem domains. But weak AI is not general intelligence, nor is it a step in that direction. https://en.wikipedia.org/wiki/Weak_AI

Edited March 1, 2017 by wtf
geordief Posted March 2, 2017 (Author) (edited)

Quoting wtf: "Conclusion: If intelligence is emergent, then it's not computational. And vice versa. Let me say that again, since it's both true and notably absent from virtually every discussion of this topic. If intelligence is emergent, it's not computational. And if it's computational, it's not emergent."

That rings true... and yet if we postulate that we have this intelligence, then how are we different from an "artificial" intelligence of whatever level of sophistication? Yes, how does "intelligence" differ from any informational transformation (in the simple ways you describe)?

We can point to all the marvelous attributes of what we see as intelligence, but how can we say that we are not simply being self-referential and looking down our noses at less sophisticated manifestations of what is fundamentally the same phenomenon? If so, when these manifestations do evolve (?) into something more sophisticated, what is there to say that it is somehow lacking in "true" intelligence? Maybe it will be "better" in some ways, and if, as it evolves, the notion of defense takes shape, then so does the notion of "adversary".

But these programs cannot "think for themselves", I hear. Is that the real Turing test? If the programmers have created programs that are too convoluted to keep track of (or even understand in the round), then who or what is doing the thinking?

Perhaps one way to test the practicality of a strong AI would be to develop a robot with an artificial mind capable of adapting to changing environments, and set it the task of competing with a primitive organism in the niche it is "set loose in". If it succeeds and kicks out its rivals, then it may be a safe bet it can do the same with us eventually.

I realise we have gone off topic, as my OP was really concerned with the potential for using AI in the formulation of new mathematical (and perhaps, if they exist, other sorts of) models.

Edited March 2, 2017 by geordief
wtf Posted March 3, 2017 (edited)

This got a little rambly... my code must have been in a loop... in a loop... in a loop...

Quoting geordief: "I realise we have gone off topic as my OP was really concerned with the potential for using AI in the formulation of new mathematical (and perhaps - if they exist - other sorts of) models"

I reread your original question, and you asked if programs can build models. Of course. I can program a computer to analyze 100 years of temperature data, do a statistical correlation of the temps against the months, and output the prediction, "July will be warmer than December this year." That's easy and commonplace. The Go program that beat the human expert -- an astonishing achievement for weak AI -- was programmed to play millions of games against itself and draw statistical inferences about which moves were more likely to result in victory.

But do you think that's all we do? Creativity consists in knowing everything there is to know about the statistics... and seeing that in this particular instance, the right move is wrong. An AI painter knows that schlock sells. We'd get big-eyed children and poker-playing dogs for the rest of our lives. How do you make an AI Picasso? You can program a computer to paint LIKE Picasso... but you cannot program a computer to create the next revolution in art. Or math, or anything. If there is one thing computers do, it's the same exact thing, over and over. They're algorithms. They can't grow new capabilities.

Quoting geordief: "That rings true... and yet if we postulate that we have this intelligence then how are we different from an 'artificial' intelligence of whatever level of sophistication? Yes, how does 'intelligence' differ from any informational transformation (in the simple ways you describe)?"

Creativity. The ability to know what's right regardless of the statistical properties of the domain. Intentionality. The ability to know what programs are "about."
The self-driving car does not know it's driving a car. It's only flipping bits. It's the human that knows the algorithm is driving a car.

Consciousness. "The hard problem," as David Chalmers calls it. I'm conscious and a box of wires isn't. How am I so sure, you ask? I'm not.

Quoting geordief: "We can point to all the marvelous attributes of what we see as intelligence but how can we say that we are not simply being self referential and looking down our noses at less sophisticated manifestations of what is fundamentally the same phenomenon?"

As computer scientist and awesome blogger Scott Aaronson would say, I am an unreconstructed meat chauvinist. I do take your point that I have no logical basis for my meat-centric beliefs. http://www.scottaaronson.com/ [I linked Aaronson in case people aren't familiar with his awesome site. CS theory, quantum computing, and way more. He has a series called Quantum Computing Since Democritus, and if you simply read through it you automatically become smarter.]

Quoting geordief: "If so, when these manifestations do evolve (?) into something more sophisticated what is there to say that it is somehow lacking in 'true' intelligence?"

Personally I think that whatever the next stage of the evolution of intelligence is, it won't be a computer. I don't think we're the last word, but I can't imagine an algorithm being that clever. I do not believe I'm an algorithm.

Quoting geordief: "Maybe it will be 'better' in some ways and if, as it evolves, the notion of defense takes shape, then so does the notion of 'adversary'."

I don't doubt that there's a next stage of evolution. Personally I worry more about humans placing too much faith in machines. That's the danger, not the machines themselves.

Quoting geordief: "But these programs cannot 'think for themselves', I hear. Is that the real Turing test?"

This is the problem of other minds. I assume you're conscious. I assume my neighbor is conscious even though I never talk to him. How do I know anyone is conscious? How would I know if a machine is conscious? It seems hopeless.
Consciousness is subjective. We have no objective test for it. Yet.

Quoting geordief: "If the programmers have created programs that are too convoluted to keep track of (or even understand in the round) then who or what is doing the thinking?"

But all programs are already like that. When an experienced programmer writes a system of 10,000 or 100,000 lines of code, he no longer remembers how it works. The accounting programs at banks were written in the 1960s by COBOL programmers long since retired. Many of those programs have been patched and extended over the years by generations of programmers. Nobody at the bank knows how any of their software works; they just make changes and try not to break things. That's actually the nature of the programming business... today! Nobody knows how any of this software works. That's a reason NOT to trust computers with our lives. It's the nature of all code that it's so complicated nobody understands it. All code is like that. Code encapsulates huge amounts of complexity. Always has.

Quoting geordief: "Perhaps one way to test the practicality of a strong AI would be to develop a robot with an artificial mind capable of adaptation to changing environments and set it the task of competing with a primitive organism in the niche it is 'set loose in'"

"Survivor," pitting humans against bots. I think I'm going to pitch this to the TV networks!

Quoting geordief: "If it succeeds and kicks out its rivals then it may be a safe bet it can do the same with us eventually"

If it did, it would be the fault of the programmers. Just like some guy typed in the wrong command yesterday and brought down Amazon Web Services. A typo took down a big chunk of the internet. It's always human error. The infuriating thing about programming is that the computer always does exactly what you tell it to. http://www.usatoday.com/story/tech/news/2017/03/02/mystery-solved-typo-took-down-big-chunk-web-tuesday/98645754/

Edited March 3, 2017 by wtf
Endy0816 Posted March 3, 2017

There are race conditions and the like that can place a system in a non-programmed state. Most of what we want from a machine is the opposite of evolution and creative thought.
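For what it's worth, the classic "lost update" race can be replayed deterministically in a few lines. This sketch simulates the unlucky interleaving by hand rather than using real threads, to show how a system lands in a state no single-threaded run of the program would produce.

```python
# Deterministic replay of a lost-update race condition: two "threads"
# both read a shared counter before either writes its increment back.
counter = 0

read_a = counter       # thread A reads 0
read_b = counter       # thread B also reads 0, before A has written
counter = read_a + 1   # A writes back 1
counter = read_b + 1   # B overwrites with 1; A's increment is lost

# Two increments were intended, yet counter == 1: a state the
# programmers never wrote into the program.
print(counter)
```

With real threads the same effect is intermittent and timing-dependent, which is exactly why such states feel "non-programmed."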