Everything posted by wtf

  1. Fine. I don't know how to recognize sentience. And you just told me that you have OBJECTIVE -- your word -- criteria. Your quote was: "I think AI can become objectively sentient." So just tell me what these objective criteria are, that I may be similarly enlightened.
  2. The quote I questioned said that one can objectively determine sentience in others. A chatbot could say, "Oh my goodness, are you ok? That must hurt." That's not sentience. What are the objective criteria? As I mentioned earlier, the first chatbot, Eliza, caused naive people to tell it their innermost secrets. It was a simplistic chatbot with no intelligence at all beyond the ability to repeat phrases. You say, "My toe hurts." It responds, "Tell me more about your toe." People mistook that for sentience. It's the exact same example you're using. If your neighbor says his toe hurts, you'll say, "Oh, that's terrible, I hope it gets better." But if your washing machine prints out "My toe hurts," you'll call the repairman. Computer scientist Scott Aaronson calls that meat chauvinism. Surely you are not so easily fooled by a chatbot, I hope.
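     To make the Eliza point concrete, here's a minimal sketch in Python of the kind of rule Eliza used (my own toy illustration, not Weizenbaum's actual script): pure pattern-matching and echoing, with no understanding anywhere in it.

     ```python
     import re

     # A minimal Eliza-style rule table: match a phrase, echo a piece of it back.
     # The real ELIZA had a larger script of such patterns, but nothing deeper
     # than this kind of substitution.
     RULES = [
         (re.compile(r"my (\w+) hurts", re.IGNORECASE), "Tell me more about your {0}."),
         (re.compile(r"i feel (\w+)", re.IGNORECASE), "Why do you feel {0}?"),
     ]

     def reply(utterance: str) -> str:
         for pattern, template in RULES:
             m = pattern.search(utterance)
             if m:
                 return template.format(m.group(1))
         return "Please go on."  # default stall when nothing matches

     print(reply("My toe hurts"))  # -> Tell me more about your toe.
     print(reply("I feel sad"))    # -> Why do you feel sad?
     ```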
  3. What can "objectively sentient" mean? Is your next door neighbor objectively sentient? How do you know?
  4. You look silly doubling down on your error. You're wrong. I posted the correct example of a function with discontinuous derivative.
  5. ps ... found this good thread. https://math.stackexchange.com/questions/292275/discontinuous-derivative One of the commenters notes that |x| does NOT answer the question of a function with discontinuous derivative, for exactly the reason I gave. The example given of such a beast is: f(x) = x^2 sin(1/x) for x nonzero; and f(0) = 0. The details are in the link.
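     To spell out the computation behind that example: away from 0, the product and chain rules give the derivative; at 0, you use the difference quotient directly.

     ```latex
     f'(x) = 2x\sin\tfrac{1}{x} - \cos\tfrac{1}{x} \quad (x \neq 0), \qquad
     f'(0) = \lim_{h \to 0} \frac{h^2 \sin(1/h) - 0}{h}
           = \lim_{h \to 0} h \sin\tfrac{1}{h} = 0.
     ```

     So f'(0) exists and equals 0; but as x → 0 the cos(1/x) term oscillates between -1 and 1, so f'(x) has no limit at 0. The derivative exists at every point, yet is discontinuous at 0.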
  6. If you think |x| has a derivative you failed freshman calculus. It's the classic elementary example of a continuous function that does NOT have a derivative, because to "have a derivative" means to have a derivative at every point of the domain. Since |x| does not have a derivative, it certainly does not have a derivative that's discontinuous. That's because if I don't have a purple elephant, then I certainly don't have a purple elephant with wings. Of course |x| does have a derivative at every point of its domain except one. That MIGHT be what the OP meant, but it's NOT what they asked.
  7. Too strong. OP asked for a continuous function that has a derivative that is not continuous. |x| does not satisfy OP's requirement, since it does not have a derivative.
  8. Preferences determine what you choose. But the pleasure you feel is subjective. One choice gives more pleasure than another. And that experience is different for every person. We could program a bot to randomly choose chocolate or vanilla ice cream. We could even provide sophisticated sensors that can analyze the fat content, the sweetness, etc. of the ice cream. We could tell it to optimize for something or other. Say, best fit with the choices of a population of ten-year-olds. Over time, the bot will perhaps develop a preference, based on statistical correlation with the corpus of data representing the ice cream preferences of ten-year-olds. But the bot will not experience the pleasure of one over the other. It's doing datamining and iterative statistical correlation. It's no different in principle than an insurance company deciding what your auto premium should be based on how you correlate with the database of all drivers. People who "totaled your brand new car" are more likely to total another one, to quote a particularly annoying American tv commercial.

     Am I the only person here who has qualia? Isn't anyone else aware of their own subjective self? You all really think you're robots executing a crude, physically implemented Turing machine? I am not a bot ... a bot ... a bot ...
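     A sketch of the kind of bot I mean, in Python (all numbers invented for illustration): its "preference" is nothing but a running tally over a corpus.

     ```python
     import random

     # Toy "preference-forming" bot: it tallies how often each flavor matched
     # the (invented) choices of a population of ten-year-olds, then picks
     # the flavor with the better hit rate. A statistic, not a pleasure.
     counts = {"chocolate": [0, 0], "vanilla": [0, 0]}  # flavor -> [matches, trials]

     def observe(flavor: str, kid_agreed: bool) -> None:
         counts[flavor][1] += 1
         if kid_agreed:
             counts[flavor][0] += 1

     def preferred() -> str:
         def rate(f):  # smoothed agreement rate
             matches, trials = counts[f]
             return (matches + 1) / (trials + 2)
         return max(counts, key=rate)

     # Feed it fake survey data: kids agree with chocolate more often.
     for _ in range(1000):
         flavor = random.choice(["chocolate", "vanilla"])
         observe(flavor, random.random() < (0.7 if flavor == "chocolate" else 0.4))

     print(preferred())  # almost surely "chocolate"
     ```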
  9. > The beginning of consciousness is preference. Input or no input.

     From where I sit this doesn't even seem wrong. It seems unserious. Apologies if you are in fact serious. If so, your examples are weak and unconvincing. A computer may receive input or it may receive no input. But it can not have a preference for one or the other. I simply can't imagine otherwise. It's like saying my washing machine cares whether I use it or not. It can accept input in the form of clothing to be washed. But it can have no preference for washing or not washing clothes.
  10. > You did. It is an intriguing analogy. I'm trying to think how that could relate to consciousness. It seems completely different to me. I am going to have to think about it some more.

      I'm clarifying the difference between simulation and reality. It's like a beginning exercise in graphics programming. You have a ball bouncing around in a 2-D box. During each frame you check to see if the ball has hit a wall. If so, you apply the rule that the angle of incidence equals the angle of reflection to determine the new direction of the ball (sketch below). But no physical forces are involved, only mathematical modeling. In fact you could program in a different rule: angle of reflection is random, or half the angle of incidence. You'd get funny geometry. That's because simulations aren't reality. With gravity, it's perfectly clear that no bowling balls are sucked into the simulation. If we made a digital cell-by-cell simulation of a nervous system, we simply don't know if it would be conscious.

      > Would you go so far as to say that it could even behave exactly as if it were self aware? Even claiming that it is self aware?

      Yes it might. This of course is Turing's point in his 1950 paper on what's now called the Turing test. If something acts intelligent, we should assume it's intelligent. There are many substantive criticisms of this idea, not least of which is that it's the humans who are the weak point in this experiment. I assume my next door neighbor is intelligent based on "interrogations" along the lines of "Hey man, nice day." "Yeah, sure is." "Ok, see you later." "You too." What kind of evidence of consciousness is that? So the real problem is that we have no way to determine whether something that acts intelligent is self-aware. Turing's point exactly: if it acts self-aware, it is self-aware. Hard to argue with that, but hard to believe it too. You may recall that the users of Eliza, the first chatbot, thought it was an empathetic listener and told it their problems. As I say, it's the humans who are the weak point in the Turing test.

      Not even physicists call gravity a force anymore. It's not a force, it's a distortion in spacetime. Objects are simply traveling along geodesics according to the principle of least action. No force involved.

      Consciousness is data? By that criterion Wikipedia, the telephone book, and the global supply chain are conscious. Can you clarify that remark? Data? Like the annual rainfall in Kansas and the gross national product of Botswana? I don't buy that at all.

      > But we'll not know for sure, until it's tested.

      How would you test for consciousness? See my preceding remarks on the Turing test.
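      Here's the bouncing-ball exercise sketched in Python (a toy version; box size and velocities are arbitrary). The whole "physics" is one if-statement, which is the point: change the rule and the "law of nature" changes with it.

      ```python
      # Minimal bouncing-ball frame update in a 2-D box, width W by height H.
      # The "reflection law" is just an if-statement: mathematical modeling,
      # no physical forces anywhere.
      W, H = 640, 480

      def step(x, y, vx, vy):
          x, y = x + vx, y + vy
          if x <= 0 or x >= W:
              vx = -vx  # angle of incidence = angle of reflection
          if y <= 0 or y >= H:
              vy = -vy  # swap in a random or scaled rule here: funny geometry
          return x, y, vx, vy

      state = (10.0, 10.0, 7.0, 4.0)
      for _ in range(200):  # simulate 200 frames
          state = step(*state)
      print(state)
      ```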
  11. I thought I responded to this point earlier. If I run a perfect simulation of gravity in my computer, nearby bowling balls are not attracted to the computer any more than can be accounted for by the mass of the computer. The simulation doesn't actually implement gravity, it only simulates gravity mathematically. Likewise suppose I have a perfect digital simulation of a brain. Say at the neuron level. Such a simulation would light up the correct region of the simulated brain in response to a simulated stimulus. It would behave externally like a brain. But it would not necessarily be self-aware. It's like the old video game of Lunar Lander. It simulates gravity mathematically but there's no actual gravity, just math simulating the behavior of gravity.
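      For what it's worth, all the "gravity" in a Lunar-Lander-style game is a couple of lines of arithmetic; a minimal sketch (constants and step size invented for illustration):

      ```python
      # All the "gravity" in a Lunar-Lander-style game: numbers updated per
      # frame. Nothing is attracted to anything; it's arithmetic simulating
      # free fall.
      g = -1.62   # lunar surface gravity, m/s^2
      dt = 0.1    # seconds per frame

      altitude, velocity = 100.0, 0.0
      while altitude > 0:
          velocity += g * dt         # integrate acceleration
          altitude += velocity * dt  # integrate velocity
      print(f"Touchdown at {abs(velocity):.1f} m/s")
      ```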
  12. Mind is a computation but the universe isn't? Interesting thought. I believe Searle makes the point that there's something about the biological aspect of the brain that gives rise to consciousness. Of course it's true that computations can be implemented on any suitable physical substrate. Whether that's true for minds is unknown.
  13. I don't think we'll solve this tonight but we can agree to disagree. Have you got any non-organic examples? That's the point. Life seems to encode meaning. Bitflipping IMO doesn't.
  14. Are you claiming that meaning is one-to-one mapped to neurons? Neuroscience doesn't support that conclusion at all. The fact is we have no idea what subjective consciousness and meaning and qualia are. How can you be so certain of things that nobody knows? And provide inaccurate "evidence" to support your unknowable claim?
  15. The meaning is in your mind and mine, not in the bits. It's like the written word. We make these marks on paper. The meaning is in the mind of the writer and the mind of the reader. The meaning is not in the marks.

      Good question. Nobody knows how that might work. But since it's not known whether the physics of the universe (the true physics, not human-contingent theories of physics) is computable, it's quite possible that the universe does what it does but not by symbolic, Turing-1936-style computation. I'd say it's highly likely. But why should the universe be a computation? It seems so unlikely to me, if for no other reason than the very contemporaneousness of the idea. In ancient times, when waterworks were the big tech thing, people thought the world was a flow. In the 18th century they thought the world was a Newtonian machine. Now we have these wonderful computers, so people think the world's a computer. The idea is highly suspect for that reason alone. A stronger point is that in 80 years, nobody has found a better definition of computation. And again, why should the universe be a computation at all? The world wasn't a flow when the Romans built grand waterworks. It wasn't a machine in Newton's time. And it's probably not a computer just because we live in the age of computers.
  16. The unsolvability of the Halting problem is a mathematical theorem. It couldn't be falsified in the future any more than the Pythagorean theorem could. It's not an empirical result. https://en.wikipedia.org/wiki/Halting_problem
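      For anyone curious why, the proof is short enough to sketch in Python. The halts() function below is hypothetical; the point of the diagonal argument is precisely that no correct, always-terminating version of it can exist.

      ```python
      # Sketch of Turing's 1936 diagonal argument. Suppose, for contradiction,
      # that someone hands us a correct, always-terminating halting tester.
      def halts(program, data) -> bool:
          """Return True iff program(data) would eventually halt."""
          ...  # hypothetical oracle -- the argument shows it cannot be written

      def diag(program):
          """Do the opposite of whatever halts() predicts about self-application."""
          if halts(program, program):
              while True:  # halts() says it halts? Then loop forever.
                  pass
          # halts() says it loops? Then halt immediately.

      # Does diag(diag) halt? If halts(diag, diag) returns True, diag(diag)
      # loops forever; if it returns False, diag(diag) halts at once. Either
      # way halts() gave the wrong answer -- contradiction.
      ```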
  17. Ah, you see what you did there. I said consciousness is not computational. You immediately claimed that the alternative is nonphysical or metaphysical. You are implicitly assuming that the mind is a computation. Perhaps the mind is physical but not a computation. That is a perfectly sensible possibility, is it not? Computations are severely limited in what they can do. The human mind does not (to me) seem so constrained.

      In passing, I just happened to run across this a moment ago: Why Deep Learning is Not Really Intelligent and What This Means https://medium.com/@daniilgor/why-deep-learning-is-not-really-intelligent-and-what-this-means-24c21f8923e0 This relates to the present discussion as well as the similar one in the Computer section.

      I'm on the side of those who say that whatever consciousness is, it is not algorithmic in nature. That is in no way an appeal to the supernatural. It's an appeal to the profound dumbness of computations. They just flip bits. They can't represent meaning. One can be a physicalist yet not a computationalist.
  18. Such an AI is still implemented on conventional computer hardware and can be executed line by line by a human with pencil and paper. So my questions still stand. Parallelism is still a Turing machine, just as your laptop can run a web browser and a word processor "at the same time." Any parallel computation can be implemented by a computation that just does one instruction at a time from each of the parallel execution threads, round-robin fashion (sketch below). You get no new computational power from parallelism.

      Your point that simulation = reality is wrong IMO. If I simulate gravity in a program running on my laptop, nearby bowling balls are not attracted to my computer any more strongly than can be perfectly accounted for by the mass of my computer. Likewise a simulation of a brain would exhibit all the behavioral characteristics of a brain, lighting up the right areas in response to stimuli, for example. But it would not be any more conscious than my gravity simulator attracts bowling balls; which is to say, not at all.

      I don't want to get into a lengthy convo about emergence till you (or someone) responds to my questions. But emergence is a very murky concept. It doesn't explain anything. "What's consciousness?" "Oh, it's just emergence from complexity." "Well, that tells me nothing!"
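      The round-robin claim is easy to exhibit. A minimal sketch in Python, using generators as stand-ins for parallel threads; one sequential loop executes them all, no parallel hardware required:

      ```python
      from collections import deque

      def round_robin(*threads):
          """Interleave step-at-a-time computations on one processor.

          Each `thread` is a generator; each next() is 'one instruction'.
          A single sequential loop runs them all."""
          queue = deque(threads)
          while queue:
              thread = queue.popleft()
              try:
                  next(thread)       # execute one step of this thread
                  queue.append(thread)
              except StopIteration:
                  pass               # this thread has finished

      def counter(name, n):
          for i in range(n):
              print(name, i)
              yield                  # yield control after each "instruction"

      round_robin(counter("A", 3), counter("B", 3))
      # Prints A/B steps interleaved: same result as running them "in parallel".
      ```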
  19. A thought experiment. Suppose we someday have an AI that is self-aware. Suppose that this AI works on the same principles as conventional computers. That category would include all current implementations of machine learning AIs. And in the 80+ years since Church, Turing, and Gödel worked out the limitations of formal symbolic systems, nobody has found any other model of computation that could be implemented by humans. Therefore its code could be equally well executed by a human using pencil and paper. Beginning programmers learn to "play computer" to figure out why their program's not working. You step through the code by hand.

      So: a human sits down with pencil and paper to execute the AI's code, one line at a time. Where is the consciousness? In the pencil? The paper? The "system"? What does that mean?

      Secondly, when does the consciousness appear? In the initialization stage of the program? After a million iterations of the main loop? How does this work? If a computer starts executing instructions, at what point does it become self-aware? If it's not self-aware after a million instructions have been executed, what makes it conscious after one more instruction? How is all this claimed to work?
  20. Nobody has ever worked out a model of computation implementable by a human being that goes past the ability of a TM. The Church-Turing thesis expresses the fact that no such model exists. It's not a theorem, only a hypothesis. It's stood for 80 years without refutation. https://en.wikipedia.org/wiki/Church–Turing_thesis
  21. Doesn't help. Same level of computability. Deterministic TMs do exactly the same things as nondeterministic ones and vice versa. Nondeterministic TMs may be better in complexity but not in computability. https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine#Computational_equivalence_with_DTMs Here's a nice Quora thread explaining why this is true. You need an account on Quora to read this I believe. https://www.quora.com/Why-do-deterministic-and-non-deterministic-Turing-machines-have-the-same-power The basic idea is that a deterministic TM just executes ALL the possible paths that a nondeterministic TM might take.
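      Here's what "executes ALL the possible paths" looks like in code: a minimal sketch in Python, where the nondeterministic machine is abstracted (my formulation, for illustration) as a function returning every successor state, and one deterministic loop explores them breadth-first.

      ```python
      from collections import deque

      def accepts(successors, start, is_accepting, max_steps=10**6):
          """Deterministically simulate a nondeterministic machine.

          `successors(state)` returns ALL states the machine might move to;
          we explore every branch breadth-first with one deterministic loop."""
          frontier, seen = deque([start]), {start}
          for _ in range(max_steps):
              if not frontier:
                  return False           # every branch halted without accepting
              state = frontier.popleft()
              if is_accepting(state):
                  return True            # some branch accepts
              for nxt in successors(state):
                  if nxt not in seen:
                      seen.add(nxt)
                      frontier.append(nxt)
          return False  # give up (a true simulation would dovetail forever)

      # Toy use: nondeterministically add 1 or 2, trying to reach exactly 7.
      print(accepts(lambda s: [s + 1, s + 2] if s < 7 else [], 0, lambda s: s == 7))
      ```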
  22. This is a common misunderstanding. AI as currently implemented (multi-layered machine learning) is just datamining on steroids. It most definitely executes an algorithm. It has source code that could be published and studied. It runs on conventional hardware. And it is a practical implementation of a Turing machine. The exact same algorithm could be performed, although slowly, by a person sitting at a desk with a large supply of pencils and paper. The algorithms are more subtle: "Aggregate this data with that data and compare with these target results, and keep changing your weighting algorithms to improve the percent on-target." But it's an algorithm. It's perfectly deterministic. There is no difference at all in principle between an AI running the fanciest "deep learning" algorithm and a beginning programmer's first "Hello, world!" program.

      I'm not discounting the cleverness of the ML approach. I'm only separating the reality from the hype. ML runs deterministic algorithms and could be implemented as a classical Turing machine. And the proof of that is that the AIs all run on conventional hardware! There is no magic involved and no new computational paradigm. Of course it's a tremendously clever way to munge a huge corpus of data. But it's still a conventional algorithm.

      ps -- Here is a programming-101 assignment for a program that "learns to survive on its own." You have a car that comes to a fork in the road. It randomly turns left or right with 50-50 probability. To the left is a cliff, which the car drives off, and the driver dies. To the right lies the garden of eternal happiness. If the driver dies, you adjust the percentage down one point so that L has a chance of 49% and R has a chance of 51%. If the driver ends up eternally happy, you likewise decrease L and increase R by one percentage point. You repeatedly play the game in a loop. Initially the driver will die or be eternally happy an equal number of times. After a while the program will gradually "learn" to turn right all the time. So we made a program that "learned" to avoid death and to seek eternal happiness. Huzzah. Hold a press conference.

      I hope I have made my point here. This is exactly what machine learning algorithms do, but of course on a much grander scale. They make decisions based on probabilities, which they adjust for the next iteration based on whether a given choice led to a good outcome or not.
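      And for the record, the whole "assignment" fits in a dozen lines of Python (a sketch; the trial count and the cap are my choices):

      ```python
      import random

      # The fork-in-the-road "learner" exactly as described above. Left always
      # kills, right always leads to the garden, so either outcome shifts the
      # odds one percentage point toward Right. "Learning" is just a counter.
      p_right = 0.50
      deaths = happy = 0
      for trial in range(50):
          if random.random() < p_right:
              happy += 1    # eternal happiness: increase R by one point
          else:
              deaths += 1   # driver dies: decrease L by one point (same update)
          p_right = min(1.0, round(p_right + 0.01, 2))

      print(f"deaths={deaths}, happy={happy}, final P(right)={p_right:.2f}")  # -> 1.00
      ```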
  23. I'm depressed you could say that after what I wrote. An AI is a computer program. Computer programs are practical instances of TMs (they're constrained by space and time, whereas TMs aren't). So if we are not TMs, we can do things TMs can't ... but we could never make an AI that could do what a TM can't, unless we change the definition of AI to go beyond the limits of Turing machines.

      Now it is true that there are theoretical models of computing that go beyond the TM. Turing himself wrote his doctoral thesis on ordinal models of computation, in which you keep adding oracles for noncomputable problems to develop a hierarchy of notions of computation. But such models go beyond the laws of known physics, in that they require supertasks: performing infinitely many operations in finite time. Without going beyond that boundary, there are things a TM can't do; and even if humans CAN do more than a TM, we still could never make a computation go beyond what a TM can do. The only way out would be new physics. With current physics we're stuck.
  24. You know, the mention of "AI" obscures a key fact. An AI, which is just a super-duper-fast implementation of a Turing machine, can not do anything that a human can't do sitting at a desk with an unlimited supply of pencils and erasers and an unbounded paper tape. It's true an AI is faster; but the set of functions an AI can compute is exactly the same set that a human implementing a Turing-1936-type TM can compute. Even a quantum computer can not compute anything that a vanilla TM can't. We know that for some specialized problems there are quantum algorithms that run in polynomial time where the best known classical algorithms take exponential time. That's a significant result. But complexity theory is not the same as computability theory; and in terms of computability, an AI can not do anything a pencil-and-paper human can't do, if the human is constrained by the rules of a TM.

      Now if you believe that a human can't do anything more than a TM even when she stands up from the desk and exercises her human capabilities ... then that's your belief. It is in fact an open question. But when we think of an "AI" as something that transcends the laws of computation as they are currently understood, that is an error. All existing AIs are practical implementations of Turing machines. AIs are NOT imaginary devices that can transcend the laws of computing.

      Nor, it must be emphasized, does running a computation quickly do anything that running the same computation slowly can't. When a supercomputer executes the Euclidean algorithm to find the greatest common divisor of two integers, it performs exactly as well as a human being executing the algorithm by hand out of a number theory text (sketch below). The supercomputer goes faster. But given unbounded time, as theoretical TMs have, a supercomputer is no better than pencil and paper.

      So the real question here is not "AI" versus conventional computers. Rather, it's between what computations can do and what they can't. That distinction was made by Turing in 1936, and since that time nobody has had a better idea about the subject.
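      Here's the gcd sketch mentioned above; the supercomputer and the pencil-and-paper human execute exactly these steps.

      ```python
      def gcd(a: int, b: int) -> int:
          """Euclid's algorithm, straight out of a number theory text.

          A supercomputer running this computes exactly the same function
          as a person doing the divisions by hand -- just faster."""
          while b:
              a, b = b, a % b
          return a

      print(gcd(1071, 462))  # -> 21, by hand or by supercomputer
      ```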
  25. People imagine solving the Halting problem all the time. If we could solve the Halting problem, it would show we're not computations. Nobody has succeeded yet, but it's an open question. Likewise I can imagine computing the digits of Chaitin's Omega. I just imagined it. But no computer can do it, because it amounts to solving the Halting problem. There are strict and profound limits on what computations can do. Turing showed that in 1936.

      Ah yes ... at the very least we KNOW a computation can't solve the Halting problem, and we don't know whether humans can.

      Yes. https://en.wikipedia.org/wiki/Stability_of_the_Solar_System. Found an interesting thread here: https://cs.stackexchange.com/questions/43181/is-the-unsolvability-of-the-n-body-problem-equivalent-to-the-halting-problem Also note that in Newtonian gravity, as the distance between two point-masses goes to zero, their gravitational attraction goes to infinity. I believe that's related, but I don't have references at the moment.

      The point about Newtonian gravity is that because of chaos, the accumulated effect of tiny rounding errors, we cannot in principle compute the evolution over time of even a perfectly deterministic system (sketch below). That is, in Newtonian gravity, the motion of every particle is a deterministic function of the position, mass, and momentum of every other particle in the universe, and in principle can be calculated by God's computer. But it can NOT be computed by a Turing machine. This is a fact that's often missed in philosophical discussions. Determinism does not imply knowability or computability. By God's computer I mean the universe itself, which arguably is NOT a Turing machine, and perhaps not a computation even in an imaginative extension of its current technical definition as a TM. Computations are limited in what they can do.

      Jeez, is this English 101 day on the forum?? LOL
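      A quick illustration of the rounding-error point, using the logistic map as a stand-in for an N-body system (my choice of example; it's a standard chaotic system): a perturbation the size of one floating-point rounding error swamps the prediction within a few dozen steps, even though the rule is perfectly deterministic.

      ```python
      # Sensitive dependence: two trajectories of the chaotic logistic map
      # x -> 4x(1-x), starting 1e-15 apart -- roughly one rounding error.
      # After ~50 steps they are completely decorrelated.
      x, y = 0.3, 0.3 + 1e-15
      for step in range(60):
          x, y = 4 * x * (1 - x), 4 * y * (1 - y)
          if step % 10 == 9:
              print(f"step {step + 1:2d}: x={x:.6f} y={y:.6f} |diff|={abs(x - y):.2e}")
      ```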