Genady

Senior Members
  • Posts

    5375
  • Joined

  • Days Won

    52

Everything posted by Genady

  1. I don't think that a substrate matters in principle, although it might matter for implementation. I think intelligence can be artificial. But I think that we are nowhere near it, and that current AI with its current machine learning engine does not bring us any closer to it.
  2. Unless all these programs are already installed in the same computer.
  3. Yes, this is a known concern.
  4. But I didn't say DNA.
  5. I think I can program in random replication errors. Or maybe I don't understand what you mean here.
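The idea of programming in random replication errors can be sketched roughly like this (a toy illustration of my own, not anyone's actual proposal; the alphabet, genome, and error rate are made up):

```python
import random

def replicate(genome, error_rate=0.01):
    """Copy a genome string, replacing each symbol with a random one
    (possibly the same) with a small per-symbol probability."""
    alphabet = "ACGT"
    copy = []
    for symbol in genome:
        if random.random() < error_rate:
            copy.append(random.choice(alphabet))  # random replication error
        else:
            copy.append(symbol)                   # faithful copy
    return "".join(copy)

random.seed(0)
parent = "ACGT" * 10
child = replicate(parent, error_rate=0.05)
mutations = sum(a != b for a, b in zip(parent, child))
print(f"{mutations} mutations in {len(parent)} symbols")
```

Note that an "error" can silently reproduce the original symbol, so the observed mutation count can be lower than the number of error events, much as in real copying.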
  6. (continuing my devil's advocate mission...) 1. The web, clouds, distributed computing, etc. are environments of lots of cooperating computers, aren't they? 2. Hmm, I can't think of a good enough computer analogy for this... 3. Computers can self-replicate, at least in principle. (BTW, it took me some time to figure out what was paradoxical in your jigsaw example. But I did. I think AI could deal with this kind of language vagueness, given enough examples.) (I gave the statement "My jigsaw has a missing piece" to Google Translate, and it translated it correctly, without any inherent paradoxes, into both Russian and Hebrew.)
  7. Yes, I know, they started as somewhat different questions, but they boiled down to the same subject. I'd like to hear your comparison, regardless of where you post it. Perhaps the thread on what computers can't do is more relevant. I took note of the differences you referred to before. Thank you. Perhaps, but what are these machines fundamentally missing that leads to this difference? What would prevent a sophisticated system of them from behaving like the system described by iNow earlier:
  8. You explain, correctly, why current artificial intelligence is human-like. However, my question is different: Is human intelligence computer-like? It refers specifically to the human intelligence abilities which are not realized in current AI. Current AI realizes only a very small subset of human intelligent tasks. What about the unrealized tasks? Are they, or some of them, unrealizable in principle? Is there some fundamental limitation in computer abilities that prevents AI from mimicking all of human intelligence? Following @studiot's clarification, let's stay with classical digital computers, because their functionality is precisely defined, via reducibility to a TM. Anyway, all AI today is realized by this kind. Is human intelligence just a very complicated TM, or does its functionality require some fundamentally different phenomenon, irreducible to a TM in principle? We know at least one such physical phenomenon, quantum entanglement: it is mathematically proven that this phenomenon cannot be mimicked by classical digital computers. Is human intelligence another one like that? If human intelligence is in fact reducible to a TM, i.e. realizable by classical digital computers, then perhaps the intelligence of all other animals on Earth is, too. But if it is not, then another question arises: when and how did evolution switch to this kind of intelligence? Mammals? Vertebrates? CNS? ...
  9. Yes, human, because it is more interesting and understandable to us. OTOH, the goal is not necessarily pragmatic. It can be for sport or for research, for example. I don't think they developed an artificial Go champion because it was needed.
  10. So, brains do many things that computers don't do, and computers do many things that brains don't do. Maybe the question should be narrowed to a domain where their functions seemingly overlap, namely, intelligence: Is human intelligence a biologically implemented computer?
  11. I think today "computation" applies to "whatever computers can do". This is certainly what they mean in the book I've cited in the OP.
  12. I didn't feel a need to mention Turing machines because any computation can be implemented by a TM. A TM is a formally defined device useful for formally analyzing computations, e.g. comparing their complexities. A computational machine doesn't have to be a TM, but whatever it does can be done by a TM. This includes AI neural nets of all types: since they are implemented by computers, they can be implemented by a TM.
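To illustrate what "formally defined device" means here, a TM is fully specified by a transition table. Below is a minimal simulator with a toy machine of my own invention that increments a binary number; it is only a sketch of the formalism, not anything from the book under discussion:

```python
def run_tm(tape, transitions, state="carry", blank="_"):
    """Simulate a single-tape Turing machine.

    transitions maps (state, read_symbol) -> (new_state, write_symbol, move),
    where move is -1 (left), +1 (right), or 0. The state 'halt' stops the run.
    """
    tape = list(tape)
    pos = len(tape) - 1  # this machine starts at the rightmost cell
    while state != "halt":
        symbol = tape[pos] if 0 <= pos < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if pos < 0:                 # grow the tape to the left
            tape.insert(0, write)
            pos = 0
        elif pos >= len(tape):      # grow the tape to the right
            tape.append(write)
        else:
            tape[pos] = write
        pos += move
    return "".join(tape).strip(blank)

# Toy machine: binary increment, carrying 1s leftward until a 0 or the edge.
INCREMENT = {
    ("carry", "1"): ("carry", "0", -1),  # 1 + carry -> 0, keep carrying left
    ("carry", "0"): ("halt", "1", 0),    # 0 + carry -> 1, done
    ("carry", "_"): ("halt", "1", 0),    # off the left edge: new leading 1
}

print(run_tm("1011", INCREMENT))  # 1011 + 1 = 1100
```

The whole machine is the three-row table; the simulator just applies it mechanically, which is exactly the sense in which a TM is "formally defined."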
  13. To add to this question: Does 'computation' include determining what to measure?
  14. A DNN can approximate any function in a model, but the model is given to it. What I mean by coming up with a new model, in the astrology example for instance, is: would a DNN come up with considering, instead of astrological data, parameters like a person's education, social background, family psychological profile, how much they travel, what they do, how big their town is, whether it is rural or industrial, conservative or liberal, etc.?
  15. A materialistic/naturalistic perspective leads me to consider the brain to be some kind of machine. But why computational? What do they mean when they say computational?
  16. For most galaxies, yes, it is so. For the very close galaxies, e.g. Andromeda, no. The redshift didn't change in the last 100 years.
  17. I've read today, in a recent book on artificial intelligence, this statement: "a brain is a computational machine that happens to be made of neurons." (Stone, James. Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning (p. 183), 2020.) Is the brain a "computational machine"? If so, in what sense?
  18. x + y + z = 10. You can express z in terms of x and y. Then you can substitute this expression for z in the given function, and it becomes a function of two variables. You can apply Theorem 2.6 as is to this function.
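The substitution step can be checked mechanically with sympy. The function f below is a hypothetical stand-in (the post doesn't specify one); only the constraint x + y + z = 10 comes from the post:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

# Hypothetical three-variable function under the constraint x + y + z = 10.
f = x**2 + y**2 + z**2

# Solve the constraint for z, then substitute: z = 10 - x - y.
z_expr = sp.solve(sp.Eq(x + y + z, 10), z)[0]
f2 = sp.expand(f.subs(z, z_expr))

# f2 is now a function of two variables only, ready for a theorem
# stated for functions of two variables.
print(f2)
```

For this choice of f, the result is 2x² + 2xy + 2y² − 20x − 20y + 100, with z eliminated entirely.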
  19. I don't know what "labeling data" is and how it relates to coming up with a new model. But will it come up with a new model?
  20. You know that "Moskva" is "Moscow" in Russian, right?
  21. Regarding the universal approximation theorem, here is an example of a model: astrology. What will a DNN do if the training data is astrological data of people as input and their life affairs as output? It can approximate any function in this model. But any function will be garbage anyway.
  22. Perhaps. But can we test if a computer has or doesn't have an idea of infinity?
  23. As I understand it, a DNN can approximate any function in a given model, i.e. given input and output spaces. What these spaces are is up to humans.
  24. I'm trying to narrow the original passage down to one item: AI can't come up with new models; it can only optimize models created by humans.