
thoughtfuhk


Everything posted by thoughtfuhk

  1. AlphaZero, the latest variant, can play both chess and Go, and it is being prepared for more cognitive tasks. You may have heard that AlphaGo's predecessor, a single reinforcement learning model, could play several Atari games without being reprogrammed by humans. Likewise, AlphaGo Zero is an initial approximation of Artificial General Intelligence. (Remember, the whole point is to build more and more general algorithms.) Why? I don't doubt that such a particular point in time may emerge! I tend to see Computer Science as a general type of course, one that may encompass mathematics, physics/quantum physics, psychology, biology, chemistry, etc. In fact, AGI will likely emerge from a combination of multiple disciplines. Here is a recent example by Google DeepMind et al. that combines several disciplines: "Towards deep learning with segregated dendrites". As such, I have a degree in Computer Science, and I seek to contribute to the development of Artificial General Intelligence. Precisely.
  2. It's not merely about processing speed. Here's a scenario that ought to help eliminate your computing-speed misconception: The game of Go is one of the hardest human games, with a state space of \(10^{170}\). To play Go, you either need human intuition or something like a computer the size of the universe to enumerate the possible game states. AlphaGo Zero, an artificial intelligence program, can beat the best human Go player, by far. AlphaGo Zero is not the size of the universe. We see here that it's not merely about computing speed, but also about cognitive structures such as those enabling human intuition. AlphaGo Zero uses "human-like intuition", or cognition-like processes, to reduce the enormous problem space of Go, as humans do, because AlphaGo Zero is obviously not a computer the size of the universe! Human intuition is approximated by mathematical structures that aim to mirror biological brain function. For example, the notation \(W * x + b\) represents a mathematical or biologically inspired prior in machine learning (e.g. convolutions), or a hyperplane for representing some problem space in terms of artificial neuronal data. (A minimal sketch of this affine map follows below.) Note 1: AlphaGo Zero uses models including deep artificial neural networks to play the game of Go. (Games are important as test spaces for AI because games are lower in resolution than real life, hence cheaper to train algorithms on, while offering wide ranges of tasks to test on (to test algorithm generality), and we can safely test AI capabilities in games.) Note 2: And yes, the whole point of AGI is to help humans. However, it does so by way of human-brain-inspired hardware/software applications!
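A minimal sketch (assuming NumPy; the shapes and values are illustrative placeholders, not anything from the post) of the affine map \(W * x + b\) referred to above, i.e. the basic building block of an artificial neural network layer:

import numpy as np

# One affine "layer": map an input vector x to an output vector y = W @ x + b.
# W and b would normally be learned from data; here they are random placeholders.
rng = np.random.default_rng(0)
x = rng.normal(size=4)          # input features
W = rng.normal(size=(3, 4))     # weights: one hyperplane per output unit
b = rng.normal(size=3)          # biases
y = W @ x + b                   # the affine prior discussed in the post
print(y)

Stacking such maps with nonlinearities between them yields deep networks of the kind described in Note 1.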
  3. I already gave a suitable answer above. AGI is a discipline whose product may encompass all disciplines, all courses. Postscript: Listen to what Dr. Ben Goertzel has to say about AGI:
  4. This thread is concerned with MIT's AGI course, not the purpose of human life. Report it at your leisure, though. Sorry, that was the wrong link. Correct link: "Artificial Intelligence takes Gene Therapy to the next level". Postscript: I didn't say I was an expert at gene therapy. I simply underlined that AI may aid in problems involving thinking, which is to say all problems, including gene issues.
  5. Yes, that AGI may emerge much, much later does not warrant the claim that I was "flattering myself" or being narrow-minded about the one particular discipline known as AGI. At least now you've nicely pointed out that AGI is likely merely a matter of time rather than a matter of possibility. Postscript: Kurzweil predicts human-level AGI by 2029. Postscript 2: It is likely that the new treatment you referred to emerged with the aid of AI!
  6. Think about this carefully: Narrow AI, i.e. deep-artificial-neural-network-powered models, can now perform individual cognitive tasks. (There is AI for disease diagnosis that does better than human doctors, AI for this and that, etc.) General AI, called AGI, will likely cover the entire landscape of human cognitive ability when it eventually arises. This means it will aid in solving problems involving thinking, which is all problems! Why wouldn't a model that can do all human cognitive tasks better than we can classify as mankind's last invention? Don't you recognize that AGI concerns all disciplines?
  7. I find it disappointing that a course concerning perhaps mankind's last invention is hardly known outside of the machine learning community. Why is MIT's Artificial General Intelligence course so little known? Edit: I just noticed another thread of mine was recently closed. Please refrain from discussing that thread here.
  8. Yes, some have certainly disregarded the rules of science; some have constantly confused religion with science. (As I underlined earlier.)
  9. I can't force you to own up to your errors. It's time to re-evaluate your command of the English language, as far as I can tell. Professional writers may make errors too. (Unless they possess omniscience, a property we don't find to be scientifically feasible!) You ought to own up to your errors. Nitpick: English is not my second language.
  10. Teleonomy does in fact concern purpose in the realm of science rather than religion. Wikipedia/teleonomy: "Teleonomy is sometimes contrasted with teleology, where the latter is understood as a purposeful goal-directedness brought about through human or divine intention." You persistently misread the sentence above; that teleonomy contrasts with purposeful goal-directedness as typically expressed in the realm of teleology wrt divine/human intention does not mean that teleonomy contrasts with purpose/goal-directedness overall! Why do you think the description opens with: "Teleonomy is the quality of apparent purposefulness and of goal-directedness of structures .."? Clearly, teleonomy does not contrast with purpose and goal-directedness per se; rather, it contrasts with purpose and goal-directedness only when they come packaged as typically unevidenced nonsense, such as teleology wrt divine/human intention. You have a ridiculous command of English, and so do your comrades here!
  11. Apparent purpose does not mean that. Ironically, the subsequent sentence means that teleonomy contrasts with teleology, where purposeful goal-directedness is concerned with divine/human intention. This doesn't mean teleonomy contrasts with purpose; it means it contrasts with purpose when concerned with teleology! And yet you accuse me of mangling English? Ridiculous! Bender effectively asserts that actual purpose is confined to religion (i.e. that teleonomy doesn't exist!). Quote from Bender: "This is getting repetetive and boring. Please stop misquoting respected scientists. Archeo-purpose is not real purpose, much like teleonomy, which is specifically invented to be able to use purpose-oriented language in the absence of purpose. If you want actual purpose, it is teleology you are looking for." That you are unable to understand basic sentence structure does not suddenly warrant the claim that I am "unable to understand English"! That is demonstrably false. Well, it's "correct" if you misread as Strange did. See my underlining of Strange's misunderstanding above.
  12. I have yet to see any such expressions as valid. What I said is that there exists scientific purpose, namely teleonomy. Others have consistently argued against this scientific purpose, in favor of some supposedly "real" or "actual" teleological purpose. In other words, people here have been willing to posit that purpose is only "real" or "actual" in the realm of religion, instead of science, as teleonomy underlines. Contrarily, I've largely been quoting Wikipedia/teleonomy.
  13. On the contrary, many people had argued as if teleonomy didn't exist. Teleonomy may describe organic goal-directedness, contrary to the teleological argument. For example, user Moontonman argued that purpose exists merely in the realm of the "supernatural": "Not if the hypothesis calls on a word used in place of supernatural to describe something equally illusionary. Teleonomy only describes an illusion of purpose, which much like the supernatural, is not falsifiable... " Where is it supposedly mentioned that teleonomy is a lack of purpose? Could you show us where in the opening line on teleonomy your opinion supposedly appears?
  14. I need not redefine anything as such. Opening line in Wikipedia/Teleonomy: "Teleonomy is the quality of apparent purposefulness and of goal-directedness of structures and functions in living organisms brought about by the exercise, augmentation, and improvement of reasoning." Another line from Wikipedia/Teleonomy: "It would seem useful to rigidly restrict the term teleonomic to systems operating on the basis of a program of coded information."
  15. It is clear that many are unaware of teleonomy. That many had been unaware does not suddenly warrant the claim that I had "misread it". Thus far no evidence has been provided for this supposed misreading; you have yet to provide any. It would be advisable that you avoid blathering on absent evidence! Ironically, the definitions of teleonomy you cited align nicely with the OP. (Wikipedia also links to research discussions, so Wikipedia is not as terrible as you present it. It is not very scientific to avoid research discussions and merely rely on dictionary definitions!) Science itself comprises models that may not be precisely what the cosmos is. This does not suddenly warrant the conclusion that science is illusory! In a YouTube video, Richard Dawkins also describes purpose in the realm of man-made items. (See the Wikipedia teleonomy page.)
  16. 1.) How did I supposedly fail to read the definition of teleonomy? 2.) How does my supposed failure to read the definition of teleonomy erase the fact that you foolishly confused religious purpose with scientific purpose? 1.) That teleonomic purpose is apparent does not suddenly warrant the claim that discussions regarding purpose are moot. 2.) See, on YouTube, Richard Dawkins' non-moot discussion regarding purpose (as cited in Wikipedia/teleonomy, in the OP). 1.) Are you theistic? No such misquoting occurred; you persist in insisting that teleology is the only type of "real" purpose. (There is no evidence for deities btw, so teleological purpose isn't "real" as far as science goes!) 2.) You ought to recognize that Dawkins describes purpose in the realm of science, rather than religion. Teleonomy is real, and as such describes real phenomena. Perhaps it is time that you update your prior knowledge, for it is clear that you were unaware of teleonomy prior to entering this discussion! 3.) To clarify, "a kind of pseudo-purposiveness", as Dawkins mentions, may be thought of in terms of the topic of randomness: For example, Juergen Schmidhuber underlines that it is sensible to describe the universe in terms of "short programs" (i.e., reasonably, the laws of physics) instead of truly random processes. He then expresses that it is sensible that the cosmos is "pseudorandom" rather than truly random, i.e. the cosmos comprises processes involving random components, but with overarching non-random structures. (Similar to how evolution concerns random mutations, all under the paradigm of non-random selection; a toy sketch of this follows below.) Likewise, as far as I can detect, Dawkins refers to "a kind of pseudo-purposiveness" to mean scientific processes regarding goal-directedness, minus the teleological baggage, i.e. purposiveness minus theistic nonsense! This is likely why Dawkins introduces "archeo-purpose" and "neo-purpose" immediately after mentioning the term pseudo-purposiveness. (Perhaps you are confusing Dawkins' use of the word "pseudo" with pseudoscience, and so you persist in falsely claiming that purpose cannot be in the realm of science, despite contrary evidence!)
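To make the "random components under overarching non-random structure" point concrete, here is a toy sketch (entirely illustrative; the target string, alphabet, and acceptance rule are assumptions, not anything stated by Schmidhuber or Dawkins): mutations are proposed blindly, a non-random selection rule decides which ones persist, and the combination reliably reaches a structured outcome.

import random

TARGET = "ENTROPY"                      # arbitrary goal string, for illustration only
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    # Non-random criterion: how many positions already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

current = "".join(random.choice(ALPHABET) for _ in TARGET)
while current != TARGET:
    i = random.randrange(len(TARGET))   # random mutation site
    mutant = current[:i] + random.choice(ALPHABET) + current[i + 1:]
    if fitness(mutant) >= fitness(current):   # non-random selection
        current = mutant
print(current)                          # prints ENTROPY

The individual mutations are random; the selection rule is not, which is the sense in which the overall process is "pseudorandom" rather than purely random.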
  17. -1, terrible display of common sense. The analogy is only apt if you confuse purpose in the realm of science with purpose in the realm of religion. It ought to be a crime to confuse Science and Religion on these forums. Once more, the OP concerns objective/scientific purpose, i.e. teleonomy, rather than religious/subjective purpose, i.e. the teleological argument. 1.) On the contrary, on January 31, I had long pointed out the particular entropy used in the paper, and I had long pointed out that programmers often work with compressed input spaces for the sake of enhanced efficiency! 1.b) Quote from me on January 31: "Shannon entropy does not prevent the measurement of the difference between conscious and unconscious states. (As indicated by the writers, Shannon entropy was used to circumvent the enormous values in the EEG results. It is typical in programming to use approximations or do compressions of the input space!)". (A minimal sketch of the Shannon entropy computation follows below.) 2.) My hypothesis doesn't explicitly mention that evolution favors intelligence. 2.b) Instead, it clearly mentions that entropy maximization may be steeper as species get more intelligent. 2.c) Why bother to level false accusations at my hypothesis? 2.d) Nitpick: Why do you feel a long length of time prevents evolution from leading to intelligence? You are aware that evolution did, indeed, lead to intelligence, right? Do you not detect your own brain to be intelligent, having resulted from billions of years of evolution? 3.) If you pay attention to the false accusations you made (as I addressed in points 2 to 2.c above), you may come to notice the evidence; i.e., intelligent things reasonably maximize entropy ("Causal Entropic Forces"), and AGI/ASI will be yet another way entropy is maximized, at even steeper rates, i.e. AGI/ASI shall reasonably eventually maximize entropy more than humans, by way of enhanced cognitive task performance! 3.b) Causal Entropic Forces, by Alex Wissner-Gross, PhD: "Recent advances in fields ranging from cosmology to computer science have hinted at a possible deep connection between intelligence and entropy maximization, but no formal physical relationship between them has yet been established. Here, we explicitly propose a first step toward such a relationship in the form of a causal generalization of entropic forces that we find can cause two defining behaviors of the human ‘‘cognitive niche’’—tool use and social cooperation—to spontaneously emerge in simple physical systems." ... 4.) As I had long stated, Dawkins' introduction of the terms archeo- and neo-purpose occurs on scientific grounds, rather than religious ones. The thing about science is that it applies regardless of your opinions!
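A minimal sketch (assuming NumPy; the probability vectors are fabricated placeholders, not EEG data from the paper) of the Shannon entropy referred to in point 1.b, \(H(p) = -\sum_i p_i \log_2 p_i\): a distribution spread over many configurations has higher entropy than one concentrated on a few.

import numpy as np

def shannon_entropy(p):
    # H(p) = -sum_i p_i * log2(p_i), in bits; zero-probability bins are skipped.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

uniform = np.full(8, 1 / 8)             # many equally likely configurations
peaked = np.array([0.9, 0.05, 0.05])    # a few dominant configurations

print(shannon_entropy(uniform))         # 3.0 bits (maximal for 8 outcomes)
print(shannon_entropy(peaked))          # about 0.57 bits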
  18. What significance do you feel your remark above has wrt the OP? It looks like you didn't bother to at least read Wikipedia/Teleonomy (as pointed out in the OP)! Note: The OP concerns purpose in the realm of science/objectivity, rather than subjectivity/the teleological argument. Please don't confuse purpose in the realm of science (teleonomy) with purpose in the realm of religion (the teleological argument...). It ought to be a crime on these forums to confuse Science and religion, as you're doing in your response above! Example: Richard Dawkins described the properties of "archeo-purpose" (by natural selection) and "neo-purpose" (by evolved adaptation) in his talk on the "Purpose of Purpose". Dawkins credits the brain's flexibility, as an evolutionary feature, with adapting or subverting goals, layering neo-purpose goals on top of an overarching evolutionary archeo-purpose. Language allows groups to share neo-purposes, and cultural evolution, occurring much faster than natural evolution, can lead to conflict or collaboration.
  19. 1.) On the contrary, higher degrees of consciousness reasonably yield increased entropy, such that it is maximized, as long noted in the OP. 1.b) Source a: "We present evidence that conscious states result from higher entropy and complexity in the number of configurations of pairwise connections". 1.c) Source b: "Recent advances in fields ranging from cosmology to computer science have hinted at a possible deep connection between intelligence and entropy maximization, but no formal physical relationship between them has yet been established. Here, we explicitly propose a first step toward such a relationship in the form of a causal generalization of entropic forces that we find can cause two defining behaviors of the human ‘‘cognitive niche’’—tool use and social cooperation—to spontaneously emerge in simple physical systems" 2.) Humans may not be relevant after the creation of AGI/ASI. 3.) As you may see in the sources above, nature may reasonably "find ways" to maximize entropy by creating smarter and smarter things. In our case, nature will "use humans" to build smarter things, namely AGI/ASI.
  20. We reasonably maximize entropy. AGI will maximize entropy to a larger degree, occupying more macrostates than humans (more cognitive tasks); the standard entropy-counting relation is recalled below. Nature reasonably "finds ways" to build entropy maximizers, and in our case, nature is reasonably using humans to construct better things, namely AGI/ASI.
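For reference, the textbook statistical-mechanics relation behind "occupying more states means more entropy" (a standard formula, not something derived in the post) is Boltzmann's

\[ S = k_B \ln W , \]

where \(W\) counts the accessible microstates, so a system with access to more states (\(W_2 > W_1\)) carries more entropy: \( \Delta S = k_B \ln (W_2 / W_1) > 0 \).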
  21. Q: Can Science be my religion? A: No. Archaic science/religion/mythology became modern science in the Scientific Revolution. So, modern science (what we call science today) is not religion.
  22. My prior answer referred to spontaneity wrt the particle's behaviour in Figure 2: "(a) A particle in a box is forced toward the center of its box." You had asked about Figure 2.
  23. As far as I detect, when describing particles, some degree of spontaneity tends to enter the scene in the regime of the uncertainty principle (the standard relation is recalled below).
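For reference, the Heisenberg relation gestured at here (a textbook inequality, not taken from the paper under discussion) bounds how sharply a particle's position and momentum can be specified at once:

\[ \Delta x \, \Delta p \ge \frac{\hbar}{2} . \]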
  24. See Dr. Wissner-Gross's paper, "Causal Entropic Forces".
  25. My hypothesis underlines that nature is "finding ways" to maximize entropy, and in doing so, nature "finds ways" to build smarter and smarter things. Humans are reasonably a way to engender smarter things, namely something engineered, i.e. Artificial General Intelligence. (See the principle of maximum entropy from the entropy maximization page, sketched below, or see Dr. Wissner-Gross's paper for more details.) On the contrary, my human-purpose idea had long been presented as a hypothesis. (See the OP. Note that in science, hypotheses may incorporate facts!) Contrarily, you quoted Dawkins' scientific discussion yourself; you quoted him introducing some scientific terms, including "archeo-purpose". He also introduces another term, "neo-purpose", on the grounds of science rather than non-science. Yes, archeo-purpose is the kind of pseudo-purpose whereby the reasons for biological parts derive from long-standing natural selection, minus any purpose associated with intelligent design or human intention concerning deities or subjective processes (this is where Richard separates archeo-purpose from theistic endeavour), while neo-purpose may concern the goals of man-made components, from the scope of human design. No, both archeo- and neo-purpose seek to describe applicable tiers of purpose, so it is not that one is actual and the other is not; both apply. In fact, Dawkins mentions that neo-purpose may contain archeo-purpose. I had long established that my hypothesis aligns with the overall concept that Dawkins' human/purpose discussion entails. Dawkins mentions some sweet spot of flexibility and inflexibility, and additionally, he mentions a paradox, i.e. a sub-optimal point contrary to what humans ought to be doing. I argue that, given the evidence of entropy maximization in tandem with rising intelligence, the sweet spot aligns with the creation of AGI, something predicted to generate more entropy by way of human-exceeding cognitive task performance. Entropy maximization is not limited to one point in nature.
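For reference, the principle of maximum entropy mentioned above, stated in its textbook form (a standard result, not taken from Wissner-Gross's paper): among all distributions consistent with normalization and a mean-value constraint, the one with the largest Shannon entropy is the exponential (Boltzmann/Gibbs) distribution,

\[
p_i^{*} \;=\; \operatorname*{arg\,max}_{p}\Big(-\sum_i p_i \ln p_i\Big)
\quad \text{s.t.} \quad \sum_i p_i = 1, \ \ \sum_i p_i E_i = \langle E \rangle
\qquad\Longrightarrow\qquad
p_i^{*} = \frac{e^{-\lambda E_i}}{\sum_j e^{-\lambda E_j}} ,
\]

with the multiplier \(\lambda\) fixed by the constraint on \(\langle E \rangle\).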