Strange Posted January 9, 2018 41 minutes ago, thoughtfuhk said: Cognitive tasks refer to activities done through intelligent behaviour. (As mentioned long ago) For example, reading is a cognitive task. Great. An example. Finally. It's only taken 3 pages. So, how does reading increase entropy? How would reading be further optimised by evolution? How will reading be further optimised by AIs? For example, does reading faster increase entropy more than reading slowly? Or are there other aspects of reading that should be optimised?
thoughtfuhk Posted January 9, 2018 Author (edited) 1 hour ago, Area54 said: Good. Thank you. We seem to be on the same wavelength on that one. Do you have an approximate notion as to how far "down" the web of life such intelligent behaviour expresses itself? Restricted to primates? Present in amoebae? Somewhere in between? Also, on to the second question: Please justify the claim that, as such tasks became optimized, intelligence became more generalised. In the above question "justify" could be replaced by "provide reasoned support for". To answer that question, here is an isolated repetition of an earlier point of mine: As things got smarter from generation to generation, things got demonstrably better and better at maximizing entropy. (As mentioned before) As entropy maximization got better and better, intelligence got more general. (As mentioned before) More and more general intelligence provided better and better ways to maximize entropy; it is a law of nature that entropy is increasing, and science shows that this is reasonably tending towards equilibrium, where no more work (or activity) will be possible. (As mentioned before) The reference given earlier: http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf Edited January 9, 2018 by thoughtfuhk
Strange Posted January 9, 2018 2 hours ago, thoughtfuhk said: As things got smarter from generation to generation, things got demonstrably better and better at maximizing entropy. (As mentioned before) If they "demonstrably" got better, you should be able to provide evidence of, or a reference to, such a demonstration. Otherwise it is just another unsupported assertion. 2 hours ago, thoughtfuhk said: As entropy maximization got better and better, intelligence got more general. (As mentioned before) Again, some evidence supporting this would be nice. 2 hours ago, thoughtfuhk said: More and more general intelligence provided better and better ways to maximize entropy Can you provide some examples? Because the only example so far is "reading", and I don't see how that is a better way to maximise entropy.
thoughtfuhk Posted January 10, 2018 Author (edited) 8 hours ago, Strange said: If they "demonstrably" got better, you should be able to provide evidence of, or a reference to, such a demonstration. Otherwise it is just another unsupported assertion. Again, some evidence supporting this would be nice. Can you provide some examples? Because the only example so far is "reading" and I don't see how that is a better way to maximise entropy. 1) Please refer to the URL you conveniently omitted from your quote of me above. 2) Your claiming that what I said is unsupported, especially when I provided a URL (i.e. supporting evidence) that you omitted from your response, is clearly dishonest. Edited January 10, 2018 by thoughtfuhk
Strange Posted January 10, 2018 2 hours ago, thoughtfuhk said: 1) Please refer to the URL you conveniently omitted from your quote of me above. Sigh. Then AGAIN, please quote the relevant text and/or mathematics from this paper that show that increasing intelligence resulted in improvements in increasing entropy. So far the only example of a "cognitive task" that you have provided is "reading". Does the paper you linked mention reading? No. How does reading improve entropy? How does "optimising" reading further improve entropy? What does it mean to "optimise reading"? Can you either answer these questions, or provide some relevant examples of cognitive tasks and explain how they act to increase entropy? And, in case it isn't clear: I am asking these questions because I don't understand what you are saying. I don't understand what you are saying because you refuse to explain. So another question: why do you refuse to clarify your idea?
thoughtfuhk Posted January 10, 2018 Author (edited) 4 hours ago, Strange said: Sigh. Then AGAIN, please quote the relevant text and/or mathematics from this paper that show that increasing intelligence resulted in improvements in increasing entropy. So far the only example of a "cognitive task" that you have provided is "reading". Does the paper you linked mention reading? No. How does reading improve entropy? How does "optimising" reading further improve entropy? What does it mean to "optimise reading"? Can you either answer these questions or provide some relevant examples of cognitive tasks, and explain how they act to increase entropy. And, in case it isn't clear: I am asking these questions because I don't understand what you are saying. I don't understand what you are saying because you refuse to explain. So another question: why do you refuse to clarify your idea?
1) There are many degrees of freedom, or many ways to contribute to entropy increase.
2) This set of degrees of freedom is a "configuration space" or "system space", the total set of possible actions or events; in particular, there are "paths" through this space that describe ways to contribute to entropy maximization.
3) These "paths" are activities in nature, over some time scale "[math]\tau[/math]" and beyond.
4) As such, as observed in nature, intelligent agents generate particular "paths" (intelligent activities) that prioritize efficiency in entropy maximization, over more general paths that don't involve intelligence. In this way, intelligent agents are "biased", because they occupy a particular region (do particular activities) in the "configuration space" of total possible actions in nature.
5) Highly intelligent agents aren't merely biased for the sake of doing distinct things (i.e. cognitive tasks) compared to non-intelligent, or other less intelligent, agents in nature; they are biased, by extension, towards behaving in ways that are actually more effective at maximising entropy production, compared to non-intelligent or less intelligent agents.
6) As such, the total system space can be described with respect to a general function, relating how activities may generally increase entropy, afforded by the degrees of freedom in said space: [math]S_c(X,\tau) = -k_B \int_{x(t)} \Pr(x(t)\mid x(0)) \ln \Pr(x(t)\mid x(0)) \, \mathcal{D}x(t)[/math] Equation (2)
7.a) In general, agents approach more and more complicated macroscopic states (from smaller/earlier, less efficient entropy-maximization states, i.e. "microstates"), while the activities that occur are "paths" in the total system space, as mentioned before.
7.b) Highly intelligent agents behave in ways that engender unique paths (by doing cognitive tasks, compared to the simple tasks done by lesser intelligences or non-intelligent things), and by doing so they reach more of the aforementioned macroscopic states than lesser intelligences and non-intelligence do.
7.c) In other words, highly intelligent agents access more of the total configuration space, i.e. more of the degrees of freedom in nature, the same degrees of freedom associated with entropy maximization.
7.d) In this way, there is a "causal force" which constrains the degrees of freedom seen in the total configuration space (the total ways to increase entropy), in the form of humans, and this constrained sequence of intelligent or cognitive activities is the way in which said highly intelligent things are biased to maximise entropy: [math]F(X_0,\tau) = T_c \nabla_X S_c(X,\tau) \big|_{X_0}[/math] Equation (4)
7.e) In equation (4), an extension of equation (2), "[math]T_c[/math]" weights the various unique states that a highly intelligent agent may occupy over some time scale "[math]\tau[/math]". (The technical way to say this is that "[math]T_c[/math] parametrizes the agent's bias towards entropy maximization".)
8.a) Finally, reading is yet another cognitive task, i.e. yet another way for nature/humans to access more of the total activities associated with entropy maximization, as described throughout item 7 above.
8.b) Beyond human intelligence, AGI/ASI are yet more ways that shall reasonably permit more and more access to activities or "paths" that maximise entropy increase. Edited January 10, 2018 by thoughtfuhk
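[Editor's aside: the causal-entropic-force idea behind equations (2) and (4), from the Wissner-Gross & Freer paper linked above, can be made concrete with a toy simulation. The sketch below is an illustration only, not code from the paper: a random walker on a bounded 1D lattice, with the causal path entropy [math]S_c[/math] computed by exhaustively enumerating future paths and the force estimated by a finite-difference gradient. The function names and the choices [math]k_B = T_c = 1[/math] are assumptions made for the example.]

```python
import math

def allowed_moves(x, n):
    # Moves that keep the walker on the lattice {0, ..., n-1}.
    return [m for m in (x - 1, x + 1) if 0 <= m < n]

def path_entropy(x0, tau, n):
    # Causal path entropy S_c(x0, tau) with k_B = 1: enumerate every
    # length-tau path starting at x0, where each step picks uniformly
    # among the allowed moves, then sum -p * ln(p) over the paths.
    probs = []

    def walk(x, depth, p):
        if depth == tau:
            probs.append(p)
            return
        moves = allowed_moves(x, n)
        for m in moves:
            walk(m, depth + 1, p / len(moves))

    walk(x0, 0, 1.0)
    return -sum(p * math.log(p) for p in probs)

def causal_force(x0, tau, n, t_c=1.0):
    # Causal entropic force F = T_c * dS_c/dx, estimated by a central
    # finite difference (clamped at the lattice edges).
    s_plus = path_entropy(min(x0 + 1, n - 1), tau, n)
    s_minus = path_entropy(max(x0 - 1, 0), tau, n)
    return t_c * (s_plus - s_minus) / 2.0

if __name__ == "__main__":
    # Near a wall, fewer future paths remain accessible, so the force
    # points towards the open interior of the lattice.
    for x in (1, 5, 9):
        print(x, round(causal_force(x, tau=5, n=11), 4))
```

Running this shows the force is positive near the left wall, roughly zero at the centre (by symmetry), and negative near the right wall: the walker is pushed towards the region from which the most future paths remain open, which is the "keep your options open" behaviour the paper describes.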
Area54 Posted January 10, 2018 18 hours ago, thoughtfuhk said: An isolated repetition of mine to answer that question: As things got smarter from generation to generation, things got demonstrably better and better at maximizing entropy. (As mentioned before) As entropy maximization got better and better, intelligence got more general. (As mentioned before) More and more general intelligence provided better and better ways to maximize entropy, and it is a law of nature that entropy is increasing, and science shows that this is reasonably tending towards equilibrium, where no more work (or activities) will be possible. (As mentioned before) The reference prior given: http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf Thank you for your reply. My concerns with its weakness and, to some extent, its irrelevance have been addressed by Strange in his subsequent posts. Your latest lengthy reply contains abundant detail. I shall review this detail and comment, or question further, once I have (or have not) made sense of it.
Strange Posted January 10, 2018 59 minutes ago, thoughtfuhk said: 1) There are many degrees of freedom or many ways to contribute to entropy increase. ... Thank you. That wasn't so hard, was it? I still don't see how you make the logical leap from "a thing that intelligent agents do" to "the purpose of intelligence". But I really can't be bothered to waste another week trying to wring more detail out of you.
thoughtfuhk Posted January 10, 2018 Author (edited) 40 minutes ago, Strange said: Thank you. That wasn't so hard, was it. I still don't see how you make the logical leap from "a thing that intelligent agents do" to "the purpose of intelligence". But I really can't be bothered to waste another week trying to wring some more detail out of you. A) Look at item (8.b), and you'll see that the human objective/goal is reasonably to trigger the next step in the landscape of things that can access more ways to maximize entropy. (Science likes objectivity) B) Remember, the trend says nature doesn't just stop at one species; it finds more and more ways to access more entropy-maximization techniques. Humans are one way to get to whichever subsequent step will yield more ways (i.e. more intelligence, AGI/ASI) to generate additional macrostates or paths towards better entropy-maximization methods. Edited January 10, 2018 by thoughtfuhk
dimreepr Posted January 10, 2018 1 hour ago, thoughtfuhk said: A) Look at item (8.b), and you'll see that human objective/goal is reasonably to trigger a next step in the landscape of things that can access more ways to maximize entropy. (Science likes objectivity) Humans have many objectives, one of which is to try to develop AGI/ASI, which, as you yourself indicated in the OP, is some way off. Quote B) Remember, the trend says nature doesn't just stop at one species, it finds more and more ways to access more entropy maximization techniques. That's a non-sequitur. Firstly, nature has no intention; it doesn't care what happens. And secondly: Quote Humans are one way to get to whichever subsequent step will yield more ways (aka more intelligence...i.e. AGI/ASI) that shall generate additional macrostates or paths towards better entropy maximization methods. AGI/ASI is just a potential future that may never happen, much like fusion energy, which always seems to be 10 or 20 years off. So I'll reiterate: life has no purpose, not even to live on, let alone create.
Phi for All Posted January 10, 2018 Moderator Note: So assertions have no place in arguments if you can't back them up with evidence, yes? Without evidence in support, it's all guesswork and opinion, which isn't science. Thread closed. Don't start another thread on this topic.