Everything posted by thoughtfuhk
-
1) You clearly didn't read it. 1.a) On the prior page, I specifically introduced the term "local optimization" to counter a "global optimum" comment another had made. 1.b) On this page, that was followed by a similar remark from you, advising against "global optimum" and repeating the very same "local optimization" concept I had previously introduced on the prior page. 2) By repeating the phrase I had already used to describe evolution, you've demonstrated that you did not bother to read the page. 3) Also, my earlier responses to you demonstrably hold. 4) Why bother to lie when evidence can trivially demonstrate your claim to be false? What is there to gain from stipulating obvious lies? 5.a) Another trivially falsified claim of yours. 5.b) As the mod pointed out, I did provide sources.
-
The questions you asked earlier are addressed in the human intelligence Wikipedia URL I provided.
-
1) Ironically, unlike you, I tend to substantiate my points with sources. 2) Are you simply saddened by your errors? You ought not to be: nobody is infallible, and you are bound to discover many shortcomings, because nobody is omniscient. 3) Actually, I would have preferred the username you mistakenly mentioned above, but I wasn't sure it was in line with the rules. 1) Your concerns are addressed in the URL I provided earlier. 2) I don't wish to repeat the same lengthy typing (like the exchange that occurred before you finally admitted to seeing some relation between optimization and evolution), so I'll leave it to you to process the source. (It takes time to process sources too, but such is the process of science.)
-
1) That you are unable to process the references I link does not suddenly warrant that the thread be closed. 2.a) For example: 2.b) Others here have seen that evolution does something like local optimization, whereas you didn't demonstrate that initially. (You did after a few responses from me.)
-
I don't detect the relevance of your response above, because it does not refute the reality that computers can do cognitive tasks... 1) I noticed you conveniently excluded the URL I presented from your quote of me above. 2) I advise that you observe the URL: https://en.wikipedia.org/wiki/Human_intelligence
-
1) I advise that you read the prior page before commenting, as you are almost repeating errors others here have made. 2) Anyway, as the evidence I've presented shows, evolution does appear to have a goal. (See for example "Dissipative Adaptation" by Jeremy England.) 3.a) I don't know why you are saying some things aren't optimal, because I didn't say that everything was optimal. (I've even used the term "local optimization" wrt evolution on the prior page!) 3.b) However, that things don't find a global optimum does not suddenly warrant that things aren't optimizing some phenomena, as I've repeated several times on the prior page.
-
1) Yes, as you demonstrated, your comprehension on the matter of artificial general intelligence definitely needs improvement. 2.a) Your prior words: "This may be the source of your confusion, evolution doesn't select (least of all candidates)...". 2.b) My response: As you can see above, you appeared to be refuting something you supposed I had said. Otherwise, why did you bother to mention that evolution doesn't select...? 3.a) On the contrary, you are demonstrating confusion; but the reference below shall help to minimize or purge your self-induced confusion. 3.b) Reference A - Cognitive Computing: https://en.wikipedia.org/wiki/Cognitive_computing 4) Yes, that "may" signifies uncertainty, but reasonably in the sense that humans might go extinct before AGI/ASI surfaces. However, if we don't go extinct and we continue to work on AI development, AGI/ASI is inevitable.
-
1. Your response is self-contradictory: the initial portion entailed the statement "evolution doesn't do anything", yet you subsequently substantiated that statement with another statement ironically expressing what evolution does. 2. I didn't say evolution "selects least of all candidates" (whatever that means; you will find no quote of me expressing that). 3.a) We already see evidence of artificial intelligence exceeding or equaling humans on many individual cognitive tasks, much like how nature implemented better hardware in humans compared to neanderthals. 3.b) Observing the range of cognition from neanderthals to humans, given the reasonable outcome that AGI/ASI shall exceed humans in cognitive tasks: the construction of AGI/ASI is a profound goal that may engender a subsequent range of intelligence exceeding that of humans. (In the same range I underlined at the beginning of this sentence.) 4) Regardless of your feelings on the matter, general intelligence (as far as science goes) is reasonably independent of substrate; i.e. no law of physics limits general intelligence to flesh. 1. The OP did in fact link to an article, which contained the paper in question. (Scroll down in the article to see Dissipative Adaptation etc.) 2.a) Again, I repeat that the article's words regarding "birds not being the global optimum of flight" do not warrant that some processes aren't being optimized. 3.a) That a process does not fall at some global optimum does not mean that a process is not occurring within a set of multiple candidate solutions, i.e. something like a local optimum (as one may derive from papers by Jeremy England et al, or other work)! 4.a) My words: "That's odd, because you previously mentioned that you could observe the relevance of optimization wrt evolution. (See source)" 4.b) A part of your response (from the source above): "Even if evolution can be seen as a local optimization (and I am not arguing against that)." So, you indeed mentioned that you could observe the relevance of optimization wrt evolution. 5.a) As you'll notice in my responses to dimreeper, he is demonstrably wrong about many components of his response. 5.b) Your claim: "Also, in many (nearly all) cases it doesn't relate to intelligence". 5.c) My response: Contrary to your non-evidenced claim, see this paper: "The evolution of intelligence: adaptive specializations versus general process". 6.a) It is odd that you mention "there's no evidence". 6.b) It is odd, because you went on to mention that "the intelligence of humans reached a level". 6.c) That level was reached because humans are candidates for optimizing cognitive tasks, i.e. cognitive tasks were optimized while intelligence grew as time passed. 7) Also, since then we've still been getting smarter (supplemented by better and better science/technology), although the amount of information we generate eludes us more and more daily. 8.a) Your words: "So, ignoring the fact that this is pure speculation / science-fiction at the moment". 8.b) My response: General intelligence (as far as science goes) is reasonably independent of substrate; i.e. no law of physics limits general intelligence to flesh. 9.a) Your words: "what are these cognitive tasks and in what way do they need to be optimized (i.e. what are the constraints)?" 9.b) My response: See items 3.a and 3.b in my reply above to dimreeper, in this latest thread. 10.a) See items 3.a and 3.b in my reply above to dimreeper, in this latest thread. 10.b) Notably, this applies reasonably, as far as science goes today, whether or not I state it.
-
How does that instance of optimization in "Dissipative Adaptation in Driven Self-assembly" supposedly contradict the OP? That's odd, because you previously mentioned that you could observe the relevance of optimization wrt evolution. (See source) Anyway, here is a clear summary, with quite important things underlined, emboldened, and blued: 1. Evolution selects increasingly suitable candidates all the time. (Optimization also pertains to candidate selection.) 2.a) In a range of intelligent behaviours, humans are candidates for optimizing cognitive tasks. 2.b) AGI/ASI is observable as yet another thing in nature (although non-biological): candidates that can theoretically generate better intelligence than humans, thus possessing the ability to better optimize cognitive tasks. 3) Based on (1), (2.a) and (2.b), AGI/ASI is a reasonably non-trivial goal to pursue, much like how nature generated smarter things than Neanderthals or chimpanzees.
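To make the candidate-selection point concrete, here is a toy sketch of my own (an illustrative construction, not a model from England's papers): repeatedly keeping the fittest candidates and producing mutated offspring is itself a simple optimizer. The 1-D fitness function and all parameters here are hypothetical choices for illustration.

```python
import random

def fitness(x):
    """Hypothetical 1-D fitness landscape with its peak at x = 3."""
    return -(x - 3.0) ** 2

def select_and_vary(population, n_keep=5, n_children=20, noise=0.5):
    """Keep the fittest candidates, then add mutated offspring of them."""
    survivors = sorted(population, key=fitness, reverse=True)[:n_keep]
    children = [s + random.gauss(0, noise)
                for s in survivors
                for _ in range(n_children // n_keep)]
    return survivors + children

random.seed(0)
population = [random.uniform(-10, 10) for _ in range(25)]
for _ in range(30):
    population = select_and_vary(population)

best = max(population, key=fitness)
print(round(best, 2))  # the best candidate drifts toward the peak near 3
```

Because survivors are carried forward unchanged, the best candidate never gets worse; selection plus variation steadily climbs the fitness landscape, which is the sense of "optimization via candidate selection" used in the summary above.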
-
1. If you claim not to argue against the instance that evolution can be seen as "a local optimization", then it is very odd that you don't yet detect the connection, especially given the OP. 2.a) Humans are entities that produce a range of intelligent behaviours in evolution which can be observed to optimize cognitive tasks. 2.b) AGI/ASI theoretically occurs as yet another class (though non-biological) producing intelligent behaviours that can be observed to optimize cognitive tasks, even more so than humans. 3) So, looking at some range of intelligent behaviours performable by things in nature, AGI/ASI occurs quantitatively as a subsequent step, with the ability to yield human-exceeding intelligent behaviours. (Similar to how humans outperformed their predecessors when nature implemented better "cognitive hardware" in humans.) 4) Note that I didn't say that AGI/ASI is life or biological, but we can still observe a range of intelligent behaviours in which AGI/ASI and humans occur, and conclude that AGI/ASI is a reasonable class/subsequent step in the landscape of intelligent behaviours! 5) By the way, new research on plant intelligence may forever change how you think about plants.
-
1.a) Again, I repeat that the article's words regarding "birds not being the global optimum of flight" do not warrant that some processes aren't being optimized. 1.b) That a process does not fall at some global optimum does not mean that a process is not occurring within a set of multiple candidate solutions, i.e. something like a local optimum (as one may derive from papers by Jeremy England et al, or other work)! 2. Reference A: "Minimum Energetic Cost to Maintain a Target Nonequilibrium State." 3. Reference B: "Dissipative Adaptation in Driven Self-assembly." 4. I advised you to be wary of your wording style ("y" has nothing to do with "x"), but you continue to ignore that advice, strangely. 5. As one may derive from papers by Jeremy England et al, artificial general intelligence shall reasonably occur as a better way in solution space, compared to humans. (So humans are in that solution space, but AGI or ASI shall be entities that are better candidates for cognitive tasks.) 6. We already see narrow (although more and more general) AI exceeding humans in several individual cognitive tasks. AGI or ASI shall reasonably outperform humans in most or all cognitive tasks! Refer to items 1 to 6 above. Separately, I already mentioned that to disregard my optimization summary, you could for example show that evolution does not non-trivially concern optimization. (Although I doubt you could, given that evidence supports my optimization summary.)
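The local-vs-global distinction I keep repeating can be sketched in a few lines (a hypothetical two-peak function of my own, not taken from any of the cited papers): a hill climber is genuinely optimizing, yet it can settle on a peak that is not the global optimum, just as "birds are not the global optimum of flight" even though flight was being optimized.

```python
from math import exp

def f(x):
    # Two peaks: a local optimum near x = -2 (height ~1) and the
    # global optimum near x = 3 (height ~2).
    return exp(-(x + 2) ** 2) + 2 * exp(-(x - 3) ** 2)

def hill_climb(x, step=0.01, iters=2000):
    """Greedy local optimization: move only while a neighbour is better."""
    for _ in range(iters):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
    return x

local_peak = hill_climb(-4.0)   # starts in the basin of the local optimum
global_peak = hill_climb(1.0)   # starts in the basin of the global optimum
print(round(local_peak, 1), round(global_peak, 1))
```

Both runs are optimizing the same function; only the starting basin differs. Failing to reach the global optimum does not mean no optimization occurred.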
-
1.a) The depth of Jeremy England's work exceeds your opinionated criticism; his work constitutes empirical results from initial experiments, unlike your evidence-absent remarks. 1.b) That "a bird is not the global optimum for flying" does not suddenly remove that some process is being optimized. 2) Thermodynamics does have something to do with optimization; as a scientist one should be wary of the words "x" has nothing to do with "y". 3) Reference A: Statistical Inference and String Theory. 4) I didn't haphazardly introduce point 2 after point 1. You are ignoring a crucial reality: artificial general intelligence has the theoretical capacity to be a meta-solution to many problem spaces. This means that AGI will be able to exceed humans in many cognitive tasks. Notably, evolution has not yielded science-creating bacteria; humans are the only species that has demonstrated the ability to invent science and technology, which yields the ability to manipulate many other species of lesser intelligence. 5) You are still yet to show that life's goal does not non-trivially constitute optimization. 6) Also, could you please list an example of an organism that supposedly lacks intelligence?
-
One would reasonably need to show that optimization is not a crucial evolutionary component/goal. Such evidence would be contrary to the evidence seen in 1.b and 1.c in the OP. Apart from that, science permits the existence of artificial general intelligence, much like how the atom was conceived prior to empirical observation.
-
Deep Learning can do all the things you listed, whether or not you admit it. (E.g. AlphaZero, or "One Model To Learn Them All".) Deep Learning is yet another program/piece of software that learns to build very, very complicated programs that humans have not been observed to be able to write! Reference A: "Self-taught artificial intelligence beats doctors at predicting heart attacks". Reference B: "AI learns and recreates Nobel-winning physics experiment". Reference C: "AI Uses Titan Supercomputer to Create Deep Neural Nets in Less Than a Day". Passing the Turing test is not a requirement to put millions of people out of work, which AI is already starting to do!
-
Deep Learning can do all the things you describe above. Deep Learning algorithms are very general, and this is why we see Deep Learning doing medical diagnosis, working in congested traffic (self-driving cars), etc. Notably, no human can sit down and hand-program the billions of parameters that these Deep Learning models tune automatically from scratch! These Deep Learning models are becoming more and more general by the day too. Here is a model which already somewhat combines them all: arXiv: "One Model To Learn Them All".
-
1.a) Life's purpose is reasonably to do optimization. 1.b) Reference I: "Dissipative Adaptation", Jeremy England. 1.c) Reference II: Wikipedia, Laws of thermodynamics. 2.a) Artificial general intelligence (AGI) will probably arise in a decade or more, and it shall probably be a better optimizer than humans. 2.b) Reference III: Kurzweil's law of accelerating returns: https://youtu.be/JiXVMZTyZRw?t=646 2.c) Reference IV: Demis Hassabis' prediction: https://youtu.be/rbsqaJwpu6A?t=918 2.d) In fact, AGI is often referred to as the last invention mankind need ever make: https://youtube.com/watch?v=9snY7lhJA4c 3) Thus, our purpose as a species is reasonably to focus on AGI development. Some benefits of AGI may be: I) Solving many problems, including aging, death, etc. Reference A: For example, AI can already do this: "Self-taught artificial intelligence beats doctors at predicting heart attacks" http://www.sciencemag.org/news/2017/04/self-taught-artificial-intelligence-beats-doctors-predicting-heart-attacks II) AGI may be used to help find a unified theory of everything in physics. Reference B: For example, AI can already do this: "AI learns and recreates Nobel-winning physics experiment" https://techcrunch.com/2016/05/16/ai-learns-and-recreates-nobel-winning-physics-experiment/ III) Enabling a new step in the evolutionary landscape, i.e. general intelligence not limited to human brain power, where humans may perhaps no longer be required to exist because smarter, stronger artificial sentient things would instead thrive. Reference C: Richard Dawkins, "Big Think" interview: https://youtu.be/SM__RSJXeHA?t=154
-
(High quality version: https://i.imgur.com/iQEDfXq.png) "Giant intuitive diagram showing how an artificial neural network works, by allowing error signals (aka changes in costs) to "trickle" backwards from the output layer. For example: "trickling" backwards simply means a value (aka cost) computed at some neuron \(j\) in layer \(L\) is input to values generated by computations of some prior neuron \(k\) in layer \(L-1\). Some cost computed from neuron \(k\) then becomes input to values generated by computations of some prior neuron \(m\) in layer \(L-2\)." Links: Amazon: https://www.amazon.com/dp/B077FX57ZZ Free copy, with equations coloured distinctly from the surrounding text, on ResearchGate: https://www.researchgate.net/publication/321162382_Artificial_Neural_Nets_For_Kids Free copy on Quora: https://www.quora.com/What-is-the-most-intuitive-explanation-of-artificial-neural-networks/answer/Jordan-Bennett-9
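The "trickling backwards" described above can be shown in a few lines of numpy (a minimal sketch with an assumed 3-4-2 sigmoid architecture of my own choosing, not code from the book): the error signal (delta) computed at the output layer \(L\) is carried back through the transposed weights to yield the error signal at layer \(L-1\), and each delta gives the gradient for that layer's weights.

```python
import numpy as np

np.random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass through a tiny 3-4-2 network (hypothetical sizes).
x = np.random.randn(3)
W1 = np.random.randn(4, 3)
W2 = np.random.randn(2, 4)
a1 = sigmoid(W1 @ x)    # layer L-1 activations
a2 = sigmoid(W2 @ a1)   # layer L (output) activations
target = np.array([1.0, 0.0])

# Backward pass: the cost signal at layer L "trickles" to L-1 via W2's transpose.
delta_L = (a2 - target) * a2 * (1 - a2)        # output-layer error signal
delta_Lm1 = (W2.T @ delta_L) * a1 * (1 - a1)   # error signal at layer L-1

# Each layer's delta yields the gradient of the cost wrt that layer's weights.
grad_W2 = np.outer(delta_L, a1)
grad_W1 = np.outer(delta_Lm1, x)
print(grad_W2.shape, grad_W1.shape)
```

The same pattern repeats for a neuron \(m\) in layer \(L-2\) in a deeper net: multiply the incoming delta by the next transposed weight matrix and the local activation derivative.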
-
1) I don't need to read all of the ai-academy ML content to see that it could be clearer. Reading the introductory chapters tells you a lot about the remaining content. 2) I am not saying you shouldn't use ai-academy; all I am saying is that ai-academy can be supplemented by "Artificial Neural Nets For Kids". 3) The tutorial's Quora page does talk about a neural net the author wrote from scratch two years ago, but the tutorial is not code-specific. The tutorial does, however, heavily feature matrix-compatible explanations. I think some code should be included in the tutorial though. Anyway, how would you describe back propagation in your own words, in terms of math too, without thinking about a specific programming language?
-
Those tutorials at ai-academy appear to abstract or hide away all the crucial stuff. This is not the path you want to take if you want to contribute to the field in a non-trivial way. You're going to need to understand what is actually going on underneath to attempt to make some fundamental contributions, or do something truly novel.
-
For those who may be stunned by the thousands of nuts and bolts encountered while writing an elementary neural network from scratch, consider these words from the Quora tutorial: Although all the nuts and bolts of an elementary artificial neural net are easy to understand in the long run, the neural net unfortunately consists of thousands of moving parts, so it is perhaps tedious to grasp the whole picture. ▼ As such, the entirety of an elementary yet powerful artificial neural network can be compacted into merely 3 parts: 1) "Part A — Trailing hypersurfaces" & "Training set averages": https://i.imgur.com/AeVOawT.png 2) "Part B — Partner neuron sums — An emphasis on 'trailing hypersurfaces'": https://i.imgur.com/lUtg01z.png 3) "Part C — Error correction — Application of costs from the trailing hypersurfaces": https://i.imgur.com/UgXMjsm.png ▼ ▼ Notably, Part B is merely a way to clarify Part A, so basically the neural network is just 2 things: 1) A sequence of "hypersurface" computations wrt some cost function. 2) An application of costs aka "negative gradients" (using the hypersurface computations) to update the neural network structure as it is exposed to more and more training examples. And thus the neural net improves over time.
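The two-part summary above (compute the cost's gradient, then apply the negative gradient as training examples arrive) can be sketched with a single parameter; the model here, a lone weight w fitting y = 2x, is my own toy stand-in for a full network's thousands of parameters.

```python
# Toy training set for the hypothetical target function y = 2x.
training_set = [(x, 2.0 * x) for x in range(1, 6)]

w = 0.0    # initial parameter
lr = 0.01  # learning rate

for epoch in range(200):
    for x, y in training_set:
        # (1) Gradient of the cost (w*x - y)^2 with respect to w.
        cost_gradient = 2 * (w * x - y) * x
        # (2) Apply the negative gradient to update the model.
        w -= lr * cost_gradient

print(round(w, 3))  # w is driven toward 2.0 as the cost shrinks
```

Every update nudges w against the slope of the cost, which is exactly the "application of negative gradients" in part (2) of the summary, just in one dimension instead of thousands.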
-
Two years ago, I myself was one of the lucky people who wrote a basic artificial neural net in Python from scratch and got it to run, but I didn't really understand why some of the code was working. I'd been following a channel called 3blue1brown for math stuff, and I recently saw a YouTube video about neural nets by the same channel. 3blue1brown's neural net series is surprisingly clear; I've never seen anything that clear in all my life. What's even cooler is that I found a tutorial on 3blue1brown's reddit about neural nets, inspired by 3blue1brown's channel, that makes 3blue1brown's neural net series even clearer. The tutorial is on Amazon, but it's also free on Quora. (The image above is from the Amazon link.) If you've never got a neural net to work, or even if you're an expert, check out 3blue1brown's neural net series and Jordan's free Quora tutorial.
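In the spirit of the from-scratch net mentioned above, here is a minimal sketch of my own (the 2-4-1 architecture, XOR task, and hyperparameters are illustrative choices, not the code I originally wrote): a tiny sigmoid network trained with plain gradient descent, showing the loss fall during training.

```python
import numpy as np

np.random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = np.random.randn(2, 4)  # input -> hidden weights
b1 = np.zeros(4)
W2 = np.random.randn(4, 1)  # hidden -> output weights
b2 = np.zeros(1)

def forward(X):
    a1 = sigmoid(X @ W1 + b1)
    return a1, sigmoid(a1 @ W2 + b2)

_, out = forward(X)
initial_loss = np.mean((out - Y) ** 2)

for _ in range(5000):
    a1, out = forward(X)
    # Backpropagate the error signal from the output to the hidden layer.
    delta2 = (out - Y) * out * (1 - out)
    delta1 = (delta2 @ W2.T) * a1 * (1 - a1)
    # Apply the negative gradients.
    W2 -= 0.5 * a1.T @ delta2
    b2 -= 0.5 * delta2.sum(axis=0)
    W1 -= 0.5 * X.T @ delta1
    b1 -= 0.5 * delta1.sum(axis=0)

_, out = forward(X)
final_loss = np.mean((out - Y) ** 2)
print(round(float(initial_loss), 3), round(float(final_loss), 3))
```

Even this small script contains every moving part the tutorial describes: a forward pass, error signals trickling backwards, and negative-gradient updates repeated over the training set.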