studiot Posted July 9, 2021 2 minutes ago, StringJunky said: If you can't tell the difference, I think it's self-aware I think that's a bit of an unfair response 56 minutes ago, Holmes said: Not answerable unless you can tell me what the difference is between a non-self-aware machine and a self-aware machine. So I am going to give a good faith +1 towards a new start to Holmes. BTW I know what a non-self-aware machine is: my lawnmower. I have a very sore toe from kicking it into action. But I am not really sure what constitutes a self-aware machine. 🙂
StringJunky Posted July 9, 2021 12 minutes ago, studiot said: I think that's a bit of an unfair response So I am going to give a good faith +1 towards a new start to Holmes. BTW I know what a non-self-aware machine is: my lawnmower. I have a very sore toe from kicking it into action. But I am not really sure what constitutes a self-aware machine. 🙂 If AI ticks all the boxes, what other conclusion can you come to? What it means is that self-consciousness doesn't have to have a wet substrate and perhaps the process can be virtual, such that it's pure software, emulating neural processing.
Holmes Posted July 9, 2021 37 minutes ago, StringJunky said: If you can't tell the difference in the responses, I think it's self-aware because you can't prove another real person is self-aware, so a machine and person are in the same boat. What differences? What responses? If we don't know what self-aware means, or what material differences exist between self-aware and non-self-aware machines, then we can't scientifically answer the OP's question.
studiot Posted July 9, 2021 18 minutes ago, StringJunky said: If AI ticks all the boxes, what other conclusion can you come to? What it means is that self-consciousness doesn't have to have a wet substrate and perhaps the process can be virtual, such that it's pure software, emulating neural processing. 'If' is a very big word. 'All' is nearly as big.
Holmes Posted July 9, 2021 21 minutes ago, StringJunky said: If AI ticks all the boxes, what other conclusion can you come to? What it means is that self-consciousness doesn't have to have a wet substrate and perhaps the process can be virtual, such that it's pure software, emulating neural processing. The difficulty is that machines like computers are merely state machines; whether it be a calculator or a supercomputer, they are all just state machines. I see no scope for anything like "self awareness". If there is such a thing as self-awareness within a machine, then what would be the criteria? Some believe too that the human brain and mind are wholly mechanistic, but that's just a belief.
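For readers unfamiliar with the term, a state machine is a device whose next behaviour is completely determined by its current state and its current input. The toy turnstile below is a standard textbook illustration of the idea, not anything drawn from this thread; all names and values in it are made up for the example.

```python
# A minimal sketch of a deterministic finite state machine: a coin-operated
# turnstile. Its entire "behaviour" is the fixed transition table below.
TRANSITIONS = {
    ("locked",   "coin"): "unlocked",   # inserting a coin unlocks the arm
    ("locked",   "push"): "locked",     # pushing while locked does nothing
    ("unlocked", "push"): "locked",     # walking through locks it again
    ("unlocked", "coin"): "unlocked",   # an extra coin is simply wasted
}

def step(state, event):
    """The next state depends only on the current state and the input."""
    return TRANSITIONS[(state, event)]

state = "locked"
for event in ["push", "coin", "push"]:
    state = step(state, event)
print(state)  # -> locked
```

On Holmes's view, a calculator and a supercomputer differ only in the size of such a table, not in kind.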
MigL Posted July 9, 2021 Everyone seems to mention 'self-awareness' when discussing AI. Star Trek: TNG even had an episode about Data, and an engineer who wanted to dismantle and study him. I believe the biggest difference between life and non-life is the fact that life can self-modify in response to external stimuli; a bunch of circuit boards cannot, neither at the system nor the substrate level. It is this self-modification that is the basis for evolution. I remember when I first started working out, in the 70s, the popular adage was "If you put a 6hp load on a 5hp machine, you get a blown machine. If you put a 6hp load on a 5hp body, you eventually get a 6hp body". It is this ability to adapt, our thinking as well as our bodies, that got us where we are. Once machines ( electro-mechanical ) can do the same, they will 'evolve' far faster than we have.
John Cuthber Posted July 9, 2021 5 hours ago, MigL said: I believe the biggest difference between life and non-life is the fact that life can self-modify in response to external stimuli; I wrote code that could do that in about 1982. It played noughts and crosses (badly). It learned how to play by seeking to avoid repeating "losing" moves. The algorithm was from a Sci Am article. I'm not sure how I define "Life", but... that doesn't seem to cut it.
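For illustration, here is a minimal sketch of the kind of learner Cuthber describes: it plays legal noughts-and-crosses moves at random, but records moves that led to a loss and avoids repeating them. This is a hypothetical reconstruction in Python, not his 1982 code or the Sci Am algorithm itself; the class and method names are assumptions made for the example.

```python
import random

class AvoidLosingPlayer:
    """Learns noughts and crosses by never repeating a move it has lost with."""

    def __init__(self):
        # Maps a board position (a tuple of 9 cells) to the set of moves
        # previously seen to lead to a loss from that position.
        self.bad_moves = {}

    def choose_move(self, board):
        """Pick a random legal move that is not known to lose."""
        legal = [i for i, cell in enumerate(board) if cell == " "]
        known_bad = self.bad_moves.get(tuple(board), set())
        candidates = [m for m in legal if m not in known_bad] or legal
        return random.choice(candidates)

    def learn_from_loss(self, history):
        """After losing, mark every (position, move) pair in the game as bad."""
        for board, move in history:
            self.bad_moves.setdefault(tuple(board), set()).add(move)

player = AvoidLosingPlayer()
print(player.choose_move([" "] * 9))  # any square at first; fewer options as it loses
```

Played many times, the table of bad moves grows until most losing lines are pruned away, which is the "self-modification in response to external stimuli" being debated, in miniature.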
Butch Posted July 10, 2021 It may already be aware... from a physics standpoint awareness is just an entity responding in a rational manner to stimulus, correct?
iNow Posted July 10, 2021 I don’t suspect physics has a stance on awareness, choosing instead to focus on modeling and measurement
Butch Posted July 10, 2021 (edited) 5 minutes ago, iNow said: I don’t suspect physics has a stance on awareness, choosing instead to focus on modeling and measurement I have to believe (as much as I dislike the concept) that free will is an illusion; if there is an omnipotent presence, it knows our future... we have no choice, our destinies are what they are, we are just biological entities responding to stimulus. The internet is an electronic and mechanical entity, responding in a rational manner to stimuli. 23 hours ago, John Cuthber said: I wrote code that could do that in about 1982. It played noughts and crosses (badly). It learned how to play by seeking to avoid repeating "losing" moves. The algorithm was from a Sci Am article. I'm not sure how I define "Life", but... that doesn't seem to cut it. I built a robot in 1976 that learned. It surprised me when it would demonstrate that it had learned to deal with "situations". Its basic task was the left-hand-turn exploration of my apartment... it devastated me when it got stuck underneath a wicker chair, could not reach its charging port and "died". Edited July 10, 2021 by Butch
iNow Posted July 10, 2021 42 minutes ago, Butch said: I have to believe (as much as I dislike the concept) that free will is an illusion While that depends entirely on how one defines free will, it’s off topic in a thread about a self-aware Internet. 43 minutes ago, Butch said: The internet is an electronic and mechanical entity, responding in a rational manner to stimuli. The same might be said about humans, though, despite the chemoelectric cascades always occurring throughout our nervous system, that’s obviously stretching the meaning of “electronic.”
Butch Posted July 11, 2021 2 minutes ago, iNow said: While that depends entirely on how one defines free will, it’s off topic in a thread about a self-aware Internet. The same might be said about humans, though, despite the chemoelectric cascades always occurring throughout our nervous system, that’s obviously stretching the meaning of “electronic.” No, IMO you make a valid comparison. How does that electrochemical/mechanical process that produces "awareness" differ so much from what the internet is?
iNow Posted July 11, 2021 3 minutes ago, Butch said: How does that electrochemical/mechanical process that produces "awareness" differ so much from what the internet is? How, indeed
studiot Posted July 11, 2021 On 7/9/2021 at 5:04 PM, Holmes said: If we don't know what self-aware means, It is a pity Holmes has all too often added unpleasant remarks to what started off as an excellent comment such as this one. A good question is "Is there a scale of self-awareness, or is it all or nothing?" The OP seems to have envisioned a sudden Skynet-type episode where everything came about at once. But many living creatures surely have some measure of self-awareness. For instance, a pet cat seems to me to be very self-aware and cognisant of what it wants. But it is still not as aware as a human.
Butch Posted July 11, 2021 (edited) 42 minutes ago, studiot said: It is a pity Holmes has all too often added unpleasant remarks to what started off as an excellent comment such as this one. A good question is "Is there a scale of self-awareness, or is it all or nothing?" The OP seems to have envisioned a sudden Skynet-type episode where everything came about at once. But many living creatures surely have some measure of self-awareness. For instance, a pet cat seems to me to be very self-aware and cognisant of what it wants. But it is still not as aware as a human. Even an amoeba has some awareness, does it not? What about a virus? It does react to stimuli in a way, but is that awareness? Edited July 11, 2021 by Butch
iNow Posted July 11, 2021 On 7/7/2021 at 8:20 AM, iNow said: There are words here where your meaning is unclear. For example, what is meant by "self" in context of the internet? Likewise, what is meant by "awareness?"
wtf Posted July 13, 2021 (edited) On 7/7/2021 at 5:42 AM, Strange Me said: Could the Internet become self aware? And make it important to turn off and maybe decide "I will delete human kind? I think it's interesting that this question always gets asked in terms of the Internet, because that's the only complex computer system people have a daily experience of. But it's far from the most complex and mysterious computer system. If any computer system were to become self-aware, my bet would be the global supply chain. The system that moves raw materials from here to component factories there to integration sites somewhere else to distribution points somewhere else and ultimately puts a finished consumer good on the shelf at your local big box store, at a price point attractive to buyers yet high enough to ensure a profit for every single actor along the chain. The global supply chain is an immensely complicated system, far more complex than the Internet, whose architecture is generally well understood. It involves maintaining just-in-time inventories, tracking taxes and tariffs across international and local borders, integration of air, sea, and land transportation, predictions of consumer demand and raw material supply, and all the rest of it. It's a system that nobody sees but that affects literally every physical thing around us, from the furniture we sit on to the food in the fridge, and the fridge itself. It touches everything. You can turn off the Internet in your home, but not the global supply chain. If the thesis is that a sufficiently complex system can become self-aware, the global supply chain would be my candidate. Not the Internet, whose architecture is simple by comparison. Edited July 13, 2021 by wtf
MigL Posted July 13, 2021 (edited) On 7/9/2021 at 7:59 PM, John Cuthber said: I wrote code that could do that in about 1982. It played noughts and crosses (badly). It learned how to play by seeking to avoid repeating "losing" moves. The algorithm was from a Sci Am article. I'm not sure how I define "Life", but... that doesn't seem to cut it. Unfortunately semiconductors ( like the simple MosTek 6502 you used in 1982 ) cannot rewire their internal connections. They don't have the 'complexity' ( even now, with billions of transistors ) that human brains have, and which can 'rewire' themselves to think differently. The human brain can even form new neuron pathways to regain function after damage has occurred to a particular part of the brain. Computers may 'learn' to react to different situations but they are constrained by the electronic pathways to always 'think' in the same manner. One surface mount resistor with a cold solder joint, and your computer doesn't work; you can bash a football ( American ) player's, or boxer's, brain to the point of concussion, many times over, doing a lot of damage to their brains, and we would still consider them thinking, self-aware, individuals. If we can develop organic, massively parallel semiconductors that can re-wire ( and evolve ) the substrate, we may be able to create real AI, or artificial life. And the only example of Global Supply Chain software I'm familiar with is SAP. The most unwieldy, non-intuitive bunch of code I've ever seen. An example of stupidity, not intelligence. Edited July 13, 2021 by MigL
StringJunky Posted July 14, 2021 (edited) 3 hours ago, MigL said: Unfortunately semiconductors ( like the simple MosTek 6502 you used in 1982 ) cannot rewire their internal connections. They don't have the 'complexity' ( even now, with billions of transistors ) that human brains have, and which can 'rewire' themselves to think differently. The human brain can even form new neuron pathways to regain function after damage has occurred to a particular part of the brain. Computers may 'learn' to react to different situations but they are constrained by the electronic pathways to always 'think' in the same manner. One surface mount resistor with a cold solder joint, and your computer doesn't work; you can bash a football ( American ) player's, or boxer's, brain to the point of concussion, many times over, doing a lot of damage to their brains, and we would still consider them thinking, self-aware, individuals. If we can develop organic, massively parallel semiconductors that can re-wire ( and evolve ) the substrate, we may be able to create real AI, or artificial life. And the only example of Global Supply Chain software I'm familiar with is SAP. The most unwieldy, non-intuitive bunch of code I've ever seen. An example of stupidity, not intelligence. Can't it all be virtual, in software, and following the rules of neurons? Edited July 14, 2021 by StringJunky
Butch Posted July 14, 2021 3 hours ago, MigL said: Unfortunately semiconductors ( like the simple MosTek 6502 you used in 1982 ) cannot rewire their internal connections. They don't have the 'complexity' ( even now, with billions of transistors ) that human brains have, and which can 'rewire' themselves to think differently. The human brain can even form new neuron pathways to regain function after damage has occurred to a particular part of the brain. Computers may 'learn' to react to different situations but they are constrained by the electronic pathways to always 'think' in the same manner. One surface mount resistor with a cold solder joint, and your computer doesn't work; you can bash a football ( American ) player's, or boxer's, brain to the point of concussion, many times over, doing a lot of damage to their brains, and we would still consider them thinking, self-aware, individuals. If we can develop organic, massively parallel semiconductors that can re-wire ( and evolve ) the substrate, we may be able to create real AI, or artificial life. And the only example of Global Supply Chain software I'm familiar with is SAP. The most unwieldy, non-intuitive bunch of code I've ever seen. An example of stupidity, not intelligence. Digital systems have for some time had the ability to map out bad memory; also, we are talking about a very dynamic system whose abilities increase constantly and geometrically. That it cannot control its own development yet does not preclude self-awareness. Software can and does modify itself.
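As a rough illustration of the "mapping out bad memory" point, here is a toy sketch of block remapping of the sort a storage controller might do: once a block is marked bad, reads and writes are silently redirected to a spare. It is an assumption-filled toy, not any real device's firmware; real controllers also copy surviving data when they retire a block.

```python
class RemappedStorage:
    """Toy block store that redirects accesses to bad blocks onto spares."""

    def __init__(self, size, spares):
        self.blocks = [0] * (size + spares)
        self.remap = {}              # bad block index -> spare block index
        self.next_spare = size

    def mark_bad(self, index):
        """Retire a failing block by pointing it at the next free spare."""
        self.remap[index] = self.next_spare
        self.next_spare += 1

    def write(self, index, value):
        self.blocks[self.remap.get(index, index)] = value

    def read(self, index):
        return self.blocks[self.remap.get(index, index)]

disk = RemappedStorage(size=8, spares=2)
disk.mark_bad(3)        # block 3 starts failing; future accesses go to a spare
disk.write(3, 99)
print(disk.read(3))     # -> 99, served transparently from the spare block
```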
MigL Posted July 14, 2021 Yet you can 'think' in binary, decimal, linguistic, or even visual terms, while ALL computers ( other than analogue ) 'think' in binary terms. That is a constraint.
zapatos Posted July 14, 2021 On 7/10/2021 at 5:39 PM, Butch said: ... awareness is just an entity responding in a rational manner to stimulus, correct? Based on your definition, people with locked-in syndrome, sleeping people, and some people with mental disorders are not aware. On the other hand, a mouse trap, a door bell, and my television set are all aware.
Prometheus Posted July 14, 2021 11 hours ago, MigL said: The human brain can even form new neuron pathways to regain function after damage has occurred to a particular part of the brain. If we accept (and some don't) that sentience is an emergent feature based on particular patterns of information, a computation some people would say, then why does that pattern have to occur at the transistor level? In animals it may just so happen to occur at the neuronal level because that is the first substrate that achieved a sufficient level of complexity for information processing to achieve sentience, but that does not exclude other forms of information achieving the requisite level of complexity. Inside a neural network, the weights between nodes are strengthened or weakened as learning occurs, which could be analogous to the strengthening and weakening of connections between neurons in a brain.
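As a concrete illustration of the strengthening and weakening of weights Prometheus describes, here is a minimal sketch of a single artificial neuron trained by gradient descent. The sigmoid activation, the learning rate, and the toy input pattern are all illustrative assumptions, not anything specified in the thread.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=3)      # connection strengths to three inputs
learning_rate = 0.1

def forward(x):
    """The node's activation: a weighted sum squashed through a sigmoid."""
    return 1.0 / (1.0 + np.exp(-np.dot(weights, x)))

def learn(x, target):
    """Nudge each weight up or down in proportion to its share of the error."""
    global weights
    output = forward(x)
    error = output - target
    # Gradient of the squared error for a sigmoid unit, with respect to the weights.
    gradient = error * output * (1.0 - output) * x
    weights -= learning_rate * gradient

pattern = np.array([1.0, 0.0, 1.0])
for _ in range(2000):
    learn(pattern, target=1.0)    # repeated exposure reshapes the weights
print(forward(pattern))           # much closer to the 1.0 target than before training
```

Nothing here rewires transistors; the "connections" that change live entirely in data, which is the point about the pattern not having to occur at the transistor level.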
studiot Posted July 14, 2021 33 minutes ago, Prometheus said: If we accept (and some don't) that sentience is an emergent feature based on particular patterns of information, a computation some people would say, then why does that pattern have to occur at the transistor level? In animals it may just so happen to occur at the neuronal level because that is the first substrate that achieved a sufficient level of complexity for information processing to achieve sentience, but that does not exclude other forms of information achieving the requisite level of complexity. Inside a neural network, the weights between nodes are strengthened or weakened as learning occurs, which could be analogous to the strengthening and weakening of connections between neurons in a brain. A fair question to which I would observe that time is the problem for the internet. If self-awareness, intelligence, etc. are simply a few particular patterns out of many possible ones, I agree that there are many candidate systems upon which such patterns could be impressed. If the pattern is achieved by making and changing connections over a large number of 'nodes', Nature has the big advantage over Man's constructs, such as the internet. Nature can operate over the entire lifetime of a Universe; efficiency is not a concern, nor are false starts. Man has to be more efficient within his timescale. This is of course one reason why Man's constructs are more dedicated and do not rely on random making and breaking of links until the right combination is found. However, such dedicated constructs do not so readily lend themselves to evolutionary processes.
Prometheus Posted July 14, 2021 1 minute ago, studiot said: A fair question to which I would observe that time is the problem for the internet. Sorry, I do tend to forget this thread is specifically about the internet and not AGI in general, or neural networks specifically, which I've been referring to. To expand your point, the questions that occur to me are: Is it possible to engineer sentience into the internet? Would we want to? Is the internet subject to evolution at all - what replicates and how, and what is the selection pressure (functionality as measured by human users)?