Everything posted by Graeme M
-
I think this comes under cognitive science, though I guess it falls into neuroscience and philosophy as well. As I understand it, the Libet experiments seemed to show that people begin to do things before they have decided to do them. These experiments, and others since, have been hotly debated in terms of what they mean for free will. In essence, the evidence for the brain initiating a voluntary hand movement before the conscious decision to move rests on recordings showing that preparation for the movement (the 'readiness potential', or RP) registers several tenths of a second before the conscious decision is made. Libet himself claimed that the data indicated the brain signals the intention to undertake a motor act before any conscious decision to do so, which would make the conscious decision to act a post hoc mental state deriving from the brain's directive state.

From what I've read (which admittedly is not that much), it's not quite clear whether all possible conditions have been properly tested. For example, it seems to me that on being told what to do, the subjects have already entered a readiness state: they are expecting to have to move their hand. I would want to see evidence that a completely spontaneous act shows the same outcome. That might be done by instructing the subjects in one case, but springing a different, unexpected task on them in another. I'm not sure if this has been tested.

Anyway, here is a good and recent summary of the current state of this debate: http://www.bethinking.org/human-life/the-libet-experiment-and-its-implications-for-conscious-will

And here is a recent(ish) article that offers a different take on what has happened: https://www.newscientist.com/article/dn22144-brain-might-not-stand-in-the-way-of-free-will/

Here Aaron Schurger claims that the accumulation of random neural noise looks like a readiness potential as it approaches a threshold. The article is brief, probably too brief to make any reasonable assessment of Schurger's claims, but he seems to be suggesting that subjects in whom the noise is closest to the threshold will be the quickest to react to the stimulus, while those with little accumulation will be much slower. Presumably this means that those who reacted quickest will combine the neural noise with the actual signals to move, and the resulting EEG trace will look as though the RP arose before the decision to act (i.e. similar to the traces in Libet's experiments). (I've tried to sketch this accumulator idea in code below.)

I sort of follow that, but the obvious question is that while this would appear to be the case for those with neural noise approaching the threshold, it would not for those whose noise levels are low. For Schurger to propose this hypothesis, Libet's experiments must then have included cases where the RP arose before the decision to act, some at the same time, and some even later (in fact, on average they should be later, unless neural noise in most people sits quite close to threshold generally). In other words, Libet's data must have shown a reasonably wide spread of RP onset. In what I have read of Libet, I got the sense that, while there were variations in some cases, on average the RP arose well before the decision. This seems to have been confirmed in later experiments by both Libet and others.

I wonder if anyone knows the current state of the science in terms of the Libet findings? Has it been definitively determined that Schurger's hypothesis explains the strange data, or is it still open to contention?
In other words, do we generally accept that motor cortex readiness potential arises before conscious decision making?
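To make Schurger's idea concrete, here is a minimal simulation sketch in Python of a leaky stochastic accumulator of the kind he proposes. All parameter values (leak, drift, noise amplitude, threshold, window) are illustrative assumptions, not published fits; the point is only to show that averaging noise-driven trajectories time-locked to their own threshold crossing produces a slow ramp resembling a readiness potential, even though no single trial contains a 'decision signal' building that early.

```python
# Sketch of a leaky stochastic accumulator (after Schurger's proposal).
# Parameter values are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

dt = 0.001               # simulation time step (s)
steps = int(10.0 / dt)   # up to 10 s per trial
leak = 0.5               # leak rate (assumed)
drift = 0.1              # weak constant 'urgency' input (assumed)
noise_amp = 0.1          # noise amplitude (assumed)
threshold = 0.3          # movement-triggering threshold (assumed)
window = int(3.0 / dt)   # keep the 3 s leading up to each crossing
n_trials = 500

locked = []  # trajectories aligned to their own threshold crossing
for _ in range(n_trials):
    x = 0.0
    trace = []
    for t in range(steps):
        # Euler step: weak drift plus leak, dominated by random noise.
        x += (drift - leak * x) * dt \
             + noise_amp * np.sqrt(dt) * rng.standard_normal()
        trace.append(x)
        # Only accept crossings late enough to have a full 3 s window.
        if x >= threshold and t >= window:
            locked.append(trace[t - window:t])
            break

# Averaging across trials, time-locked to the crossing, yields a slow
# ramp that looks like a readiness potential, even though each single
# trial is mostly noise until shortly before threshold.
mean_ramp = np.mean(locked, axis=0)
print(mean_ramp[::500])  # the ramp sampled every 0.5 s
```

On this picture, the averaged RP could be partly an artefact of aligning trials to the crossing point, which bears directly on the question above about the spread of RP onsets in Libet's data.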
-
I came back to this thread today and did a Google search with a better idea of what I was looking for (and asking about originally). This Google Books extract is exactly what I had in mind. I have no idea how valid this proposition is, nor the credentials of the author, and it may be that it is more philosophy than science, but it is getting at the idea of the development or evolution of mind as a sort of public space beyond the limitations of any single person.

What I was thinking is that as humans use language and symbolic representation to communicate ideas between members of our species, we are creating a corpus of knowledge and toolsets that permits ever more sophisticated conceptual models of aspects of the external world, whether in science, art, culture or politics. I absolutely disagree with anyone who suggests that modern man's ideas about such things are not more sophisticated or 'advanced' than ancient man's. That said, I am not arguing that a person of today has a better or different brain to that of an ancient person. What I am suggesting is that this extended public mind is accessible to any modern person, but was not to an ancient one. That is development or evolution of mind as I see it.

https://books.google.com.au/books?id=4Z49YSTrvAoC&pg=PA210&lpg=PA210&dq=human+brain+modern+form&source=bl&ots=vAxbcAR6iw&sig=zVIUHBs6E7ZWHIcLgXnZEpPsjcQ&hl=en&sa=X&ved=0CDEQ6AEwBGoVChMIlKag7_-JxwIVBGOmCh2AQQOS#v=onepage&q=human%20brain%20modern%20form&f=false
-
Polar bears... I can't vouch for this blog or its writer, but she certainly offers some food for thought: http://polarbearscience.com/ And this recent interview reported there offers some ideas around Harold Squared's point of view: http://polarbearscience.com/2015/07/08/polar-bear-doom-and-gloom-from-usgs-vs-biologist-mitch-taylors-reasoned-thoughts/
-
Came back to this one and see no-one replied. I'm still curious though. It seems to me that when I really examine closely how it is to imagine an event, I find that I cannot imagine it in the form of a linear, fluid action. As I said above, it seems that in fact I narrate the action, and then illustrate that with some kind of largely static mental imagery. Try my example: actually imagine it, and try to examine what you are imagining. If you are like me, then initially it will seem like you can imagine it like a little movie. But when you really examine it, what then? Can you honestly say that as you imagine this, you really can conjure up a fluid mental image that shows the movement? Can you see his legs moving, the background passing by (or him passing the background), his hair waving in the wind? Or someone walking up a flight of stairs. Does it unfold step by step, at the right speed, all the way to the top, in a sequence of mental events that actually passes by in real time? Can you sustain that imagining for as long as it takes? Really focus in on his legs, his feet as he walks up those steps. Is there really movement?
-
It appears to me that if I try to imagine movement, I can't. I'm not certain, but I think if a recollection springs into mind unbidden then there can appear to be motion (say a memory of a person running). But if I try to imagine a person running, I see a still image that is underpinned by the idea of running. Or a series of stills with the idea of running. What does everyone else see?
-
Interesting. I guess there must be some philosophical treatments of this somewhere but my limited knowledge of the matter offers no insights. So you are suggesting that the 'thingness' quality of a thing depends on whether it is organic or inorganic?
-
Delta, can I clarify something about your position? I think (without having any special knowledge or insight into the matter; I don't know, for example, the current state of understanding about mind/body) that the mind arises in the brain. Consciousness is a property of mind. I think, then, that consciousness is as much an element of my physical being as is my arm or tongue. A duplicate of me is an exact replica. Are you saying that if a physical copy is indistinguishable from me, it might as well be considered me? Or that the physical me at any moment is not the same physical me as, say, yesterday? Do you apply this notion only to organic forms, or also to inanimate objects like, say, a teapot?
-
At this point I'd like to hear what Yoseph thinks about all this!!
-
Yeah, I couldn't resist. I know we diverge though I think generally we agree on the basics. At least we agree that mind is not some separate metaphysical construct. But I still can't get my head around what I *think* you are saying!
-
Still going eh? And yes, that's why I've stopped; I think I am just repeating myself. But heck, what's a forum if we don't argue the point? I kind of get where Delta and StringJunky are coming from, but that seems to me to be a confusion between a physical implementation and the description of it. The information SJ talks of is a description - a specification, if you will. That description can be run on any hardware, but to my mind whatever implementation is run has a unique identity that lasts as long as that implementation runs. (I've tried to illustrate this in the sketch below.)

I also wonder at the notion that a replica is essentially me. The process that is me is composed of many sub-processes that each have a continuous linearity - processing input, memories, experience and so on in a constantly shifting neural network that I suspect does not have any complete unity at any one point. "I" am smeared over time. The replication process merely creates an initial condition - initialisation of the replica's instance, if you like. Once running, the replica's manifestation follows its own path, as will any other replica, and the original. The implementation's actual internal state over time is uniquely keyed to the physical arrangement, which changes over time, like TAR's scar example.

That the physical implements the instance seems clear when you consider that the replication process can only ever set the initial state. You would struggle to build a replication process that could adequately render a complete functioning me at every moment - how often would you rearrange the components? Every second? Half second? Nanosecond? What algorithm would have sufficient complexity to capture all input, possible neural connections and outputs and recreate them at every moment? The computational complexity would be enormous.

I've struggled to see the argument presented. I think I see what you mean, but I disagree. I think any consciousness is uniquely embedded in the localised arrangement that is the person concerned. The transporter may create a replica who is my duplicate, but it is not the cognising "me" I am so attached to.
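As a rough illustration of that instance/specification distinction, here is a small Python sketch, on the (purely analogical) assumption that a person can be likened to a running instance initialised from a specification. All names here are hypothetical illustrations, not a claim about how minds actually work.

```python
# Analogy only: two instances built from one specification are equal in
# state at creation, yet remain distinct objects that diverge afterwards.
import copy

class Instance:
    """A running 'process' initialised from a shared specification."""
    def __init__(self, spec):
        self.state = dict(spec)  # initial condition copied from the spec

    def live(self, experience):
        # Each instance accumulates its own history after initialisation.
        self.state.setdefault("history", []).append(experience)

spec = {"name": "Graeme", "memories": ["childhood", "forum posts"]}

original = Instance(spec)
replica = copy.deepcopy(original)  # perfect copy of the initial state

print(original.state == replica.state)  # True: indistinguishable states
print(original is replica)              # False: two distinct instances

# From the moment of duplication the two diverge independently:
original.live("keeps typing on the forum")
replica.live("wakes up on Mars")
print(original.state == replica.state)  # False: histories have diverged
```

The copy is equal by every measurement of state, yet `is` still distinguishes the two instances, and their states diverge as soon as each accrues its own history - which is the sense of 'unique identity' argued for above.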
-
OK, I have to capitulate. I think I dimly get what you are driving at, but to me you are arguing for an external seat of consciousness. By suggesting that packaging me up and implementing me elsewhere confers on me an ongoing continuous experience, you are saying that my consciousness exists independently of my brain.

The sticking point for me is the idea that "I" will have an ongoing experience in the scenarios described. I have demonstrated that this isn't so by considering the case where we retain both original and duplicate and neither has access to the other's internal experience. This remains so regardless of the time at which one or the other dies. What you seem to be doing is mistaking each instance of me for the same thing. A duplicate may in its own experience believe it is me, but it is a separate being. It is physically another human being, no different to any other. It is simply a quirk of circumstance that it is identical to me.

Take pzkpfw's most recent example above. If I go to bed and wake up next morning, my physical being retains its continuity (regardless of what you think is happening at the atomic level), and that brain gives rise to my mind. If you copy me in the night and then kill me, I am dead. The replica, the new instance of me, does indeed wake up and get on with life. But that is not me; it is a copy that thinks it is me. I will not go to bed and wake up in my new body.

So I have to leave it there. Thanks for a fun discussion!
-
I don't disagree, Ricky12; in fact that is my own personal philosophy. I am simply asking why your idea of balance is necessarily the aim. It's just something I ponder when people start arguing that the status quo is somehow what we should preserve. Why should it be?

CharonY, I suppose I am being sloppy with terminology. I just mean: if, generally speaking, human beings were in the vast preponderance. There are always likely to be bacteria, viruses and many lower-order forms of life, but if we had far fewer forests, mammals, fish and so on, why would that intrinsically be a Bad Thing? I assume that if that were so it might change the environment substantially, but if we had dwellings and technology that reduced the impact on humans, it shouldn't be a great drama. The Great Extinctions certainly reduced diversity, so diversity is not necessarily a critical requirement for life on earth.
-
Wait... What??? I'm lost again. You all do seem to be arguing for continuity of consciousness between G1 and G2. I must be misunderstanding your words, but that seems to me to be what Delta is saying, and others agree. I'll pose a single idea; do you agree or disagree? I create a perfect duplicate of myself, G2, and he is to all intents and purposes "me". To himself and any other observer, he is indistinguishable from me. We all agree on that. In the process of duplication, I - G1 - am destroyed. G2 continues as me. For G2, life has continuity: he recalls stepping into the duplicator, he is aware of stepping out, and he has all the memories of life before that point. We agree on that as well. So... Do you think that G1's continuous experience is that of G2, or not?
-
I'm not suggesting that there is a goal to evolution, merely that it doesn't seem unreasonable for evolution (or, put better, the natural processes of selection) to lead to a single dominant organism. Remember that the network of interdependence did not arise through an intelligent process; there is no natural requirement for a diverse ecosystem. Presumably in the earliest times there was limited diversity and the planet did just fine. The 'planet' will do just fine without life as well. I am questioning the assumption that the present state of the planet, with so much diversity and this chain of interdependence, is any more valid or necessary than any other state. If human beings learn to dominate the entire planet, maintaining if necessary small parks of natural environment for recreation or interest, how would that be a problem? Or if all that was left was a wild, windswept wasteland covered in huge sprawling buildings containing human beings living in controlled environments complete with nature parks? The OP's lament is a notion derived purely from human values. It has no basis in any framework of natural laws that I am aware of.
-
The question of whether or not to step into the transporter was a diversion raised by me. The original question asked whether a perfect duplicate would be "me". I now realise that the OP can be read in two ways, and my reading was not the same as others'. So I offer this amendment to my earlier answer.

1. Am "I" inside the head of the new person? Yes. But there is a distinction, in that the experience of life has continuity only for my copy. I, the original me, am dead.

2. If we retain both original and copy (G1 and G2), who experiences the world through the one that isn't me? The answer is G2. There is no connection between the two, nor is there any continuity between the experiences of each past the exact moment of duplication. G1 and G2 are two independent beings with identical memories of life prior to duplication.

I agree with StringJunky that if we could create such a duplicate, it would indeed think and experience as I do; from its own perspective, and that of any observer, the duplicate is effectively me. In that sense, I agree that "I" am just information. However, that's a different thing to whether the original I would experience a continuity of existence. I agree here with Spyman: if you offer me the opportunity to travel through space in this fashion, I would decline. If you define me as the being cognising through the body I currently inhabit, then such a transporter will surely kill me.
-
Maybe... I am not suggesting we *should* do this. But technology so often finds a way. I am simply asking why the idea of a 'balance' with nature is necessarily critical. If in time we figured out a way to overcome those limitations, what would be so wrong with a planet covered in human beings? How likely is it that this is NOT the future? What constraint will prevent such an outcome in time? Perhaps we will need several goes at it, but really, what are the odds we will all die out, or decide on a sustainable future with say just 2 billion people, or head off into space in large numbers? Or put another way, why should that not be the end result of the evolutionary process? A single, ubiquitous, completely dominant organism?
-
Given that the earth and its inhabitants have no intrinsic value beyond what we ascribe, does it matter? If we found a way to run the planet with all landmasses covered by human habitation, and it worked more or less, why would that be a bad outcome? Why does a 'sustainable planet' require an Amazonian rainforest or a balance with nature? Or even more to the point, if we just continue on our merry way, have a good time, and destroy the planet in the process, again, what does it matter? One more deserted planet in a universe of trillions of deserted planets doesn't seem such a big deal, now does it?
-
If we could resist offering our two cents worth there'd be no forums...
-
I thought about this overnight and I think I have grasped where some of you are going with this. You believe "I" to be some kind of thing that can be packaged up and moved around and when reactivated it just continues as before. So in a way you see the mind, the self, as an entity in its own right that can be run anywhere (I think this is what StringJunky means when he describes mind as information). Hence a careful mapping of my mind and the right program means we could copy my mind and run it in a computer simulation. The hardware is relatively unimportant. Thus not only do you see the mind as separate but you also see it has a continuity in its own right. Package it up, move it on, fire it up and "you" just wake up and keep on cognising. Sort of like Frank Tipler's idea for immortality - if we can emulate all possible conditions then all life can be resurrected. You think that there would be continuity of consciousness for me using this transporter. My self has been packaged up - my programs, operating system and hardware if you will - and all we have to do is reconstitute that and "I" will quite happily continue on. Probably without even being aware of the momentary lapse in awareness.
-
Perhaps I am just misunderstanding what people have said. I agree completely that G2 will continue quite happily as "the" me. But I do not agree that G1 can step into such a transporter and then wake up on Mars as G2. G1's experience will be the cessation of existence - death. G2's experience will be to have stepped into the transporter and emerged on Mars. I would therefore not step into the transporter if I could help it. Delta, neither.
-
StringJunky, that was not my question. I asked specifically how much of the internal state of SJ2 is accessible to SJ1, not what they appear to be to a third observer. Delta, no, that's not my point. I am arguing that the mind of a brain is so closely tied to the brain that it ceases when that brain is destroyed. No more, no less. Forget the mind for a moment. If Object 1 is created at time T, then copied at T plus 10 years and destroyed, with the copying process creating a new object, Object 2, do you argue that O2 is the same thing as O1? That O2 has a linearity of existence from the time O1 was created?
-
I'll have a read of your link StringJunky. I completely agree with your summation of G2's experience. G2 would certainly think and feel as G1. My point is simply that G1's experience does not continue, it is a property of G1's brain. Subjectively, G1 died. The I that was G1 died. While G2 is to all intents and purposes the same thing, I - G1 - will have no awareness of that. Perhaps you could take a stab at my question then. Create a perfect copy of yourself so that we have SJ1 and SJ2. Both think they are SJ1, that much is agreed. However, the original SJ1 continues to experience his internal mental cognition. He has no awareness of the internal state of SJ2 which diverges immediately upon creation. What is your distinction between SJ1's internal state, and that of SJ2? By your argument, they are one and the same.
-
Heh, this is a remarkable discussion, and I am enjoying it. It doesn't make the slightest bit of sense to me, but that's OK. I am really struggling with why you think consciousness continues past death. Of course, this is all opinion-based; none of us knows any of this for sure, so when I argue my case I am not trying to convince you of a truth, I am trying to convince you of my point of view. I am happy to be convinced of yours, but so far at least, the point of view that the copy is "me" seems absurd.

pzkpfw, yes, G2 will wake up aware that he stepped into a transporter, and he will feel and think exactly as G1 does. In that sense, he is me. But he is a copy of me, as you note. G1 is dead. G1's experience stops at that point. And G1 is the me typing this. Once I have been vaporised, I am dead; I will have no further subjective experience. G2 will, but that's a new thing entirely. The only way for G1 to cognitively share in G2's experience would be for some kind of external consciousness pool to exist. Imagine G1 is immediately replicated into G2, and the two stand side by side. Do you imagine G1 now experiences what it is to be G2? If I slowly grind off G2's left hand, will G1 feel it?

Delta, no idea what you mean there. While the experience of consciousness arises from my brain and is not an external substance, it should in most cases have a subjective linearity. My brain puts together a whole synthesis of inputs and memories and so on to create a sense of self, and normally this sense of self retains its sense of linearity. There are conditions which prevent some people having that, but by and large that is how it works for most of us. The "I" in question is the linear subjective experience arising in a particular brain. In that sense, G1 is the same being as GPrime. Physically, we can prove that the human called Graeme has existed in that form since he was born. Of course, at every moment his physical construction changes at micro scales, and even at the macro scale as he grows and ages, suffers illnesses and accidents. G2, however, can be shown not to exist at T minus 5 years, for example (where T is the moment of replication). G1 is an "I", G2 is an "I", but G1 is the one I am concerned with: it is me, and when it stops, so does my experience of the world. G2's experience is a different thing entirely.

StringJunky, a copy of me is a copy of me. Ten million copies of me would be ten million copies of me. Each would be conscious of itself as me, but not one of them would be conscious of the experience of any other. Each subjective experience lies rooted in the physical brain of each, and when one ceases, it ceases. Imagine 1000 copies of me, each perfect, and one dies every hour for 1000 hours. Will the first copy have any sense of what happened to the 1000th copy, who lived some 999 hours longer? No. Simply put, when G1 steps into the transporter and is vaporised, he is dead. He will not 'wake up' as G2. G2 of course will. For mechanical purposes, that is fine. Not so much subjectively.
-
Ahhh... I see what you are saying. So Eise and Delta, you are arguing FOR some kind of separate mental continuity. Here I was thinking you were arguing against it. Let's step back, because I think you have confused yourselves when you think about "I".

Take Hannah and Linda. Hannah is German, born in Leipzig. Linda is a native New Yorker. Both are female, aged 45. Both were raised by traditional two-parent families and have had similar life experiences, although they have never met. Would you suggest in any way that Hannah and Linda are privy to each other's thoughts? Does Hannah share in the mental cognition of Linda? I hope you agree that this does not happen, at least not so far as science has been able to establish. Let's assume Hannah dies at 46 and Linda at 89. Did Hannah share in Linda's mental cognition at any time after she died? I trust you will agree that she did not. Hannah died; her subjective experience, rooted in her brain, ceased at death.

Taking me and my copy, let's call me Graeme 1 and my copy Graeme 2. We are the same as Hannah and Linda: two quite separate brains with separate mental cognitions. Due to the circumstance of Graeme 2's creation, he is an identical copy at his creation and imagines himself to be me. Quantitatively, of course, he is me; I doubt there is any measurement you could make at that moment that would show any difference. However, my mental cognition, were I to die at the moment of 2's creation, ceases, just as Hannah's did. 2's mental cognition continues, like Linda's. But I am no more aware of, or privy to, 2's cognition than Hannah is of Linda's.

You need to disentangle the fact that the two brains are identical at a single moment. The cognitions are indeed separate. "I", Graeme 1, am not aware of Graeme 2's experience. To all outside observers, of course, Graeme 2 is me. This is illustrated if we run both processes to their conclusion. At moment 1, Graeme 2's creation, we are identical: a circumstance of creation. Allow 1 and 2 to live on for another 40 years, then measure the processes' activities and eventual final states. They will be different. 1 is not 2. Never were, never will be.
-
Sorry, I can't grasp what you are saying at all. You simply are not making sense to me. The critical word there is "your". The something is indeed my brain process. The process that continues is not that of my brain, it is that of another brain. Just because two objects work the same does not mean they are the same thing.