Everything posted by KipIngram

  1. Well, I'm unimpressed. It would be a great read for someone entering into the study of artificial intelligence and interested in a general introduction to programming "externally believable 'conscious' behaviors." You'd need more on each specific thing, but it would be good orientation. But he didn't take even one step toward what I called "the nut of it" earlier. He's basically saying "just don't think like that." If my self-awareness were an elephant in the room, he's essentially saying "there's not really an elephant - you're misguided." That totally dodges the fact that my very act of forming the belief that I'm aware is itself an act of awareness. A third-person-perspective explanation is simply not adequate.
  2. Sure - I can think of it that way. But whichever it is, it produces my experiences, and it's the sensations associated with those that I'm looking to explain. I can view processes unfolding in a computer's processor and memory, but I can't explain to myself how that would ever result in equivalent sensations. However complex we make the whole business, that core nut still remains - how does what we feel result from it? How do we get from a relationship amongst data patterns to that? Apparently that little bit of it just doesn't trouble you guys as much as it does me. I just grabbed a copy of Consciousness Explained, so I will read. I still haven't started my GEB re-read, though. I just remember it being such a tiresome read the first go-round that I haven't mustered up the energy yet. I don't feel that same reluctance re: Dennett's book, though, so I'll start it now. This quote: "According to the various ideologies grouped under the label of functionalism, if you reproduced the entire “functional structure” of the human wine taster’s cognitive system (including memory, goals, innate aversions, etc.), you would thereby reproduce all the mental properties as well, including the enjoyment, the delight, the savoring that makes wine-drinking something many of us appreciate." I can imagine a computer having, in its "cognitive machinery," the sensors to recognize particular chemical compositions, the memory to store these recognitions, and perhaps even an algorithmic casting of "goals." But the "innate aversions"? That one loses me. You could program the computer to output the statement that the analysis resulted in an aversion for some particular reason, but that is merely a reflection of the software designer's aversion: "I'm averse to this sort of wine, so I'll program the computer to say so when it recognizes this sort of wine." In that sense the goals are also reflections of the programmer's goals - not truly goals of the computer.
At this stage I'm somewhat nervous that he's going to wind up instructing me along the lines of "That question that's made it so difficult for you to accept a functionalist position? Train yourself to stop asking that question." But we'll see - I'm still reading. Haha - he used my favorite cartoon. Figure 2.4 - the infamous "I think you need more details here in step 2" one. Love that cartoon.
  3. Ok. Well, I haven't read it yet, and I will. But I'm bothered by the idea that I should be satisfied without a real explanation. In that sense I'm nervous about Hoffman's ideas too - even if he's 100% successful and is able to show that his mathematical model of conscious agents leads very elegantly to all of our observations, he's still invoking an untestable scenario. In an ideal situation he'd match all existing observations and then have some new, testable predictions. But I'll be surprised if it comes out that way. The most likely outcome is that his program will either fail to predict something known, or else will succeed but only in some terribly convoluted and inelegant way, and that will take his ideas off the playing field for me. I guess what I'm saying is that "just accepting" an inexplicable extension of existing physics bothers me just as much as "just accepting" some new fundamental entity. In some ways it's a bit easier for me to open my mind to the new fundamental thing, because I just think that solid physics should be able to provide a complete explanation. But I don't really "like" either of those positions. That greater willingness to be open-minded also applies more to areas of standard physics that are less completely understood than to areas that are more completely understood. Hence my willingness to at least consider quantum stuff as applicable, as opposed to the purely classical and thoroughly understood physics of standard computer technology. We *know* that landscape, and I personally know a lot about it, and I just don't see the path. Here, let's talk about this for a bit. If we're going to propose that awareness can emerge in a conventional computer, we need to at least say whether that has to do with the hardware complexity or the software arrangement. I feel *particularly* strongly that just increasing the number of transistors in a computer isn't going to lead to anything fundamentally new.
No matter how many there are, each one is no different, in and of itself, from the transistors in a calculator, or an AND gate. So that leaves us with the software. Do mainstream emergence ideas focus on software patterns? I have strong doubts there too, but my knowledge isn't quite as strong (I'm an EE, not a computer scientist). I still don't see how an algorithm or a data structure can ever "become aware," but I do feel like there ought to be arguments on that front for me to listen to at least.
  4. If those two paths truly result in explanations of identical sets of phenomena, I agree with you. However, it is not enough for me to explain the externally observed behavior of conscious entities. I must also have an explanation (a real one, that is solid and believable) of my observation of my own awareness / ego / qualia / whatever. The explanation must be complete in this way. I will take a look at Consciousness Explained, but I have already seen criticisms of it "out there." One of the comments was that Dennett essentially denies qualia from the get-go; if that proves to be the case when I read the book then it won't fully satisfy me. "Can't explain that, so we'll deny it exists" doesn't get the job done.
  5. StringJunky: I think that's a very good point. I don't know if it's "proof," but it is evidence most definitely. That's more persuasive to me than just about anything else that's been noted. I'll have to think about it some, and maybe see if there's anything out there to read on the topic, but my knee-jerk guess would be that you'd expect a fundamental consciousness to experience a "dark silence" or something during times when the brain was incapacitated. My own experience "under the knife" has been that losing consciousness to regaining consciousness is pretty much instantaneous. So thanks - that's a good extra observation for me to roll into my thinking. Eise: I agree that the notion of a quantum-rooted consciousness is likely unprovable for the reasons you cited. That doesn't mean it isn't so, but it does likely make it something that will always lie outside the business of science. However, I still think there's a burden of proof on the emergence proponents. They argue that conscious experience is produced from aspects of physics that we claim a more-or-less complete understanding of, so the question remains, "How?" If that claim is not true, then there is some other mechanism of awareness that we should be interested in identifying and studying. If the claim is true, then there are implications of accepted theory that we don't grasp yet, and we should be interested in defining and studying that. That, by the way, was the purpose of my original post. Who's working on emergent theories of consciousness? Are they making any real progress? Etc. etc. The materials I've seen so far on the subject fail to convince, but that absolutely doesn't mean I'm not subject to being convinced. 
If the final answer winds up being from the GEB path, and is more or less "awareness arises from physical processes in the brain, but it's impossible to prove how," I'm going to find that pretty much as unsatisfying as you guys find "awareness is fundamental, but we'll never be able to prove it."
  6. So do you believe computers have qualia? How do qualia emerge from data structures and algorithms? You seem to agree with me that we have these qualia / experiences, but I still see no shred of a hard argument as to how they arise from an algorithmic process. This is interesting. I'd read before about Penrose and Hameroff's ideas, but hadn't seen anything quite this detailed. http://www.quantumconsciousness.org/sites/default/files/Quantum%20computation%20in%20brain%20microtubules%20-%20Hameroff.pdf I like the general idea, but it always struck me as somewhat far out to be invoking gravity in the context of such a topic. Also, even if Penrose and Hameroff are entirely correct, it still just seems to provide a portal for quantum influences in the brain. I don't really see how a quantum superposition would have awareness any more than I see how a transistor computer would. Allegedly it's still just a superposition of physical states - if each individual state can't host awareness, I don't see how a superposition of them would suddenly be able to do so. So it still seems to call for something "extra."
  7. Ok, so based on this article: http://www.bbc.com/earth/story/20170215-the-strange-link-between-the-human-mind-and-quantum-physics I'll say that the "awareness/ego things" I'm talking about are "qualia." Whereas I see very well how we could program a computer to analyze sensor inputs and, say, set variables that we've designated to correspond to various qualia (for example, "there's a lot of red in this image"), I don't see how that corresponds to the way we experience qualia. That is what I am looking for a theory for.
  8. Ok, so tell me more about the ego being an illusion. How exactly do we define that word? It does seem, to me, to capture the "I am" essence that I'm trying to get at when I say "awareness." We're typing a lot of words at each other, but let's see if we can focus in this piece. I could define "illusion" in a way that would work for a computer. For example, one of my kids was really psyched over the "face recognition unlock" on her phone. I showed her how insecure it was by pulling a picture of her up on my phone and using it to unlock her phone. So you could say that her phone was suffering an illusion - it thought it was looking at her face when it in fact was not. Of course we are subject to such illusions too, which I'll refer to as "sensory illusions." It's entirely obvious how those can work in a fully mechanistic way. But what I have trouble with is seeing how we could have an "ego illusion" if we don't have an ego to start with. That's the very point I've been trying to get at - our whole ability to have a "more than data" notion of what's going on in our world. We have a "higher level sense" of our own existence than I can explain via an algorithm. Algorithms just shuffle data around without having any notion of what that data represents. Our ability to have that notion - to "feel" things - is what I'm referring to as awareness or ego. Anyway, back to you. Nothing you've said so far has caused me to decide I'm misguided on this, but I feel that I haven't swayed you either, and I don't think either of us is "just being stubborn"; we're both just failing to bring our points into focus for one another.
  9. I'm just nagged by the notion that "awareness is an illusion" is a catch-22; if we're aware, then it's not an illusion, and if we're not aware, we can't be "aware" of any illusions. The very fact that we think it needs explaining means that something is there. Also, re: the studies that have been done mapping various cause and effect relationships in the brain, I don't doubt any of those. Those are solid experiments that measure "measurable things." But I haven't yet gotten to the point of associating my awareness with any of those things (voltages in the brain are still just voltages, etc. - I haven't been able to convince myself that "awareness" can plausibly arise from those things we can measure). So while the experiments are perfectly sound, I don't know that they relate to what I'm talking about. This is all rather frustrating, because I can't even really find the right words to specify precisely what I'm talking about. I've just assumed that you guys know, because you have the same thing. Awareness, "mental spark," ego, etc. etc. etc. The part of us that feels triumph when we win and frustration when we lose and all that jazz. We don't even know how to quantify that, much less explain its origin. There seems to be fair consensus among at least some of us (us specifically - here in this conversation) that animals such as dogs and cats have awareness. I imagine we'd also have consensus that bacteria don't. So as we navigate the spectrum in between, where does it appear? I'm sure we could narrow the range intuitively, but we don't know how to make a measurement that tells us whether it's there or not, or point to a specific brain structure that it's associated with, much less explain how that brain structure triggers it. When we can do those things, that's when I'll be on board, and I might be on board sooner, if it at least looks like we're closing in on it. 
Stating a "how" - a mechanism - is vital, because it's certainly possible that consciousness could be fundamental and there still be brain regions that have activity that correlate with it.
  10. Hi. I'm looking through this book: https://www.math.columbia.edu/~woit/QM/qmbook.pdf At the very top of page 8, the following is presented: g1 · (g2 · f)(x) = (g2 · f)(g1^-1 · x) = f(g2^-1 · (g1^-1 · x)) = ... I understand what they did - they applied equation 1.3 from the previous page, treating g1 as the g in 1.3, and then applied 1.3 again treating g2 as g. But it looks to me like it would also be valid to do this: g1 · (g2 · f)(x) = g1 · [ (g2 · f)(x) ] = g1 · [ f(g2^-1 · x) ] In other words, it looks like we can treat f as a function acted on by group member g2, and move g2 inside first, OR treat g2 · f as a function acted on by group member g1 and move g1 inside first. I suspect only one of these is valid (the second one, which is what's given in the book). But I'm not clearly seeing why. Can anyone advise?
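For what it's worth, here's a quick numerical check of the two orderings (my own sketch, not from the book), using planar rotations as the group and the eq. 1.3 convention (g · f)(x) = f(g^-1 · x):

```python
import cmath

# Group: planar rotations, labeled by angle theta; a rotation acts on a
# point z (a complex number) by z -> e^(i*theta) * z.
def act_point(theta, z):
    return cmath.exp(1j * theta) * z

# Action on functions, eq. 1.3 convention: (g . f)(x) = f(g^-1 . x)
def act_fn(theta, f):
    return lambda z: f(act_point(-theta, z))

f = lambda z: z.real + 2 * z.imag ** 2   # an arbitrary test function on the plane
g1, g2 = 0.7, -1.3                       # two arbitrary group elements (angles)
z0 = 1.5 + 0.4j                          # an arbitrary test point

# Ordering 1 (the book's chain): move g1 inside first, i.e. evaluate
# (g1 . (g2 . f))(x) = (g2 . f)(g1^-1 . x) = f(g2^-1 . (g1^-1 . x))
lhs = act_fn(g1, act_fn(g2, f))(z0)

# Ordering 2: form the function g2 . f explicitly first, then act with g1
h = lambda z: f(act_point(-g2, z))       # h = g2 . f, written out
rhs = act_fn(g1, h)(z0)

# Acting with the single composite element g1*g2 (angle addition here):
composite = f(act_point(-g2, act_point(-g1, z0)))

assert abs(lhs - rhs) < 1e-12
assert abs(lhs - composite) < 1e-12
```

Both orderings land on f(g2^-1 · (g1^-1 · x)): the key is that g1 acts on the *function* g2 · f (not on the number f(g2^-1 · x)), and that is exactly what makes eq. 1.3 a left action, g1 · (g2 · f) = (g1 g2) · f.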
  11. I agree with your last statement - I wouldn't have us throw away the idea. We should work on that path, and see where it leads. If it's right, I think we'll eventually figure it out. I'm just not prepared to accept it as a certainty, without a better understanding of the mechanics involved. I also concur with your misgivings about the other proposal - I suspect that if consciousness is fundamental it likely does "reside" in a place that science won't ever be able to "get at." But if you think about it, that's not entirely hard to understand. Under my definition it is, in fact, fundamentally unpredictable. It's hard to see how the methods of science can study something that has no predictability. That doesn't make it ipso facto wrong, though - it just makes it a situation where science isn't very useful. So you have one proposal (unproven but possible) that science can work on, and another proposal (unproven, but possible) that it can't. How science should respond seems pretty clear to me: work on what it can work on, until such time as it becomes evident (not sure what that would entail) that it's a fruitless path. I'm not particularly bothered by the notion that reality might contain features which are not susceptible to the methods of science. I don't see that there are any guarantees that isn't the case. That in no way makes science useless - clearly it's useful for a heck of a lot. I don't really have any more to say about free will. I think your definition of free will clearly represents something valid and present in reality. My definition, which goes further, may or may not be present in a physical sense. But awareness is something that more or less "proves itself" - the very fact that you think you're aware means that you're at least aware of being aware. If free will is an illusion, then awareness is the thing that's experiencing the illusion.
I consider it to be a much more "rigorous" question than free will (as shown by the fact that we couldn't even really agree on what free will is). So we have this pile of neurons, or this pile of transistors, behaving in some sort of algorithmic fashion. We can use that model to explain how every behavior arises. Even though we may not be far enough along to do so explicitly in all cases, I don't feel any doubt about the "robot aspects" of all this. But I still maintain that our existing theories don't provide any insight whatsoever into how that arrangement of neurons / transistors can come to possess an explicit awareness of itself in the way I'm talking about. For example, say an organism is pursuing some goal. How does the "optimization process" driving that (something that makes total sense in a robot) become desire / yearning (something that doesn't make sense in a robot)? We can pursue goals unconsciously, in the same manner that a robot would. But in order to yearn for something we must be aware.
  12. Yes, I was generally familiar with that difference (in atom / out of atom), and had it in mind that that had to do with continuity of the solution (similar to the particle in a box problem, where it's the zero probability boundary conditions at the edges of the box that fix the solutions). I'm just having difficulty seeing that from the "information perspective" as readily as I can see the spin example I cited originally. But your correction for me, specifically, is that you come to an N eventually where you're no longer "in the atom." Thanks - that makes sense.
  13. Ok, so after thinking about this for a while, the Neptune example isn't quite as good as I thought it was. In the case of Neptune it was proposed that there was another entity "more or less" like all of the other planets - just in a different place and with a different mass and velocity and so on. Something "new," but not something "different." So the proposal is rather more compelling than in the case of consciousness. I do understand that it's wise in science to resist the urge to introduce new fundamental things to explain observations. If we do that too freely, we wind up not "pushing the theory" as hard as we should try to push it, and might not move forward as quickly. Feynman talks about this in the 1964 Messenger Lectures (video 7). He took a strong position, saying we should always squeeze our existing theories as hard as we can before adopting new fundamental entities. Ok, that's fine - and I agree. But he made it very clear that this always involves guessing, and that when the dust settles you might wind up having to adopt the new thing anyway. I'm ok with that perspective. I absolutely don't think that we should say "Oh, awareness is just different - we're never going to explain it with mainstream theory, so let's not even try." I think we should work these emergence theories as hard as we can; one of them might come through. But I agree with Feynman - it's a guess.
  14. As a follow on to my original question, let's look at another case, say energy levels in an atom. I can't get the information perspective to pan out as nicely here. Yes, the energy levels are discrete, so we're talking about quanta. But in theory there can be any number of them, right? Just plug in N, and get an answer? So that doesn't "restrict" to any particular number of bits of information. So spin looks like a "one bit thing" more strongly than other cases. Spin seems "different" in some way. Anyone have light to shed here?
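To make the "any number of levels" point concrete (my own illustration; I'm assuming a hydrogen-like atom and the Bohr formula E_n = -13.6 eV / n^2, which the post doesn't specify):

```python
# Bohr levels of hydrogen: E_n = -13.6 eV / n^2. Discrete, but there are
# infinitely many of them, accumulating just below the ionization
# threshold at E = 0 - so the spectrum alone doesn't pin down any finite
# "bit count" the way a two-outcome spin measurement does.
RYDBERG_EV = 13.6

def energy(n):
    return -RYDBERG_EV / n ** 2

levels = [energy(n) for n in range(1, 6)]
gaps = [levels[i + 1] - levels[i] for i in range(len(levels) - 1)]

assert all(g > 0 for g in gaps)   # energies climb toward zero...
assert gaps[0] > gaps[-1]         # ...with ever-shrinking spacing
```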
  15. Oh, no worries - thanks for pitching in. And I DO think we have awareness, so I'm certainly not disagreeing with you.
  16. I'm trying to get a good feel for quantum information theory, and I'm wondering if this is on the right track: ===== Consider the simplest possible quantum system (say spin measurements, so there are just two possible outcomes). We can choose to measure spin in any direction, and we'll get "up" or "down." But that quantum system is capable of housing just one quantum bit of information, and by making the measurement against a chosen axis we "use up" that information holding ability. It now "remembers" that it's spin up or down for that axis, and that's the one bit so it can't have any information about a different orthogonal axis. So said another way, by making a measurement we force the limited information retention ability of the system to reflect the result of that measurement. Now if we make another measurement we force the information resources to reflect the new measurement, so it can no longer reflect the old one. In macroscopic systems there's a huge amount of information in the system, so we can extract some without really making much of a difference. The resources can be used to reflect many different things, without conflict. ===== That at least feels like it's on the right track, but I'm a noob on this so I thought I'd invite critique.
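Here's a toy simulation of that "one bit" picture (entirely my own sketch, offered for critique alongside the prose version; real amplitudes suffice for the z and x bases used here):

```python
import random

# States are 2-component vectors; |up_z> = (1, 0), |down_z> = (0, 1),
# and the x-basis states are (|up_z> +/- |down_z>) / sqrt(2).
UP_Z, DOWN_Z = (1.0, 0.0), (0.0, 1.0)
s = 2 ** -0.5
UP_X, DOWN_X = (s, s), (s, -s)

def inner(a, b):
    return a[0] * b[0] + a[1] * b[1]   # amplitudes are real here, so no conjugate

def measure(state, basis):
    """Projective measurement: returns (outcome, post-measurement state)."""
    up, down = basis
    p_up = inner(up, state) ** 2       # Born rule
    return (0, up) if random.random() < p_up else (1, down)

random.seed(1)
state = UP_Z

# The system's one "bit" currently records the z answer: re-measuring z
# is deterministic.
assert all(measure(state, (UP_Z, DOWN_Z))[0] == 0 for _ in range(100))

# Measuring x overwrites that bit: afterward, z is back to 50/50
# (sampled here over many identically prepared copies of the new state).
_, state = measure(state, (UP_X, DOWN_X))
frac = sum(measure(state, (UP_Z, DOWN_Z))[0] for _ in range(10_000)) / 10_000
assert 0.45 < frac < 0.55
```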
  17. Eise: I'm sorry - I don't have time this morning to do the quoting dance, but I think you'll be able to tell what goes with what in my reply. Yes, you're right - the Bell experiments don't say decisively that there's no interaction between the entangled particles; just that it's not local. Actually I lean more toward the interpretation that the measured quantities have no reality before they're measured, but I don't find that at all incompatible with the notion of consciousness as fundamental. I'm not really prepared to take a position on precisely how conscious choice might be "implemented." By that I mean to say that I don't have a theory for how the precise selection of which quantum events a particular consciousness might be able to influence is made. I can propose an experiment, but I'm not sure it's an experiment that would be easy to do in a humane way. First we'd have to find, in the brain of a test subject, the region we thought housed the quantum events in question, and then we'd create conditions that would produce repeatable behavior (say, offer food that the subject had to reach for), and then we'd look for deviations from the usual statistical distribution of quantum outcomes. It might be hard to get to the bottom of the situation, but at some level, if this theory is right, we ought to be able to find a "starting point" where those stats deviated, and that deviation would avalanche into the macroscopic response. One possible problem there, though, is that our instruments would be entering into the subject's ability to physically realize its free will in a highly invasive way - just by trying to watch we might "break things." Color arises directly from electromagnetic effects, so that's perfectly natural. Actually "color" is a human perception, so that's not entirely true.
But the behavior that produces light of the frequency we call some color or another is just a direct outcome of the same theory that describes how the individual particles behave. No, I can't give a reason beyond "I can't imagine." But on the other hand, a few hundred years ago people wouldn't have been able to imagine how electrons and protons would give rise to color, whereas now it makes perfect sense to us. That's really the nut of my whole issue - I want to see a better connection before I completely accept emergence as the explanation here. I'm not rejecting it wholesale - I accept it as a contender. But I don't want to toss out the other possibilities without something a bit more solid. By the way, I could ask you the same thing. We've never done quantum experiments in living brains; do you really feel you can reject as entirely impossible the idea that consciousness exists independent of physical reality, and achieves free will in the manner I've suggested? I am absolutely not proposing that consciousness could cause a quantum outcome that was not an eigenvalue of the wave function. I think both of these candidate theories are possibilities. If I were insistent that free will was NOT an illusion - that it was totally real and certain - then I'd feel I had to reject emergence based on that. But as I've noted, I'm not nearly as sure of that as I am of the existence of awareness. I really do think our free will discussion here isn't terribly productive. We do have different working definitions of free will, so even using the term is difficult. I am definitely referring to "uncaused input," by which I mean physically uncaused (obviously it would be "consciousness caused") and have suggested a route by which it could enter the physical domain.
But I've also proposed a reasonable scenario where my sort of "free will" wouldn't really be having any effect on physical reality - it would only be affecting what aspect of physical reality the awareness was aware of, and that makes it a much more murky concept. So I really don't want to try to put a "stake in the ground" on free will. I think both of these are perfectly reasonable theories: The Copenhagen Interpretation is essentially on the right track (i.e., no Many Worlds) and consciousness exercises free will by selecting certain allowed quantum outcomes. The Many Worlds Interpretation is essentially on the right track, and consciousness "perceives" free will by selecting the path awareness takes through the worlds (but has zero physical effect on the multiverse). The latter of those two proposals adheres to your perspective completely as far as free will goes. No physical effect - zero, zip. I can't quite get my head around how you're choosing which possibilities to accept as possible and which ones not to. Without a working theory, it's just as "magical" for awareness to arise from emergence as it is for it to be fundamental. In the one case you have an absolutely unexplained "effect," and in the other you have an absolutely unknown "entity." We saw that Uranus didn't move right, so we postulated Neptune, and we found it. But what if Neptune had been invisible somehow? Yes, I know that's a stretch, but I'm drawing an analogy here. Let's say we just never could confirm its existence, except for the fact that Uranus moved funny. The equivalent of your position would be "there is no Neptune - we just don't have the right theory yet." Of course we could see Neptune and all was well.
But you're taking the position that we are so sure about what does and what doesn't exist that we can deny the existence of consciousness as a fundamental entity without further thought - that our theories MUST be extensible in some fashion to explain anything that would be attributable to "consciousness." On the other hand, I'm saying "Maybe you're right, but maybe there's a Neptune." goldglow: Rational mental processing is vital for survival, but early on in this thread someone noted that "awareness" is not. You could envision a robot designed to function entirely as a human or other organism. As long as it made the right responses and so forth, it would survive as well as the real organism. Having "awareness" of those things happening, in the sense I mean it (i.e., "feeling it," as opposed to "registering and responding") isn't really necessary. It's an "add-on" of some sort.
  18. I think a lot about human behavior is emergent. We're made of atoms, and QED is probably the most applicable theory, and it only has a handful of very simple rules. But you get absolutely stupendous effects from it. I think most of the things we do (eat, sleep, walk, make sounds, etc.) represent good candidates for emergent behavior. Awareness is fundamentally different, though. No matter how complex the machine is, I can't see how it can "feel." I don't even see what that means. Of course, if you define "feel" as register a sensation, then it's clear - it's a signal and a response from a complex system. I'm talking about internal feelings (emotional feelings). Pain, joy, hate, etc. It's in no way "self evident" to me that emergence can explain that. So I'm keeping all the candidates in the game until one clearly wins.
  19. My understanding of the Bell experiments was that they required an ensemble, from which you can extract correlations, to make a results-based statement. Isn't that why the story of the last part of the 20th century was to improve the statistical certainty of the quantum-theory-supporting outcomes? Can you describe for me a single-event Bell-type experiment where you would be able, via measurement, to draw a clear conclusion? Maybe it's possible and I just didn't understand - but I thought the actual results were statistical in nature. We can use the word creativity if you wish - I'm not hung up over the nomenclature. Actually, I don't think anything is missing from electromagnetic theory - we have quantum electrodynamics, and my understanding is that we still feel it's completely precise. On the other hand we still don't really know what charge is, do we? It's just something that "is." I don't mind a theory having "presumed things," but you presume at the beginning and then explain - presuming at the end without providing an explanation bothers me much more. I think a more fundamental way of getting at the difference, though, is that electromagnetic theory explains the interactions of charged particles. Everything that the theory makes a statement about has something to do with charged particles, light, and so on - the "stuff of the theory." That's natural. Such predictions are entirely "in bounds" for that theory. But emergent consciousness theory proposes that something of an altogether different nature somehow arises from theories whose foundations have nothing to do with that phenomenon. Consciousness is not "the stuff of physics" in any way. So that proposal is much more of a stretch and carries a higher burden of proof in my mind. I'm going to say again that this is not my main point in this thread.
I already described a possible scenario in which awareness could exist (for whatever reason) and completely believe it had free will when in fact it did not. That was the Many Worlds theory + awareness choosing which way to go at each fork in the road. I guess you could call that choice free will, but it would have no physical effect whatsoever - the same worlds would be there whether awareness chose to watch or not. I believe I have free will because I sense it, but I actually don't believe that's as strong a statement as saying I believe I have awareness because I sense it. You may be right and free will may be an illusion. But awareness is undeniable, because if I didn't have it I would "be aware of feeling it." It's a self-proving phenomenon as far as existence goes. In my mind that makes awareness the thing to focus on. It's a problem worth solving. If it turns out it is emergent, and we prove that, we've learned something about emergence. If it turns out to be fundamental, and we prove that, we've learned something even more important. So I am not going to let emergence theory have an easy ride of it - I'm going to kick the tires as hard as I can until it either works or doesn't. Meanwhile I'm basing my current opinion (guess) on what I said earlier: it's easier for me to believe that something altogether new exists than to believe something new and completely different can pop out of an existing, almost fully-formed theory. I'll believe the latter if it's shown, but not before. Hand waving just isn't going to cut it for me.
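On the "statistical in nature" point from earlier in this post: the quantity the Bell tests actually measure, the CHSH statistic, is built entirely from ensemble-averaged correlations. A minimal sketch (my own, using the textbook spin-singlet prediction E(a, b) = -cos(a - b)):

```python
import math

# Quantum prediction for the spin-singlet correlation between analyzer
# angles a and b: E(a, b) = -cos(a - b). Each E is an *average* over many
# +/-1 outcome products - no single detection pair yields it.
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH analyzer settings (radians)
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# The quantum value 2*sqrt(2) exceeds the local-realist bound of 2 - but
# only as a statement about the ensemble statistics.
assert abs(S - 2 * math.sqrt(2)) < 1e-12
```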
  20. Delta1212: Brilliant work! Taking science and presenting it in a way that flips a kid's lid is no small feat!
  21. We are still talking past each other. I think you are saying that an action taken by your body, even though it may be hugely influenced not only by immediate external stimuli but also by your own internal state as evolved through the past, is free will. That is not at all what I am talking about when I say that a fully deterministic system does not have the possibility of free will. Free will in my eyes is the possibility of taking an action that is absolutely unpredictable based on past observations. Granted, someone who knows us well might predict our actions in many, many situations. But if we are totally predictable, as a matter of principle, then there is no room for free will. Free will in my definition is the ability to inject new information into the universe. Regarding uncertainty, randomness, and so on, if you have one opportunity to observe a quantum event, you see an outcome. How do you know that outcome is random? You call it random for two reasons (that are related, btw): 1) quantum theory states that it's random, and 2) you can't predict it. When you have the opportunity to watch identical ensembles of events, you see a statistical pattern emerge, and we can predict those patterns. But if any single one of those outcomes was not random (say, if I were hiding on the other side of uncertainty and decided to adjust that one outcome), you would never know, would you? Now, if I sat over there and adjusted a large number of them, under an ensemble situation, you'd see a perturbation of the statistics. But you can't see that based on one event. What our experiments really empower us to say is that under controlled conditions in a laboratory, when we repeat a given observation a large number of times, 1) we can predict the statistical outcome, and 2) we don't observe anything other than randomness in the individual outcomes. In other words, the statistics mentioned above are the maximum information we can extract from the system.
If you think about it, that doesn't really prove that all such events, under every possible scenario, are completely random. It suits us to theorize that. The proposal that consciousness might exert free will into the physical world via this mechanism does not force us to presume that every quantum event is thus decided. That's sort of the same thing as saying that everything is conscious. The proposal only requires that some quantum events might be thus determined, and it's likely that those events are buried in living brains, a place where we don't typically set up ensemble experiments. Every quantum event is an opportunity, in theory, for a consciousness to inject information, but there might be no consciousness connected to that particular event, or the consciousness might simply not choose to use that opportunity. A result of the measurement is still required, and if nothing is choosing the outcome then it would be random. Sort of like a car rolling down an unpaved hillside - the wheels are going to turn in response to the shape of the soil, and that will feed back to the steering wheel, which we can't see (assume tinted windows). Obviously that's not random - I'm just trying to draw an analogy. But when the human in the car decides to grab the wheel and take charge, he can. Sort of a lame analogy, I guess. Yes, I was once cycling when a motorist executed a hard right turn in front of me (they'd passed me on the curved "turning lane" we'd gone through a few seconds before). This all happened very fast - one second they were driving past me, and the next I was looking at the side of their car about 3-4 feet in front of me. I was barely realizing I was about to fly through their window when suddenly I'd turned the bike inside their turn and was cycling along the crossroad, with them driving off in front of me. I have no idea how I did that - I was completely unconscious of taking any sort of action. But apparently I just wrenched the bike through a 90 degree turn.
Blind instinct. I'd been cycling for quite a while and felt totally at home on a bike. My subconscious really looked out for me that day.
  22. studiot: Oh, NICE! I just showed that to my kids, and they had a blast.
  23. No worries, Eise - I've been busy as well. I think the point I was making wasn't specific to EPR - I was just noting that the quantum theory requirement that an ensemble of identical measurements show a particular statistical distribution of results says nothing about any one of the measurements - as long as it is one of the allowed results, quantum theory wouldn't cry foul. So if free will does manifest through quantum uncertainty, quantum theory limits the menu of available choices, but does not restrict them to one. I think we're talking past each other here. You said "Determinism is only saying that from certain start conditions only one set of end conditions follow." I am saying that there is no choice in that situation. If that is how our brains work, then we in fact have no free will. On the other hand, determinism doesn't hold in that strong fashion in our universe - quantum theory tosses it aside. There are options. When a quantum outcome is required, perhaps the universe throws a die and makes a random choice. Or perhaps consciousness exists outside the current laws of physics and makes the choice. Or perhaps some of both, depending on whether the system in question is involved with conscious expression. The only point I have here, really, is that if consciousness exists and has free will, then this is the only manner in which our physical theories would allow it to have free will. If consciousness is emergent, and the quantum outcomes are just random and "average out" at the macroscopic / conscious level, then in fact there is no free will. There just isn't any other "portal" for it in physical theory, other than via quantum uncertainty. Chess computers don't "choose." They run an algorithm no different in principle than 2+2 and get an answer, and then they execute that answer. If consciousness is emergent and quantum uncertainty is irrelevant to its operation, then that's all we do too, and "free will" is an illusion.
I'll say again that I don't have a solid defense against the claim that free will is an illusion - it could be that my awareness only thinks it's choosing its actions. That's why I said the free will thing was something of a digression. I think I have free will, but I'm not sure. I'm sure I feel my awareness, and that makes awareness the key thing to focus on for me. My awareness exists, and my goal here is to try to understand its origins. If it turns out to be emergent, then I'll have to seriously consider that free will may be (probably is) an illusion. But if I'm proposing that consciousness is a fundamental entity in its own right, then I can say, "Well, it feels like I have free will, so until evidence to the contrary emerges I'll assume I do." Emergent consciousness would be subject to serious restrictions, since it's tied ultimately to our laws of physics. Fundamental consciousness makes it open season - we'd have no reason to presume a priori that it didn't have free will (in my sense) as well as awareness. Well, we're defining choice differently here. You're describing an extremely complex system of differential equations, basically, and I say that all of the "choices" were bound up in the initial conditions. That's not the sort of choice I'm talking about - I'm talking about a new forcing function applied to the system at the time of the choice. And I'm saying the only spot in our theories where such an effect could enter in is via quantum uncertainty - everything else is, in fact, nailed down by the differential equations. And as far as I can tell, that approach to life does not explain awareness. That's my quest. I have read GEB, but at the time I read it my mind was in other areas and I focused mostly on the math and so forth. I do have it off the shelf for a re-read, with the focus being emergence.
I'm looking forward to it, though I'm also rather daunted by my memory of what a tedious read it was the first go round. Maybe my new angle of interest will make it a different experience this time. I find that hard to argue with. There are these speculations that everything is conscious (rocks, etc.), and that we just don't "see it" because we are predisposed to associate consciousness only with certain observations. I've tried not to go that far - it seems like a stretch. Then there are less extreme theories like Hoffman's, which says that consciousness is fundamental, and that the whole of material reality is just our perception of our interactions as conscious agents. But as I think about it, that's almost the same thing, just said in a tighter way. Because Hoffman would contend that that rock over there is a reflection of something in the overall structure of conscious agents (of varying complexity). I'm trying not to go down Hoffman's rabbit hole too far, too fast, because you can more or less use it to explain anything. If we are just conscious agents, and the universe is our "interface" as he would call it, then any question we ask the universe will receive an answer (any observation will provide some result). Let's say we see how chemical reactions cause the paramecium to move up the food gradient. Well, sure - the paramecium conscious agent is doing something, so we have to see that happening somehow. It's like anything we observe (behaviors, laws, etc.) just has a reflection in the structure of conscious agents. And his proposal is that we judge the theory based on how well it can mirror our existing laws of physics. Let's say it plays out, and he's able to show the precise emergence of all of our "stuff." Quantum theory, the Standard Model, etc. etc. Then it will be the case that the only new thing his theory offers is a way to say "consciousness is fundamental."
Everything we said about the physical world matches, but we get to add on this one other thing that we otherwise don't have a solid explanation for. Well, fine. But that's a little bit like invoking instantaneous action at a distance to explain EPR correlations. You have this mysterious action that's not detectable or usable in any other way, but it allows you to keep the notion that those quantum states have reality before they're measured. Or you can just accept that things aren't real until observed. Hmmm. So that's a bit rambly and I'm not sure I made any good points. But I'm going on to the rest of your reply now. Yes, I see your point. Once we have a theory it all makes sense. We just don't have that "Newton's theory of emergence" yet. I'm just not quite willing to say "it must be emergence." I need something more solid before I throw out all other possibilities. I crave a "how." Yes - bring it on. I think we've done a superb job of understanding under what conditions a complex system can make "feedback responses" to its environment. Although even a rock does that - you step on it and it gives a little; that's a "response." On the other hand, you step on a snake and it bites you. We decide to say the snake was "aware" of being stepped on and the rock was not, even though both produce a physical response. And we suspect we could build an "artificial snake" that if stepped on would whip one end around and drive nails into the thing stepping on it. But we wouldn't say that mechanism was aware. Machines don't know what they're doing - they just do. Somehow you and I know what we're doing in a "different way." And that difference is the thing I want to understand. Whew. That took a while. So look, I figured out how to insert your quotes in the right places, but it was laborious. Basically I quoted your entire reply every time, and then cut out the parts I didn't want from each one. Is there an easier way to do that? Ah - thank you. I will explore it.
  24. I think the whole point of the AI claims is that the AIs will become self-advancing, and at a rate we can't compete with. Yes, as long as they're confined in the system we have control over, what you just said is true (we can pull the plug, we can make sure the system isn't equipped with "weapons," etc.) But I think the concern that's voiced here and there is that the AI will wake up, race past us in intelligence, and "hack its way out."
  25. Eise: I want to reply to your last post, but before I do I want to learn how to do the quoting right; so far I haven't figured out how to get those nicely boxed multiple quotes into a single reply. I'll be back with you shortly. quickquestion: There is absolutely no reason for me to presume my awareness is the only one in the universe. It's just the only one I can feel. But I can look at myself in a mirror, and then look around and see many other entities similar to me, behaving as though they are driven by awareness as well. Yes, they could be automatons. But it seems much more reasonable to me to assume they are "others like me" than to assume I am the only aware entity around and everyone else is a robot. Ah, your next post down has it right. No, I can't prove that other humans or animals are aware. It's just the simplest explanation for how I see them behave. cladking: It's entirely clear to me that animals have sentience. Just as other humans behave as though they're driven by awareness, so do animals. quickquestion: We don't have to think about anything other than our own well-being. I agree with you that it would be best not to create sentient beings that are then miserable. But given StringJunkie's observation that we might create them without meaning to, it behooves us to protect ourselves. Do you think about the saber-tooth tiger's feelings when he's attacking you? Or do you just try to kill it? Self-preservation is the core driving principle of pretty much everything.