StringJunky Posted May 4, 2017
The fact that we can lose parts of conscious experience is evidence that it is not fundamental and, thus, that it actually emerges from several functions of the brain. If it were fundamental, i.e. indivisible, it would be all or nothing.
KipIngram (Author) Posted May 4, 2017
StringJunky: I think that's a very good point. I don't know if it's "proof," but it is evidence, most definitely. That's more persuasive to me than just about anything else that's been noted. I'll have to think about it some, and maybe see if there's anything out there to read on the topic, but my knee-jerk guess would be that you'd expect a fundamental consciousness to experience a "dark silence" or something during times when the brain was incapacitated. My own experience "under the knife" has been that losing consciousness to regaining consciousness is pretty much instantaneous. So thanks - that's a good extra observation for me to roll into my thinking.

Eise: I agree that the notion of a quantum-rooted consciousness is likely unprovable for the reasons you cited. That doesn't mean it isn't so, but it does likely make it something that will always lie outside the business of science. However, I still think there's a burden of proof on the emergence proponents. They argue that conscious experience is produced from aspects of physics that we claim a more-or-less complete understanding of, so the question remains, "How?" If that claim is not true, then there is some other mechanism of awareness that we should be interested in identifying and studying. If the claim is true, then there are implications of accepted theory that we don't grasp yet, and we should be interested in defining and studying those. That, by the way, was the purpose of my original post. Who's working on emergent theories of consciousness? Are they making any real progress? Etc. The materials I've seen so far on the subject fail to convince, but that absolutely doesn't mean I'm not subject to being convinced.
If the final answer winds up being from the GEB path, and is more or less "awareness arises from physical processes in the brain, but it's impossible to prove how," I'm going to find that pretty much as unsatisfying as you guys find "awareness is fundamental, but we'll never be able to prove it."
StringJunky Posted May 4, 2017 (edited)
"So thanks - that's a good extra observation for me to roll into my thinking."
I wouldn't have thought of it if you hadn't asked the question and probed intelligently, as you have. I think this is what good discussion of hard problems like this one is about: exploring and critiquing each point with a mutual goal of understanding, not just trying to win empty debating points against each other that don't lead to further personal enlightenment. Thanks to you and Eise I've learnt a fair bit more on this subject as well.
Edited May 4, 2017 by StringJunky
MonDie Posted May 4, 2017
"That's the way I look at it: the existence of emergence is self-evident, but an analytical explanation for complex phenomena, like those of a brain, is, as yet, beyond reach."
Are you sure you mean emergence and not causation? Emergence does not involve lower-order phenomena causing higher-order phenomena: the lower- and higher-order phenomena exist simultaneously and inextricably. This is not causation! A change in the micro does not cause a change in the macro; the micro and the macro are different ways of describing the same change. There is clearly something more fundamental causing there to be mind, or else there could not be multiple, isolated minds. So how is this a question of emergence? In fact, we see emergence within our own minds. Our minds experience numbers and words that emerge from colors, and ideas that emerge from simpler ideas. To have something mindless, you need something other than mind, and that something is causing mind.
StringJunky Posted May 4, 2017
".... or else there could not be multiple, isolated minds...."
Can you explain this?
Eise Posted May 5, 2017
"I agree that the notion of a quantum-rooted consciousness is likely unprovable for the reasons you cited. ... However, I still think there's a burden of proof on the emergence proponents. ... The materials I've seen so far on the subject fail to convince, but that absolutely doesn't mean I'm not subject to being convinced."
By now I have only one answer for you: read Consciousness Explained, by Daniel Dennett. The answer is not just 'emergence'. Dennett gives a pretty good theory of how consciousness emerges from brain processes. In the end I think it will boil down to this: we see that consciousness has to do with the complexity of the brain. A researcher who believes that consciousness is a fundamental property of the universe discovers what kind of complex structures reveal this consciousness.
Another researcher, who believes that consciousness is an emergent property of some structures in the universe, discovers what kind of structures lead to consciousness. The structures they describe will be the same. For me this means that the assumption that consciousness is a fundamental property of the universe is as superfluous as the assumption that it needs a God to keep everything running. We just have to accept that complex structures can give rise to consciousness - that this is a fundamental fact of the universe. Empirically the two kinds of theories cannot be distinguished, and Ockham's razor tells us to ignore superfluous assumptions.
KipIngram (Author) Posted May 5, 2017
If those two paths truly result in explanations of identical sets of phenomena, I agree with you. However, it is not enough for me to explain the externally observed behavior of conscious entities. I must also have an explanation (a real one, solid and believable) of my observation of my own awareness / ego / qualia / whatever. The explanation must be complete in this way. I will take a look at Consciousness Explained, but I have already seen criticisms of it "out there." One of the comments was that Dennett essentially denies qualia from the get-go; if that proves to be the case when I read the book, then it won't fully satisfy me. "Can't explain that, so we'll deny it exists" doesn't get the job done.
Eise Posted May 6, 2017 (edited)
"However, it is not enough to me to explain the externally observed behavior of conscious entities. I must also have an explanation ... of my observation of my own awareness / ego / qualia / whatever. The explanation must be complete in this way."
I think you get yourself into a kind of logical problem here. There are two kinds of explanations: the 'virtus dormitiva' way, and the reductive way. In the first, the working of a sleeping powder is explained by saying that it contains 'sleeping-force'; in the second, a phenomenon is explained by lower-level phenomena that have nothing in common with the phenomenon to be explained. To give an example: life can be explained by assigning all organisms a vis vitalis ('élan vital', or 'living force'). Or it can be explained by a lot of chemical reactions working together - reactions that are not themselves alive. You obviously choose the 'virtus dormitiva' way: you explain our consciousness by stating that it has conscious constituents. Dennett explains it by processes in the brain that are not themselves conscious. If they were, nothing would be explained. So you have a choice: give up explaining, or accept that some complex processes constitute subjective experiences.
"I will take a look at Consciousness Explained, but I have already seen criticisms of it "out there." One of the comments was that Dennett essentially denies qualia from the get-go ..."
That comment is completely wrong: Dennett spends a chapter of about 45 pages (I have the book as an ebook) arguing against several conceptions of qualia. Dennett really takes every bull by the horns.
Edited May 6, 2017 by Eise
KipIngram (Author) Posted May 6, 2017
Ok. Well, I haven't read it yet, and I will. But I'm bothered by the idea that I should be satisfied without a real explanation. In that sense I'm nervous about Hoffman's ideas too - even if he's 100% successful and is able to show that his mathematical model of conscious agents leads very elegantly to all of our observations, he's still invoking an untestable scenario. In an ideal situation he'd match all existing observations and then have some new, testable predictions. But I'll be surprised if it comes out that way. The most likely outcome is that his program will either fail to predict something known, or else will succeed only in some terribly convoluted and inelegant way, and that will take his ideas off the playing field for me.

I guess what I'm saying is that "just accepting" an inexplicable extension of existing physics bothers me just as much as "just accepting" some new fundamental entity. In some ways it's a bit easier for me to open my mind to the new fundamental thing, because I just think that solid physics should be able to provide a complete explanation. But I don't really "like" either of those positions. That greater willingness to be open-minded also applies more to areas of standard physics that are less completely understood than to areas that are more completely understood. Hence my willingness to at least consider quantum stuff as more applicable than the purely classical and thoroughly understood physics of standard computer technology. We *know* that landscape, and I personally know a lot about it, and I just don't see the path.

Here, let's talk about this for a bit. If we're going to propose that awareness can emerge in a conventional computer, we need to at least say whether that has to do with the hardware complexity or the software arrangement.
I feel *particularly* strongly that just increasing the number of transistors in a computer isn't going to lead to anything fundamentally new. No matter how many there are, each one is no different, in and of itself, from the transistors in a calculator or an AND gate. So that leaves us with the software. Do mainstream emergence ideas focus on software patterns? I have strong doubts there too, but my knowledge isn't quite as strong (I'm an EE, not a computer scientist). I still don't see how an algorithm or a data structure can ever "become aware," but I do feel like there ought to be arguments on that front for me to listen to, at least.
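The point that more transistors change scale rather than kind can be made concrete with a small sketch (an editor's illustration, not any poster's code): every familiar logic block can be composed from one primitive gate, here NAND, so a bigger machine is just more of the same primitive.

```python
# A minimal sketch: building richer digital logic out of a single NAND
# primitive.  Function names and structure are illustrative choices.
def nand(a, b):
    return 1 - (a & b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def full_adder(a, b, cin):
    """One-bit adder, built entirely from NANDs via the gates above."""
    s1 = xor(a, b)
    return xor(s1, cin), or_(and_(a, b), and_(s1, cin))  # (sum, carry)

# 1 + 1 + carry-in 1 = binary 11 -> sum 1, carry 1
print(full_adder(1, 1, 1))  # → (1, 1)
```

Chaining full adders gives arbitrary-width arithmetic, and ultimately a whole CPU, without any transistor doing anything qualitatively new - which is exactly the intuition expressed above.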
StringJunky Posted May 6, 2017
"If we're going to propose that awareness can emerge in a conventional computer, we need to at least say whether that has to do with the hardware complexity or the software arrangement. ... I still don't see how an algorithm or a data structure can ever "become aware" ..."
It might help if you think of 'mind' as a process rather than an entity.
KipIngram (Author) Posted May 6, 2017
Sure - I can think of it that way. But whichever it is, it produces my experiences, and it's the sensation associated with those that I'm looking to explain. I can view processes unfolding in a computer's processor and memory, but I can't explain to myself how that would ever result in equivalent sensations. However complex we make the whole business, that core nut still remains - how does what we feel result from it? How do we get from a relationship amongst data patterns to that? Apparently that little bit of it just doesn't trouble you guys as much as it does me.

I just grabbed a copy of Consciousness Explained, so I will read. I still haven't started my GEB re-read, though. I just remember it being such a tiresome read the first go-round that I haven't mustered up the energy yet. I don't feel that same reluctance re: Dennett's book, though, so I'll start it now. This quote:

"According to the various ideologies grouped under the label of functionalism, if you reproduced the entire “functional structure” of the human wine taster’s cognitive system (including memory, goals, innate aversions, etc.), you would thereby reproduce all the mental properties as well, including the enjoyment, the delight, the savoring that makes wine-drinking something many of us appreciate."

I can imagine a computer having, in its "cognitive machinery," the sensors to recognize particular chemical compositions, the memory to store these recognitions, and perhaps even an algorithmic casting of "goals." But the "innate aversions"? That one loses me. You could program the computer to output the statement that the analysis resulted in an aversion for some particular reason, but that is merely a reflection of the software designer's aversion: "I'm averse to this sort of wine, so I'll program the computer to say so when it recognizes this sort of wine."
In that sense the goals are also reflections of the programmer's goals - not truly goals of the computer. At this stage I'm somewhat nervous that he's going to wind up instructing me along the lines of "That question that's made it so difficult for you to accept a functionalist position? Train yourself to stop asking that question." But we'll see - I'm still reading.

Haha - he used my favorite cartoon. Figure 2.4 - the infamous "I think you need more details here in step 2" one. Love that cartoon.
KipIngram (Author) Posted May 7, 2017
Well, I'm unimpressed. It would be a great read for someone entering into the study of artificial intelligence and interested in a general introduction to programming "externally believable 'conscious' behaviors." You'd need more on each specific thing, but it would be good orientation. But he didn't take even one step toward what I called "the nut of it" earlier. He is basically saying "just don't think like that." If my self-awareness were an elephant in the room, he's essentially saying "there's not really an elephant - you're misguided." Which totally dodges the fact that my very act of forming the belief that I'm aware is an act of awareness. A third-person-perspective explanation is simply not adequate.
Eise Posted May 7, 2017 (edited)
"But I'm bothered by the idea that I should be satisfied without a real explanation."
I wonder what kind of explanation would satisfy you: 'virtus dormitiva' or reductionism? If neither is acceptable to you, what other kind of explanation would be? If consciousness principally cannot be explained by processes that are not conscious, then consciousness cannot be explained at all. But what good is an explanation that already contains the explanandum?
"I guess what I'm saying is that "just accepting" an inexplicable extension of existing physics bothers me just as much as "just accepting" some new fundamental entity. ... But I don't really "like" either of those positions."
Why do you expect physics to give an explanation of consciousness? It is like expecting an explanation of life, or of evolution, from physics. But huge parts of evolution can be studied without any reference to the underlying mechanism. In the end, Darwin devised his theory of evolution without knowing anything about genes, DNA or hydrogen bonds. It looks like trying to become a good chess player by studying chess computers physically.
"If we're going to propose that awareness can emerge in a conventional computer, we need to at least say whether that has to do with the hardware complexity or the software arrangement. ... Do mainstream emergence ideas focus on software patterns?"
Dennett surely concentrates on the 'software'. He sees the mind as a fuzzy, virtual von Neumann architecture running on the massively parallel brain. So it is not just complexity; it is complexity that enables new kinds of processes, like minds.
"You could program the computer to output the statement that the analysis resulted in an aversion for some particular reason, but that is merely a reflection of the software designer's aversion. ... In that sense the goals are also reflections of the programmer's goals - not truly goals of the computer."
This is missing the point completely. You cannot use our 'simple' software algorithms as examples of how the brain works.
"At this stage I'm somewhat nervous that he's going to wind up instructing me along the lines of "That question that's made it so difficult for you to accept a functionalist position? Train yourself to stop asking that question.""
Dennett does a lot more. Why do you think his book has so many pages?
"Well, I'm unimpressed. ... He basically is saying "just don't think like that." ... A third person perspective explanation is simply not adequate."
What you are basically saying here is that any science of consciousness is impossible. Science takes the third-person perspective. If knowledge must be valid for everybody, then you cannot refer to subjective feelings that might not be valid for everybody. Read on. Dennett gives many examples of how our simple intuitions about the mind are wrong (the 'Cartesian Theatre'), and of how his multiple drafts model can solve many riddles about consciousness that at first leave us totally perplexed.

As an interesting exercise: imagine there are philosophical zombies. (You will also meet them in Dennett's book, if you read on...)

"A philosophical zombie or p-zombie in the philosophy of mind and perception is a hypothetical being that is indistinguishable from a normal human being except that it lacks conscious experience, qualia, or sentience."

A p-zombie, being by definition indistinguishable from a normal human, would also report on the beautiful colours of a painting, and would suffer from the same optical illusions as we do (e.g. report colours that are not really there). Can you imagine such a thing? Would it be logically possible for it to describe inner states ("That's beautiful!"; "I feel lonely."; "Sorry, I am not in the mood to play chess now."), or to react to possible inner states of yours ("Sorry, I did not want to hurt you.")? It is not a matter of these sentences just being displayed on a terminal: the p-zombie is consistent in its utterances and behaviour, yet is not conscious. Is that easier to imagine than that it is not a p-zombie at all - that it is just as human as we are, and thus is conscious?
Edited May 7, 2017 by Eise
KipIngram (Author) Posted May 7, 2017
I don't necessarily expect physics to give an explanation of consciousness. However, if one wishes to claim that consciousness arises from existing physical laws, then under those circumstances I expect an explanation. A claim has been made (that my experience of self-awareness arises from the known laws of physics) - the burden of proof rests on the claimant. When Newton proposed that the planets move the way they do because of F = -GMm/r^2, we didn't simply accept that - we spent centuries showing that it was (almost) so. Ditto with general relativity, even more successfully (no little chink like Mercury's motion so far). Ditto again, even more so because of how big a stretch it was initially, with quantum theory. I thought that was the whole point of science - make a claim, and then show conclusively that it is supported by evidence. Why should I let this claim skate in without subjecting it to the same standards? Especially when my own professional training (electronics / computers) tells me that the claim has a very, very low probability of being true?
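The Newton example above is checkable in a few lines. This is an editor's toy sketch (units chosen so that G*M = 1, step size arbitrary): integrate the inverse-square law F = -GMm/r^2 numerically and confirm that circular orbits obey Kepler's third law, T^2 proportional to r^3 - the kind of quantitative test the claim was actually subjected to.

```python
import math

# Semi-implicit Euler integration of Newtonian gravity, in units
# where G*M = 1.  All parameter values are illustrative choices.
GM = 1.0

def orbital_period(r, dt=1e-4):
    """Integrate one circular orbit of radius r; return its period."""
    x, y = r, 0.0
    vx, vy = 0.0, math.sqrt(GM / r)     # speed for a circular orbit
    t = 0.0
    while True:
        d = math.hypot(x, y)
        vx += -GM * x / d**3 * dt       # update velocity first
        vy += -GM * y / d**3 * dt       # (semi-implicit Euler)
        x += vx * dt
        y_prev, y = y, y + vy * dt
        t += dt
        if y_prev < 0.0 <= y:           # completed a full revolution
            return t

t1, t2 = orbital_period(1.0), orbital_period(2.0)
print((t2 / t1) ** 2)   # Kepler's third law predicts (2/1)^3 = 8
```

Doubling the orbital radius multiplies the squared period by very nearly 8, matching the analytic prediction - a small instance of "make a claim, then show it is supported by evidence."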
EdEarl Posted May 7, 2017
The Hard Problem must be divisible into several simpler problems, just as the Hubble Space Telescope is composed of systems that are ultimately composed of simple machines (inclined plane, lever, wheel, etc.) and simple electronic circuits (AND, OR, NOT, etc.). Thus, the Hard Problem must be redefined as many pieces; subsequently, each piece is either understandable or not. If not, it is another hard problem that needs to be divided into simpler problems. This division may occur many times before a simple problem can be understood as programmable. Scientists are currently identifying many components of the Hard Problem which are also hard (for example, speech recognition), and they are building AI solutions that are rapidly improving. No expert currently knows how to make AI conscious, but I think many believe it is possible enough to work towards improving the state of the art. And I can understand why there are non-believers.
StringJunky Posted May 7, 2017
"The Hard Problem must be divisible into several simpler problems ... No expert currently knows how to make AI conscious, but I think many believe it is possible enough to work towards improving the state of the art."
One takes some confidence in the knowledge that there is no other viable path.
MonDie Posted May 9, 2017 (edited)
"There is clearly something more fundamental causing there to be mind or else there could not be multiple, isolated minds."
".... or else there could not be multiple, isolated minds.... Can you explain this?"
We do not perceive the boundary between our own minds and other minds (call it the mind-gap), and yet a boundary clearly exists, otherwise we could experience each other's mental states. Imagine Comey, who is blind and deaf and sees only hallucinations. Comey still sees colors and hears sounds, but they appear random to him. The hallucinations are actually caused by external phenomena, but Comey will likely never realize this. Despite possessing all the basic qualia, Comey has no reason to think there is a mind-gap, nor even that anything exists other than his own mind. This is the case because the mind-gap is not something we experience directly; the mind-gap is inferred, in the same way that your laptop and the room around you are inferred from the more basic experiences of color and visual field, pitch and timbre, time, etc. This means that the mind-gap is not mental/intuitive, like colors and sounds, but physical/inferred, like your laptop and the room around you.
Even though it clearly exists, the mind-gap is not experienced by anyone and therefore is not a mental phenomenon. The implication is that something other than mind causes it - or, more concisely, causes there to be both mental things and non-mental things (instead of only one and not the other). Therefore mind is not emergent, but caused.
Edited May 9, 2017 by MonDie
StringJunky Posted May 18, 2017 (edited)
I found an example of emergent behaviour using computing in the boids program. The rules in the program can be seen as analogous to the physical laws determining the behaviour of matter and complex biological systems. There's an animation on the Wiki page, but the Youtube one is nice.

"Boids is an artificial life program, developed by Craig Reynolds in 1986, which simulates the flocking behaviour of birds. His paper on this topic was published in 1987 in the proceedings of the ACM SIGGRAPH conference.[1] The name "boid" corresponds to a shortened version of "bird-oid object", which refers to a bird-like object.[2]

[Figure: rules applied in simple Boids - separation, alignment, cohesion]

As with most artificial life simulations, Boids is an example of emergent behavior; that is, the complexity of Boids arises from the interaction of individual agents (the boids, in this case) adhering to a set of simple rules. The rules applied in the simplest Boids world are as follows:
separation: steer to avoid crowding local flockmates
alignment: steer towards the average heading of local flockmates
cohesion: steer to move toward the average position (center of mass) of local flockmates
More complex rules can be added, such as obstacle avoidance and goal seeking."

https://en.wikipedia.org/wiki/Boids
Edited May 18, 2017 by StringJunky
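The three quoted rules fit in a short program. The sketch below is an editor's toy implementation (not Reynolds's original code); the world wraps around like a torus so the flock stays interacting, and the weights, radii, and step counts are arbitrary illustrative choices. Starting from random headings, ordered flocking emerges from purely local rules.

```python
import math
import random

L = 10.0            # side of the wrap-around (toroidal) world
random.seed(1)      # deterministic run for illustration

def wrap_delta(d):
    """Shortest signed displacement between two coordinates on the torus."""
    return (d + L / 2) % L - L / 2

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, L), random.uniform(0, L)
        a = random.uniform(0, 2 * math.pi)
        self.vx, self.vy = math.cos(a), math.sin(a)   # unit speed

def step(boids, radius=3.0, sep_r=1.0, sep_w=0.1, ali_w=0.5, coh_w=0.1, dt=0.1):
    new_v = []
    for b in boids:
        cx = cy = alx = aly = sx = sy = 0.0
        n = 0
        for o in boids:
            if o is b:
                continue
            dx, dy = wrap_delta(o.x - b.x), wrap_delta(o.y - b.y)
            d = math.hypot(dx, dy)
            if d < radius:
                n += 1
                cx += dx; cy += dy                      # cohesion: toward neighbours
                alx += o.vx - b.vx; aly += o.vy - b.vy  # alignment: match velocity
                if d < sep_r:
                    sx -= dx; sy -= dy                  # separation: avoid crowding
        vx, vy = b.vx, b.vy
        if n:
            vx += dt * (coh_w * cx / n + ali_w * alx / n + sep_w * sx)
            vy += dt * (coh_w * cy / n + ali_w * aly / n + sep_w * sy)
        s = math.hypot(vx, vy) or 1.0
        new_v.append((vx / s, vy / s))                  # keep unit speed
    for b, (vx, vy) in zip(boids, new_v):
        b.vx, b.vy = vx, vy
        b.x = (b.x + vx * dt) % L
        b.y = (b.y + vy * dt) % L

def alignment_order(boids):
    """1.0 = every boid heading the same way; near 0 = random headings."""
    mx = sum(b.vx for b in boids) / len(boids)
    my = sum(b.vy for b in boids) / len(boids)
    return math.hypot(mx, my)

flock = [Boid() for _ in range(30)]
before = alignment_order(flock)
for _ in range(500):
    step(flock)
after = alignment_order(flock)
print(round(before, 3), round(after, 3))   # order rises: flocking emerges
```

No boid is told to "flock"; the global order appears only at the level of the group, which is exactly the sense of emergence the quoted article describes.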
Eise Posted May 21, 2017
Hi KipIngram, how are you doing with Dennett? Bored? Fascinated? Disgusted? Irritated?
"I don't necessarily expect physics to give an explanation to consciousness. However, if one wishes to claim that consciousness arises from existing physical laws, then under those circumstances I expect an explanation. A claim has been made (that my experience of self-awareness arises from the known laws of physics) - the burden of proof rests on the claimant."
'Arising' is a big word. All that is needed is that the structures that lead to consciousness can be physically realised. I don't think 'computing' 'arises' from the laws of physics either, but we know that structures that can compute can be realised with physical means. I really think you place harder constraints on an explanation of consciousness than on explanations of other natural phenomena. Again, explaining a higher-order phenomenon from simpler phenomena necessarily means that those phenomena are not themselves conscious; otherwise you have explained nothing. But if you refuse on principle to accept such a step, from the non-conscious to the conscious, then you have already ruled out ever explaining it.