Popcorn Sutton Posted April 24, 2014 Oh it's far from resolved. I'm going to get into more depth after work.
SamBridge Posted April 24, 2014 (edited) Oh it's far from resolved. I'm going to get into more depth after work. Well, it's not ultimately resolved, but for the purposes of this forum, if there's no new information, it might as well be. If you plan on returning to the topic later then I guess you're right, it isn't resolved. Edited April 24, 2014 by SamBridge
Popcorn Sutton Posted April 24, 2014 Posted April 24, 2014 I plan on returning ASAP. This is a very important issue. At the same time though, I can't ignore work for this right now. I actually work on this kind of thing as a career though. I'm seriously lucky to be able to do that.
SamBridge Posted April 25, 2014 I'm seriously lucky to be able to do that. But I thought the universe already determined it was going to hire you.
Popcorn Sutton Posted April 25, 2014 Has it now. Yes it has. I want to reread your post, but I can't ignore what's happening right now, so I'll respond later.
SamBridge Posted April 25, 2014 (edited) Has it now. Yes it has. I want to reread your post but I can't ignore what's happening right now so I'll respond later Wait a minute, you seriously think every action is pre-determined, despite everything that's been said? And that you can create a model that represents the correlation? Edited April 25, 2014 by SamBridge
Popcorn Sutton Posted April 25, 2014 Yes, but only based on spatiotemporal proximity, meaning that your most probable universe may be different from the subject's most probable universe. The universes split per every subjected unit. That's too philosophical, though; CMT should be entirely programmatic.
SamBridge Posted April 25, 2014 (edited) Yes, but only based on spatiotemporal proximity, meaning that your most probable universe may be different from the subject's most probable universe. The universes split per every subjected unit. That's too philosophical though, CMT should be entirely programmatic Well, you might want to have a look at quantum mechanics then, and maybe even some chaos theory. Chaos theory on its own doesn't rule out causality and can actually agree with determinism, but it still yields unpredictable future conditions; it makes predictable determinism, in the context you're presenting it, require an infinite amount of knowledge, sort of like trying to calculate the "last" digit of pi. QM, however, rules this out as a potential universal model through inherent randomness and violations of Bell's inequalities. What you're proposing is that you can create a single equation that gives the coordinates of all matter/energy in every dimension, as if its correlation were always upheld. But over time we have found this cannot be the case. Einstein tried to do the same thing, while ignoring new components of physics because of how attached he was to describing the universe like clockwork. You seem much older, so I guess it would make sense that you'd be attached to that viewpoint in the same way and didn't notice the passing of the Newtonian era. However, what current theories suggest is that even if you re-arranged all matter and energy in the universe back to previous dimensional coordinates, in a way equivalent to time travel, you could still get different end results; in other words, Einstein shouldn't be telling God that he/she can't play dice. This is because the exact positions of particles and photons, vacuum fluctuations, and more have no causal relationship to previously measured locations.
Particles, for instance, appear by chance within probability distribution clouds, and it is the distribution of probability that may change in size or translate about different axes. Edited April 25, 2014 by SamBridge
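The chaos-theory point above (that a fully deterministic system can still demand effectively infinite precision to predict) can be illustrated with the logistic map, a textbook chaotic system: two starting values that agree to ten decimal places soon diverge completely. This is an illustrative sketch only, not part of either poster's argument:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x); chaotic at r = 4."""
    return r * x * (1 - x)

x, y = 0.2, 0.2 + 1e-10  # identical to ten decimal places
for step in range(60):
    x, y = logistic(x), logistic(y)

# The 1e-10 initial uncertainty has been amplified to an order-one difference,
# so long-range prediction needs ever more digits of the initial condition.
print(abs(x - y))
```

The rule is deterministic, yet any finite measurement of the starting point is eventually useless for prediction, which is exactly the "last digit of pi" analogy.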
Popcorn Sutton Posted April 25, 2014 I'm actually pretty young. People were surprised at first when they realized how young I actually am, and that I took it upon myself to share my ideas with others at approximately the same age Newton started sharing his. The original question that put me in this mindset was this: "what does it take to make a computer able to learn any language?" I asked that question at the age of 17, and I concluded back then that the one absolutely necessary function for acquisition is pattern recognition. It's a widely accepted view by now and has a lot of practical applications. It's both theoretical and practical, and it's the main assumption of computational neuroscience, CMT, machine learning, etc.
SamBridge Posted April 25, 2014 (edited) I'm actually pretty young. People were surprised at first when they realized how young I actually am, and that I took it upon myself to share my ideas with others at approximately the same age Newton started sharing his. I asked that question at the age of 17, and I concluded back then that the one absolutely necessary function for acquisition is pattern recognition. Well, Newton's mechanics were also useful approximations for a wide variety of applications at the time, and even today, until you go near the speed of light and want better orbital projections, a way to explain frame dragging and the equivalence principle, and light as the speed limit. Relativity then faced a revolution of its own, and we're trying to recycle it into current theories; the same is to come for neuroscience, and it's already happening in psychology itself if you look at the difference between how psychologists dealt with environmental determinism and emotional problems like anorexia. Of course it would be useful if it were as simple as boiling everything down to some set of functions, since that would imply we could easily create AI and transfer synaptic patterns into digital patterns. But it's not quite so simple, and you wouldn't be ruling out that the approximation model for transference created limitations that didn't previously exist in biological systems. Though I guess that would just be a matter of making the approximation indefinitely more accurate. I'm actually pretty young. What!?
Your avatar doesn't match your real appearance!!?? Edited April 25, 2014 by SamBridge
Popcorn Sutton Posted April 25, 2014 (edited) My equation has been useful for some practical purposes, but besides the equation, for truly intelligent applications (computational minds), you need to integrate what Chomsky calls "3rd factor principles". Basically, you refer to physics to deal with certain aspects of the program. One is quantum, and it's absolutely amazing how, upon recognition and parametrization, a single unit on a list with a length of 10^100 is accessed instantaneously. It's for that reason that I think our classical computers may already be quantum. We just may not know how quantum computers actually work. There was a recent article about a few scientists from Massachusetts, I believe, who successfully made a quantum computer. What's funny about it, quite ironic, is that their quantum computer is only slightly faster than a classical computer. Edited April 25, 2014 by Popcorn Sutton
SamBridge Posted April 25, 2014 (edited) It's for that reason that I think that our classical computers may already be quantum. The real-life computer uses physical elements that are quantum, but the completely mathematical version of a completely deterministic computer isn't physically real. It's also a problem that the way computers are modeled doesn't incorporate improbability in their parameters. In terms of computers, if a computer said 1+1=3, it would be considered a glitch, but the statistics of that random glitch occurring are actually just as physical as getting the result 1+1=2, because both instances are based on the motions of electrons through the system. In reality there will always be the inherent possibility of things like glitches in a physical computer, which shows the contrast between a mathematical model and a physical one. So with biological systems, the important thing is not to call anything that doesn't fit a given model a "glitch", since any result you get is the product of the same system; and since there are likely always going to be glitches, you are left with trying to create an indefinitely accurate model. What's funny about it, quite ironic, is that their quantum computer is only slightly faster than a classical computer. That's because they're still trying to use quantum computers to process 1s and 0s, and information doesn't transfer between two points faster than light; you still need to confirm a classical result in a manner that obeys relativity. The thing quantum computers do differently is use atoms to create a superposition of many possible outcomes that can be analyzed rapidly until you get the result you want. http://en.wikipedia.org/wiki/Quantum_computer Edited April 25, 2014 by SamBridge
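The superposition point can be made concrete with a toy single-qubit simulation: the qubit carries two amplitudes at once, but each measurement still yields one classical bit, with probabilities given by the Born rule. This is a sketch of the textbook math only, not of any real hardware or of the device in the article:

```python
import math
import random

# A qubit in equal superposition of |0> and |1> (e.g. after a Hadamard gate).
amp0 = 1 / math.sqrt(2)
amp1 = 1 / math.sqrt(2)

# Born rule: outcome probabilities are the squared amplitude magnitudes.
p0, p1 = amp0 ** 2, amp1 ** 2

# Each measurement collapses to a single classical bit; only the statistics
# over many runs reveal the underlying superposition.
random.seed(0)
runs = [0 if random.random() < p0 else 1 for _ in range(10_000)]
print(sum(runs) / len(runs))  # close to 0.5
```

This is why a quantum computer still has to "confirm a classical result": the superposition is in the amplitudes, but every readout is an ordinary bit.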
Popcorn Sutton Posted April 26, 2014 (edited) They might be doing it wrong. I assume that the brain is a quantum computer, and judging by what you said, our classical computers, by nature, are quantum as well. Therefore, I cannot find any classical explanation for how a point of interest is identified instantaneously from a list that could go on infinitely. But when it comes to prompting the next point of interest (the mind), it takes time. Edited April 26, 2014 by Popcorn Sutton
SamBridge Posted April 26, 2014 (edited) They might be doing it wrong. I assume that the brain is a quantum computer, and judging by what you said, our classical computers Classical computers are physically quantum, but until recently they weren't modeled to consider that nature. They're forced to deal with only a limited scope of how data can be interpreted/analyzed. Therefore, I cannot find any classical explanation for how a point of interest is identified instantaneously from a list that could go on infinitely. The atoms are what hold all potential results simultaneously, due to their inherently random nature. The results are based on the position and quantum states the particle was measured to have via photons; when measured, a photon can give a result about the particle that fits certain parameters of probability predicted by quantum physics. But it's the photons that still travel at a finite speed, and they represent qubits from the particle they interacted with, which can also take multiple simultaneous paths through different optic cables before they are actually measured. So information doesn't actually travel instantaneously; there's just more of it at once with the potential to be analyzed, and the speed of photons is still a limitation. Or at least that's one specific optical model of quantum computers; I'm not entirely sure how many versions there are. I remember seeing somewhere there's a model that uses something like super-cooled helium to slow down and analyze light. But my guess is that you're not going to learn how to build a quantum computer from me. If you're interested in using it for your career, which I think you said deals with computers, you should take classes with someone who actually knows what they're talking about. Edited April 26, 2014 by SamBridge
Popcorn Sutton Posted April 27, 2014 I'm not getting into anything theoretical, though. I know what I am able to do within a relatively short timeframe, so I don't need to worry about which platform my programs are implemented on, just that they work. The evolution of artificial intelligence may split and go classical or quantum, but I think that eventually they will be unified.
SamBridge Posted April 27, 2014 (edited) Out of curiosity, if an artificial computer said it was conscious but did not have the same structure as a known animal brain, how would you prove it is conscious? Edited April 27, 2014 by SamBridge
SamBridge Posted April 27, 2014 (edited) How would you prove that I am conscious? I don't think I can or can't, depending on the way an average person thinks of it. According to you, it could all just be a result of determinism, so if I assume your model, an outside viewer wouldn't be able to tell for sure. Which brings me to another question: if consciousness or free will or "control" already doesn't exist, what are neurologists actually looking for, and how haven't they found whatever it is? What exactly is so evasive? Edited April 27, 2014 by SamBridge
Popcorn Sutton Posted April 27, 2014 It's a hierarchy of discrete, bounded, finite bits of knowledge. IMO, only the strongest bits are coherent from a higher-level perspective, but if you were to look at the whole ensemble, it would be very difficult to know which bits are conscious until you enumerate them.
SamBridge Posted April 27, 2014 (edited) It's a hierarchy of discrete, bounded, finite bits of knowledge. IMO, only the strongest bits are coherent from a higher-level perspective, but if you were to look at the whole ensemble, it would be very difficult to know which bits are conscious until you enumerate them. How do you have a "conscious bit"? A fundamental unit of information has consciousness? And how do you measure the "strength" of a bit? And how do you mathematically define "a higher level of perspective"? Edited April 27, 2014 by SamBridge
Popcorn Sutton Posted April 27, 2014 To me it seems pretty simple, but I could be horribly wrong. I've described it several times here, but I'll give another example. Let's assume that we know exactly how the system is ordered.

Input: Hello
Emergence: Hello→Hey, Hello→Hi, Hello→What's up, Hello→Hey, Hello→How are you?, Hello→What's up, Hello→What's up

max = 0
for u in emergence:
    if count(u) > max:
        max = count(u)
        output.append(u)

Output = [Hey, What's up]

Do you see what I'm getting at? The reason I think it works this way is that the output is something you would expect, because it's grammatical. Also, in my experience using this method, it gets reference right. It's context sensitive, so after a few other inputs it still knows what you're talking about. I also assume that a quantum gravity mechanism is necessary both for solidification of knowledge and for prompting of information. Neurologists are on the right path. If you watch Stuart Hameroff's discussion of the CMT, you'll see what they're getting at. The problem, as he states it and I completely agree, is that even if our computers are good enough to become conscious, the AI guys are going to need to order the system with perfect precision. Some people, a lot of people actually, tend to use a rule-based approach (especially when it comes to language), but that method alone is completely incoherent because of how many arbitrary stipulations seem necessary for it. If you use only statistics, it eliminates any arbitrary stipulations. The goal is to make the system as efficient as possible, and statistics does that. The minimalist approach, by contrast, prefers to make the system as simple as possible, but it's nowhere near simple; it's confusing as hell, and the learning curve makes any logical student of linguistics hate studying the subject. Merging the two approaches is fine because then you have control over what the program does.
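The append-on-new-max loop in the example above can be written as a short runnable sketch. The names here (candidate_responses, pairs) are illustrative, not from the poster's actual program; it just counts observed input/response pairs and appends a response each time it sets a new frequency maximum:

```python
from collections import Counter

# Observed input/response pairs (the "emergence" list in the post's example).
pairs = [("Hello", "Hey"), ("Hello", "Hi"), ("Hello", "What's up"),
         ("Hello", "Hey"), ("Hello", "How are you?"), ("Hello", "What's up"),
         ("Hello", "What's up")]

def candidate_responses(prompt, pairs):
    """Return each response whose count sets a new maximum, in first-seen
    order, mirroring the append-on-new-max loop in the post."""
    counts = Counter(r for p, r in pairs if p == prompt)
    best, output = 0, []
    for response, count in counts.items():  # Counter keeps insertion order
        if count > best:
            best = count
            output.append(response)
    return output

print(candidate_responses("Hello", pairs))  # → ['Hey', "What's up"]
```

Note this reproduces the post's output exactly: "Hey" (count 2) sets the first maximum, then "What's up" (count 3) beats it, so both end up in the list.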
SamBridge Posted April 27, 2014 But how do you actually apply your algorithm to synaptic patterns to prove the result is always the semantic symbol "hello"?
Popcorn Sutton Posted April 27, 2014 Lately I've been preferring absolute precision with the algorithm. It's going to take a boatload of examples to recognize what someone is saying, but I think it's worth it in the long run so you can identify and specify who is who. At this point, Siri will call my other friends by my own name, and I suspect it's because it uses averages for speech-to-text. As for semantics, I've studied it, but it's not useful for my purposes. When you're trying to build a computational mind, you necessarily need three things: to recognize sequences of occurrence, to reduce the possibility of randomness, and to prompt correlating bits. Beyond that is output, which is not necessary for a computational mind but is necessary for verification purposes.
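One concrete (and deliberately simple) reading of "recognize sequences of occurrence" and "prompt correlating bits" is a bigram model: count which token follows which, then prompt the statistically strongest follower. This sketch is my illustration, not the poster's system; all names are hypothetical:

```python
from collections import defaultdict, Counter

def train_bigrams(tokens):
    """Count which token follows which (sequence-of-occurrence statistics)."""
    model = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        model[cur][nxt] += 1
    return model

def prompt_next(model, token):
    """Prompt the most strongly correlated next token; None if unseen."""
    followers = model.get(token)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

tokens = "the cat sat on the mat the cat ran".split()
model = train_bigrams(tokens)
print(prompt_next(model, "the"))  # → cat  ("cat" follows "the" twice, "mat" once)
```

Picking the most frequent follower rather than a random one is one crude way to "reduce the possibility of randomness"; a real system would need far richer context than a single preceding token.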
SamBridge Posted April 28, 2014 And what if, by coincidence, we found with some model that choice physically exists?
Popcorn Sutton Posted April 28, 2014 I doubt that will happen, but the only reason I say that is because it cannot exist within the context of AI. I don't see any way it could exist in a physical sense either. The only example that might be taken as a physical isomorphism of choice is whether a photon both goes through a mirror and gets reflected. Given that, you'd have to ask what that has to do with the mind. I'm assuming our heads would explode if we were faced with a 50% likelihood of two options, or even worse, more than two options. "BOOM" my mind just exploded