ydoaPs Posted May 23, 2009 How could we teach HTM-based robots to move? It seems simple enough to teach them that they have limbs and where those limbs are. However, I don't know how we could teach them to use them. Would we need to upload random signals to the machine to generate random movements for it to learn from? Or would we just need to stimulate the limbs externally?
bascule Posted May 23, 2009 Well, let's step back from HTM itself and talk about the raw functions of the neocortex. The basic idea behind how the neocortical hierarchy stimulates movement comes in the form of a prediction. Effectively, predictions propagate down your neocortical hierarchy until they reach the neocortical columns responsible for a particular movement. When a prediction of movement reaches a strong enough threshold, it stimulates parts of your brain like the basal ganglia and cerebellum and actually causes movement. The movements in turn stimulate your peripheral nervous system and feed information back into the parts of your neocortex that process sensory input. So you end up with a feedback loop of predictions, with the neocortex constantly cross-checking whether the movements you intended to make are the ones your body actually performed.

To do the same thing with HTM, you would need a similar configuration. You would have at least two nodes, or two groups of nodes, at the bottom of the hierarchy: one that handles motor control and one that handles sensory input. The predictions coming out of the motor control nodes would feed into your motor control system: once a certain threshold is reached, movement is evoked. You'd also need sensors feeding back as input into another node or set of nodes in the hierarchy. Ideally, you could then input a general "sense" of the intended movement into the top of the hierarchy, in the form of a set of predictions about what types of movements you want performed.
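To make that configuration concrete, here is a minimal toy sketch of the two-node setup: a motor node that only evokes a movement once a prediction crosses a threshold, and a sensor node that feeds the performed movement back for cross-checking. This is an illustration only, not real NuPIC/HTM code; the class names, the threshold value, and the movement labels are all invented for the example.

```python
# Toy sketch of a prediction-driven motor loop (hypothetical, not NuPIC code).

THRESHOLD = 0.8  # confidence a prediction needs before movement is evoked

class MotorNode:
    """Bottom-of-hierarchy node: turns strong predictions into movements."""
    def decide(self, prediction_strengths):
        # prediction_strengths maps movement name -> confidence in [0, 1]
        best = max(prediction_strengths, key=prediction_strengths.get)
        if prediction_strengths[best] >= THRESHOLD:
            return best   # prediction is strong enough: evoke the movement
        return None       # below threshold: no movement occurs

class SensorNode:
    """Bottom-of-hierarchy node: feeds the performed movement back up."""
    def observe(self, movement):
        return movement   # in a real system, this would be sensed, not copied

def feedback_loop(intended, motor, sensor):
    """One cycle: predict -> (maybe) move -> sense -> cross-check."""
    performed = motor.decide(intended)
    observed = sensor.observe(performed)
    expected = max(intended, key=intended.get)
    # Cross-check: did the body do what the hierarchy predicted?
    return observed == expected

motor, sensor = MotorNode(), SensorNode()
ok = feedback_loop({"step_forward": 0.9, "step_back": 0.1}, motor, sensor)
print(ok)
```

In a real hierarchy the cross-check result would itself propagate back up as sensory input, closing the prediction loop rather than just returning a boolean.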
Obviously, at first your little robot is going to fail until it develops a general sense of itself, so whatever is making the "predictions" for intended movement will also have to process the prediction-failure data that propagates back up the hierarchy and incorporate it into the prediction signals it sends. Eventually, though, you should be able to train it to the point that, say, if it has legs, you can feed it a "predicted" direction you want it to go and the lower levels of the hierarchy will work out the necessary leg movements to get it there. I'm not sure if anyone is working on something like this yet, but Numenta has forums about NuPIC. It'd be an interesting thing to ask there.
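One way to picture that training process is a toy error-driven update: failed predictions get weakened until the correct movement for each intended direction wins out. This is a stand-in sketch, not Numenta's actual learning algorithm; the directions, movement labels, and update rule are all assumptions made for illustration.

```python
import random

# Toy error-driven learner (hypothetical): intended direction -> leg movement.
random.seed(0)
directions = ["forward", "left", "right"]
movements = ["legs_fwd", "legs_left", "legs_right"]
correct = {"forward": "legs_fwd", "left": "legs_left", "right": "legs_right"}

# Prediction strengths: for each direction, a weight per candidate movement.
weights = {d: {m: 1.0 for m in movements} for d in directions}

def train(episodes=50):
    for _ in range(episodes):
        d = random.choice(directions)        # intended direction fed in at the top
        w = weights[d]
        chosen = max(w, key=w.get)           # strongest prediction wins
        if chosen == correct[d]:
            w[chosen] += 0.1                 # success: reinforce the prediction
        else:
            w[chosen] -= 0.2                 # prediction failed: weaken it so
                                             # another movement can win next time

train()
learned = {d: max(weights[d], key=weights[d].get) for d in directions}
print(learned)
```

After enough episodes, each direction's strongest prediction settles on the movement that actually takes the robot there, which is the behavior described above: feed in a "predicted" direction at the top, and the lower level works out the leg movements.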