The stupid, it hurts.
There's no way he's serious with this objection. This is so dumb. No one claims water should be conscious (well, except that crazy guy in 'What the Bleep Do We Know' who says that water not only has feelings, but can read Japanese).
Wetness and consciousness are two completely different emergent properties and have nothing at all to do with each other. The only way they are similar is that they are both convenient ways to describe the large-scale behaviour of an incredible number of small-scale interactions.
He might as well claim that water can't be wet because it's not also conscious. If he did, he'd be justifiably laughed at. The same justification applies here. If all his work is this bad, this guy is a joke. Let's resume the video.
Again, no. Not at all. While we would need better computers to run an HTM large enough to simulate the activity of an entire neocortex, that is rather irrelevant to this point, as the neocortex and Turing machines run on completely different principles. That's why, rather than modeling the brain structures themselves, it is more efficient to make HTMs do what the neocortical structures do.
https://www.youtube.com/watch?v=cCdbZqI1r7I
The CPU lacks the hierarchical nature of which consciousness is said to be an emergent property. If you are interested in how the neocortex actually works, read 'On Intelligence' by Jeff Hawkins; however, I will do my best to summarize the basic idea of the book here.
The brain works in a fundamentally different manner than computers do; however, we can draw an analogy between consciousness and computer programming.
Our minds are analogous to computer programming. We can look at computer programming from various levels. You can look at it from the low level of electrons moving about on wires. In the same manner, you can look at the brain as ion currents through neurons. At a higher level, you have logic gates in computers and neural hierarchies in the brain. Then you have higher-level programming like Python. The analog in the brain is an idea.
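To make the levels-of-description point concrete, here's a toy sketch (purely illustrative, not anything from the video or the book): the same addition carried out "gate by gate" with bitwise operations, and then again at the high level with Python's +. Same result, two different levels of description.

```python
# Toy illustration of levels of description: the same addition viewed
# at a low level (simulated logic gates via bitwise ops) and at a
# high level (Python's built-in +). Purely illustrative.
def add_with_gates(a, b):
    """Add two non-negative ints using only AND, XOR, and shifts."""
    while b:
        carry = a & b        # AND gate: which bit positions carry
        a = a ^ b            # XOR gate: sum without the carries
        b = carry << 1       # shift the carries up one place
    return a

print(add_with_gates(19, 23))  # low-level view: 42
print(19 + 23)                 # high-level view: same answer, 42
```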
Our actions, our choices, are all based upon our beliefs, our values, preconceived notions, etc. It's algorithmic (albeit VERY complicated). All of these things come at the lowest level from deterministic physics, since the neurons involved are classical structures.
Our consciousness comes from a thin covering of the "old brain" called the neocortex. It works hierarchically (with many more feedback connections than feedforward) to produce a working model of the world. Instead of creating trillions of files to save what every object looks like under every condition (that would be utterly ridiculous, as the pattern on your retina is never the same), the cortical-thalamo-cortical loops use a time delay to form invariant auto-associative memories, which are used recursively in hierarchical feedback loops to provide a model of our world. This is how our senses are cleaned up. For example, these auto-associative memories fill in our blind spot. This model is what we experience. Most of our experiences are what we expect to experience rather than what we actually experience.
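As a rough illustration of what an auto-associative memory does (this is a toy Hopfield-style network, not Hawkins' actual cortical circuitry; the function names and patterns are just made up for the example), the sketch below stores a couple of patterns and then completes a corrupted input back to the nearest stored one, loosely like the cortex filling in the blind spot:

```python
# Toy Hopfield-style auto-associative memory: store binary patterns,
# then recall a stored pattern from a corrupted/partial probe.
import numpy as np

def train(patterns):
    """Hebbian outer-product learning over +/-1 patterns."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)           # no self-connections
    return w / len(patterns)

def recall(w, probe, steps=10):
    """Iteratively settle toward the nearest stored pattern."""
    state = probe.copy().astype(float)
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1
    return state

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                   [1, 1, 1, 1, -1, -1, -1, -1]])
w = train(stored)

noisy = stored[0].copy()
noisy[:2] *= -1                      # corrupt part of the input
print(recall(w, noisy))              # completes back to the stored pattern
```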
We can illustrate this last point with a simple experiment you can do at home (it's easier to do around Halloween, though).
What you'll need:
1) A barrier of some sort (a cardboard box will do).
2) A fake arm (that's why I said it's easier at Halloween).
3) An assistant.
4) A chair.
5) A table.
Now, sit down at the table and place your arms on the table. Block one arm from your view with the barrier and place the fake arm on the table next to your arm such that you can see it. Have your assistant sit opposite you at the table. The assistant will now touch both the fake arm and the hidden arm simultaneously in the same manner. If you poke the fake arm, poke the real arm at the same spot at the same time. Poke them, stroke them, shake them, whatever. After a while of watching the fake arm while the assistant manipulates both the fake arm and the real arm, your neocortex will assimilate the fake arm into your model of self.
Now here comes the creepy part. Have the assistant (at a point in time unknown to you) stop manipulating the hidden arm and keep manipulating the fake arm. You will still feel it.
The binding process is inherent in the hierarchical nature of the neocortex, as the multiple senses feed into each other in associative nodes. This guy doesn't even understand that against which he is attempting to argue; bascule was right in saying that he's similar to a YEC.
I've already said that the brain is NOT like a classical computer. He's equivocated here. Notice that in the beginning, he was talking about the idea that consciousness is an emergent property of a hierarchical system like the neocortex, but all his objections are about Turing machines, which aren't hierarchical.
It's not pushing anything back; it's saying what we've said from the get-go: the brain is not a computer.
I'm not well versed in paramecia, but, based on the rest of the video, it is extremely likely that he is overstating the case (and there are probably simple answers that have been known for a while). Nonetheless, a neuron is not a paramecium.
That's what we've observed it do. That's what the evidence says. The burden of proof is upon you to say it is something else.
Really? Neurons are cells with structures for internal functions, just like all other life? Who would have thought? This is a giant red herring.
The cells in my big toe have the same microtubules, but no one is suggesting that my toe is in any way conscious. We know fairly well, iirc, how neurons fire. Unless he can present any evidence that these microtubules are relevant, this whole part of his rant is moot. Let's continue.
For once, I agree. Your irrelevant little rant doesn't explain anything.
I'm sorry you wasted your time. We don't need to model the inside of the cells. We don't even need to model the cells themselves. We just need to replicate what the cells working together do. And, in fact, we've begun doing that, but our HTMs are nowhere near the size of the neocortex.
The brain is not a Turing machine; we know this. HTM theory is based on that fact. However, Turing machines can simulate any computable process, so we can still build HTMs. The neocortex doesn't compute answers, it remembers them, so we're developing software to do the same.
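As a toy illustration of "remembering rather than computing" (this is not HTM itself; the class and function names are just for the example), compare deriving an answer every time with recalling the closest stored experience:

```python
# Toy contrast between computing an answer and remembering one.
def compute_answer(x):
    # the "computing" route: derive the result from scratch every time
    return x * x

class MemorySystem:
    def __init__(self):
        self.memory = {}

    def observe(self, x, y):
        self.memory[x] = y           # store the observed association

    def recall(self, x):
        # the "remembering" route: look up what was seen before,
        # falling back to the nearest stored experience
        if x in self.memory:
            return self.memory[x]
        nearest = min(self.memory, key=lambda k: abs(k - x))
        return self.memory[nearest]

m = MemorySystem()
for i in range(10):
    m.observe(i, compute_answer(i))

print(m.recall(7))    # exact recall: 49
print(m.recall(7.2))  # generalizes to the nearest stored experience: 49
```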
I've already shown that he doesn't have a clue about free will.
Not only that, he seems to think that his desire for free will is a reason to negate the hypothesis (even though free will necessitates determinism). The universe doesn't care what you want to be true.
So much for the coin toss.
Regardless, QM doesn't apply: the calculations have been done, and the relevant brain structures are classical.
It's hard to see how anyone can take Hameroff seriously. Bascule was right, he argues quite a bit like YECs:
1) Straw men everywhere
2) A demonstrated complete lack of understanding of the relevant issues
3) Dismissal based on what is desired to be true
4) Attempts to tear down with no building up
Thanks, truedeity, for the amusing video.
Merged post follows:
That's a weird objection, as it has nothing to do with why the output of one machine would be 'real' and the other 'simulated' if the machines are made to do the same thing.