So after 20 years of on-and-off studies on the subject, I’m ready to express my current provisional argument against the claim that the brain is a particular type of machine, namely a computer or other information processing system. It goes like this.
Laws of physics (Newton et al.) describe, e.g., how the planets move around the sun. The planets themselves need not perform calculations according to the laws in order to move. The laws are a way for us to organise our empirical observations; the planets need not pay any attention to them. Laws of neurophysics (or whatever) may at some point be able to describe how the brain works. If that happens, the brain need not calculate according to those laws in order to function.

Deep Thought and Garri Kasparov both play chess, and their capacity may in both cases be explained in terms of executing an algorithm. However, this does not mean that both function in the same way, or are similar. One is engaged in embodied skillful coping (GK), the other in passing along electric currents in a systematic way that humans interpret as moves in the game of chess (DT).

In the case of the brain it is very easy to confuse the description (the theory, the laws, etc.) with the described, simply because we ourselves in some sense are our brains, and we can and do calculate the laws and in that sense function as machines. But that intermittent “functioning-as-a-machine” does not mean that this is all, or fundamentally, what a brain does (any more than a planet’s following the laws of physics means that a “planet-moving-around-the-sun” is a machine, an equation or a solution to an equation). To be blunt, the brain does not calculate even when we do. When we calculate, the brain does whatever it does in a way that, when successful, closely approximates abstract descriptions of algorithmic activity. Most of the time there is no such approximation.
This can be augmented by adding a bit from Searle’s Chinese Room. Typically, if the brain is said to be a machine, people mean an information processing machine. However, not one piece of information or any type of symbolic code has ever been found in a brain, independently of pre-existing theories of brains as computers. The brain is, by and large, a yoghurty-bloody mush. It is at the same time both a) too easy and b) too hard to find information processing in a brain.

A) Too easy: any large enough collection of atoms can successfully be described as running/functioning according to any (arbitrarily selected Turing-computable) program. Take a cubic meter of air: there are enough atoms and interactions there to implement any conceivable computer program (sometimes the “population of China” is used in the same role in the literature). Does that mean that in the cubic meter there “is” a computer?

B) Too hard: no real (in the sense of currently existing) biological, chemical, medical, physical or neurological description of the brain essentially or absolutely needs the concept of information. Everything that can be explained in terms of information processing can be explained without the hypothesis. There is no proof that there is information in the brain. (In fact, there can not even in principle be such proof, because information is not a natural kind but relative, in the eye of the beholder. It is impossible to build a scientific “information counter” that starts ticking whenever there is information around. This, really, is the crux of the matter. If we wish, we can describe anything and everything as information processing. Such a description, even when successful [you can choose your own criteria of success, it does not matter], does not imply that there really is information processing going on.) The only “proof” is a metaphysical assumption that everything in the universe is a machine. (If you insist that everything that is, is a machine, I can not prove you wrong. Likewise, if you insist that everything is deterministic, I can not prove you wrong. This is because both are metaphysical assumptions that can not be proven, only argued about. However, metaphysics cuts both ways. If I insist that there is only physical stuff going on in a computer, no information processing whatsoever, you can not prove me wrong. Whatever you point to, I can insist on seeing atoms, energies, fields and so on.)
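To make the “too easy” half concrete, here is a minimal sketch of how such a description works; the toy program and the labelled “microstates” are my own illustrative inventions, not anything measured in air or brains. Given any finite run of any program and any long enough sequence of distinct physical states, one can always construct a mapping under which the physical sequence counts as “implementing” the computation.

```python
# Minimal sketch: "implementation" is observer-relative. For ANY finite run of
# ANY program and ANY long enough sequence of distinct physical states, we can
# build a mapping under which the physical states "implement" the computation.
# The toy program and the labelled microstates below are illustrative only.

def toy_program_trace(n):
    """Computational states of an arbitrary toy program: counting down from n."""
    trace = []
    while n >= 0:
        trace.append(n)
        n -= 1
    return trace

def physical_snapshots(k):
    """Stand-ins for k successive microstates of a cubic meter of air."""
    return ["air-microstate@t={}".format(t) for t in range(k)]

def implementation_mapping(physical, computational):
    """Pair physical states with computational states, one-to-one, in order.

    Any long enough sequence of distinct physical states admits such a
    mapping, whatever program was chosen -- the mapping does all the work.
    """
    if len(physical) < len(computational):
        raise ValueError("need at least as many distinct microstates as program states")
    return list(zip(physical, computational))

if __name__ == "__main__":
    trace = toy_program_trace(3)      # program states: 3, 2, 1, 0
    snaps = physical_snapshots(10)    # plenty of "microstates" to spare
    for phys, comp in implementation_mapping(snaps, trace):
        print("{}  'implements'  program state {}".format(phys, comp))
```

The mapping, not the air, does all the work: nothing about the cubic meter singles out this program rather than any other, which is why the description is too easy to be informative.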
(Ain’t it funny, btw, that when La Mettrie was writing, the most amazing machine in the world was a clock, and La Mettrie thought that the brain was a complicated clockwork, while in Freud’s time the leading technology was the steam engine, and Freud’s scientific image of the brain was one of pressures and releases, and now that we have computers, the brain is supposed to be one. Come quantum machines, I’m pretty sure that … Furthermore, people will see the move from the digital computer to the quantum computer as the model of cognition as a smooth and continuous augmentation of our understanding, even though in some important respects digital computers and quantum computers are like fire and water.)
3 Comments
I think that by following Kant we can distinguish epistemological and ontological “levels”: when we research the brain, we do so solely from a human standpoint. Objects in space and time are human perspectives, not ‘things in themselves’. I think this would perhaps clarify the obvious problem of free will: even if we knew the functioning of the brain precisely, atom by atom, we would still have the question: what about our free will?
We still have this antinomy between free will and determinism, which can not be solved; any attempt to solve it would be a metaphysical hypothesis. We would have to have an absolute perspective on things.
This wasn’t perhaps anything remarkable for an experienced cognitive scientist. It just came to my mind when reading your thesis: ‘The only “proof” is a metaphysical assumption that everything in the universe is a machine’, which I agreed with.
Here is something to think about:
Is it not true that when men invented these tools and items – the clock, the steam engine and the computer – they did so precisely because these tools can perform functions that humans cannot? A clock can keep time better than even a network of people. It is also more practical and economical to operate.
If this is true, then shouldn’t we rather state that the essence of machines is non-human? We can argue over the philosophical point of whether or not a human is mechanistic, but I believe that nobody disputes the fact that the raison d’être of machines stems from their having superhuman qualities in the first place.
What is L’Homme-machine then? If it is not a contradiction in terms, is it merely a poor machine? If so, who cares about it?
This is a bit beside the point but I’ll mention it anyway. The clock – steam computer – electronic computer progression reminds me of a vignette from the history of economics: Harry Collins mentions (in The Golem, or The Golem at Large, I forget which one) that sometime in the late forties an economist at LSE (I think it was) tried to construct a model of the British national economy that was a huge network of tubes, pumps and vessels containing some kind of fluid. You could adjust the flow at various points to find the optimal settings, like adjusting the central heating system of a block of flats. An analogue computer, that is. It didn’t quite work.
@ T.P. Perhaps not a poor machine, considered as a whole – I suppose in this regard the human animal would be a pretty decent general-purpose machine. After all, you can use a barrel with a hole in it to keep time, even though it was made for storing and dispensing beer, for example. It’s not a good clock but usable in some circumstances. Likewise, as you said yourself, a clock can keep better time than a human being: this means a human being can keep time too (pretty well in fact).
And of course, anything any computer can do (in the formal sense; see the sketch below) can be done by a human being drawing with her finger in the sand.
Unfortunately, she will pretty soon get bored and find something more worthy of her limited time on earth. This makes her a poor computer, and she will need to be replaced, as some people are planning to do, in the name of the intellectual adventure that awaits us beyond the limits of wetware.
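To spell out that “formal sense”, here is a minimal sketch of a Turing machine run; the toy rule table (a unary incrementer) is my own example, not anything from the post. Following such a table mechanically is all that computation, formally speaking, amounts to, and a finger in the sand can do it as well as silicon can.

```python
# Minimal sketch of the formal notion of computation: mechanical rule-following
# over symbols on a tape. Any medium that can hold the symbols (silicon, sand,
# pen and paper) will do. The rule table below is a toy unary incrementer.

def run_turing_machine(tape, rules, state="start", head=0, blank="_", halt="halt"):
    """Run a one-tape Turing machine until it reaches the halting state."""
    tape = list(tape)
    while state != halt:
        # Grow the tape with blanks if the head walks off either end.
        if head < 0:
            tape.insert(0, blank)
            head = 0
        if head >= len(tape):
            tape.append(blank)
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Rules for adding one stroke to a unary number (e.g. "111" -> "1111").
rules = {
    ("start", "1"): ("1", "R", "start"),   # skip over existing strokes
    ("start", "_"): ("1", "R", "halt"),    # write one more stroke and stop
}

print(run_turing_machine("111", rules))    # prints: 1111
```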
The essence of machines may not be non-human but rather inhuman.