There are many (perhaps most) people who dispute the idea that the brain is computable; there is, they say, something different and special about the human brain. This dispute cannot be settled for now, but my own stance is a basic one: you may be right that the brain is somehow magical, but my position is simpler and, all things being equal, more likely to end up being the correct one.
The argument that the brain is not a machine broadly rests on three ideas. Those who lean toward science say: something, something quantum or other (quantum gravity, if you want to be really fancy). Others think it something magical, such as possessing a soul. The final group simply argues that the brain is not computable.
It is not uncommon to see the argument put forward that the brain is not computable, that what computers do is mere mechanistic cranking through mathematical algorithms. The latter is true, but who's to say the brain is not also doing this?
Occam's razor, Bayesian smoothing, and regularization are all tools to keep one from overfitting the evidence and failing to generalize. They are not laws, but tools to help you minimize your regret, that is, make the fewest learning mistakes over time. They do not say your idea must be simple, only that it should not say more than is possible given the data. The idea that the brain is computable fits within this regime as the hypothesis that is the simplest fit to the data. Why?
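To make the overfitting point concrete, here is a minimal sketch (my own illustration, not from any source): a ridge (L2) penalty stops a flexible model from saying more than a handful of noisy data points can support. The data, the polynomial degree, and the penalty strength are all arbitrary choices.

```python
# Fitting noisy samples of a straight line with a degree-9 polynomial.
# Without a penalty the fit chases the noise; an L2 (ridge) penalty pulls
# the extra coefficients toward zero, refusing to say more than the data allow.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 12)
y = 2 * x + rng.normal(scale=0.1, size=x.size)   # truth: a simple line + noise

X = np.vander(x, 10, increasing=True)            # degree-9 polynomial features

# Ordinary least squares: wild high-order coefficients (overfit).
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Ridge regression: w = (X^T X + lam*I)^-1 X^T y, shrunken coefficients.
lam = 1e-2
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)

print("max |coef|, OLS:  ", np.abs(w_ols).max())    # large: memorized the noise
print("max |coef|, ridge:", np.abs(w_ridge).max())  # small: close to the line
```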
I often hear the point made that since people once compared the brain to clockwork and steam engines, comparisons we now know to be false, what makes me think an equivalence (and not just an analogy) with computers won't show the same failing in time? Small aside: the comparison between steam engines and the brain is, thanks to the link between thermodynamics and information, actually more interesting than one might at first think.
Turing Machines are, unlike a clock, universal. They can emulate any machine or procedure that is "effectively calculable". Our physical theories might use crutches such as real numbers or infinities but are, at the end of the day, only testable using computable procedures and numbers. This is what sets Turing Machines apart: any testable quantitative theory about the universe that we can expect to devise will be simulatable (given enough time) on a Turing Machine. (Note: this is not the same thing as the Church-Turing Thesis; instead of placing the restriction on the universe, as CT does, it places it on any testable theory that compresses data, that is, anything more than a bare map from observation to expected outcome.)
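To ground what "emulate any machine" means in practice, here is a minimal sketch of a Turing Machine simulator (my own toy; the bit-flipping machine it runs is made up): a handful of lines suffice to execute any machine given as a transition table, and universality is the observation that one fixed table can in turn interpret all the others.

```python
# A bare-bones Turing Machine simulator. The tape is a dict from position to
# symbol; the machine is just a table mapping (state, symbol) to
# (new_state, symbol_to_write, head_move).
def run_tm(table, tape, state="start", blank="_"):
    cells, head = dict(enumerate(tape)), 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, cells[head], move = table[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A toy machine that flips every bit of its input, then halts on blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "0110"))  # prints 1001_
```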
Even if it turns out that some physical things, like the brain, cannot be computed, it is simpler to believe that whatever non-computability the brain exploits is not unique to the exact biochemical makeup of brains.
Interestingly, Occam's razor applies here too, and my argument is short. Even if souls are a property of the universe unexplainable by science, it is still simpler to believe that the pattern and arrangement of matter that ends up with things acquiring souls is not unique to a gelatin soup of fats and proteins. Something that thinks and acts as if it is conscious, is (in essence, I drop the extra requirement that the object must also be an organic, human-brain-like thing). That, in a nutshell, is also Turing's argument.
But what is fascinating is that computer science has made the idea of a soul a scientific and testable hypothesis. If we do build intelligences (and maybe some of them will be more intelligent than humans in every measurable way) and yet they never wake up, never attain consciousness or anything resembling it (that is, nothing but humans ever consistently passes for conscious), then this is very suggestive of something unique and special about human beings. Until then, that hypothesis is unnecessarily complex.
Quantum mechanics is the go-to argument for people who want to appear scientific even while talking nonsense. However, it is possible that the brain does something that our current machines cannot.
It is overwhelmingly unlikely that the brain is a quantum computer. What we know about quantum mechanics makes this implausible considering how wet, noisy, and hot the brain is: coherent and entangled states could not be expected to survive in such an environment. Additionally, humans do poorly at the things we expect quantum computers to be good at (factoring, perceiving quantum interactions intuitively, simulating quantum evolution). In fact, regular Turing Machines already outpace us in many areas; we don't dwell on the fact that we're terrible at deductive reasoning, arithmetic, and enumerating the possibilities of a large search space; for those things, it did not take long for computers to surpass human ability.
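The timescale mismatch can be put roughly into numbers. The figures below are assumptions on my part, borrowed from Tegmark's much-cited estimates of neural decoherence; the precise values matter far less than the size of the gap:

```python
# Back-of-the-envelope: estimated decoherence times for neural states are
# ~1e-20 to 1e-13 seconds (taking the optimistic end), while neurons operate
# on the order of ~1e-3 seconds. Any quantum coherence would be destroyed
# roughly ten orders of magnitude too fast to matter for computation.
t_decoherence = 1e-13   # seconds, optimistic end of the estimate (assumed)
t_neural = 1e-3         # seconds, typical neural firing timescale (assumed)
print(f"coherence dies ~{t_neural / t_decoherence:.0e}x faster than neurons compute")
# -> coherence dies ~1e+10x faster than neurons compute
```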
But suppose the brain were not a quantum computer yet still leveraged quantum mechanical artifacts for its functioning, artifacts unavailable to our machines; then it is possible that current efforts will not lead to AGI.
In a certain trivial sense everything is quantum mechanical, in that an agent adhering to predictions based on the theory will be able to explain the world with the highest accuracy. Of course, with such a broad definition, even the computer you are currently reading this on is a quantum one. Not at all a helpful distinction.
Yet there is also a non-trivial sense in which quantum effects can be leveraged. We see this with our current processors: part of the difficulty with reaching higher speeds and lower power is that (amongst other reasons) quantum tunneling effects get in the way. Biological homing mechanisms and photosynthesis have also been implicated in taking advantage of quantum effects.
Evolution is extremely powerful at coming up with unexpected uses for subtle phenomena. Consider the following, from a fascinating article:
A program is a sequence of logic instructions that the computer applies to the 1s and 0s as they pass through its circuitry. So the evolution that is driven by genetic algorithms happens only in the virtual world of a programming language. What would happen, Thompson asked, if it were possible to strip away the digital constraints and apply evolution directly to the hardware? Would evolution be able to exploit all the electronic properties of silicon components in the same way that it has exploited the biochemical structures of the organic world?
In order to ensure that his circuit came up with a unique result, Thompson deliberately left a clock out of the primordial soup of components from which the circuit evolved. Of course, a clock could have evolved. The simplest would probably be a "ring oscillator"—a circle of cells that change their output every time a signal passes through.
But Thompson reckoned that a ring oscillator was unlikely to evolve because only 100 cells were available. So how did evolution do it—and without a clock? When he looked at the final circuit, Thompson found the input signal routed through a complex assortment of feedback loops. He believes that these probably create modified and time-delayed versions of the signal that interfere with the original signal in a way that enables the circuit to discriminate between the two tones. "But really, I don't have the faintest idea how it works," he says. One thing is certain: the FPGA is working in an analogue manner.
Up until the final version, the circuits were producing analogue waveforms, not the neat digital outputs of 0 volts and 5 volts. Thompson says the feedback loops in the final circuit are unlikely to sustain the 0 and 1 logic levels of a digital circuit. "Evolution has been free to explore the full repertoire of behaviours available from the silicon resources," says Thompson.
Although the configuration program specified tasks for all 100 cells, it transpired that only 32 were essential to the circuit's operation. Thompson could bypass the other cells without affecting it. A further five cells appeared to serve no logical purpose at all—there was no route of connections by which they could influence the output. And yet if he disconnected them, the circuit stopped working. It appears that evolution made use of some physical property of these cells—possibly a capacitive effect or electromagnetic inductance—to influence a signal passing nearby. Somehow, it seized on this subtle effect and incorporated it into the solution.
But how well would that design travel? To test this, Thompson downloaded the fittest configuration program onto another 10 by 10 array on the FPGA. The resulting circuit was unreliable. Another challenge is to make the circuit work over a wide temperature range. On this score, the human digital scheme proves its worth. Conventional microprocessors typically work between -20 °C and 80 °C. Thompson's evolved circuit only works over a 10 °C range—the temperature range in the laboratory during the experiment. This is probably because the temperature changes the capacitance, resistance or some other property of the circuit's components.
Although this is the result of a genetic algorithm, a similarity with its natural counterpart is plain: the exploitation of subtle effects, and specificity to the environment it was evolved within. The article shows us two things: that evolution is not bounded by man's windowed creativity, and that even if our current designs do not leverage some subtle effect while brains do, there is no reason we could not build a process that searches over hardware to exploit similarly powerful effects. The search could also be more guided: instead of random mutations, we could have something that learns via reinforcement which actions to take for a given state of components and connections (and perhaps another component suggesting parts, to inject freshness); we would then select the best-performing programs from the pool as the basis of the next round and reward the proposal generators accordingly. A sketch of the basic, unguided loop follows.
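Here is a toy version of the search loop behind Thompson's experiment (my own sketch: his fitness function scored a real FPGA's ability to discriminate between the two input tones, whereas the one below is a placeholder; the genome encoding is also invented):

```python
# Mutation-and-selection over bitstring "circuit configurations".
import random

random.seed(1)
GENOME_BITS = 100  # one bit per cell of a 10 by 10 array; a gross simplification

def fitness(genome):
    # Placeholder score: reward matching an arbitrary target pattern. In the
    # real experiment this was how well the physical circuit told the tones apart.
    target = [i % 2 for i in range(GENOME_BITS)]
    return sum(g == t for g, t in zip(genome, target))

def mutate(genome, rate=0.02):
    # Flip each bit independently with small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]  # keep the fittest configurations as parents
    population = elite + [mutate(random.choice(elite)) for _ in range(40)]

population.sort(key=fitness, reverse=True)
print("best fitness:", fitness(population[0]))  # approaches 100 over the run
```

The guided variant I described would replace `mutate`'s blind bit-flips with a learned proposal policy, rewarded by how much its offspring improve on their parents.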
Returning to the quantum: what if there were something subtle about ion channels or neuron vesicles that allowed more powerful computation than one might expect? Perhaps something akin to a very noisy quantum annealing process is available to the optimization and problem-solving machinery of all animal brains (a classical sketch of such a process follows below). The advantage need not even be quantum; it might be that subtle electromagnetic effects, or whatever else, are leveraged in a way that allows more efficient computation per unit time. This argument is one I've never seen made, yet it still consists of much extra speculation. Plausible though it is, I will only shift the weight of my hypotheses in that direction if we hit some insurmountable wall in our attempts to build thinking machines. For now, after seeing how very inherently mathematical the operations we perform with our language are (some may dispute that this is cherry-picking, but that is irrelevant: the fact that this is possible at all is highly suggestive and strongly favors moving away from skepticism), it is premature to hold such (and other) needlessly complex hypotheses on the uniqueness of the human brain.
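For readers unfamiliar with what an annealing process buys an optimizer, here is a classical stand-in (my own sketch with an arbitrary energy landscape; quantum annealing replaces the thermal jumps with tunneling): injected noise lets the search escape local minima before the temperature cools toward pure descent.

```python
# Simulated annealing on a bumpy 1-D landscape with many local minima.
import math, random

random.seed(2)

def energy(x):
    return x * x + 10 * math.sin(3 * x)  # arbitrary multi-minimum function

x, temp = 8.0, 5.0
for step in range(5000):
    candidate = x + random.gauss(0, 0.5)
    delta = energy(candidate) - energy(x)
    # Always accept downhill moves; accept uphill with probability e^(-delta/T),
    # so early (hot) noise can jump out of local minima.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    temp *= 0.999  # cool slowly toward pure descent
print(f"x ~ {x:.3f}, energy ~ {energy(x):.3f}")
```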
I have not argued against the soul, or that the brain is incomputable or somehow special; instead I've argued that such hypotheses are unnecessary given what we know today. And even indirectly, when we look at history, we see one where assumptions of specialness have tended not to hold. The Earth is not the center of the universe; the speed of light is finite; simultaneity is undefined; what can be formally proven in any given theory is limited; a universal optimal learner is impossible; most things are computationally intractable; entropy is impossible to escape; most things are incomputable; most things are unlearnable (and not interesting); there is only a finite amount of information that can be stored within a particular volume (a bound which depends on surface area, not volume); the universe is expanding; baryonic matter makes up only a fraction of the universe; Earth-like planets are common; some animals are capable of mental feats that humans are not; the universe is fundamentally limited to being knowable by probabilistic means (which is not the same thing as saying the universe is non-deterministic)!
While one cannot directly draw any conclusions about the brain from these, when constructing our priors it perhaps behooves us to take them as evidence suggesting a weighting away from hypotheses reliant on exceptions and special clauses.