The reason you'd want to start with the Turing machine as the paradigm is that it's the simplest universal computational model we know, and unless one can find something the brain can do that it cannot, there's no reason to look elsewhere. Since free will seems to have no basis, we needn't concern ourselves with whether a Turing machine can replicate it. While consciousness would likely be required for the exercise of free will were free will shown to exist, the two aren't the same. The question, then, is whether a Turing machine can show consciousness. Open question. Since there's no evidence or mathematical proof that it can't, and since the brain is a physical incarnation that in many ways acts like a digital computer (synapses fire or don't, for instance), there's no reason to conclude that a computer, digital or otherwise, can't show consciousness. There's no argument to be made that because the brain is organic matter and the digital computer silicon, the brain a priori has special properties. Quite the contrary: DNA, as an example, has been shown to be able to perform at least Turing-like computation. Certainly Information Theory applies, and the equivalence of that theory to Thermodynamics has been shown, so this result should come as no surprise. It would only be a surprise if it didn't.
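To make the abstraction concrete: a Turing machine is nothing but a tape, a head, a finite set of states, and a transition table. A toy sketch in Python (the particular machine, a binary incrementer, is my own illustrative choice, not anything from the discussion above):

```python
# A minimal Turing machine: a tape, a head position, a current state, and a
# transition table mapping (state, symbol) -> (symbol to write, move, next state).
# This example machine increments a binary number in place.

def run_tm(tape, transitions, state, head, blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))           # sparse tape, unbounded in both directions
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        action = transitions.get((state, symbol))
        if action is None:                 # no applicable rule: halt
            break
        write, move, state = action
        tape[head] = write
        head += {"L": -1, "R": +1}[move]
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1))

# Binary increment: scan left from the last digit, flipping 1 -> 0 (carry)
# until a 0 or a blank is found, which becomes 1.
INC = {
    ("inc", "1"): ("0", "L", "inc"),
    ("inc", "0"): ("1", "L", "halt"),
    ("inc", "_"): ("1", "L", "halt"),
}

print(run_tm("1011", INC, state="inc", head=3))  # -> 1100
```

That's the whole machine; everything a universal computer does reduces to table lookups like these, which is exactly why it's the right baseline for the brain question.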
Again, I remain surprised at all those who think such things are too esoteric to apply to biological organisms, and especially to people. The belief that people and our brains are somehow different and outside such laws just because we're people is religion until proven otherwise. Just because the typical medical doctor doesn't know that transport across cellular boundaries is well described by the drift-diffusion equation, or that synaptic firings follow well-known circuit and electrostatic principles, doesn't mean it ain't so.
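For reference, the drift-diffusion description of ion transport mentioned above is the Nernst-Planck equation, which writes the flux of an ion species as a diffusion term plus an electric drift term:

```latex
J = -D\left(\frac{\partial C}{\partial x} + \frac{zF}{RT}\,C\,\frac{\partial \phi}{\partial x}\right)
```

Here $C$ is concentration, $D$ the diffusion coefficient, $z$ the ion's valence, $\phi$ the electric potential, and $F$, $R$, $T$ the Faraday constant, gas constant, and temperature. Nothing mystical about a cell membrane in that.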
Then again, let's conjecture, for our entertainment, that the brain is different: a computational device of a description for which we have no model or understanding. This is Penrose's view. Certainly it's easy to think of brain functions other than free will and consciousness that a Turing machine of any level of complexity may not possess. Intuition, where we synthesize a solution to a problem, doesn't on the face of it seem algorithmic, nor does innovation. Could a Turing machine, operating syntactically, be shown a car and a set of skis and come up with a snowmobile? Dunno, but if it could, I would imagine it could do so only by trying a large number of possibilities and comparing each result against a prescribed set of success criteria. I don't think it could do so on its own, though I can't prove that, nor can anybody else. For now. At a higher level, since Godel's Theorem was brought up, I don't see a Turing machine being able to step outside of a sufficiently complex system and determine its completeness. As an aside, I would bet that anything capable of doing so would itself be of such complexity that Godel's Theorem could be applied to it. At least as a necessary but not sufficient condition.
Then let's take that a step further and conjecture that the brain is special because it came into being via evolution. There's no real basis for that assumption, but it provides some distinction from the physical incarnations of the Turing machine with which we are familiar, and, to be clear, we need only find some non-superficial characteristic of the brain that is not shared by a Turing machine to make the brain something special. Here again, though, we have to allow that there are ways to make a Turing machine also evolve and update itself as it goes. The real question there, deeper than it sounds, is whether a Turing machine so evolved is still a Turing machine. Which would lead one to ask whether early brains were really Turing machines that evolved into something else. This is interesting. Is consciousness something that comes on all of a sudden or slowly? Do our kids one day wake up conscious, or do they become so gradually, through some intermediate state, discrete or continuous, of some description? Worth much more thought. Certainly parts of our brains must be Turing-like, at least those that run the autonomic functions. All that said, any way you look at it, there must be some physical mechanism (which we have to identify and understand) by which this is achieved, whether through the hypothesized microtubules that Bucket mentioned or through some interaction of the same, for if all evolution does is tweak the algorithm, we're back to a Turing machine.
As to books written by psychologists and psychiatrists, knowing nothing else, the default reaction is: don't bother. While there are some psychologists and psychiatrists of great erudition who can engage deeply on the subject, most are just a few steps removed from the thought processes behind Freud's explanation of human behavior as manifesting because every guy wants to bend mommy over a piece of furniture, or the Greek philosophers who explained away gravity by giving inanimate objects the agency to desire to be nearer other objects. "Neuroscientist" seems of late to be a relabeling of "psychologist."
Quantum mechanics is an extraordinarily powerful description of the world, and this very conversation over the internet owes its existence to that theory. That some smacked ass somewhere cheapens it by throwing around jargon he doesn't understand, in the same way Einstein's Relativity Theory is abused as "even Einstein says everything's relative," doesn't diminish its explanatory power in any way.
Circling back to the Turing machine question: rephrased, the problem is whether suitably complex syntax can mimic even simple semantics. Have to think about that.