Thursday, January 05, 2006

Beliefs, Knowledge, and the Limbic System

Why do people believe the things they believe? This is a fundamental question, and there’s no good answer. It’s an important question, because people’s beliefs are among the crucial factors determining what they see as possible (the choice set) and what they see as desirable (their preferences). Prior to Deng Xiaoping’s “revolution,” the common belief in China was that markets and capitalism would lead to economic debacle and impoverishment. Deng provided a new vision of what was possible (drawing in particular on the examples of Hong Kong and Taiwan) that suggested a different choice set – a non-centrally-planned system might create wealth rather than simply rob workers and peasants in a zero-sum game.

In a few cases, such as China’s transition, it is relatively easy to understand how great shifts in beliefs came about. But in most cases it’s difficult to see why people believe what they do. In particular, why are people sometimes persuaded by reason and evidence, while at other times they hold beliefs despite overwhelming evidence against them?

Research in neuroscience has identified a distinction between the limbic system and the system governed by the prefrontal cortex. These are not independent and isolated, and there is a great deal of interplay and coordination between them. But for some purposes they can be roughly thought of as governing emotions (limbic) and rational thinking (prefrontal cortex). Some neuroeconomic research has suggested that when humans make decisions, the timing of payoffs matters. Payoffs that will be received in the immediate future tend to be evaluated by the limbic system (emotions), while payoffs that will be received further out tend to be evaluated by rational thinking. (Again, this is a somewhat crude characterization; it’s fairly well established, for example, that if the limbic system functions poorly because of an injury, rational thinking will function poorly as well – there are crucial interdependencies between the systems. There is no clean dichotomy between reason and emotion as is sometimes supposed.)

This might explain commitment devices that seem to fly in the face of neoclassical preference theory. The neoclassical view of preferences holds that they are consistent and unchanging; hence individuals should not need commitment devices to lock in their own choices and keep their own behavior self-consistent. Yet individuals do indeed use such devices (e.g. giving a friend the car keys before going drinking, signing up for plans that commit them to save from future paychecks, etc.). To account for these behaviors, economists have developed models in which agents essentially have two selves; neuroscience appears to give a biological basis for this approach.
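One well-known model of this type is quasi-hyperbolic (“beta-delta”) discounting; the post doesn’t name a particular model, so take this as an illustrative sketch rather than the author’s specific reference. A present-biased agent at time $t$ values a consumption stream as

\[
U_t = u(c_t) + \beta \sum_{k=1}^{T-t} \delta^{k}\, u(c_{t+k}), \qquad 0 < \beta < 1,
\]

where $\delta$ is the ordinary exponential discount factor and $\beta$ is an extra discount applied to everything that is not immediate. Because the $\beta$ penalty is re-applied from each new “present,” today’s self and tomorrow’s self disagree about the trade-off between tomorrow and the day after – precisely the internal conflict that makes a commitment device worth paying for.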

No doubt further research along these lines will help us answer why we believe what we believe. But this is perhaps sufficient to give us a scientific basis for a distinction between knowledge and belief, one that we can at least apply to our own thinking. (Never mind trying to determine why others believe as they do; it’s hard enough trying to figure this out for oneself.)

A belief is any idea that one holds. Knowledge is typically defined as justified true belief. “Justified” means that the belief is held because of reason – it is self-consistent and consistent with other knowledge – and evidence – it is consistent with observed facts. “True” means simply that it is so. Let’s limit ourselves for the moment to general empirical propositions and assume that such propositions can never be verified – the only possibilities are that they are falsified or not falsified. In this case, we can never know with one hundred percent certainty that something is true. Therefore, at least in this context, I define knowledge as justified belief, where “justified” signifies that evidence and reason point to one particular belief as superior to all identified alternatives.
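The verification/falsification asymmetry assumed here is the standard logical one (my gloss; the post takes it as given). If hypothesis $H$ predicts observation $O$, a failed prediction refutes $H$ by modus tollens, while a successful prediction proves nothing, since arguing from $O$ back to $H$ is the fallacy of affirming the consequent:

\[
(H \to O),\ \neg O \ \vdash\ \neg H \quad \text{(valid: falsification)}
\qquad
(H \to O),\ O \ \nvdash\ H \quad \text{(invalid: verification)}
\]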

Hence our knowledge is a subset of our beliefs. Arguably, our beliefs lie on a continuum from falsified to completely justified (held, in theory, with one hundred percent certainty), although this isn’t necessary for what follows.

It seems clear that the justification that makes a belief knowledge is a function of the rational system, the workings of the prefrontal cortex. This may give a standard that’s useful for an individual in sorting out why s/he believes the things s/he does. Here’s the test – for any given belief you hold, ask the following: does the degree to which I believe in this idea fluctuate with my emotional state? If so, the belief shouldn't be considered something I know. Conversely, if my acceptance of the idea is largely independent of the degree to which my limbic system is in charge, the belief has passed the first test to be classified as knowledge. The second test is whether it is reasonable (self-consistent and consistent with other knowledge) and empirically warranted (not falsified). On the other hand, if the degree to which I accept a particular belief depends importantly upon my emotional state, then I should not treat that belief as knowledge. I may still hold it, but I should admit that it’s a cherished belief, maybe a best guess, and maybe even true…but it doesn’t have the status of knowledge. I should not expect to persuade others of its truth, since my own acceptance of it depends not on justification but on the influence of my limbic system.
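As a toy sketch (my own illustration, not anything from the post or the neuroeconomics literature), the two-part test can be written as a small decision procedure. Every field and threshold below is a hypothetical stand-in – in reality these are judgments reached by introspection, not measured quantities:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """A belief plus introspective judgments about it.

    All fields are hypothetical stand-ins: each would come from honest
    self-examination, not from any actual measurement.
    """
    statement: str
    confidence_by_mood: list  # felt confidence sampled across emotional states
    self_consistent: bool     # free of internal contradictions
    coheres: bool             # consistent with everything else I count as knowledge
    falsified: bool           # contradicted by observed facts

def is_knowledge(b: Belief, tolerance: float = 0.05) -> bool:
    """Apply the post's two tests to a belief."""
    # Test 1: acceptance must be stable across emotional states. If felt
    # confidence swings with mood, the limbic system is doing the work.
    if max(b.confidence_by_mood) - min(b.confidence_by_mood) > tolerance:
        return False
    # Test 2: the belief must be reasonable (self-consistent, coherent
    # with other knowledge) and empirically warranted (not falsified).
    return b.self_consistent and b.coheres and not b.falsified

# Usage: a belief held steadily across moods passes; one that swings fails.
steady = Belief("Water boils at a lower temperature at altitude",
                [0.95, 0.94, 0.95], True, True, False)
moody = Belief("Everything happens for a reason",
               [0.9, 0.3, 0.6], True, True, False)
print(is_knowledge(steady))  # True: stable, reasonable, not falsified
print(is_knowledge(moody))   # False: fails the stability test
```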

Of course, this is a difficult standard to apply, since it requires careful introspection and self-honesty. I think it is useful nevertheless. Here’s an example. I sometimes describe myself as an atheo-pantheist. As an atheist, I do not believe in God; even more, I find all of the definitions of God to be internally contradictory and in many cases empirically falsified. Hence I not only lack belief in God, I believe God doesn’t exist. This belief remains consistent regardless of my emotional state. (There may be no atheists in foxholes, but there are atheists in near-death experiences.) Hence I treat this as knowledge. On the other hand, I also have strong elements of pantheism in my thinking. These seem to be internally consistent at least, and not falsified. However, I notice that the degree to which I accept these ideas depends in part on my emotional state. This immediately raises a warning flag – not that the ideas are necessarily wrong, but that my “reasons” for accepting them may be independent of any justification for accepting them. Hence I recognize that these ideas are not knowledge, and need to be treated differently. I accept them, but much more guardedly and provisionally – even though at times I find them utterly and completely convincing.

It is even more difficult to apply this test interpersonally, although in some instances this may be possible. It’s easy to find examples of deeply thoughtful and well-informed people whose statements about politics, religion, history, philosophy, the future of humanity, etc. vary systematically and dramatically according to their emotional states. In fact, this seems to be the norm, not the exception.

In one sense, maybe I have said nothing new here – the distinction between accepting an idea on the basis of reason and evidence versus emotional appeal is as old as epistemology. And perhaps the suggestion that we should each try to identify why we believe the things we do, and link that to our brain functioning, is not new either. But I think it is useful. I also think it has further important implications for epistemology, for individual decision making, and for the evolution of societies. I’ll develop these issues in future posts.

Comments:
My (quite limited) knowledge is primarily from the neuroeconomics literature. The place to start is:

Camerer, Loewenstein, and Prelec, “Neuroeconomics: How Neuroscience Can Inform Economics,” Journal of Economic Literature, March 2005.

This piece surveys some interesting work and provides loads of citations.

There are a couple of related articles in the latest Journal of Economic Perspectives (Fall 2005); it just came out so I haven't read them yet.

Most of this material reflects economists' interests, esp. the rationality of decision making.

I'd be very interested in hearing what you make of all this.
 