Friday, January 27, 2006
Open letter to the Organization of the Islamic Conference
Below is the text of an email I sent on 27 Jan. 2006 to the Organization of the Islamic Conference in Saudi Arabia.
I would like to protest the OIC's shameful attack on free speech and expression.
You should retract your attacks on Jyllands-Posten. You cannot claim to be for human rights while at the same time attacking the rights of those who happen to disagree with you.
Furthermore, your actions raise the question: is your religion truly so weak that it is threatened by a few silly drawings? Come on... you are simply defaming yourselves and your religion by your overreaction.
Please show others the tolerance you'd hope to be shown yourselves.
Sincerely,
Charles N. Steele
I am hoping no one in Saudi Arabia knows where Montana is.
Diversity Fundamentalism
In case you have missed it, a portion of the Muslim world is in an uproar over some cartoon portrayals of Mohammed that appeared in a Danish newspaper, the Jyllands-Posten. The UN has become involved by appointing “racism experts” to investigate this possible crime of “disrespect for belief.” In the meantime, the paper has had to hire security guards to protect its staff.
“We” here at Unforeseen Contingencies are always happy to help stimulate disrespect for nonsensical beliefs, and revealed religions are the most nonsensical ideas humans have yet developed (although the U.N.’s High Commissioner for Human Rights, Louise Arbour, is challenging this with her idiotic offense of “disrespect for belief”). So, as a public service, “we” are providing a link to the “blasphemous” cartoons.
BUT BE FOREWARNED! They are pretty mild, far less offensive than cartoons mocking, say, George W. Bush, and would only offend a particularly foolish, backwards, and illiberal fundamentalist kook…someone like Louise Arbour.
Thursday, January 19, 2006
Endorsement: for President in 2008
In defiance of McCain-Feingold, I am making the following endorsement, and accepting donations, corporate or otherwise, in any amounts, from anyone.
Unforeseen Contingencies officially endorses for U.S. President Swami Beyondananda and his "Right to Laugh" Party.
"We have the right and duty to laugh at our leaders, particularly those times when their actions are either seriously foolish or foolishly serious. We have the right to help them laugh with each other and at themselves, and if they are incapable of doing that, we have the right to laugh them out of power."
(From the Right-to-Laugh Manifesto)
Check out Swami's site, join the movement, laugh at Bush et al., and send me money (I will likely need it for my legal defense).
Thursday, January 05, 2006
Beliefs, Knowledge, and the Limbic System
Why do people believe the things they believe? This is a fundamental question, and there’s no good answer. It’s an important question, because people’s beliefs are among the crucial factors in their determinations of what is possible (the choice set) and what is desirable (their preferences). Prior to Deng Xiaoping’s “revolution” in China, the common belief in China was that markets and capitalism would lead to economic debacle and impoverishment. Deng offered a new vision of what was possible (particularly because he had the examples of Hong Kong and Taiwan before him) that suggested a different choice set – a non-centrally planned system might create wealth rather than simply rob workers and peasants in a zero-sum game.
In a few cases, such as China’s transition, it is relatively easy to understand how great shifts in beliefs came about. But in most cases it’s difficult to see why people believe what they do. In particular, why are people sometimes persuaded by reason and evidence, yet at other times hold beliefs despite overwhelming evidence against them?
Research in neuroscience has identified a distinction between the limbic system and the system governed by the prefrontal cortex. These are not independent and isolated, and there is a great deal of interplay and coordination between them. But for some purposes they can be roughly thought of as governing emotions (limbic) and rational thinking (prefrontal cortex). Some neuroeconomic research has suggested that when humans make decisions, the timing of payoffs matters. Payoffs that will be received in the immediate future tend to be evaluated by the limbic system (emotions), while payoffs that will be received further out tend to be evaluated by rational thinking. (Again, this is a somewhat crude characterization; it’s fairly well established, for example, that if the limbic system functions poorly because of an injury, rational thinking will function poorly as well – there are crucial interdependencies between the systems. There is no clean dichotomy between reason and emotion as is sometimes supposed.)
This might be the explanation for commitment devices that seem to fly in the face of neoclassical preference theory. The neoclassical view of preferences holds that they are consistent and unchanging; hence individuals should not need commitment devices to lock in their own choices and keep their own behavior self-consistent. Yet individuals do indeed use such devices (e.g. giving a friend the car keys before going drinking, signing up for plans that commit them to save part of future paychecks, etc.). Economists have developed models in which agents essentially have two selves to account for these behaviors; neuroscience appears to give a biological basis for this approach.
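One standard way economists formalize the two-self story is quasi-hyperbolic ("beta-delta") discounting, in which an extra factor beta < 1 penalizes every delayed payoff relative to an immediate one. Here is a minimal sketch in Python; the parameter values are purely illustrative, not taken from any study.

def discounted_value(payoff, delay, beta=0.7, delta=0.95):
    # Present value of a payoff received `delay` periods from now.
    # beta < 1 is the extra, limbic-style impatience toward any delay;
    # delta is the ordinary per-period discount factor.
    if delay == 0:
        return payoff
    return beta * (delta ** delay) * payoff

# Today's choice: $100 now versus $110 tomorrow.
# The immediate payoff escapes the beta penalty, so "now" wins.
print(discounted_value(100, 0) > discounted_value(110, 1))      # True

# The same choice viewed a year in advance (delays 365 vs. 366):
# both payoffs carry the beta penalty, and the larger one wins.
print(discounted_value(100, 365) > discounted_value(110, 366))  # False

The reversal is precisely what a commitment device guards against: the far-sighted self ranks the options one way, and the self standing at the moment of choice ranks them the other.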
No doubt further research along these lines will help us answer why we believe what we believe. But this is perhaps sufficient to give us a scientific basis for a distinction between knowledge and belief, one that we can at least apply to our own thinking. (Never mind trying to determine why others believe as they do; it’s hard enough trying to figure this out for oneself.)
A belief is any idea that one holds. Knowledge is typically defined as justified true belief. “Justified” means the belief is held because of reason – it is self-consistent and consistent with other knowledge – and evidence – it is consistent with observed facts. “True” means that it corresponds to fact. Let’s limit ourselves for the moment to general empirical propositions and assume that such propositions can never be verified, only falsified or not falsified. In this case, we can never know with one hundred percent certainty that something is true. Therefore, at least in this context, I define knowledge as justified belief – justified signifying that evidence and reason point to one particular belief as superior to all identified alternatives.
Hence our knowledge is a subset of our beliefs. Arguably, our beliefs lie on a continuum between falsified and completely justified (held with theoretical one hundred percent certainty), although this isn’t necessary for what follows.
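Stated compactly (the notation is mine, not standard epistemology): if B is the set of beliefs one holds, and J(p) means "evidence and reason favor p over all identified alternatives, and p is not falsified," then

K = { p ∈ B : J(p) } ⊆ B

and the continuum view simply grades J(p) from falsified up toward complete justification rather than treating it as all-or-nothing.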
It seems clear that the justification that makes a belief knowledge is a function of the rational system, the workings of the prefrontal cortex. This may give a standard that’s useful for an individual to sort out why s/he believes the things s/he does. Here’s the test – for any given belief you hold, ask the following: does the degree to which I believe in this idea fluctuate with my emotional state? If so, the belief shouldn’t be considered something I know. In other words, is my acceptance of the idea largely independent of the degree to which my limbic system is in charge? If so, the belief has passed the first test to be classified as knowledge. The second test is whether it is reasonable (self-consistent and consistent with other knowledge) and empirically warranted (not falsified). On the other hand, if the degree to which I accept a particular belief depends importantly upon my emotional state, then I should not treat that belief as knowledge. I may still hold it, but I should admit that it’s a cherished belief, maybe a best guess, and maybe even true…but it doesn’t have the status of knowledge. I should not expect to persuade others of its truth, since my own acceptance of it depends not on justification but on the influence of my limbic system.
Of course, this is a difficult standard to apply, since it requires careful introspection and self-honesty. I think it is useful nevertheless. Here’s an example. I sometimes describe myself as an atheo-pantheist. As an atheist, I do not believe in God; even more, I find all of the definitions of God to be internally contradictory and in many cases empirically falsified. Hence I not only lack belief in God, I believe God doesn’t exist. This belief remains consistent regardless of my emotional state. (There may be no atheists in foxholes, but there are atheists in near-death experiences.) Hence I treat this as knowledge. On the other hand, I also have strong elements of pantheism in my thinking. These seem to be internally consistent at least, and not falsified. However, I notice that the degree to which I accept these ideas depends in part on my emotional state. This immediately raises a warning flag – not that the ideas are necessarily wrong, but that my “reasons” for accepting them may be independent of justifications for accepting them. Hence I recognize that these ideas are not knowledge, and need to be treated differently. I accept them, but much more guardedly and provisionally – even though at times I find them utterly and completely convincing.
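As a toy illustration only (the function and its boolean inputs are my invention, and the inputs are introspective self-reports, not anything measured in a brain), the two-step test can be written as a filter and scored on the post's own two examples:

def is_knowledge(stable_across_moods, self_consistent,
                 consistent_with_other_knowledge, falsified):
    # Test 1: acceptance does not fluctuate with emotional state.
    if not stable_across_moods:
        return False  # at best a cherished belief, not knowledge
    # Test 2: reasonable (self-consistent, consistent with other
    # knowledge) and not empirically falsified.
    return (self_consistent and consistent_with_other_knowledge
            and not falsified)

print(is_knowledge(True, True, True, False))   # the atheism example -> True
print(is_knowledge(False, True, True, False))  # the pantheism example -> False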
It is even more difficult to apply this test interpersonally, although in some instances this may be possible. It’s easy to find examples of deeply thoughtful and well-informed people whose statements about politics, religion, history, philosophy, the future of humanity, etc. vary systematically and dramatically according to their emotional states. In fact, this seems to be the norm, not the exception.
In one sense, maybe I have said nothing new here – the distinction between accepting an idea on the basis of reason and evidence versus emotional appeal is as old as epistemology. And maybe it doesn’t matter whether it’s new to suggest that we should each try to identify why we believe the things we do, and to link that to our brain functioning. But I think it is useful. I also think it has further important implications for epistemology, for individual decision making, and for the evolution of societies. I’ll develop these issues in future posts.
Wednesday, January 04, 2006
Oh those Russians!
Happily, Russia’s attempt to blackmail Ukraine through the state-controlled gas monopoly Gazprom seems to have collapsed. Putin’s soulful eyes notwithstanding, this episode ought to be a wake-up call for the West. Putin appears to be working as best he can to undo the undoing of the Soviet Union.
This might not be the best time to increase our dependence on Russian energy, nor to permit Iran to base its nuclear programs in Russia.