What to say about this?
Well, first the good: Harris makes a few good points that really need to be made in a public way. They are not new points--absolutely nothing here is even close to being new--but some of them are worth making.
Harris's thesis is that ethics is something like an incipient science. What he says is this: "...morality should be considered an undeveloped branch of science." My formulation is probably better, as morality is something more like the phenomenon and ethics is something more like the study of it. Hence ethics (or moral theory) is analogous to science and moral rightness more like the phenomena studied by the science. Harris also holds (roughly) that ethics is the science of maximizing well-being. That is, he's a consequentialist. Again: nothing new here.
If there's nothing new here...if these conversations just recapitulate the early moves in the on-going debates in moral theory, then why does Harris re-start the discussion without reference to those very well-developed debates? According to him, he finds terms like "deontology," "meta-ethics" and "non-cognitivism" boring.
This, incidentally, is what it's like to be a philosopher: you slowly and painstakingly grind out distinctions, definitions, and the occasional fairly secure conclusion, but people couldn't give a rat's ass. They just wade in as if they were inventing the stuff. And that's a tad bit annoying. (Not that it's not good for people to do their own thinking--but they'd spare themselves lots of work and embarrassment if they'd just get up to speed by attending to what's already been figured out.) But no need to stew over this. At least there's a little philosophy going on in public forums, and that's good.
In this piece, Harris is responding to criticism from, e.g. the physicist Sean Carroll, who trots out the old chestnut that "there is no single definition of well-being." Harris rightly points out that if this is supposed to bring us to a screeching halt, then every branch of inquiry is in trouble, since there is always someone who disagrees about something. This is the kind of point we make to our students in 101. Again, it's not new, but it's a point worth making.
Harris also shows himself to be on the side of the angels when he writes that "...it has been disconcerting to see the caricature of the over-educated, atheistic moral nihilist regularly appearing in my inbox."
Now, I certainly think that moral nihilism is a view that deserves respect and careful attention. What irritates me, however, is how many otherwise intelligent people smugly advocate nihilism, or views that entail nihilism, with apparently absolutely no recognition of the earth-shaking consequences of their beliefs. Nor do they generally recognize the self-defeating nature of their positions. For example, as a means of blocking right-wing puritanism, many on the left hold that morality is a pure and unmitigated "matter of opinion" (or, writ large, a "social construction"), i.e. that there are no moral truths. They think this gets them some kind of liberal permissiveness about e.g. sex, when what it actually gets them is the view that there is not a thing in the world wrong with being Hitler or Pol Pot--or Jerry Falwell. Among other things, it gets them that sexual permissiveness is no better than sexual puritanism. With no moral truths, there is no kind nor degree of oppression that is impermissible, since nothing is impermissible. Permitting homosexual actions is no better than forbidding them--both are ungrounded choices that cannot be rationally evaluated.
So Harris gets all that right. But he also gets many important things wrong.
He may find that the term 'deontologism' "increases the amount of boredom in the universe," but the discussion he wants to have cannot be had without that concept being in play. (Imagine someone saying: I want to talk about mathematics, but I find the term 'function' boring, so I don't want to look at anything previous mathematicians have done.) Harris presupposes many things he's not entitled to presuppose, and perhaps foremost among them is that consequentialism is true (and deontologism is false). That is, he presupposes that what we ought to do is to maximize well-being. However, the perils of such a view are well-known. Suppose that you need kidneys and I need a liver, and that we'll die without them. Suppose also that Smith, an innocent person, happens by, sporting healthy organs of the specified kinds. Suppose that we can easily kill him and take them and the world will be a better place for it--Smith is an unpleasant sort with no family or friends; but we are otherwise. On simple views of Harris's sort, we are not only permitted to kill Smith and take his guts, we must do so. One might try to squirm out of this by invoking speculative claims about our subsequent guilt, but such defenses don't work--some people's guilt will, after all, be outweighed by their future well-being. Ergo Harris cannot affirm what seems to be true--that we'd never be justified in killing Smith just because we need his guts. Of course the conversation gets complicated after this point...but Harris seems oblivious to even these initial moves in the discussion.
Carroll raises roughly the above obvious objection. Harris's reply shows how far out of his depth he is:
It is true that many people believe that "there are non-consequentialist ways of approaching morality," but I think that they are wrong. In my experience, when you scratch the surface on any deontologist, you find a consequentialist just waiting to get out. For instance, I think that Kant's Categorical Imperative only qualifies as a rational standard of morality given the assumption that it will be generally beneficial (as J.S. Mill pointed out at the beginning of Utilitarianism). Ditto for religious morality. This is a logical point before it is an empirical one, but yes, I do think we might be able to design experiments to show that people are concerned about consequences, even when they say they aren't.

Um, no. Kant is not a consequentialist. This is an elementary error. If you're making mistakes like this, then you shouldn't be pronouncing on these topics. Rather, you should be hitting the stacks. Kant did not think that the Categorical Imperative was the fundamental principle of morality because of any consequences it has, nor did he think that the moral law has rational authority because it had good consequences. Rather, Kant holds that morality has a kind of rational authority over us in virtue of our rationality and autonomy. The authority of the Categorical Imperative over our actions seems to be rather like the authority of the Law of Non-Contradiction over our thinking. To will something not in accordance with the Categorical Imperative is to will incoherently, hence irrationally (and, on some interpretations, not to even will at all--to abandon our rationality and autonomy). And there is no hidden premise in play about maximizing rationality or any such thing. Kant could (obviously) be wrong--but he is certainly no consequentialist. To insist that every view must reduce to consequentialism is simply to betray a misunderstanding of the logical terrain. Even if the topic of deontologism bores you, you really ought to understand the view before dismissing it.
But let me end with something important that Harris is right about. Harris notes that Sean Carroll and P. Z. Myers hold--as do so many people--that skepticism is a problem for morality, but not for science. If you just grant science its presuppositions, then it can do all sorts of wonderful things. Carroll writes:
And finally: pointing out that people disagree about morality is not analogous to the fact that some people are radical epistemic skeptics who don't agree with ordinary science. That's mixing levels of description. It is true that the tools of science cannot be used to change the mind of a committed solipsist who believes they are a brain in a vat, manipulated by an evil demon; yet, those of us who accept the presuppositions of empirical science are able to make progress.

But Harris's response is right on the money, even if, again, the point is familiar to philosophers:
Of course, it is easy enough for Carroll to assert that moral skepticism isn't analogous to scientific skepticism, but I think he is simply wrong about this. To use Myers's formulation, we must smuggle in an "unscientific prior" to justify any branch of science. If this isn't a problem for physics, why should it be a problem for a science of morality? Can we prove, without recourse to any prior assumptions, that our definition of "physics" is the right one? No, because our standards of proof will be built into any definition we provide. We might observe that standard physics is better at predicting the behavior of matter than Voodoo "physics" is, but what could we say to a "physicist" whose only goal is to appease the spiritual hunger of his dead ancestors? Here, we seem to reach an impasse. And yet, no one thinks that the failure of standard physics to silence all possible dissent has any significance whatsoever; why should we demand more of a science of morality?
So, while it is possible to say that one can't move from "is" to "ought," we should be honest about how we get to "is" in the first place. Scientific "is" statements rest on implicit "oughts" all the way down. When I say, "Water is two parts hydrogen and one part oxygen," I have uttered a quintessential statement of scientific fact. But what if someone doubts this statement? I can appeal to data from chemistry, describing the outcome of simple experiments. But in so doing, I implicitly appeal to the values of empiricism and logic. What if my interlocutor doesn't share these values? What can I say then? What evidence could prove that we should value evidence? What logic could demonstrate the importance of logic? As it turns out, these are the wrong questions. The right question is, why should we care what such a person thinks in the first place?
Science cannot secure its own foundations in the ways that Carroll and Myers want it to. Above, Carroll seems to make two common mistakes at once--he suggests that science should get to simply presuppose the rationality of its method, and he points out that, if we do so, then we'll see that science is successful. The two errors here are (1) presupposing what needs proof, and (2) pointing to the successes of science as evidence of the truth of these presuppositions. Of course you could just assume the rationality of the scientific method--but then you're left with an unjustified assumption at the heart of our seemingly most important and successful epistemic endeavor. And, of course, you'd have to let morality assume anything it needed. So if we're allowing wanton assumptions, then any view can be "defended" in this way. On the other hand, if we point to the successes of science in support of the rationality of science, we have to ask: how do we know that science is successful? The answer ends up being, roughly: when we carefully tabulate the successes of science and compare them to the successes of other methods, science wins. But that sounds like: when we scientifically compare science to non-science, science wins. And that, ignoring a few details, seems circular: science shows that science is best. We have little reason to care what science says about its own success unless we already have reason to think that science is the right way of investigating things.
The view I currently favor, FWIW, is a Peircean view of the logic of science based on a Kantian deontological ethics of belief. (The best place to see this view worked out is in Richard Smyth's Reading Peirce Reading, incidentally. Don't mind the title--it was supposed to be the much less trendy Readings of Peirce.) But to understand this view requires understanding a lot of the long conversation that is Western philosophy. In particular, it requires understanding some moral theory. And yes, it requires understanding deontologism, even if you find it boring...
[Addendum:
Harris also ends with the following tangle of confusions. There are points hidden in there, but as it stands it's rather a mess. Let me just reiterate that these are the kinds of mistakes you end up making if you ignore what all the smart people who came before you had to say, and try to solve the most difficult problems about science and morality in one essay on the Huffington Post...:
So, while it is possible to say that one can't move from "is" to "ought," we should be honest about how we get to "is" in the first place. Scientific "is" statements rest on implicit "oughts" all the way down. When I say, "Water is two parts hydrogen and one part oxygen," I have uttered a quintessential statement of scientific fact. But what if someone doubts this statement? I can appeal to data from chemistry, describing the outcome of simple experiments. But in so doing, I implicitly appeal to the values of empiricism and logic. What if my interlocutor doesn't share these values? What can I say then? What evidence could prove that we should value evidence? What logic could demonstrate the importance of logic? As it turns out, these are the wrong questions. The right question is, why should we care what such a person thinks in the first place?
So it is with the linkage between morality and well-being: To say that morality is arbitrary (or culturally constructed, or merely personal), because we must first assume that the well-being of conscious creatures is good, is exactly like saying that science is arbitrary (or culturally constructed, or merely personal), because we must first assume that a rational understanding of the universe is good. We need not enter either of these philosophical cul-de-sacs.
Carroll and Myers both believe nothing much turns on whether we find a universal foundation for morality. I disagree. Granted, the practical effects cannot be our reason for linking morality and science -- we have to form our beliefs about reality based on what we think is actually true. But the consequences of moral relativism have been disastrous. And science's failure to address the most important questions in human life has made it seem like little more than an incubator for technology. It has also given faith-based religion -- that great engine of ignorance and bigotry -- a nearly uncontested claim to being the only source of moral wisdom. This has been bad for everyone. What is more, it has been unnecessary -- because we can speak about the well-being of conscious creatures rationally, and in the context of science. I think it is time we tried.
Harris is right in the first paragraph until the last sentence, where he suddenly tries to dismiss the very problem at issue. These questions are real questions, and waving your hands in irritation will not make them go away. And, besides, wasn't he just taking these questions seriously like two sentences earlier? He's right that moral relativism has had bad consequences, but that's rather beside the point. What matters is that the view is false--though he gives us no reason to think so here. He's also right that the search for a rational ground of morality is important--philosophically important...important qua theoretical question--though I'm not sure it's practically important, since I think that most people recognize the rational authority of morality in action, even if they deny it in words. And I actually think that science might be more relevant to that search than it's common to think. But it's not relevant in the way Harris thinks it is. Of course if we simply assume the truth of a certain type of consequentialism, then suddenly science becomes clearly and directly relevant. But everything interesting here is simply packed into Harris's unproven assumptions.
So, again: there's nothing new here, and nothing that interesting to anyone who's had an introductory ethics course. I'm glad somebody's talking about this stuff in a public way, and Harris is, by my lights, right about a lot of stuff that many people get wrong...but in the end, it's hard to classify the majority of this as anything but sophomoric.]
15 Comments:
While I understand the epistemic (ho ho!) objection to using science to validate the logical grounding of science, it's always bothered me that, as far as I know, none of the critics who raise this point have ever been able to even vaguely sketch out what shape a solution to the problem might take. What, precisely, would satisfy the objection? Do we need to revive Divine Command Theory or something and say that God says science works, therefore we can trust science?
I mean, I encounter this objection almost exclusively as part of some Deepak Chopra-esque "other way of knowing" argument, where people want to say that, since we can't prove science works without using science, therefore reading chakras or whatever is just as good. Which seems rather obviously bullshit, just on the basis of my own anecdotal experience and the anecdotal experience of pretty much everybody I've ever met who wasn't personally involved in a scam to bilk people of money by reading and manipulating chakras.
Perhaps this is just the damned, dirty scientist in me, but I tend to think any problem where we can't even take a random half-assed guess at a half approximation to some sort of vague attempt at a solution just isn't even worth wasting breath on.
Well, I'm not sure a "damned, dirty scientist" should think that all problems are soluble. Though we, of course, DO have random, half-assed guesses at approximations to some sort of vague attempt at solutions. Though it isn't clear that any of them work.
I think these questions are rather too difficult for bloggy venues, but here's a gesture at a possible solution. You won't like it. Few scientists will like it. Sketching it out in this preposterously abbreviated way will probably do more harm than good...but here's a short description anyway:
It's not crazy uncommon to think that there is an ethics of belief--that is, that we have theoretical obligations as well as practical ones. Someone who cheats to "prove" his pet hypothesis is a cheater--he is, in an important respect, like someone who cheats at cards. We have obligations with regard to our thinking, rather as we have obligations with regard to our actions. Thinking is, after all, a type of action. So this shouldn't seem like the craziest thing you've ever heard.
Now, there are two relevant views about obligations we need to think about: consequentialism and deontologism. Consequentialism is the view that right and wrong, good and bad are determined ONLY by consequences. Deontologism holds that other things can matter. In particular, we have obligations that have nothing to do with maximizing outcomes. The scientist who cheats with his data has done something wrong, even if he is accidentally right--even if the outcome happens to be good.
If we're going to try to work out an ethics of belief--a theory of right and wrong thinking--the first two types of view we'll think of are consequentialist views and deontological views. What if we try to go consequentialist here, as so many scientists want to do? Then we'll encounter the circularity problem--if we say that the right way to think is the way that produces the best outcomes, then we'll have to use science to determine when certain outcomes have been produced. So we'll basically have to use science to establish the right ways of doing science. Circularity. Bad.
But what if we go deontological here? That is, we try to work out a system of rights and obligations that are as they are independently of their actual outcomes? Then we'll at least dodge the circularity problem. Furthermore, it is a tenet of most such views that your obligations must be transparent to you--that is, you cannot be obligated to do something if you can't see/understand that you are so obligated. Thus our obligations are guaranteed to be transparent to us in a way that other things--like the actual efficacy of our reasonings--are not guaranteed to be.
Views like Kant's emphasize the inherent value of a good will rather than the maximization of outcomes. We recognize such grounds for judgment insofar as we distinguish between honest and dishonest thinkers. If scientist A is scrupulously fair and honest in his handling of evidence, and scientist B is a low-down dirty cheater, then even if B happens upon way more important discoveries than A, there is still a sense in which we will recognize A as the superior thinker--there will be a way in which he acted better than B.
That's the type of judgment that an ethics of belief puts at the center of logic and science--and it avoids the circularity problem.
I think my reference to Divine Command Theory was a bit confusing, as I was speaking in a more general sense about the problem of "how do we know science is true?" rather than supporting Harris' claims that science supports (a specific version of) consequentialism.
For what it's worth, I don't understand the details of ethical theory well enough to say for sure, but I suspect I'm more comfortable with deontological than consequentialist ethics anyway. Product of being raised Baptist? Maybe. But I think it's more like I think intent and adherence to a set of guiding virtues at least influence a judgment on an action, where consequentialism (at least as I've encountered it) presupposes that we can ignore all of that and just look at the results.
I hadn't considered your point about consequentialism failing a transparency test, but I think that really hits on the core of a lot of people's sense of fairness. Certainly mine. It just doesn't feel right somehow to punish someone for taking an action which has long-term consequences that couldn't have been foreseen. Would it be morally right to punish Hitler's grandmother for giving birth to one of his parents? Etc.
So actually, I think if anything I'm going to have to say that I'm probably a deontologist. I hope my sciencey skeptic friends don't find out. ;)
Incidentally, I help run a group that hosts regular informal lectures in the Boston area. Would you happen to know (personally or otherwise) any philosophers up this way who might be willing to give a talk to a room full of lay people on these sorts of ethical issues? I think it might be quite interesting. Or, separately, are there any pop-oriented books that get these discussions more or less right (if not up to 100% academic rigor)?
I have a hard time keeping track of where people are, but nobody's springing to mind right off the bat. I'll keep thinking.
As for a reading that's easy...I'll ask Johnny Quest for a recommendation.
Re: Kant's possible consequentialism
When he was discussing why the good will is the only truly good thing, didn't he basically argue that that's because other theoretically good things (strength, intellect, etc) could be used in service of evil?
Isn't that just a roundabout way of saying "look, these things aren't purely good, because they can be used to reach bad ends"? If so, doesn't that necessarily entail accepting that ends are important?
---Myca
Joshua, has your group ever thought about hosting a speaker via video conference? The technology today makes it pretty easy, and it would dramatically enlarge the geographic reach of your group. Just a thought...
Myca,
Kant thought ends were important--he just didn't think they were of central importance. Consequentialism is the view that moral status is *entirely* determined by consequences. To reject consequentialism is to hold that something else matters to moral status.
Kant famously held that *It is impossible to conceive anything at all in the world, or even out of it, which can be taken as good without qualification except a good will.* (That's close to being an exact quote, but I'm too lazy to look it up.)
The good will is good without qualification--good in itself and regardless of anything else. An action performed for the sake of doing what is right is good, according to Kant, even if it has unforeseen terrible consequences.
Kant, of course, recognized that consequences were important in all sorts of ways--but they do not, in and of themselves, determine moral status.
He could be wrong, of course, but that's his view. Harris is simply wrong to think that every view is a consequentialist view, either overtly or covertly.
Of course some version of consequentialism might turn out to be true, but it's not the only game in town--and not even the best/most plausible game in town, by my lights.
I guess what concerns me has to do with things like the invasion of Iraq ... the claim defenders make is that it was done for all sorts of good reasons ... to free the people of Iraq, to defeat tyranny, etc. If Kant's right, was the invasion moral?
Would it matter that the negative consequences were perfectly predictable, and were in fact predicted by any number of people?
I guess I just have trouble with the inquiring murderer, and Kant holding that good intentions are of primary importance even when we can easily foresee that bad consequences will result.
---Myca
M--
First note that I'm not trying to defend Kant here, tho I have certain sympathies with his view. I'm just trying to point out that Harris's claim that all views are consequentialist is false.
Second, note that *claiming* to be motivated by good reasons is not enough for Kant. You must *actually* be so motivated. He's very clear that you must be scrupulously honest about the maxim of your action. So there'd be no Kantian defense of the dishonest arguments for invading Iraq.
As for whether you really aren't permitted to lie to the axe-wielding murderer...well, Tom Hill often says that Kant is bad at applying his own theory...but I've certainly got no sympathy for Kant on that point.
Re: Claiming to be motivated versus actual motivation -
Yeah, I take your point, and this is an area where Kant is solidly non-consequentialist (actually, to my view, far less consequentialist than he ought to be) but the notion that if Iraq had gone precisely the same, but they'd really meant well, it would be moral ... I find hard to swallow.
---Myca
Well, remember, they'd have to have been duly diligent about investigating the facts and so forth. And then it doesn't mean that their action would have been optimal, but that they'd have been good people.
Easier to think of a case like this: suppose that the allies in WWII had been wrong for extremely obscure reasons that they didn't and couldn't have known about. Would that have made e.g. Churchill a bad person? After all, he had the best of reasons for doing what he did.
Oh, certainly. I think that it tends to hinge on predictive ability and how likely your actions are to lead to good results. Some of it has to do with judgment.
I mean, I think both intentions and consequences matter, really. You need to intend to do good, and have a reasonable belief that your actions will actually achieve that good. Which is a test that neither the Iraq War nor the Inquiring Murderer meets, and one that Churchill in World War 2 does.
---Myca
I absolutely agree--and so does Kant, actually. Irresponsibility with regard to predictions about the consequences of one's actions is culpable wrong-doing.