Sam Harris claims that a science of morality may be considered objective if it is founded on a premise that no reasonable person could disagree with. The premise or axiom proposed by Harris is that moral good is that which tends to improve the aggregate well-being of conscious creatures, while moral evil is the converse. He maintains that this is so obviously correct that were anybody to disagree, we would not take them seriously.
I do think Harris's choice of moral axiom is a good one, and it may be among the best we can come up with. However, it may not be sufficient to account for all of our common moral intuitions, which suggests that it is not a perfect basis for morality after all. Those intuitions matter, because they are the only justification Harris can offer for the axiom in the first place; there can be no other reason to suppose that it is correct.
In this post, I intend to argue that there are grounds to disagree with this axiom.
If we can show that there are conclusions that could be validly drawn from Harris's premise which would run counter to the basic moral intuitions of the vast majority of humans, then Harris's axiom is on shaky ground.
One problem, as raised by Harris himself in Chapter 2 of his book, is that it is unspecified how exactly to calculate the aggregate well-being of conscious creatures. One approach might be to find the average of the well-being across the whole population. Another might be to find the sum total of the population's well-being. Each of these approaches results in moral paradoxes.
If we're seeking to maximise average well-being, then a world with one very happy conscious being is preferable to a world with millions of slightly less happy ones. If we could painlessly murder all the miserable people in the world, it would seem to be a moral imperative to do so, assuming for the sake of argument that the survivors of our holocaust would not be negatively affected. As the moral mainstream would presumably reject this action as immoral, an "averaging" interpretation of Harris's axiom is on shaky ground.
If, on the other hand, we seek to maximise the total well-being, then we might prefer a vastly overpopulated world full of unhappy people to a more sustainable population of contented individuals. Our prime moral responsibility would seem to be to reproduce as often as possible, with little regard for the likely quality of life of our offspring. It seems that a "totalling" interpretation of Harris's axiom is probably on ground no surer than an "averaging" one.
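The divergence between the two readings can be made concrete with a toy calculation. The well-being numbers below are, of course, invented for illustration; Harris proposes no such numerical scale:

```python
# A toy illustration (all numbers invented) of how the two
# aggregation rules rank the same pairs of hypothetical worlds.

def average_wellbeing(population):
    """'Averaging' reading: mean well-being per person."""
    return sum(population) / len(population)

def total_wellbeing(population):
    """'Totalling' reading: sum of everyone's well-being."""
    return sum(population)

# World A: one very happy person.
world_a = [100]
# World B: a million people, each slightly less happy.
world_b = [90] * 1_000_000

# The averaging rule prefers the lone happy person...
assert average_wellbeing(world_a) > average_wellbeing(world_b)
# ...while the totalling rule prefers the vast, slightly-less-happy crowd.
assert total_wellbeing(world_b) > total_wellbeing(world_a)

# World C: a small, contented population.
world_c = [80] * 1_000
# World D: a vastly overpopulated world of barely-happy people.
world_d = [1] * 1_000_000

# The totalling rule prefers the overpopulated world...
assert total_wellbeing(world_d) > total_wellbeing(world_c)
# ...while the averaging rule prefers the contented one.
assert average_wellbeing(world_c) > average_wellbeing(world_d)
```

Each rule endorses a world that the other condemns, which is precisely why leaving the aggregation method unspecified leaves the axiom underdetermined.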
Harris acknowledges these issues but offers no solutions. He maintains that while moral questions may be impossible to answer in practice, that does not mean that moral truth does not exist. I find this handwaving to be highly unsatisfactory.
Harris's argument for the objectivity of morality hinges entirely on the assumption that we can decide upon an appropriate axiom to form the basis of distinguishing good from evil. The proposed axiom depends on the aggregate well-being of conscious beings. Since he does not define how to calculate this aggregate well-being so as to avoid these moral paradoxes, Harris has not established that a truly correct moral axiom exists, and perhaps it does not.
We could attempt to tack on other axioms, such as respecting the right to life of conscious beings, perhaps distinguishing between the currently alive and those yet to come. We could assign a negative value to the well-being of suffering people so as to address the problem with the totalling paradox mentioned above.
These solutions might help to solve some paradoxes, but it is hard to see precisely what the one objectively correct formulation of these ideas might be. It begins to feel less like we are approaching the objective truth and more like we are tacking on a series of ad hoc adjustments to compensate for our faulty axioms. A series of such special case rules and exceptions would bring to mind the unnecessary complexity of Ptolemy's epicycles rather than the elegant truth of Copernicus's heliocentrism.
Harris's axiom is good, and I may even agree with it as a general heuristic. I don't, however, think that we can accept it unreservedly and blindly follow its conclusions. While a better formulation of this axiom might answer many of the problems with it, Harris has not shown that such a formulation exists. If it does not, then the existence of objective moral truth seems unlikely.
Agreed. Suppose that you could sacrifice one human being in order to maximise the total well-being of all other humans. Harris's axiom would compel you to sacrifice that one person. Most reasonable people would reject that kind of morality.
Indeed.
However, I would be open to the idea that such reasonable people have inconsistent moral beliefs. I think that a truly consistent thinker might be willing to make such a sacrifice.