Recently, I have decided to embrace preference-based theories of welfare over hedonism. I had two reasons for doing so. First, the difficulty of assigning cardinal utilities to mental states. Second, the fact that I no longer see any grounds for claiming that preferences that do not supervene on mental states are irrational—and if they are not irrational, then we ought to respect them when it comes to moral decisions. In this post, I will expand upon the first point.
Before talking about ethics, it would be beneficial to briefly review the decision theory necessary for understanding the notion of cardinal utility. For more detailed discussions of these topics, see Schwarz or Peterson (do note that the former is open source). Put simply, according to orthodox decision theory, every individual has a utility function, which intuitively measures “how good” outcomes are for that individual (according to their preferences—we’re not yet concerned with making any substantive claims about what those preferences should be). Formally, the utility function is a function from the set of possible outcomes to the real numbers; outcomes with higher numbers assigned to them are preferred by the agent. The utility function does not just describe the individual’s preference ordering; it also captures the magnitudes of their preferences. Let’s say A, B, and C are outcomes, and say A has utility 0 and C has utility 1. Suppose further that I prefer C to B and B to A. Then the utility of B should be somewhere between 0 and 1. But it matters where between 0 and 1 the utility of B lies—the mere ordering of these numbers is not all that matters. If the utility of B is, say, 0.6, then in some sense my preference for B over A is “stronger” than my preference for C over B. If the utility of B were 0.4, this would not be so. Because the relative magnitudes of preferences matter, we call the assigned utilities cardinal utilities. This is distinct from mere ordinal utilities, which only capture the order of one’s preferences.
According to expected utility theory, if an agent is faced with a choice between two gambles, she should choose the one with the higher expected utility. A gamble is an assignment of probabilities to possible outcomes. If the possible outcomes of a gamble G are O1, …, On, each of which will occur with respective probabilities p1, …, pn, then the expected utility of G is given by EU(G) = p1*u(O1) + … + pn*u(On), where u is the utility function assigning real numbers to outcomes. As an example, consider the above situation with outcomes A, B, and C, which have respective utilities 0, 0.6, and 1. Let the gamble G1 denote getting outcome B with certainty, and let G2 denote the gamble which gives A with 70% probability and C otherwise. G1 has expected utility equal to 0.6. G2 has expected utility 0*0.7 + 1*0.3 = 0.3. G1 has a higher expected utility, so I should choose G1. Expected utility can be thought of as the average yield of a gamble; if I were to partake in G2 over and over, on average I would get 0.3 utility per trial. Notice that the assigned cardinal utilities are crucial to what decision I make: if the utility of B were 0.2, my preference ordering of outcomes would be the same, but I would have to choose G2. So, knowing how my preferences order the outcomes isn’t enough to tell me how I should make choices under conditions of uncertainty.
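The arithmetic in this example can be sketched in a few lines of Python (the outcome names and utility assignments are just those of the running example):

```python
# Outcomes A, B, C with the utilities from the running example.
utility = {"A": 0.0, "B": 0.6, "C": 1.0}

def expected_utility(gamble, u):
    """Expected utility: sum of p * u(outcome) over the gamble's branches."""
    return sum(p * u[outcome] for p, outcome in gamble)

G1 = [(1.0, "B")]               # B with certainty
G2 = [(0.7, "A"), (0.3, "C")]   # A with 70% probability, C otherwise

print(expected_utility(G1, utility))  # 0.6
print(expected_utility(G2, utility))  # 0.3 -> choose G1

# With u(B) = 0.2 the ordering of outcomes is unchanged, but the choice flips:
utility2 = {"A": 0.0, "B": 0.2, "C": 1.0}
print(expected_utility(G1, utility2))  # 0.2 -> now choose G2
```

The last two lines make the point about cardinality concrete: both utility assignments induce the same ordering A < B < C, yet they prescribe different choices between the two gambles.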
So far, I have only described cardinal utilities as representing the “strength” of a preference. However, this is quite a weak basis for expected utility theory. For sure, people do often have some sense of the relative magnitudes of their preferences (this used to be a standard way of defining cardinal utility—see Harsanyi 1955). But basing cardinal utilities on this intuitive sense would not be anywhere near precise enough to serve as a foundation for expected utility theory. One could hardly be blamed for concluding that there is no fact of the matter regarding the relative magnitudes of preferences—that the only real facts are about one’s preference ordering of outcomes. In fact, this position, called ordinalism, was argued for in the early 20th century (Schwarz). Schwarz calls the resulting problem the “ordinalist challenge”: to give a basis for assigning cardinal utilities to preferences.
How, then, can the expected utility theorist respond? On what basis can we actually compare the relative strengths of preferences, in a way that doesn’t just rely on some vague intuition? The answer is to look at the agent’s preferences under conditions of uncertainty, and to try to use these preferences to define their utility function. Here’s a rough example: as above, I prefer B to A and C to B. Let Gp denote the gamble which yields C with probability p and A otherwise. I thus prefer G1 to B and B to G0, since G1 just always yields C and G0 always yields A. Moreover, in some sense, the “value” of the gamble Gp seems to increase continuously with p. It stands to reason, then, that there must be a unique value of p such that I’m indifferent between B and Gp—if I start at p = 0 and increase p, eventually the value of Gp will have to “cross” that of B. Let’s say this value of p ends up being 0.6. The expected utility of Gp is p*1 + (1-p)*0 = p; since I’m indifferent between B and Gp where p = 0.6, we should take the utility of B to be 0.6. This method can be used to assign cardinal utilities to all outcomes.
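This indifference-point construction can be sketched numerically. The bisection routine and the agent below are my own illustration: the agent’s “true” indifference point is hidden behind a yes/no preference oracle, and we search for the probability at which the gamble crosses B:

```python
def elicit_utility(prefers_gamble_to_B, tol=1e-6):
    """Find the p at which the agent is indifferent between outcome B and the
    gamble Gp (C with probability p, A otherwise), by bisection.
    prefers_gamble_to_B(p) returns True iff the agent prefers Gp to B."""
    lo, hi = 0.0, 1.0  # G0 always yields A (worst), G1 always yields C (best)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prefers_gamble_to_B(mid):
            hi = mid  # Gp already beats B; the indifference point is lower
        else:
            lo = mid  # B still beats Gp; the indifference point is higher
    return (lo + hi) / 2

# A hypothetical agent whose hidden indifference point is 0.6; with u(A) = 0
# and u(C) = 1, the recovered p is taken as the utility of B.
agent = lambda p: p > 0.6
print(round(elicit_utility(agent), 3))  # approximately 0.6
```

The continuity of the gamble’s value in p is exactly what guarantees that this search converges to a unique crossing point.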
Two things should be addressed. First, the fact that I was only able to assign a utility to B because I had already assigned utilities to A and C. How, then, am I supposed to get these utilities? As it turns out, you can just choose them arbitrarily. Choosing the utilities of A and C just specifies my choice of the units with which we measure utility. This is similar to measuring temperature, for example. It doesn’t matter what number you assign to any particular temperature; what matters is how these numbers relate to one another. I can choose any numbers I like for two temperatures—say, the freezing and boiling points of water—and I get a valid way to measure temperature just as long as all other temperatures are consistent with the first two. Likewise, I may arbitrarily choose A to have utility 0 and C to have utility 1, just as long as all other utilities are consistent with this choice.
The second thing to address is that the above reasoning likely seems circular. The original challenge was to assign numbers to outcomes so that we can tell agents that they should maximize expected utility. I showed a method for assigning such numbers by reasoning backwards, assigning the numbers which would make the agent’s preferences consistent with expected utility theory. Is this not circular? A utility function is said to represent an agent’s preferences provided that she prefers a gamble G1 to G2 iff the expected utility of G1 is greater than that of G2, with respect to that utility function. The expected utility theorist merely wants to say that there exists some utility function that represents a given agent’s preferences—if the agent has rational preferences. A representation theorem is a theorem which guarantees the existence of a utility function representing an agent’s preferences, provided that her preferences satisfy certain axioms. Arguing that agents should be expected utility maximizers, then, comes down to arguing that their preferences should satisfy the axioms of some representation theorem. This eliminates the circularity. One example of such an axiom is transitivity: if you prefer G1 to G2 and G2 to G3, then you should prefer G1 to G3. Another example is continuity: if you prefer the outcome A to B and B to C, then there should be some probability p such that you’re indifferent between B and a gamble that yields A with probability p and C otherwise. These two axioms are found in von Neumann and Morgenstern’s representation theorem, which defines cardinal utility by looking at preferences between gambles. Another example is Savage’s representation theorem, which provides not only a utility function but also a probability function assigning probabilities to events.
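As a minimal sketch, the transitivity axiom can be checked mechanically over a finite set of options (the helper function and the cyclic example below are my own illustration, not anything from the representation-theorem literature):

```python
from itertools import permutations

def violates_transitivity(prefers, items):
    """Return True if the strict-preference predicate has a transitivity
    violation: some a, b, c with a > b and b > c but not a > c."""
    return any(prefers(a, b) and prefers(b, c) and not prefers(a, c)
               for a, b, c in permutations(items, 3))

# A cyclic preference (A over B, B over C, C over A) is intransitive:
cycle = {("A", "B"), ("B", "C"), ("C", "A")}
print(violates_transitivity(lambda x, y: (x, y) in cycle, "ABC"))  # True
```

A cyclic preference like this is the standard example of a violation; no utility function can represent it, since u(A) > u(B) > u(C) > u(A) is impossible.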
Preference utilitarianism holds that total (or average, but that’s beside the point) utility should be maximized, where an agent’s utility function is constructed from their preferences as above—or, from what their preferences would be if they were rational (see Railton 1986 for a discussion of how this notion of welfare can work). It should be noted that although decision theory generally does not place substantive constraints upon agents’ preferences (merely formal constraints, given by axioms such as transitivity and continuity), preference utilitarians may hold that there are reasonableness constraints which should be taken into account (Harsanyi 1977). These may take the form of substantive constraints on the content of preferences.
Hedonistic utilitarianism disagrees with preference utilitarianism in how it defines an individual’s welfare. According to preference utilitarianism (as I have described it), an individual’s welfare is measured by the utility function that represents an idealized version of that individual. According to hedonism, cardinal utility is an objective property of mental states—some mental states just are intrinsically better to experience than others, and by a certain amount. In referring to the utilities of mental states as objective, I mean this: whether a mental state is better than another, and by how much, depends only on properties of the two mental states, and not on any other factors, such as the individual’s desires. In particular, these utilities are the same for different individuals. An individual’s welfare at a certain time, according to hedonism, is thus given by the utility of the mental state they are experiencing at that time. Hedonistic utilitarians seek to maximize the sum (or average) of these utilities over all sentient beings.
The fact that for the hedonist cardinal utility is taken to be an objective property of mental states, and not a mere result of an individual’s subjective preferences between those mental states, is crucial. This is also the aspect of hedonism which, in my view, makes it untenable. There are no such facts intrinsic to mental states, and even if there were, they would be in principle impossible to know, even approximately. Any attempt to define the cardinal utility of mental states in terms of something knowable (whether observed from the outside or by introspection) inevitably leads to a notion of utility too subjective to be suitable for hedonism.
Effectively, I am making the claim that hedonists do not have any way to meet the ordinalist challenge described above, whereas preference utilitarians do. A natural proposal is for the hedonist to respond to this challenge the same way expected utility theorists do. Recall that in decision theory, an individual’s utility function is determined by their preferences under uncertainty. A hedonist may likewise attempt to base the cardinal utility function on an individual’s preferences between mental states under uncertainty—call this the “preference approach”. As a matter of fact, I think this is the only option for hedonists. The older approach, in which agents simply judge the relative strengths of their preferences between mental states, is basically equivalent to this approach (although it is less precise). This is because, if the individual in question is rational and therefore cognizant of the fact that they should be an expected utility maximizer, judgements about the relative sizes of utility increments are equivalent to preferences between gambles whose outcomes are mental states. Thus it suffices to focus on the preference approach in judging whether hedonism can meet the ordinalist challenge.
There are two natural ways hedonists can use the preference approach as a basis for assigning cardinal utilities to mental states. First, they can define the utility of mental states according to agents’ preferences between mental states under uncertainty. Second, they can claim that utility is a more fundamental property of mental states, of which preferences are an approximate expression. The first does not seem to work, because different agents may disagree about which gambles are better than others, even if they agree on how they order mental states. Since utility is just defined using preferences, we have no basis by which to resolve this disagreement [1]. It follows that cardinal utility would not end up being an objective property of mental states. Thus, I imagine most hedonistic utilitarians would prefer the second strategy, and that’s what I’ll focus on for most of the rest of this post. My problem with this position is that, if it is true, then it is in principle impossible to measure the posited properties of mental states, even approximately. This is because there would be no reason to think that preferences between mental states track the actual goodness of those mental states. The utilities of mental states can vary while one’s preferences between mental states change to “cancel out” those variations, rendering them unobservable. I hope my argument will also motivate the stronger claim that there are no such facts about mental states, but as we will see, this stronger claim is not necessary to refute hedonism.
If hedonists want to claim that our preferences track the objective utilities of mental states (at least approximately), they need to provide some explanation as to why this is so. For example, they can argue that having preferences track the value of our mental states is evolutionarily beneficial. Put very simplistically, the reason some mental states feel better than others is to motivate us to act in ways that are beneficial for our survival—the good mental states result from beneficial behaviors, thus incentivizing us to behave that way. For example, getting punched is more painful than getting poked because it’s more important to avoid the former than the latter, since it causes more damage to one’s body. But the same behavior can be achieved just as well if an individual’s preferences are “out of sync” with the objective value of that individual’s mental states. This is because any “error” in how objectively good or bad a mental state is can be corrected by another “error” in the agent’s preferences between mental states. We cannot, then, expect the utilities of mental states to be tracked by anything observable, or by anything available to introspection.
Here’s an explicit example to illustrate the above point. Let’s say that it’s “twice as important” to avoid getting punched as it is to avoid getting poked (relative to the status quo), in the sense that it is optimal that we are indifferent between getting poked for certain and a 50% chance of getting punched. How is it that our biology makes us exhibit this behavior? According to the hedonist’s story, this is accomplished by having some resulting mental states feel better or worse than others. Say M1 is the mental state one has after getting poked, and M2 after getting punched. Assuming the status quo has utility 0, agents will exhibit the desired behavior if i) the utility of M2 is twice that of M1 (with both being negative), and ii) agents’ preferences are represented by the utilities of these mental states. Thus, we have an explanation for why our preferences should track the objective utilities of mental states: if they didn’t, then in situations like the above, agents would not exhibit optimal behavior.
But there’s a flaw in the above explanation. There are many ways we can get agents to exhibit the “right” behavior when it comes to getting punched and poked. For one, we could do as above, and assign M1 and M2 utilities of, say, -1 and -2, and have agents’ preferences represented by these utilities. But we could also do the following: assign M1 and M2 utilities of, say, -1 and -4, have agents’ preferences fail to reflect these utilities, and instead have them act as though M2 had utility -2. Agents like these can be thought of as experiencing more pain when punched than those in the first case (or perhaps less pain when poked, or some combination of these), but also dispreferring larger amounts of pain to a lesser degree, so that their behavior is the same.
More generally, say Smith is rational according to hedonism, so that the utilities of her mental states represent her preferences. Let M be the set of possible mental states, and let u be the function which assigns the correct utilities to mental states. Now suppose Jones differs from Smith in the following two ways, where f is some strictly increasing, non-linear function that exaggerates extremes (say, f(x) = x·|x|, which sends -1 to -1 and -2 to -4, as in the example above). First, in situations where Smith would experience a mental state with utility x, Jones experiences a mental state with utility f(x). Second, Jones’ preferences are not represented by u, but rather by f⁻¹ ∘ u (she values a mental state with utility v at f⁻¹(v)). Jones is irrational, according to hedonism: she doesn’t take more “extreme” outcomes as seriously as she should. But outcomes are in general more extreme for her than for Smith, so that the two end up having the same behavior: where Smith experiences a state with utility x, Jones experiences one with utility f(x) but values it at f⁻¹(f(x)) = x.
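The Smith/Jones construction can be made concrete with a toy script. The particular situations, utilities, and distortion function used here are illustrative assumptions; the point is only that the distortion in what Jones experiences and the distortion in her preferences cancel exactly, leaving behavior unchanged:

```python
import math

# "True" utilities of the mental states Smith would experience.
smith_utilities = {"status quo": 0.0, "poked": -1.0, "punched": -2.0}

def f(x):
    """Jones's distortion: her mental states are more extreme (-2 becomes -4)."""
    return x * abs(x)

def f_inv(y):
    """Inverse of f."""
    return math.copysign(math.sqrt(abs(y)), y)

def smith_decision_value(s):
    # Smith experiences utility u(s), and her preferences are represented by u.
    return smith_utilities[s]

def jones_decision_value(s):
    # Jones experiences a state with utility f(u(s)), but her preferences
    # weight mental states by the inverse of f, which undoes the distortion.
    return f_inv(f(smith_utilities[s]))

# Per-outcome decision values agree, so the two agents are behaviorally
# indistinguishable despite having different experiences.
for s in smith_utilities:
    assert smith_decision_value(s) == jones_decision_value(s)
print("Smith and Jones choose identically in every situation.")
```

Since the per-outcome decision values coincide, expected values over any gamble coincide as well, which is why no behavioral test can distinguish the two.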
Given that Smith and Jones will exhibit the same behavior in any given situation, why would evolution make us like Smith (as is required for hedonists) instead of making us like Jones? It seems that the only causal role played by the utility of mental states is acting in conjunction with one’s preferences to produce certain behaviors. But as we saw above, one’s utilities and preferences between mental states are underdetermined by behavior. In other words: if I observe that someone prefers a state A to a state B by a factor of two (relative to the status quo), this can be explained in two ways. First, by saying that the mental state associated with A is actually twice as good as that associated with B, and that the intrinsic goodness of these mental states is reflected by her preferences. Second, by assigning arbitrary utilities to the mental states associated with A and B and stipulating that the agent’s preferences are not represented by these utilities. We have no reason to think the former is happening rather than the latter.
Obviously the above evolutionary story is simplistic, but it illustrates my point. Moreover, I think the same reasoning can be used to undermine any story the hedonist tries to give for why our preferences track objective utilities.
I would go so far as to say that this sort of underdetermination shows that mental states have no such properties. No doubt, this sort of argument will remind some of arguments made against phenomenal properties of consciousness more generally, such as Dennett’s attacks on the concept of qualia using considerations like the inverted spectrum thought experiment (Dennett 1988) [2]. The idea is that, assuming color qualia exist, there is in principle no way to tell whether yours are different from mine—that is, no way to tell whether you actually see what I call “blueness” when we look at something green, and vice versa. Dennett uses considerations like this to argue that the concept of qualia doesn’t refer to anything real. I’m basically telling a similar story about utility, replacing the inverted color spectrum with an “inversion” of preference and utility.
I happen to agree with Dennett’s conclusion, but no doubt many people find this too counter-intuitive, and still assert that there is a fact of the matter about whether your color spectrum looks the same as mine, even if it is in principle impossible to find out. Should these individuals reject my argument here on a similar basis? I think there’s good reason to accept my argument as damaging to hedonism, even if one isn’t convinced by eliminativism about qualia. This is because even if one still wants to assert that mental states have objective utilities, I have shown that these utilities are unsuitable to serve as the basis for a normative theory. If hedonism is to be nontrivial, it has to tell us what to do in some situations where ordinal values of outcomes are not sufficient to make a decision. Hedonism tells us to base these decisions on the objective cardinal utilities of mental states. But, as I have shown, there is no basis by which to resolve disagreements about these objective utilities—thus, agents do not have access to the information required to make (even approximately) correct decisions according to hedonism. In other words: even if one wants to assert that mental states have objective utilities, one must admit that it is impossible to use them as a basis for decision making. Thus hedonism fails to be actionable in any case which requires comparisons of cardinal utilities. Imagine I had a normative theory which treated interpersonal comparisons of color qualia as absolutely crucial—the theory says you should do one thing if our color qualia are the same, and another thing if our spectra are inverted with respect to one another. Even if you’re committed to believing in color qualia and that there is a fact of the matter as to whether our spectra are inverted, such a theory should nevertheless be seen as defective, as it requires us to act according to information that we in principle cannot have access to. But hedonism is just as bad: it is in principle impossible to know whether we are like Jones or like Smith, yet hedonism prescribes different actions in each case.
To summarize the above point: in order to assign cardinal utilities to mental states, hedonists appeal to our intuition that some preferences between mental states are stronger in magnitude than others. But all that is immediately apparent is that these preferences track a subjective attitude towards gambles between mental states, not the recognition of some objective property of them. Hedonists have to provide some justification for the claim that these objective properties exist, and for why our attitudes track them. I have argued that even if they do exist, they are causally inert (because any change in objective utility can be “cancelled out” by a change in preferences between mental states, keeping everything observable constant), so there is no reason to expect our preferences to track them, even approximately. This undermines any story the hedonist provides for why our preferences track the objective utilities of mental states.
There’s one more point I should address. Above, I stipulated changes in mental states without corresponding changes in agents’ preferences. That is, I assumed that it is possible for individuals’ preferences between mental states to not be represented by the utilities of those mental states. Could the hedonist claim that it is in principle impossible for this to happen? Lazari-Radek and Singer make the ordinal version of this claim in passing in their book, when discussing future-Tuesday-indifference. When considering the rationality of someone who doesn’t care about anything taking place on future Tuesdays, they stress that this indifference applies only to future Tuesdays: “If I am now experiencing a sensation that I have no desire to stop, then what I am feeling could not be agony” (p. 46). So, they’re basically claiming that it is in principle impossible to be wrong about whether agony is worse than the status quo—if someone thinks a feeling is better than the status quo, then it wasn’t agony to begin with.
Could hedonists make the same claim, but about cardinal utilities? That is, could hedonists claim that it is impossible for agents to really disagree (to a large extent) about the values of gambles between mental states, and argue that there would have to be some difference between the mental states in question to account for any disagreement? This view is really problematic. Say I’m trying to figure out my preferences between gambles G1 and G2. At a first pass, my choice as to which I prefer will depend on three variables: properties of G1, properties of G2, and the properties of myself when I am making this decision. To use this defense, then, the hedonist needs to claim that the outcome of this decision process does not actually vary with the third variable. This is a substantive claim which would need justification. Moreover, it seems to be clearly false. One cannot just magically output one’s preferences given G1 and G2; the brain has to have some process by which the outcome is determined given the necessary information about G1 and G2. But then a person could be re-wired so that this process works differently, and the outcome is reversed even when G1 and G2 stay the same. Maybe such a re-wired individual would necessarily be irrational, but they would still be a counterexample to the claim that preferences must in principle reflect objective properties of mental states.
All of the points I’ve made warrant further elaboration, but this is getting a bit long for a blog post, so I’ll conclude here. To summarize, I reject hedonism on the grounds that there isn’t any suitable basis for assigning cardinal utilities to mental states, even granting the existence of objective ordinal values of mental states. Any attempt to define the utilities of mental states in terms of preferences (or, equivalently, introspection about those mental states) leads to a notion of utility too subjective to be suitable as a basis for hedonism. Any attempt to posit cardinal utility as an objective property of mental states leads to skepticism about utility, because there is no way to derive the utilities of mental states unless we assume our preferences between them are (at least approximately) rational, and there is no basis for making such an assumption.
I welcome any feedback. I’m not horribly well-read, so it’s not unlikely that someone has made arguments similar to the above. If they have, let me know.
[1] When talking about disagreements about gambles whose outcomes are mental states, I rely on the assumption that interpersonal comparisons of mental states are possible. This assumption is innocent here, since hedonistic utilitarians assume such comparisons are possible anyway.
[2] In fact, Dennett briefly argued against the idea of pleasure and suffering as intrinsic qualities of mental states in Consciousness Explained.
Schwarz, Wolfgang. “Belief, Desire, and Rational Choice”, 2017.
Peterson, Martin. “An Introduction to Decision Theory”, 2009.
Harsanyi, John. “Cardinal Welfare, Individualist Ethics, and Interpersonal Comparisons of Utility”, 1955.
Dennett, Daniel. “Quining Qualia”, 1988.
Lazari-Radek and Singer. “The Point of View of the Universe”, 2016.
Railton, Peter. “Facts and Values”, 1986.