Econstudentlog

Utilitarianism (and some comments about ethics)

“The system of normative ethics which I am here concerned to defend is […] act-utilitarianism. […] Roughly speaking, act-utilitarianism is the view that the rightness or wrongness of an action depends only on the total goodness or badness of its consequences, i.e. on the effect of the action on the welfare of all human beings (or perhaps all sentient beings).”

The structure of the book is simple: The first half tells you why (act-)utilitarianism is great, and the second half tells you why utilitarianism sucks.

I’ve been unsure how to blog this book, and as I’m writing this I have still not decided on the best approach. It probably makes sense to start out with some general remarks. The first general remark is that I liked Smart’s half (the first half) better than Bernard Williams’ half, to a significant degree because it is in my opinion much easier to read and understand than especially the first half of the second half of the book – regardless of the merits of the arguments, I simply think J.J.C. Smart is a much better writer than Bernard Williams. There are some important points hidden away in Williams’ account, but in my opinion he waffles so much that you sometimes don’t really care one way or the other. Trained philosophers may disagree, but I’m not used to reading philosophical texts, and that is part of the reason.

The second general remark is that this book reminded me why I don’t really care about moral philosophy in the first place. Moral judgments don’t really interest me very much. Coming up with elaborate systems (or, in some cases, not-so-elaborate systems) of thought which allow some action patterns and disallow others, evaluated by considering how these systems perform in hypothetical scenarios which may or may not ever happen to anyone you know (“the common methodology of testing general ethical principles by seeing how they square with our feelings in particular instances”, as Smart puts it in the book), or perhaps evaluated by figuring out whether the systems are self-consistent or not, simply seems to me a strange way to go about identifying good decision(/justification) rules.

I have come to realize that my opinion of the coverage – but perhaps especially of Smart’s account – is influenced by some thoughts I had a while back and discussed with a friend last week. I was at the time considering blogging some of those thoughts, but I decided against it. Anyway, these thoughts relate to how knowledge may shape how you think about stuff; this specific topic is actually covered in the book, though from a very different angle. I hold to the view that thinking which is more or less unconstrained by knowledge will most often be much inferior to the kind of (‘directed…’ was the word my friend used, a good word in this context I think) thinking which is constrained by data. What I came to realize along the way was that what I was really missing in this book was some actual knowledge about how humans behave, some understanding of why people behave the way they do, and of how such aspects intersect both with which types of behaviours may in theory be ‘permissible’ or not, and with why people think the way they do about the thoughts they have and the actions they engage in. We know some things about these matters; books have been written about them – for a neat little book on related topics, see Tavris & Aronson’s account. Smart mentions in his part of the book that: “If […] act-utilitarianism were put forward as a descriptive systematization of how ordinary men, or even we ourselves in our unreflective and uncritical moments, actually think about ethics, then it is of course easy to refute […] [But] it is precisely because a doctrine is false as description and as explanation that it becomes important as a possible recommendation.”

‘People don’t seem to make moral judgments the way I’d like them to, but if they did the world would be a better place’ may or may not be true, but when your argument is founded on logic and you don’t really have good data to suggest that this approach to making moral judgments actually leads to better ‘moral outcomes’ (whatever that may mean – but then again the proponent of such a view is free to define his terms and then argue why his system is better, as that is how things are done in other areas, so this caveat may not be important), then I don’t think you have a very strong case. People (well, some people – it’s probably mostly other economists…) occasionally criticize economists harshly when they fail to take general equilibrium effects into account when making policy recommendations based on partial-equilibrium analyses (‘the employment effects of a job programme involving 500 people may be very different from the employment effects of the same type of job programme scaled up so that it involves 50,000 people’); what these guys are doing is in some sense even worse, as they’re really arguing without any data at all – “I think this”, “I think that”.

I’m sure this kind of stuff relates to things like how you approach the topic of meta-ethics and where people stand on things like the non-cognitivist approach Smart talks about in his introduction, but I’m not well-versed in such matters. What I will say is that given what I know about many other topics (primatology, (/behavioural) economics, medicine, psychology, evolutionary biology, anthropology, …), I think the sort of approach these guys take to all of this stuff is not very ‘useful’; in my opinion you need to know and understand a lot about why people behave the way they do in order to even be in a position where you are justified in having any sort of opinion about how to evaluate the things people do or think in the first place. And these guys have not convinced me that they know a lot about things aside from the sort of things philosophers know about this sort of stuff. I’ll go into more detail about these aspects below, but before doing that I would point out that an alternative to their approach to moral questions would be to identify/define specific outcomes, behaviours or motivations of interest, analyze variation in data on these variables, and figure out if there are some useful patterns to be found. Perhaps people who commit murder have things in common, and perhaps some of the variables they have in common can be addressed/modified by policies and/or behavioural change at the individual level. I’m not a philosopher; this is more along the lines of ‘where I’m coming from’.

In terms of ‘the stuff I know’ I alluded to above, a few examples are probably in order to get at some of the issues:

i. “Research on parent-child conflict during the first decade of life most often has focused on emotional outbursts, such as temper tantrums […] and coercive behavior of children toward other family members as evidence of conflict. The frequency of such behavior begins to decline during early childhood and continues to do so during middle childhood […] The frequency of episodes during which parents discipline their children also decreases between the ages of three and nine […] research on conflict management in this period has focused on the relative effectiveness of various parental strategies for gaining compliance and managing negative behaviors.” (link)

ii. “The result of an interview is usually a decision. Ideally this process involves collecting, evaluating and integrating specific salient information into a logical algorithm that has shown to be predictive. However, there is an academic literature on impression formation that has examined experimentally how precisely people select particular pieces of information. Studies looking at the process in selection interviews have shown all too often how interviewers may make their minds up before the interview even occurs (based on the application form or CV of the candidate), or that they make up their minds too quickly based on first impression (superficial data) or their own personal implicit theories of personality. Equally, they overweigh or overemphasise negative information or bias information not in line with the algorithm they use.” (link)

iii. “many doors in life are opened or closed to you as a function of how your personality is perceived. Someone who thinks you are cold will not date you, someone who thinks you are uncooperative will not hire you, and someone who thinks you are dishonest will not lend you money. This will be the case regardless of how warm, cooperative, or honest you might really be. […] a long tradition of research on expectancy effects shows that to a small but important degree, people have a way of living up, or down, to the impressions others have of them. […] judges use stereotypes as an important basis for their judgment only when they have little information about the target. […] When you know someone well you can base your judgments on what you have seen. When you have little information, you fall back on stereotypes and self-knowledge.” (link)

iv. “The need for closure (NFC) has been defined as a desire for a definite answer to a question, as opposed to uncertainty, confusion, or ambiguity […] People exhibit stable personal differences in the degree to which they value closure. Some people may form definitive, and perhaps extreme, opinions regardless of the situation, whereas others may resist making decisions even in the safest environments. […] Taken together, the research on intrapersonal processes demonstrates that people who are high in NFC seek less information, generate fewer hypotheses, and rely on early, initial information when making judgments.  […] The manner in which people interpret their own and other people’s behaviors and outcomes is linked predictably with their self-esteem and self-concepts. […] a large body of research on attribution processes shows that people high in self-esteem take credit for their successes and blame their failures on external factors […] In contrast, people low in self-esteem are less inclined to take credit for their successes and more inclined to assume responsibility for their failures” (link)

v. “All addictive drugs are subjectively rewarding, reinforcing and pleasurable [1]. Laboratory animals volitionally self-administer them [2], just as humans do. Furthermore, the rank order of appetitiveness in animals parallels the rank order of appetitiveness in humans […] it is relatively easy to selectively breed laboratory animals for the behavioral phenotype of drug-seeking behavior (the behavioral phenotype breeds true after about 15 generations in laboratory rodents)” (link)

vi. “Psychological autopsy studies in the West have consistently demonstrated strong associations between suicide and mental disorder, reporting that 90% of people who die by suicide have one or more diagnosable mental illness” (link)

vii. “Evolutionary explanations are recursive. Individual behavior results from an interaction of inherited attributes and environmental contingencies. In most species, genes are the main inherited attributes, but inherited cultural information is also important for humans. Individuals with different inherited attributes may develop different behaviors in the same environment. Every generation, evolutionary processes — natural selection is the prototype — impose environmental effects on individuals as they live their lives. Cumulated over the whole population, these effects change the pool of inherited information, so that the inherited attributes of individuals in the next generation differ, usually subtly, from the attributes in the previous generation. […] Culture is a system of inheritance. We acquire behavior by imitating other individuals much as we get our genes from our parents. A fancy capacity for high-fidelity imitation is one of the most important derived characters distinguishing us from our primate relatives […] We are also an unusually docile animal (Simon 1990) and unusually sensitive to expressions of approval and disapproval by parents and others (Baum 1994). Thus parents, teachers, and peers can rapidly, easily, and accurately shape our behavior compared to training other animals using more expensive material rewards and punishments.” (link)

viii. “When two people produce entirely different memories of the same event, observers usually assume that one of them is lying. […] But most of us, most of the time, are neither telling the whole truth nor intentionally deceiving. We aren’t lying; we are self-justifying. All of us, as we tell our stories, add details and omit inconvenient facts […] History is written by the victors, and when we write our own histories, we do so just as the conquerors of nations do: to justify our actions and make us look and feel good about ourselves and what we did or what we failed to do. If mistakes were made, memory helps us remember that they were made by someone else. If we were there, we were just innocent bystanders. […] We remember the central events of our life stories. But when we do misremember, our mistakes aren’t random. The everyday, dissonance-reducing distortions of memory help us make sense of the world and our place in it, protecting our decisions and beliefs. The distortion is even more powerful when it is motivated by the need to keep our self-concept consistent; by the wish to be right; by the need to preserve self-esteem; by the need to excuse failures or bad decisions; or by the need to find an explanation, preferably one safely in the past” (link)

ix. “The basic idea behind self-signaling is that despite what we tend to think, we don’t have a very clear notion of who we are. We generally believe that we have a privileged view of our own preferences and character, but in reality we don’t know ourselves that well (and definitely not as well as we think we do). Instead, we observe ourselves in the same way we observe and judge the actions of other people—inferring who we are and what we like from our actions. […] We may not always know exactly why we do what we do, choose what we choose, or feel what we feel. But the obscurity of our real motivations doesn’t stop us from creating perfectly logical-sounding reasons for our actions, decisions, and feelings.” (link)

One key point is that people are different, in all sorts of ways. They’re systematically different in terms of behavioural dispositions, and some behaviours may to a great extent simply be the result of biological factors (drug abuse is certainly relevant here, and suicide probably is as well; these are relevant to the discussion not just because there are relevant differences in behavioural dispositions, but also because people tend to think they ought to have views about the ethics of these behaviours). If individuals are different and such differences are important in terms of which actions the individuals are likely to engage in, it might be natural to suggest that taking such differences into account may be an important component in the evaluation of the ethical properties of a given behaviour. That was actually not the point I was going for, as I’m not sure I really care a great deal about what moral systems should look like. However it does seem to me that people are taking many individual-level differences into account, to varying degrees, when making moral judgments, whether or not they ‘should’.

The basic point is that people are different, and so they have different moral systems. This is not a new idea of mine, and I’ve previously touched upon factors of relevance to this analysis; see for example this post (key point: “If you’re better able to handle complexity you’re able to make use of more complex moral algorithms.”). Another way to think about it, which also relates to the quotes above, would be to say that as people use their moral systems repeatedly to justify their own behaviours, and as people behave in different ways, it’s really beyond doubt that people have different moral systems which incorporate different stuff. Looked at from that point of view, utilitarianism is really just one system (or family of systems), which appeals to some specific people for specific reasons related to why those people are the way they are and behave the way they do. This is my observation, not an observation made in the book, but Williams does touch very briefly upon related aspects, in the sense that he talks about “the spirit of utilitarianism, and […] its demand for a rational, decidable, empirically based, and unmysterious set of values”, and at the end of his contribution charges the system with “simplemindedness”.

The social dimension alluded to in the quotes above seems relevant as well. Individuals are different from each other, but so are different groups of individuals (see e.g. vii). Groups are particularly important because things like social feedback systems are really important determinants of individual behaviour, and of how individuals approach various questions and actions. For example, people may act differently when they’re in a group than they do when they’re on their own – ethicists may or may not agree that such differences are relevant to the ethical judgment of behaviour, but there’s a potential variable lurking here which some people may consider important. Another related example might be that some people may seek out social environments containing people who are likely to approve of their behaviours, and avoid social environments including people who do not – they may, in short, behave in a manner which makes enforcement of ethical systems more difficult. Some people may also respond differently to social feedback than other people do. If some people do consider such variables important when making moral judgments, and you’re planning to discuss ethics with such people, then you probably need to have some knowledge of how groups of people work and of how social aspects impact behaviour (i.e. you need to know some stuff about social psychology, sociology and related fields).

One implicit argument here is that if you have a moral system which makes judgments without regard to the knowledge we actually have of how people behave and why they behave the way they do, you’re likely to end up ‘left behind’ in the long run. You end up with something like religious rules, where you have a system of behavioural rules which perhaps sort of made sense, kind of, during a period when people didn’t really know anything about anything, but which makes a lot less sense now because we know better. It’s not hard to argue, though I’m sure some moral philosophers might disagree with me, that it is better to medicate the schizophrenic than to deem him mad and incarcerate him. I make this point explicit because, judging from this book at least, the philosophical approach to how to handle ethical systems and evaluate their attributes seems to me to have many things in common with the religious approach, and much less in common with a behavioural-sciences approach. Thought experiments asking how you would/should behave if you happened to find yourself in front of a guy who’s threatening to shoot 20 other people unless you shoot one of them yourself may be useful in terms of illustrating key aspects of an ethical system, but is this kind of analysis really likely to lead you very far? Some of this stuff seems to me not that different from theology. ‘People who act friendly and non-threatening in social situations are more likely to find friends and keep them’ (or whatever) seems to me much more useful information, in terms of how to answer questions such as ‘what is a good (“ethical”) way to live your life’, than are thought experiments like these and discussions about key assumptions related to those thought experiments. It seems to me that a lot of what these people are doing is adding new floors to the ivory tower and not much else.

In terms of the ‘risk of being left behind’ comment above, I should note that I’m aware this is perhaps a problematic way to think about things. Some people (especially religious people, presumably) would certainly argue that it makes a lot of sense to adopt a sort of Darwinian approach to meta-ethics and consider the moral systems likely to persist and ‘survive’ to be ‘better’ than the alternatives; in which case religious systems have a lot going for them, in part because they’re very good at constraining thinking and suppressing certain lines of thought likely to weaken the systems (like the thought that all this stuff is just made up). Williams talks about related matters in his coverage – his view is incidentally that such implicit constraints on moral thinking are a good thing, and he considers the absence of such constraints to be a problem with utilitarianism – and I decided to include a few relevant quotes on the matter below:

“It could be a feature of a man’s moral outlook that he regarded certain courses of action […or thought, US] as unthinkable, in the sense that he would not entertain the idea of doing them […] Entertaining certain alternatives, regarding them indeed as alternatives, is itself something which he regards as dishonourable or morally absurd. But, further, he might equally find it unacceptable to consider what to do in certain conceivable situations. […] Consequentialist rationality, however, and in particular utilitarian rationality, has no such limitations: making the best of a bad job is one of its maxims”

Something I found interesting in that part is that Williams does not make clear that constraints on moral thinking have the potential to lead to both good and bad ‘outcomes’, even though a related symmetry argument seems to be used by both proponents and opponents of utilitarianism in the context of events taking place in the far future. (‘Lead to “better” or “worse” performing moral systems’ would be a phrasing inclusive enough to also incorporate non-consequentialist ethical systems, it seems to me, though a different problem related to what we mean by ‘better’ or ‘worse’ then of course pops up.) If you have difficulty conceptualizing this idea, it probably makes sense to model it this way: constraints on moral thinking may stop you from thinking that it might be a good idea to kill all the Jews (the argument being that where people are free to think this thought, the associated outcome becomes more likely), but such constraints may also stop you from thinking that killing Jews is wrong, if you happen to live in a society where killing Jews is the morally enforced norm. Note incidentally, on a related if different note, that when people make moral judgments about a given action, how much time has passed since the event in question may have a significant influence on the judgment (see viii).

I do not think people use utilitarian systems of thought to decide which actions to engage in, and as mentioned previously neither does Smart; he’s careful to point out in his coverage that what he’s defending is a normative system, not a descriptive one. In my view people often don’t know why they do the things they do, and even when they think they do, they probably don’t, really, because there is an incredible number of relevant aspects, and people probably often don’t know about half of them. “But the obscurity of our real motivations doesn’t stop us from creating perfectly logical-sounding reasons for our actions, decisions, and feelings”, as pointed out in ix. We’re not rational creatures, but we are rationalizing creatures. People may use a utilitarian framework to present the decision context and the decision process, but it’s just a model. I probably also differ from Smart in that he may be a lot more optimistic than I am about the feasibility of even applying such a scheme. Smart would probably think about a hypothetical situation in this way: ‘I have thought about this potential action X, and it seems to me that the consequences of this action X would be that one person is made much better off and another person is made slightly worse off. If I do nothing instead, no-one is made either better off or worse off. I wish to maximize average happiness, and so this action seems justified. Thus I shall now proceed to do X.’ I would be more likely to think along these lines: ‘Smart’s primate brain had decided after 2/10ths of a second that Smart wanted to do X. Smart’s primate brain is good at making Smart think he’s in charge, so now Smart’s brain will engage in a bit of work which will yield him the answer “he” already decided upon.’
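
For what it’s worth, the calculus Smart’s hypothetical reasoner is supposed to run through is simple enough to write down explicitly. Here is a minimal sketch in Python – the actions, the people and the welfare numbers are of course all invented for illustration:

```python
# Toy act-utilitarian decision rule: score each available action by the sum
# of its (hypothetical) welfare effects on everyone affected, then pick the
# action with the highest total. All names and numbers are invented.
# (With the same people affected either way, maximizing the sum and
# maximizing the average coincide.)

welfare_effects = {
    "do X": {"person A": +10, "person B": -1},     # one much better off, one slightly worse off
    "do nothing": {"person A": 0, "person B": 0},  # no-one better or worse off
}

def total_welfare(effects):
    """Aggregate the welfare changes across all affected individuals."""
    return sum(effects.values())

best_action = max(welfare_effects, key=lambda a: total_welfare(welfare_effects[a]))
print(best_action)  # -> 'do X', since +10 - 1 = +9 beats 0
```

The point of the paragraph above is of course that nothing like this computation is what actually produces the decision; the primate brain picks first, and the arithmetic, if it happens at all, comes afterwards.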

The utilitarian model is just a model, and/but it’s the type of model which appeals to some types of people more than others. When you look at it like this, it sort of changes how you view the question of whether the question ‘should people use utilitarian systems of thought more?’ even makes sense. A book like this will probably in some ways tell you more about the personalities of the authors than it will tell you about the desirability of the more widespread ‘implementation’ (whatever that may mean) of a specific ethical system of thought. There’s no data here, just arguments, so neither of the authors really has a clue, would be my contention, and they probably would not be able to agree on how to even evaluate competing systems if they did. It is not perfectly true that they ‘have no clue’: the information problems pointed out towards the end of Williams’ account, where he talks a bit about collective rather than individual-level decision making in a utilitarian framework (the point being that you need a lot of data, which is not available, in order to engage in utilitarian analyses and semi-sensible utilitarian-inspired decision making at e.g. the population level), certainly do have at least some real-world relevance; but I think it’s close enough.

One aspect that really irritated me about this coverage is that although there are some potentially valuable distinctions made along the way (people may employ the correct decision rule yet end up with a bad outcome anyway, and such things may be important when making moral judgments (…or judgments about how best to set up compensation schemes in organizations, I’ll add…); when deciding whether or not to praise an action, a potentially relevant distinction is to be made between the desirability of the action and the desirability of praising the action), they don’t really get very far. If I ever find myself facing a Mexican who’s about to kill 20 people, I’ll know what to do, but…

Some people might have read some of the stuff above and thought to themselves that if you’re a hardcore consequentialist/utilitarian who does not care about anything but the consequences of actions and the utility derived from them, then you probably don’t care about whether the individual made the decision because he was sleep-deprived or had high levels of testosterone in his blood due to an untreated medical condition. That’s the whole point – you disregard irrelevant factors like intentions and similar stuff, right? I have sort of assumed this would not be the utilitarian’s reply, because in that case the system seems to me to devolve into a caricature very fast (on account of the ‘and similar stuff’ part, not the intentions part), where you lock away the schizophrenic. I think there’s a big difference between including in the analysis people’s explicit justifications for their actions (leading to a ‘you meant well’ judgment) and other, implicit, factors which might also have influenced behaviour (‘the cancer patient was tired and in pain, and that was why she yelled at her neighbour when his dog ran into her garden’). There’s a difference between explaining and explaining away, but they sort of go hand in hand.

In case you were not aware of this, this objection does not relate only to individual-level decision-making, as objections with a similar structure can be made in the context of population-level decision making, where the behaviours of groups of people may also have explanations/reasons which are relevant to the ethical judgment yet unrelated to the explicit justifications people put forward for behaving the way they do. I’m not sure how I feel about the validity of some of the specific arguments to be made in the latter case, or about how relevant they are/ought to be to the moral judgments to be made, but I did want to mention this aspect to preclude people from perhaps assuming erroneously that even if there are problems at the level of the individual, such problems go away when you start looking at groups of people instead. I don’t think this is true at all, though of course details differ in different social contexts.

I know that I have not really talked a great deal about the actual contents of the book in this post, and if you’re really curious to learn more about what’s in there you’re welcome to ask and maybe I can be persuaded to provide some more details. I was planning to perhaps include a few quotes from the book in a future quotes post, but aside from that I’m not really considering spending any more time on the book here on the blog.

Feedback on the thoughts and ideas presented here is very welcome.

July 23, 2014 - Posted by | Books, ethics, Philosophy

5 Comments

  1. This is a very well-written post, and you have managed to echo my own reasons for not taking most analytic philosophers too seriously. As I mentioned to you the other day, excelling in logical reasoning does little to lead one towards a system of thought that coheres with reality — it guarantees internal consistency within one’s model, but it does not guarantee consistency between one’s model and the external world. I think many analytic philosophers dabbling in theories of ethics are drawn to the ambitious idea of constructing a moral theory that is internally consistent — their logical minds desire a normative theory which is clean and coherent — which might explain why they don’t usually spend much effort on reading books in the behavioural sciences. Very tellingly, as you quoted, Smart recommends act-utilitarianism precisely because it barely has any explanatory power as a descriptive theory — i.e., precisely because it ignores the messy and wildly inconsistent ways in which humans actually act.

    I can actually sympathise somewhat with their attraction to consistency, because demanding consistent moral principles can be very useful/helpful in nudging policy-makers or the masses in general towards fairer laws and institutions — the suggestion of which might initially repulse or annoy them. E.g., if white men have the right to vote, there seem to be no morally relevant factors to compel us to withhold the same right from non-whites or women; so for the sake of consistency, we should extend the same voting rights to people who are not white men. Viewed in such historical contexts, the pursuit of consistency has prominently led to better outcomes (insofar as one defines ‘better outcomes’ as ‘fairer treatment for everyone in the appropriate reference class’ — as you pointed out, what qualifies as a ‘better outcome’ is not something that is universally agreed upon) on at least a few occasions, and so it is understandable that some thinkers thus come to venerate consistency. It is a delicate issue to determine when they cross the line and end up venerating consistency too much — i.e., to the extent of ignoring hard-wired facts about human psychology.

    It is my view that none of the three major theories of ethics — namely, deontology, utilitarianism and virtue ethics — completely captures how we mete out moral judgments; our moral views are really more like a mix of all three. E.g., we would presumably judge a drunk driver much more harshly if he ends up killing three pedestrians than if he just crashes into a streetlamp. If we are to be perfect deontologists, we should judge him equally in both cases, since he broke the same rule both times — i.e., the rule that one should not drive with unsound faculties. But that doesn’t mean that we are perfect utilitarians either. If we were, then people’s intentions should not factor into our decisions on how to assign blame or praise — and yet it is clear that, e.g., we treat manslaughter and murder differently. And yet it is also clear that we are not perfect virtue ethicists — if we were, then we would think that well-meaning people who cause harm to others through gross incompetence should not be punished; but this is obviously not the case. Of course, my examples here are rather simplified and I am sure some people would take issue with them, but I hope to have at least motivated why it seems to me that declaring one’s allegiance to any of these three major schools of ethics is quite misguided. Unfortunately, a lot of philosophers seem to spend way too much time arguing for and against each theory, without realising that it is perfectly okay that we do not consistently invoke any one of them in all our moral judgments. Yes, sometimes our primitive moral intuitions can benefit from some refinement by exposing them to tests of consistency, but a lot of philosophers strike me as going too far.

    This comment is probably getting too long now, so I will just end with one anecdote. I once attended a seminar where the first presenter shared some of his ideas on justifying fairness — defined as giving everyone a say in how resources are to be shared; or, more broadly, how laws are to be determined — in a society. He began with a thought experiment which was supposed to eventually lead to the points he was trying to make, and which was something like this: “Adam and Ben are the only two people living on an isolated island. Adam is more efficient at both catching fish and picking apples. In such a situation, it seems that Adam may just procure all the resources for himself and not give Ben any say in how they should share the resources… Etc.” That was when I stopped listening, because if the presenter had never heard of the concept of comparative advantage, then I didn’t believe he would have very useful things to say about his chosen topic.
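
    [A quick illustration of comparative advantage with invented numbers, for readers who haven’t encountered the concept – the point being that even someone absolutely better at both tasks gains from specialization and trade:]

```python
# Comparative advantage, with invented numbers. Daily output working alone:
#          fish   apples
# Adam:     10      10
# Ben:       2       8
# Adam is absolutely better at both tasks, but an extra fish costs Adam
# 1 apple while it costs Ben 4 apples -- so Adam has the comparative
# advantage in fish, and Ben in apples.

# Autarky: each splits his day 50/50 between the two tasks.
adam_alone, ben_alone = (5, 5), (1, 4)  # (fish, apples)
total_alone = (adam_alone[0] + ben_alone[0], adam_alone[1] + ben_alone[1])  # (6, 9)

# Specialization: Adam spends 70% of his day fishing, Ben only picks apples.
adam_spec, ben_spec = (7, 3), (0, 8)
total_spec = (adam_spec[0] + ben_spec[0], adam_spec[1] + ben_spec[1])  # (7, 11)

print(total_alone, total_spec)  # (6, 9) vs (7, 11): more of both goods, so both can gain
```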

    Comment by Maxwell Bühler | July 23, 2014 | Reply

    • I notice at least one typographical error in my comment above — please pardon my sloppiness.

      Comment by Maxwell Bühler | July 23, 2014 | Reply

  2. Thank you for your comment and your feedback.

    “Viewed in such historical contexts, the pursuit of consistency has prominently led to better outcomes”

    A person who’s in favour of consistent moral frameworks may frame the development that way, but I’m not convinced that consistency is necessarily desirable. The desire to be consistent may also lead one to make decisions one might not otherwise have made, and provide justification for acts which would, in the absence of a commitment to consistency, not be justified; that is, a desire to be consistent may change people’s preferences and change how people approach moral problems, sometimes in unfortunate ways. This closely relates to concepts like escalation of commitment. Another way to approach this would be to think of examples of situations where people with common moral frameworks would consider the consistent individual with a different moral framework to be ‘less moral’ than the inconsistent individual with a similar moral framework; many examples of such situations could be listed. A consistent racist might be an individual who hates all black people, whereas an inconsistent one might be an individual who hates most black people but who would nevertheless be perfectly okay with interacting with Mr. Smith living next door – which of these people would outsiders criticize more harshly? The pursuit of consistency may lead to both good outcomes and bad outcomes; I’d certainly prefer an inconsistent communist who was not in favour of killing all the class-enemies to a consistent one who was, and I’d prefer an inconsistent Nazi who might be okay with looking the other way to a consistent one who would inform on Mr. Stern and have him sent to the concentration camp. I really don’t consider consistency much of a signal that you’re doing things right; if you have by accident adopted a moral system which is really quite terrible (perhaps because you grew up in the sort of society mentioned in the post where killing Jews is the behavioural norm), being inconsistent is a good thing, and the ideal of consistency may if anything lead to worse outcomes. An important point in that context is incidentally that people are notoriously bad at figuring out whether they live in that metaphorical Jew-killing society or not (and/or, if they do live there, bad at figuring out that killing Jews might be a bad thing); we are always the moral ones, and the people who disagree with us are always the immoral ones. What does it feel like to be wrong? It feels like being right. It’s part of why this stuff is really quite difficult.

    “Unfortunately, a lot of philosophers seem to spend way too much time arguing for and against each theory, without realising that it is perfectly okay that we do not consistently invoke any one of them in all our moral judgments.”

    I was considering saying a lot more about this aspect in my post, but at that point I had decided that I’d already spent more than enough time on it. But yes, this is something to which I certainly object as well, as I also make clear in the coverage above. These people don’t seem to think about these things as convenient models, as various different approaches one might apply to a given problem; they seem to think it’s all or nothing, and that only one model (‘my model’) can be The Correct Moral Theory. Frankly I consider this to be stupid. It’s incidentally my impression that (slightly) ‘related fields’ like medical ethics are somewhat more interested in the data and more pragmatic about which models to apply to a given problem – perhaps because they tend to seek answers to quite specific questions to which classical moral models do not provide good answers – and that the ‘classical ethicists’ might be able to learn something from the approaches applied in e.g. that field; but this impression is based on limited data and may well be wrong.

    Comment by US | July 23, 2014 | Reply

    • Thanks for your reply. You are of course correct to point out that consistency is a two-edged sword, but when analytic moral philosophers promote consistency, their implicit, unmentioned premise is that such consistency would be used to promote what they consider to be a desirable outcome for everyone involved. E.g., they (most probably) wouldn’t argue that we should disenfranchise white men for the sake of consistency since we do not allow non-whites and women to vote. Of course, now the question turns to how we may make sure that consistency is being used to promote good outcomes rather than bad ones, and that is where things get tricky, and it certainly doesn’t improve the discussion to ignore information from relevant disciplines. You might still be able to discover some relevant insights through the use of thought experiments — these devices can be quite good at teasing out people’s moral intuitions — but of course the methodology involved in extrapolating these insights to more generalised cases requires more justification, and such justifications are stronger if they are based on findings in, say, the behavioural sciences than if they are based on logic.

      I think you are also correct to observe that philosophers working in applied ethics tend to have a more nuanced/not reality-impaired view of how we should regulate potentially morally problematic issues. But I don’t know if my assessment of your statement is accurate, as it is not a field in which I have done very much reading.

      Comment by Maxwell Bühler | July 24, 2014 | Reply

      • “I think you are also correct to observe that philosophers working in applied ethics tend to have a more nuanced/not reality-impaired view of how we should regulate potentially morally problematic issues.”

        Right, the applied part is probably more important than the field in which such research takes place. The only sort of applied ethics I’ve really read much of has been medical ethics, but it’s certainly my impression that these people approach these sorts of topics very differently from how the authors of this book do.

        I actually think I have read more medical ethics than ‘classical ethics’, but most of the ethical problems I’ve read about have been discussed by medical doctors, not ‘ethicists’ (/philosophers) – this is part of why I’m cautious about drawing conclusions. Ethical issues pop up all the time when dealing with medical topics, e.g. when dealing with information asymmetries in health care (a topic also closely related to insurance matters) and aspects pertaining to end-of-life care. However I think I’ve only read one ‘proper’ book specifically devoted to the topic of medical ethics.

        Comment by US | July 24, 2014

