Yet another one of Paul Graham’s essays – read it here. As usual, it’s full of good stuff:
“What does it mean to disagree well? Most readers can tell the difference between mere name-calling and a carefully reasoned refutation, but I think it would help to put names on the intermediate stages. So here’s an attempt at a disagreement hierarchy:
DH0. Name-calling. [...]
DH1. Ad Hominem. [...]
DH2. Responding to Tone. [...]
DH3. Contradiction. [...]
DH4. Counterargument. [...]
At level 4 we reach the first form of convincing disagreement: counterargument. Forms up to this point can usually be ignored as proving nothing. Counterargument might prove something. The problem is, it’s hard to say exactly what.
Counterargument is contradiction plus reasoning and/or evidence. When aimed squarely at the original argument, it can be convincing. But unfortunately it’s common for counterarguments to be aimed at something slightly different. More often than not, two people arguing passionately about something are actually arguing about two different things. Sometimes they even agree with one another, but are so caught up in their squabble they don’t realize it.[...]
DH5. Refutation.
The most convincing form of disagreement is refutation. It’s also the rarest, because it’s the most work. Indeed, the disagreement hierarchy forms a kind of pyramid, in the sense that the higher you go the fewer instances you find.
To refute someone you probably have to quote them. You have to find a “smoking gun,” a passage in whatever you disagree with that you feel is mistaken, and then explain why it’s mistaken. If you can’t find an actual quote to disagree with, you may be arguing with a straw man.
While refutation generally entails quoting, quoting doesn’t necessarily imply refutation. Some writers quote parts of things they disagree with to give the appearance of legitimate refutation, then follow with a response as low as DH3 or even DH0.
DH6. Refuting the Central Point.
The force of a refutation depends on what you refute. The most powerful form of disagreement is to refute someone’s central point.
Even as high as DH5 we still sometimes see deliberate dishonesty, as when someone picks out minor points of an argument and refutes those. Sometimes the spirit in which this is done makes it more of a sophisticated form of ad hominem than actual refutation. For example, correcting someone’s grammar, or harping on minor mistakes in names or numbers. Unless the opposing argument actually depends on such things, the only purpose of correcting them is to discredit one’s opponent.
Truly refuting something requires one to refute its central point, or at least one of them. And that means one has to commit explicitly to what the central point is. So a truly effective refutation would look like:
The author’s main point seems to be x. As he says:
<quotation>
But this is wrong for the following reasons…
The quotation you point out as mistaken need not be the actual statement of the author’s main point. It’s enough to refute something it depends upon.
What It Means
Now we have a way of classifying forms of disagreement. What good is it? One thing the disagreement hierarchy doesn’t give us is a way of picking a winner. DH levels merely describe the form of a statement, not whether it’s correct. A DH6 response could still be completely mistaken.
But while DH levels don’t set a lower bound on the convincingness of a reply, they do set an upper bound. A DH6 response might be unconvincing, but a DH2 or lower response is always unconvincing.
The most obvious advantage of classifying the forms of disagreement is that it will help people to evaluate what they read. In particular, it will help them to see through intellectually dishonest arguments. An eloquent speaker or writer can give the impression of vanquishing an opponent merely by using forceful words. In fact that is probably the defining quality of a demagogue. By giving names to the different forms of disagreement, we give critical readers a pin for popping such balloons.
Such labels may help writers too. Most intellectual dishonesty is unintentional. Someone arguing against the tone of something he disagrees with may believe he’s really saying something. Zooming out and seeing his current position on the disagreement hierarchy may inspire him to try moving up to counterargument or refutation.
But the greatest benefit of disagreeing well is not just that it will make conversations better, but that it will make the people who have them happier. If you study conversations, you find there is a lot more meanness down in DH1 than up in DH6. You don’t have to be mean when you have a real point to make. In fact, you don’t want to. If you have something real to say, being mean just gets in the way.
If moving up the disagreement hierarchy makes people less mean, that will make most of them happier. Most people don’t really enjoy being mean; they do it because they can’t help it.”
Another one of Paul Graham’s essays. A very, very good read, so I’ve quoted extensively from the essay below:
“Let’s start with a test: Do you have any opinions that you would be reluctant to express in front of a group of your peers?
If the answer is no, you might want to stop and think about that. If everything you believe is something you’re supposed to believe, could that possibly be a coincidence? Odds are it isn’t. Odds are you just think whatever you’re told. [...]
What can’t we say? One way to find these ideas is simply to look at things people do say, and get in trouble for. 
Of course, we’re not just looking for things we can’t say. We’re looking for things we can’t say that are true, or at least have enough chance of being true that the question should remain open. But many of the things people get in trouble for saying probably do make it over this second, lower threshold. No one gets in trouble for saying that 2 + 2 is 5, or that people in Pittsburgh are ten feet tall. Such obviously false statements might be treated as jokes, or at worst as evidence of insanity, but they are not likely to make anyone mad. The statements that make people mad are the ones they worry might be believed. I suspect the statements that make people maddest are those they worry might be true. [...]
In every period of history, there seem to have been labels that got applied to statements to shoot them down before anyone had a chance to ask if they were true or not. “Blasphemy”, “sacrilege”, and “heresy” were such labels for a good part of western history, as in more recent times “indecent”, “improper”, and “unamerican” have been. [...]
We have such labels today, of course, quite a lot of them, from the all-purpose “inappropriate” to the dreaded “divisive.” In any period, it should be easy to figure out what such labels are, simply by looking at what people call ideas they disagree with besides untrue. When a politician says his opponent is mistaken, that’s a straightforward criticism, but when he attacks a statement as “divisive” or “racially insensitive” instead of arguing that it’s false, we should start paying attention. [...]
Moral fashions more often seem to be created deliberately. When there’s something we can’t say, it’s often because some group doesn’t want us to.
The prohibition will be strongest when the group is nervous. [...] To launch a taboo, a group has to be poised halfway between weakness and power. A confident group doesn’t need taboos to protect it. It’s not considered improper to make disparaging remarks about Americans, or the English. And yet a group has to be powerful enough to enforce a taboo. [...]
I suspect the biggest source of moral taboos will turn out to be power struggles in which one side only barely has the upper hand. That’s where you’ll find a group powerful enough to enforce taboos, but weak enough to need them.
Most struggles, whatever they’re really about, will be cast as struggles between competing ideas. The English Reformation was at bottom a struggle for wealth and power, but it ended up being cast as a struggle to preserve the souls of Englishmen from the corrupting influence of Rome. It’s easier to get people to fight for an idea. And whichever side wins, their ideas will also be considered to have triumphed, as if God wanted to signal his agreement by selecting that side as the victor.
We often like to think of World War II as a triumph of freedom over totalitarianism. We conveniently forget that the Soviet Union was also one of the winners.
I’m not saying that struggles are never about ideas, just that they will always be made to seem to be about ideas, whether they are or not. [...]
To do good work you need a brain that can go anywhere. And you especially need a brain that’s in the habit of going where it’s not supposed to.
Great work tends to grow out of ideas that others have overlooked, and no idea is so overlooked as one that’s unthinkable. Natural selection, for example. It’s so simple. Why didn’t anyone think of it before? Well, that is all too obvious. Darwin himself was careful to tiptoe around the implications of his theory. He wanted to spend his time thinking about biology, not arguing with people who accused him of being an atheist. [...]
When you find something you can’t say, what do you do with it? My advice is, don’t say it. Or at least, pick your battles.
Suppose in the future there is a movement to ban the color yellow. Proposals to paint anything yellow are denounced as “yellowist”, as is anyone suspected of liking the color. People who like orange are tolerated but viewed with suspicion. Suppose you realize there is nothing wrong with yellow. If you go around saying this, you’ll be denounced as a yellowist too, and you’ll find yourself having a lot of arguments with anti-yellowists. If your aim in life is to rehabilitate the color yellow, that may be what you want. But if you’re mostly interested in other questions, being labelled as a yellowist will just be a distraction. Argue with idiots, and you become an idiot.
The most important thing is to be able to think what you want, not to say what you want. And if you feel you have to say everything you think, it may inhibit you from thinking improper thoughts. I think it’s better to follow the opposite policy. Draw a sharp line between your thoughts and your speech. Inside your head, anything is allowed. Within my head I make a point of encouraging the most outrageous thoughts I can imagine. But, as in a secret society, nothing that happens within the building should be told to outsiders. The first rule of Fight Club is, you do not talk about Fight Club. [...]
The trouble with keeping your thoughts secret, though, is that you lose the advantages of discussion. Talking about an idea leads to more ideas. So the optimal plan, if you can manage it, is to have a few trusted friends you can speak openly to. This is not just a way to develop ideas; it’s also a good rule of thumb for choosing friends. The people you can say heretical things to without getting jumped on are also the most interesting to know. [...]
Who thinks they’re not open-minded? Our hypothetical prim miss from the suburbs thinks she’s open-minded. Hasn’t she been taught to be? Ask anyone, and they’ll say the same thing: they’re pretty open-minded, though they draw the line at things that are really wrong. (Some tribes may avoid “wrong” as judgemental, and may instead use a more neutral sounding euphemism like “negative” or “destructive”.)
When people are bad at math, they know it, because they get the wrong answers on tests. But when people are bad at open-mindedness they don’t know it. In fact they tend to think the opposite. [...]
To see fashion in your own time, though, requires a conscious effort. Without time to give you distance, you have to create distance yourself. Instead of being part of the mob, stand as far away from it as you can and watch what it’s doing. And pay especially close attention whenever an idea is being suppressed. Web filters for children and employees often ban sites containing pornography, violence, and hate speech. What counts as pornography and violence? And what, exactly, is “hate speech?” This sounds like a phrase out of 1984.
Labels like that are probably the biggest external clue. If a statement is false, that’s the worst thing you can say about it. You don’t need to say that it’s heretical. And if it isn’t false, it shouldn’t be suppressed. So when you see statements being attacked as x-ist or y-ic (substitute your current values of x and y), whether in 1630 or 2030, that’s a sure sign that something is wrong. When you hear such labels being used, ask why.
Especially if you hear yourself using them. It’s not just the mob you need to learn to watch from a distance. You need to be able to watch your own thoughts from a distance. That’s not a radical idea, by the way; it’s the main difference between children and adults. When a child gets angry because he’s tired, he doesn’t know what’s happening. An adult can distance himself enough from the situation to say “never mind, I’m just tired.” I don’t see why one couldn’t, by a similar process, learn to recognize and discount the effects of moral fashions.
You have to take that extra step if you want to think clearly. But it’s harder, because now you’re working against social customs instead of with them. Everyone encourages you to grow up to the point where you can discount your own bad moods. Few encourage you to continue to the point where you can discount society’s bad moods.
How can you see the wave, when you’re the water? Always be questioning. That’s the only defence. What can’t you say? And why?”
So, let’s say you think policy X is optimal and policy Y is not. Or perhaps religion X is true and religion Y is not. Or you know something about subject X and you think you’re right, even though other people disagree. Now, if you’re like most people, you haven’t taken a closer look at the data.
Not necessarily, mind you, the policy data or the data supporting or questioning the religious ideas. Most people use some form of that kind of data in their arguments – perhaps not so much because they find the data convincing, but because they feel they need to justify their beliefs somehow, and if you say ‘policy X will result in more poor people’, or something along those lines, odds are that the added information makes your position look more convincing to your opponent than if you had left it out. But ‘unemployment will go up 2.4% if policy Y is implemented’ is not the kind of data I had in mind here. I was thinking about the data on who thinks what. Background variables. Do people who think X have things in common which might explain why they think the way they do? This is an important part of understanding the subject – if your age or gender affects your opinion on the matter, disregarding those factors when explaining why you think the way you do leads to a potentially huge omitted-variable bias. In short, it can cause you to deceive yourself about which factors have actually been important in the formation and development of your views. You think that you think X because of A and B (‘unemployment will go up 2.4%’); but really it’s more a mixture of A, B, C and D.
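The omitted-variable point can be made concrete with a small simulation (entirely hypothetical numbers, sketched in Python with NumPy): suppose an opinion is driven both by a stated reason A and by a background variable D, and A is itself correlated with D. Regressing opinion on A alone then overstates A’s importance, because A’s coefficient absorbs part of D’s effect – exactly the self-deception described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Background variable D (say, age group), which also influences
# which stated reason A a person finds compelling.
d = rng.normal(size=n)
a = 0.8 * d + rng.normal(size=n)            # A is partly driven by D
opinion = 1.0 * a + 1.0 * d + rng.normal(size=n)

# Full model: regress opinion on both A and D.
X_full = np.column_stack([a, d])
beta_full, *_ = np.linalg.lstsq(X_full, opinion, rcond=None)

# Misspecified model: omit the background variable D.
beta_short, *_ = np.linalg.lstsq(a.reshape(-1, 1), opinion, rcond=None)

print(beta_full)    # both coefficients close to the true value 1.0
print(beta_short)   # A's coefficient biased upward (absorbs D's effect)
```

With these made-up coefficients the short regression should land near 1 + Cov(A, D)/Var(A) ≈ 1.49 rather than the true 1.0 – the ‘unemployment figures’ get credit that actually belongs to the background variable.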
People make arguments constructed like this: I think/like/prefer X because Y, where Y is some variable that bears at least somewhat on the validity of the position under evaluation. Like, say, unemployment. Maybe I think the other guy’s argument is faulty or incomplete. Perhaps A (‘taxes’) matters more to me than B (‘environmental safety measure Q’). On net, the number of supporting arguments in favor of X is higher than the number in favor of Y. Things like that.
Here are some other things you might say in an argument. Most people don’t bring up considerations like these very often, and when they do, it’s mostly the characteristics of their opponent that get the attention. Bringing up this kind of thing can be regarded as anything from irrelevant to the matter at hand to an unjustifiable attempt to smear the opponent. The funny thing is that variables and related inferences like the ones below sometimes have extremely high explanatory power when you want to estimate what individual A thinks about subject X. We know this stuff matters a lot, but people really like to pretend it doesn’t, and it’s often considered cynical or perhaps downright rude to bring it up in conversation. Of course none of these will have 100 percent explanatory power either, so I urge you not to reject arguments like these out of hand just because they only explain part of the variation in the data – think of them as variables you might decide to estimate in an econometric model while trying to explain, say, the distribution of an opinion variable Z:
- ‘I think X because my mother and father had an academic education.’
- ‘My parents (priest/teacher/big brother) told me X and I’ve been taught by them not to question their authority.’
- ‘Because I was born in country C instead of country D.’ (related – articles like this one are part of why I keep coming back to tvtropes even though I tell myself not to)
- ‘Because I was born in the year XXX instead of the year XXY.’
- ‘Because I have a girlfriend and a child.’
- ‘Because I’m XX years old instead of XY years old’ – or more specifically: ‘Because I’m 55 and policy X will benefit me personally.’
- ‘Most of my friends think X is better/true.’
- ‘If I support policy X I will obtain higher status among my peers, even though at a cursory glance it might look like policy X will hurt me personally.’
- ‘Supporting X (/cause X) makes me feel special and I like to feel special.’
- ‘Because I’m (fe)male.’
- ‘Because I like my job and have an optimistic frame of mind.’
- ‘I spent a lot of time thinking about these things because I derive status from winning arguments, because I think it makes me look smart. If the other guy is perceived to be right and wins the argument, I won’t look smart.’
- ‘I haven’t really thought about this at all and I don’t know what to think, but I’m supposed to participate in arguments like these and provide an opinion, so I’ll just say X because it’s the first thing that popped into my mind when they asked me. Also, most people I care about seem to support X.’
- ‘I have to support Y because A supported X and I don’t like/trust A’s.’
- ‘People with a high education and income tend to believe/support X, so if I support/believe X my status will increase.’
- ‘I heard argument X before I heard argument Y.’
- ‘A supports Y. If I support X then A will become offended and an unpleasant situation might arise. I will therefore support Y.’
Part of why people don’t look at data like this is that it’s often impossible to come by in specific cases, and it’s usually very difficult to quantify effects like these. There’s also a lot of impact heterogeneity when it comes to the effect of specific variables on individuals, and you can easily commit the ecological fallacy without realizing it if you try to include variables like these in your model of your opponent’s opinion-forming mechanism. Maybe the inclusion of such variables does not really make matters clearer; perhaps the opposite, and perhaps some of the included variables are irrelevant. Do I think X because the cute girl in the lab thinks X, because my parents disagree, because the friends who introduced me to the subject all think X, or because of the latest employment figures? Who knows? But we like to pretend that we do know, and that our motives are pure – only the employment figures matter. And if somebody concedes that the other stuff also matters, the effect is still dismissed as unimportant and unworthy of our attention; quite the opposite, we ought to focus on the employment figures.
An interesting thing is that in some cases the numbers are very easy to come by, and even then they tend to be ignored. For example, about 90% of Egyptians identify as Muslim, so if you grow up in Egypt there’s a very high likelihood that you’ll be born and raised by people who consider Islam the ‘true’ religion – whereas if you’re born in the US there’s something like a less than 1% chance that you’ll be born and raised by Muslim parents, and a much, much higher chance that you’ll be raised by people who consider themselves Christians. There’s a very high correlation between the religious views of children and those of their parents.
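The ecological-fallacy risk mentioned above can be sketched with made-up numbers: two groups whose averages move together, even though within each group the individual-level relationship between the same two variables runs in the opposite direction. Inferring the individual relationship from the group averages would get the sign exactly wrong.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical groups. Within each group, y FALLS as x rises,
# but the group with the higher average x also has a much higher
# baseline level of y.
x1 = rng.uniform(0, 1, 500)
y1 = 1.0 - 0.5 * x1 + rng.normal(0, 0.05, 500)   # group 1: low x, low y
x2 = rng.uniform(2, 3, 500)
y2 = 4.0 - 0.5 * x2 + rng.normal(0, 0.05, 500)   # group 2: high x, high y

# "Ecological" view: the group means rise together.
group_x = [x1.mean(), x2.mean()]
group_y = [y1.mean(), y2.mean()]

# Individual-level view: within each group the correlation is negative.
r1 = np.corrcoef(x1, y1)[0, 1]
r2 = np.corrcoef(x2, y2)[0, 1]

print(group_x, group_y)   # group means move in the same direction
print(r1, r2)             # both within-group correlations are negative
```

So a model of an individual built from group-level aggregates – ‘people in group 2 have high x and high y, so x must raise y’ – can point in exactly the wrong direction for every single person in the data.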
I tend to think that people who spend time thinking about this kind of stuff are usually not much harder to deceive than people who don’t. We’re all rational when it suits us, but when that is the case is rarely something we consciously think about. Most people merely pretend to be rational when you question their rationality by bringing up ‘the other stuff’; some are just better pretenders than others.