Here are 5 statements:
“You have a nice place.”
“You’re a bit lazy, and I’m sure you’d have gotten more out of the latest lecture if you’d read the material more carefully beforehand.”
“you have a fantastic episodic memory.”
“I love that you actually read these kinds of things…”
“The place would have looked less messy if you’d dusted a bit before we arrived.”
Yesterday I was told 3 of those things. One is a direct quote, the other two are English translations of what was said in Danish. I don’t think it takes a lot of work for you to realize which of the above statements I ‘made up’.
There’s a lot of stuff you can’t say. And a lot of stuff you’re expected to say. And there’s a lot of stuff that doesn’t go into either of those categories.
I assume that saying nice things to others will most often make them think you’re more likely to be a nice person, because saying nice things is certainly something most people assume nice people are more likely to do (doing nice things is a stronger signal than saying nice things, but saying nice things provides many psychological benefits as well).
Providing constructive criticism will often be much riskier than saying something nice, even if that criticism contains potentially much more useful information. This is, among other things, because the more potentially useful the criticism is, the more likely the other party is to respond emotionally, rather than rationally, to it. So people are unlikely to run the risk of providing useful constructive criticism to another individual before they know the other party well (…and presumably have said a lot of nice things to them). Granted, someone who knows the other individual well is also more likely to be able to provide good constructive criticism, so this dynamic is not without benefits (a higher signal-to-noise ratio), but the total amount of constructive criticism supplied would surely be much higher if it were costless to provide it to strangers. One big problem is that it’s hard to credibly commit to not taking constructive criticism personally and responding emotionally.
At this point it seems to me that most people who interact with me regularly are being nice to me and mostly say nice things to me. I find it interesting that I rarely explicitly acknowledge that this fact may not necessarily have anything to do with me and my attributes, and that people may say nice things simply because of how they believe such statements reflect on themselves (‘I’m the kind of person who says that he has a nice place. That’s what nice people say – so I must be a nice person.’). Also, communication strategies may be implicit and not subject to close scrutiny by the people employing them – indeed it may be optimal not to subject your communication strategies to close scrutiny, as an implicit approach to these matters makes it harder to evaluate e.g. the level of sincerity displayed (and thus makes you more likely to successfully claim at the very least plausible deniability when you’re not being perfectly honest). Different perceptions of an individual’s status, attributes, etc., may make some sincere nice statements from one individual to the other seem insincere to the receiver (making a (negative) emotional response more likely).
Maybe a good way of thinking about this stuff is in terms of a binary social (verbal) feedback variable, which may be either ‘nice’ or ‘critical’, and then making an analogy to consumption vs investment. Nice things being said have consumption value; we like when others say nice things about us, and we derive pleasure from that. Criticism has investment characteristics; it’s initially costly (it hurts to be told you’re lazy), but it may have large positive effects in the long run if potential improvement strategies are addressed. Most of income is consumption – we’re mostly told nice things. If consumption is very low (not enough social validation from peers), it may be better for an individual to lower income than to invest the marginal unit of income; even potentially very useful criticism may not be very welcome when you feel socially rejected by others. Actually, you’ll only be willing to undertake an investment (accept critical remarks) once your consumption is above some baseline level (people are required to say a lot of nice things to others before they’re allowed to say less nice things to them without repercussions).
I don’t know. I like when people say nice things to me, so I’m certainly not telling anybody to stop doing that. But social stuff is confusing when you start to think about it.
On a related note – yesterday three people said something nice to me. Yesterday was a good day.
A few recent examples:
i. I played Citadels with my little brother this Christmas. I spotted two obvious instances of poor modelling which happened during the game.
The game is complex and I won’t go over all the rules here – it should be noted that the game’s complexity is probably part of why the errors described below were made in the first place. But anyway, we were in a situation where my brother had picked a specific card. Having picked that specific card, he had to try to guess which card I had on my hand – if he guessed correctly, I’d lose my turn and the income that turn would generate (which would benefit him and harm me, making him more likely to win the game). There were two obvious candidates: one card generating a potential income of 2 and another generating a potential income of 5. He knew I’d taken one of these cards but not which of them I’d picked – if I randomized my draw completely, there’d thus be a 50% chance for him to pick the right card. The situation took place during one ’round’ (subgame) of the game, and both of us knew that this would not be the last round in the game. But we did not know how many more rounds were to be played – a conservative estimate would be at least 4 or 5. Whether it would make sense to consider the round to be one round of several in a semi-’pure’ repeated game or not, and which type of repeated game we’re talking about, depended to some extent on which cards would be picked in future rounds (as I mentioned, the game is complicated – the fundamentals of the stage game can change during gameplay, e.g. I might end up in my brother’s position, i.e. as the player who should guess which card the other player had taken, in a future round); but it would make little sense to consider it a single-shot game.
Now, the first thing to note here is that if you consider it a repeated game, it probably doesn’t make a lot of sense not to at least consider mixing strategies. You could probably make an even stronger argument: Consider that if I play ’2′ (the card giving me an income of 2) with a probability of 100%, my brother would probably pick up on that relatively fast and pick that card every round, and I’d end up with an income of zero – and if I always played ’5′, he’d always pick 5. So the second person, the one picking the card to be guessed, has to consider adding some uncertainty to the table or he’s probably going to be in trouble. Now let’s think about how one might best mix strategies in this situation. An important theoretical aspect here is that while it’s certainly a finite game, the length of the game is still unknown, or at least uncertain, to the players (they do have some idea how long it’ll take to finish the game). This uncertainty adds complexity, and even though only relatively few rounds of the game are left, the game is still much too complex to be solvable by backwards induction by the players while they play the game, even if such a solution might exist. Incidentally, in the specific subgame in question I evaluated the costs of reversing the roles of the players (so that I’d get to be the one guessing, which would be a permissible change to the stage game given a specific subgame strategy constellation) to be too high to implement – but my brother didn’t know that.
The first modelling error here was made by me when I was deciding which card to pick. I did pure randomization when I picked my card – basically I shuffled the cards and picked one of the two at random. This was just me being stupid, because a 50/50 draw is obviously not the best mixed strategy (it would only be optimal if the two cards generated equal incomes). One way to think about this is that a 50% likelihood of picking either card draws a card with an expected value of 0.5 × 2 + 0.5 × 5 = 3.5, of which you keep only half – an expected income of 1.75 per round – if your opponent also mixes 50/50; and foolishly I’d considered only that strategic response to my mixing strategy. The problem is that the opponent of course needn’t mix at all! A mixing strategy on his part is obviously dominated by the pure strategy of always picking ’5′ – if he always picks ’5′, I end up with an average income of only 1 (I get an income of 2 every second round). I realized this 5 seconds after I’d picked my card…
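Arithmetic like this is easy to botch in one’s head, so here’s a minimal sketch of the subgame’s payoffs (assuming the simplified structure described above – I earn the picked card’s value unless my brother guesses my card, in which case I earn nothing that round; the function name and probabilities are purely illustrative):

```python
def my_expected_income(p2, q2):
    """My expected income per round when I play the '2' card with
    probability p2 and the guesser guesses '2' with probability q2.
    I only earn a card's value when the guess misses it."""
    return p2 * (1 - q2) * 2 + (1 - p2) * q2 * 5

# 50/50 mixing against a 50/50 guesser: half the expected card value.
print(my_expected_income(0.5, 0.5))  # 1.75

# 50/50 mixing against 'always guess 5' (q2 = 0): income 2 every other round.
print(my_expected_income(0.5, 0.0))  # 1.0
```

Against a best-responding guesser, the 50/50 mix thus earns 1 per round, not 1.75.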
This is where we get to the second modelling error. After that specific round had been played – he’d picked 5 and I’d gotten lucky and randomly picked 2, so the inferior strategy didn’t cost me anything in this specific case – my little brother said that ‘of course he’d picked 5, it was the dominant strategy’. I thought that this was obviously true against a 50/50 mix on my part, but that it would not be an optimal response to other mixing strategies with a low probability of playing ’5′ (nor would it be an optimal response to the pure strategy of always playing ’2′). I assumed we’d play at least four more rounds, and in that case it would probably be optimal to go with something like a ~30/70 mix (i.e. one ’5′ and three ’2′s in the rounds to come) – I figured that 5 is 2.5 times as much as 2, so I should play ’2′ 2.5 times as often as ’5′ in equilibrium; i.e. 2.5 ’2′s for every ’5′, meaning I should play ’2′ in 2.5 out of every 3.5 rounds, roughly 70% of the time. I assumed my little brother would mix as well in the rounds to come, when I would no longer obviously be mixing 50/50, and that he’d reach a similar conclusion – that he should pick 5 more often than 2 to minimize my potential income and end up near the (assumed) long-run equilibrium. After the game my little brother made it clear that he had not mixed but had played 5 every time, and he stated that he’d picked that strategy because it was ‘the dominant strategy’ and because it would be his best response to any strategy I could come up with. Which it clearly wasn’t.
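The ~70% figure can be checked numerically. Under the same simplified payoffs (a hypothetical simplification: I earn the card’s value only when the guess misses), a brute-force search for the mix that maximizes my guaranteed income against a best-responding guesser lands on playing ’2′ with probability 5/7 ≈ 71%:

```python
def my_expected_income(p2, q2):
    """Expected income per round: I play '2' w.p. p2, the guesser
    guesses '2' w.p. q2; I earn a card's value only on a missed guess."""
    return p2 * (1 - q2) * 2 + (1 - p2) * q2 * 5

# Against a best-responding guesser, my guaranteed income is the worst
# case over his two pure guesses; grid-search for the mix maximizing it.
best_p2, guaranteed = max(
    ((p / 1000, min(my_expected_income(p / 1000, 0.0),   # he guesses '5'
                    my_expected_income(p / 1000, 1.0)))  # he guesses '2'
     for p in range(1001)),
    key=lambda pair: pair[1])

print(best_p2)     # 0.714, i.e. 5/7: play '2' about 71% of the time
print(guaranteed)  # ~1.43 per round (10/7), whatever the guesser does
```

Note that always guessing 5 is a best response only to mixes that play ’2′ less than 5/7 of the time; against the equilibrium mix the guesser is exactly indifferent between his two guesses.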
ii. I went shopping yesterday. I got to the store and it was full of people. I generally dislike shopping when there are a lot of people around, and I generally avoid this by strategically shopping at times of day when I know not very many people go shopping. I have previously arrived at a store, decided it was too full of people and postponed my shopping to a later point in time because of that, but yesterday I decided instead to just get it over with fast. When I came back home I remembered that it’s been mentioned in the papers that a lot of people are sick with influenza in Aarhus, and so I realized that I’d just exposed myself to a huge health risk considering how many people were in the store. If asked about this type of stuff before I left my home, I’d have said that such a risk would be completely unacceptable to me, because I have exams before long and thus it would be very inconvenient for me to get sick at this point. If I’d included that health risk in my model, I would not have gone shopping yesterday.
I will often avoid taking public transportation when it’s possible for me to do so due to similar health-related reasons – diseases are easily transmitted in such environments. People often do not remember to include risks like these in their mental models. That’s poor modelling.
Even (reasonably) simple card games and everyday decisions about stuff like when and where to go grocery shopping can include models too complex for humans to handle well; our cognitive limitations are easy to ignore if we don’t think about them, but they’re there just the same. Social dynamics are usually a lot more complex to model than the stuff in the post. Sometimes it seems almost unbelievable to me that people somehow make all this stuff work – taking all those decisions they do on an average day, interacting with all those other people along the way… Given how complex the world is and how even very simple things like a card game can cause us all kinds of problems when we start thinking about them, I find this pretty amazing to think about.
“A long-held myth regarding development is that as people age, they all become alike. This view is refuted by the third principle of adult development and aging, which asserts that as people age, they become more different from each other rather than more alike. With increasing age, older adults become a more diverse segment of the population in terms of their physical functioning, psychological performance, and conditions of living. In one often-cited study, researchers examined a large number of studies of aging to compare the amount of variability in older versus younger adults (Nelson & Dannefer, 1992). This research established that the variability, or how differently people responded to the measures, was far greater among older adults. Research continues to underscore the notion that individuals continue to become less alike with age. Such findings suggest that diversity becomes an increasingly prominent theme during the adult years, a point we will continue to focus on throughout this book.
The fact that there are increasing differences among adults as they grow older also ties into the importance of experiences in shaping development. As people go through life, their experiences cause them to diverge from others of the same age in more and more ways. You have made the decision to go to college, while others in your age group may have enlisted for military service. You may meet your future spouse in college, while your best friend remains on the dating scene for years. Upon graduation, some may choose to pursue graduate studies as others enter into the workforce. You may or may not choose to start a family, or have already begun the process. With the passage of time, your differing experiences build upon each other to help mold the person you become. The many possibilities that can stem from the choices you make help to illustrate that the permutations of events in people’s lives are virtually endless. Personal histories move in increasingly idiosyncratic directions with each passing day, year, and decade of life.”
I didn’t post this quote when I first blogged Adult Development and Aging mainly because I figured the insight was probably important enough to merit a post of its own, but also because I figured that if they dealt with this aspect in more detail later on, I’d rather wait until then to handle the specifics. Anyway, it’ll be a while until I get to that stuff, and I find myself thinking about these things now and then these days. I’m mostly thinking about how this stuff relates to how we form friendships and establish romantic partnerships. As people age it seems to me that they become less likely to meet that ‘someone who’s just right for me’ – and not just because of the work of their romantic rivals. Because of the increasing variation in behaviours, preferences and outcomes, people who are aging perhaps gradually realize that it is strategically optimal for them to become more tolerant, more permissive, and so they implicitly and gradually implement such strategies to increase their chances – but that’s hardly always the case, and to the extent that it is, the process likely involves them making compromises that would perhaps have been unnecessary if the partners in question had met a decade earlier in their lives. (Though I may here underestimate how much work is required to make a relationship last that long.) Path dependence matters a lot when it comes to both friends and relationships. As I’ve underscored before here on the blog, a ‘new’ friend is most often introduced by an ‘old’ friend or acquaintance, and most people rely to a very great extent on their existing social network when they want to make adjustments to it.
Over time people’s social networks become entrenched; it gets harder to find and keep new friends not only because every potential new friend is competing for your attention with the whole set of friends you already have, but also because the potential new friend becomes increasingly less likely to share your interests or preferences over time, at the very least when compared with the people with whom you frequently interact. Interaction affects preferences and behaviours, for friends, family and partners alike.
Though people in general tend to become more different from each other as they age, I tend to believe that cohabiting partners do not, and that they on the contrary tend to become more alike over time. This is of course because they tend to form similar habits and do similar stuff. Another noteworthy dynamic is the ‘I’ve known you a long time and I’ve invested a lot in this relationship at this point, so it doesn’t matter as much to me that you’re not as compatible as I’d like you to be as it would if we’d only just met’. Of course there’s also (hopefully) the frequent feedback from the partner, making you less likely to stray far from the partner ideal of the other party – such feedback is harder to obtain for people not in a relationship. There’s also the ‘my previous partner/parents/whatever behaved this way (/cheered for the Green team), and so if you don’t behave this way we won’t be compatible’. Politics, religion and similar stuff’s really important, and often people’s opinions about these matters crystallize over time. If crystallization of this kind takes place over time, it will generally harm outsiders (singles) and benefit insiders (couples); the people in romantic relationships become more alike over time and so will feel a closer bond to each other as time goes by, and the aging single will, in the absence of a romantic partner, often obtain much of the relevant social feedback from other singles, who may not be able to give useful feedback regarding this aspect of life. For example, a single aging man may start to think that his religious or political views cannot possibly matter a great deal to a potential partner, because such things do not matter a great deal to the people with whom he usually interacts.
It should perhaps also be noted that the potential decreased compatibility of the remaining outsiders with the insiders makes the outside option become less attractive to the insiders (making them less likely to break up with their partners).
About a decade ago I had relatively few problems talking to and interacting with my extended family (cousins, uncles). These days it’s a pain for me to do it for any period of time, and I found myself actively avoiding the presence of some of these people this Christmas. To the extent that I did interact with them I was polite and helpful, but I did avoid them and I did not want to spend time with them. I find myself worried about where I’ll end up in another decade if things do not go well. Or is it ‘if things do go well?’
Even though I mostly don’t post personal stuff here anymore, I felt a personal post this week was probably in order. I wrote another one of those earlier this week, but I pulled it quite fast (for many reasons) – we’ll see if I let this one through my implicit filter.
So, people who’ve read along for a while are probably starting to get worried at this point – ‘personal stuff’, that can’t be good… Well, no need to worry. I’ve had a good week. A close friend who needed a place to crash stayed with me for a few days; this is the first time I’ve ever been in such a situation. I can’t speak for my friend, but I had a good time. I value my privacy very highly, and I generally don’t like being around people for extended periods of time. So the fact that I had a good time is, I think, sort of important. I’ve been thinking that there might be things to learn from the experience so I’ve thought a bit about it along the way. An important insight did not occur to me until today, and that insight is what first motivated me to write this post. But I’ll get to that stuff later.
When my friend (let’s call the individual in question ‘X’) asked me, one of my first reactions was to feel flattered. I’m vain, like most people – sue me. Anyway I realized that I was now in a situation where I had a friend who felt comfortable asking me a favour like that, and I realized that that felt awesome. Especially as I was able to say yes; it felt awesome being able to say yes. I should perhaps point out that even though X would probably argue – indeed has argued (quoting X: “you’ve never had friends who are idiotic enough to get themselves in a situation in which they’d appreciate help like that”) – that it’s not necessarily a good thing that I now have a friend ‘like that’, I really couldn’t take that argument seriously.
When X asked me I also felt a little bit scared and uncomfortable, though I should make it clear that most of those thoughts only came later, after I’d said yes. What if it didn’t work out? What if I couldn’t stand spending so much time with X, or vice versa? As mentioned, I’m a very private person, and given the circumstances we’d have to share the same room for a few days – what if that was too much? I really didn’t know if I could handle that; during the last ten years I don’t think I’ve ever been in a situation where I was more or less unable to retreat from other people for any extended period of time if it became too much for me. And what if it became too much for X – what if X couldn’t stand being around me that long? What helped me there, though, was that I knew that X knows at least as much about what’s going on in my life as do my own brothers, and it’s very safe to say that X is personality-wise more like me than anyone in my own family. If I couldn’t even handle a few days in the same room as X, well… As for whether X could handle spending so much time with me, I figured that as long as I at least tried to behave reasonably like the person I’d like to be, which is what I try, to a significant extent, to do on a day-to-day basis anyway (though with varying degrees of success), it should be okay. So I ended up thinking that it would be fine and that it might even be fun and/or do me some good – the implicitly added social-control element making me marginally more likely to do useful and productive stuff while X was around also had to be considered (the Hawthorne effect). Though on the other hand I’d add that this element should not be overemphasized; X knows me quite well, and so I knew that I wouldn’t have to put up any kind of elaborate facade in order to behave in what X would consider an ‘acceptable manner’.
If that had not been the case I’d have been a lot more worried about the arrangement, because then I’d also have had to worry about significant, foreseeable and ‘perceived necessary’ behavioural changes ‘draining me’.
Since I more or less stopped intrinsically caring about grades and how I did in school, I’ve tended to have a bit of a hard time figuring out what I was actually aiming for in life. My brain has tried to convince me that partnership and perhaps children are the sort of things I should aim for, and it has also tried to convince me that I’m not particularly likely to experience that kind of stuff during my life, which is annoying. I’ve long since convinced myself that career-stuff is unlikely to be fulfilling on its own. So what else? An interesting notion here is the fact that I’ve ‘traditionally’ been very skeptical about the value of friendships – close friendships were for people who couldn’t find a partner and then tried to fill the void in other ways. I’d think that even long-term friends aren’t actually all that close, and how many of the people who cannot even get/keep a partner manage to find/keep a close, long-term friend anyway? I’ve been skeptical.
Since my period of social isolation ended, to the extent that it has, I’ve so far tended to think of friendships as a way to avoid problems, as a strategy to avoid isolation. That was the main reason why I started interacting with people again: to avoid problems, to avoid a repeat of the hikikomori experience. It wasn’t that I thought I’d find interesting people to interact with – I’d never had close friends at that point. Under the conceptual approach I employed, friends had merely instrumental value – ‘it’s good for you to interact with others, so you should do that from time to time’. And that was it. It no longer is. Friendships can be much, much more than that. My friendship with X is not ‘just’ a ‘friendship to avoid problems’-friendship. My friendship with X is at this point, at least to me, probably closer to an ‘X is awesome, I feel lucky we’ve found each other and now have the opportunity to interact and exchange ideas and views, and I’d feel devastated if I no longer had this’-friendship. I don’t interact with X because I know that ‘it’s good for me’; I do it because I want to, because I enjoy it. Maybe I was in the same situation three months ago and it has just taken this long for my self-awareness to catch up; it’s been a gradual process surely, but it just hit me today: ‘This friendship is an important part of your life, and you should be very careful not to underestimate how valuable it is.’ At this point I’m really starting to realize that a friendship isn’t ‘just’ anything; establishing and maintaining such a social relationship with another individual can meaningfully be considered one of the major life goals.
In case anyone was wondering, X is a female.
Regarding the “I feel lucky we’ve found each other and now have the opportunity to interact and exchange ideas and views”-part, I’m pretty sure I could say that about a commenter or two here as well. ‘Online friendships’ are different from real-life ones, but sometimes they end up overlapping, and I should probably mention that if one of you feels like you’d like to know me better, and that I’d perhaps like to know you better as well, you’re welcome to reach out in this comment section. I’ve started to use Skype regularly, and it’s (…almost… – you can’t really disregard the time difference) as easy to skype with someone from Denmark as it is with someone who lives on a completely different continent. I’d probably prefer to establish contact with people who’ve commented here before and/or have read along for a while. And please don’t consider it a one-time offer; consider it a standing invitation.
So I thought about this stuff a while ago while I was out for a walk, and I decided back then that I should blog it when I got home. When I did get home I’d forgotten all about it (it was a long walk). Today I was out walking again, and well…
Okay, so let’s assume a job interviewer asks you how you’d feel about working with X, X being the kind of stuff you could be expected to work with in the job function in question. The obvious answer to many people would be ‘I’d feel great about working with X, I’d be very excited to have that opportunity’ or something along those lines. Though ‘it’s what I’ve dreamt of my entire life’ is probably an unwise reply in some situations (desk clerk, bouncer, renovation worker..), in general it seems obvious that it makes a lot of sense to fake interest and excitement in such a situation; this is because such an approach is usually perceived to make you more likely to land the job.
But why is that again? Let’s think a little bit about the signalling aspects here. People who are intrinsically motivated need lower monetary compensation rates to motivate them to do their jobs than do people who are not; they’ll be happy with a lower wage because they like what they do, and if they really like what they do they’re less likely to complain about stuff like e.g. a poor work environment. So if you signal that you’re eager to work with this stuff, you signal that you have a lower reservation wage. This makes you more likely to land the job if you’re perceived to meet the task requirements, but the deceit should in equilibrium affect the employer’s expectations about your productivity – people who have lower reservation wages are all else equal less productive. On the other hand perhaps the reason why you’re eager is that you know a lot about the subject, which means that all else isn’t equal and that your interest might lead to higher productivity on the job or lower training costs. Depending on the specifics there are likely multiple optimal strategies here; and it’s worth having in mind that individual characteristics are highly likely to impact which strategy is optimal for a given individual in a given setting.
Now consider another variable that’s likely to come up in a job interview setting: ambition. Again, people are often implicitly encouraged to fake ambition because it’s perceived in some areas (though far from all) to increase their employment opportunities. If you’re ambitious, you’re willing to work harder than the other guy. If you’re ambitious, you care about the social hierarchy in the organisation, and if you care about that stuff you’ll be more likely to follow the instructions you’re given, which is often a useful trait for an employee to possess. If you’re ambitious, you’re probably willing to do a lot of extra stuff to impress the people above you so that you can rise in the social hierarchy, which corresponds to working harder for a lower level of monetary compensation. On the other hand, some employers prefer to limit the competition for the management spots by selecting people who are not ‘too ambitious’ for a given job function. And if a vacancy is created for a job function where it’s unlikely that a satisfactory performance will lead to further advancement in the organisational hierarchy, an employer may prefer an unambitious applicant, as he or she is less likely to become disgruntled by the absence of career advancement opportunities. Ambitious people are incidentally quite likely to be perceived as more aggressive than their unambitious counterparts, which also translates to higher expected wage demands (for the same amount of work).
If you’re perceived to be dishonest about your goals or attributes to a greater extent than is tolerated in such situations this will most likely harm your opportunities greatly, but it’s worth noting that the tolerated level of dishonesty may vary a lot across organisations. Note that organisations always have an incentive to create the illusion that honesty is your best bet at a job interview; that’s because it’s the best bet for the organisation, i.e. the strategy which, if applied by all applicants, would give the organisation the highest potential payoff. This is because if all applicants supply all the decision-relevant information to the organisation, this will make the organisation most likely to be able to pick the best applicant for the job. But here’s the thing; the organisational payoff should at the point where you’re not yet hired by the organisation be irrelevant to you. You don’t care about the organisational payoff at the job interview stage, at this stage you only care about your likelihood of landing the job and the expected pay; withholding information will most frequently be optimal if that information might make you less likely to land the job or likely to earn less. Please do not assume that just because firms implicitly punish deceit, complete honesty is the best strategy for you – in most settings, it’ll likely be a stochastically dominated strategy. On the other hand if you have to grossly misrepresent who you are in order to land the job, the expected derived utility from landing the job probably isn’t as high as you think it is; the employer is not the only one who should care about whether you’re a good match for the job. The optimal amount of deceit is non-zero, but the risk of getting the wrong job should be weighed against the risk of not getting the job. 
When deciding on the optimal level of deceit, do recall that the firm may have an incentive to withhold information from you as well: by lying to you about which types of information are important to them when deciding whom to hire (in order to stop people from trying to game the system, and to weed out dishonest candidates), by misrepresenting the career opportunities associated with the job (if applicants think the job is high-profile and likely to increase their future job market opportunities, they’ll likely lower their wage demands because of the human capital investment value of the job), or perhaps by misrepresenting to some extent what you’ll actually be doing when you get the job (bait-and-switch type strategies are likely sometimes optimal, because they can lead to lower wage demands).
As in romantic settings, displaying a low level of self-confidence is likely sub-optimal here. If you can’t convince yourself that you’re the applicant they should pick, that’s a great example of the kind of information you should be trying to hide from them. Don’t give the people involved the impression that you’re doing them a favour by showing up to the interview – even though most of the people who go to an interview don’t get the job, and from a certain point of view the firm you’re interviewing with is quite likely to simply be wasting your time.
I’ve written a lot of stuff about models on this blog in the past, so some of the stuff I’m writing now I’ve probably covered before. I thought it was worth revisiting the subject anyway.
First off, one way to think about a mental model is to consider it a way of thinking about a problem. This implies that wherever there’s a problem of some sort, you can construct a model. And thus, from a certain point of view (…the point of view of mathematicians, economists, engineers, or…), there’s always a model. It can be implicit or explicit – but it’s there somewhere. A model is an explanation, and it’s always possible to come up with an explanation. So when you see a model you don’t like, it’s not very helpful to say that ‘it’s only a model’. What else would it be? Whatever alternative you prefer is, from a certain point of view, ‘only a model’ too. If the model presented is an inaccurate representation of the problem at hand, then it’s the inaccuracy part that should be the subject of criticism, not the model part.
Most people dislike formal models that are very specific and give very precise estimates. They know instinctively that these models are simplistic and that the real world is much more complicated – so the seemingly over-precise estimates may be way off and may even seem downright silly. Skepticism is warranted, surely. But the precision is also a very helpful aspect of such models, because precision allows us to be demonstrably wrong about something. I’d argue that this is also an important part of why such models are disliked. Many people who’ve worked a bit with models hold formal models in quite low regard because they know the assumptions are driving many of the results. They are skeptical and prefer the models in their own minds. Those ‘mind models’ are much less specific, much more flexible and much less likely to actually generate testable hypotheses. It’s not that they are necessarily wrong – it’s more that they’re unlikely to ever be proven wrong. People who’ve not worked with models are also skeptical of models, and their mind models are even less specific and testable than those of the rest.
Here’s the thing: If you think that it makes good sense to be skeptical of models where assumptions are clearly stated beforehand, where parameters/parameter estimates are generated through a clear and transparent process and where limitations are addressed, then you should be a lot more skeptical of models where these conditions are not met.
Most people prefer vague models because they are more convenient. You’re less likely to be proven wrong; you’re less likely to take a stance that is at odds with the tribe; and if the model is general enough it will be able to ‘predict’ anything, making you think that you’re always right. Vague models are also often less computationally expensive to formulate.
Here’s one hypothesis from a model: ‘Immigrants from country X are 2.5 times as likely as non-immigrants to have a criminal record.’
Here’s another hypothesis: ‘Immigrants from country X are more likely to have a criminal record than are non-immigrants.’
Here’s a third hypothesis: ‘Some immigrants from country X have a criminal record.’
Here’s a fourth hypothesis: ‘Some people commit crime.’
Which one of these hypotheses has the greatest information potential, that is, the potential to tell us the most about the world? The first one, since if it is true, the other three are true as well. Which one is most likely to be considered correct when evaluated against the evidence? The last one.
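The nesting of these hypotheses can be sketched in a toy simulation (entirely my own illustration, with made-up numbers, not anything from a real model): draw hypothetical ‘worlds’ with random crime-record rates and count in how many of them each hypothesis holds. The more specific the claim, the fewer worlds it survives in – and the more it tells us when it does.

```python
import random

random.seed(0)

# Each hypothetical "world" has a crime-record rate for immigrants from
# country X and for non-immigrants. (All parameters here are invented.)
def random_world():
    return {"imm_rate": random.uniform(0, 0.2),
            "non_rate": random.uniform(0, 0.2)}

def h1(w):  # immigrants roughly 2.5x as likely (the most specific claim)
    return abs(w["imm_rate"] - 2.5 * w["non_rate"]) < 0.005

def h2(w):  # immigrants more likely than non-immigrants
    return w["imm_rate"] > w["non_rate"]

def h3(w):  # some immigrants have a criminal record
    return w["imm_rate"] > 0

def h4(w):  # some people commit crime (true in every world here)
    return True

worlds = [random_world() for _ in range(10_000)]
for name, h in [("h1", h1), ("h2", h2), ("h3", h3), ("h4", h4)]:
    share = sum(h(w) for w in worlds) / len(worlds)
    print(name, "holds in", round(share, 3), "of worlds")
# The more specific the hypothesis, the smaller the share of worlds in
# which it holds -- that is exactly its information potential.
```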
From an information processing point of view, having nothing but correct beliefs you are certain about is not a good thing. That’s a sign that your models are very poor and don’t contain a lot of information. If you never seem to be (/realize you’re) wrong, that’s a sign that you’re doing things wrong.
Sometimes the ‘models’ we make use of when evaluating evidence are of the variety: ‘I’d like X to be true (because Y, Z), so obviously X is true.’ Sometimes that’s the model you use when you reject the presented formal model with a beta estimate of 0.21 and a standard deviation of 0.06. This is worth having in mind.
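For what it’s worth, the quoted numbers are enough for a quick back-of-the-envelope check (the normal approximation below is my own addition, not something from the text): an estimate 3.5 standard deviations from zero is very hard to wave away on purely statistical grounds.

```python
from math import erf, sqrt

# Coefficient and standard deviation as quoted in the text.
beta, se = 0.21, 0.06

z = beta / se  # the estimate sits 3.5 standard errors away from zero
# Two-sided p-value under a normal approximation, using the standard
# normal CDF written in terms of erf.
p_two_sided = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
print(f"z = {z:.1f}, two-sided p = {p_two_sided:.5f}")
```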
On a related note, of course not all models are about generating hypotheses and testing them – some of them are rather meant to be used to illustrate certain aspects of a problem at hand in a simple and transparent manner. It’s always important to have in mind what the model is trying to achieve. That goes for the ‘mind models’ too. Are you trying to learn new stuff about the world, or are you just trying to be right?
I’ve been thinking about the stuff in this post on and off for a long time. I probably shouldn’t post this and I may still change my mind and pull it down later on.
Anyway, to function well in their daily lives, most people deceive themselves to some degree. They tell themselves that their work matters a great deal (/more than it does); that they make a (/much bigger) difference (/than they actually do); that they are smarter and more accomplished than they really are.
The deluded optimist looks for opportunities he wouldn’t have sought had he been more realistic. And the deluded pessimist misses options he might have had a shot at, had he been more realistic. If we’re thinking only about maximizing opportunities, it seems that systematic overconfidence/optimism is the strictly dominant strategy – at least if we don’t include costs in the equation. We can’t just ignore those, of course, because most people know that if you ask out a girl and she says no, it will hurt. The girl may not feel any pain, but the rejected suitor will. The interesting thing here is that whereas one could in theory say: ‘I should just ignore that it hurts and try finding another girl’, for most people an optimal strategy would have to include previous encounters and previous outcomes, because those previous events contain important information that should ideally be included in the decision-making process. A low-quality male who does not change his strategy after the first ten rejections will have a lower likelihood of finding a partner than will a low-quality male who decides to mostly target low-quality females after the first three rejections – although the expected quality of the former’s potential partner is higher than the expected quality of the latter’s. One could make some corresponding remarks regarding the female’s problem; a female who’s rarely approached should ideally have a lower rejection rate than a female who’s approached all the time.
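The stubborn-vs-adaptive comparison can be made concrete with a toy simulation (my own illustration; all the acceptance probabilities are made up): a low-quality male courting potential partners, where the ‘stubborn’ strategy keeps targeting high-quality partners and the ‘adaptive’ strategy switches to low-quality targets after three rejections.

```python
import random

random.seed(1)

# Invented acceptance probabilities for a low-quality male, by target.
P_ACCEPT = {"high": 0.02, "low": 0.30}

def courtship(adaptive, n_attempts=20):
    """One courtship run: returns the matched partner's quality, or None."""
    rejections, target = 0, "high"
    for _ in range(n_attempts):
        if random.random() < P_ACCEPT[target]:
            return target                 # matched
        rejections += 1
        if adaptive and rejections >= 3:  # update strategy after rejections
            target = "low"
    return None                           # never matched

trials = 10_000
stubborn_results = [courtship(adaptive=False) for _ in range(trials)]
adaptive_results = [courtship(adaptive=True) for _ in range(trials)]
print("stubborn match rate:",
      sum(m is not None for m in stubborn_results) / trials)
print("adaptive match rate:",
      sum(m is not None for m in adaptive_results) / trials)
```

With these (made-up) numbers the adaptive strategy matches far more often, while the stubborn strategy’s rare matches are, by construction, all high quality – the trade-off described above.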
Most people do take previous information into account to some extent, and this is, I believe, a huge part of why self-confidence is such a big deal for humans when it comes to figuring out who’s attractive and who isn’t. If you’re very self-confident, it’s most likely because you’ve been given reason to be; if you’re a male, the natural inference is that you’ve not been rejected very much in the past and have had success with attractive partners before – if you’re female, self-confidence suggests that you’ve been approached a lot, have had to say no to a lot of males, and thus can afford to be picky. Another thing to note is that it takes at least some experience to become self-confident; you can fake it if you’re inexperienced, but that’s not quite the same thing – and females are generally good at spotting fakers, because they have to be. Why do they have to be? Because if self-confidence is a very important variable when it comes to assigning value to a potential match, it becomes obvious that males will try to cheat and signal self-confidence even though they haven’t had a lot of success in the past. Females who couldn’t spot the cheaters had offspring with the low-quality guys in the past, so they had fewer offspring.
Low-quality males are telling themselves they’re high quality. High-quality males know they are high quality, and that they’re higher quality than the low-quality males who tell themselves they are high quality. And it’s not just ‘high quality’; every male around will try very hard, with a great deal of success, to convince himself that he’d be the best partner of all the potential partners the female would ever meet in a relevant time-frame. The more successful his self-deceit is, the higher-quality partner he will gain access to. There’s the truth, and then there’s the truth plus X %. At some point – some upper bound on X – the risk/reward relationship will become unfavourable to him given his risk profile (he’ll have less success than he would with a lower self-deceit level, because all females can see that he’s much lower value than he thinks and will put him in the faker category) – but if all other males have a positive X, an X of zero is strictly dominated. In expected terms the worst strategy a male could pick would probably be to try to be completely realistic about his options and not engage in any kind of (self-)deceit at all; a male who doesn’t even pretend to be higher quality than he is will have lower chances than most lower-quality males who pretend to be high quality.
Self-deceit helps on the dating scene. It helps when it comes to finding reasons for getting up in the morning. It helps when you’re telling your own story about how great you are and how every mistake you ever made was really somebody else’s fault.
I know I engage in a lot of self-deceit. We all do. But somehow I have the impression that I’m a lot worse at using it constructively than most people are. Instrumental rationality is all about using rationality to solve problems, to achieve goals. So failing to engage in the proper type and level of self-deceit is not instrumentally rational. But I still much prefer the current me to a me who thinks much more highly of himself – I really dislike that guy whenever I see him in myself. Self-deceit incidentally isn’t the only relevant variable here. Telling myself that I should be more dominant and aggressive would also likely improve my options. But I don’t want to be more aggressive or dominant, because that’s not who I am and it’s not who I want to be.
I find it frustrating that the person I want to be doesn’t seem to be able to have the options I want to have. Either I need to change who I am or I need to change what I want. I find changing what I want very hard.
In the real world there are a lot of areas where it is completely natural for a person not to know very much, if anything, about them. Humans are not born imprinted with knowledge about, say, the latest Greek employment figures, or how photosynthesis works.
Some people would say there’s a difference between the two. And that there are some things which are more important to know than others.
From a practical point of view, this is certainly true; knowledge about the finer details of the collapse of the Inca Empire will generally not be as useful when engaging in social interaction with most people as will knowledge about the latest soccer results or the latest political reform proposals (trust me on this one). People usually have a good idea which kind of stuff they’re supposed to know something about in order to best engage socially with others, and as long as other people play along, engage in the same kinds of conversations and search for the same kinds of knowledge, social interaction is relatively easy.
Most people who interact with people they don’t know terribly well engage in the same kinds of knowledge exchange dynamics. They know a lot about which subjects are kosher and which aren’t, and the pool of acceptable conversation topics is actually incredibly small once you start to think about it. It’s not that you need to know everything about all the acceptable topics, but if you’ve picked a few of them out and made an effort to obtain a bit of knowledge about them you should be okay. Social expectations play a large role here. It’s not considered bad form to bring up a subject the other party knows nothing about; what is considered bad form is to bring up a subject the other party ‘cannot be expected to know anything about’. The topics other people can be expected to know something about are drawn from a usually quite short list. Expectations regarding what kinds of – and even which specific bits of – knowledge you’re supposed to possess are to a large extent formed around the ‘acceptable conversation topics’. Given the expectations people hold, it is very important for an individual wishing to engage others socially to know at least something about some of the acceptable conversation topics, because if the individual doesn’t know anything about X he might suffer status loss or even social rejection. Given this, an individual will perhaps sometimes feel the need to signal that he knows stuff he doesn’t actually know. He may even feel the need to signal that he knows stuff it would be unreasonable of anyone to expect him to know, given the specific context. The specific context will often be considered irrelevant, because expectations are formed mostly independently of it, and the social expectations are considered common knowledge; everyone knows that if you’re heading into a political discussion, you’re supposed to be able to say a few words about, say, global warming, or immigration.
These are areas where you’re basically not allowed not to have an opinion.
Every bit of knowledge one obtains is another bit of knowledge not obtained. In order to engage in an acceptable level of social interaction, it may be necessary to obtain information about X which one would not otherwise have obtained. Such information should be considered a cost of the social exchange – a cost most easily minimized by trying to influence the expectations of the other people involved. Even though expectations are, as mentioned above, to a large extent independent of the individual, community expectations are not completely exogenous – what you expect others to know and be interested in may change their expectations in the long run. That is to say, rather than trying to save face by claiming to know stuff one doesn’t, it might be a strategy worth considering to instead let the other party know that one does not consider this area of knowledge as important or interesting as Y (‘…which is totally awesome because…’).
Over time I have met a lot of people who claimed to know stuff they clearly didn’t know – and my experience is surely far from unique. When you spend a lot of time in a social environment where people’s expectations about what you have to offer/what you know do not match what you actually have to offer/know, or in an environment where you feel it is very important to make a good impression, this is what you’ll sometimes get – people pretending to know stuff they don’t, and/or to be someone they’re not, because they dislike obtaining knowledge about X but would prefer not to incur a social cost from not knowing about X.
An interesting thing is that the sanction from being ‘overconfident’, or perhaps even a liar, will sometimes be smaller than the implicit sanction from not accepting the, again implicit, ‘acceptable/unacceptable topics’ framework. The first one at least plays the game, the second one doesn’t – and if you don’t, you need a good excuse.
I’m sure pretty much everyone has at least some notion about which kind of knowledge ‘you’re supposed to know’ to be ‘fit for social interaction’. But I also tend to believe that the best way to behave in this weird world is to act as if there isn’t. If adults asked as many questions as children do, people would know a lot more stuff and it would be a lot easier to engage others and find topics to talk about. It’s as if it’s not okay socially not to know stuff and openly display that you don’t know – and I hate that! Not knowing is the default state, and it’s unreasonable to expect people to know very much, compared to how much there is to know, or to expect preference homogeneity; i.e. that the costs incurred from obtaining knowledge about X are the same for everybody. It’s unreasonable also because it will sometimes give people an incentive to behave in a deceitful manner which will only harm both them and you.
I think it makes a lot of sense to deliberately try not to think of oneself as ‘well informed’ or ‘knowledgeable’ when engaging others. I’ve thought this way myself in the past, but I believe it’s the wrong way to approach matters. So what to do instead? Well, it’s simple really: Think of yourself as ‘curious’.
“what kind of stories should we be suspicious of? Again I’m telling you, it’s the stories very often that you like the most, that you find the most rewarding, the most inspiring. The stories that don’t focus on opportunity cost, or the complex unintended consequences of human action. Because that very often does not make for a good story.”
We use narratives to explain stuff. We need an explanation we can understand, and if there isn’t one, we will make one up. And we much prefer to believe stuff that is comfortable for us to believe is true. This goes for all areas of life, not just the ones one likes to think about. I’ve picked out a few examples, but you’re free to add to the list.
Non-smokers and non-drinkers will generally underestimate how hard it is for people who drink or smoke to stop. The convenient story for the non-smoker or non-drinker is about how people who smoke or drink are weaker people (and therefore less deserving). Or perhaps they are less smart, because they could have just never started in the first place. On the other hand, some of the people who smoke or drink a lot like to tell themselves that they are not addicted (because addiction often implies weakness in the mental model applied to the problem), or that they have just as much willpower as the non-smoker/-drinker, which would become obvious if the latter also smoked/drank as much as them. Notice that there may be multiple, perhaps conflicting, ways to construct a convenient narrative that makes you look good, not just one; it’s possible for you as a smoker to convince yourself that you’re not addicted and thus aren’t a weak person (‘only weak people become addicts’), and it’s also possible for you to convince yourself that you are addicted, but that the addiction means precisely that you’re not weak, ‘because if someone as strong and great as you can become addicted, everybody can’.
People who are not overweight will generally emphasize the importance of their own actions when explaining why they are not overweight, and downplay other factors, whereas people who are overweight will often be more comfortable thinking in terms of factors over which they have little to no influence (like genetics). So the person who is not overweight will end up telling himself a convincing and convenient story about how he’s not overweight because he’s doing all the right things, while disregarding other factors that may be quite important too. By telling the narrative that way he may think of himself as a better person than the people whom he thinks do not behave the way he does, and/or a better person than the people who do in fact behave in a similar manner, but have gotten different results from the diet and exercise regime and thus have ended up overweight. The overweight guy will often tell a completely different story, which is just as compelling and convenient to him as the other story is to the non-overweight guy; he’s overweight because of his genes, because of his metabolism, because of his big bones, or perhaps because of his job, which makes it hard for him to find time to exercise. He may think he’s better than the other guy because he works harder (otherwise he would have time to exercise), or he may think he’s better because he does not, he tells himself, judge people by their appearance. The more general story about the blameless victim vs the deserving winner can be applied to all areas of life; if people have done well, it’s always because of stuff they did, and if they haven’t done well, nothing they could have done would have made any difference. That is, this is the story most of them will tell you if you ask them, because that’s the story they tell themselves, and sometimes have told themselves for many years. (Things get more interesting when people can’t decide whether they’ve done well or not.)
Often when people engage in political arguments, they downplay the arguments against the position they are defending. And they like political positions which make them look more deserving, make it look obvious that they should have a larger share of the pie. If reality will not play ball that’s often not a problem in political debates; in politics reality is just what people can agree is true. So when arguing about whether the people I like (‘people (/who) like me’) deserve to be in the position they are in, you can claim ‘it’s because of X’ and as long as a lot of people agree with you then X is considered a valid explanation. Note that the most convenient story always has a bad guy, and that in politics the convenient bad guy is almost always the guy who disagrees with you. Note also that in all the narratives you tell yourself, you’re the good guy. And this is the case for everybody else too.
When people think about what motivations others have for doing the things they do, they will often be tempted to try to explain the behaviour of others in terms of reactions to their own behaviour. They will tend to go for explanations involving them first if they can make one such explanation make them look good. ‘If she’s behaving nicely towards me, it must mean that I’m a nice person’ or ‘she’s behaving that way because I deserve to be well treated’. If it’s hard to come up with such an implicit explanation that makes one look good one will be more likely to find and include ‘external factors’ in the model; if she was angry it was not because of anything I did, rather it was because her boss is a silly old man, or because she’s on her period. This model even works when she explains that her anger is caused by something you did: If she’s told you that her anger was because you didn’t clean the house yesterday, you’re quite likely to at least partially disregard that explanation and find another one that better fits the image of you as the perfect husband; either one that does not involve you at all, or perhaps one that does involve you but also ‘shows’ just how unreasonable she is (‘She is probably still mad about that $300 overcoat I bought without asking her first. I should be allowed to buy an overcoat for myself without asking that crazy lady first, dammit!’). And when people tell themselves such narratives one of the funny things is that they both know that she is right (he should have cleaned the house), but they still hold on to the self-serving explanations in order to justify their own actions though they know that
they probably should not be doing the thing the partner disapproves of. It makes sense though; we’re programmed to constantly look out for subtle ways to do a little less than our ‘fair share’, and you can’t cheat on others as well if you feel really bad about it afterwards and/or if you catch on to the fact that your behaviour might be over the line. Incidentally, chimps have strong views on fairness too.
Now, some of the stories humans made up in the past to explain the stuff we liked to explain back then don’t do very well today, when taking into account all the knowledge available to us at this point. Stories made up by people who died a long time ago still make up most of the religious texts around today, and you can tell if you read them. But it’s very often inconvenient for religious people to pick a different narrative – in fact often very costly – and once again ‘reality’ is to a great extent just what the people around you can agree with you is true. But people without religion do not do without competing convenient narratives; they will probably often tell themselves that they are smarter for not believing stupid things. Or they will tell themselves that it’s all because of their own actions and ideas that they don’t believe in the stupid narratives, rather than it being to a great extent just a matter of being born to the right parents in the right century in the right country, and being of the right gender (females are generally more likely to be religious than males).
It’s worth mentioning that not all self-serving stories are necessarily untrue or inaccurate. The degree to which such narratives are true will often depend upon your point of view, but this is rather beside the point; the point is that people tell these narratives whether they are true or not, and the accuracy of the narrative often doesn’t much enter the equation in the first place. Sometimes self-serving thoughts like the ones described in this post are not thoughts people actively engage their minds with; often they are not. Rather, they are perhaps best perceived as part of the OS. The convenient narratives are part of us and there’s no way to get rid of them. But thinking about them every now and then can’t hurt.
Just some random notes, I probably shouldn’t publish this but I decided to do it anyway even though it’s not very structured.
So, I started out just by thinking about a simple question: Why do people talk with/to each other?
Now, we all know that there’s no simple answer to that question. There are answers – many of them. Categories like information exchange and social bonding/social relations management probably cover many of the reasons, though there are others. Theoretically there’s probably a distinction to be made between conversations where people are very aware of what they want to accomplish and how the conversation can be expected to proceed (a conversation with a coworker about the new DHL standards, a board meeting with a 12-point agenda, a doctor’s conversation with a patient), and conversations where the goals are hazier and the expected duration much more uncertain. Many of the conversations where people would be uncertain, if asked directly, as to why they even engaged in them in the first place can arguably be said to have quite clear goals if perceived in a certain light; goals having to do with social relations management and bonding. If you find yourself in a situation where you don’t know why you’re talking, you’re probably doing it for reasons having to do with social relations management/bonding. And if you feel the need to ask yourself why you’re talking with the person with whom you’re talking (‘why am I even talking to this guy?’), you probably won’t be for long.
Conversations usually evolve over time because of interaction effects; new inputs are delivered along the way, shaping the direction of the conversation. Two conversations with roughly the same starting point can end up in very different places. It’s worth noting that the inputs supplied can be verbal or non-verbal, and people often underestimate the impact non-verbal behaviour may have on a conversation/social interaction.
Human interaction is too complex for it to be optimal for people engaging in conversations to always think hard about stuff like what to say and what not to say, or how and when to say whatever it is that (perhaps?) needs saying. Conversations proceed at a much faster speed than the human brain can process all the potentially relevant information, and so a lot of information gets excluded by default. Conveniently, we do not think much about the fact that there are a lot of things we don’t think about when interacting with others. Excluding a lot of information and ideas makes the communication more efficient, at least if measured in terms of words per minute or similar metrics. Body language can convey a lot of information fast, so people who are good at it (and good at reading it) will, ceteris paribus, be better communicators than people who are not.
Many conversations follow, at least to some extent, some basic scripts people have internalized. Most people know pretty well how to react when asked a question like ‘how are you?’ and they know the general direction in which a conversation starting in such a manner may be expected to proceed, just as they know what to say when a person shares the information that he recently got one day older than he was the day before. We often don’t think very much about the meta-aspects related to what to say in any given social situation, because if we had to do that all the time we couldn’t really do anything else.
However, even though both a lot of the stuff we talk about and the way we talk about it to a very large extent follow scripts, a lot of feedback still takes place along the way; you need to be aware at all times of whether the other person is following the script, and of which script is the right one to apply to the specific part of the conversation in question (is the secretary bringing up her weekend plans because she’s trying to tell you she can’t work overtime this Saturday, or because she wants you to ask her out?). Human behaviour is incredibly complex, but we’re much too used to all this complexity to ever truly notice it. When one starts to think about how conversations work, it becomes clear that there are all kinds of ‘crazy’ ways for people to break the script along the way: shouting loud inappropriate remarks in the middle of a sentence, turning your back on the person with whom you converse, asking a random question having nothing to do with the topic discussed, sitting down on the floor while the other person is talking, moving your elbows up and down randomly while the other person is talking, punching the other guy in the stomach… The fact that people don’t even think about how inappropriate it would be to just sit down on the floor while talking to a coworker at the watercooler is an indication of just how narrow the range of acceptable behaviour really is. But we don’t notice, because we don’t think about such things. Which I find interesting.
A well-known concept in game theory is the zero-sum game. I like to think of many arguments as zero-sum games, especially political and similar arguments. X and Y will start out with some different sets of arguments supporting their cause. The ‘winner’ of the argument will say that his set of arguments was better than that of the other party. Rarely will X and Y meet and discuss how to improve the argument sets of both X and Y. The idea is not to weed out bad arguments and replace them with good ones; the idea is to win, and that’s often easier to do with many arguments than with just a few. If X cedes the point that one of his arguments was not convincing, it will generally harm the cause of X and help Y win the argument.
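The formal meaning of ‘zero-sum’ can be made concrete with a minimal sketch (my own toy payoff matrix, not a model of any actual debate): whatever strategies the players pick, the payoffs sum to zero, so one side’s gain is exactly the other’s loss.

```python
# Hypothetical payoff matrix for an argument between X and Y:
# each entry is (X's payoff, Y's payoff) for a pair of strategies.
payoffs = {
    ("keep_all_args", "keep_all_args"): (0, 0),
    ("keep_all_args", "cede_a_point"):  (1, -1),
    ("cede_a_point", "keep_all_args"):  (-1, 1),
    ("cede_a_point", "cede_a_point"):   (0, 0),
}

# Defining property of a zero-sum game: payoffs always sum to zero.
is_zero_sum = all(x + y == 0 for x, y in payoffs.values())
print("zero-sum:", is_zero_sum)
# Ceding a point transfers payoff straight to the opponent, so neither
# side has an incentive to help improve the pooled set of arguments.
```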
Now one might argue here that human interaction would be more pleasant if people didn’t engage in ‘zero-sum conversation games’ such as the ones described above, but rather always tried to make human interaction positive-sum. In case you were in doubt, this is not where I am heading. The truth is that as long as there are surpluses of some kind somewhere, someone will try to grab part of that surplus if it is within that person’s reach. Organisms which behave that way have more children in the long run, and when it comes to human behaviour there’s a limit to how much culture matters. Another way to think about such ‘political arguments as zero-sum games’ is to think of them as a huge and important technical innovation, and a great improvement upon the kind of zero-sum games people engaged in before the advent of political debates as conflict-resolution mechanisms.
“Me: In my opinion it’s really hard to have interesting ideas if you don’t write them down. It’s much, much easier to spot flaws in your reasoning, to add complexity, to take account of -ll- if you write things down.
A friend: I quite agree
Me: It quickly became an argument for keeping my blog alive, back when I wrote a lot of stuff myself rather than leech off the ideas of others as is mostly the case now.
A friend: Why don’t you write more of your own ideas then?
Me: They are not interesting [...] I’d much rather share knowledge with other people than [my] ideas.”
I know I shouldn’t quote myself, nor should I quote a friend who has not even agreed to be quoted. But I thought I’d put that out there anyway, because this is probably something people should have realized by now. There are people who happen to be quite good at getting good ideas, good at thinking about stuff. I realized a long time ago that I am not one of those people, and that I would be wise to limit myself to quoting the ideas of the people who know how to get good ideas, and otherwise just keep my mouth shut. Or share data, which amounts to much the same thing. I sometimes fail and open my mouth anyway, and I do it because I like to think about stuff and I do it a lot. But I’m well aware that there are lots of people who are much better at it than I am, and that I really should try not to waste people’s time and humiliate myself in the process.
I know, but sometimes I just don’t care, so here’s something I’ve had on my mind for a while. I’m often asked ‘how I feel.’ We all know that question, and we all know how to answer it. Even a person like me is not unaware of the social conventions related to how you’re supposed to approach that question. So I usually answer ‘okay,’ ‘reasonable’, ‘not bad’ or something like that. It’s what people do.
But such questions always bother me a bit. There are two reasons. The first one is the rather obvious one that, well, really, most of the time I have no idea how I feel. I need to think about that question in order to answer it, and the amount of time I’d need to give any kind of semi-sensible answer is way more than the amount of time usually allotted to the purpose, given the social context. Perhaps my emotional states are not as readily available to me as they might be to some people. A related concern here is that it is of course very unpleasant to feel the need to answer a question to which you don’t know the answer, and to be placed in a situation where you’re very aware of the fact that you seem to be trying to guess the teacher’s password. This is a situation you generally try to avoid. The problem is perhaps exacerbated even further by the fact that when I actually do spend time thinking about how I’m feeling in other contexts, quite often it is an activity which is predicated upon the fact that I, well, do not feel good at all; and getting asked how you feel when this is the way things usually work can be unpleasant, because the question can easily remind you that you’re in fact not as happy as you’d like to be. And then it’s easy to mentally jump along to the question of why you’re not as happy as you’d like to be, and most of the time there are lots of good reasons why you don’t seem to have anybody to blame for this sad state of affairs but yourself.
But then you might go even further and argue that you do have happy moments sometimes, and that you’ve actually done some work on actively figuring out when they happen, as they happen – ‘this is a pleasurable moment’-type thinking – and what you’re doing when they happen, and this seems to help you and really there’s no good reason why you should not be having such a moment within a short amount of time and… Meanwhile, the person who asked the question is still waiting for an answer.
The other big reason why such questions bother me a bit is that I have no way of knowing if the answer even makes sense to the person to whom I’m responding, even if I do answer truthfully (which would require a complex and rather detailed answer). How do they define ‘feeling good/ok/not bad/reasonable’? I have never looked inside their heads or hearts, and I don’t know the emotional range they inhabit very well. Maybe my answer is completely meaningless to them. Do people have well-defined emotional barometers where you can just go have a look and see: ‘oh – so that’s how you feel, 37°, that’s interesting…’? No, they don’t. Even in the best of cases it’s hard to figure out whether the answer you give actually conveys the information you’d like to share. And the real world doesn’t deal in the best of cases, because I usually don’t answer truthfully, a fact I have no problem sharing here. I always have doubts, regrets and self-hatred bubbling under the surface, and I work on keeping those things far away from my own inner monologue; why in the world would I want to bring them out into polite conversations which take place outside my own head, with people who perhaps have no idea what they are getting themselves into?
I’m quite curious as to how people handle and understand their emotional states. Do people actually walk around knowing ‘how they feel’? I know I don’t, and I have a hard time imagining that many other people do. It would be nice if people settled upon a different casual conversation starter – most people who ask this question don’t really want to know anyway.
From the paper:
“Because we began by putting forward a theoretically derived hypothesis and calling its viability into question on the basis of experimental data, it behooves us to listen carefully to what that data has been trying to tell us and to draw together plausibly the various strands of evidence. The most parsimonious inductive explanation for our cumulative findings, we contend, is that automatic attitudes are asymmetrically malleable. That is, like credit-card debt and excess calories, they are easier to acquire than they are to cast aside. Thus, when people construe an object for the first time, their conscious fondness or antipathy for it is swiftly supplemented by an automatic positive or negative reaction. However, once people have acquired an attitude toward the object, attempts to subsequently undo it are differentially successful at different levels of the mind and lead its automatic component to lag behind its conscious one. Thus, Devine’s (1989) key prediction—that automatic attitudes will be generally be [sic] harder to shift than their self-reported counterparts — may be correct after all, not under the boundary conditions that we initially proposed but under a new set of boundary conditions that our data have subsequently suggested. [...]
We contend that automatic attitudes operate like rapidly established perceptual defaults: although they can initially be engendered by conscious cognition, they later become relatively resilient to its influence.”
So, there might exist a variety of perhaps even non-overlapping reasons why one might be interested in stuff like this. I’m interested because I believe that some of the automatic attitudes I have implicitly come under the influence of are attitudes which do not make me happy, which is why I feel that I should at the very least try to understand them better. Understanding might make it easier for me to successfully challenge them, though I’m not optimistic about that. I should specify that the automatic attitudes I have in mind here are perhaps of a somewhat different kind than the ones described in the study; but it doesn’t seem like a lot of stuff is written about how to overcome biological imperatives, and you need to take what you can get.
Human males my age – not only human males my age, but also human males my age – are ‘supposed to’ look for a mate to have children with, and if they can’t find one they are supposed to work towards gathering power and resources so that once someone is there to be found, they can compete more successfully with the other available males in the bidding war that will ensue, and perhaps win the right to have offspring. The male brain has not yet caught on to the fact that contraception has changed everything, in a way that means that power and resources no longer matter all that much when it comes to reproductive success. As Kanazawa put it in this paper; “men’s wealth still translates into their greater reproductive success had it not been for modern contraception, which men’s brain, adapted to the ancestral environment, has difficulty comprehending.”
To the Paleolithic brain, sex = offspring. The whole ‘offspring’ part is why sex feels good. Most (/non-ignorant?) males (/and females) know that the reason sex feels good is that sex is nature’s (/your genes’) way of tricking you into having offspring. Just as the reason chocolate cookies taste good is that they contain a lot of fats and sugars, i.e. calories; and calories are good if you want to avoid starving to death, a risk our ancestors spent a lot more time worrying about than we do. But whereas people are quite open about how it’s probably a bad idea to eat too many cookies, because it will make you fat and unhealthy, and thus people do not eat all that many chocolate cookies, there are, to put it bluntly, certainly far fewer people who seem to be open about drawing the conclusion that partnership and children are not worth it and that they ‘refuse to be slaves of their biology’. At least in that area of life…
I have this strange feeling that a lot of male (/and female) behaviour today might look completely crazy to someone who’s not as invested in the underlying ideals of the Paleolithic Era as are (all?) (/fe)males today. For a male, it looks like this: ‘The way to be happy/the good life is to find a fecund-looking female, court her and then have sex with her a lot, have babies and provide for them, die.’ A slightly more elaborate version would also include ‘convince your partner on an ongoing basis that you’re the best male available (by doing all kinds of weird things that signal to the female that you are there for the long haul, even if you’re not – and by golly, the modern economy/-world has certainly increased the number of insane-looking jump-through-the-hoops signals a (self-identified?) high-quality female can demand of her partner..)’, as well as ‘try to cheat on her as often as you can get away with – so that you can have more babies – but try your best to hide the cheating from her so as not to incur significant switching costs.’
The bidding wars these days in the partnership setting relate far more to the quality of the offspring than to the number of offspring. The Paleolithic fecundity markers are more or less completely out of whack with reality today. Today it is mostly preferences – which are to a very large degree driven by socioeconomic factors, religion, culture and societal norms more broadly – and not biological factors (waist-hip ratio etc.) which decide how many children a female is likely to/willing to have. Kanazawa (see above) found that resource access is pretty much irrelevant too. However, the lives of most males and females continue to follow the age-old recipe, to some degree. To be happy you need to find a mate and have children. For a male, in order to get the best possible female you need access to resources, you need power. So you need money, which means that you need to work hard, both to obtain access to resources and incidentally also to actually convince the high-quality female that you’re the most suitable partner available. It’s not that these ideals seem completely true to everybody; it’s more that when you defend a different version of the good life, my impression is that you will most often have a hard time making that defense sound credible, even to yourself. People often reject some of the defining characteristics of the traditional partnership equation, like the idea that a partnership necessarily needs to involve children, that it makes sense to look for ‘the one’, that romantic relationships need to involve members of both genders, or perhaps that a monogamous relationship is the best way to deal with the romantic stuff in your life; but how many people openly reject the idea of having a relationship as a major life goal in favour of the alternative in the (‘semi’…, see my remarks below regarding the commitment issues here)-long run, for no other reason than that they think that they will probably end up happier in the long run if they do?
Surely only a person who has no chance in the dating market would do such a thing, right?
I assume the standard narrative will not work for me. It seems like too much hard work that you just know you’re only undertaking because your Stone Age brain is trying to trick you into undertaking it, just like it’s trying to trick you into eating too many chocolate cookies – and with not too dissimilar consequences. I will probably not be willing to work hard enough to find a long-term partner who would not reject me in favour of someone more suitable, given the amount of competition. And if I do find someone, I will still have major problems trusting her, because I’ll assume that if she follows the standard narrative here, she’ll also follow the Paleolithic recipe later on. Which tells me that she’ll be more likely than not to leave me when I start getting really sick. Yeah, I may not get really sick, and a potential partner may not leave even if I do, but in expected terms this needs to be taken into account; as does my loss aversion at that point.
So why was I reading the paper again? Because it seems to me at this point that the smartest thing for me to do would be to rewire my brain somehow, to make it like stuff it currently does not like as much as would be optimal, and to dislike stuff it currently seems to enjoy thinking about. To let go of a lot of the counterproductive narratives which were never about people like me in the first place. I’m perfectly well aware that this is all about rationalization, and my Paleolithic mind has views about that stuff too. Given what I’ve previously said about the Stoics, naturally I’m not very optimistic about this whole endeavour. But it seems worth trying. Maybe my mind can actually outsmart my Paleolithic mind. In the eyes of most females, I probably won’t be proper partner material for some time (because of ‘resources, power’) anyway – at least not for the kind of partner my Stone Age brain is trying to convince me I’d like to have. I know about the assortative-mating aspects of the college/university experience, but I also know that that part of the university experience is probably not likely to be relevant for me. Either way, I hope that I can obtain a state of mind such that my period of thinking about dating and similar stuff is over – at least for the time being. The only way not to lose the bidding war is not to play or think about playing.
Incidentally, I ought to post a few remarks here about how this post relates to my commitment to change: I was writing this and publishing it here at least in part to more efficiently commit myself to this change. I know how strong ‘the opposition’ (‘the Paleolithic mind’ and all its friends and allies…) is, and I might give up on this idea before long. But writing this here cannot hurt my chances much, and I’ve been thinking along these lines for a while now. I’ve found that it’s much easier to (knowingly) ‘rationalize’ not looking for a partner than it is to actually be perfectly okay with not doing it. And if it turns out to be impossible to obtain that mind state, it seems suboptimal in most scenarios to not be dating. I’m not trying to commit myself to not dating/finding a girlfriend; I’m trying to commit myself to thinking that I can be perfectly happy even though I don’t. It’s the thoughts in my head, not the behaviour they engender, which are central here. Interestingly enough, if I’m successful it also probably means that long-run credible commitment to this state of mind is impossible (if preferences such as these can actually be changed over time, such changes can also be reversed later on), which should if anything make commitment in the short run easier, rather than harder, to achieve.
So, imagine this scenario. You live in a (parallel universe/future world/space setting/…) where people know how long they have to live; you know the exact date that you’ll die. It’s quite important to note early on that this date cannot be changed by any future events outside the models below. You have X years left of your life when you get the offer.
You’re now presented with the option, A, of living ‘twice as long’, in the sense that you will have 2*X years left of your life if you pick option A. There’s a downside to the arrangement: you have to double the amount of sleep, Z, you get per day (/time period).
Let’s plug in some numbers just for fun. Say you’re 20, you know you’ll die at the age of 70 (X=50), and you can at the current point in time expect to get (Z=7) hours of sleep/day on average during your life. If you pick A, you’ll live to the age of 120 – you’ll gain 50 years – but you’ll have to sleep 14 hours/day. If you decide not to take the offer, you will have 310,250 hours [(24-7)*365*50] left in a conscious (non-sleeping) state and you will die in 50 years. If you take the offer, you’ll have 365,000 hours [(24-14)*365*100] left in a conscious (non-sleeping) state and you’ll die in 100 years. In this case, you both live longer and have more hours available to you to do stuff. But what about a 60-year-old who sleeps 9 hours/day and can expect to live to the age of 85? In that case, A will give you 109,500 hours and 50 years, whereas the alternative will give you 136,875 hours but only 25 years. Looking at A more generally, it seems clear that the older you are, the fewer years you gain and the worse the tradeoff looks, because the natural/baseline sleep requirement is increasing in age. At which points in people’s lives would this look like the most interesting proposition? Would it necessarily be the case that ‘the younger, the better’ – what about, say, sociological factors? How big an impact will the decisions of people close to the decisionmaker have – would the longevity of individuals in this model depend on social ties/skills; and if so, how?
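The arithmetic above is easy to check with a few lines of Python (the function name is mine, used only for illustration; a 365-day year is assumed, as in the text):

```python
def conscious_hours(years_left, sleep_per_day):
    """Total waking hours over the remaining lifespan (365-day years)."""
    return (24 - sleep_per_day) * 365 * years_left

# The 20-year-old: X = 50 years left, Z = 7 hours of sleep per day.
baseline = conscious_hours(50, 7)          # 310,250 hours over 50 years
option_a = conscious_hours(2 * 50, 14)     # 365,000 hours over 100 years

# The 60-year-old: X = 25 years left, Z = 9 hours of sleep per day.
baseline_60 = conscious_hours(25, 9)       # 136,875 hours over 25 years
option_a_60 = conscious_hours(2 * 25, 18)  # 109,500 hours over 50 years
```

For the 20-year-old, option A wins on both counts; for the 60-year-old, it trades roughly 27,000 waking hours for 25 extra calendar years.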
Interesting things happen if you change A and make different restrictions on the choices offered; for instance, what happens in a model, A’, where you gain one hour for each extra hour you sleep? Basically this is just saying that you can decide freely when to live your life (looking forward in time), but not how long you’ll actually live. How would people deal with this choice? What if you made the sleep requirement an increasing function of the years gained and further imposed the restriction that people could at most sleep for 23 hours/day? (You have to add some sort of restriction like that, or it starts to get really weird.) Like, say, model B, in which you’d gain the first 10 years by just sleeping 1 extra hour/day, whereas the next decade would cost you an additional 2 hours of sleep per day – at which point would people think that the arrangement maximized their lifetime utility, and how would this maximum depend upon the choices made by the people closest to them? Note that in model B, the 20-year-old guy from before would (still, just like in A) be able to live for another 100 years, but he’d have to sleep 22 hours per day to do so; and he’d spend much less time awake in this case than if he did not choose this option.
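A minimal sketch of model B as I read it – decade k of extra life costs k additional hours of sleep per day, cumulatively (the function and its name are my own illustration, not anything from the text's sources):

```python
def model_b_sleep(baseline_sleep, decades_gained):
    """Daily sleep requirement in model B: decade k of extra life costs
    k extra hours of sleep per day, so n decades cost 1 + 2 + ... + n
    extra hours on top of the baseline."""
    extra_hours = decades_gained * (decades_gained + 1) // 2
    return baseline_sleep + extra_hours

# The 20-year-old (baseline 7 h/day, 50 years left) who buys 5 extra
# decades lives another 100 years in total, but has to sleep
# 7 + (1 + 2 + 3 + 4 + 5) = 22 hours per day.
required = model_b_sleep(7, 5)  # 22 hours/day
```

This reproduces the 22 hours/day figure from the text for the 20-year-old who doubles his remaining lifespan.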
In the above models, the cost of getting to live longer than you’d otherwise do is ‘sleep’, but it could be other things as well. In the real world, you have a lot of people who stay alive long after their minds are gone – before my grandfather’s mind had gone completely, he spent something close to 23 hours a day in an at least ‘semi-conscious’ state, paired with a few clear moments during the day. You also have cancer patients who spend the last weeks or months of their lives either writhing in pain or simply knocked out by painkillers. In these cases, what are ‘we/they’ optimizing? In the real-world setting, there’s also stuff like physical exercise, which might add half a decade or more to your life – if you’re willing to incur the cost of actually getting sweaty/taking time out of your calendar and/or sleeping more (recovery).
Now imagine another model, C, where what is on offer is not years gained but rather hours awake. It’s the flip side of the first models here. In this model, if you’re willing to drop 2 years of your life you can cut down sleep by 1 hour/day in the years you have left. Say you’re that 20-year-old guy again: he can cut at most 7 hours of sleep, which would leave him with 36 years left. The cost imposed is made up for by an additional number of total hours awake while alive: for instance, in the baseline scenario the guy gets 310,250 hours awake, but if he opts to die at the age of 68, he’d get 315,360 hours awake. Given this specification of the model, the total number of hours awake is maximized at the point where he dies after 42 years at the age of 62, sleeping 3 hours/day during his remaining life (hours awake is a parabola; giving up even more years will decrease his total number of hours awake) – this will give him 321,930 hours awake. Would some people choose this model? If you set it up like this, probably not many. But the funny thing is that, given how people behave around other variables which are also well known to impact both longevity and subjective utility in not too dissimilar ways (smoking, alcohol, drugs), the obvious answer should be yes. People make not all that dissimilar tradeoffs all the time without even thinking about it.
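Model C's numbers can be verified with a small brute-force sketch (again, names are mine):

```python
def conscious_hours(years_left, sleep_per_day):
    """Total waking hours over the remaining lifespan (365-day years)."""
    return (24 - sleep_per_day) * 365 * years_left

# Model C for the 20-year-old (50 years left, 7 h sleep/day):
# each hour of daily sleep cut costs 2 years of remaining life.
totals = {cut: conscious_hours(50 - 2 * cut, 7 - cut) for cut in range(8)}

best_cut = max(totals, key=totals.get)
# Cutting 4 hours maximizes waking time: die after 42 years (at age 62),
# sleeping 3 h/day, for 321,930 hours awake - versus 310,250 at baseline
# and 315,360 if only 1 hour is cut (dying at 68).
```

The quadratic shape is visible in the dictionary: waking hours rise up to a 4-hour cut and fall again beyond it, exactly the parabola described above.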
Also in some of the alternative universes in which one might contemplate making these offers, what is here called a ‘sleep requirement’ is there universally known as ‘sleep dependency’; a chronic, debilitating and incurable disease which causes recurring long-term periods of unconsciousness.
So, every now and then you come across one of these ‘many of my particular habits would fit in well with what I believe 18th-century-style living was like; maybe I’m living in the wrong century’ type posts. I just read one of them – which is why I’m posting this now. I’m not arguing people aren’t different, and as thought experiments go, I guess you could do a lot worse. But here are some reasons why people perhaps don’t really compare apples to apples when engaging in mind games like these:
i. Most people when engaging in these thought experiments seem to think that if they were to live in the 19th century, they’d be a nobleman or some such. Problem is, most people living in the 19th century (and earlier on) were peasants. Peasants who hadn’t even heard about tractors. Maybe they’d heard about Monsieur the Marquis, but that’s not quite the same thing. If you weren’t a peasant, you were probably a servant. Doing hard labour most of the time for very little pay.
ii. ‘I read a lot – and I mostly read the classics, so it’d be awesome to live back in the day when Dickens or Shakespeare lived!’ Guess what: if you go back 200 years, most people either couldn’t read or read very badly. They also couldn’t afford books, which was what got that whole library thing going. Because even if you could read, books were expensive. So was everything else. And if you’d like to read Mark Twain and you were living in Russia, good luck! Also, hardly anybody but those belonging to the nobility and the clergy spoke a second language. If you were to pop up in a relatively small linguistic region (like Denmark) in 1820, odds are no translations of what we now consider major contemporary works would even be available to you. And if a book was not in stock, odds are you could not afford to get your hands on it.
iii. Spare time. A lot of it was spent doing stuff that wasn’t a lot of fun, like washing clothes without a washing machine. Also, for many people there wasn’t as much of it, on account of that ‘working 12 hours/day doing backbreaking labour in the sun’ thing. Further, there wasn’t a lot of stuff to do if you actually had time for yourself. Reading classics probably isn’t as much fun if you have to do it in a small smelly hut with poor lighting late at night after a long day’s work.
iv. Travels! How far can you go in a horse carriage compared to a modern airplane? How long would it take you to go to Brazil for a vacation if you were living in Europe (ignoring the fact that you’d never be able to pay the ticket)? Go back two centuries and you’d probably find that a majority of Danes never left the country during their entire lives, perhaps but for a trip or two to North Germany or Sweden.
v. Modern medicine. Likelihood of not dying in childbirth. Probability of surviving to the age of 60. Cancer was a death sentence, but so were lots of common bacterial infections, like those causing tuberculosis or pneumonia, because they were equally untreatable. Also, remember that living even a decade after you’ve retired is a new thing, almost unheard of before the 20th century. I’ve previously posted this:
In, say, 1820 people didn’t work to the age of 50 and then retire until they died at the age of 60. Most of them probably died within weeks or days of no longer being able to work (…if they were lucky?). Being a nobleman was a bit different, yeah, but most people weren’t noblemen. And health is not just about not dying – imagine how much fun it was to go to the dentist in the year 1850. Eyeglasses and that kind of stuff have also come a long way (both in quality and price). Over-the-counter pain medications. Hearing aids.
vi. Mobile phones. Or maybe just phones. Internet. TV. Cars. Central heating. Also, remember how easy and cheap it was to move tropical fruits like bananas thousands of kilometres back in 1850? Indoor plumbing. Clothes (hint: there’s a difference between what people actually wore in 1870 and what they wear in a film set in 1870. Also, what do you think a top-of-the-line running shoe looked like in 1845?). Or, going back to the travel thing, what do you think the roads looked like – would it have been fun to travel hundreds of miles on them in a horse carriage? Credit cards.
vii. If you are a female, your life would have sucked big time. Going back just 150 years, even in a lot of the places that today treat females quite well, a female would not even have had the ability to own property – it would belong either to her husband or to a male guardian, like her father. Arranged marriages are still widespread today in many regions of the world, but they were also pretty much the norm in most developed societies a few hundred years ago, so you can also forget about having much of a say in whom you’d marry if you were to go back to 1800 and start a life there. It would also be very difficult for you to divorce the bastard after he’d started beating you or perhaps had taken up drinking (/and)or gambling. Birth control? There’s no such thing. And there’s also no such thing as ‘marital rape’ anywhere in the legal statutes. Add the high likelihood of dying in childbirth.
Other people who would probably also have a hard time living a really nice life a couple of centuries ago: homosexuals, atheists, people who like to make fun of a king and queen wearing ridiculous clothes, modern females who’d like to go topless at the beach, people who’d prefer not to go to church every Sunday, people with black skin (and why do so many of these people assume they’d end up as westerners? Maybe the idea of living in Egypt in the year 1820 isn’t all that compelling, but millions of people did),…
The past isn’t all that it’s cracked up to be. Because of historians, it isn’t even what it used to be.
i. Perhaps most ‘impostor syndrome’ sufferers are really impostors who do not suffer from impostor syndrome. Convoluted? Well:
“Social psychologists have studied what they call the impostor phenomenon since at least the 1970s, when a pair of therapists at Georgia State University used the phrase to describe the internal experience of a group of high-achieving women who had a secret sense they were not as capable as others thought. Since then researchers have documented such fears in adults of all ages, as well as adolescents.
Their findings have veered well away from the original conception of impostorism as a reflection of an anxious personality or a cultural stereotype. Feelings of phoniness appear to alter people’s goals in unexpected ways and may also protect them against subconscious self-delusions.
Questionnaires measuring impostor fears ask people how much they agree with statements like these: “At times, I feel my success has been due to some kind of luck.” “I can give the impression that I’m more competent than I really am.” “If I’m to receive a promotion of some kind, I hesitate to tell others until it’s an accomplished fact.”
Researchers have found, as expected, that people who score highly on such scales tend to be less confident, more moody and rattled by performance anxieties than those who score lower. [...]
In short, the researchers concluded, many self-styled impostors are phony phonies: they adopt self-deprecation as a social strategy, consciously or not, and are secretly more confident than they let on.
“Particularly when people think that they might not be able to live up to others’ views of them, they may maintain that they are not as good as other people think,” Dr. Mark Leary, the lead author, wrote in an e-mail message. “In this way, they lower others’ expectations — and get credit for being humble.”
In a study published in September, Rory O’Brien McElwee and Tricia Yurak of Rowan University in Glassboro, N.J., had 253 students take an exhaustive battery of tests assessing how people present themselves in public. They found that psychologically speaking, impostorism looked a lot more like a self-presentation strategy than a personality trait.”
My emphasis, and here’s the link. The interesting thing to me is why exceeding expectations for a given accomplishment level is status-enhancing compared to doing worse than expected. Anyway, this is one of the many ways in which people who pretend to be humble brag – by downplaying expectations they increase the status associated with any given accomplishment level. Very few people would consider it a status-enhancing move to employ a strategy aimed at making expectation-forming mechanisms better match reality in the long run.
Calvin: “I say it’s a fallacy that kids need 12 years of school! Three months is plenty!”
Calvin: “Look at me. I’m smart! I don’t need 11½ more years of school! It’s a complete waste of my time!”
Hobbes: “How on Earth did you get all the way to the bus stop with both feet through one pant leg?”
Calvin: “I fell down a lot.”
Calvin: “…Why? What’s your point?”
Hobbes: “Nothing. I was just curious.”
Calvin: “Look at all these ants.”
Calvin: “They’re all running like mad, working tirelessly all day, never stopping, never resting.”
Calvin: “And for what? To build a tiny little hill of sand that could be wiped out at any moment! All their work could be for nothing, and yet they keep on building. They never give up!”
Hobbes: “I suppose there’s a lesson in that.”
Calvin: “Yeah … Ants are morons. Let’s see what’s on TV.”
Calvin: “Tigers don’t worry about much, do they?”
Hobbes: “That’s one of the perks of being feral.”
Calvin: “I’m not having enough fun right now.”
Hobbes: “You’re not?”
Calvin: “I’m just having a little bit of fun. I should be having lots of fun.”
Calvin: “It’s Sunday. I’ve just got a few precious hours of freedom left before I have to go to school tomorrow.”
Calvin: “Between now and bedtime, I have to squeeze all the fun possible out of every minute! I don’t want to waste a second of liberty!”
Calvin: “Each moment I should be able to say, ‘I’m having the time of my life right now!’”
Calvin: “But here I am, and I’m not having the time of my life! Valuable minutes are disappearing forever, even as we speak! We’ve got to have more fun! C’mon!”
[Calvin and Hobbes start running away]
Hobbes: “I didn’t realize fun was so much work.”
Calvin: “Sure! When you’re serious about having fun, it’s not much fun at all.”
When I was a child, I sometimes felt like Calvin does in that last strip. I never do anymore; I guess that’s part of growing up. Reading a strip like this once you have grown up is a good way to be reminded that this is something you’ve probably lost forever. I have read a lot of Calvin and Hobbes over the last couple of days. I really love that comic, but sometimes reading it really hurts. Some of it is a lot deeper than it lets on.
I tweeted this, but in case you missed it: Khan Academy has now added Art History to the list of subjects covered. 300 videos of it. I don’t know how many of my readers have an interest in that stuff (I don’t), but if you do – go knock yourself out! They write in the blog post that: “we are incredibly excited to push the frontier on freely available content in the Arts and Humanities.” And I’m excited about that too. People really should not be paying a lot of money for this kind of stuff. Maybe if it’s available for free online – and presented at a site including other stuff as well, such as mathematics, physics etc. – more young people will start to realize that…
So, let’s say you think policy X is optimal and policy Y is not. Or perhaps religion X is true and religion Y is not. Or you know something about subject X and you think you’re right, even though other people disagree. Now, if you’re like most people, you haven’t taken a closer look at the data.
Not necessarily, mind you, the policy data or the data supporting or questioning the religious ideas. Most people use some form of this type of data in their arguments, perhaps not so much because they find the data convincing but rather because they think they need to justify their beliefs somehow; and if you say that ‘policy X will result in more poor people’, or something like that, odds are that the added information makes your position look more convincing to the opponent than if you chose not to say it. But ‘unemployment will go up 2.4% if policy Y is implemented’ is not the kind of data I was thinking about here. I was thinking about the data on who thinks what. Background variables. Do people who think X have stuff in common which might explain why they think the way they do? It’s an important part of understanding the subject – if your age or gender affects your opinion on the subject matter, disregarding those factors when explaining why you think the way you do leads to a potentially huge omitted-variable bias. In short, it can cause you to deceive yourself about which factors have been important in the formation and development of your views. You think that you think X because of A and B (‘unemployment will go up 2.4%’); but really it’s more a mixture of A, B, C and D.
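The omitted-variable bias mentioned above is easy to demonstrate with a toy simulation (a minimal sketch with made-up numbers, not real opinion data): if your opinion really depends on both A and C, and A and C are correlated, then regressing opinion on A alone inflates A’s apparent importance.

```python
import random

random.seed(0)
n = 10_000

# True model: opinion = 1.0*A + 2.0*C + noise, with A and C correlated.
a = [random.gauss(0, 1) for _ in range(n)]
c = [0.5 * ai + random.gauss(0, 1) for ai in a]          # C correlates with A
y = [1.0 * ai + 2.0 * ci + random.gauss(0, 0.1) for ai, ci in zip(a, c)]

def ols_slope(x, y):
    """OLS slope of y on x: cov(x, y) / var(x)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Omitting C biases A's coefficient: the expected slope is
# 1.0 + 2.0 * cov(A, C)/var(A) = 1.0 + 2.0*0.5 = 2.0, not the true 1.0.
print(round(ols_slope(a, y), 2))
```

The regression attributes C’s effect to A because A is the only correlated variable it can see – which is exactly what happens when you explain your views by the employment figures alone.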
People make arguments constructed like this: I think/like/prefer X because Y, where Y is some variable that pertains somewhat to the validity of the arguments under evaluation. Like, say, unemployment. Maybe I think the other guy’s argument is faulty or incomplete. Perhaps A (‘taxes’) is more important to me than B (‘environmental safety measure Q’). On net, the amount of supporting arguments in favor of X is higher than the amount of arguments in favor of Y. Things like that.
Here are some other things you might say in an argument – I don’t think most people bring up stuff like this very often, and when they do it’s mostly the characteristics of the opponent in the argument that get the attention. Bringing up this kind of stuff in an argument can range from being considered irrelevant to the matter in question to being considered an unjustifiable attempt to smear the opponent. The funny thing is that variables and related inferences like the ones below sometimes have extremely high explanatory power when you want to estimate what individual A thinks about subject X. We know this stuff matters a lot, but people really like to pretend it doesn’t, and it’s often considered cynical or perhaps downright rude to bring it up in conversation. Of course none of these will have 100 percent explanatory power either, so I urge you not to reject arguments like these out of hand because they only explain part of the variation in the data – think of them as variables you might decide to estimate in an econometric model while trying to explain, say, the distribution of the opinion variable Z:
‘I think X because my mother and father had an academic education.’ ‘My parents (priest/teacher/big brother) told me X and I’ve been taught by them not to question their authority.’ ‘Because I was born in country C instead of country D.’ (related – articles like this one are part of why I keep coming back to tvtropes even though I tell myself not to) ‘Because I was born in the year XXX instead of the year XXY.’ ‘Because I have a girlfriend and a child.’ ‘Because I’m XX years old instead of XY years old’ – or a more specific example: ‘Because I’m 55 and policy X will benefit me personally.’ ‘Most of my friends think X is better/true.’ ‘If I support policy X I will obtain a higher status among my peers, even though at a cursory glance it might look like policy X will hurt me personally.’ ‘Supporting (/cause) X makes me feel special and I like to feel special.’ ‘Because I’m (fe)male.’ ‘Because I like my job and have an optimistic frame of mind.’ ‘I spent a lot of time thinking about these things because I derive status from winning arguments because I think it makes me look smart. If the other guy is perceived to be right and win the argument I won’t look smart.’ ‘I haven’t really thought about this at all and I don’t know what to think, but I’m supposed to participate in arguments like these and provide an opinion so I’ll just say X because it’s the first thing that popped into my mind when they asked me. Also, most people I care about seem to support X.’ ‘I have to support Y because A supported X and I don’t like/trust A’s.’ ‘People with a high education and income tend to believe/support X so if I support/believe X my status will increase.’ ‘I heard argument X before I heard argument Y.’ ‘A supports Y. If I support X then A will become offended and an unpleasant situation might arise. I will therefore support Y.’
Part of why people don’t look at data like this is that it’s often impossible to come by in specific cases, and it’s usually very difficult to quantify effects like these. There’s a lot of heterogeneity as well when it comes to the impact of specific variables on individuals, and you easily risk committing the ecological fallacy without noticing if you try to include variables like these in your model of the opinion-forming mechanism of your opponent in a debate. Maybe the inclusion of such variables does not really make matters clearer, perhaps the opposite; perhaps some of the included variables are irrelevant. Do I think X because the cute girl in the lab thinks X, because my parents disagree, because my friends who introduced me to the subject all think X, or because of the latest employment figures? Who knows? But we like to pretend that we do know, and that our motives are pure – only the employment figures matter. If somebody concedes the point that that stuff also matters, then even though there’s an effect it still isn’t something important that should merit our attention; quite the opposite, we ought to focus on the employment figures. An interesting thing is also that in some cases it’s very easy to come by the numbers, and even then this stuff tends to be ignored. For example, 90% of all Egyptians are identified as Muslim, so if you grow up in Egypt there’s a very high likelihood that you’ll be born and raised by people who think the Muslim religion is the ‘true one’ – whereas if you’re born in the US there’s something like a less than 1% chance that you’ll be born and raised by Muslim parents, and a much, much higher chance that you’ll be born and raised by people who consider themselves Christians. There’s a very high correlation between the religious views of children and those of their parents.
I tend to think that people who spend time thinking about this kind of stuff are usually not much harder to deceive than people who do not. We’re all rational when it suits us, but when that’s the case is most often not something we spend much time thinking consciously about. Most people pretend to be rational when you question their rationality by bringing up ‘the other stuff’; some are just better pretenders than others.
Not a lot of time spent developing these ideas, just some things that popped into my mind.
i. Most people like living their own lives less than they’d like living the lives of others. That’s why most of them spend a not insignificant amount of the time they have more or less complete control over (leisure) watching made-up people’s lives and their progress – or reading about them in books. A big part of why TV soaps and fictional accounts of made-up people’s lives are very popular is that most people have a strong wish that they were living some other person’s life, a life far more interesting than their own. Because, face it, most people’s lives aren’t that interesting. And even for people who’ve done very well for themselves, reality can’t compete with fantasy. Everybody implicitly knows this, and when we consider societal norms we usually find that taking the fictional stuff too seriously is considered immature, bordering on childish – but strangely enough, spending quite a bit of time in fictional worlds is not. That’s interesting: it’s okay to try to escape reality on a regular basis, but only if you’re not too serious about it.
ii. People are extremely good at coming up with plausible-sounding reasons for not parting voluntarily with their money. When I say money, people just think ‘money’. But money is a claim on resources. And in a biological evolutionary framework resources really matter, big time. A big part of most people’s moral philosophy is stuff that they make up as they go, or perhaps their grandparents did. Their ideas about what is moral usually turn out to be ideas that make them look good and make it okay for them not to part with their resources. Perhaps the ideas that make it through even make it okay for them to cheat others – like the guy on the right:
That’s because other people (and organisms – this process has been implicitly going on since the time before sexual reproduction) have tried to coax and cheat them for millions of years. When your date demands that you pay for her dinner, she’s engaging in the latest of a very long series of battles over limited resources between the sexes.
iii. When people think about major threats to humanity (perhaps not extinction risk, most people don’t give that one much thought – but at least major risks), most people either think in terms of environmental parameters (climate, asteroids) or in terms of intraspecific competition (we’ll all kill each other in a nuclear holocaust). We like to think that humans are really important, and we like to think that we’re important enough for other life-forms not to matter all that much in the big picture; we like to think that humans are by now beyond the point where interspecific competition even matters. The funny thing is that a disease like smallpox alone was responsible for an estimated 300–500 million deaths during the 20th century – a death toll high enough to wipe out the entire human race just a thousand years ago. Roughly a third of the world’s population has been infected with tuberculosis. People who think we don’t still compete with other lifeforms all the time don’t think big enough – or rather ‘small enough’, as it were.
So, the other day I had this idea while walking home from the grocery store; I considered it quite profound at the time. Now I don’t really know anymore – well, it probably isn’t, but maybe I ought to post some stuff about it anyway (I’ve not posted much reasonably good stuff in a long while, as the declining(?) number of readers has surely had no problem noticing).
Anyway, first an observation: People invest a huge amount of effort, time and sometimes money in ideas about the world which have few if any outside consequences whatsoever. I’m talking about religion. I’m talking about politics. Some people spend hours, days, even years or decades trying to refine their theories, their thoughts about how everything ‘ought to be’ in the ideal world that is never to be. They do this even though their opinions really don’t matter much in the big picture, and even though what they think ‘ought to be’ is completely irrelevant to what is; no single person ever decided a big election, and everybody knows that (or do they? – see below). In national elections the mere vote-counting process matters far more to the outcome of the election than the opinions of individual X. Yet people keep voting, and political scientists and public choice people have set up some smart models to try to explain what’s going on (Downs, Tullock, Riker & Ordeshook, Ledyard, Palfrey, Ferejohn & Fiorina, Buchanan, etc.). I haven’t read the literature, but I know it exists. Now, when I was walking home this thought hit me: the fact that people vote can actually make some sort of sense if you combine two main ideas: the sunk cost fallacy and political views as a tribal affiliation signaling method. It’s actually kind of simple, and therefore likely wrong.
So, the first part is loss aversion/sunk cost fallacy effects. This is basically saying that people vote because they’ve invested a lot in their opinions, in their world-views. They’ve had a lot of arguments along the way, they’ve perhaps even changed their minds about some things along the way – but no matter what, most of them have spent a lot of time on this stuff. One might argue this is a chicken-and-egg problem, because if simple loss aversion (‘I’ve spent a lot of time dealing with political ideas and much of that might just turn out to have been a complete waste of time if I were to discontinue participating in political debates/discussions; so even though I might be better off not thinking about that stuff anymore, there’s no way I can give up on that subject now, especially considering all the time and effort I’ve put into it already’) is to blame, that doesn’t get us any closer to why people start arguing about petrol taxes in the first place. Now do remember that even though there’s a lot at stake for the individual when politicians make decisions on a national level, that doesn’t make it rational for the individual to worry much about it, because his opinion is irrelevant to the outcome anyway. Part of why people vote is probably that they haven’t realized this; maybe you have to let go of that ‘people know their opinion doesn’t matter at elections’ assumption, I don’t know.
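The point that voting is instrumentally irrational has a standard formalization in the literature mentioned above, Riker & Ordeshook’s calculus of voting, R = p·B − C + D. A minimal sketch with purely illustrative numbers (all of them assumptions, not estimates): unless the expressive/tribal term D does the work, R is negative and the rational citizen stays home.

```python
# Riker & Ordeshook's calculus of voting: R = p*B - C + D.
#   p: probability your vote is decisive
#   B: value to you of your side winning
#   C: cost of voting
#   D: expressive payoff (duty, tribal-affiliation signaling)
# Illustrative numbers only:
p = 1e-7
B = 50_000
C = 5
D = 10

instrumental = p * B - C   # 0.005 - 5 = -4.995: voting doesn't pay on its own
R = instrumental + D       # 5.005: positive only because of the D term
print(instrumental, R)
```

With any plausible p for a national election, p·B is tiny, so the whole model stands or falls with D – which is exactly where the signaling story below comes in.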
But the second part is linked to this, and even if it doesn’t solve the problem, it does help a bit. Now, people like to put people in boxes, ‘friends’ and ‘enemies’. We’ve done it for millions of years, and we’ll keep doing it. You need to know whom to trust and cooperate with, and whom to attack/evade. Religion works quite well on that score; most religions know who the good guys are (people who believe in your god) and who the bad guys are (people who believe in other gods), and they make these nice systems that enforce cooperation within the tribe in a lot of ways. The thing about religion is that you need to believe in the right god to be accepted in the tribe – but usually it’s not quite that simple. That’s because it’s easy to claim that you believe in X. So along the way methods were developed to control the tribe members, to deal with free riders; behavioral constraints (no pork!, no marrying that God-Y fan when you’re a fan of God-X!) for instance. These methods had to be implemented; otherwise the signaling value of religion would be very low and it wouldn’t be very efficient in separating the ‘true’ tribe members/loyals from the non-tribe members/disloyals.
Along the way a lot of religious power got transferred to the state (state aid to poor people instead of religious donations). So the role of religion went down somewhat. Also, some people figured that that religious stuff was quite stupid. So they found new ways to split the world into ‘my type’ and ‘the other type’. In a way politics always had this role too of course, because politics and religion didn’t really separate until quite late in history (needless to say, in a lot of places it hasn’t happened – and no, I will not add a ‘yet’ to that sentence, as I see no good reason why the long term outcome should turn out to be a godless society). Anyway, democracy made it possible to have status games where people didn’t argue about religion, and politics started to matter a lot when it came to tribal affiliation. As the power of the state grew, handling more and more stuff, dealing with all kinds of related – and unrelated – matters, it became a lot easier to use political cues as tribal markers. Political discussions got both complex enough for people to use discussion performance as an ability and loyalty signal, and important enough, at least in theory, to merit people’s attention.
So people started telling their children both which god to believe in and which politician to vote for. They told their children. And they spent a lot of time arguing with other people, the other people who’d found out that ‘politics is the new religion’.
Some people enjoy political debates. Perhaps they like the mental gymnastics that some other people might get by dealing with mathematics or playing chess. Perhaps they think their opinion is important, that other people care about it. Maybe they think that they can change other people’s minds and thereby support the group by converting others to group X, just as they’re told to do by their politicians (and priests).
A lot of political views have an important value as a signal about which kind of person you are, or would like to be. Part of why you dislike the ‘opponent’ is that you disagree with him, but that’s not really all there is to it. It’s also that you don’t trust him. He didn’t bow to Huitzilopochtli. His political views might have no influence on anything relevant to your relationship; you might be perfectly able to meet with him, have a long talk with him about his life, his family, his work, his hobbies – and you might end up being his best friend. Only that’s usually not how it goes, because when you hear about that ‘troublesome’ view on ‘the environment’/’god’/’fiscal sustainability’ you tend to make the ‘troublesome’ views relevant, because – he didn’t bow to Huitzilopochtli. Some people overcome politics by finding another individual with the same views, or views which are dissimilar but unimportant, because their parents taught them the magic of ‘you should be able to be friends with everybody’ – which works for both until they meet a guy who bows to Huitzilopochtli. He will not be friends with them until they bow to Huitzilopochtli, and just a bow usually isn’t enough. So they have a tribe too, which they’re forced into, even if they’d like not to be tribe members at all.
It’s not that political views matter in the big picture. Your political views, that is. They don’t, they really don’t. But they matter in the small picture. Once a societal norm is firmly established it tends to get a life of its own. So people talk about windmills and fat taxes and public pension schemes instead of whether they should pray to Ares or Dionysus. If you talk about it many hours each year, and you watch the news and so on – much of which is also just political posturing and games – then actually going to the election booth on election day and casting your vote isn’t really a big deal. Also, politicians like voters more than supporters who don’t vote, just like priests like believers who give money to the church more than believers who don’t, so there’s consensus in the tribe that voting is the correct behavior; and if you don’t vote, you don’t bow to Huitzilopochtli, and then you’d better have a good explanation. A few other things. First, remember that the more costly the signal, the higher the signaling value associated with the action. Second, decreasing returns do not kick in at the same level for all individuals; stratification and sorting within the group is a natural part of the signaling game. That’s another reason why it’s hard to give up political debates: if you cut off the source of status you derive from participating in the political game (political discussions), you might be unable to recoup the lost status elsewhere.
A funny thing is that I still don’t quite know why I wrote this piece. Intellectual posturing probably – though I could hypothetically have done a lot better in that regard as the Flesch Reading Ease score of the piece is ~60. Pieces like this probably make it easier for me to delude myself into thinking that part of why a few people still read me is that they delude themselves into thinking that I’m an original thinker (even if it’s quite likely that there’s nothing original in the piece above). Do I really think anybody will change their behavior one inch because of this piece? No. I would have said ‘of course not’, but then the question arose: Did I think that before I started writing? Had I even considered the question why I should write something like this? Maybe a little – it was mostly that I was lonely and bored.
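For reference, the Flesch Reading Ease score mentioned above is computed as 206.835 − 1.015·(words/sentences) − 84.6·(syllables/words). A quick sketch with illustrative counts (20 words per sentence and 1.5 syllables per word – assumptions, not measurements of this post) lands right around the ~60 quoted:

```python
def flesch_reading_ease(words, sentences, syllables):
    """Standard Flesch Reading Ease formula (higher = easier to read)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# 20 words/sentence, 1.5 syllables/word:
score = flesch_reading_ease(words=100, sentences=5, syllables=150)
print(round(score, 1))  # 206.835 - 20.3 - 126.9 = 59.635, i.e. ~60
```

Scores in the 60–70 band are conventionally read as ‘plain English’; lowering the score (longer sentences, longer words) is the cheap way to do the intellectual posturing the paragraph above confesses to.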
Players: i,j (think: male, female)
Preferences: U(IO, II),
IO: Interest overlap.
II: Interest Intensity.
(i,j) have (n,m) interests (they don’t necessarily have equally many), (ni,mj). Let (ki) be the subset of individual i’s interests from the total interest set (ni) which is non-overlapping with the interest set (mj) (non-shared interests), and let (li) be the subset of interests from (ni) which do overlap with (mj) (shared interests). Assume that individual i’s total (negative) utility contribution from the interest set (ki) is equal to [-ki*(aiNO*qiNO)] – where II here enters the model as a scaling vector aiNO with 0 < aiNO < 1, where 0 denotes no interest and 1 denotes high interest, where the NO-part denotes ‘Non-Overlapping’ interests and where q is a relevance factor – some interests are intense, but we don’t care whether the partner shares them. To get a model one can always solve, you probably need to assume q is bounded, but in the real world it often isn’t (‘dealbreakers’). Similarly, the interest set (li), which enters both utility functions Ui and Uj, contributes a utility of [li*(aiO*qiO)] to individual i’s total utility from entering the relationship, where O denotes the interests of individual i which ‘Overlap’ with interests from the interest set (mj). Let the reservation utility be zero and total utility from entering the relationship for individual i be li*(aiO*qiO) – ki*(aiNO*qiNO). Do note that the problem is not perfectly symmetric, as the scaling parameter qi is in general not equal to qj, even if (li) = (lj). There’s also the problem that the common interest factor might enter (at least in part) the utility function as a share of total interest space – 2 common interests out of 4 might be better than 2 common interests out of 30. Though you might in some cases be able to let this effect enter the model via q.
Utility matters, but we need a matching likelihood (ML) as well. Let the likelihood that (i,j) meet be a function of l*(aC), where dML/dl and dML/daC are both positive – so people are more likely to meet if they have many common interests, and they are more likely to meet the more intense the interests are (the latter is more dubious than the former; compare internet chess with ballet). Arguably one might include qC in the ML, because some people’s interest choices are ‘potential partner-relevant’, but it’s easier if we leave that out for now. Assume further that…
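The net-utility expression above can be sketched numerically (all numbers purely illustrative): represent each interest as an (a, q) pair, add a·q for each shared interest, subtract a·q for each non-shared one, and enter the relationship if the sum beats the zero reservation utility.

```python
# Interests as (intensity a, relevance q) pairs, with 0 < a < 1.
shared_i     = [(0.9, 1.0), (0.6, 0.5)]   # l_i: interests i shares with j
non_shared_i = [(0.8, 0.7), (0.3, 0.2)]   # k_i: interests j doesn't share

def net_utility(shared, non_shared):
    """li*(aiO*qiO) - ki*(aiNO*qiNO), summed over the two interest sets."""
    gain = sum(a * q for a, q in shared)
    loss = sum(a * q for a, q in non_shared)
    return gain - loss

u_i = net_utility(shared_i, non_shared_i)   # 1.2 - 0.62 = 0.58
# Positive, i.e. above the zero reservation utility, so i enters the
# relationship; j's calculation uses j's own a's and q's and may differ,
# which is the asymmetry noted above (qi != qj in general).
print(u_i)
```

An unbounded q (‘dealbreaker’) would just be one non-shared pair whose a·q swamps everything else, flipping the sign regardless of the shared interests.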
The model I was beginning to outline above had zero dynamics, no risk, no ‘family preferences’, no ‘income/status’ variables, no ‘age/looks’ variables, no geography, no beliefs… You might want to remember this model outline next time you hear a social scientist talk about this or that. A very simple model like the one above with few variables and simple relations between the variables can still be quite difficult to solve, because you have to think very hard about what’s going on, what you’re assuming along the way, and how to implement decision rules in the model that make the resulting equilibrium(/equilibria) appear plausible (and how to get rid of implausible equilibria). Social behaviour is difficult to model, and it’s hard to get good results in micro setups like these because there are too many variables at play and way too much interaction going on.
The comment section of MR can go to hell, it just erased what I wrote there because I hit the wrong keyboard button when I was about to post it. So I decided to post some of my thoughts here instead.
Tyler asked, or rather a reader of his asked him: “Who will still be famous in 10,000 years?“
Tyler mentions “major religious leaders (Jesus, Buddha, etc.), Einstein, Turing, Watson and Crick, Hitler, the major classical music composers, Adam Smith, and Neil Armstrong.”
The religious leaders are the only ones on that list I can agree a little with. The question is of course badly posed, because it doesn’t clarify what is meant by famous. I think basically nobody alive/known today has much of a chance of being famous in 10,000 years. Fame implies more than name-recognition in my mind, and I’m pretty sure that’s the most you can hope for. To take an example, Aristotle isn’t famous today; some people think they know who he was (a philosopher), a few people know a little more, and that’s it. Most people have no clue who he was. How many of the 1 billion people living in Africa know more than the name, if they know that much? The Chinese? People from Brazil? And a) that’s a vastly shorter timeframe, b) when he was alive, there were only maybe 170 million people who could potentially become famous; now there are 7 billion (40 times as many) and the number keeps growing.
Let’s say 10,000 years is 300 generations. Of course it could be more, it could be less, depending on how things develop, but let’s just try that out. Let’s say that you know all there is to know about famous guy X today and you tell your child. Your child tells everything he knows to his child, etc., for 300 generations. Now assume there’s an information loss of 1 percent per generation, i.e. your child only gets 99% of what you had told him about the famous guy right, his/her child only gets 99% of that right, and so on. After 300 generations, a little less than 5 percent of the information about the famous guy will be left, even though there was very little information loss at each information exchange point. If instead 2 percent of the surviving information gets lost each generation, only 0.2 percent of the information survives to the end. If it’s 5 percent, it’s 2*10^(-7) of the information that’s left [that’s about equal to 2 divided by ten million] at the end.
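The arithmetic here is just compound decay – the retained fraction is (1 − loss per generation)^generations. A quick sketch checking the three figures quoted:

```python
def retained(loss_per_generation, generations=300):
    """Fraction of the original information surviving all generations."""
    return (1 - loss_per_generation) ** generations

# 1% loss -> ~4.9% left; 2% -> ~0.2% left; 5% -> ~2*10^-7 left.
for loss in (0.01, 0.02, 0.05):
    print(f"{loss:.0%} loss/generation: {retained(loss):.2e} of the info left")
```

Note how brutally nonlinear this is in the per-generation loss rate: quintupling the loss from 1% to 5% doesn’t cut the surviving share by a factor of five but by a factor of roughly 200,000.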
This assumes no Great Disasters, no increasing costs of information storage over time (something I think people tend to forget when looking at very long time frames; this is basically the same as saying that individuals don’t learn new things in the year 8000 that might be more useful to know at that point in time than what color tie Einstein wore in 1934, so there’s no new information crowding out old information. Now ask yourself i) how much you know about the dietary habits of Cleopatra, ii) why you’d want to know something like that, and iii) what the growth rate of the historical knowledge available generally looks like. Yes, you got it right: barring Great Disasters it is increasing over time at a very fast rate.), and no broken links in the chain at any point. This is with a society basically in stasis for 10,000 years with almost perfect information sharing over time.
To be remembered in 10,000 years, you need to be a God. But even the gods will have a very hard time over that kind of time frame; humans have already forgotten all about more than 99.99 percent of all the gods we ever made up. Who remembers Dis Pater anyway? Right now I’m thinking about an SMBC comic I’d like to have linked to, illustrating this ‘religions are mortal too’ concept. It shows a dad telling his son some very garbled version of a mix of current religions and the Egyptian sun gods or some such, in order ‘to give the child a head start, because that’s what things will be like in the future anyway, all mixed up’ – or along those lines. But I can’t remember the number; the comic has over 2000 strips and I can’t find the specific strip via google (please leave a link in the comments if you think you know which one I’m thinking about).
The time frame is the killer. Maybe Muhammed or Jesus will manage, but I severely doubt it. 10,000 years is a very long time. Maybe some combination of Superman, Muhammed and Sauron will still be around at that point. Memetic mutations happen all the time, and religions will not be immune to such changes in the long run. Another thing: to think that because a religion has been around for 2,000 years it will keep being around forever is probably not a good idea. Religions stay around as long as the religious people keep having babies and indoctrinating them. If they do that and don’t get killed or forced to drop their Gods by people who have other Gods, they can manage in the medium to long run. The Gods of Christianity and Islam have done that so far, but they killed a lot of gods in the process. Who’s to say they won’t share the fate of the gods they replaced in the very long run? Wouldn’t that be the most likely outcome? Why not?
Incidentally, when you have a child, the child shares half of your DNA and half of the other parent’s DNA. Over 300 generations, what’s left of your unique original ‘DNA package’ is equal to 4.90909…*10^(-91). The denominator in that expression is (significantly) larger than the number of atoms in the universe. Having children is a very bad way to try to live forever, to leave a ‘permanent imprint on the world’ or some such. You might just (think that you’ll) live on a little while longer, but that’s basically it.
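The halving can be checked in two lines: 0.5^300 is the surviving share, and its reciprocal 2^300 ≈ 2·10^90 is indeed far larger than the roughly 10^80 atoms usually estimated for the observable universe (the 10^80 figure is the standard rough estimate, used here as an assumption):

```python
share = 0.5 ** 300        # fraction of your unique DNA left after 300 halvings
atoms = 10 ** 80          # common rough estimate of atoms in the observable universe
print(f"{share:.5e}")     # 4.90909e-91, the figure above
print(2 ** 300 > atoms)   # the denominator dwarfs the atom count
```

This treats inheritance as a clean halving each generation, ignoring pedigree collapse and the chunkiness of chromosomal recombination, which is fine for the order-of-magnitude point being made.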