In a post I published a few weeks ago I mentioned that I had left out some comments and observations about health economics because that post was growing unwieldy, but that I might publish them later in a separate post. This post contains those observations, along with some additional details I added later. Posts like this one usually do not get past the ‘draft’ stage (in WordPress you can save posts you intend to publish later as drafts), and as is usually the case for such posts I already regret having written it, for multiple reasons. I should warn you from the start that this post is very long and will probably take you some time to read.
Anyway, the starting point for this post was some comments related to health insurance and health economics which I left on SSC in the past. A lot more people read those comments on SSC than will ever read this post, so my motivation for posting them here was not to ‘increase awareness’ of the ideas and observations in any general sense; rather, it is simply much easier for me to find things I’ve written when they’re located here on this blog, and I figure that some of the topics I wrote about back then may well come up again later, in which case it would be convenient to have a link at hand. Relatedly, I have added many comments and observations to this post which were not included in the original exchange; adding them on SSC is no longer possible, as my comments there are no longer editable.
Although the starting point for the post was, as mentioned, a comment exchange, I decided early on against simply ‘quoting myself’, and I have therefore made some changes in wording and structure to increase the precision of the statements and to add context that makes the observations below easier to read and understand (and harder to misread). The major topics covered are preventable diseases, the level of complexity present in the health care sector, and various issues relating to health care cost growth. The post includes some perhaps insufficiently well-known complications which may arise when discussing how different financing schemes relate to various outcomes and to cost growth. Much of this will probably be review for people who’ve read my previous posts on health economics, but that’s to be expected considering the nature of this post.
Although ‘normative stuff’ is not what interests me most – I generally tend to prefer discussions where the aim is to identify what happens if you do X, and I’ll often be happy to leave the discussion of whether outcome X or Y is ‘best’ to others – I do want to start out by stating a policy preference, as this preference was the starting point for the aforementioned debate that led to this post. At the outset I should thus make clear that I would in general favour changes to the financial structure of health care systems such that people who take avoidable risks which systematically and demonstrably increase their expected health care expenditures at the population level pay a larger proportion of the cost than people who do not take such risks.
Most developed societies have health care systems designed in a way that implicitly subsidizes unhealthy behaviours to some extent. An important note in this context is that if you are not explicitly requiring people who behave in risky ways which tend to increase their expected costs to pay more for their health care (/insurance), then you are, by virtue of not doing this, implicitly subsidizing those unhealthy individuals/behaviours. I mention this because some people might not like the idea of ‘subsidizing healthy behaviours’ (‘health fascism’) – which, from a certain point of view, is what you do if you charge people who behave in unhealthy ways more. Some might take issue with words like ‘subsidy’ and ‘implicit’, but regardless of what you call these things, the major point to keep in mind is this: if one group of people (e.g. ‘unhealthy people’) costs more to treat (/is ill more often, gets illnesses related to their behaviours, etc.) than another group (‘healthy people’), then this shortfall must be financed – you face a budget constraint – and there are only two basic ways to do it: you can either charge the high-cost group (‘unhealthy people’) more, or you can require the other group (‘healthy people’) to make up the difference. Any scheme which deals with such a case of unequal net contribution rates is equivalent to one of those schemes or a mix of the two, regardless of what you call things and how it’s done, and regardless of which groups we are talking about (old people also have higher health care expenditures than young people, and most health care systems implicitly redistribute income from the young to the old).
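A toy numerical illustration of this budget-constraint point may be helpful (all group sizes and costs below are made up for the example):

```python
# Illustrative only: two groups with different expected costs and a binding
# budget constraint. Either premiums are risk-rated, or a uniform premium
# implicitly transfers money from the low-cost group to the high-cost group.
n_healthy, n_unhealthy = 800, 200            # hypothetical group sizes
cost_healthy, cost_unhealthy = 2_000, 5_000  # hypothetical expected annual costs

total_cost = n_healthy * cost_healthy + n_unhealthy * cost_unhealthy

# A single community-rated premium that exactly balances the budget:
community_premium = total_cost / (n_healthy + n_unhealthy)

# The implicit subsidy: what each healthy person pays above their own expected cost.
transfer_per_healthy = community_premium - cost_healthy

print(community_premium)     # 2600.0
print(transfer_per_healthy)  # 600.0
```

With risk-rated premiums each group pays its own expected cost and the transfer is zero; there is no third option that balances the budget without one group covering part of the other group’s costs.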
If you’re worried about ‘health fascism’ and the implications of subsidizing healthy behaviours (/’punishing’ unhealthy behaviours) you should at least keep in mind that if the health care costs of people who live healthy lives and people who do not are dissimilar then any system that deals with this issue – which all systems must – can either choose to ‘subsidize’ healthy behaviours or unhealthy behaviours; there’s no feasible way to design a ‘neutral system’ if the costs of the groups are dissimilar.
Having said all this, the very important next point is that it is much more difficult than people unfamiliar with this field would probably think to design simple schemes which require people who engage in unhealthy behaviours to pay more without at the same time introducing a significant number of new problems. And it’s almost certainly much harder than you think to evaluate whether a proposed change actually accomplished what you wanted it to accomplish. Even if we are clear about what we want to accomplish and can all agree that that is what we are aiming for – i.e. we disregard the political preferences of large groups of voters and whether the setup in question is at all feasible – this stuff is really much harder than it looks, for many reasons.
Let’s start out by assuming that smoking increases the risk of disease X by 50%. Say you can’t tell which cases of X are caused by smoking; all you know is that smoking increases the risk at the population level. Say you don’t cover disease X at all if someone smokes; that is, smokers are required to pay the full treatment cost out of pocket if they contract disease X. It’s probably not too controversial to state that some people might perceive this approach as not completely ‘fair’ to the many smokers who would have got disease X even if they had not smoked (a majority in this particular case, though of course the proportion will vary with the conditions and risk factors in question). Now, a lot of the excess health care costs related to smoking are of this kind, and this is actually a pretty standard pattern for risk factors in general – smoking, alcohol, physical inactivity, etc. You know that these behaviours increase risk, but you usually can’t say for certain which of the specific cases you observe in clinical practice are actually (‘perfectly’/’completely’/’partially’?) attributable to the behaviour. And quite often the risk increase associated with a specific behaviour is rather modest compared to the relevant base rates, meaning that many of the people who engage in risk-increasing behaviours and get sick might well have got sick even if they hadn’t engaged in those behaviours.
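The arithmetic behind ‘a majority in this particular case’ is just the standard attributable-fraction formula; a quick sketch using the 50% risk increase assumed above:

```python
# With a relative risk of 1.5 (the assumed 50% risk increase), the standard
# epidemiological attributable fraction among the exposed is (RR - 1) / RR.
relative_risk = 1.5
af_exposed = (relative_risk - 1) / relative_risk

print(f"{af_exposed:.0%} of smoker cases attributable to smoking")    # 33%
print(f"{1 - af_exposed:.0%} would likely have occurred regardless")  # 67%
```

So even with a headline 50% risk increase, roughly two thirds of the smokers who contract disease X would – statistically speaking – have contracted it anyway.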
On top of this, it’s usually also the case that risk factors interact with each other. Smoking increases the risk of cancer of the esophagus, but so do alcohol and obesity, and if a person both smokes and drinks, the potential interaction effect may not be linear – so you most likely often can’t just identify individual risk factors in specific studies and then add them all together later to get a proper risk assessment. A further complication is that behaviours may decrease risk as well as increase it – to stick with the example, diets high in fruits and vegetables lower the risk of cancer of the esophagus. Exercise probably does as well – we know that exercise has important and highly complex effects on immune system function (see e.g. this post). Usually a large number of potential risk factors are at play at the same time, there may be multiple variables which lower risk and which are also important to include if you want a proper risk assessment, and even if you knew in theory which interaction terms were likely to be relevant, you might still be unable to estimate them – this may require high-powered studies with large numbers of patients, which may not be available, or the results of such studies may not apply to your specific subgroup of patients. Cost-effectiveness is also an issue – it’s expensive to assess risk properly. One take-away is that you’ll still have a lot of unfairness in a modified contribution rate model, and even evaluating the fairness aspects of the change may be difficult to impossible, because to some extent this question is unknowable. You might find yourself charging the obese guy more because obesity means he’s high risk, when in reality he is lower risk than the non-fat guy who is charged a lower rate, because he also exercises and eats a lot of fruits and vegetables, which the other guy doesn’t.
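To illustrate the pooling problem: suppose single-factor studies gave you the relative risks below (all numbers invented), and suppose smoking and alcohol interact. Naively combining the separately estimated risks then misses the interaction term:

```python
# Hypothetical relative risks, each estimated in a separate single-factor study:
rr_smoking, rr_alcohol = 2.0, 1.8
rr_fruit_veg = 0.7  # a protective factor

# Naive pooling: multiply the separately estimated relative risks.
naive_rr = rr_smoking * rr_alcohol * rr_fruit_veg

# If smoking and alcohol interact super-multiplicatively (a made-up
# interaction term of 1.5 on top of the product), the joint risk differs:
interaction = 1.5
joint_rr = rr_smoking * rr_alcohol * interaction * rr_fruit_veg

print(f"naive pooled RR:     {naive_rr:.2f}")  # 2.52
print(f"RR with interaction: {joint_rr:.2f}")  # 3.78
```

Whether the interaction term is 1.5, 1.0 or something else entirely is exactly the kind of thing that may take a high-powered study to estimate – and the protective factors have to be in the model too, or the obese-guy-who-exercises gets misclassified.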
Of course the above paragraph took it for granted that it was even possible to quantify the excess costs attributable to a specific condition. That may not be easy to do at all, and there may be large uncertainties involved. The estimated excess cost will depend upon a variety of factors which may or may not be of interest to the party performing the analysis; for example, it may matter a great deal which time frame you’re looking at and which discounting methodology is applied (see e.g. the last paragraph in this post). The usual average-vs-marginal-cost problem (see the third-last paragraph in the post linked in the previous sentence – this post also has more on the topic) also applies here and is related to the ‘fat guy who exercises and is low-risk’ problem: ideally you’d want to charge people with higher health care utilization levels more (again, in a setting where we assume the excess cost is associated with modifiable life-style variables – this was our starting point), but if there’s a large amount of variation in costs across individuals in the specific subgroups of interest and you only have access to average costs rather than individual-level costs, then a scheme taking into account only the differences in averages may be very sub-optimal from the viewpoint of the individual. Care also needs to be taken to avoid problems like e.g. Simpson’s paradox.
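A small made-up example of Simpson’s paradox in this setting: suppose the ‘healthy’ group costs more within every severity stratum, yet has the lower overall average cost, simply because its members are mostly mild cases:

```python
# Made-up cost data: (number of patients, average cost) per severity stratum.
healthy   = {"mild": (90, 120), "severe": (10, 1_100)}
unhealthy = {"mild": (10, 100), "severe": (90, 1_000)}

def overall_avg(group):
    total = sum(n * cost for n, cost in group.values())
    patients = sum(n for n, _ in group.values())
    return total / patients

# 'Healthy' costs more in each stratum (120 > 100, 1100 > 1000), yet far less
# on average overall, because most healthy patients fall in the cheap stratum.
print(overall_avg(healthy))    # 218.0
print(overall_avg(unhealthy))  # 910.0
```

A premium schedule based only on the group averages would charge the unhealthy group more than four times as much, even though, conditional on disease severity, its members are the cheaper ones in this (contrived) example.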
Risk factors are not the only things that cluster; so do diseases. An example:
“An analysis of the Robert Koch-Institute (RKI) from 2012 shows that more than 50 % of German people over 65 years suffer from at least one chronic disease, approximately 50 % suffer from two to four chronic diseases, and over a quarter suffer from five or more diseases.” (link)
78.3 % of the type 2 diabetics in that study also suffered from hypertension. Does this fact make it easier or harder to figure out the ‘true cost contribution’ of type 2 diabetes and of hypertension (and – what we’re ultimately interested in in this setting – the ‘true cost contribution’ of the unhealthy behaviours which lead some individuals to develop type 2 diabetes and hypertension who would not otherwise have developed these conditions, or who would not have developed them as early as they did)? It should be noted that diabetes was estimated to account for 11 % of total global healthcare expenditure on adults in 2013 (link). That already large proportion is expected to rise substantially in the decades to come – if you’re interested in cost growth trajectories, this is a major variable to account for. Attributability is really tricky here, and perhaps even more tricky in the case of hypertension – but for what it’s worth, according to a CDC estimate hypertension costs the US $46 billion per year, or ~$150 per person per year.
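One reason comorbidity makes attribution tricky is that per-condition cost estimates cannot simply be added up when conditions cluster in the same patients. A toy sketch (all figures invented):

```python
# Toy illustration of why per-condition cost estimates can't just be summed
# when diseases cluster. One hypothetical patient's encounters:
visits = [
    {"cost": 500,   "conditions": {"diabetes"}},
    {"cost": 300,   "conditions": {"hypertension"}},
    {"cost": 1_200, "conditions": {"diabetes", "hypertension"}},  # joint visit
]

true_total = sum(v["cost"] for v in visits)

# Naive attribution: assign each visit's full cost to every condition involved.
naive_by_condition = {}
for v in visits:
    for cond in v["conditions"]:
        naive_by_condition[cond] = naive_by_condition.get(cond, 0) + v["cost"]

print(true_total)                        # 2000
print(sum(naive_by_condition.values()))  # 3200 -- the joint visit counted twice
```

Any scheme that attributes the full cost of a joint encounter to each condition involved will double-count; how to split such shared costs between ‘diabetes’, ‘hypertension’, and the behaviours upstream of both is precisely the attribution problem.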
Anyway, you look at the data and you make guesses, but the point is that Dr. Smith won’t know for certain whether Mr. Hanson would have had his stroke even if he hadn’t smoked. A proposal not to pay for a health care service or medical product in the case of an ‘obviously risky-behaviour-related health condition’ may sometimes appear appealing, and you sometimes see people make this sort of proposal in discussions of this nature, but when you look at the details it tends to be very difficult to figure out just what those ‘obviously risky-behaviour-related health conditions’ are, and even harder to make even remotely actuarially fair adjustments to premiums and coverage patterns to reflect the risk. Smoking and lung cancer is a common example of a relatively ‘clean’ case, but most cases are ‘less clean’, and even here there are complications: a substantial proportion of lung cancer cases are not caused by tobacco – occupational exposures also cause a substantial proportion of cases, and: “If considered in its own disease category […] lung cancer in never smokers would represent the seventh leading cause of cancer mortality globally, surpassing cancers of the cervix, pancreas, and prostate, and among the top 10 causes of death in the United States.” (link) Occupational exposures (e.g. asbestos) are not likely to account for all such cases, and it has for example also been found that other variables, including previous pneumonia infections and tuberculosis, affect risk (here are a couple of relevant links to some previous coverage I wrote on these topics).
I think many people who have preferences of this nature (‘if it’s their own fault they’re sick, they should pay for it themselves’) underestimate how difficult it may be to make changes which could be known, with a reasonable level of certainty, to actually have the intended consequences – even assuming everybody agreed on the goal to be achieved. This is in part because there are many other aspects and complications which need to be addressed as well. Withholding payment in the case of costly preventable illness may for example in some contexts increase costs rather than decrease them. The risk of complications of some diseases – an important cost driver in the context of diabetes – tends to depend on post-diagnosis behavioural patterns; the risk of developing diabetes complications will depend upon the level of glycemic control. If you say you won’t cover complications at all in the case of ‘self-inflicted disease X’, then you also to some extent remove the option of designing insurance schemes which might lower costs and complication rates post-diagnosis by rewarding ‘good’ (risk-minimizing) behaviours and punishing ‘bad’ (risk-increasing) behaviours. This is not desirable in the context of diseases where post-diagnosis behaviour is an important component of the cost function, as it certainly is in the diabetes context. There are multiple potential mechanisms here, some of which are disease-specific (e.g. suboptimal diet in a diagnosed type 2 diabetic) and some of which may not be (a more general mechanism could e.g. be lowered compliance/adherence to treatment in the uncovered populations because they can’t afford the drugs required to treat their illness; though the cost–compliance link is admittedly not completely clear in the general case, there are certainly multiple diseases where lowered compliance would be expected to increase costs long-term).
And again, also in the context of complications, fairness issues are not as simple to evaluate as people might like them to be; some people may have a much harder time controlling their disease than others, or they may be more susceptible to complications given the same behaviour. Some may already have developed complications by the time of diagnosis. Such issues make it difficult to design simple rules which would achieve what you want them to achieve without unfortunate side-effects. Consider for example a rule declaring a microvascular diabetes-related complication automatically ‘your own fault’ (so we won’t pay for it), which might be motivated by the substantial amount of research linking glycemic control with complication risk. Such a rule would punish several groups: diabetics who have had the disease for a longer amount of time (many complications are not only strongly linked to Hba1c but also display a substantial degree of duration-dependence; for example, one study found diabetic retinopathy in 13% of type 1 diabetics with a disease duration of less than 5 years, versus 90% of individuals with a disease duration of 10–15 years (Sperling et al., p. 393) – I also recall reading a study finding that Hba1c itself increases with diabetes duration, which may be partly accounted for by the higher risk of hypoglycemia related to hypoglycemia-unawareness syndromes in individuals with long-standing disease); individuals whose disease is relatively hard to control (perhaps due to genetics, or again due to having had the disease for a longer time – the presence of hypoglycemia unawareness is, as alluded to above, to a substantial degree duration-dependent, and this problem increases the risk of hospitalizations, which are expensive); diabetics who developed complications before they knew they were sick (a substantial proportion of type 2 diabetics develop some degree of microvascular damage pre-diagnosis); and diabetics with genetic variants which confer an elevated risk of complications (“observations suggest that involvement of genetic factors is increasing the risk of complications” (Sperling et al., p. 226) – for example, in the DCCT trial familial clustering of both neuropathy and retinopathy was found, clustering which persisted after controlling for Hba1c; for more on these topics, see e.g. Sperling et al.’s chapter 11).
Other decision rules would similarly lead to potentially problematic incentives and fairness issues. For example, requiring individuals to meet a specific Hba1c goal might be more desirable than simply not covering complications, but it leads to potential problems of its own. Ideally such an Hba1c goal should be individualized, because of the above-mentioned complexities and others I have not mentioned here; requiring a newly diagnosed individual to meet the same goals as someone who has had diabetes for decades does not make sense, and neither does requiring either of these groups to meet exactly the same Hba1c goal as the middle-aged female diabetic who desires to become pregnant (diabetes greatly increases the risk of pregnancy complications, and strict glycemic control is extremely important in this patient group). It’s important to note that these issues don’t just relate to whether or not the setup is perceived as fair; they also relate to whether or not the intended goals would actually be met when you implement the rule. If you were to require that a long-standing diabetic with severe hypoglycemia unawareness meet the same Hba1c goal as a newly diagnosed individual, this might well lead to higher overall costs, because said individual might suffer a large number of hypoglycemia-related hospitalizations which would have been avoidable under a more lax requirement; when you decrease Hba1c you decrease the risk of long-term complications, but you increase the risk of hypoglycemia. A few numbers might make it easier to see how expensive hospitalizations really are, and why I emphasize them here: in this diabetes-care publication the cost of an inpatient day for a diabetes-related hospitalization is assigned at $2,359, and an emergency visit at ~$800. The same publication estimates the total average annual excess expenditures of diabetics below the age of 45 at $4,394.
Going to the hospital is really expensive (43% of the total medical costs of diabetes are accounted for by hospital inpatient care in that publication).
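To see how a uniform Hba1c requirement could backfire for the patient with hypoglycemia unawareness, consider a rough expected-cost sketch. The inpatient-day cost is the $2,359 figure quoted above; the probabilities and the annualized complication cost are invented for the illustration:

```python
# Expected annual cost under two Hba1c targets for a hypothetical long-standing
# diabetic with hypoglycemia unawareness. Probabilities and the complication
# cost are invented; the inpatient-day cost is the figure quoted above.
INPATIENT_DAY = 2_359
COMPLICATION_COST = 10_000  # assumed annualized long-term complication cost

def expected_cost(p_hypo_admission, days_per_admission, p_complication):
    return (p_hypo_admission * days_per_admission * INPATIENT_DAY
            + p_complication * COMPLICATION_COST)

strict = expected_cost(p_hypo_admission=0.40, days_per_admission=3, p_complication=0.05)
lax    = expected_cost(p_hypo_admission=0.05, days_per_admission=3, p_complication=0.12)

print(f"strict target: {strict:.0f}")  # 3331
print(f"lax target:    {lax:.0f}")     # 1554
```

Under these (made-up) numbers, forcing the strict target on this particular patient roughly doubles expected annual costs; an individualized, more lax target would be both cheaper and safer for him, even though the strict target is the right one for other patient groups.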
A topic which was brought up in the SSC discussion was the extent to which private providers have a greater incentive to ‘get things right’ in terms of assessing risk. I don’t take issue with this notion in general, but there are a lot of complicating factors in the health care context. One factor of interest is that it is costly to get things right. If you’re looking at this from an insurance perspective, larger insurance providers may be better at getting things right because they can afford to hire specialists who provide good cost estimates – and getting good cost estimates is really hard, as I’ve noted above. But larger providers translate into fewer firms, which increases market concentration and may thus increase collusion risk, which may in turn increase the prices of health care services. Interestingly, if your aim is to minimize health care cost growth, increased market power of private firms may actually be a desirable state of affairs/goal, because cost growth is a function of both unit prices and utilization levels, and higher premiums are likely to translate into lower utilization rates, which may lower overall costs and cost growth. I include this observation here also to illustrate that what counts as an optimal outcome depends on what your goal is; in the health care sector you sometimes need to think very carefully about what your actual goal is, and which other goals might be relevant.
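The point about premiums, utilization and cost growth is just that total spending is unit price times quantity; if utilization is sufficiently price-responsive, higher prices can mean lower total outlays. A sketch with a made-up constant-elasticity demand curve:

```python
# Total spending = unit price * utilization. With a made-up demand response
# (constant elasticity of -1.4), a price increase lowers total outlays.
def utilization(price, base_use=1_000, elasticity=-1.4, base_price=100):
    # purely illustrative constant-elasticity demand curve
    return base_use * (price / base_price) ** elasticity

for price in (100, 130):
    spending = price * utilization(price)
    print(f"price {price}: utilization {utilization(price):.0f}, spending {spending:.0f}")
```

With an elasticity below 1 in absolute value the conclusion flips and higher prices raise total spending; which regime applies in a given health care sub-market is an empirical question, not something you can settle from the armchair.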
When private insurance providers become active in a market that also includes a government entity providing a level of guaranteed coverage, total medical outlays may easily increase rather than decrease. The firms may meet an unmet need, but some of that unmet need may be induced demand (here’s a related link). Additionally, the bargaining power of various groups of medical personnel may change in such a setting, leading to changes in compensation schedules which may not be considered desirable/fair. An increase in total outlays may or may not be considered a desirable outcome, but this illustrates once again that you need to be careful about what you are trying to achieve.
There’s a significant literature on how the level of health care integration – vertical and horizontal, in terms of financial structure as well as e.g. service provision structure – may impact health care costs, and this is an active area of research where in some contexts we do not yet know the answers.
Even when cost-minimization mechanisms are employed in the context of private firms and the firm in question is efficient, the firm may not internalize all relevant costs. This may paradoxically lead to higher overall costs, due to coverage decisions taken ‘upstream’ influencing costs ‘downstream’ in an adverse manner; I have talked about this topic on this blog before. A diabetic might be denied coverage of glucose testing materials by his private insurer, and as a result get hospitalized for a foreseeable and avoidable complication (hypoglycemic coma due to misdosing); but because the people paying for the testing materials may not be the same people paying for the subsequent hospitalization, it might not matter to those denying coverage of the testing materials, and so they won’t take it into account when making their coverage decisions. That sort of thing is quite common in the health care sector – different entities pay for and receive payments for different things – and this is once again a problem to keep in mind if you’re interested in health care evaluation: interventions which seem to lower costs may not do so in reality, because the intervention led to higher health care utilization elsewhere in the system. If incentives are not well aligned things may go badly wrong, and they are often not well aligned in the health care sector. When both the private and public sectors are involved in the financial arrangements and/or the actual health service provision – which is the default health care system setup in developed societies – this usually leads to highly complex systems, where the scope for such problems seems magnified rather than the opposite. I would assume that in many cases it matters a lot more that incentives are well aligned than which specific entity is providing insurance or health care in the specific context – a conclusion drawn in part from the coverage included in Simmons, Wenzel & Zgibor‘s book.
In terms of the incentive structures of the people involved in the health care sector, this stuff of course adds another layer of complexity. In all sectors of the economy you have people with different interests who interact with each other, and when incentives change, outcomes change. Outcomes may be car batteries, or baseball bats, or lectures. Evaluating outcomes is easier in some settings than in others, and I have already touched upon some of the problems that might be present when you’re trying to evaluate outcomes in the health care context. How easy it is to evaluate outcomes will naturally vary across sub-sectors of the health care sector, but a general problem which tends to surface here is the existence of various forms of asymmetrical information. There are multiple layers, but a few examples are worth mentioning. To put it bluntly, the patient tends to know his symptoms and behavioural patterns – which may be disease-relevant, an aspect certainly important to include when discussing preventable illnesses caused at least in part by behaviours which increase the risk of said illnesses – better than his doctor, and the doctor will in general know much more about the health condition and potential treatment options than the patient. The patient wants to get better, but he also wants to look good in the eyes of the doctor, which means he might not be completely truthful when interacting with the doctor: he might downplay how much alcohol he drinks, misrepresent how often he exercises, or lie about his smoking habits or his weight. These things make risk assessments more difficult than they otherwise might have been.
As for the GPs, we usually have some level of regulation which restricts their behaviour to some extent, and part of the motivation for such regulation is to reduce the level of induced demand which might otherwise result from information asymmetry regarding e.g. relevant treatment effects. If a patient is not sufficiently competent to evaluate the treatments he receives (‘did the drug the doctor prescribed really work, or would I have gotten better without it?’), there’s a risk he might be talked into undergoing needless procedures or taking medications for which he has no need, especially if the doctor who advises him has a financial interest in the treatment modality on offer.
General physicians have different incentives from nurses and from specialists working in hospitals, and all of these groups may experience conflicts of interest when dealing with insurance providers and with each other. Patients, as mentioned, have their own set of incentives, which may not align perfectly with those of the health care providers. Different approaches to dealing with such problems lead to different organizational setups, all of which influence both the quantity and quality of care, subject to various constraints. Whether decreasing competition between stakeholders/service providers may decrease costs is an active area of research; one thing that is relatively clear from the diabetes research with which I have familiarized myself is that when different types of care providers coordinate activities, this tends to lead to better outcomes (and sometimes, but not always, lower costs), because some of the externalized costs become internalized by virtue of the coordination. It seems very likely to me that the answers to such questions will differ across subsectors of the health care sector. A general point might be that complex diseases should be expected to be more likely than relatively simple diseases to generate cost savings from increased coordination (if you’re fuzzy about what the concept of disease complexity refers to, this post includes some relevant observations). This may be important, because complex diseases should probably also tend to be more expensive to treat in general, because the level of need in those patients is higher.
It’s perhaps hardly surprising, considering the already-discussed difficulties of properly assessing costs, that there’s a big discussion to be had about how to even estimate costs (and benefits) in specific contexts, and that people write books about these kinds of things. A lot has already been said on this topic and a lot more could be said, but one general point perhaps worth repeating is that in the health care sector it may be very difficult to figure out what things (‘truly’) cost (/’are worth’). If only a public sector entity deals with a specific health problem and patients are not charged for receiving treatment, it may be very difficult to figure out what things ‘should’ cost, because the relevant prices are simply missing from the picture. You know what the government entity paid the doctors in wages and what it paid for the drugs, but the link between payment and value is sometimes a bit iffy here. There are ways to at least try to address some of these issues, but as already noted people write books about these kinds of things, so I’m not going to provide all the highlights here – I refer to my previous posts on these topics instead.
Another important, related point is that medical expenditures and medical costs are not synonyms. There are many costs associated with illness which are not directly related to e.g. a payment to a doctor. People who are ill may be less productive while at work, they may have more sick days, they may retire earlier, their spouse may cut down on work hours to take care of them instead of going to work, and a family caretaker may become ill as a result of the demands imposed by the caretaker role (for example, Alzheimer’s disease significantly increases the risk of depression in the spouse). Those costs are relevant – there are literatures on these things – and in some contexts such ‘indirect costs’ (e.g. lower productivity at work and early retirement) may make up a very substantial proportion of the total costs of a health condition. I have seen diabetes cost estimates indicating that indirect costs may account for as much as 50 % of total costs.
If there’s a significant disconnect between total costs and medical expenditures, then minimizing expenditures may not be desirable from an economic viewpoint. A reasonable assessment model of outlays will/should include both a monetary cost parameter and a quality/quantity parameter (ideally both of the latter); if you neglect the latter, in some sense you’re only dealing with what you pay out, not with what you get for that payment (which is relevant). If you don’t take indirect costs into account, you implicitly allow cost-switching practices to muddle the picture and make assessments more difficult. For example, if you provide fewer long-term care facilities, then the number of people involved in ‘informal care’ (e.g. family members having to take care of granny) will go up, and that will have secondary effects downstream which should also be assessed: you improve the budget of the long-term care facilities, but you may at the same time increase demands on e.g. psychiatric institutions and marginally lower the (especially female) labour market participation rate. The net effect may still be positive, but the point is that an evaluation will/should include costs like these in the analysis, at least if you want anything remotely close to the full picture.
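A minimal numerical sketch of the cost-switching point (every figure below is invented):

```python
# Illustrative only: a budget decision that lowers direct medical outlays
# but generates indirect costs elsewhere. All numbers are made up.
facility_savings = 40_000          # annual long-term-care outlay avoided
informal_care_hours = 1_500        # family care hours substituted in
value_per_hour = 22                # assumed forgone wage of the caretaker
extra_psychiatric_outlays = 4_000  # caretaker strain spilling into other care

indirect_cost = informal_care_hours * value_per_hour + extra_psychiatric_outlays
net_effect = facility_savings - indirect_cost

print(indirect_cost)  # 37000
print(net_effect)     # 3000 -- still positive, but far below the headline saving
```

The headline $40,000 saving shrinks to $3,000 once the indirect costs are counted; with slightly different assumptions the net effect would turn negative, which is exactly why an evaluation that only tracks the facility budget can be badly misleading.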
Let’s return to those smokers we talked about earlier. A general point not mentioned yet is that if you don’t cover smokers in the public sector because of cost considerations, many of them may not be covered by private insurance either. This is because a group of individuals that is high risk and expensive to treat will be charged high premiums (or the insurance providers would go out of business), and for the sake of this discussion we’re now assuming smokers are expensive. If that is so, many of them probably would not be able to afford the premiums demanded. Now, one of the health problems which are very common in smokers is chronic obstructive pulmonary disease (COPD). Admission rates for COPD patients differ as much as 10-fold between European countries, and one of the most important parameters in the pharmacoeconomics of COPD is the hospitalization rate (both observations are from this text). What does this mean? It means that we know that the admission rate for COPD is highly responsive to the treatment regime; well-treated populations have far fewer hospitalizations. 4% of all Polish hospitalizations are due to COPD. If you remove the public sector subsidies, the most likely scenario seems to me to be a poor-outcomes scenario with lots of hospitalizations. Paying for those is likely to be a lot more expensive than treating the COPD pharmacologically in the community. And if smokers aren’t going to be paying for it, someone else will have to. If you both deny them health insurance and refuse them treatment when they cannot pay, they may of course just die, but in most cost-assessment models that’s a high-cost outcome, not a low-cost outcome (e.g. due to lost work-life productivity; half of people with COPD are of working age, see the text referred to above). This is one example where the ‘more fair’ option might lead to higher costs, rather than lower costs.
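A back-of-the-envelope version of the COPD argument, with entirely invented numbers (the only input taken from the text above is the idea that admission rates are highly treatment-sensitive, with something like a 10-fold spread): even though community pharmacological treatment has a per-patient cost, the hospitalization-heavy no-coverage scenario can easily be the more expensive one.

```python
# Hypothetical comparison: treat a COPD cohort pharmacologically in the
# community vs. deny coverage and pay for the resulting hospitalizations.
# Every number here is invented for illustration only (arbitrary units).

N = 1000                     # cohort of COPD patients
drug_cost = 1                # annual community treatment cost per patient
admission_cost = 8           # cost per hospital admission

admissions_treated = 50      # ~0.05 admissions per patient-year in the cohort
admissions_untreated = 500   # ~0.5 per patient-year: a 10-fold difference

covered = N * drug_cost + admissions_treated * admission_cost
uncovered = admissions_untreated * admission_cost

print(covered)    # 1400
print(uncovered)  # 4000 -- 'saving' on drugs nearly triples the bill
```

Whether the real numbers land this way is an empirical question, but the structure of the argument is just this comparison.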
Some people might still consider such an outcome desirable, it depends on the maximand of interest, but such outcomes are worth considering when assessing the desirability of different systems.
A broadly similar dynamic, in the context of post-diagnosis behaviour and links to complications and costs, may be present in the context of type 2 diabetes. I know much more about diabetes than I do about respirology, but certainly in the case of diabetes this is a potentially really big problem. Diabetics who are poorly regulated tend to die a lot sooner than other people, they develop horrible complications, they stop being able to work, etc. etc. Some of those costs you can ignore if you’re willing to ‘let them die in the streets’ (as the expression goes), but a lot of those costs are indirect costs due to lower productivity, and those costs aren’t going anywhere, regardless of who may or may not be paying the medical bills of these people. Even if they have become sick due to a high-risk behaviour of their own choosing, their health care costs post-diagnosis will still be highly dependent upon their future medical care and future health insurance coverage. Denying them coverage for all diabetes-related costs post-diagnosis may, paradoxical though it may seem to some, not be the cost-minimizing option.
I already talked about information asymmetries. Another problematic aspect linked to information management also presents itself in a model of this nature (‘deny all diabetes-related coverage to known diabetics’); people who suspect they might have type 2 diabetes may choose not to disclose this to a health care provider because of the insurance aspect (denial-of-coverage problems). Insurance providers can of course (and will try to) counter this with things like mandatory screening protocols, but this is expensive, and even assuming they are successful you not only potentially neglect to minimize the costs of the high-cost individuals in the population (the known diabetics, who might be cheaper long-term if they had some coverage), you also price a lot of non-diabetics out of the market (because premiums went up to pay for the screening). And some of those non-diabetics are diabetics-to-be, who may get a delayed diagnosis as a result, with an associated higher risk of (expensive) complications. Again, as in the smoking context, if the private insurer does not cover the high-cost outcomes someone else will have to, and the blind diabetic in a wheelchair is not likely to be able to pay for his dialysis himself.
More information may in some situations lead to a breakdown in insurance markets. This is particularly relevant in the context of genetics and genetic tests. If you have full information, or close to it, the problem to some extent stops being an insurance problem and instead becomes a problem of whether, and to what extent, you want to explicitly compensate people for having been dealt a bad hand by nature. To put it in very general terms, insurance is a better framework for diseases which can in principle be cured than it is for chronic conditions where future outlays are known with a great level of certainty; the latter type of disease tends to be difficult to handle in an insurance context.
People who have one disease may develop other diseases as time progresses, and having disease X may increase or decrease the risk of disease Y. People study such disease variability patterns, and have done so for years, but there’s still a lot of stuff we don’t know – here’s a recent post on these topics. Such patterns are interesting for multiple reasons. One major motivation for studying these things is that ‘different’ diseases may have common mechanisms, and the identification of these mechanisms may lead to new treatment options. A completely different motivation for studying these things relates rather to the kind of stuff covered in this post, where you instead wonder about economic aspects; for example, if the smoker stops smoking he may gain weight and eventually develop type 2 diabetes instead of developing some smoking-related condition. Is this outcome better or worse than the other? It’s important to keep in mind when evaluating changes in compensation schedules/insurance structures that diseases are not independent, and this is a problem regardless of whether you’re interested in total costs or ‘just’ direct outlays. Say you’re ‘only’ worried about outlays and you are trying to figure out if it is a good idea to deny coverage to smokers, and you know that ex-smokers are likely to gain weight and have an increased risk of type 2 diabetes. Then the relevant change in cost is not the money you save on smoking-related illness; it’s the cost change you arrive at when, after accounting for those savings, you also account for the increased cost of treating type 2 diabetes. Disease interdependencies are probably as complex as risk factor interdependencies – the two phenomena to some extent represent the same basic phenomenon – so this makes true cost evaluation even harder than it already was.
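The net-versus-gross point can again be sketched with a toy calculation. All probabilities and costs below are made up; the structure is simply that the gross saving has to be corrected for the expected cost of the substitute disease.

```python
# Toy calculation of the disease-interdependency point: the relevant
# figure is the *net* cost change, not the gross savings.
# All numbers are hypothetical (arbitrary units).

smoking_savings = 50.0       # gross outlays saved on smoking-related illness
p_gains_weight = 0.5         # hypothetical share of ex-smokers who gain weight
p_develops_t2d = 0.25        # of those, hypothetical share developing type 2 diabetes
t2d_cost = 80.0              # hypothetical expected outlay per new type 2 diabetic

# Expected added cost per ex-smoker from the substitute disease:
added_diabetes_cost = p_gains_weight * p_develops_t2d * t2d_cost

net_savings = smoking_savings - added_diabetes_cost
print(added_diabetes_cost)  # 10.0
print(net_savings)          # 40.0 -- less than the gross figure of 50
```

With enough disease interdependencies, each with its own conditional probabilities, this correction term becomes correspondingly harder to estimate, which is the point about true cost evaluation being hard.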
Not all relevant costs at the societal level are of course medical costs; if people live longer, and they rely partly on a pension scheme to which they are no longer contributing, that cost is also relevant.
If a group of people who live longer cost more than a group of people who do not live as long, and you need to cover the associated shortfall, then – as we concluded in the beginning – there are really only two ways to handle this: Make them pay more than the people who do not live as long, or make the people who do not live as long pay more to cover the shortfall. Another way to look at this is that in this situation you can either tax people ‘for not living long enough’, or you can tax people for ‘not dying at the appropriate time’. On the other hand (?), if a group of people who die early turns out to be the higher-cost group in the relevant comparison (perhaps because they have shorter working lives and so pay into the system for a shorter amount of time), then you can deal with this problem by… either taxing them for ‘not living long enough’ or by punishing the people who live long lives for ‘not dying at the appropriate time’. No, of course it doesn’t matter which group is high cost, the solution mechanism is the same in both cases – make one of the groups pay more. And every time you tweak things you change the incentives of various people, and implicit effects like these hide somewhere in the background.
(Some of the stuff below started out as comments made during a skype conversation with a friend. I added some other unrelated ideas as well. Most of it deals with the job interview setting, but there’s a little bit of other personal stuff at the bottom as well. I don’t really write posts like these anymore and I was strongly considering not posting this, so if you think the post contains some valuable insights you’d probably be well-advised to save it somewhere else; I can’t guarantee that I won’t change my mind about the post later on and delete it when I realize it’s the sort of crap I shouldn’t blog.)
I consider job-interviewing to be a skill that I at some point hopefully not too far into the future will have to try to acquire. Like in other areas of life I’ll probably try to acquire that skill through reading stuff about it – it’s what I do. But it’s probably worth writing down a few observations I’ve already made along the way. It’s my belief that the things that decide whether or not a given person lands a job often are at least somewhat unrelated to the qualifications of said individual, and it probably makes sense to try to optimize along such variables as well. This is hard to do if you’ve not given it some thought. Saying someone got the job because there was a good chemistry between the interviewer and the interviewee may be correct but it is not a very informative statement, and usually some variables go into that equation which can at least be tweaked a little in the right direction.
I’ve occasionally talked to my brother about classic Fermi problems and how to go about answering such questions, which is one angle (some employers do pose such questions during an interview). However a probably much more important angle is the open-ended question. Any semi-competent interviewer will probably make use of these during an interview, because they have the potential to give you a lot of information. This is because the potential variation in response strategies is much higher here than in other contexts; people may vary a lot in how many words they use (‘Not enough information (to answer)’ vs ‘a 10 minute lecture on how you saved the lives of four kittens on November 13, 1999, and because of this – well, also partly due to Marjorie’s accident of course – decided to help out at the local homeless shelter…’), which words they use, how many variables they include in their response, which aspects they emphasize and which factors they exclude/overlook (e.g. intellectual vs social/emotional aspects), and so on and so forth. Interviewers ask such questions at least in part in order to get people to tell stuff about themselves which they might not otherwise have told them. When answering a question like that, one should probably try to keep in mind both why they ask (the answer to the question as such is not that important – which things they may be interested in learning about you is what’s important) and how you’d prefer to present yourself to them (how honest are you going to be in terms of signalling to them which type of person you are, and which types of variables would it be optimal for you to signal that you’d include in a given decision-making process?). As always in these contexts, the response strategy will to some extent imply a tradeoff between increasing the likelihood of getting the job and increasing the risk of getting a job you don’t want.
I think a common theme in the approach I at this point assume makes sense in the interview context is that you do not in general want to memorize answers to specific questions. This is certainly not the way to handle Fermi problems, and I don’t think it’s a good approach to many other types of questions either. I don’t think the ‘memory strategy’ makes much sense except in so far as it relates to very specific questions which you know will come up during the conversation, and which you know you’ll need to have a good answer for in order to land the job. However in general it probably makes more sense to have some idea which personality traits and behavioural dispositions you’re going to emphasize when talking about yourself (and the sort of work you can do for the employer), and which traits and dispositions it on the other hand would be optimal for you to neglect to tell them about. You probably want to along the way give some thought as to how perceived social signals from them about what they’re looking for should change your response strategies, if at all. Having given such topics some thought beforehand should make the social interaction more natural and make e.g. various ‘evasive maneuvers’ less obvious. A potentially important note is that response relevance (are you answering exactly the question they’re asking you, or are you perhaps answering a slightly different question which you would prefer to answer?) is not necessarily a variable you should always aim to maximize; the importance of this variable will depend upon the question and upon the preferences of the interviewer, and it is likely that you’ll quickly learn how much leeway you’re given in this respect.
All interviews will from a certain point of view contain a lot of elements which are included at least in part in order to make people slip and indicate that they’re not the right person for the job – if 7 people are interviewed and one person gets hired, the interviewer needs to justify why s/he didn’t hire any of the 6 others. Employing strategies such as trying to make people relax and feel comfortable is often effective in terms of squeezing relevant information out of the interviewees, because such strategies tend to increase potential behavioural variation among interviewees; people behave more alike in environmentally-induced high-stress situations than they do in relaxed social environments (see e.g. Funder), and if anything an interviewer wants to maximize behavioural variation (the less important the environmental confound is, the more behavioural variation is displayed during each encounter, and the fewer rounds of interviews will be needed to decide upon an optimal candidate). Feeling comfortable during an interview probably should not be considered a state to be avoided as such, as awkward encounters are unlikely to lead anywhere, but it should be kept in mind that there are potential negative behavioural effects associated with feeling ‘too’ comfortable. Extensive knowledge about which sort of social strategies interviewers apply during the interview should not (if you decide to try to obtain such knowledge in order to increase the likelihood of getting hired) make you more overtly cautious or mistrustful, as these are not traits you will want to display too openly (unless you’re applying for a job where such traits may be considered a plus).
A better idea may be to signal that you’re comfortable and relaxed, whether or not you actually are comfortable and relaxed – this seems in general to be a much smarter move than would be signalling that ‘you know what they’re trying to do’; the former will, if you do it successfully, both signal confidence and perhaps also make the interviewer believe your behavioural input is more ‘valuable’ (to them) than it may actually be in reality, whereas the latter may put you into a very different box. I have in the past perhaps had a tendency to think of displays of meta-level thinking as a positive factor in these contexts; one example of engaging in this type of behaviour could be to signal that you know some stuff about which traits and behavioural dispositions the employer is likely to consider desirable in an applicant. I’m no longer at all sure such displays are a good idea; there are certainly ways to do these things which are better than others (‘making such comments jokingly and in a light-hearted manner may serve to display both confidence as well as intelligence’). In general displaying and drawing attention to the fact that you’re familiar with mechanisms applied by interviewers and that you’re trying to take them explicitly into account when answering questions may be a bad idea, as it may make your responses less trustworthy. Divulging explicit aspects of your response strategy may not be a good idea.
One thing to remember in the context of the information setting is that the interviewers know next to nothing about you (aside from what they may have learned from your job application and a quick google) and that any variable you have not told them about is a variable they will not take into account when deciding whether or not you should get the job. They’ll ask questions designed to figure out all the relevant information, but sometimes identifying the relevant information is not an easy task, and it may be worth keeping in mind that you may need to help them along in that respect. Asking the interviewer questions along the way may be a good idea (if that is ‘permitted’ in the setting in question), in that it may help you get the interviewer to tell you something about herself. Information like that is power because it may help you identify things which you have in common with said individual; the more aspects you have in common, and the more significant these aspects are to the self-perception of the person with whom you’re interacting, the more likely you are to be liked by the interviewer (and the more likely you may be to get the job). Information provided by answers to such questions may also enable you to better gauge which answers they’re looking for, enabling you to potentially switch self-presentation strategies as needed. Even if the setting discourages asking questions, the subtextual information provided by e.g. the type of questions asked by the interviewer may give valuable information that can be applied in a similar manner.
Optimized non-verbal behavioural interaction patterns (eye contact, open body language, etc.), as well as formulation of specific behavioural heuristics derived from the above observations to be applied in the interview setting, are things I’ll have to have a look at later. I should probably also try to at least get some idea about just how ‘normal’ I’ll want to appear to a future employer. Self-presentation strategies, reframing techniques, and perhaps even social inputs from others which might be relevant in the interview setting are potential things to look into later as well. Just like in the dating context, the goal of holding and projecting accurate self-perceptions can be problematic in this context, which is something to have in mind; in this particular context it’s taken as a given that you’ll try to mostly say nice things about yourself and present yourself in the best possible light, and if you don’t do that it may well make you look bad.
I have talked about the Mensa trip I went to this weekend before, so I guess I should add a few remarks about that here – I wrote an account of how it went and how I felt shortly after I’d returned home because I felt a need to do that, but I see no reason to share that stuff here. Instead I’ll keep it brief: It was not very much fun in general, but it wasn’t all bad -> Conclusion: I’m glad I decided to go because of the ‘get outside your comfort zone, try new stuff, learn stuff about yourself’-aspects, but at this point I don’t think I’ll repeat the experience anytime soon. Despite not being all that great it was not a particularly disappointing experience, as I had rather low expectations from the outset. Interestingly I only recently realized that I may have initially ‘underestimated’ the value of some social feedback I got during the event; a couple of people there expressed a desire to interact with me at a later point in time (a later specific point in time – it was not ‘a general notion’ but a specific activity they had in mind). That activity is incidentally also placed well outside my comfort zone, but most social activities are anyway, and the social angle on offer there is certainly very different from the ones to which I currently have access. I am actually seriously considering participating in that activity as well, if for no other reason then because it’s been a very long time since someone has approached me socially in this specific way. I sometimes forget that it’s actually nice to feel that other people have a desire to interact with you socially.
Before I start out, a few random remarks about other matters: I ran 43 km last week; I’m in excellent shape at this point. I had a doctor’s appointment Tuesday where I was told that my HbA1c was 6.9%, using the DCCT HbA1c metric. The numbers look so good that the nurse decided against ordering a lipid panel for the next appointment, even though this is something you normally get done every year; she thought it would not be worth it and I agreed. When viewed from one angle I’m in very good health, yet from another angle my health is so poor that I’d die without access to my medication. Oh well…
So anyway the reason why I wrote this post is that I have been thinking a little bit on and off about other people’s mental maps – that is to say, how they perceive the world. So for example, do other people see colour the same way I do? Given the existence of colourblind individuals we already know the answer to that one; at least some people perceive light in a different manner. Blind people would be another obvious example; they don’t perceive light at all (or they are unable to interpret the light waves they do perceive). Of course there is also some variation when it comes to how people’s eyes refract light – some people suffer from myopia, some from presbyopia, etc. – so when other people see stuff, it’s far from obvious why the default assumption should be that they’re seeing the same thing you are. And why for that matter limit the analysis to humans – how do other living creatures perceive the world? As Dawkins points out in The Ancestor’s Tale: “mammals in general probably have the poorest colour vision among vertebrates. Most mammals see colour, if at all, only as well as a colourblind man.” How differently they must perceive the world! When I was a child I remember asking my father if (and if so, how?) the cat actually knew I was a living creature like itself – if/how it could deduce that the huge lump of cells which was much bigger than it was and which was moving around in various ways was actually one living thing, rather than a big ball of disjoint materials, a huge ball of fur, or perhaps something else. I remember also thinking about whether it could tell I was me, rather than some other human walking around there, and I tried to come up with a way to test this, without luck.
I realized it could tell I was a living creature, and I remember finding the thought that it actually could do these things fascinating; it was only later I realized that precisely these sorts of abilities – like, say, the ability to tell the difference between one bunch of living cells and another, and the ability to tell the difference between a bunch of cells and other stuff which is not a collection of cells – are extremely useful for living creatures, and so it should not be surprising that the cat had evolved senses which could help it figure out this kind of stuff. (Though perhaps some people would say that this observation should, if anything, only increase my fascination – I’m not sure I disagree, I do try to be amazed…).
But vision is but one aspect of our mental map of what the world is like. What about stuff like sounds? Do other people hear and interpret soundwaves the same way you do? I’ve touched upon that one before. In general they probably don’t, at least not completely. There’s a lot of variation – some people are deaf, some people don’t hear very well, some people have excellent hearing; and some people are old and hear high-frequency sounds much worse than you do; for example my grandmother’s hearing is so bad that without a hearing aid she probably can’t hear birds singing at all. A specific example most people are probably also familiar with is the way other people hear your voice vs the way you believe you sound when you talk – most people have, I assume, experienced hearing recordings of their own voices and then been surprised by how they actually sound. Again the variation in how different organisms approach these things only increases when you include animals in the analysis.
One notion unrelated to sensory stuff which sometimes surprises me is how other people can actually think about you and talk about you when you’re not around. It has always seemed somewhat weird to me that people might do that. Part of why I feel that this is strange or weird is presumably that I don’t really ever think much about this – if I don’t think about it, they can’t be doing this stuff, right? (Wrong). Another factor is this: Why would anyone waste time talking about me, when there are so many other, more interesting, people or subjects around that they might talk about? In a way it’s a mental double standard of sorts to consider that weird, and I’m aware of it – I know that I’ll sometimes talk about other people I know when engaged in conversation with others, so the idea that you can talk about other people that way is not unfamiliar to me, and such behaviour is not unnatural and I do engage in it. So it’s not that I’m completely in the dark as to why people do it. But the thought that other people are doing this, that I might be one of the people they talk about – that’s somehow a very strange notion to me. I am aware that for some people this is not a strange notion at all, but rather an aspect of human social interaction that they give a lot of thought; and I find this interesting. Some people will worry a lot about what others say about them when they’re not around, whether they perhaps recently did something which might lead others to say bad things about them later, how to minimize this risk, etc.
Some people are probably much more prone to this kind of thinking than are others; I assume there are gender differences early on (‘teenage boys threaten or beat up their enemies, teenage girls slander them..’), and cost-benefit aspects are also relevant to include here given the underlying coalition-forming and -building aspects – if it’s somehow important that the other person likes you for some reason, you’ll presumably worry more about this stuff and be more likely to engage in this type of thinking than you otherwise would. On a related note we think more about the people we care about than we do about the people we don’t, and so we’re probably also more likely to talk about the people we care about when engaging in conversations with others.
How you look matters when it comes to how other people behave towards you – for some people it matters a lot, for others it matters less. I find this interesting, that some people give this aspect much more attention than I do, because I so very rarely spare a thought for how I happen to look at any given point in time. I don’t have to look at myself much, so I don’t really see why I should care how I look, except to the extent that other people care – and I’m aware part of the reason why I do not care is that I do not have much of a reason to care at this point; I rarely interact with other people in the real world, and I never interact with the people who might be the most interested in the way I look – viz. potential romantic partners. Incidentally it seems a reasonable assumption to me that when it comes to this aspect of people’s mental maps – romantic stuff – people who are in relationships think differently about other people in their social spheres than do people who are not. But I’ve never really given it much thought. It seems to be a commonly voiced complaint that males without a romantic partner are highly likely to approach females with this particular type of framework in mind – ‘a woman is a potential sexual partner until proven otherwise’ – whereas women are more likely (it is claimed) to perceive a male using a different framework: is he a potential friend/ally/…? Anyway, looks are but one of many aspects; people’s mental maps are so different from each other that it’s presumably a very common experience to have people judging you based on criteria you consider to be irrelevant and do not give a moment’s thought. Just as it’s presumably a common experience to have people judging you harshly for the behaviours you consider to be amongst your most virtuous.
A thing that should make it easier to conceptualize how different the mental map of another individual may be is to try to remember that the other person with whom you’re interacting is always the main character in his or her own story.
Last week I visited my brothers and my parents, and I also interacted with a few others along the way. Overall the ‘short vacation’ was an interesting experience. I left my place Tuesday afternoon and I was back home in Aarhus Monday afternoon.
I stayed at my little brother’s place the first few days. He’s a student like me, lives in Copenhagen with his girlfriend. He was at work during the days I stayed there and came home in the late afternoon – not optimal, but that was the way it was going to be and I was happy to be allowed to stay with him for a few days. When I arrived the first evening he was relaxing playing a new console game he’d gotten – I asked him and I think the name of the game was Skyrim, but I’m not completely sure (anyway that’s not important). He hasn’t played a game like that for a while; he mentioned to me that he’d had problems controlling the amount of time these things take out of his life, and so he’s been trying to avoid them – so maybe the day was badly chosen and unrepresentative, but anyway that was what he was doing that particular evening. The second evening the two of us visited my big brother – my little brother’s girlfriend was having fun with her sisters elsewhere – and we talked, had dinner, observed the little guy. Thursday evening I came ‘home’ late on account of having visited a friend in Copenhagen for most of the afternoon and part of the evening – back at my brother’s place it was an evening with TV; there was a programme about some policemen and the kind of stuff they do when they’re at work. Part of the programme focused on an area where a lot of people were partying, and they spent some time covering that stuff. I decided to give it a bit of my attention for a little while, at least in part on account of it being so very different from what I usually do in the evenings (and you’re supposed to do different things from what you normally do when you’re on vacation, or so I’ve been told. I wasn’t completely successful, but I did try.). I left for my parents’ place Friday morning and stayed with them the rest of the time – they had friends from Wales visiting and I was curious to know how they were, what they were like, etc.
I have seen them before a few times, but it’s a long time ago.
During the evening we visited my big brother one subject that came up was George Martin’s books – if I remember correctly, my big brother started the first one around Christmas and he was roughly half way through by now. My little brother had also started out on one of those books around Christmas, but the one he’d started was the second book, on account of him having already read the first one – and like my big brother, he was also roughly half way through the book at this point (he’d seen the series though, so he knew what was coming…). I read the first four books roughly within a month, and I completed the second half of the fourth book while I was visiting my little brother – I read more than they do. Visits like these make it easier for me to get an impression of what kind of stuff they might be doing instead.
Thursday evening I had my computer on while the TV programme was on, but I did follow some of the stuff that happened on the (TV) screen. Along the way I asked my brother some questions about the programme because I assumed, correctly, that he knew more about the context than I did; he did, in more ways than one. He’d watched the programme before, but aside from that there was also the fact that part of the programme was set somewhere both he and his girlfriend had been while they were younger: an area in Aalborg with a lot of bars, where young people go to get drunk and have fun. The worlds of my brothers are different from my world in some key aspects, and the world on display in the programme was in a way a good illustration of how we’ve had different experiences when we were younger. I could never have enjoyed spending time there, where the programme was set – people seemed to be having fun in a setting where I’d be running for my life (loud music and noise everywhere; people everywhere – no, drunk people everywhere; some of the people you could see were in all likelihood under the influence of stuff besides alcohol). The behaviours of some of the people in the programme were really hard for me to understand – some of the people involved were repeatedly getting into physical fights with each other about ridiculous stuff (…one guy says something, the other guy responds, and one minute later they’re at each other’s throats – some of them seemed to have no self-control at all); one guy told the people who were interviewing him that he’d been convicted of a violent crime ‘three or four times’ (or was it ‘two or three’?) – he seemed to not even be sure how many times he’d committed violent crime and been punished for it by the authorities.
He explained that he didn’t back down when people threatened him, like some people do – he wasn’t a chicken, and he was very focused on getting that point across to the people who interviewed him (…while I was thinking: ‘How old are you again? 5?’). I tried to imagine what that guy might be doing for a living, and at the same time I also tried to imagine younger versions of my brothers in those bars some years ago, drinking and having fun like some of the people in the background whom the police did not see any need to talk to. It was quite hard – were they once ‘like that’? Given that my little brother still sometimes goes to parties with his friends and gets drunk with them, I assume ‘this kind of stuff’ (from my perspective) is not a part of his life he’s completely given up yet; this is an interesting thought.
The Welsh guy visiting my parents seemed to enjoy spending an hour or two of his afternoon watching people biking in France. He made it clear that he generally likes watching sports on television, and mentioned that he was a little bit annoyed by the fact that he would be unable to watch a specific rugby match due to the timing of their vacation (he said it was no big deal, but you don’t bring up that kind of stuff in a discussion in the first place if you don’t care at all, so…). Given the kind of work he does (sales), he normally drives around 300 km per day on average. They were nice people, but they were very different from me.
What is a normal day like for most people? It’s different.
Here are 5 statements:
“You have a nice place.”
“You’re a bit lazy, and I’m sure you’d have gotten more out of the latest lecture if you’d read the material more carefully beforehand.”
“You have a fantastic episodic memory.”
“I love that you actually read these kinds of things…”
“The place would have looked less messy if you’d dusted a bit before we arrived.”
Yesterday I was told 3 of those things. One is a direct quote, the other two are English translations of what was said in Danish. I don’t think it takes a lot of work for you to realize which of the above statements I ‘made up’.
There’s a lot of stuff you can’t say. And a lot of stuff you’re expected to say. And there’s a lot of stuff that doesn’t go into either of those categories.
I assume that saying nice things to others will most often make them think you’re more likely to be a nice person, because saying nice things is certainly something most people would assume nice people are more likely to do (doing nice things is a stronger signal than saying nice things, but saying nice things provides many psychological benefits as well).
Providing constructive criticism will often be a much riskier thing to do than saying something nice, even if that criticism includes potentially much more useful information. This is, among other things, because the more potentially useful the criticism provided is, the more likely the other party is to respond emotionally, rather than rationally, to the criticism in question. So people are unlikely to run the risk of providing useful constructive criticism to another individual before they know the other party well (…and presumably have said a lot of nice things to them). Granted, someone who knows the other individual well is also more likely to be able to provide constructive criticism, so this dynamic is not without benefits (a higher signal-to-noise ratio), but the total amount of constructive criticism supplied would surely be much higher if it was costless to provide it to strangers. One big problem is that it’s hard to credibly commit to not taking constructive criticism personally and responding emotionally.
At this point it seems to me that most people who interact with me regularly are being nice to me and mostly say nice things to me. I find it interesting that I rarely explicitly acknowledge that this fact may not necessarily have anything to do with me and my attributes, and that people may say nice things simply because of how they believe such statements reflect on themselves (‘I’m the kind of person who tells people they have a nice place. That’s what nice people say – so I must be a nice person.’). Also, communication strategies may be implicit and not subject to close scrutiny by the people employing them – indeed it may be optimal not to subject your communication strategies to close scrutiny, as an implicit approach to these matters makes it harder to evaluate e.g. the level of sincerity displayed (and thus makes you more likely to be able to claim at the very least plausible deniability when you’re not being perfectly honest). Different perceptions of an individual’s status, attributes, etc., may make some sincere nice statements from one individual to another seem insincere to the receiver (making a (negative) emotional response more likely).
Maybe a good way of thinking about this stuff is in terms of a binary social (verbal) feedback variable, which may be either ‘nice’ or ‘critical’, and then making an analogy to consumption vs investment. Nice things being said have consumption value; we like when others say nice things about us, and we derive pleasure from that. Criticism has investment characteristics; it’s initially costly (it hurts to be told you’re lazy), but it may have large positive effects in the long run if potential improvement strategies are addressed. Most income goes to consumption – we’re mostly told nice things. If consumption is very low (not enough social validation from peers), it may be better for an individual to lower income than to invest the marginal unit of income; even potentially very useful criticism may not be very welcome when you feel socially rejected by others. Indeed you’ll only be willing to undertake an investment (accept critical remarks) once your consumption is higher than some specific baseline level (people are required to say a lot of nice things to others before they’re allowed to say less nice things to them without repercussions).
I don’t know. I like when people say nice things to me, so I’m certainly not telling anybody to stop doing that. But social stuff is confusing when you start to think about it.
On a related note – yesterday three people said something nice to me. Yesterday was a good day.
A few recent examples:
i. I played Citadels with my little brother this Christmas. I spotted two obvious instances of poor modelling which occurred during the game.
The game is complex and I won’t go over all the rules here – it should be noted that the game’s complexity is probably part of why the errors described below were made in the first place. But anyway, we were in a situation where my brother had picked a specific card. Having picked that card he had to try to guess which card I had in my hand – if he guessed correctly, I’d lose my turn and the income that turn would generate (which would benefit him and harm me, making him more likely to win the game). There were two obvious candidates; one card generating a potential income of 2 and another card generating a potential income of 5. He knew I’d taken one of these cards but not which of them I’d picked – if I randomized my draw completely there’d thus be a 50% chance for him to pick the right card. The situation took place during one ’round’ (subgame) of the game, and both of us knew that this would not be the last round. But we did not know how many more rounds were to be played – a conservative estimate would be at least 4 or 5. Whether it would make sense to consider the round one of several in a semi-‘pure’ repeated game, and which type of repeated game we’re talking about, depended to some extent on which cards would be picked in future rounds (as I mentioned, the game is complicated – the fundamentals of the stage game can change during gameplay, e.g. I might end up in my brother’s position, i.e. as the player who should guess which card the other player had taken, in a future round); but it would make little sense to consider it a single-shot game.
Now the first thing to note here is that if you consider it a repeated game, it probably doesn’t make a lot of sense not to at least consider mixing strategies. You could probably make an even stronger argument: consider that if I play ‘2’ (the card giving me an income of 2) with a probability of 100%, my brother would probably pick up on that relatively fast and pick that card every round, and I’d end up with an income of zero – and if I always played ‘5’, he’d always pick 5. So the second player, the one picking the card to be guessed, has to consider adding some uncertainty to the table or he’s probably going to be in trouble. Now let’s think about how one might best mix strategies in this situation. An important theoretical aspect here is that while it’s certainly a finite game, the length of the game is still unknown, or at least uncertain, to the players (though they do have some idea how long it’ll take to finish the game). This uncertainty adds complexity, and even though only relatively few rounds of the game are left, the game is still much too complex to be solvable by backwards induction by the players while they play it, even if such a solution might exist. Incidentally, in the specific subgame in question I evaluated the costs of reversing the roles of the players (so that I’d get to be the one guessing, which would be a permissible change to the stage game given a specific subgame strategy constellation) to be too high to implement – but my brother didn’t know that.
The first modelling error here was made by me when I was deciding which card to pick. I did pure randomization when I picked my card – basically I shuffled the cards and picked one of the two cards at random. Basically this was just me being stupid, because this is obviously not the best mixed strategy (it’s only optimal in the case where the expected incomes derived from the two cards are equal). One way to think about this is that a 50% likelihood of picking either card means the drawn card has an expected value of 0.5 × 2 + 0.5 × 5 = 3.5, and since a 50/50-mixing opponent catches you half the time, your expected income is half of that, 1.75 per round – and foolishly I’d considered only that strategic response to my mixing strategy. The problem is that of course the opponent needn’t mix at all! A mixing strategy on his part is obviously dominated by the pure strategy of always picking ‘5’ – if he always picks ‘5’, I end up with an average income of only 1 (I get an income of 2 every second round). I realized this 5 seconds after I’d picked my card.
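These payoffs are easy to verify numerically (a minimal sketch in Python; the function name is mine – the only inputs are the card incomes 2 and 5 and the rule that a correctly guessed card yields nothing). A 50/50 mix yields 1.75 per round against a 50/50 guesser (half the expected card value of 3.5, since you’re caught half the time) and only 1 against a guesser who always names ‘5’:

```python
# A quick check of the payoff arithmetic in this subgame (a sketch; the
# function name is mine, the card incomes 2 and 5 are the ones described
# above, and a correctly guessed card yields income 0).

def expected_income(p2, guess2):
    """Hider picks the '2' card with probability p2; the guesser guesses
    '2' with probability guess2. The hider earns a card's income only
    when the guesser names the wrong card."""
    return p2 * (1 - guess2) * 2 + (1 - p2) * guess2 * 5

print(expected_income(0.5, 0.5))  # 50/50 vs 50/50 mixing -> 1.75 per round
print(expected_income(0.5, 0.0))  # 50/50 vs 'always guess 5' -> 1.0 per round
```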
This is where we get to the second modelling error. After that specific round had been played – where he’d picked 5 and I’d gotten lucky and randomly picked 2 (so the inferior strategy did not cost me anything in this specific case) – my little brother said that ‘of course he’d picked 5, it was the dominant strategy’. I thought that this was obviously true in the specific case of a 50/50 mixing strategy on my part, but that it would not be an optimal response to other mixing strategies with a low probability of playing ‘5’ (nor would it be an optimal response to the pure strategy of 2). I assumed we’d play at least four more rounds, and in that case it would probably be optimal to go with a mixing strategy of roughly 30/70% or something along those lines (i.e. one ‘5’ and three ‘2’s in the rounds to come) – I figured that 5 is 2.5 times as much as 2, so I should play ‘2’ 2.5 times as often as ‘5’ in equilibrium; i.e. 2.5 ‘2’s for every ‘5’, meaning I should play ‘2’ in 2.5 out of every 3.5 rounds, which would be about 70% of the time. I assumed my little brother would mix as well in the rounds to come, when I would no longer obviously mix 50/50, and that he’d reach a similar conclusion – that he should pick 5 more often than 2 to minimize my potential income and end up near the (assumed) long-run equilibrium. After the game my little brother made it clear to me that he had not mixed but had played 5 every time, and he stated that he’d picked that strategy because it was ‘the dominant strategy’ and because it would be his best response to any strategy I could come up with. Which it clearly wasn’t.
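For what it’s worth, the exact equilibrium of this stage game, treated as zero-sum, falls out of the indifference conditions (a sketch; the only inputs are the two card incomes from the game as described above – everything else is standard mixed-strategy algebra):

```python
from fractions import Fraction

# Equilibrium of the card-guessing stage game, treated as zero-sum.
# The hider plays '2' with probability p; if the guesser names the hidden
# card the hider earns 0, otherwise the card's income (2 or 5).
#
# The guesser's two pure responses give the hider these expected incomes:
#   guess '2': hider earns 5 only when holding '5'  ->  5 * (1 - p)
#   guess '5': hider earns 2 only when holding '2'  ->  2 * p
# In equilibrium the hider's p makes the guesser indifferent:
#   5 * (1 - p) = 2 * p  ->  p = 5/7
p = Fraction(5, 7)
assert 5 * (1 - p) == 2 * p

# Symmetrically, the guesser guesses '2' with probability q = 2/7 to make
# the hider indifferent: 2 * (1 - q) = 5 * q -> q = 2/7.
q = Fraction(2, 7)
assert 2 * (1 - q) == 5 * q

value = 2 * p  # hider's expected income per round: 10/7, roughly 1.43
print(p, q, value)  # 5/7 2/7 10/7
```

So the exact hider mix is 5/7 ≈ 71% on ‘2’ – close to the rough 70/30 figure – and the guesser’s equilibrium response is indeed to guess ‘5’ most of the time (5/7 of rounds), but not every time.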
ii. I went shopping yesterday. I got to the store and it was full of people. I generally dislike shopping when there are a lot of people around, and I generally avoid this by strategically shopping at times of day when I know not very many people go shopping. I have previously arrived at a store, decided it was too full of people, and postponed my shopping to a later point in time because of that, but yesterday I decided instead to just get it over with fast. When I came back home I remembered that it had been mentioned in the papers that a lot of people are sick with influenza in Aarhus, and so I realized that I’d just exposed myself to a huge health risk considering how many people were in the store. If asked about this type of stuff before I left my home, I’d have said that such a risk would be completely unacceptable to me, because I have exams before long and thus it would be very inconvenient for me to get sick at this point. If I’d included that health risk in my model, I would not have gone shopping yesterday.
I will often avoid taking public transportation when it’s possible for me to do so due to similar health-related reasons – diseases are easily transmitted in such environments. People often do not remember to include risks like these in their mental models. That’s poor modelling.
Even (reasonably) simple card games and everyday decisions about stuff like when and where to go grocery shopping can include models too complex for humans to handle well; our cognitive limitations are easy to ignore if we don’t think about them, but they’re there just the same. Social dynamics are usually a lot more complex to model than the stuff in the post. Sometimes it seems almost unbelievable to me that people somehow make all this stuff work – taking all those decisions they do on an average day, interacting with all those other people along the way… Given how complex the world is and how even very simple things like a card game can cause us all kinds of problems when we start thinking about them, I find this pretty amazing to think about.
“A long-held myth regarding development is that as people age, they all become alike. This view is refuted by the third principle of adult development and aging, which asserts that as people age, they become more different from each other rather than more alike. With increasing age, older adults become a more diverse segment of the population in terms of their physical functioning, psychological performance, and conditions of living. In one often-cited study, researchers examined a large number of studies of aging to compare the amount of variability in older versus younger adults (Nelson & Dannefer, 1992). This research established that the variability, or how differently people responded to the measures, was far greater among older adults. Research continues to underscore the notion that individuals continue to become less alike with age. Such findings suggest that diversity becomes an increasingly prominent theme during the adult years, a point we will continue to focus on throughout this book.
The fact that there are increasing differences among adults as they grow older also ties into the importance of experiences in shaping development. As people go through life, their experiences cause them to diverge from others of the same age in more and more ways. You have made the decision to go to college, while others in your age group may have enlisted for military service. You may meet your future spouse in college, while your best friend remains on the dating scene for years. Upon graduation, some may choose to pursue graduate studies as others enter into the workforce. You may or may not choose to start a family, or have already begun the process. With the passage of time, your differing experiences build upon each other to help mold the person you become. The many possibilities that can stem from the choices you make help to illustrate that the permutations of events in people’s lives are virtually endless. Personal histories move in increasingly idiosyncratic directions with each passing day, year, and decade of life.”
I didn’t post this quote when I first blogged Adult Development and Aging, mainly because I figured the insight was probably important enough to merit a post of its own, but also because I figured that if the authors dealt with this aspect in more detail later on, I’d rather wait until then to handle the specifics. Anyway, it’ll be a while until I get to that stuff, and I find myself thinking about these things now and then these days. I’m mostly thinking about how this stuff relates to how we form friendships and establish romantic partnerships. As people age it seems to me that they become less likely to meet that ‘someone who’s just right for me’ – and not just because of the work of their romantic rivals. Because of the increasing variation in behaviours, preferences and outcomes, people who are aging perhaps gradually realize that it is strategically optimal for them to become more tolerant, more permissive, and so they implicitly and gradually implement such strategies to increase their chances – but that’s hardly always the case, and to the extent that it is, the process likely involves them making compromises that would perhaps have been unnecessary if the partners in question had met a decade earlier in their lives. (Though I may here underestimate how much work is required to make a relationship last that long.) Path dependence matters a lot when it comes to both friends and relationships. As I’ve underscored before here on the blog, a ‘new’ friend is most often introduced by an ‘old’ friend or acquaintance, and most people rely to a very great extent on their existing social network when they want to make adjustments to it.
Over time people’s social networks become entrenched; it gets harder to find and keep new friends not only because every potential new friend is competing for your attention with the whole set of friends you already have, but also because the potential new friend becomes increasingly less likely to share your interests or preferences over time, at the very least when compared with the people with whom you frequently interact. Interaction affects preferences and behaviours, for friends, family and partners alike.
Though people in general tend to become more different from each other as they age, I tend to believe that cohabiting partners are an exception and that they on the contrary tend to become more alike over time. This is of course because they tend to form similar habits and do similar stuff. Another noteworthy dynamic is the ‘I’ve known you a long time and I’ve invested a lot in this relationship at this point, so it doesn’t matter as much to me that you’re not as compatible as I’d like you to be as it would if we’d only just met’. Of course there’s also (hopefully) the frequent feedback from the partner, making you less likely to stray far from the partner ideal of the other party – such feedback is harder to obtain for people not in a relationship. There’s also the ‘my previous partner/parents/whatever behaved this way (/cheered for the Green team), and so if you don’t behave this way we won’t be compatible’. Politics, religion and similar stuff’s really important, and often people’s opinions about these matters crystallize over time. If crystallization of this kind takes place over time, it will generally harm outsiders (singles) and benefit insiders (couples); the people in romantic relationships become more alike over time and so they’ll feel a closer bond to each other as time goes by, and the aging single will in the absence of a romantic partner often obtain much of the relevant social feedback from other singles, who may not be able to give useful feedback regarding this aspect of life. For example, a single aging man may start to think that his religious or political views cannot possibly matter a great deal to a potential partner, because such things do not matter a great deal to the people with whom he usually interacts.
It should perhaps also be noted that the potential decreased compatibility of the remaining outsiders with the insiders makes the outside option become less attractive to the insiders (making them less likely to break up with their partners).
About a decade ago I had relatively few problems talking to and interacting with my extended family (cousins, uncles). These days it’s a pain for me to do it for any period of time, and I found myself actively avoiding the presence of some of these people this Christmas. To the extent that I did interact with them I was polite and helpful, but I did avoid them and I did not want to spend time with them. I find myself worried about where I’ll end up in another decade if things do not go well. Or is it ‘if things do go well?’
Even though I mostly don’t post personal stuff here anymore, I felt a personal post this week was probably in order. I wrote another one of those earlier this week, but I pulled it quite fast (for many reasons) – we’ll see if I let this one through my implicit filter.
So, people who’ve read along for a while are probably starting to get worried at this point – ‘personal stuff’, that can’t be good… Well, no need to worry. I’ve had a good week. A close friend who needed a place to crash stayed with me for a few days; this is the first time I’ve ever been in such a situation. I can’t speak for my friend, but I had a good time. I value my privacy very highly, and I generally don’t like being around people for extended periods of time. So the fact that I had a good time is, I think, sort of important. I’ve been thinking that there might be things to learn from the experience so I’ve thought a bit about it along the way. An important insight did not occur to me until today, and that insight is what first motivated me to write this post. But I’ll get to that stuff later.
When my friend (let’s call the individual in question ‘X’) asked me, one of my first reactions was to feel flattered. I’m vain, like most people – sue me. Anyway, I realized that I was now in a situation where I had a friend who felt comfortable asking me for a favour like that, and I realized that that felt awesome. Especially as I was able to say yes; it felt awesome being able to say yes. I should perhaps point out that even though X would probably argue – indeed has argued (quoting X: “you’ve never had friends who are idiotic enough to get themselves in a situation in which they’d appreciate help like that”) – that it’s not necessarily a good thing that I now have a friend ‘like that’, I really couldn’t take that argument seriously.
When X asked me I also felt a little bit scared and uncomfortable. Though I should make it clear that most of those thoughts only came later, after I’d said yes. What if it didn’t work out? What if I couldn’t stand spending so much time with X, or vice versa? As mentioned, I’m a very private person, and given the circumstances we’d have to share the same room for a few days – what if that was too much? I really didn’t know if I could handle that; during the last ten years I don’t think I’ve ever been in a situation where I was more or less unable to retreat from other people for any extended period of time if it became too much for me. And what if it became too much for X – what if X couldn’t stand being around me that long? What helped me there, though, was that I knew that X knows at least as much about what’s going on in my life as do my own brothers, and it’s very safe to say that X is personality-wise more like me than anyone in my own family. If I couldn’t even handle a few days in the same room as X, well… As for whether X could handle spending so much time with me, I figured that as long as I at least tried to behave reasonably like the person I’d like to be – which is what I try, to a significant extent, to do on a day-to-day basis anyway (though with varying degrees of success) – it should be okay. So I ended up thinking that it would be fine and that it might even be fun and/or do me some good – the implicitly added social control element making me marginally more likely to do useful and productive stuff while X was around also had to be considered (the Hawthorne effect). Though on the other hand I’d have to add here that this element should not be overemphasized; X knows me quite well, so I knew that I wouldn’t have to put up any kind of elaborate facade in order to behave in what X would consider an ‘acceptable manner’.
If that had not been the case I’d have been a lot more worried about the arrangement, because in that case I’d also have had to worry about significant foreseeable and ‘perceived necessary’ behavioural changes ‘draining me’.
Since I more or less stopped intrinsically caring about grades and how I did in school, I’ve tended to have a bit of a hard time figuring out what I was actually aiming for in life. My brain has tried to convince me that partnership and perhaps children are the sort of things I should aim for, and it has also tried to convince me that I’m not particularly likely to experience that kind of stuff during my life, which is annoying. I’ve long since convinced myself that career stuff is unlikely to be fulfilling on its own. So what else? An interesting notion here is the fact that I’ve ‘traditionally’ been very skeptical about the value of friendships – close friendships were for people who couldn’t find a partner and then tried to fill the void in other ways. I’d think that even long-term friends aren’t actually all that close, and how many of the people who cannot even get/keep a partner manage to find/keep a close, long-term friend anyway? I’ve been skeptical.
Since my period of social isolation ended, to the extent that it has, I’ve so far tended to think of friendships as a way to avoid problems, as a strategy to avoid isolation. It was the main reason why I started out interacting with people again; to avoid problems, to avoid a repeat of the hikikomori experience. It wasn’t that I thought I’d find interesting people to interact with – I’d never had close friends at that point. According to this conceptual approach, friends were perceived to have merely instrumental value – ‘it’s good for you to interact with others, so you should do that from time to time’. And that was it. It no longer is. Friendships can be much, much more than that. My friendship with X is not ‘just’ a ‘friendship to avoid problems’-friendship. My friendship with X is at this point, at least to me, probably closer to an ‘X is awesome, I feel lucky we’ve found each other and now have the opportunity to interact and exchange ideas and views, and I’d feel devastated if I no longer had this’-friendship. I don’t interact with X because I know that ‘it’s good for me’; I do it because I want to, because I enjoy it. Maybe I was in the same situation three months ago and it has just taken this long for my self-awareness to truly catch up with me; it’s surely been a gradual process, but it just hit me today: ‘This friendship is an important part of your life, and you should be very careful not to underestimate how valuable it is.’ At this point I’m really starting to realize that a friendship isn’t ‘just’ anything; establishing and maintaining such a social relationship with another individual can meaningfully be considered one of the major life goals.
In case anyone was wondering, X is a female.
Regarding the “I feel lucky we’ve found each other and now have the opportunity to interact and exchange ideas and views”-part, I’m pretty sure I could say that about a commenter or two here as well. ‘Online friendships’ are different from real-life ones, but sometimes they end up overlapping, and I should probably mention that if any of you feel like you’d like to know me better, and that I’d perhaps like to know you better as well, you’re welcome to reach out in this comment section. I’ve started to use Skype regularly, and it’s (…almost… – you can’t really disregard the time difference) as easy to skype with someone from Denmark as it is to skype with someone who lives on a completely different continent. I’d probably prefer to establish contact with people who’ve commented here before and/or have read along for a while. And please don’t consider it a one-time offer; consider it a standing invitation.
So I thought about this stuff a while ago while I was out for a walk, and I decided back then that I should blog it when I got home. When I did get home I’d forgotten all about it (it was a long walk). Today I was out walking again, and well…
Okay, so let’s assume a job interviewer asks you how you’d feel about working with X, X being the kind of stuff you could be expected to work with in the job function in question. The obvious answer to many people would be ‘I’d feel great about working with X, I’d be very excited to have that opportunity’ or something along those lines. Though ‘it’s what I’ve dreamt of my entire life’ is probably an unwise reply in some situations (desk clerk, bouncer, garbage collector…), in general it seems obvious that it makes a lot of sense to fake interest and excitement in such a situation; this is because such an approach is usually perceived to make you more likely to land the job.
But why is that again? Let’s think a little bit about the signalling aspects here. People who are intrinsically motivated need lower monetary compensation rates to motivate them to do their jobs than do people who are not; they’ll be happy with a lower wage because they like what they do, and if they really like what they do they’re less likely to complain about stuff like e.g. a poor work environment. So if you signal that you’re eager to work with this stuff, you signal that you have a lower reservation wage. This makes you more likely to land the job if you’re perceived to meet the task requirements, but the deceit should in equilibrium affect the employer’s expectations about your productivity – people who have lower reservation wages are all else equal less productive. On the other hand perhaps the reason why you’re eager is that you know a lot about the subject, which means that all else isn’t equal and that your interest might lead to higher productivity on the job or lower training costs. Depending on the specifics there are likely multiple optimal strategies here; and it’s worth having in mind that individual characteristics are highly likely to impact which strategy is optimal for a given individual in a given setting.
Now consider another variable that’s likely to come up in a job interview setting: Ambition. Again people are often implicitly encouraged to fake ambition because it’s perceived in some areas (though far from all) to increase their employment opportunities. If you’re ambitious you’re willing to work harder than the other guy. If you’re ambitious this means you care about the social hierarchy in the organisation, and if you care about that stuff you’ll be more likely to follow the instructions you’re given, which is often a useful ability for an employee to possess. If you’re ambitious you’re probably likely to be willing to do a lot of extra stuff to impress the people above you so that you can rise in the social hierarchy, which corresponds to working harder for a lower level of monetary compensation. On the other hand some employers prefer to limit the competition for the management spots by selecting people who are not ‘too ambitious’ for a given job function. And if a vacancy is created for a job function where it’s unlikely that a satisfactory performance will lead to further advancement in the organisational hierarchy, an employer may prefer an unambitious applicant, as he or she is less likely to become disgruntled by the absence of career advancement opportunities. Ambitious people are incidentally quite likely to be perceived as more aggressive than their unambitious counterparts, which also translates to higher expected wage demands (for the same amount of work).
If you’re perceived to be dishonest about your goals or attributes to a greater extent than is tolerated in such situations this will most likely harm your opportunities greatly, but it’s worth noting that the tolerated level of dishonesty may vary a lot across organisations. Note that organisations always have an incentive to create the illusion that honesty is your best bet at a job interview; that’s because it’s the best bet for the organisation, i.e. the strategy which, if applied by all applicants, would give the organisation the highest potential payoff. This is because if all applicants supply all the decision-relevant information to the organisation, this will make the organisation most likely to be able to pick the best applicant for the job. But here’s the thing: the organisational payoff should at the point where you’re not yet hired by the organisation be irrelevant to you. You don’t care about the organisational payoff at the job interview stage; at this stage you only care about your likelihood of landing the job and the expected pay; withholding information will most frequently be optimal if that information might make you less likely to land the job or lower your expected pay. Please do not assume that just because firms implicitly punish deceit, complete honesty is the best strategy for you – in most settings, it’ll likely be a stochastically dominated strategy. On the other hand if you have to grossly misrepresent who you are in order to land the job, the expected derived utility from landing the job probably isn’t as high as you think it is; the employer is not the only one who should care about whether you’re a good match for the job. The optimal amount of deceit is non-zero, but the risk of getting the wrong job should be weighed against the risk of not getting the job.
When deciding on the optimal level of deceit do recall that the firm may have an incentive to withhold information from you as well, either by lying to you about which types of information are important to them when it comes to whom to hire (in order to stop people from trying to game the system and to weed out dishonest candidates), by misrepresenting the career opportunities associated with the job (if applicants think the job is high-profile and is likely to increase their future job market opportunities, they’ll likely decrease their wage demands because of the human capital investment value of the job), or perhaps by misrepresenting to some extent what you’ll actually be doing when you get the job (bait-and-switch type strategies are likely sometimes optimal, because they can lead to lower wage demands).
Like in romantic settings, displaying a low level of self-confidence is likely sub-optimal here. If you can’t convince yourself you’re the applicant they should pick, this is a great example of the kind of information you should be trying to hide from them. Don’t give the people involved the impression that you’re doing them a favour by showing up to the interview. Most of the people who go to an interview don’t get the job, and from a certain point of view the firm you’re interviewing with is quite likely to simply be wasting your time.
I’ve written a lot of stuff about models on this blog in the past, so some of the stuff I’m writing now I’ve probably covered before. I thought it was worth revisiting the subject anyway.
First off, one way to think about a mental model is to consider it a way of thinking about a problem. This also implies that if there’s a problem of some sort, you can construct a model. And thus, from a certain point of view (…the point of view of mathematicians, economists, engineers, or…), there’s always a model. It can be implicit, it can be explicit – but it’s there somewhere. A model is an explanation, and it’s always possible to come up with an explanation. So when you see a model you don’t like, it’s not very helpful to say that ‘it’s only a model’. What else would it be? And whatever alternative you have in mind is, from a certain point of view, also just a model. If the model presented is an inaccurate representation of the problem at hand, then it’s the inaccuracy-part that should be the subject of criticism, not the model-part.
Most people dislike formal models that are very specific and give very precise estimates. They know instinctively that these models are simplistic and that the real world is much more complicated than the models – so the perceived over-precise estimates may be way off and may even seem downright silly. Skepticism is warranted, surely. But the precision is also a very helpful aspect of such models, because precision allows us to be demonstrably wrong about something. I’d argue that this is also an important part of why such models are disliked by humans. Many people who’ve worked a bit with models have a quite low regard for formal models because they know the assumptions are driving many of the results. They are skeptical and prefer the models in their own minds. Those ‘mind models’ are much less specific, much more flexible and much less likely to actually generate testable hypotheses. It’s not that they are necessarily wrong – it’s more that they’re unlikely to ever be proven wrong. People who’ve not worked with models are also skeptical of models, and their mind models are even less specific and testable than everyone else’s.
Here’s the thing: If you think that it makes good sense to be skeptical of models where assumptions are clearly stated beforehand, where parameters/parameter estimates are generated through a clear and transparent process and where limitations are addressed, then you should be a lot more skeptical of models where these conditions are not met.
Most people prefer vague models because they are more convenient. You’re less likely to be proven wrong; you’re less likely to take a stance that is at odds with the tribe; if the model is general enough it will be able to predict anything, making you think that you’re always right. They’re also often less computationally expensive to formulate.
Here’s one hypothesis from a model: ‘Immigrants from country X are 2,5 times as likely to have a criminal record as are non-immigrants.’
Here’s another hypothesis: ‘Immigrants from country X are more likely to have a criminal record than are non-immigrants.’
Here’s a third hypothesis: ‘Some immigrants from country X have a criminal record.’
Here’s a fourth hypothesis: ‘Some people commit crime.’
Which one of these hypotheses has the greatest information potential, that is the potential to tell us the most about the world? The first one, given that all the other three are also true if that one is. Which one is more likely to be considered correct when evaluated against the evidence? The last one.
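The trade-off between specificity and the likelihood of surviving the evidence can be sketched in a toy simulation. Everything here is invented for illustration: the ‘worlds’ are random draws of two crime rates, and the tolerance on the ‘2,5 times’ claim is arbitrary.

```python
import random

random.seed(0)

# Toy illustration: draw many hypothetical 'worlds', each assigning a share
# of people with a criminal record to two groups. All numbers are invented.
def random_world():
    return {
        "imm": random.uniform(0.0, 0.2),     # immigrants from country X
        "nonimm": random.uniform(0.0, 0.2),  # non-immigrants
    }

# The four hypotheses from the text, from most to least specific.
def h1(w):  # '2.5 times as likely' (up to a small arbitrary tolerance)
    return w["nonimm"] > 0 and abs(w["imm"] / w["nonimm"] - 2.5) < 0.1

def h2(w):  # 'more likely'
    return w["imm"] > w["nonimm"]

def h3(w):  # 'some immigrants have a record'
    return w["imm"] > 0

def h4(w):  # 'some people commit crime'
    return w["imm"] > 0 or w["nonimm"] > 0

worlds = [random_world() for _ in range(100_000)]
shares = {}
for name, h in [("h1", h1), ("h2", h2), ("h3", h3), ("h4", h4)]:
    shares[name] = sum(h(w) for w in worlds) / len(worlds)
    print(f"{name} holds in {shares[name]:.1%} of random worlds")
```

The most specific hypothesis rules out almost every world (and every world in which it holds also satisfies the vaguer three), which is exactly why it carries the most information; the vaguest holds almost everywhere and so tells us nearly nothing.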
From an information processing point of view, having nothing but correct beliefs you are certain about is not a good thing. That’s a sign that your models are very poor and don’t contain a lot of information. If you never seem to be (/realize you’re) wrong, that’s a sign that you’re doing things wrong.
Sometimes the ‘models’ we make use of when evaluating evidence are of the variety: ‘I’d like X to be true (because Y, Z), so obviously X is true.’ Sometimes that’s the model you use when you reject the presented formal model with a beta-estimate of 0,21 and a standard deviation of 0,06. This is worth having in mind.
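For concreteness, here is roughly what such an estimate implies, under my own assumption (not stated above) that the quoted figures are a regression coefficient and its standard error:

```python
# Back-of-the-envelope reading of a reported coefficient. Assumption: the
# figures quoted in the text are a point estimate and its standard error.
beta = 0.21
se = 0.06

z = beta / se                # test statistic for the null hypothesis beta = 0
ci_low = beta - 1.96 * se    # approximate 95% confidence interval
ci_high = beta + 1.96 * se

print(f"z = {z:.2f}")                                    # 3.50
print(f"approx. 95% CI: [{ci_low:.4f}, {ci_high:.4f}]")  # [0.0924, 0.3276]
```

A test statistic this far above conventional critical values means the estimate sits well clear of zero, so rejecting such a model out of hand takes a better argument than ‘it’s only a model’.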
On a related note, of course not all models are about generating hypotheses and testing them – some of them are rather meant to be used to illustrate certain aspects of a problem at hand in a simple and transparent manner. It’s always important to have in mind what the model is trying to achieve. That goes for the ‘mind models’ too. Are you trying to learn new stuff about the world, or are you just trying to be right?
Below are some questions that it can be helpful to revisit every now and then when analyzing beliefs one holds:
Very few ideas we hold are ideas we come up with ourselves. What happens is that someone introduces us to an argument. Later on a counterargument is introduced. Often the timing of these things matter a lot; people are often more likely to pick the first side that’s presented to them, especially if they’re encouraged to invest in it early on. However one is also more likely to remember the last argument one heard than the first one. So it’s certainly worth asking: Who presented the idea to you first? How long ago is it? Did you last hear an argument in favour of your belief or an argument against it? If a belief is introduced to you by someone close to you, like a spouse or (if you’re young?) a parent – or someone you look up to and/or would like to impress – then you’re all else equal more likely to be biased and you should act as if the belief in question is less likely to be correct than it would be if a person you didn’t know had presented the idea to you.
How long have you held the belief? All else equal, you should be more skeptical about beliefs you’ve held for a long time. Beliefs we hold for a long time tend to be or become part of the wallpaper – and beliefs you’re not even aware that you hold may still influence you in various ways. If you don’t remember the answers to some of the previous questions, you shouldn’t just ignore them; a better idea would probably be to become more skeptical.
How confident are you that your belief is right? I don’t believe it’s particularly useful to quantify this kind of stuff in detail, but this is a question one should ask oneself from time to time. Changes in confidence levels are important, as are stationary confidence levels.
Do you consider this belief to be an important part of who you are? Could you imagine being wrong about this? What would being wrong about this belief imply? How many other beliefs you hold are contingent upon this particular belief of yours?
Do other people you know share your point of view? Have they influenced you (not just by introducing you to the idea)? Has belief convergence taken place? Do you know people who do not share your belief? Does the disagreement make you perceive them in a different light – how do you feel about people who do not share your belief? Have you ever felt/do you feel that people who hold different beliefs are ‘less worthy’, that they are ‘stupid’, or perhaps that they ‘don’t understand the issue’?
How much time have you spent thinking about the belief? How much of that time was spent gathering data? Which kind of data? Have you spent enough time and/or seen enough data to even have an opinion about this?
People who openly question your beliefs are much more likely to be useful to you when it comes to obtaining correct beliefs about the world than are people who do not. People who are more detached, who care less about specific beliefs, are also likely to be able to help you – they’re less likely to think of open disagreement as a personal attack or as a signal of tribal disloyalty that ought to be punished. Do you take advantage of this fact? Do you have ways to figure out if your belief is wrong, or whether a different belief might be better? If you do, do you use them optimally – could you use them better, or is it perhaps possible for you to find better ways to test your beliefs than the ones you use now?
Do you somehow stand to benefit from holding the belief you do? If other people held your belief, would that make you look good? Is the belief somehow very convenient?
Who other than you cares about your belief? Is it important? How important is the belief in question when it comes to ‘real world stuff’? Do you care just because you care – or does your stance actually have major real life consequences? Could these be downplayed if you wanted them to be?
We can’t always ask these questions – they take time and effort, and if we had to think about all that stuff every time we were to make a decision we’d all starve to death. But questions such as these should enter the mind from time to time.
A ‘sufficient’/’proper’ degree of skepticism about your own beliefs will incidentally sometimes make you lose an argument you’d otherwise have won. I consider that outcome to be perfectly acceptable, as arguments should not be about winning but about learning new stuff. If you care a lot about whether you win or lose an argument, you’re arguing with the wrong people and/or you’re not arguing in an optimal manner.
I’ve been thinking about the stuff in this post on and off for a long time. I probably shouldn’t post this and I may still change my mind and pull it down later on.
Anyway, to function well in their daily lives, most people deceive themselves to some degree. They tell themselves that their work matters a great deal (/more than it does); that they make a (/much bigger) difference (/than they actually do); that they are smarter and more accomplished than they really are.
The deluded optimist looks for opportunities he wouldn’t have sought, had he been more realistic. And the deluded pessimist misses options he might have had a shot at, had he been more realistic. If we’re thinking only about maximizing opportunities, it seems that systematic overconfidence/optimism is the strictly dominant strategy. At least if we don’t include costs in the equation. We can’t just ignore those of course, because most people know that if you ask out a girl and she says no, it will hurt. The girl may not feel any pain, but the rejected suitor will. The interesting thing here is that whereas one could in theory say: ‘I should just ignore that it hurts and try finding another girl’, for most people an optimal strategy would seem to have to include previous encounters and previous outcomes, because those previous events contain important information that should ideally be included in the decision making process. A low-quality male who does not change his strategy after the first ten rejections will have a lower likelihood of finding a partner than will a low-quality male who decides to mostly target low-quality females after the first three rejections, although the expected quality of the former’s potential partner is higher than the expected quality of the latter’s. One could make some corresponding remarks regarding the female’s problem; a female who’s never approached should ideally probably have a lower rejection rate than a female who’s approached all the time.
Most people do take previous information into account to some extent and this is, I believe, a huge part of why self-confidence is such a big deal for humans when it comes to figuring out who’s attractive and who isn’t. If you’re very self-confident, it’s most likely because you’ve been given reason to be; if you’re a male, the natural inference to make is to assume that you’ve not been rejected very much in the past and that you’ve had success with attractive partners before – if you’re female, self-confidence means that you’ve been approached a lot and have had to say no to a lot of males and thus you can afford to be picky. Another thing to note is that it takes at least some experience to become self-confident; you can fake it if you’re inexperienced, but that’s not quite the same thing – and females are generally good at spotting fakers because they have to be. Why do they have to be? Because if self-confidence is a very important variable when it comes to assigning value to a potential match, it becomes obvious that males will try to cheat and signal that they are self-confident even though they haven’t had a lot of success in the past. Females who couldn’t spot the cheaters had offspring with the low-quality guys in the past, so they had fewer offspring.
Low-quality males are telling themselves they’re high quality. High quality males know they are high quality, and that they’re higher quality than low-quality males who tell themselves that they are high quality. And it’s not just ‘high quality’; every male around will try very hard, with a great deal of success, to convince himself that he’d be the best partner of all the potential partners the female would ever meet in a relevant time-frame. The more successful his self-deceit is, the higher quality partner he will gain access to. There’s the truth, and then there’s the truth plus X %. At some point, say X-upper bar, the risk/reward relationship will become unfavourable to him given his risk profile (he’ll have less success than he would with a lower self-deceit level because all females can see that he’s much lower value than he thinks and put him in the faker category) – but if all other males have a positive X, an X of zero is strictly dominated. In expected terms the worst strategy a male could pick would probably be to try to be completely realistic about his options and not engage in any kind of (self-)deceit at all; a male who doesn’t even pretend to be higher quality than he is will have lower chances than most lower quality males who pretend to be high-quality.
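The claim that zero self-inflation is dominated when everyone else inflates, while gross inflation backfires, can be sketched in a toy model. All functional forms, numbers and the ‘faker cutoff’ below are my own invention, purely for illustration:

```python
# Toy model: a male of true quality q presents q * (1 + x). Females, knowing
# the population inflates claims by avg_x on average, discount every claim
# accordingly, but score implausibly large claims ('fakers') as zero.
# All numbers and functional forms here are invented for illustration.

def perceived_quality(q, x, avg_x, faker_cutoff=1.0):
    claim = q * (1 + x)
    if claim > faker_cutoff * (1 + 2 * avg_x):  # too big a lie is seen through
        return 0.0
    return claim / (1 + avg_x)                  # standard discount applied

avg_x = 0.3   # suppose everyone inflates by 30% on average
q = 0.5       # this male's true quality

honest = perceived_quality(q, 0.0, avg_x)     # honesty: rated below q
typical = perceived_quality(q, avg_x, avg_x)  # matching the norm: rated at q
faker = perceived_quality(q, 5.0, avg_x)      # gross exaggeration: scored 0
print(honest, typical, faker)
```

Because the discount is applied to everyone, the honest male is rated below his true quality, so some positive X strictly improves on X = 0; past the cutoff the strategy collapses, matching the X-upper bar point above.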
Self-deceit helps on the dating scene. It helps when it comes to finding reasons for getting up in the morning. It helps when you’re telling your own story about how great you are and how every mistake you ever made was really somebody else’s fault.
I know I engage in a lot of self-deceit. We all do. But somehow I seem to have this impression that I’m a lot worse at using it constructively than are most people. Instrumental rationality is all about using rationality to solve problems, to achieve goals. So not to engage in the proper type of and level of self-deceit is not instrumentally rational. But I still much prefer the current me to a me who thinks much more highly of himself – I really dislike that guy whenever I see him in myself. Self-deceit incidentally isn’t the only relevant variable here. Telling myself that I should be more dominant and aggressive would also likely help my options. But I don’t want to be more aggressive or dominant because that’s not who I am and it’s not who I want to be.
I find it frustrating that the person I want to be doesn’t seem to be able to have the options I want to have. Either I need to change who I am or I need to change what I want. I find changing what I want very hard.
In the real world there are a lot of areas where it is completely natural for a person not to know very much, if anything, about them. Humans are not born imprinted with knowledge about, say, the latest Greek employment figures, or how photosynthesis works.
Some people would say there’s a difference between the two. And that there are some things which are more important to know than others.
From a practical point of view, this is certainly true; knowledge about the finer details regarding the collapse of the Inca Empire will generally not be as useful when engaging in social interaction with most people as will knowledge about the latest soccer results or the latest political reform proposals (trust me on this one). People usually have a good idea which kind of stuff they’re supposed to know something about in order best to socially engage with others, and as long as other people play along and engage in the same kinds of conversations and search for the same kinds of knowledge social interaction is relatively easy.
Most people who interact with people they don’t know terribly well engage in the same kinds of knowledge exchange dynamics. They know a lot about which subjects are kosher and which aren’t, and the pool of acceptable conversation topics is actually incredibly small once you start to think about it. It’s not that you need to know everything about all the acceptable topics, but if you’ve picked a few of them out and made an effort of obtaining a bit of knowledge about them you should be okay. Social expectations play a large role here. It’s not considered bad form to bring up a subject the other party knows nothing about; what is considered bad form is to bring up a subject the other party ‘cannot be expected to know anything about’. The topics other people can be expected to know something about are drawn from a usually quite short list. Expectations regarding what kinds of knowledge – and even which specific bits of it – you’re supposed to possess are to a large extent formed around the ‘acceptable conversation topics’. Given the expectations people possess it is very important for an individual wishing to engage others socially to know at least something about some of the acceptable conversation topics, because if the individual doesn’t know anything about X he might suffer status loss or even social rejection. Given this, an individual will perhaps sometimes feel the need to signal that he knows stuff he doesn’t actually know. He may even feel the need to signal that he knows stuff it would be unreasonable of anyone to expect him to know, given the specific context. The specific context will often be considered irrelevant because expectations are formed mostly independently of such contexts, and the social expectations are considered common knowledge; everyone knows that if you’re heading for a political discussion, you’re supposed to be able to say a few words about, say, global warming, or immigration.
These are areas where you’re basically not allowed not to have an opinion.
Every bit of knowledge one obtains is another bit of knowledge not obtained. In order to engage in an acceptable level of social interaction, it may be necessary to obtain information about X which one would not otherwise have obtained. Such information should be considered a cost related to the social exchange, a cost whose minimization would probably most easily be achieved by trying to influence the expectations of the other people involved. Even though expectations are as mentioned above to a large extent independent of the individual, the community expectations are not completely exogenous – what you expect others to know and be interested in may change their expectations in the long run. That is to say, rather than trying to save face by claiming to know stuff one doesn’t, it might be a strategy worth considering to perhaps rather let the other party know that one does not consider this area of knowledge as important or interesting as Y (‘…which is totally awesome because …’).
I have met a lot of people over time who were claiming to know stuff they clearly didn’t know – and my experience is surely far from unique. When you spend a lot of time in a social environment where people’s expectations about what you have to offer/what you know and what you actually do have to offer/know do not match, or in an environment where you feel that it is very important that you make a good impression, this is clearly what you’ll sometimes get – people pretending to know stuff they don’t, and/or be someone they’re not, because they dislike obtaining knowledge about X but would prefer not to incur a social cost from not knowing about X.
An interesting thing is that the sanction from being ‘overconfident’, or perhaps even a liar, will sometimes be smaller than the implicit sanction from not accepting the, again implicit, ‘acceptable/unacceptable topics’ framework. The first one at least plays the game, the second one doesn’t – and if you don’t, you need a good excuse.
I’m sure pretty much everyone has at least some notion about which kind of knowledge ‘you’re supposed to know’ to be ‘fit for social interaction’. But I also tend to believe that the best way to behave in this weird world is to act as if there isn’t. If adults asked as many questions as children do, people would know a lot more stuff and it would be a lot easier to engage others and find topics to talk about. It’s as if it’s not okay socially not to know stuff and openly display that you don’t know – and I hate that! Not knowing is the default state, and it’s unreasonable to expect people to know very much, compared to how much there is to know, or to expect preference homogeneity; i.e. that the costs incurred from obtaining knowledge about X are the same for everybody. It’s unreasonable also because it will sometimes give people an incentive to behave in a deceitful manner which will only harm both them and you.
I think it makes a lot of sense to deliberately try not to think of oneself as ‘well informed’ or ‘knowledgeable’ when engaging others. I’ve thought this way myself in the past, but I believe it’s the wrong way to approach matters. So what to do instead? Well, it’s simple really: Think of yourself as ‘curious’.
“what kind of stories should we be suspicious of? Again I’m telling you, it’s the stories very often that you like the most, that you find the most rewarding, the most inspiring. The stories that don’t focus on opportunity cost, or the complex unintended consequences of human action. Because that very often does not make for a good story.”
We use narratives to explain stuff. We need an explanation we can understand and if there isn’t one, we will make one up. And we much prefer to believe the stuff that is most comfortable for us to believe. It goes for all areas of life, not just the ones one likes to think about. I’ve picked out a few examples but you’re free to add to the list.
Non-smokers and non-drinkers will generally underestimate how hard it is for people who are drinking or smoking to stop drinking or smoking. The convenient story for the non-smoker or non-drinker is about how people who smoke or drink are weaker people (and therefore less deserving). Or perhaps they are less smart, because they could have just never started in the first place. On the other hand some of the people who smoke or drink a lot like to tell themselves that they are not addicted (because addiction will often imply weakness in the mental model applied to the problem) or that they have just as much willpower as the non-smoker/-drinker has, which would become obvious if the latter also smoked/drank as much as them. Notice that there may be multiple, perhaps conflicting, ways to construct a convenient narrative that makes you look good, not just one; it’s both possible for you as a smoker to convince yourself that you’re not addicted and thus aren’t a weak person (‘only weak people become addicts’), and it’s possible for you to convince yourself that you are addicted, but that the addiction means precisely that you’re not weak ‘because if someone as strong and great as you can become addicted, everybody can’.
People who are not overweight will generally emphasize the importance of their own actions when explaining why they are not overweight and downplay other factors, whereas people who are overweight will often be more comfortable thinking in terms of factors over which they have little to no influence (like genetics). So the person who is not overweight will end up telling himself a convincing and convenient story about how he’s not overweight because he’s doing all the right things while disregarding other factors that may be quite important too, and by telling the narrative that way he may think of himself as a better person than the people who he thinks do not behave the way he does, and/or he may think of himself as a better person than the people who do in fact behave in a similar manner, but have gotten different results from the diet and exercise regime than he has gotten and thus have ended up overweight. The overweight guy will often tell a completely different story, which is just as compelling and convenient to him as the other story is to the non-overweight guy; he’s overweight because of his genes, because of his metabolism, because of his big bones, or perhaps because of his job that makes it hard for him to find time to exercise. He may think he’s better than the other guy because he works harder (if he didn’t, he would have time to exercise), or he may think he’s better because he does not, he tells himself, judge people by their appearance. The more general story about the blameless victim vs the deserving winner can be applied to all areas of life; if people have done well, it’s always because of stuff they did, and if they haven’t done well, nothing they could have done would have made any difference. That is, this is the story most of them will tell you if you ask them. Because that’s the story they tell themselves, and sometimes have told themselves for many years. (Things get more interesting if people can’t decide if they’ve done well or not.)
Often when people engage in political arguments, they downplay the arguments against the position they are defending. And they like political positions which make them look more deserving, make it look obvious that they should have a larger share of the pie. If reality will not play ball that’s often not a problem in political debates; in politics reality is just what people can agree is true. So when arguing about whether the people I like (‘people (/who) like me’) deserve to be in the position they are in, you can claim ‘it’s because of X’ and as long as a lot of people agree with you then X is considered a valid explanation. Note that the most convenient story always has a bad guy, and that in politics the convenient bad guy is almost always the guy who disagrees with you. Note also that in all the narratives you tell yourself, you’re the good guy. And this is the case for everybody else too.
When people think about the motivations others have for doing the things they do, they will often be tempted to explain the behaviour of others in terms of reactions to their own behaviour. They will tend to go for explanations involving themselves first if one such explanation can be made to make them look good: ‘If she’s behaving nicely towards me, it must mean that I’m a nice person’, or ‘she’s behaving that way because I deserve to be well treated’. If it’s hard to come up with such an implicit explanation that makes one look good, one will be more likely to find and include ‘external factors’ in the model; if she was angry it was not because of anything I did, rather it was because her boss is a silly old man, or because she’s on her period. This model even works when she explains that her anger is caused by something you did: if she’s told you that her anger was because you didn’t clean the house yesterday, you’re quite likely to at least partially disregard that explanation and find another one that better fits the image of you as the perfect husband; either one that does not involve you at all, or perhaps one that does involve you but also ‘shows’ just how unreasonable she is (‘She is probably still mad about that $300 overcoat I bought without asking her first. I should be allowed to buy an overcoat for myself without asking that crazy lady first, dammit!’). One of the funny things about such narratives is that the person telling them may know perfectly well that she is right (he should have cleaned the house), yet still hold on to the self-serving explanation in order to justify his own actions, even though he knows the partner disapproves of the behaviour. It makes sense though; we’re programmed to constantly look out for subtle ways to do a little less than our ‘fair share’, and you can’t cheat on others as effectively if you feel really bad about it afterwards and/or if you cannot pick up on the fact that your behaviour might be over the line. Incidentally, chimps have strong views on fairness too.
Now, some of the stories humans made up in the past to explain the stuff we liked to explain back then don’t do very well today, when taking all the knowledge that is available to us at this point into account. Stories made up by people who died a long time ago still make up most of the religious texts around today, and you can tell if you read them. But it’s very often inconvenient for religious people to pick a different narrative; it’s in fact often very costly – and once again ‘reality’ is to a great extent just what the people around you can agree with you is true. But people without religion do not do without competing convenient narratives; they will probably often tell themselves that they are smarter people for not believing stupid things. Or they will tell themselves that it’s all because of their own actions and ideas that they don’t believe in the stupid narratives, rather than it being to a great extent perhaps just a matter of being born to the right parents in the right century in the right country and being of the right gender (females are generally more likely to be religious than males).
It’s worth mentioning that not all self-serving stories are necessarily untrue or inaccurate. The degree to which such narratives are true will often depend upon your own point of view, but this is rather beside the point; the point is that people tell these narratives whether they are true or not, and the accuracy of the narrative often doesn’t much enter the equation in the first place. Self-serving thoughts like the ones described in this post are often not thoughts people actively engage their minds with; rather, they are perhaps best perceived as part of the OS. The convenient narratives are part of us and there’s no way to get rid of them. But thinking about them every now and then can’t hurt.
Just some random notes; I probably shouldn’t publish this, but I decided to do it anyway even though it’s not very structured.
So, I started out just by thinking about a simple question: Why do people talk with/to each other?
Now, we all know that there’s no simple answer to that question. There are answers – many of them. Categories like information exchange and social bonding/social relations management probably cover many of the reasons, though there are others. Theoretically there’s probably a distinction to be made between conversations where people are very aware of what they want to accomplish and how the conversation can be expected to proceed (a conversation with a coworker about the new DHL standards, a board meeting with a 12-point agenda, a doctor’s conversation with a patient), and conversations where the goal(s) is (are) more hazy and the expected duration is much more uncertain. Many of the conversations where people would be uncertain as to why they even engaged in them in the first place, if asked directly, can probably be argued to have quite clear goals if perceived in a certain light; goals having to do with social relations management and bonding. If you find yourself in a situation where you don’t know why you’re talking, you’re probably doing it for reasons having to do with social relations management/bonding. And if you feel the need to ask yourself why you’re talking with the person with whom you’re talking (‘why am I even talking to this guy?’), you probably won’t be for long.
Conversations usually evolve over time because of interaction effects; new inputs are delivered along the way, shaping the direction of the conversation. Two conversations with roughly the same starting point can end up in very different places. It’s worth noting that the inputs supplied can be both verbal and non-verbal, and people often underestimate the impact non-verbal behaviour may have on a conversation/social interaction.
Human interaction is too complex for it to be optimal for people engaging in conversations to always think hard about things like what to say and what not to say, or how and when to say whatever it is that (perhaps?) needs saying. Conversations proceed at a much faster speed than the human brain can process all the potentially relevant information, and so a lot of information gets excluded by default. Conveniently, we do not think much about the fact that there are a lot of things we don’t think about when interacting with others. Excluding a lot of information and ideas means that the communication gets more efficient, at least if measured in terms of words/minute or similar metrics. Body language can convey a lot of information fast, so people who are good at using it (and good at reading it) will ceteris paribus be better communicators than people who are not.
Many conversations follow, at least to some extent, some basic scripts people have internalized. Most people know pretty well how to react when asked a question like ‘how are you?’ and they know the general direction in which a conversation starting in such a manner may be expected to proceed, just as they know what to say when a person shares the information that he recently got one day older than he was the day before. We often don’t think very much about the meta-aspects related to what to say in any given social situation, because if we had to do that all the time we couldn’t really do anything else.
However, even though both a lot of the stuff we talk about and the way we talk about it to a very large extent follow scripts, a lot of feedback still takes place along the way; you need to be aware at all times whether the other person is following the script, and you need to be aware of which script is the right one to apply to the specific part of the conversation in question (is the secretary bringing up her weekend plans because she’s trying to tell you she can’t work overtime this Saturday, or because she wants you to ask her out?). Human behaviour is incredibly complex, but we’re much too used to all this complexity to ever truly notice it. When one starts to think about how conversations work, it becomes clear that there are all kinds of ‘crazy’ ways for people to break the script along the way: shouting loud inappropriate remarks in the middle of a sentence, turning your back on the person with whom you converse, asking a random question having nothing to do with the topic discussed, sitting down on the floor while the other person is talking, moving your elbows up and down randomly while the other person is talking, punching the other guy in the stomach… The fact that people don’t even think about how inappropriate it would be to just sit down on the floor while talking to a coworker at the watercooler is an indication of just how narrow the range of acceptable behaviour is. But we don’t notice, because we don’t think about such things. Which I find interesting.
A well known concept in game theory is the zero-sum game. Many arguments, I like to think, are zero-sum games, especially political and similar arguments. X and Y will start out with different sets of arguments supporting their respective causes. The ‘winner’ of the argument will say that his set of arguments was better than that of the other party. Rarely will X and Y meet and discuss how to improve the argument sets of both X and Y. The idea is not to weed out bad arguments and replace them with good ones; the idea is to win, and that’s often easier to do with many arguments than with just a few. If X concedes the point that one of his arguments was not convincing, it will generally harm the cause of X and help Y win the argument.
Now, one might here argue that human interaction would be more pleasant if people didn’t engage in ‘zero-sum conversation games’ such as the ones described above, but rather tried to always make human interaction positive-sum. In case you were in doubt, this is not where I am heading. The truth is that as long as there are surpluses of some kind somewhere, someone will try to grab part of that surplus if it is within that person’s reach. Organisms which behave that way have more children in the long run, and when it comes to human behaviour there’s a limit to how much culture matters. Another way to think about such ‘political arguments as zero-sum games’ is to think of them as a huge and important technical innovation, and a great improvement upon the kind of zero-sum games people engaged in before the advent of political debates as conflict-resolution mechanisms.
“Me: In my opinion it’s really hard to have interesting ideas if you don’t write them down. It’s much, much easier to spot flaws in your reasoning, to add complexity, to take account of -ll- if you write things down.
A friend: I quite agree
Me: It quickly became an argument for keeping my blog alive, back when I wrote a lot of stuff myself rather than leech off the ideas of others as is mostly the case now.
A friend: Why don’t you write more of your own ideas then?
Me: They are not interesting […] I’d much rather share knowledge with other people than [my] ideas.”
I know I shouldn’t quote myself, nor should I quote a friend who has not even agreed to be quoted. But I thought I’d put that out there anyway, because this is probably something people should have realized by now. There are people who happen to be quite good at getting good ideas, good at thinking about stuff. I realized a long time ago that I am not one of those people, and that I would be wise to limit myself to quoting the ideas of the people who know how to get good ideas, and otherwise just keep my mouth shut. Or share data, which amounts to the same thing when it comes to that. I sometimes fail and I open my mouth anyway, and I do it because I like to think about stuff and I do it a lot. But I’m well aware that there are lots of people who are much better at it than I am and that I really should try not to waste people’s time and humiliate myself in the process.
I know, but sometimes I just don’t care, so here’s something I’ve had on my mind for a while. I’m often asked ‘how I feel.’ We all know that question, and we all know how to answer it. Even a person like me is not unaware of the social conventions related to how you’re supposed to approach that question. So I usually answer ‘okay,’ ‘reasonable’, ‘not bad’ or something like that. It’s what people do.
But such questions always bother me a bit. There are two reasons. The first one is the rather obvious one that well, really, most of the time I have no idea how I feel. I need to think about that question in order to answer it, and the amount of time I’d need to give any kind of semi-sensible answer to the question is way more time than the amount of time that is usually allotted to the purpose, given the social context. Perhaps my emotional states are not as readily available to me as they might be to some people. A related concern here is that it is of course very unpleasant to feel the need to answer a question to which you don’t know the answer, and to be placed in a situation where you’re very aware of the fact that you seem to be trying to guess the teacher’s password. This is a situation you generally try to avoid. The problem is perhaps exacerbated even further by the fact that when I actually do spend time thinking about how I’m feeling in other contexts, quite often it is an activity which is predicated upon the fact that I, well, do not feel good at all; and getting asked how you feel when this is the way things usually work can be unpleasant, because getting asked that question can easily remind you that you’re in fact not as happy as you’d like to be. And then it’s easy to mentally jump along to the question of why you’re not as happy as you’d like to be, and most of the time there are lots of good reasons why you don’t seem to have anybody to blame for this sad state of affairs but yourself. 
But then you might go even further and argue that you do have happy moments sometimes, and that you’ve actually done some work on actively figuring out when they happen, as they happen – ‘this is a pleasurable moment’-type thinking – and what you’re doing when they happen, and this seems to help you and really there’s no good reason why you should not be having such a moment within a short amount of time and… Meanwhile, the person who asked the question is still waiting for an answer.
The other big reason why such questions bother me a bit is that I have no way of knowing if the answer even makes sense to the person to whom I’m responding, even if I do answer truthfully (which would require a complex and rather detailed answer). How do they define ‘feeling good/ok/not bad/reasonable’? I have never looked inside their heads or hearts; I don’t know the emotional range they inhabit very well. Maybe my answer is completely meaningless to them. Do people have well-defined emotional barometers where you can just go have a look and see: ‘oh – so that’s how you feel, 37°, that’s interesting…’? No, they don’t. Even in the best of cases it’s hard to figure out if the answer you give is actually conveying the information you’d like to share. And the real world doesn’t deal in the best of cases, because I usually don’t answer truthfully, a fact I have no problem sharing here. I always have doubts, regrets and self-hatred bubbling under the surface, and I work on keeping those things far away from my own inner monologue; why in the world would I want to bring them out into polite conversations which take place outside my own head, with people who perhaps have no idea what they are getting themselves into?
I’m quite curious as to how people handle and understand their emotional states. Do people actually walk around knowing ‘how they feel’? I know I don’t, and I have a hard time imagining that many other people do. It would be nice if people settled upon a different casual conversation starter – most people who ask this question don’t really want to know anyway.
From the paper:
“Because we began by putting forward a theoretically derived hypothesis and calling its viability into question on the basis of experimental data, it behooves us to listen carefully to what that data has been trying tell us and to draw together plausibly the various strands of evidence. The most parsimonious inductive explanation for our cumulative findings, we contend, is that automatic attitudes are asymmetrically malleable. That is, like creditcard debt and excess calories, they are easier to acquire than they are to cast aside. Thus, when people construe an object for the first time, their conscious fondness or antipathy for it is swiftly supplemented by an automatic positive or negative reaction. However, once people have acquired an attitude toward the object, attempts to subsequently undo it are differentially successful at different levels of the mind and lead its automatic component to lag behind its conscious one. Thus, Devine’s (1989) key prediction—that automatic attitudes will be generally be [sic] harder to shift that their self-reported counterparts — may be correct after all, not under the boundary conditions that we initially proposed but under a new set of boundary conditions that our data have subsequently suggested. […]
We contend that automatic attitudes operate like rapidly established perceptual defaults: although they can initially be engendered by conscious cognition, they later become relatively resilient to its influence.”
So, there might exist a variety of perhaps even non-overlapping reasons why one might be interested in stuff like this. I’m interested because I believe that some of the automatic attitudes I have implicitly come under the influence of are attitudes which do not make me happy, which is why I feel that I at the very least should try to understand them better. Understanding might make it easier for me to successfully challenge them, though I’m not optimistic about that. I should specify that the automatic attitudes I have in mind here are perhaps of a somewhat different kind than the ones described in the study; but it doesn’t seem like a lot of stuff has been written about how to overcome biological imperatives, and you have to take what you can get.
Human males my age – not only human males my age, but also human males my age – are ‘supposed to’ look for a mate to have children with, and if they can’t find one they are supposed to work towards gathering power and resources so that once someone is there to be found, they can compete more successfully with the other available males in the bidding war that will ensue, and perhaps win the right to have offspring. The male brain has not yet caught on to the fact that contraception has changed everything, in a way that means that power and resources no longer matter all that much when it comes to reproductive success. As Kanazawa put it in this paper: “men’s wealth still translates into their greater reproductive success had it not been for modern contraception, which men’s brain, adapted to the ancestral environment, has difficulty comprehending.”
To the Paleolithic brain, sex = offspring. The whole ‘offspring’ part is why sex feels good. Most (/non-ignorant?) males (/and females) know that the reason sex feels good is that sex is nature’s (/your genes’) way of tricking you into having offspring. Just as the reason chocolate cookies taste good is that they contain a lot of fats and sugars, i.e. calories; and calories are good if you want to avoid starving to death, a risk our ancestors spent a lot more time worrying about than we do. But whereas people are quite open about how it’s probably a bad idea to eat too many cookies, because it will make you fat and unhealthy, and thus people do not eat all that many chocolate cookies, there are, to put it bluntly, certainly far fewer people who seem to be open about drawing the conclusion that partnership and children are not worth it and that they ‘refuse to be slaves of their biology’. At least in that area of life…
I have this strange feeling that a lot of male (/and female) behaviour today might look completely crazy to someone who’s not as invested in the underlying ideals of the Paleolithic Era as are (all?) (/fe)males today. For a male, it looks like this: ‘The way to be happy/the good life is to find a fecund-looking female, court her and then have sex with her a lot, have babies and provide for them, die.’ A slightly more elaborate version would also include ‘convince your partner on an ongoing basis that you’re the best male available (by doing all kinds of weird things that signal to the female that you are there for the long haul, even if you’re not – and by golly, the modern economy/-world has certainly increased the number of insane-looking jump-through-the-hoops signals a (self-identified?) high-quality female can demand of her partner..)’, as well as ‘try to cheat on her as often as you can get away with – so that you can have more babies – but try your best to hide the cheating from her so as not to incur significant switching costs.’
The bidding wars these days in the partnership setting relate far more to the quality of the offspring than to the number of offspring. The Paleolithic fecundity markers are more or less completely out of whack with reality today. Today it is mostly preferences – which are to a very large degree driven by socioeconomic factors, religion, culture and societal norms more broadly – and not biological factors (waist-hip ratio etc.) which decide how many children a female is likely to/willing to have. Kanazawa (see above) found that resource access is pretty much irrelevant too. However, the lives of most males and females continue to follow the age-old recipe, to some degree. To be happy you need to find a mate and have children. For a male, in order to get the best possible female you need access to resources, you need power. So you need money, which means that you need to work hard, both to obtain access to resources and incidentally also to actually convince the high-quality female that you’re the most suitable partner available. It’s not that these ideals seem completely true to everybody; it’s more that when you defend a different version of the good life, my impression is that you will most often have a hard time making that defense sound credible, even to yourself. People often reject some of the defining characteristics of the traditional partnership equation, like the idea that a partnership necessarily needs to involve children, that it makes sense to look for ‘the one’, that romantic relationships need to involve members of both genders, or perhaps that a monogamous relationship is the best way to deal with the romantic stuff in your life; but how many people openly reject the idea of having a relationship as a major life goal, in favour of the alternative, in the (‘semi’…, see my remarks below regarding the commitment issues here) long run, for no other reason than that they think that they will probably end up happier in the long run if they do?
Surely only a person who has no chance in the dating market would do such a thing, right?
I assume the standard narrative will not work for me. It seems like too much hard work that you just know you’re only undertaking because your Stone Age brain is trying to trick you into undertaking it, just like it’s trying to trick you into eating too many chocolate cookies – and with not too dissimilar consequences. I will probably not be willing to work hard enough to find a long-term partner who would not reject me in favour of someone more suitable, given the amount of competition. And if I do find someone, I will still have major problems trusting her, because I’ll assume that if she follows the standard narrative here, she’ll also follow the Paleolithic recipe later on. Which tells me that she’ll be more likely than not to leave me when I start getting really sick. Yeah, I may not get really sick, and a potential ‘she’ may not leave even if I do, but in expected terms this needs to be taken into account; as does my loss aversion at that point.
So why was I reading the paper again? Because it seems to me at this point that the smartest thing for me to do would be to rewire my brain somehow; to make it like stuff it currently does not like as much as would be optimal, and to dislike stuff it currently seems to enjoy thinking about. To let go of a lot of the counterproductive narratives which were never about people like me in the first place. I’m perfectly well aware that this is all about rationalization, and the Paleolithic mind has views about that stuff too. Given what I’ve previously said about the Stoics, naturally I’m not very optimistic about this whole endeavour. But it seems worth trying. Maybe my mind can actually outsmart my Paleolithic mind. In the eyes of most females, I probably won’t be proper partner material for some time (because of ‘resources, power’) anyway – at least not for the kind of partner my Stone Age brain is trying to convince me I’d like to have. I know about the assortative mating aspects of the college/university experience, but I also know that that part of the university experience is probably not likely to be relevant for me. Either way, I hope that I can obtain a state of mind such that my period of thinking about dating and similar stuff is over – at least for the time being. The only way not to lose the bidding war is not to play, or think about playing.
Incidentally, I ought to post a few remarks here about how this post relates to my commitment to change: I was writing this and publishing it here at least in part to more efficiently commit myself to this change. I know how strong ‘the opposition’ (‘the Paleolithic mind’ and all its friends and allies…) is, and I might give up on this idea before long. But writing this here cannot hurt my chances much, and I’ve been thinking along these lines for a while now. I’ve found that it’s much easier to (knowingly) ‘rationalize’ not looking for a partner than it is to actually be perfectly okay with not doing it. And if it turns out to be impossible to obtain that mind state, it seems suboptimal in most scenarios not to be dating. I’m not trying to commit myself to not dating/finding a girlfriend; I’m trying to commit myself to thinking that I can be perfectly happy even if I don’t. It’s the thoughts in my head, not the behaviour they engender, which are central here. Interestingly enough, if I’m successful it also probably means that long-run credible commitment to this state of mind is impossible (if preferences such as these can actually be changed over time, such changes can also be reversed later on), which should if anything make commitment in the short run easier, rather than harder, to achieve.
So, imagine this scenario. You live in a (parallel universe/future world/space setting/…) where people know how long they have to live; you know the exact date that you’ll die. It’s quite important to note early on that this date cannot be changed by any future events outside the models below. You have X years left of your life when you get the offer.
You’re now presented with the option, A, of living ‘twice as long’, in the sense that you will have 2*X years left of your life if you pick option A. There’s a downside to the arrangement: you have to double the amount of sleep, Z, you get per day (/time period).
Let’s plug in some numbers just for fun. Say you’re 20, you know you’ll die at the age of 70 (X=50), and you can at the current point in time expect to get Z=7 hours of sleep/day on average during your life. If you pick A, you’ll live to the age of 120 – you’ll gain 50 years – but you’ll have to sleep 14 hours/day. If you decide not to take the offer, you will have 310,250 hours [(24-7)*365*50] left in a conscious (non-sleeping) state and you will die in 50 years. If you take the offer, you’ll have 365,000 hours [(24-14)*365*100] left in a conscious (non-sleeping) state and you’ll die in 100 years. In this case, you both live longer and have more hours available to you to do stuff. But what about a 60 year old who sleeps 9 hours/day and can expect to live to the age of 85? In that case, A will give you 109,500 hours and 50 years, whereas the alternative will give you 136,875 hours but only 25 years. When looking at A more generally, it seems clear that the older you are, the fewer years you gain and the worse the tradeoff looks, because the natural/baseline sleep requirement is increasing in age. At which points in people’s lives would this look like the most interesting proposition? Would it necessarily be the case that ‘the younger, the better’ – what about, say, sociological factors? How big an impact would the decisions of people close to the decisionmaker have – would the longevity of individuals in this model depend on social ties/skills; and if so, how?
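The arithmetic above generalizes easily. A minimal sketch in Python (the function name is mine, purely for illustration) reproduces the numbers from both examples:

```python
# Conscious (non-sleeping) hours left, given remaining years and sleep per day.
def conscious_hours(years_left, sleep_per_day):
    return (24 - sleep_per_day) * 365 * years_left

# The 20 year old: X = 50, Z = 7.
baseline_20 = conscious_hours(50, 7)    # 310,250 hours, dead in 50 years
option_a_20 = conscious_hours(100, 14)  # 365,000 hours, dead in 100 years

# The 60 year old: X = 25, Z = 9.
baseline_60 = conscious_hours(25, 9)    # 136,875 hours, dead in 25 years
option_a_60 = conscious_hours(50, 18)   # 109,500 hours, dead in 50 years
```

For the 20 year old, A dominates on both dimensions; for the 60 year old, A trades away 27,375 waking hours for the 25 extra calendar years.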
Interesting things happen if you change A and impose different restrictions on the choices offered; for instance, what happens in a model, A’, where you gain one hour for each extra hour you sleep? Basically this just says that you can decide freely when to live your life (looking forward in time), but not how long you’ll actually live. How would people deal with this choice? What if you made the sleep requirement an increasing function of the years gained and further imposed the restriction that people could at most sleep for 23 hours/day? (You have to add some sort of restriction like that, or it starts to get really weird.) Like, say, model B, in which you’d gain the first 10 years by just sleeping one extra hour/day, whereas the next decade would cost you an additional 2 hours of sleep per day – at which point would people think the arrangement maximized their lifetime utility, and how would this maximum depend upon the choices made by the people closest to them? Note that in model B, the 20 year old guy from before would (still, just like in A) be able to live for another 100 years, but he’d have to sleep 22 hours per day to do so; and he’d spend much less time awake in this case than if he did not choose this option.
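Model B is only pinned down for the first two decades; assuming the per-decade price keeps rising by one hour (1, 2, 3, … extra hours/day – an extrapolation of the two data points given, not something the model strictly specifies), the required sleep can be sketched like this:

```python
# Model B (assumed extrapolation): the n-th extra decade of life costs n
# additional hours of sleep per day, on top of the baseline requirement.
def model_b_sleep(base_sleep, decades_gained):
    return base_sleep + sum(range(1, decades_gained + 1))

# The 20 year old (Z = 7) buying 5 extra decades:
# 7 + (1 + 2 + 3 + 4 + 5) = 22 hours of sleep per day.
sleep_needed = model_b_sleep(7, 5)
```

This matches the 22 hours/day figure above, and shows how quickly the convex price schedule eats into waking life.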
In the above models, the cost of getting to live longer than you otherwise would is ‘sleep’, but it could be other things as well. In the real world, you have a lot of people who stay alive long after their minds are gone – before my grandfather’s mind had gone completely, he lived something close to 23 hours a day in an at least ‘semi-conscious’ state, paired with a few clear moments during the day. You also have cancer patients who spend the last weeks or months of their lives either writhing in pain or simply knocked out by painkillers. In these cases, what are ‘we/they’ optimizing? In the real world setting, there’s also stuff like physical exercise, which might add half a decade or more to your life – if you’re willing to incur the cost of actually getting sweaty/taking time out of your calendar and/or sleeping more (recovery).
Now imagine another model, C, where what is on offer is not years gained but rather hours awake. It’s the flip side of the first models here. In this model, if you’re willing to drop 2 years of your life you can cut down sleep by 1 hour/day in the years you have left. Say you’re that 20 year old guy again. He can at most cut 7 hours of sleep, which would leave him with 36 years left. The cost imposed is made up for by an additional number of total hours awake while alive: for instance, in the baseline scenario the guy gets 310,250 hours awake, but if he opts to die at the age of 68, he’d get 315,360 hours awake. Given this specification of the model, the total number of hours awake is maximized at the point where he dies after 42 years at the age of 62, sleeping 3 hours/day during his remaining life (hours awake is a parabola; giving up even more years would decrease his total number of hours awake) – this will give him 321,930 hours awake. Would some people choose this model? If you set it up like this, probably not many. But the funny thing is that given how people behave around other variables which are also well known to impact both longevity and subjective utility in not too dissimilar ways (smoking, alcohol, drugs), the obvious answer should be yes. People make not all that dissimilar tradeoffs all the time without even thinking about it.
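The parabola claim in model C can be checked by brute force. A small Python sketch (function and variable names mine) scans all feasible cuts for the 20 year old:

```python
# Model C: each hour of daily sleep cut costs 2 years of remaining life.
# Baseline for the 20 year old: 50 years left, 7 hours of sleep per day.
def awake_hours(hours_cut, years_left=50, sleep=7):
    years = years_left - 2 * hours_cut
    return (24 - (sleep - hours_cut)) * 365 * years

# He can cut between 0 and 7 hours; tabulate every option.
options = {cut: awake_hours(cut) for cut in range(8)}
best_cut = max(options, key=options.get)

# best_cut == 4: sleep 3 hours/day, die after 42 years (age 62),
# with options[4] == 321,930 hours awake; cutting more starts to hurt.
```

Cutting 1 hour reproduces the ‘die at 68 with 315,360 hours awake’ case, and the maximum sits at a 4-hour cut, matching the numbers in the text.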
Also in some of the alternative universes in which one might contemplate making these offers, what is here called a ‘sleep requirement’ is there universally known as ‘sleep dependency’; a chronic, debilitating and incurable disease which causes recurring long-term periods of unconsciousness.
So, every now and then you come across one of these ‘many of my particular habits would fit in well with what I believe 19th century style living was like, maybe I’m living in the wrong century’ type posts. I just read one of them – which is why I’m posting this now. I’m not arguing people aren’t different, and as thought experiments go, I guess you could do a lot worse. But here are some reasons why people perhaps aren’t really comparing apples to apples when engaging in mindgames like these:
i. Most people when engaging in these thought experiments seem to think that if they were to live in the 19th century, they’d be a nobleman or some such. Problem is, most people living in the 19th century (and earlier on) were peasants. Peasants who hadn’t even heard about tractors. Maybe they’d heard about Monsieur the Marquis, but that’s not quite the same thing. If you weren’t a peasant, you were probably a servant. Doing hard labour most of the time for very little pay.
ii. ‘I read a lot – and I mostly read the classics, so it’d be awesome to live back in the day where Dickens or Shakespeare lived!’ Guess what, if you go back 200 years, most people either couldn’t read or read very badly. They also couldn’t afford books, which was what got that whole library thing going. Because even if you could read, books were expensive. So was everything else. And if you’d like to read Mark Twain and you were living in Russia, good luck! Also, hardly anybody but those belonging to the nobility and the clergy spoke a second language. If you were to pop up in a relatively small linguistic region (like Denmark) in 1820, odds are no translations of what we now consider contemporary major works would even be available to you. If the book was not in stock, odds are you could not afford to get your hands on it.
iii. Spare time. A lot of it was spent doing stuff that wasn’t a lot of fun, like washing clothes without a washing machine. Also, for many people there wasn’t as much of it, on account of that ‘working 12 hours/day doing backbreaking labour in the sun’-thing. Further, there wasn’t a lot of stuff to do if you actually had time to yourself. Reading classics probably isn’t as much fun if you have to do it in a small smelly hut with poor lighting late at night after a long day’s work.
iv. Travels! How far can you go in a horse carriage compared to a modern airplane? How long would it take you to go to Brazil for a vacation if you were living in Europe (ignoring the fact that you’d never be able to pay the ticket)? Go back two centuries and you’d probably find that a majority of Danes never left the country during their entire lives, perhaps but for a trip or two to North Germany or Sweden.
v. Modern medicine. Likelihood of not dying in childbirth. Probability of surviving to the age of 60. Cancer was a death sentence, but so were lots of common bacterial infections, like those causing tuberculosis or pneumonia, because they were equally untreatable. Also, remember that living even a decade after you’ve retired is a new thing, almost unheard of before the 20th century. I’ve previously posted this:
In, say, 1820 people didn’t work to the age of 50 and then retire until they died at the age of 60. Most of them probably died within weeks or days of no longer being able to work (…if they were lucky?). Being a nobleman was a bit different, yeah, but most people weren’t noblemen. And health is not just about not dying – imagine how much fun it was to go to the dentist in the year 1850. Eyeglasses and that kind of stuff have also come a long way (both in quality and price). Over-the-counter pain medications. Hearing aids.
vi. Mobile phones. Or maybe just phones. Internet. Tv. Cars. Central heating. Also, remember how easy and cheap it was to move tropical fruits like bananas thousands of kilometres back in 1850? Indoor plumbing. Clothes (hint: there’s a difference between what people actually wore in 1870 and what they wear when you watch a film set in 1870. Also, what do you think a top-of-the-line running shoe looked like in 1845?) Or, going back to the travel thing, what do you think the roads looked like – like it would be fun to travel hundreds of miles on them in a horse carriage? Credit cards.
vii. If you are a female, your life would have sucked bigtime. Going back just 150 years, even in a lot of the places that today treat females quite well, you would not even have had the ability to own stuff – your property would belong either to your husband or to a male guardian, like your father. Arranged marriages are still widespread today in many regions of the world, but they were also pretty much the norm in most developed societies a few hundred years ago, so you can also forget about having much of a say in who you’d marry if you were to go back to 1800 and start a life there. It would also be very difficult for you to divorce the bastard after he’d started beating you or perhaps had taken up drinking and/or gambling. Birth control? There’s no such thing. And there’s also no such thing as ‘marital rape’ anywhere in the legal statutes. Add the high likelihood of dying in childbirth.
Other people who also would probably have a hard time living a really nice life a couple of centuries ago: Homosexuals, atheists, people who like to make fun of a king and queen wearing ridiculous clothes, modern females who’d like to go topless at the beach, people who’d prefer not to go to church every Sunday, people with black skin (and why do so many of these people assume they’d end up as westerners? Maybe the idea of living in Egypt in the year 1820 isn’t all that compelling, but millions of people did),…
The past isn’t all that it’s cracked up to be. Because of historians, it isn’t even what it used to be.
i. Perhaps most ‘impostor-syndrome’ sufferers are really impostors who do not suffer from impostor syndrome. Convoluted? Well:
“Social psychologists have studied what they call the impostor phenomenon since at least the 1970s, when a pair of therapists at Georgia State University used the phrase to describe the internal experience of a group of high-achieving women who had a secret sense they were not as capable as others thought. Since then researchers have documented such fears in adults of all ages, as well as adolescents.
Their findings have veered well away from the original conception of impostorism as a reflection of an anxious personality or a cultural stereotype. Feelings of phoniness appear to alter people’s goals in unexpected ways and may also protect them against subconscious self-delusions.
Questionnaires measuring impostor fears ask people how much they agree with statements like these: “At times, I feel my success has been due to some kind of luck.” “I can give the impression that I’m more competent than I really am.” “If I’m to receive a promotion of some kind, I hesitate to tell others until it’s an accomplished fact.”
Researchers have found, as expected, that people who score highly on such scales tend to be less confident, more moody and rattled by performance anxieties than those who score lower. […]
In short, the researchers concluded, many self-styled impostors are phony phonies: they adopt self-deprecation as a social strategy, consciously or not, and are secretly more confident than they let on.
“Particularly when people think that they might not be able to live up to others’ views of them, they may maintain that they are not as good as other people think,” Dr. Mark Leary, the lead author, wrote in an e-mail message. “In this way, they lower others’ expectations — and get credit for being humble.”
In a study published in September, Rory O’Brien McElwee and Tricia Yurak of Rowan University in Glassboro, N.J., had 253 students take an exhaustive battery of tests assessing how people present themselves in public. They found that psychologically speaking, impostorism looked a lot more like a self-presentation strategy than a personality trait.”
My emphasis, and here’s the link. The interesting thing to me is why exceeding expectations at a given accomplishment level is status-enhancing, compared to falling short of them. Anyway, this is one of the many ways that people who pretend to be humble brag – by downplaying expectations they increase the status associated with any given level of accomplishment. Very few people would consider a strategy aimed at improving expectations-forming mechanisms, so that expectations better match reality in the long run, a status-enhancing move.
Calvin: “I say it’s a fallacy that kids need 12 years of school! Three months is plenty!”
Calvin: “Look at me. I’m smart! I don’t need 11½ more years of school! It’s a complete waste of my time!”
Hobbes: “How on Earth did you get all the way to the bus stop with both feet through one pant leg?”
Calvin: “I fell down a lot.”
Calvin: “…Why? What’s your point?”
Hobbes: “Nothing. I was just curious.”
Calvin: “Look at all these ants.”
Calvin: “They’re all running like mad, working tirelessly all day, never stopping, never resting.”
Calvin: “And for what? To build a tiny little hill of sand that could be wiped out at any moment! All their work could be for nothing, and yet they keep on building. They never give up!”
Hobbes: “I suppose there’s a lesson in that.”
Calvin: “Yeah … Ants are morons. Let’s see what’s on TV.”
Calvin: “Tigers don’t worry about much, do they?”
Hobbes: “That’s one of the perks of being feral.”
Calvin: “I’m not having enough fun right now.”
Hobbes: “You’re not?”
Calvin: “I’m just having a little bit of fun. I should be having lots of fun.”
Calvin: “It’s Sunday. I’ve just got a few precious hours of freedom left before I have to go to school tomorrow.”
Calvin: “Between now and bedtime, I have to squeeze all the fun possible out of every minute! I don’t want to waste a second of liberty!”
Calvin: “Each moment I should be able to say, “I’m having the time of my life right now!'”
Calvin: “But here I am, and I’m not having the time of my life! Valuable minutes are disappearing forever, even as we speak! We’ve got to have more fun! C’mon!”
[Calvin and Hobbes start running away]
Hobbes: “I didn’t realize fun was so much work.”
Calvin: “Sure! When you’re serious about having fun, it’s not much fun at all.”
When I was a child, I sometimes felt like Calvin did in that last comic. I never do anymore. I guess it’s part of growing up. Reading a strip like this once you have grown up is a good way to remind yourself that this is something you’ve probably lost forever. I have read a lot of Calvin and Hobbes over the last couple of days. I really love that comic, but sometimes reading it really hurts. Some of it is a lot deeper than it lets on.
I tweeted this, but in case you missed it: Khan Academy has now added Art History to the list of subjects covered. 300 videos of it. I don’t know how many of my readers have an interest in that stuff (I don’t), but if you do – go knock yourself out! They write in the blogpost that: “we are incredibly excited to push the frontier on freely available content in the Arts and Humanities.” And I’m excited about that too. People really should not be paying a lot of money for this kind of stuff. Maybe if it’s available for free online – and presented at a site including other stuff as well, such as mathematics, physics etc., more young people will start to realize that…