I recently wanted to look up material on optimal information disclosure strategies in social settings – i.e. on how to make the implicit information sharing strategies people use more explicit, in order to better optimize them. The goal would be to better understand which (classes of personal) information to share with whom, at which point in time, etc. This stuff is hard – inappropriate information sharing, both oversharing and undersharing, as well as related issues such as (lack of) reciprocity, are common pitfalls in social settings, and given how social feedback systems tend to work, people are often not informed when they make errors of judgment in this area. I haven’t really found the sort of material I’ve been looking for, and I think it’s probably because I’m not looking in the right places (not using the right search terms). If readers know where to find such material I’d be interested to learn more – I have a comment section for a reason.
Here are some examples of what may happen when people don’t optimize:
The sketches are sufficiently exaggerated and sufficiently specific not to feel like personal attacks on people who engage in not-too-dissimilar strategies, which is why (some people think) they’re funny. Flawed information sharing strategies are not the only things that make these sketches funny, nor are information sharing strategies the only strategies applied suboptimally here; but they are an important part of the problem in quite a few of the sketches (do note that the non-verbal information shared is relevant as well). Do note also that the examples here cover one domain-specific application only; this stuff applies to friends, coworkers, acquaintances, and people you’ve never met before as well. I’m well aware that different strategies are optimal in different domains, even though different domains likely share many features at the (optimal) strategy level.
Anyway, coming up with good strategies seems to me to be really hard. I assume the fact that most online dating sites don’t seem to use user-uploaded videos even now, in the YouTube age, is probably a clue that this medium is highly likely to lead to oversharing. Maybe there’s a cost component as well (it’s easier to just write a bit about yourself), but I’m not convinced this explanation is satisfactory without adding coordination problems and the like (you don’t want to be the only one making a video, because that presumably makes you look desperate compared to the people who don’t?). I’m still a bit confused as to why videos aren’t more common in this area; they somehow seem efficient. Do privacy concerns drive this as well? I don’t know.
I tend to rely on ‘personal judgment’ regarding when to share what, and in which manner; but as mentioned I’ll often find it hard to tell if my ‘personal judgment’ is off, because I don’t really know very much about this stuff, and I rarely make an effort to ‘invite new people into my life’, so I don’t have a lot of experience either. Learning these skills requires a certain amount of trial and error, sure, but it should be possible to study them as well. To some extent I rely on implicit models of my own (‘personal judgment’ does include variables such as ‘time we’ve known each other’, ‘estimated degree of intimacy’, ‘information shared by the other party in the past’, etc.), but these models are likely flawed and incomplete, and they don’t contain much information about the dynamic elements of the equation, because that’s the stuff I find particularly hard to figure out: stuff like who is supposed to ‘escalate’ – and how, and ‘how far’, to escalate – when a desire exists to move the social relationship from one point to another on the implicit intimacy scale. Where might one find better models, or at least a conceptual treatment of this kind of stuff?
Or am I overthinking all of this and the implementation of near-optimal information sharing strategies is basically considered irrelevant by most people because only severe deviations from the norm are ever (surreptitiously) punished anyway? Social interaction stuff is very complex so this would make sense; if it’s easy to get things not-quite-right it’ll often be optimal for the other party to allow for a wide margin of error.
A problem I have with the explanation in the above paragraph is that even if the level of model complexity involved here is staggering, most people do seem to engage to some extent in such optimization processes anyway – using whichever sources of information they consider to be reliable and informative (for example I’m aware that some subreddits are filled with this kind of stuff). They wouldn’t do this if ‘semi-normal’ deviations from ‘acceptable behaviour’ didn’t matter, so on some ‘relevant’ margins they clearly do.
i. Alone in the Crowd: The Structure and Spread of Loneliness in a Large Social Network, by Cacioppo, Fowler, and Christakis.
“The discrepancy between an individual’s loneliness and the number of connections in a social network is well documented, yet little is known about the placement of loneliness within, or the spread of loneliness through, social networks. We use network linkage data from the population-based Framingham Heart Study to trace the topography of loneliness in people’s social networks and the path through which loneliness spreads through these networks. Results indicated that loneliness occurs in clusters, extends up to three degrees of separation, is disproportionately represented at the periphery of social networks, and spreads through a contagious process. The spread of loneliness was found to be stronger than the spread of perceived social connections, stronger for friends than family members, and stronger for women than for men.”
I almost fell off my chair when I read the first half of this sentence: “The average person spends about 80% of waking hours in the company of others, and the time with others is preferred to the time spent alone (Emler, 1994; Kahneman, Krueger, Schkade, Schwarz, & Stone, 2004).” I really shouldn’t have been all that surprised, because I’d seen numbers on related stuff before (“the percentage of people living in single person households [in Denmark is] 20,3 %.”) – here’s another link (in Danish), according to which 33% of adult Danes above the age of 25 do not have a cohabiting partner. As can be inferred from the other estimate, only a subset of these people actually live alone; note also that not all people who do not ‘live alone’ actually interact socially with the people with whom they live – I’m a case in point, as for various reasons it’s exceedingly rare that I interact socially with my roommate. It’s presumably not surprising that someone like me would tend to underestimate how much time most normal people spend in the company of others during an average day, but the magnitude of the difference did catch me by surprise.
“Humans are an irrepressibly meaning-making species, and a large literature has developed showing that perceived social isolation (i.e., loneliness) in normal samples is a more important predictor of a variety of adverse health outcomes than is objective social isolation (e.g., (Cole et al., 2007; Hawkley, Masi, Berry, & Cacioppo, 2006; Penninx et al., 1997; Seeman, 2000; Sugisawa, Liang, & Liu, 1994). [...]
Loneliness has [...] been associated with the progression of Alzheimer’s Disease (Wilson et al., 2007), obesity (Lauder, Mummery, Jones, & Caperchione, 2006), increased vascular resistance (Cacioppo, Hawkley, Crawford et al., 2002), elevated blood pressure (Cacioppo, Hawkley, Crawford et al., 2002; Hawkley et al., 2006), increased hypothalamic pituitary adrenocortical activity (Adam, Hawkley, Kudielka, & Cacioppo, 2006; Steptoe, Owen, Kunz-Ebrecht, & Brydon, 2004), less salubrious sleep (Cacioppo, Hawkley, Berntson et al., 2002; Pressman et al., 2005), diminished immunity (Kiecolt-Glaser et al., 1984; Pressman et al., 2005), reduction in independent living (Russell, Cutrona, De La Mora, & Wallace, 1997; Tilvis, Pitkala, Jolkkonen, & Strandberg, 2000), alcoholism (Akerlind & Hornquist, 1992), depressive symptomatology (Cacioppo et al., 2006; Heikkinen & Kauppinen, 2004), suicidal ideation and behavior (Rudatsikira, Muula, Siziya, & Twa-Twa, 2007), and mortality in older adults (Penninx et al., 1997; Seeman, 2000).” [...]
“Lower levels of loneliness are associated with marriage (Hawkley, Browne, & Cacioppo, 2005; Pinquart & Sorenson, 2003), higher education (Savikko, Routasalo, Tilvis, Strandberg, & Pitkala, 2005), and higher income (Andersson, 1998; Savikko et al., 2005), whereas higher levels of loneliness are associated with living alone (Routasalo, Savikko, Tilvis, Strandberg, & Pitkala, 2006), infrequent contact with friends and family (Bondevik & Skogstad, 1998; Hawkley et al., 2005; Mullins & Dugan, 1990), dissatisfaction with living circumstances (Hector-Taylor & Adams, 1996), physical health symptoms (Hawkley et al., In press), chronic work and/or social stress (Hawkley et al., In press), small social network (Hawkley et al., 2005; Mullins & Dugan, 1990), lack of a spousal confidant (Hawkley et al., In press), marital or family conflict (Jones, 1992; Segrin, 1999), poor quality social relationships (Hawkley et al., In press; Mullins & Dugan, 1990; Routasalo et al., 2006), and divorce and widowhood (Dugan & Kivett, 1994; Dykstra & De Jong Gierveld, 1999; Holmen, Ericsson, Andersson, & Winblad, 1992; Samuelsson, Andersson, & Hagberg, 1998). [...] When people feel lonely, they tend to be shyer, more anxious, more hostile, more socially awkward, and lower in self esteem (e.g., (Berscheid & Reis, 1998; Cacioppo et al., 2006)).”
ii. Loneliness matters: a theoretical and empirical review of consequences and mechanisms, by Hawkley & Cacioppo. From the article:
“A growing body of longitudinal research indicates that loneliness predicts increased morbidity and mortality [12–19]. The effects of loneliness seem to accrue over time to accelerate physiological aging. For instance, loneliness has been shown to exhibit a dose–response relationship with cardiovascular health risk in young adulthood. [...] The impact of loneliness on cognition was assessed in a recent review of the literature. Perhaps the most striking finding in this literature is the breadth of emotional and cognitive processes and outcomes that seem susceptible to the influence of loneliness. Loneliness has been associated with personality disorders and psychoses [23–25], suicide, impaired cognitive performance and cognitive decline over time [27–29], increased risk of Alzheimer’s Disease, diminished executive control [30, 31], and increases in depressive symptoms [32–35]. The causal nature of the association between loneliness and depressive symptoms appears to be reciprocal” [...]
“Our model of loneliness [8, 9] posits that perceived social isolation is tantamount to feeling unsafe, and this sets off implicit hypervigilance for (additional) social threat in the environment. Unconscious surveillance for social threat produces cognitive biases: relative to nonlonely people, lonely individuals see the social world as a more threatening place, expect more negative social interactions, and remember more negative social information. Negative social expectations tend to elicit behaviors from others that confirm the lonely persons’ expectations, thereby setting in motion a self-fulfilling prophecy in which lonely people actively distance themselves from would-be social partners even as they believe that the cause of the social distance is attributable to others and is beyond their own control. This self-reinforcing loneliness loop is accompanied by feelings of hostility, stress, pessimism, anxiety, and low self-esteem and represents a dispositional tendency that activates neurobiological and behavioral mechanisms that contribute to adverse health outcomes. [...]
Loneliness differences in immunoregulation extend beyond inflammation processes. Loneliness has been associated with impaired cellular immunity as reflected in lower natural killer (NK) cell activity and higher antibody titers to the Epstein Barr Virus and human herpes viruses [70, 80–82]. In addition, loneliness among middle-age adults has been associated with a smaller increase in NK cell numbers in response to the acute stress of a Stroop task and a mirror tracing task. In young adults, loneliness was associated with poorer antibody response to a component of the flu vaccine, suggesting that the humoral immune response may also be impaired in lonely individuals.”
iii. One of the studies also cited above: Women, Loneliness, and Incident Coronary Heart Disease, by Thurston & Kubzansky.
“To examine associations between loneliness and risk of incident coronary heart disease (CHD) over a 19-year follow-up period in a community sample of men and women. [...]
Hypotheses were examined using data from the First National Health and Nutrition Survey and its follow-up studies (n = 3003). Loneliness, assessed by one item from the Center for Epidemiologic Studies of Depression scale, and covariates were derived from baseline interviews. Incident CHD was derived from hospital records/death certificates over 19 years of follow-up. Hypotheses were evaluated, using Cox proportional hazards models. [...]
Among women, high loneliness was associated with increased risk of incident CHD (high: hazard ratio = 1.76, 95% Confidence Interval = 1.17-2.63; medium: hazard ratio = 0.98, 95% Confidence Interval = 0.64-1.49; reference: low), controlling for age, race, education, income, marital status, hypertension, diabetes, cholesterol, physical activity, smoking, alcohol use, systolic and diastolic blood pressures, and body mass index. Findings persisted additionally controlling for depressive symptoms. No significant associations were observed among men.”
(The last sentence may be important.)
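A quick back-of-the-envelope check of my own (not from the paper): a hazard ratio’s confidence interval is symmetric on the log scale, so the reported HR and 95% CI for the high-loneliness group imply a standard error, z-statistic, and approximate p-value via the usual normal approximation:

```python
import math

# Reported values for the high-loneliness group (women): HR and 95% CI.
hr, ci_lo, ci_hi = 1.76, 1.17, 2.63

# The 95% CI is symmetric on the log scale: log(hr) +/- 1.96 * se.
se = (math.log(ci_hi) - math.log(ci_lo)) / (2 * 1.96)
z = math.log(hr) / se
p = math.erfc(z / math.sqrt(2))  # two-sided p-value, normal approximation

print(round(se, 3), round(z, 2), round(p, 3))  # -> 0.207 2.74 0.006
```

The implied p-value of roughly 0.006 is consistent with the high-loneliness interval excluding 1, while the medium-loneliness interval (0.64–1.49) straddles 1 – which fits the pattern of results the abstract reports.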
iv. The clinical significance of loneliness: A literature review, by Heinrich & Gullone. You can skip the first part without missing out on anything, but there’s a lot of useful stuff later on, so you shouldn’t give up on the paper just because the first part isn’t very good (IMO). I’ve quoted extensively from the paper because there’s a lot of stuff in there – from the article:
“With a particular focus on the adolescent developmental period, this review is organized into five sections: Drawing on developmental and evolutionary psychology theories, the nature of social relationships and the function they serve is first discussed. In the second section, loneliness is introduced as an exemplar of social relationship deficits. Here a definition of loneliness is provided, as well as an explanation of why it may pose a situation of concern. This is followed by a review of the prototypic features of loneliness through examination of its affective, cognitive, and behavioral correlates. The fourth section includes a review of theories related to the antecedent and maintenance factors involved in loneliness. Finally, methodological and theoretical considerations are addressed, and conclusions and proposals for future research directions are put forth.” [...]
“Empirical evidence [...] suggests that lonely and nonlonely people do not differ in either the daily activities they engage in, or in the amount of time they spend alone (e.g., see Hawkley et al., 2003).” [My initial reaction is to be very skeptical about that claim/finding.] “Thus, loneliness is clearly distinguishable from the objective state of solitude, social isolation, or being alone. Indeed, in a study examining adolescents’ perceptions of loneliness and aloneness, Buchholz and Catton (1999) found that loneliness was described as an aversive state arising from a sense of yearning for another person(s), and associated with negative feelings such as sadness and hopelessness. In contrast, however, aloneness was not viewed negatively. [...but I'm aware that this distinction is relevant and may be important.] In fact, whereas loneliness is by definition an undesirable condition, aloneness or solitude may actually be a desirable or positive condition fostering creativity, facilitating self-reflection, self-regulation, identity formation, concentration, thinking, and learning (Buchholz & Catton, 1999; Fromm-Reichmann, 1959; Larson, 1999; Larson, Csikszentmihalyi, & Graef, 1982; Storr, 1988; Winnicott, 1958). Burger (1995) and Larson (see Larson, 1999, for a review) have shown that college students and adolescents, respectively, may seek and appreciate solitude for such positive reasons, rather than as a means of avoiding possibly anxiety-provoking social interactions. However, Larson has also shown that while solitude may be associated with cognitive benefits, such as increased concentration, these benefits come at the cost of lowered mood states (e.g., sadness, irritability, loneliness, and boredom).” [...]
“loneliness has been found to be significantly associated with shyness, neuroticism, social withdrawal, and a lower frequency of dating, as well as extracurricular and religious participation (Hojat, 1982b; Horowitz, French, & Anderson, 1982; Jones, Freemon, & Goswick, 1981; Russell et al., 1980; Stephan, Faeth, & Lamm, 1988). Associations between loneliness and poorer social interaction quality have also been demonstrated (Hawkley et al., 2003, Jones et al., 1982, Rotenberg, 1994; Segrin, 1998; Wheeler et al., 1983). For example, Hawkley et al. (2003) found loneliness to be related to less positive and more negative feelings during social interactions. More specifically, loneliness was significantly correlated with less intimacy, comfort, and understanding, and more caution, distrust, and conflict. Importantly, Hawkley et al. also demonstrated that these effects of loneliness on social interaction quality were present after controlling for depressed affect and neuroticism.
Perhaps not surprisingly then, loneliness has also been linked to low social competence, peer rejection and victimization, a lack of high quality friendships, and more negative appraisals of social support (Crick & Ladd, 1993; Kochenderfer & Ladd, 1996; Parker & Asher, 1993; Riggio, Watring, & Throckmorton, 1993; Rubin & Mills, 1988). Larson (1999) has also observed that lonely adolescents are rated by parents and teachers as less well-adjusted. Moreover, loneliness has been found to be associated with higher school dropout rates (Asher & Paquette, 2003), poor academic performance (Larson, 1999; Rotenberg, 1999b; Rotenberg & Morrison, 1993), and juvenile delinquency (Brennan, 1982). However, perhaps most pertinent to the issue of psychosocial problems is the consistent finding that loneliness is associated with low self-esteem (Brage, Meredith, & Woodward, 1993; Hymel, Rubin, Rowden, & LeMare, 1990; Jones, 1982; Larson, 1999; Moore & Sermat, 1974; Olmstead, Guy, O’Mally, & Bentler, 1991; Paloutzian & Ellison, 1982; Schultz & Moore, 1988). Yet, despite the typically lower self-esteem of lonely people, Cacioppo et al. (2000) have reported that lonely people have no less social capital to offer than nonlonely people.” [...]
“it would appear lonely people experience predominantly negative affect, which can be summarized as four clusters of feelings: desperation, depression, impatient boredom, and self-deprecation. [...] while longitudinal investigations (e.g., Brage & Meredith, 1994; Cutrona, 1982; Olmstead et al., 1991) have suggested that low self-esteem plays a causal role in the development and maintenance of loneliness, it is likely that a reciprocal relationship exists between loneliness and low self-esteem (Peplau, Miceli et al., 1982). To elaborate, since social relationships constitute a major aspect of people’s self-conceptions (Parkhurst & Hopmeyer, 1999; Peplau, Miceli et al., 1982; Sippola & Bukowski, 1999), and given its relationship with social relationship deficiencies, loneliness may lead to negative self-conceptions thereby undermining one’s self-regard (Peplau, Miceli et al., 1982), and resulting in a vicious cycle wherein low self-esteem and loneliness reinforce one another.
Not surprisingly then, lonely people have been found to view themselves in a negative and self-depreciating manner, believing that they are inferior, worthless, unattractive, unlovable, and socially incompetent (Horowitz et al., 1982; Jones et al., 1981; Jones & Moore, 1987; Jones, Sansone, & Helm, 1983; Paloutzian & Ellison, 1982; Rubenstein & Shaver, 1982; Spitzberg & Canary, 1985; Zakahi & Duran, 1982, 1985). Lonely people have also been observed to hold greater discrepancies than nonlonely people between their actual selves (i.e., how they believe they are) and their ideal selves (i.e., how they would ideally wish to be; Kupersmidt et al., 1999; Eddy, 1961, cited in Peplau, Miceli et al., 1982).
Unfortunately, given Gardner et al.’s (2000) assertion that “the arousal of social hunger may direct attention toward and bias memory for social cues” (p. 487), and their observation that failure to meet belongingness needs gives rise to selective retention of social information, self-conceptions may also be more salient for lonely people than nonlonely people. In support of this notion, loneliness has indeed been found to be associated with self-consciousness and a heightened degree of self-focus (Goswick & Jones, 1981; Jones, Cavert, Snider, & Bruce, 1985; Jones et al., 1981, 1982; Moore & Schultz, 1983). Moreover, Weiss (1973) has argued that these inclinations may result in a “tendency to misinterpret or exaggerate the hostile or affectionate intent of others” (p. 21). This is a contention that has been at least partially supported by Cutrona’s (1982) finding that lonely people are more sensitive to rejection.”
“Numerous studies have indicated that the social behavior of lonely individuals is marked by inhibited sociability and ineffectiveness. For example, lonely people are typically shy (e.g., Anderson & Harvey, 1988; Cacioppo et al., 2000; Cheek & Busch, 1981; Dill & Anderson, 1999; Hojat, 1982a; Jackson, Soderlind, & Weiss, 2000; Jones et al., 1981; Kalliopuska & Laitinen, 1991; Qualter & Munn, 2002), introverted (Cutrona, 1982; Hojat, 1982a; Jones et al., 1981; Kalliopuska & Laitinen, 1991), less affiliative/sociable (Cacioppo et al., 2000; Cutrona, 1982), and less willing to take social risks (Hojat, 1982a; Jones et al., 1981; Moore & Schultz, 1983). Lonely people also seem to be less assertive than nonlonely people (Bell & Daly, 1985; Cutrona, 1982; Gerson & Perlman, 1979; Hojat, 1982a; Jones et al., 1981; Sermat, 1980; Sloan & Solano, 1984). [...] Jones et al. (1982) have revealed that, at least in mixed-sex college student pairs, lonely people make more statements focusing on themselves, respond more slowly to their partner, ask fewer questions, and change the discussion topic more often than nonlonely people. Thus, the self-focused behavior which lonely people appear to engage in during social interactions may undermine relationship development, furthering feelings of loneliness. [...]
Rubenstein and Shaver (1980, 1982) have observed that people’s responses to loneliness tend to fall into four categories: active solitude (e.g., study or work, write, listen to music, exercise, walk, work on a hobby, go to a movie, read, play music), spending money (e.g., spend money, go shopping), social contact (e.g., call a friend, visit someone), and sad passivity (e.g., cry, sleep, sit and think, do nothing, overeat, take tranquilizers, watch television, drink or get ‘stoned’). In coping with loneliness, they found that severely lonely people characteristically adopt a ‘sad passivity’ coping strategy, whereas people who are infrequently lonely tend to adopt the other three strategies. [...] perceived social skills are affected by loneliness, with greater loneliness being associated with lower self-perceived social competence. Therefore, coping behavior is influenced by perceived social skills, which in turn are negatively affected by loneliness. [...] to summarize, lonely people appear to behave in a self-absorbed, socially ineffective manner towards others, and are typically passive when faced with loneliness and stress.”
v. A Meta-Analysis of Interventions to Reduce Loneliness, by Masi, Chen, Hawkley & Cacioppo.
“In summary, meta-analysis of the randomized group comparison studies revealed a small but significant effect of the interventions on loneliness. Of note, interventions that addressed maladaptive social cognition had a sizable mean effect compared to the other intervention types. [...] The current study used meta-analytic techniques to determine quantitatively whether the outcomes of loneliness interventions varied based on study design, intervention type, or other study characteristic. Compared to single-group pre-post and nonrandomized group comparison studies, randomized group comparison studies had a small but significant mean effect size (–0.198, p < .05). Within this group, the mean effect size for interventions that addressed maladaptive social cognition was larger than that for interventions that attempted to improve social skills, enhance social support, or increase opportunities for social interaction. A primary criterion for empirically supported therapies is that they demonstrate efficacy in randomized controlled trials (Chambless & Hollon, 1998). By this criterion, our meta-analysis suggests certain interventions, particularly those that use CBT, can reduce loneliness. [...]
With an intervention effect size of –0.198, the average treatment group scored 0.198 standard deviations lower in loneliness, which is equivalent to 8.05 × 0.198 = 1.59 units on the UCLA Scale. Thus, with the control group mean at 41.17, the reduction in loneliness in the average treatment group was equivalent to a decrease from 41.17 to 39.58 on the UCLA Loneliness Scale. [...] Because clinical significance is defined as “returning to normal functioning” (Jacobson, Roberts, Berns, & McGlinchey, 1999), a 1.59-point decrease in the UCLA Loneliness score clearly did not return study participants to the level of healthy, community-living individuals. Moreover, a meta-analysis of 302 social and behavioral intervention meta-analyses (reviewed in Lipsey & Wilson, 2001) showed that, on average, interventions in this field have generated a mean effect size of 0.50. A mean effect size of –0.198 falls in the bottom 15% of this distribution, suggesting that loneliness interventions to date have not attained the degree of efficacy achieved by interventions targeting other social and behavioral outcomes.”
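The effect-size arithmetic in the passage above is easy to verify (all numbers copied from the quote):

```python
sd_ucla = 8.05        # SD of UCLA Loneliness Scale scores (from the quote)
effect_size = 0.198   # absolute mean effect size of the interventions
control_mean = 41.17  # control-group mean on the UCLA scale

reduction = sd_ucla * effect_size
treated_mean = control_mean - reduction
print(round(reduction, 2), round(treated_mean, 2))  # -> 1.59 39.58
```

A reduction of about 1.59 points against a control mean of roughly 41 makes it easy to see why the authors describe the average intervention effect as small.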
I’ve completed the book. The last part of the book wasn’t that bad, though I remain unconvinced of some of the findings in these chapters because of methodological issues I have with the way they do things. (Here’s a link relevant to one of the issues I have: “Whether individual Likert items can be considered as interval-level data, or whether they should be treated as ordered-categorical data is the subject of considerable disagreement in the literature, with strong convictions on what are the most applicable methods. This disagreement can be traced back, in many respects, to the extent to which Likert items are interpreted as being ordinal data. [...] Non-parametric tests should be preferred for statistical inferences [...] While some commentators consider that parametric analysis is justified for a Likert scale using the Central Limit Theorem, this should be reserved for when the Likert scale has suitable symmetry and equidistance” – and I’m far from sure these requirements are met.) It was still interesting stuff, though I believe the chapters in the middle were the most interesting ones. I’d say that if your impression after reading some of the quotes I’ve posted is that ‘it’d be an interesting read’, you should probably read it. A few quotes from the last part of the book:
i. “until the late 1980s we had never conducted research on college students. When the NEO Personality Inventory was first published (Costa & McCrae, 1985), we heard from psychologists around the country who thought there was something wrong with our norms, because their student samples were far from average. Colleagues generously provided data from students at two West Coast universities, one East Coast, and one Southern university. A comparison of these data showed several striking effects: All the students differed substantially from our adult norms; all the subsamples showed very similar patterns; and men and women had parallel age trends (Costa & McCrae, 1989). The implication was that college students differed systematically from adults in the mean levels of many traits. [...] We began our careers looking for signs of adult development and found mainly stability. Examining what might be expected to be the most volatile time of life, the teenage years, we now find—mostly stability. As Figure 9.7 shows, adolescence appears to occupy a plateau before the important changes of the next decade. But there is one extremely important difference between this plateau and that seen after age 30. As Roberts and DelVecchio (2000) show, the stability of individual differences is inversely related to age. Costa, Herbst et al. (2000) studied 40-year-olds over a 6- to 9-year interval and reported a median retest correlation of .83 across the five factors. Over the 4 years of college, Robins et al. (2001) reported a substantially lower median retest correlation of .60, and the median retest for gifted 12-year-olds was only .38 (Costa, Parker, & McCrae, 2000).
What these dramatically lower stability coefficients mean is that adolescence really is a turbulent time in which the personality traits of any given individual may change considerably. But across individuals, there is no uniform trend. Some teenagers become more agreeable—more courteous, generous, and modest—as they go through junior high and high school, but an equal number become more antagonistic, belligerent, and arrogant. Similarly, individuals’ shifts in N, E, and C appear to yield no net effect on mean levels at this age.
The exception is O, on which both boys and girls show systematic change in mean level. [...] Social class certainly has marked effects on the life course, but there is little data on whether personality traits develop differently in different social groups. Physical health status in general has little effect on personality or its stability (Costa, Metter, & McCrae, 1994)” (from chapter 9)
“individuals were more compartmentalized when stress was high than when stress was low (t(13) = 2.71, p < .02). With the exception of the low vulnerability, minor events group, there was a tendency for all groups to be more compartmentalized when stress was high than when stress was low. [...] among those who were experiencing high levels of stress, greater compartmentalization was associated with less negative mood.
Although these data are correlational and must be interpreted cautiously, they are consistent with the notion that increases in compartmentalization may be an effective response to stressful life events. Individuals who have the flexibility to change their type of self-organization may experience less negative mood when stressful events occur.” (from chapter 11)
“Interpersonal behavior [...] involves the temporal coordination of behavior at divergent levels of analysis, from basic movements and utterances to broad action categories reflecting momentary goals and long-range plans. Even something as elemental as leaving a room, after all, requires that the room’s occupants coordinate their physical movements so as not to stumble over each other. As group action becomes more complex, the ability of group members to coordinate their activities in time becomes correspondingly more important.
To distinguish the dynamic aspects of coordination from its conventional interpretation, we employ the term synchronization. Synchronization refers to the fact that the actions, thoughts, and feelings of one person are temporally related to the actions, thoughts, and feelings of one or more other people. [...] In its most basic form, synchronization refers to the coupling of behavior patterns [...] Synchronization [...] is likely to become more difficult as the action in question becomes more complex. It may be impossible, for example, for two unacquainted people to synchronize their efforts sufficiently to assemble a mechanical device or create a piece of art. The ability to synchronize in more complex modes requires at least some semblance of concordance in the requisite internal states of each person. [...] The importance of similarity in facilitating synchronization is apparent with respect to stable characteristics such as attitudes, values, talents, temperament, and personality traits. Indeed, similarity with respect to such characteristics has been shown consistently to be among the strongest preconditions for interpersonal attraction (cf. Byrne, Clore, & Smeaton, 1986; Newcomb, 1961). By the same token, individuals avoid forming relationships with people who appear to be different from them in their personal characteristics (e.g., Rosenbaum, 1986). [...]
Even if someone’s internal state is readily detectable, it may prove difficult to modify one’s own state to match it. It is hard to change one’s cognitive style or temperament, for example, regardless of how pragmatic it would be to do so in preparing for an interaction with someone whose way of thinking and tempo of expression is markedly different from one’s own. There is evidence, for example, that differences in temperament can hinder effective emotional and behavioral coordination (e.g., Dunn & Plomin, 1990). In this sense, personality sets constraints on social interaction. People’s stable characteristics—traits, values, and the like—bias the choice of interaction partners and dictate the likely success of establishing relationships with those who are chosen.
But one can look at the process in reverse to ask how social interactions shape personality. Personality, after all, comes from somewhere. [...] We propose that individual differences are shaped by the history of social interactions. [...] In essence, the model envisions social interaction as a vehicle for coupling the dynamics of individuals. Each individual brings his or her personal dynamic tendencies to social interaction and attempts to synchronize these tendencies with his or her interaction partner. As a result of these attempts, social interaction revises the settings for each individual, or engraves entirely new settings, which then provide the foundation for subsequent social interactions. In principle, this reciprocal relation between settings of internal parameters and social interaction iterates continuously throughout social life. In reality, the engravings of some tendencies are likely to become particularly stable and thus resistant to modification in the ordinary course of social encounters. [...]
With respect to modeling human dynamics, the dynamical variable (x) can be interpreted as behavior. Changes in x thus reflect variation in the intensity of behavior. The control parameter, r, corresponds to internal states (e.g., personality traits, moods, values, etc.) that shape the person’s pattern of behavior (i.e., changes in x over time). [...] (α) corresponds to the strength of coupling and reflects the mutual interdependency of the relationship. When the fraction is 0, there is no coupling on the behavior level. When the fraction is 1, each person’s behavior is determined equally by his or her preceding behavior and the preceding behavior of the other person. [...] The main results [...of their simulations - US] were straightforward. In general, the degree of synchronization between partners’ behaviors increased both with α and similarity in r. This implies that similarity in internal states and interdependence can compensate for one another in achieving or maintaining a given level of synchronization. [...] Modeling the direct synchronization of control parameters is relatively straightforward. One need only assume that on each simulation step, the value of each person’s control parameter drifts somewhat in the direction of the value of the partner’s control parameter. The rate of this drift and the size of the initial discrepancy between the values of the respective control parameters determine how quickly the control parameters begin to match. This mechanism assumes that both interaction partners can directly estimate the settings of one another’s control parameters. In many types of relationships, considerable effort may be focused on communicating or inferring these settings (cf. Jones & Davis, 1965; Kunda, 1999; Nisbett & Ross, 1980; Wegner & Vallacher, 1977). Even with such effort, however, the exact values of the relevant control parameters may be difficult or impossible to determine.
Control parameters can also become synchronized through behavioral coordination. Research concerning the facial feedback hypothesis, for instance, has established that when people are induced to mechanically adopt a specific facial configuration linked to a particular mood (e.g., disgust), they tend also to adopt the corresponding affective state (e.g., Strack, Martin, & Stepper, 1988). This matching of internal states to overt behavior is enhanced when the behavior is interpersonal in nature. Even role playing, in which a person simply follows a behavioral script in social interaction, often produces pronounced changes in attitudes and values on the part of the role player (e.g., Zimbardo, 1970). [...]
Figure 12.1 shows the time course of synchronization as two maps progressively match each other’s control parameters [...] This simulation was run for relatively weak coupling (α = 0.25). The x-axis corresponds to time in simulation steps, and the y-axis portrays the value of the difference between the two maps. The thin line corresponds to the difference in the dynamic variables, whereas the thicker line corresponds to the difference in r. Over time, the difference in the respective control parameters of the two maps decreases and the maps become perfectly synchronized in their behavior. This suggests that attempting behavioral synchronization with weak levels of influence and control over one another’s behavior will facilitate matching of one another’s internal states.
Figure 12.2 shows the results when the simulation was run with a stronger value of coupling (α = 0.7). Note that although coordination in behavior develops almost immediately, the control parameters fail to converge, even after 1,000 simulation steps. This is because strong coupling causes full synchronization of behavior, even for maps with quite different control parameters. Once the behavior is in full synchrony, the two maps do not have a clue that their control parameters are different. Hence, if the coupling were removed, the dynamics of the two respective maps would immediately diverge. This result suggests that using very strong influence to obtain coordination of behavior may effectively hinder synchronization at a deeper level. More generally, there is an optimal level of influence and control over behavior in relationships. If influence is too weak, synchronization may fail to develop. Very strong influence, on the other hand, can prevent the development of a relationship based on mutual understanding and empathy. Although highly controlled partners may fully synchronize their behavior, they are unlikely to internalize the values of control parameters necessary to maintain such behavior in the absence of interpersonal influence. For such internalization to occur, intermediate levels of mutual influence would seem to be most effective.” (from the last chapter of the book)
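The coupled-map model in the quote above can be sketched in a few lines of code. To be clear, the quote does not give the exact update equations, so the logistic-map form, the parameter values, and the mismatch-driven drift rule below are my assumptions; the sketch only illustrates the mechanism being described: each person’s behavior is a mixture of their own dynamics and their partner’s, and internal parameters adjust only insofar as behaviors visibly mismatch.

```python
import numpy as np

def coupled_step(x, y, r_x, r_y, alpha):
    """One update of two behaviorally coupled logistic maps.

    Each person's next behavior mixes their own dynamics (weight 1 - alpha)
    with their partner's (weight alpha); alpha is the coupling strength.
    """
    fx = r_x * x * (1.0 - x)  # own dynamics, shaped by control parameter r
    fy = r_y * y * (1.0 - y)
    return (1.0 - alpha) * fx + alpha * fy, (1.0 - alpha) * fy + alpha * fx

def simulate(alpha, r_x=3.6, r_y=3.9, drift=0.01, steps=1000, seed=0):
    """Track |behavior difference| and |control-parameter difference| over time."""
    rng = np.random.default_rng(seed)
    x, y = rng.random(2)
    behav_diff, param_diff = [], []
    for _ in range(steps):
        x, y = coupled_step(x, y, r_x, r_y, alpha)
        # Assumed drift rule: each control parameter moves toward the other
        # only to the extent that behaviors mismatch. Once behavior is fully
        # entrained (the strong-coupling case), the mismatch signal vanishes
        # and deeper (parameter-level) synchronization stalls.
        gap = abs(x - y)
        r_x, r_y = (r_x + drift * (r_y - r_x) * gap,
                    r_y + drift * (r_x - r_y) * gap)
        behav_diff.append(gap)
        param_diff.append(abs(r_x - r_y))
    return behav_diff, param_diff
```

Running `simulate(0.25)` versus `simulate(0.7)` mirrors the contrast between Figures 12.1 and 12.2: by construction, strong behavioral entrainment suppresses the mismatch signal that drives the parameters together, which is the “strong influence hinders deeper synchronization” point.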
I like this book; it’s quite good. Some more quotes:
“Perhaps one of the most useful contributions of understanding the patterns of regional brain activity that characterize personality traits and clinical syndromes is the potential insight it provides into individual differences in cognitive capabilities and styles. Numerous studies have shown that activity in regions of the cortex specialized for particular modes of information processing predicts performance on tasks that benefit from that type of computation. In the vast majority of studies [...] increased activity is associated with better performance, whereas deficient activity is associated with decrements in performance. [...] Both anxious apprehension and anxious arousal types of anxiety, as well as depression, are characterized by specific cognitive biases and impairments [...] Anxiety in general has been strongly associated with an attentional bias to threatening stimuli [...] We have argued that these attentional biases toward threat-related stimuli dovetail with specializations of the right posterior region for visual and spatial attention, vigilance, and autonomic arousal, reflecting the activity of an emotional surveillance system (Nitschke et al., 2000). Our recent fMRI research suggests that this area may include temporal, parietal, and occipital regions of the right hemisphere (Compton et al., 2001; Miller, 2000). [...] In depression, deficits have been described for explicit memory, executive functions, and visuospatial skills [...] We have argued that decreased activity in prefrontal brain regions can account for many of the cognitive impairments in depression, including memory for material on tasks that require or benefit from information-organizing strategies, the ability to assess errors accurately, problem solving, and cognitive flexibility.
Depression has also been consistently associated with impairments on tasks associated with right posterior regions of the brain (e.g., Deldin, Keller, Gergen, & Miller, 2000; Keller et al., 2000; for review of earlier studies, see Heller & Nitschke, 1997). These findings are consistent with evidence that there is decreased activity in these brain regions (Banich et al., 1992; Liotti & Tucker, 1992; Otto, Yeo, & Dougher, 1987).” (from chapter 4)
“Schmidt (1999) reported that left and right anterior brain activity is differentially associated with sociability and shyness, respectively. From an incentive–threat perspective, sociability reflects an incentive-oriented view of other individuals (often major sources of reward), whereas shyness reflects a threat-oriented view of others (often, also, major sources of punishment). [...] Henriques and Davidson (1990, 1991) have shown that currently or previously depressed individuals (i.e., those who present a significant lack of incentive motivation) exhibit relatively less resting left frontal activity. In summary, there is a substantial set of studies demonstrating links between left- and right-sided anterior cortical activity with incentive and threat motivation sensitivity, respectively. [...] Given that one component of motivation is the detection, selection, and orientation toward incentives and threats, it appears likely that individual differences in the strength of these two systems would influence information processing in a way that is consistent with the stronger of the two systems under circumstances in which other factors are controlled for. [This hypothesis has been tested and it holds, at least to some extent. Such biological differences may be used to explain part of the inter-individual variation in e.g. risk preferences - US.]” (from chapter 5)
“In the models of development as they are related to personality, we use a combination of approaches. Here I consider three models of the development of personality: a trait or status model, a contextual or environmental model, and an interactional model. [...] The trait or status model is characterized by its simplicity. It holds to the view that a trait, or the status of the child at one point in time, is likely to predict a trait or status at a later point in time. A trait model is not interactive and does not provide for the effects of the environment. In fact, in the most extreme form, the environment is thought to play no role either in effecting its display or in transforming its characteristics. A particular trait may interact with the environment, but the trait is not changed by that interaction. [...] The prototypic environmental model holds that exogenous factors influence development. Two of the many problems in using this model are (1) our difficulty in defining what environments are and (2) the failure to consider the impact of environments throughout the lifespan. In fact, the strongest form of the developmental environmental or contextual model argues for the proposition that adaptation to current environment throughout the life course has a major influence on our behavior and on our personalities. Moreover, such a model is familiar to students of personality because it represents the idea that context, to a large degree, determines behavior. As environments change, so too does the individual (Lewis, 1997). This dynamic and changing view of environments and adaptation is in strong contrast to the earlier models of environments as forces that act on the individual and that act on the individual only in the early years of life. [...] 
in the study of personality development, even though we recognize that environments can cause both normal and abnormal behavior, we prefer to treat the person—to increase coping skills or to alter specific behaviors—rather than to change the environment (Lewis, 1997). Yet we can imagine the difficulties that are raised when we attempt to alter specific maladaptive behaviors in environments in which such behaviors are adaptive [...] The general environmental model that I have suggested (Lewis, 1997) holds that children’s behavior always is a function of the environment in which the behavior occurs, because the task of the individual is to adapt to its current environment.1 As long as the environment appears consistent, the child’s behavior will be consistent; if the environment changes, so, too, will the child’s behavior. It is the case that maladaptive environments produce both normal and abnormal behavior. From a developmental point of view, I would hold that maladaptive behavior is caused by maladaptive environments; if we change those environments, we may be able to alter the behavior. [...]
Although the trait model is most often used in research, the interactional model is usually held to be the one which, from a theoretical point of view, is most likely to account for the development and change in personality. This mismatch between theory and research has serious implications for the growth of our knowledge about human development. Interactional models vary; some researchers prefer to call them “interactional” and others “transactional” (Lewis, 1972; Sameroff & Chandler, 1975). All these models have in common the role of both child and environment in determining the course of development. In these models, the nature of the environment and the characteristics or traits of the child are needed to explain concurrent, as well as subsequent, behavior and adjustment. Such models usually require an active child and an active environment; however, they need not do so. What they do require is the notion that behavior is shaped by its adaptive ability and that this ability is related to environments. Maladaptive behavior may be misnamed because the behavior may be adaptive to a maladaptive environment. [...] Although there is some evidence in the developmental literature for an interactional approach, the data are not that strong (Lewis, 1999c). Moreover, without a consideration of the environment over time, it is still relatively unproven whether any interactional model accounts for more of the variance than does an environmental model alone. As is discussed subsequently, without the proper environmental measurement, any serious test of the various developmental models is not possible. [...]
earlier traits or characteristics have little relation to later personality characteristics and [...] the concurrent environment is most predictive of these characteristics. These results, in general, hold across most longitudinal studies. It has been labeled a simplex pattern, as prediction grows weak as the two age points increase in time. [...] I have approached the topic of continuity and discontinuity of personality characteristics by looking at the data in the early period of the lifespan. How do the data for the earlier part of the lifespan, the first two dozen years, agree with the data obtained across the whole lifespan? There appears to be general agreement that the correlations (or stability) of personality characteristics are quite low for the first 30–40 years of life but become stronger at later ages (Caspi & Roberts, 2001). This is explained in such terms as crystallization (Caspi & Roberts, 2001) [...] As a field, we have argued for a very powerful hypothesis. We have argued that personality characteristics of individuals are enduring across time and context. In this chapter, I have raised the question of whether or not the data from the past 75 years of development and study supports this powerful hypothesis. It is not supported in the first third of life, and the correlations are, overall, rather weak for the rest of the lifespan.” (from chapter 6)
“From biological perspectives, phenotypic (and thereby genotypic) variability is the raw material on which natural selection operates. Selection in general is seen as a homogenizing force that culls less optimal variants in favor of those that foster survival and reproduction (Williams, 1966). Members of any population differ from one another in what is seen as largely inconsequential ways; individual differences are noise in the evolutionary process that results from nonselective mechanisms such as mutation, recombination, and drift. Studies in evolutionary psychology accordingly focus on topics such as mate selection, with little consideration for individual-difference variables [...] This species-general approach to evolutionary psychology confronts long-held beliefs in other fields of psychology that individual differences are anything but inconsequential. Personality theorists, for example, have long shown how individual differences are related to adaptation across the lifespan. Only more recently has it been suggested that these differences are themselves related to reproductive success [...] Work in behavioral genetics has shown that a good portion of this variation in personal characteristics is heritable. Heritability is often considered to support evolutionary arguments, as “adaptations” must have a genetic component. Paradoxically, however, the more the variability in a population on a certain trait is due to genetic differences (i.e., heritability), the less likely it is that the trait is an “adaptation.” Naturally selected adaptations tend to have by definition very low heritabilities because there is little variation across individuals (e.g., a four-chambered heart). In other words, “heritable diversity is inversely proportional to adaptive importance” (Tooby & Cosmides, 1990, p. 49). [This is not news to me, but this is an important point perhaps not all readers are familiar with - US] [...]
Similar to several personality theories, theories of motivation [...] and models of social competence [...] resource control theory has the balancing of self and other goals at its core. This theoretical perspective is based on the assumption that humans, like other social species, should optimally behave in ways that facilitate personal resource acquisition while at the same time maintaining friendly bonds with other group members. The necessity of meeting one’s needs and simultaneously being a good group member underlies the evolution of much of human behavior and psychological organization: It implies that individuals must balance being egoistic and other-oriented [...] the presence of others intensifies intragroup competition for the very resources that the group acquires. In other words, “cooperative” relationships are inevitably contaminated by competition. [...]
McGuire and associates [...] have demonstrated that hierarchy ascendance is associated with serum serotonin elevations (Brammer, Raleigh, & McGuire, 1994; McGuire, Raleigh, & Brammer, 1984). In the presence of others, top-ranked male vervets had higher than average serotonin levels. When removed from the group, they returned to normal levels (i.e., dominance requires the presence of others). In the absence of the reigning alpha, a new male rose in the hierarchy and, accordingly, enjoyed elevated serotonin until the reigning male returned. The authors concluded that elevated serotonin was both a cause and effect of successful ascendance. The effects of serotonin on mood and behavior are well known: Low levels of serotonin are associated with low self-esteem, anxiety, and depression. Pharmacologically enhanced serotonin (e.g., serotonin reuptake inhibition) decreases anxiety and increases sociability and assertiveness [...] In a similar vein, social subordinance in nonhuman primates has been linked with hypercortisolism, an exaggerated stress response (Sapolsky, Alberts, & Altmann, 1997). [...]
Shahar, Grob, and Little (2001) examined the relationship between depression and the reported attainment of the goal of achieving an intimate relationship. In young adulthood (ages 18–39), there were no differences on depression—males and females who either had or had not achieved intimate-relationship status were similar in their low levels of depressive symptomology. In mid-age, on the other hand, substantial gaps appeared to emerge; in fact, all four groups differed. Males who had not attained intimate-relationship status showed the highest levels of depression, followed by females who had not, then males who had, and then females who had. By old age, the gender differences disappeared, but the effect of either having attained or not having attained an intimate relationship was pronounced. Those persons who had not established an intimate relationship reported greater depressive symptoms than those who had achieved intimacy with another person.” (from chapter 7)
“Most studies assume that genetics and individual family environment represent the only two influences on personality traits. [...] [Many] authors seem to assume that the environment stops at the door of the family home. As many theorists and researchers have argued, the larger sociocultural environment can have a substantial effect on personality and development [...] Examining “nonshared” environment (Plomin & Daniels, 1987) does not solve this problem, because twins (identical or fraternal) are necessarily the same birth cohort; thus the larger sociocultural environment is still “shared” environment (but “shared” environment outside of the family). Thus birth cohort might be a nongenetic explanation for the similarity between identical twins raised apart. [...] The fact that twins share a birth cohort also means that these studies have likely overestimated the heritability estimates for personality traits. Almost all of the studies examining genetic and environmental effects on personality have included samples of only one birth cohort (or samples very close in birth cohort). If more cohorts were included, then the variance in personality would likely increase. [...]
Working with my colleague Keith Campbell, I performed a meta-analysis on college students’ scores on the Rosenberg Self-Esteem Scale (RSE) between 1968 and 1994. We found that college students’ self-esteem scores increased about two thirds of a standard deviation over this 26-year period (Twenge & Campbell, 2001). Thus 1990s college undergraduates scored considerably higher in self-esteem than had their late-1960s counterparts. It is possible that today’s undergraduates actually do feel better about themselves compared with the college students of the 1960s. It could also be that college students now speak the language of self-esteem and understand that one is supposed to possess copious amounts of this supposedly precious substance. [...] In sum, the societal trend has been toward meeting new people more often and toward more fluid and changeable relationships. In this world, being extraverted is no longer a quirk of genetics; it is a virtual requirement (e.g., Whyte, 1956). When you are expected to leave your birth family at a young age and create an entirely new support system of your own, being outgoing becomes essential. Extraversion, in its classic form, is not simply about liking to be with people; it is about liking to be with many people (such as at a party) and being comfortable meeting new people. In modern society, both family and career depend, at least in part, on being extraverted. The available evidence suggests that extraversion began to increase in American college students beginning in the late 1960s. In a recent paper (Twenge, 2001a), I found that American undergraduates scored 0.79 to 0.97 standard deviations higher on extraversion scales in 1993 than in 1966 (the measures were the Extraversion scales of the Eysenck Personality Inventory and the Eysenck Personality Questionnaire). [...]
During the most recent time period included in [Twenge, 2001b], sex differences in assertiveness decreased from a male advantage of 0.40 standard deviations in 1968 to no difference (d = –.07) in 1993. Thus assertiveness, once a solid sex difference favoring men, is now a personality trait with no discernible differences between men and women.” (from chapter 8)
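The effect sizes quoted in this chapter (“two thirds of a standard deviation”, d = 0.40, d = –.07) are standardized mean differences. For readers who want to see the metric concretely, here is a minimal pooled-SD Cohen’s d; the numbers in the example are invented for illustration, not taken from Twenge’s data.

```python
def cohens_d(mean1: float, sd1: float, n1: int,
             mean2: float, sd2: float, n2: int) -> float:
    """Cohen's d: difference of group means in units of the pooled SD."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (mean1 - mean2) / pooled_var ** 0.5

# Hypothetical scale scores: two groups of 100, means 32.0 vs. 29.5,
# both with SD 5.0 -> a difference of half a pooled standard deviation.
# cohens_d(32.0, 5.0, 100, 29.5, 5.0, 100) -> 0.5
```

A d of 0.40 thus means the male and female distributions’ means were 0.40 pooled standard deviations apart; d = –.07 means they were essentially indistinguishable.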
I’m currently reading this book. A quote from the beginning of the first chapter which made me question whether I should even keep reading:
“we urge sensitivity to the potential limitations of some traditional scientific methods. The most distinguishing feature of persons is that they construct meaning by reflecting on themselves, the past, and the future. Many writers have questioned the assumptions that meaning construction can be adequately captured by the traditional methods of natural science or that persons can be construed as a collection of quantifiable personality variables that index essential qualities of the individual (Geertz, 1973, 2000; Polkinghorne, 1988; Shweder, 2000; Shweder & Sullivan, 1990; Taylor, 1989). They discourage the positing of abstract global tendencies, urging instead that personal qualities be studied within the specific physical, social, and cultural contexts that comprise the individual’s life (Kagan, 1998)—a theme that has been sounded by a variety of scholars throughout the history of the field (Shweder, 1999). These concerns are as much a part of personality science as are investigations that happily make these assumptions.”
I post this quote here to make clear that if they ‘go too much in that direction’ I’ll just stop reading (related to Miao’s recent comment). I’m not interested in crap; I’m interested in the science. But on another note, if that quote had made me stop reading I’d have missed out on some good stuff – so far it’s mostly been about the science. They haven’t gone too far in that direction yet – actually chapter 2 was quite technical – and I’ve read about a third of the book at this point. Chapter 2 was interesting, but if I had to quote from that one (In Search of the Genetic Engram of Personality) I’d have to include a lot of quotes like “studies have generated mixed results”, “studies [...] have produced negative results”, “The results of these studies have been mixed”, etc. – not a lot of convincing and ‘obviously true’ findings have surfaced yet; as the authors put it: “So what can be said to summarize the discussion in this chapter? In short, the story is complex.” I found chapter 3, on Individual Differences in Childhood Shyness, to be quite interesting too, and I’ll quote a bit from that below – it’s interesting also because stuff like this seems far more relevant to me (and perhaps also a few of the readers?) than does stuff on, say, the genetics of bipolar disorder or schizophrenia (slightly related link).
So, what about that shyness stuff?
“Shyness reflects a preoccupation of the self in response to, or anticipation of, novel social encounters. Although shyness is a ubiquitous phenomenon that a large percentage of adults have reported experiencing at some point in their lives (Zimbardo, 1977), a smaller percentage of adults and children (around 10–15%) are consistently anxious, quiet, and behaviorally inhibited during social situations, particularly unfamiliar social situations (see Cheek & Buss, 1981; Kagan, 1994). [...] many of these shy children exhibit a distinct pattern of behavioral and physiological responses during baseline conditions and in response to social challenge in infancy and through the early school-age years [...]
Current thinking suggests that the origins of shy behavior may be linked to the dysregulation of some components of the fear system (LeDoux, 1996; Nader & LeDoux, 1999). Fear is a highly conserved emotion that is seen across mammals. It is the study of fear that has produced the most reliable evidence to date concerning the neuroanatomical circuitry of emotion. There is a rich and growing literature from studies of conditioned fear in animals that suggests that the frontal cortex and forebrain limbic areas are important components of the fear system. The frontal cortex is known to play a key role in the regulation of fear and other emotions. This region is involved in the motor facilitation of emotion expression, the organization and integration of cognitive processes that underlie emotion, and the ability to regulate emotions (see Fox, 1991, 1994). The frontal region also appears to regulate forebrain sites involved in the expression of emotion. The amygdala (and central nucleus) is one such forebrain/limbic site. There are demonstrated functional anatomical connections between the amygdala and the frontal region. [...] The amygdala (particularly the central nucleus) is known to play a significant role in the autonomic and behavioral aspects of conditioned fear (LeDoux, Iwata, Cicchetti, & Reis, 1988). [...]
infants who exhibit a high degree of motor activity and distress in response to the presentation of novel auditory and visual stimuli during the first 4 months of life exhibit a high degree of behavioral inhibition and shyness during the preschool and early school-age years. There is, in addition, evidence to suggest that there may be a genetic etiology to inhibited behavior. [...]
In a series of studies with adults, Davidson and his colleagues have noted a relationship between the pattern of resting frontal EEG activity and affective style. Adults who exhibit a pattern of greater relative resting right frontal EEG activity are known to rate affective film clips more negatively [...] and are likely to be more depressed [...] than adults who exhibit greater relative resting left frontal EEG activity. [...] The startle response is a brain-stem- and forebrain-mediated behavioral response that occurs to the presentation of a sudden and intense stimulus, and its neural circuitry is well mapped (Davis, Hitchcock, & Rosen, 1987). Although the startle paradigm has been used extensively in studies of conditioned fear in animals, this measure has been adapted for studies concerning the etiology of anxiety in humans. [...] individual differences in the startle response are linked to affective style. For example, adults who score high on trait measures of anxiety (Grillon, Ameli, Foot, & Davis, 1993) and children who are behaviorally inhibited (Snidman & Kagan, 1994) are known to exhibit a heightened baseline startle response. [...]
These data [too many findings to quote here] suggest that children who are classified as temperamentally shy during the preschool and early school-age years exhibit a distinct pattern of frontal brain activity, heart rate, and salivary cortisol levels during baseline conditions and in response to stress. [...]
One of the goals of our research program on shyness has been to examine the developmental course and outcomes of temperamental shyness beyond the early childhood years, given that temperamental shyness appears to remain stable and predictive of developmental outcomes (Caspi et al., 1988). Overall, the behavioral and physiological correlates and outcomes associated with temperamentally shy children are comparable with those seen in adults who score high on trait measures of shyness. For example, adults who report a high degree of trait shyness are likely to report concurrent feelings of negative self-worth and problems with depression in both elderly (Bell et al., 1993) and young (Schmidt & Fox, 1995) adult populations and to display a distinct pattern of central and autonomic activity during resting conditions and in response to social stressors (see Schmidt & Fox, 1999, for a review). [...]
Not all temperamentally shy adults or children are alike. Our research suggests that different etiologies, correlates, and developmental outcomes are associated with individual differences in temperamental shyness [...] Cheek and Buss (1981) described at least two types of shyness in undergraduates: individuals who are shy and low in sociability and individuals who are shy and high in sociability. [...]
Buss (1986) presented a theory in which he argued that there may be at least two types of shyness: an early-developing fearful shyness that is linked to stranger fear and wariness (perhaps analogous to the behaviorally inhibited children described by Kagan, 1994) and a later-developing self-conscious shyness that is linked to concerns with self-presentation. Little empirical research, however, has been done to substantiate Buss’s theoretical model. Two studies that do exist in the literature have found support for Buss’s claim in young adults. [...] Schmidt and Robinson (1992) found differences in self-esteem between the two shyness subtypes; the fearfully shy group reported significantly lower self-esteem than the self-consciously shy and nonshy groups. [...]
Cheek and Buss argued that people avoid social situations for different reasons. Some people avoid social situations because they experience fear and anxiety in such situations (i.e., they are shy); others avoid social situations because they prefer to be alone rather than with others (i.e., they are introverted). Cheek and Buss (1981) then noted that if shyness is nothing more than low sociability, then the two traits should be highly related such that being high on one trait means being low on the other. The extent to which they might be orthogonal was an empirical question. Cheek and Buss (1981) noted that the two traits were only modestly related, and they were able to distinguish them on a behavioral level. High shy–high social undergraduates exhibited significantly more behavioral anxiety than did undergraduates who reported other combinations of shyness and sociability.
We (Schmidt, 1999; Schmidt & Fox, 1994) examined the extent to which shyness and sociability were distinguishable on electrocortical and autonomic measures. Using a design identical to that reported by Cheek and Buss (1981), we attempted to distinguish shyness and sociability on regional EEG, heart rate, and heart rate variability measures collected during baseline and during a social stressor. We found that high shy–high social (i.e., the conflicted subtype) undergraduates exhibited a significantly faster and more stable heart rate than high shy–low social (i.e., the avoidant subtype) participants in response to an anticipated unfamiliar social situation (Schmidt & Fox, 1994). [...] the two subtypes were distinguishable based on the pattern of activity in the left, but not the right, frontal area. High shy–high social (the conflicted subtype) participants exhibited significantly greater activity in the left frontal EEG lead than did high shy–low social (the avoidant subtype) participants. A similar pattern of resting frontal EEG activity has been found in high shy–high social and high shy–low social 6-year-olds (Schmidt & Sniderman, 2001) [...] These sets of findings, taken together, suggest that different types of shyness are distinguishable on behavioral, cortical, and autonomic levels during baseline conditions and in response to social challenge. [...]
We speculate that genes that code for the transportation of serotonin may play an important role in the regulation of some components of the fear system, which includes the frontal cortex, forebrain limbic area, and HPA system. Serotonin has been implicated as a major neurotransmitter involved in anxiety and withdrawal (Westernberg, Murphy, & Den Boer, 1996). Some temperamentally shy individuals may possess a genetic polymorphism that contributes to a reduced efficiency of the transportation of serotonin. Such a genetic polymorphism has been noted in adults who score high on measures of neuroticism (Lesch et al., 1996). [...] The reduction of serotonin contributes to overactivation of the amygdala and the HPA system in some individuals. The overactive amygdala stimulates the HPA system and the release of increased cortisol. This increase in cortisol may contribute to the pattern of frontal EEG activity noted earlier between shyness subtypes. [...]
When the two shy subtypes encounter actual or perceived social stress, there is an increase in heart rate, cortisol, and frontal EEG activity. The two subtypes will differ, however, in the pattern of behavior and left frontal EEG activity. The shy–social subtype will experience an approach-avoidance conflict and a greater increase in left frontal EEG activity; the shy–low-social subtype will not experience the same conflict, as they do not have the same need to affiliate. Thus this subtype will tend to avoid social situations and will not present with the same pattern of left frontal EEG activity, although they may evidence an increase in cortisol and heart rate. [...] we believe that the conflicted and avoidant subtypes may be on different pathways of developmental problems. Conflicted children are likely to be highly reticent, desiring to be a part of the peer group but having problems doing so and, we think, might be on a pathway to social anxiety; the avoidant child, on the other hand, may have problems simply engaging in any social situations and may avoid them altogether, desiring instead to be alone, and, we think, on a pathway to social withdrawal and depression.”
As already mentioned, so far the book has been quite interesting. We’re complicated creatures, and this kind of stuff belongs on the list of things you don’t think about but which still have major consequences for how you live your life, how you feel about the life you live, and where you end up.
“Summary. —The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force’s final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.”
I unfortunately can’t find an ungated copy of this paper online, but here’s a little more stuff from the paper:
“Cohen (1962) concluded, “Increased sample size is likely to prove the most effective general prescription for improving power” (p. 153), but there is little evidence that the field has taken note. After reviewing the literature, Holmes (1979) reported finding only two studies that examined sample sizes directly. One study reported the number of articles published about single-subject samples (Dukes, 1965), and the other examined sample sizes reported in two British journals, finding that every reported study had N ≤ 25 (Cochrane & Duffy, 1974).
Holmes (1979, 1983) himself examined sample sizes in four APA journals in 1955 and 1977, and reported median sample sizes for the total study and each of the comparison groups. His general conclusions were that sample size had not changed significantly between 1955 and 1977, and that the typical sample size in psychology did not seem large [...] the purpose of the present study was to examine sample sizes reported in the same four journals examined by Holmes (1979, 1983), but in more recent volumes. Two additional data collections were undertaken, one in 1995 (about the time the Task Force was formed), and the other in 2006 [...]
So yeah, the median sample size was 32 in 1995 and 40 in 2006. 25% of published studies had n=14 or less in 1995, and n=18 or less in 2006. The sample size that occurred most often in the 1995 sample was n=8; in 2006 it was n=16.
“Our modeling showed that sample size depends on the field. Smaller samples are needed in experimental settings, presumably because sufficient control of extraneous variation is in place, and standard errors tend to be smaller. (Higher cost per participant may also be a factor, due to sophisticated measurement equipment or laboratory controls.) However some fields, such as applied and developmental psychology, depend much more on quasi-experimental research because of their greater emphasis on comparisons of naturally occurring groups and ecological validity. Such research designs result in more variation in the data, and larger samples are necessary to gain feasible standard errors. (Lower cost per participant may also be a factor, because of the availability of institutional archival data.) [...]
We found that overall, the relatively small sample sizes found by Holmes did not increase significantly over the next 29 years. However, there was significant variability in the change in sample size over time by field, with increases from 1977 to 2006 appearing in the Journal of Abnormal Psychology and Developmental Psychology, and no change in Experimental Psychology or Applied Psychology (which actually showed a slight decrease for individual sample size).
The third hypothesis was that sample sizes remained unchanged after the Task Force report in 1999. A change would have been reflected in a significant difference in sample size between 1995 and 2006, but none was found. This result is not surprising, given previous research on power (e.g., Cohen, 1962; Sedlmeier & Gigerenzer, 1989; Rossi, 1990; Maddock & Rossi, 2001; Maxwell, 2004) and Holmes’ own studies on sample size (Holmes, 1979, 1983; Holmes, et al., 1981). However, it is troubling, especially when one considers the increased use of sophisticated multivariate analyses and statistical modeling techniques during this time that would require the employment of larger sample sizes (Merenda, 2007; Rodgers, 2010).”
Here’s a link to one of the ungated power studies mentioned in the paper.
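To get a feel for what those median sample sizes actually buy you, here’s a quick back-of-the-envelope power calculation. This is my own illustration, not from the paper: it uses a normal approximation to a two-sided two-sample test at α = 0.05, and it assumes the total N is split evenly into two groups and that the true effect is “medium” in Cohen’s terms (d = 0.5):

```python
# Approximate power of a two-sided two-sample z-test at alpha = 0.05,
# for a "medium" standardized effect (Cohen's d = 0.5).
from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_sample_power(d, n_per_group, z_crit=1.96):
    """Normal-approximation power for a two-sided two-sample test."""
    nc = d * sqrt(n_per_group / 2.0)  # noncentrality parameter
    return (1.0 - normal_cdf(z_crit - nc)) + normal_cdf(-z_crit - nc)

power_1995 = two_sample_power(0.5, 16)  # median N = 32, two groups of 16
power_2006 = two_sample_power(0.5, 20)  # median N = 40, two groups of 20
print(round(power_1995, 2), round(power_2006, 2))  # roughly 0.29 and 0.35
```

So even if a medium-sized effect is really there, the typical study in these samples would detect it only around a third of the time; Cohen’s 1962 prescription about increasing sample sizes evidently still applies.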
iv. “What [would happen] if I took a swim in a typical spent nuclear fuel pool? Would I need to dive to actually experience a fatal amount of radiation? How long could I stay safely at the surface?”
There’s a little background stuff on the subject here.
v. For some reason this picture touched me deeply (click to view full size):
vi. “Facebook killed TV.” – from this Paul Graham essay on Why TV Lost.
“We measured the personalities, values, and preferences of more than 19,000 people who ranged in age from 18 to 68 and asked them to report how much they had changed in the past decade and/or to predict how much they would change in the next decade. Young people, middle-aged people, and older people all believed they had changed a lot in the past but would change relatively little in the future. People, it seems, regard the present as a watershed moment at which they have finally become the person they will be for the rest of their lives. This “end of history illusion” had practical consequences, leading people to overpay for future opportunities to indulge their current preferences.”
Unfortunately I’ve not been able to find an ungated link, but here’s a bit more from the concluding remarks of the paper:
“Across six studies of more than 19,000 participants, we found consistent evidence to indicate that people underestimate how much they will change in the future, and that doing so can lead to suboptimal decisions. Although these data cannot tell us what causes the end of history illusion, two possibilities seem likely. First, most people believe that their personalities are attractive, their values admirable, and their preferences wise (10); and having reached that exalted state, they may be reluctant to entertain the possibility of change. People also like to believe that they know themselves well (11), and the possibility of future change may threaten that belief. In short, people are motivated to think well of themselves and to feel secure in that understanding, and the end of history illusion may help them accomplish these goals.
Second, there is at least one important difference between the cognitive processes that allow people to look forward and backward in time (12). Prospection is a constructive process, retrospection is a reconstructive process, and constructing new things is typically more difficult than reconstructing old ones (13, 14). The reason this matters is that people often draw inferences from the ease with which they can remember or imagine (15, 16). If people find it difficult to imagine the ways in which their traits, values, or preferences will change in the future, they may assume that such changes are unlikely. In short, people may confuse the difficulty of imagining personal change with the unlikelihood of change itself.
Although the magnitude of this end of history illusion in some of our studies was greater for younger people than for older people, it was nonetheless evident at every stage of adult life that we could analyze. Both teenagers and grandparents seem to believe that the pace of personal change has slowed to a crawl and that they have recently become the people they will remain. History, it seems, is always ending today.”
“Polygynous animals are often highly dimorphic, and show large sex-differences in the degree of intra-sexual competition and aggression, which is associated with biased operational sex ratios (OSR). For socially monogamous, sexually monomorphic species, this relationship is less clear. Among mammals, pair-living has sometimes been assumed to imply equal OSR and low frequency, low intensity intra-sexual competition; even when high rates of intra-sexual competition and selection, in both sexes, have been theoretically predicted and described for various taxa. Owl monkeys are one of a few socially monogamous primates. Using long-term demographic and morphological data from 18 groups, we show that male and female owl monkeys experience intense intra-sexual competition and aggression from solitary floaters. Pair-mates are regularly replaced by intruding floaters (27 female and 23 male replacements in 149 group-years), with negative effects on the reproductive success of both partners. Individuals with only one partner during their life produced 25% more offspring per decade of tenure than those with two or more partners. The termination of the pair-bond is initiated by the floater, and sometimes has fatal consequences for the expelled adult. The existence of floaters and the sporadic, but intense aggression between them and residents suggest that it can be misleading to assume an equal OSR in socially monogamous species based solely on group composition. Instead, we suggest that sexual selection models must assume not equal, but flexible, context-specific, OSR in monogamous species.”
You sort of want to extrapolate out of sample (/…out of species?) here, but be careful:
“Our findings differ from those reported for some monogamous birds, where remaining life-time reproductive success (i.e., the expected future gains) of the individual that initiates or tolerates a ‘divorce’ was higher than if it remained with its initial partner. For example, in kittiwakes (Rissa tridactyla) and many other pair-living birds, but also in some human societies, it is sometimes advantageous to ‘divorce’ if partners prove incompatible. In contrast, our data strongly indicate that break-ups were associated with factors extrinsic to the pair, and that partners did not voluntarily leave or “divorce” as it has been reported for birds, gibbons, and (in at least one case) brown titi monkeys (Callicebus brunneus). On the other hand, in some species (oystercatchers, Haematopus ostralegus), the reproductive success of stable pairs is not only higher, but there are also accrued benefits with increased duration of the pair-bond, independent of effects of age or experience. This was not the case for owl monkeys, since the number of offspring produced did not change with increased duration of the pair-bond (Fig. 2).”
ii. SMBC (click to view in a higher resolution):
“The ability to control fire was a crucial turning point in human evolution, but the question when hominins first developed this ability still remains. Here we show that micromorphological and Fourier transform infrared microspectroscopy (mFTIR) analyses of intact sediments at the site of Wonderwerk Cave, Northern Cape province, South Africa, provide unambiguous evidence—in the form of burned bone and ashed plant remains—that burning took place in the cave during the early Acheulean occupation, approximately 1.0 Ma. To the best of our knowledge, this is the earliest secure evidence for burning in an archaeological context.”
[Another reminder that SMBC is awesome: Here's a recent comic which is very handy here - it explains what a Fourier transform is, in case you don't know... (If you actually want to know there's always wikipedia...)]
iv. I never covered this here, and though some of you may already have read it, I thought I might as well link to Ed Yong’s write-up on replication studies in psychology, published in Nature last year. A few quotes from the article:
“Positive results in psychology can behave like rumours: easy to release but hard to dispel. They dominate most journals, which strive to present new, exciting research. Meanwhile, attempts to replicate those studies, especially when the findings are negative, go unpublished, languishing in personal file drawers or circulating in conversations around the water cooler. “There are some experiments that everyone knows don’t replicate, but this knowledge doesn’t get into the literature,” says Wagenmakers. The publication barrier can be chilling, he adds. “I’ve seen students spending their entire PhD period trying to replicate a phenomenon, failing, and quitting academia because they had nothing to show for their time.
These problems occur throughout the sciences, but psychology has a number of deeply entrenched cultural norms that exacerbate them. It has become common practice, for example, to tweak experimental designs in ways that practically guarantee positive results. And once positive results are published, few researchers replicate the experiment exactly, instead carrying out ‘conceptual replications’ that test similar hypotheses using different methods. This practice, say critics, builds a house of cards on potentially shaky foundations.
These problems have been brought into sharp focus by some high-profile fraud cases, which many believe were able to flourish undetected because of the challenges of replication. Now psychologists are trying to fix their field.”
Good luck with that. I don’t see a fix happening anytime soon. A few numbers:
“In a survey of 4,600 studies from across the sciences, Daniele Fanelli, a social scientist at the University of Edinburgh, UK, found that the proportion of positive results rose by more than 22% between 1990 and 2007 (ref. 3). Psychology and psychiatry, according to other work by Fanelli4, are the worst offenders: they are five times more likely to report a positive result than are the space sciences, which are at the other end of the spectrum [...]. The situation is not improving. In 1959, statistician Theodore Sterling found that 97% of the studies in four major psychology journals had reported statistically significant positive results5. When he repeated the analysis in 1995, nothing had changed6.”
But maybe other fields are just as bad? Well, as already mentioned the space sciences do better – and that goes for other fields too (though I’d say there seem to be major problems in many areas besides psychology and psychiatry):
A major problem here is that unless you’re actually a researcher in the field or know whom to ask, the file drawer effect can be completely invisible to you.
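To illustrate the point, here’s a toy simulation of my own (not from Yong’s article): suppose there is no true effect anywhere, and journals only accept “significant” results. Every published finding is then a false positive, and the published literature looks uniformly positive, while the studies that would reveal this sit in the file drawer:

```python
# File-drawer toy model: many null experiments, only "significant" ones
# get published. Uses a z-approximation to the two-sample t-test.
import random
from math import sqrt

random.seed(0)

def experiment(n=16, effect=0.0):
    """One two-group study; True if 'significant' (|z| > 1.96)."""
    a = [random.gauss(effect, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n - 1)
    z = (ma - mb) / sqrt(va / n + vb / n)
    return abs(z) > 1.96

runs = [experiment() for _ in range(2000)]
run_rate = sum(runs) / len(runs)    # around 5%: pure false positives
published = [r for r in runs if r]  # journals keep only the "positives"
pub_rate = sum(published) / len(published)  # 1.0 by construction
```

The point of the sketch: nothing about the published record (`pub_rate`) tells you what `run_rate` was, which is exactly why the effect is invisible unless you know about the unpublished attempts.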
v. Globalization of Diabetes – The role of diet, lifestyle, and genes. A new publication in Diabetes Care. As usual when they say ‘diabetes’ they mean ‘type 2 diabetes’. Some numbers from the article:
“According to the International Diabetes Federation (1), diabetes affects at least 285 million people worldwide, and that number is expected to reach 438 million by the year 2030, with two-thirds of all diabetes cases occurring in low- to middle-income countries. The number of adults with impaired glucose tolerance will rise from 344 million in 2010 to an estimated 472 million by 2030.
Globally, it was estimated that diabetes accounted for 12% of health expenditures in 2010, or at least $376 billion—a figure expected to hit $490 billion in 2030 (2). [...] Asia accounts for 60% of the world’s diabetic population. [Do note that this does not mean that Asian countries are on average overrepresented in the diabetes statistics. Asia also has roughly 60% of the World's population. - US] [...] In 1980, less than 1% of Chinese adults had the disease. By 2008, the prevalence had reached nearly 10% [...] in urban areas of south India, the prevalence of diabetes has reached nearly 20% [...] Compared with Western populations, Asians develop diabetes at younger ages, at lower degrees of obesity, and at much higher rates given the same amount of weight gain [...]
If current worldwide trends continue, the number of overweight people (BMI >25 kg/m^2) is projected to increase from 1.3 billion in 2005 to nearly 2.0 billion by 2030 (6). [...] the prevalence of overweight and obesity in Chinese adults increased from 20% in 1992 to 29.9% in 2002 (8) [...]
In the NHS (26), each 2-h/day increment of time spent watching television (TV) was associated with a 14% increase in diabetes risk. [...] Each 1-h/day increment of brisk walking was associated with a 34% reduction in risk [...] Cigarette smoking is an independent risk factor for type 2 diabetes. A meta-analysis found that current smokers had a 45% increased risk of developing diabetes compared with nonsmokers (29). Moreover, there was a dose-response relationship between the number of cigarettes smoked and diabetes risk. [That one I did not know about!] [...] Light-to-moderate alcohol consumption is associated with reduced risk of diabetes. A meta-analysis of 370,000 individuals with 12 years of follow-up showed a U-shaped relationship, with a 30–40% reduced risk of the disease among those consuming 1–2 drinks/day compared with heavy drinkers or abstainers (37). [...]
common variants of the TCF7L2 gene that are significantly associated with diabetes risk are present in 20–30% of Caucasian populations but only 3–5% of Asians [...] Conversely, a variant in the KCNQ1 gene associated with a 20–30% increased risk of diabetes in several Asian populations (43,44) is common in East Asians, but rare in Caucasians [...]
Several randomized clinical trials have demonstrated that diabetes is preventable. One of the first diabetes prevention trials was conducted in Daqing, China (58). After 6 years of active intervention, risk was reduced by 31, 46, and 42% in the diet-only, exercise-only, and diet-plus-exercise groups, respectively, compared with the control group. In a subsequent 14-year follow-up study, the intervention groups were combined and compared with control subjects to assess how long the benefits of lifestyle change can extend beyond the period of active intervention (59). Compared with control subjects, individuals in the combined lifestyle intervention group had a 51% lower risk of diabetes during the active intervention period, and a 43% lower risk over a 20-year follow-up.”
vi. Why chess sucks.
I’m now more than half-way through and I’m no longer in doubt this book is great, so I should make that clear right away.
There’s a lot of stuff about variables of interest and qualitative results, but not much on, say, effect sizes, statistical power, or similar matters. A lot of the studies covering these things involve WEIRD people. But it’s interesting stuff anyway, and the book is great at handling the conceptual stuff and telling you what people in the field find and how they arrive at the findings they do. I may post one more post about it, but I probably won’t; there’s just way too much good stuff to cover it all here, and I don’t want to struggle with the question of what to include and what not to include. You should just read the damn book.
Below some stuff from the book that I put into this post before I realized that I really shouldn’t blog this in that much detail:
“many individuals assume that they have adequately conveyed their attraction to a partner when in fact they have not. The signal amplification bias occurs when people believe that their overtures communicate more romantic interest to potential partners than is actually the case; consequently, they fail to realize that the partner may not be aware of their attraction (Vorauer, Cameron, Holmes, & Pearce, 2003). [...]
Most relationship scholars now agree that relationships develop gradually over time rather than by passing through a series of discrete stages. Process models suggest that relationship development is fueled by sometimes imperceptible changes in intimacy, self-disclosure, exchange of benefits and costs, and other interpersonal processes that occur between partners. [...]
it is not only the depth and the breadth of self-disclosure that propel a relationship along its developmental path but also how responsive each partner is to the other’s disclosures. Intimacy Theory, developed by psychologist Harry Reis and his colleagues (Reis, Clark, & Holmes, 2004; Reis & Patrick, 1996; Reis & Shaver, 1988), posits that attentive, supportive responses that leave the partner feeling validated, understood, cared for, and accepted promote the growth of intimacy and the subsequent development of the relationship. These responses may be of a verbal or a nonverbal nature. In their review of the literature, Karen Prager and Linda Roberts (2004; also see Prager, 2000) observed that an individual who is engaged in an intimate interaction displays a host of behavioral cues that signal attentiveness and responsiveness to the partner as well as positive involvement in the interaction. These include increased eye contact, more forward lean and direct body orientation, more frequent head nods, increased physical proximity, greater facial expressiveness, longer speech duration, more frequent or more intense interruptions, and more intense paralinguistic cues (e.g., speaking rate, tone of voice, pauses, silences, laughter). Recent research reveals that people do, in fact, interpret these behavioral cues as communicating validation, understanding, and caring—in short, responsiveness (see Maisel, Gable, & Strachman, 2008). [...] it is not simply the act of disclosing information or making personal revelations that contributes to relationship development. Rather, reciprocal and responsive disclosures that contribute to feelings of intimacy — in other words, verbal and nonverbal behaviors that reflect mutual perceptions of understanding, caring, and validation — are what encourage and sustain the growth of relationships. [...]
self-disclosure and intimacy appear to be integrally connected with both relationship satisfaction and stability. Research conducted with romantic partners and with friends generally reveals that people who self-disclose, who perceive their partners as self-disclosing, and who believe that their disclosures and confidences are understood by their partners experience greater satisfaction, closeness, commitment, need fulfillment, and love than people whose relationships contain lower levels of intimacy and disclosure (e.g., Laurenceau, Barrett, & Rovine, 2005; Meeks, Hendrick, & Hendrick, 1998; Morry, 2005; Prager & Buhrmester, 1998; Rosenfeld & Bowen, 1991; Sprecher & Hendrick, 2004). [...]
U.S. census data indicate that between the years 1935 and 1939, approximately 66% of men and 83% of women were married by the age of 25. Twenty years later, between 1955 and 1959, 51% of men and 65% of women were married by the time they reached 25 years of age. And two decades after this, between 1975 and 1979, only 37% of 25-year-old men and 50% of 25-year-old women were married (U.S. Census Bureau, 2007a). Currently, approximately one third of the adult U.S. population consists of single men and women who have never married; an additional 10% of adults are divorced and single (U.S. Census Bureau, 2007b, 2007c). [...]
recent surveys conducted in Turkey, Jordan, Yemen, Afghanistan, and Pakistan revealed that approximately 20% to 50% of all marriages were between first cousins (e.g., Gunaid, Hummad, & Tamim, 2004; Kir, Gulec, Bakir, Hosgonul, & Tumerdem, 2005; Sueyoshi & Ohtsuka, 2003; Wahab & Ahmad, 2005; Wahab, Ahmad, & Shah, 2006). [...]
More than 40 years ago, social scientist William Kephart (1967) asked a sample of young men and women whether they would marry someone with whom they were not in love if that person possessed all of the other qualities they desired in a spouse. More than one third (35%) of the men and three fourths (76%) of the women responded affirmatively—they were willing to marry without love. However, by the mid-1980s there was evidence of a dramatic shift in attitude. When psychologists Jeffrey Simpson, Bruce Campbell, and Ellen Berscheid (1986) asked a group of young adults the very same question, only 14% of the men and 20% of the women indicated that they would marry someone they did not love [...] A similar attitude shift is occurring around the world. In the mid-1990s another group of researchers (Levine, Sato, Hashimoto, & Verma, 1995) asked a large sample of adults from 11 countries to answer the question first posed by Kephart [...] the percentage of participants who said “no” in response to the question was as follows: United States (86%), England (84%), Mexico (81%), Australia (80%), Philippines (64%), Japan (62%), Pakistan (39%), Thailand (34%), and India (24%). [...] sociologist Fumie Kumagai (1995) reported that the ratio of arranged (miai) to love-based (renai) marriages in Japan shifted dramatically over the last half of the twentieth century. Specifically, during the time of World War II, approximately 70% of new marriages were arranged by parents whereas 30% were love-based or personal choice matches. By 1988, however, only 23% of new marriages were arranged; the rest either were completely love-based (75%) or reflected a combination of parental arrangement and personal choice (2%).
Data collected more recently reveal an even greater decline in the proportion of arranged marriages: among Japanese couples marrying in 2005, only 6.4% reported an arranged marriage (National Institute of Population and Social Security Research, 2005, as cited in Farrer, Tsuchiya, & Bagrowicz, 2008). Similar changes have been documented in other countries (e.g., China, Nepal; Ghimire et al., 2006; Xu & Whyte, 1990). [...]
longitudinal research consistently reveals that most newlywed couples (whether in their first or subsequent marriage) begin their married lives with a “honeymoon” period characterized by high amounts of satisfaction and well-being which then progressively decline during the next several years, stabilize for a period of time (often between the fourth and sixth years of marriage), and then continue to decline, assuming the couple stays together. In general, husbands and wives show the same changes in marital happiness. [...] A large literature about the impact of parenthood on marital quality exists, with the majority of studies finding that the transition to parenthood is marked by a reduction in marital satisfaction (e.g., Perren et al., 2005; for reviews, see Belsky, 1990, 2009; Sanders, Nicholson, & Floyd, 1997; Twenge, Campbell, & Foster, 2003). [...] there is some evidence that spouses’ marital satisfaction levels may increase once their children reach adulthood and leave home (see Gorchoff, John, & Helson, 2008). [...]
A vast body of social psychological research reveals that, as people go about their daily lives, they tend to interpret the situations they encounter and the events they experience in a decidedly self-centered, self-aggrandizing, and self-justifying way (Greenwald, 1980). For example, the majority of men and women possess unrealistically positive self-views—they judge positive traits as overwhelmingly more characteristic of themselves than negative traits; dismiss any unfavorable attributes they may have as inconsequential while at the same time emphasizing the uniqueness and importance of their favorable attributes; recall personal successes more readily than failures; take credit for positive outcomes while steadfastly denying responsibility for negative ones; and generally view themselves as “better” than the average person (and as better than they actually are viewed by others; for reviews, see Mezulis, Abramson, Hyde, & Hankin, 2004; Taylor & Brown, 1988). In addition, people often fall prey to an illusion of control consisting of exaggerated perceptions of their own ability to master and control events and situations that are solely or primarily determined by chance (e.g., Langer, 1975; for reviews, see Taylor & Brown, 1988; Thompson, 1999). Moreover, most individuals are unrealistically optimistic about the future, firmly believing that positive life events are more likely (and negative events are less likely) to happen to them than to others (Weinstein, 1980, 1984). [...] These cognitive processes, collectively known as self-serving biases or self-enhancement biases, not only function to protect and enhance people’s self-esteem (see Taylor & Brown, 1988, 1994) but also color perceptions of the events that occur in their closest and most intimate relationships.
For example, two early investigations (Ross & Sicoly, 1979; Thompson & Kelley, 1981) demonstrated that married individuals routinely overestimate the extent of their own contributions, relative to their spouses, to a variety of joint marital activities (e.g., planning mutual leisure activities, carrying the conversation, resolving conflict, providing emotional support, initiating discussions about the relationship). Moreover, they more readily call to mind instances of the specific ways in which they (as opposed to their partners) contribute to each activity.
Research also demonstrates that people tend to adopt a self-serving orientation when interpreting and responding to negative relationship events. [...] Although self-serving biases may benefit the individual partners by protecting their self-esteem, such cognitions may have additional, less-than-beneficial consequences for their relationship. [...]
People not only perceive their own attributes, behaviors, and future outcomes in an overly positive manner, but they also tend to idealize the characteristics of their intimate partners and relationships. Several relationship-enhancement biases have been identified. For example, research reveals a pervasive memory bias for relationship events, such that partners recall more positive experiences, fewer negative experiences, and greater improvement over time in relationship well-being than actually occurred (e.g., Halford, Keefer, & Osgarby, 2002; Karney & Coombs, 2000). [...]
Not only do people rewrite the history of their relationships, but they also tend to view those relationships (and their partners) in an overly positive manner (e.g., Barelds & Dijkstra, 2009; Buunk, 2001; Buunk & van der Eijnden, 1997; Murray & Holmes, 1999; Murray, Holmes, & Griffin, 1996a; Neff & Karney, 2002; Van Lange & Rusbult, 1995). A large body of research reveals that most of us:
● perceive our own relationships as superior to the relationships of other people;
● view our current partners more favorably than we view other possible partners;
● view our partners more positively than our partners view themselves;
● minimize any seeming faults that our partners possess by miscasting them as virtues (“Sure, she can seem kind of rude, but that’s because she’s so honest”) or downplaying their significance (“He’s not very communicative, but it’s no big deal. He shows his love for me in many other ways”);
● accentuate our partners’ virtues by emphasizing their overall impact on the relationship (“Because she is so honest, I know I can trust her completely—she will never give me any reason to doubt her love”). [...]
Together, these findings suggest that most people “see their partners through the filters provided by their ideals, essentially seeing them . . . as they wish to see them” (Murray et al., 1996a, p. 86).
The idealization effect is not limited to perceptions of romantic partners. Research indicates that parents view their children as possessing more positive qualities than the average child (Cohen & Fowers, 2004; Wenger & Fowers, 2008). Similarly, adults rate their friends more favorably than those friends rate themselves (Toyama, 2002). [...] In sum, people appear to see their partners as their partners see themselves—only better. [...]
Current evidence suggests that [...] Partners are happiest and most satisfied when they are realistically idealistic—that is, when they possess an accurate understanding of each other’s most self-relevant attributes but maintain an exaggeratedly positive view of each other’s overall character and their relationship.
I thought I should update the blog even though these days I don’t do a lot of blogging-worthy stuff.
i. A blog I recently discovered: Empirical Zeal. There are some interesting posts there; for example, I liked this one on the state of Indian rural education (though the findings reported are not exactly worthy of celebration).
ii. The acquisition of language by children. From the introduction:
“Imagine that you are faced with the following challenge. You must discover the internal structure of a system that contains tens of thousands of units, all generated from a small set of materials. These units, in turn, can be assembled into an infinite number of combinations. Although only a subset of those combinations is correct, the subset itself is for all practical purposes infinite. Somehow you must converge on the structure of this system to use it to communicate. And you are a very young child.
This system is human language. The units are words, the materials are the small set of sounds from which they are constructed, and the combinations are the sentences into which they can be assembled. Given the complexity of this system, it seems improbable that mere children could discover its underlying structure and use it to communicate. Yet most do so with eagerness and ease, all within the first few years of life.”
It’s actually pretty wild, once you start thinking about it.
iii. The Null Ritual – What You Always Wanted to Know About Significance Testing but Were Afraid to Ask (via Gwern? I no longer remember how I found this.). An excerpt from the article:
“Question 1: What Does a Significant Result Mean?
What a simple question! Who would not know the answer? After all, psychology students spend months sitting through statistics courses, learning about null hypothesis tests (significance tests) and their featured product, the p-value. Just to be sure, consider the following problem (Haller & Krauss, 2002; Oakes, 1986):
Suppose you have a treatment that you suspect may alter performance on a certain task. You compare the means of your control and experimental groups (say, 20 subjects in each sample). Furthermore, suppose you use a simple independent means t-test and your result is significant (t = 2.7, df = 18, p = .01). Please mark each of the statements below as “true” or “false.” False means that the statement does not follow logically from the above premises. Also note that several or none of the statements may be correct.
(1) You have absolutely disproved the null hypothesis (i.e., there is no difference between the population means). □ True □ False
(2) You have found the probability of the null hypothesis being true. □ True □ False
(3) You have absolutely proved your experimental hypothesis (that there is a difference between the population means). □ True □ False
(4) You can deduce the probability of the experimental hypothesis being true. □ True □ False
(5) You know, if you decide to reject the null hypothesis, the probability that you are making the wrong decision. □ True □ False
(6) You have a reliable experimental finding in the sense that if, hypothetically, the experiment were repeated a great number of times, you would obtain a significant result on 99% of occasions. □ True □ False
Which statements are true? If you want to avoid the I-knew-it-all-along feeling, please answer the six questions yourself before continuing to read. When you are done, consider what a p-value actually is: A p-value is the probability of the observed data (or of more extreme data points), given that the null hypothesis H0 is true, defined in symbols as p(D|H0). This definition can be rephrased in a more technical form by introducing the statistical model underlying the analysis (Gigerenzer et al., 1989, chap. 3). Let us now see which of the six answers are correct:
Statements 1 and 3: Statement 1 is easily detected as being false. A significance test can never disprove the null hypothesis. Significance tests provide probabilities, not definite proofs. For the same reason, Statement 3, which implies that a significant result could prove the experimental hypothesis, is false. Statements 1 and 3 are instances of the illusion of certainty (Gigerenzer, 2002).
Statements 2 and 4: Recall that a p-value is a probability of data, not of a hypothesis. Despite wishful thinking, p(D|H0) is not the same as p(H0|D), and a significance test does not and cannot provide a probability for a hypothesis. One cannot conclude from a p-value that a hypothesis has a probability of 1 (Statements 1 and 3) or that it has any other probability (Statements 2 and 4). Therefore, Statements 2 and 4 are false. The statistical toolbox, of course, contains tools that allow estimating probabilities of hypotheses, such as Bayesian statistics (see below). However, null hypothesis testing does not.
Statement 5: The “probability that you are making the wrong decision” is again a probability of a hypothesis. This is because if one rejects the null hypothesis, the only possibility of making a wrong decision is if the null hypothesis is true. In other words, a closer look at Statement 5 reveals that it is about the probability that you will make the wrong decision, that is, that H0 is true. Thus, it makes essentially the same claim as Statement 2 does, and both are incorrect.
Statement 6: Statement 6 amounts to the replication fallacy. Recall that a p-value is the probability of the observed data (or of more extreme data points), given that the null hypothesis is true. Statement 6, however, is about the probability of “significant” data per se, not about the probability of data if the null hypothesis were true. The error in Statement 6 is that p = 1% is taken to imply that such significant data would reappear in 99% of the repetitions. Statement 6 could be made only if one knew that the null hypothesis was true. In formal terms, p(D|H0) is confused with 1 – p(D). The replication fallacy is shared by many, including the editors of top journals. [...] To sum up, all six statements are incorrect. Note that all six err in the same direction of wishful thinking: They overestimate what one can conclude from a p-value. [...]
We posed the question with the six multiple-choice answers to 44 students of psychology, 39 lecturers and professors of psychology, and 30 statistics teachers [...] How many students and teachers noticed that all of the statements were wrong? As Figure 1 shows, none of the students did. [...] Ninety percent of the professors and lecturers also had illusions, a proportion almost as high as among their students. Most surprisingly, 80% of the statistics teachers shared illusions with their students.”
The article has much more.
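Statement 6 (the replication fallacy) is also easy to check numerically. Below is a minimal simulation sketch of my own (not from the article), which generously assumes that the true standardized effect is exactly the one implied by the observed t = 2.7 with 10 subjects per group (matching df = 18). Even under that flattering assumption, exact replications come out significant far less than 99% of the time.

```python
import math
import random

random.seed(42)

n = 10                            # subjects per group (df = 18, as in the example)
d = 2.7 * math.sqrt(2.0 / n)      # true standardized effect implied by t = 2.7
t_crit = 2.101                    # two-sided critical t for df = 18, alpha = .05

def one_replication():
    """Run one exact replication of the experiment and test at alpha = .05."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(d, 1.0) for _ in range(n)]
    mc, mt = sum(control) / n, sum(treated) / n
    vc = sum((x - mc) ** 2 for x in control) / (n - 1)
    vt = sum((x - mt) ** 2 for x in treated) / (n - 1)
    se = math.sqrt((vc + vt) / n)  # standard error for equal group sizes
    return abs(mt - mc) / se > t_crit

reps = 10_000
power = sum(one_replication() for _ in range(reps)) / reps
print(f"Replication rate: {power:.2f}")  # well below the claimed 0.99
```

The replication rate comes out around 70–75%, and that is the best case; if the true effect is smaller than the observed one (as it typically is, given publication of significant results), the rate drops further.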
iv. “More than 25% of the U.S. population aged [>65] years has diabetes (1), and the aging of the overall population is a significant driver of the diabetes epidemic. [...] The incidence of diabetes increases with age until about age 65 years, after which both incidence and prevalence seem to level off”. I should have known the first number was in that neighbourhood, but somehow I had failed to realize that it was that high; prevalence estimates are most often calculated/reported using the entire population in the denominator, and such estimates can be deceiving if you do not think about how they are calculated, which I clearly hadn’t. At least 1 in 4 in the above-65 age bracket. That’s a lot of people. The article doesn’t have a lot of data; it’s a ‘consensus report’ dealing mostly with various treatment guideline suggestions and similar stuff.
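To make the denominator point concrete, here is a toy calculation of my own. The only number taken from the report is the 25% prevalence in the 65+ bracket; the population split and the under-65 prevalence are invented purely for illustration:

```python
# Hypothetical numbers (only the 25% figure comes from the report):
pop_under_65 = 260_000_000   # assumed size of the under-65 population
pop_over_65 = 40_000_000     # assumed size of the 65+ population
cases_under_65 = 0.08 * pop_under_65   # assumed 8% prevalence under 65
cases_over_65 = 0.25 * pop_over_65     # the ">25% of 65+" figure

# Whole-population denominator: the headline number you usually see.
overall = (cases_under_65 + cases_over_65) / (pop_under_65 + pop_over_65)
print(f"Overall prevalence: {overall:.1%}")  # ~10%, masking the 25% in the 65+ group
```

The overall figure lands near 10%, so reading only the whole-population prevalence would badly understate the burden in the 65+ group.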
v. What is the most uncomfortable situation have you ever been put in- by a guy? Any kind of unwanted flirtation- or something of that nature (Reddit). Lots of really horrible stuff; reading stuff like this makes what might be perceived as some females’ ‘somewhat overcautious’ behaviour towards members of the opposite sex easier to understand. An example from the link:
“The last stranger-danger moment I will share tonight was at an end-of-midterms party sponsored by the student union at a local bar. I was there with my best friend, and she’s very pretty and very friendly, so we’d very quickly attracted a group of four or five men who were hanging around with us for most of the night. I hadn’t seen any of them before, so I assumed they were students from a different department, and we end up getting a table together and talking for a while. Once my friend mentions that she has a boyfriend, most of them shift their attention to me, though there’s one who still seems interested in her. As I’m talking to them, I find that they’re not students at our university, but that they’re a group of friends visiting from a couple of towns over. Nothing too creepy, so far.
My friend finishes her drink, so the guy she’s talking to goes to buy her another. She’s a little suspicious, so she starts drinking it VERY slowly. Meanwhile, I’m getting distracted talking to one of the guys who works in the same field I’ll be entering soon, and we end up talking for a while about that. He keeps telling me that I’m very beautiful, which I keep brushing off because I knew he was interested in my friend initially, and I was interested in someone else at the time, anyway. Somewhere in the middle of all this, my friend has stopped drinking the drink that was bought for her, and someone asks if she’s going to finish it. She says no.
Eventually, the guy I’m talking to apologizes for his “bad” English, saying that he hasn’t really had to use it since he was in school, which was OVER TEN YEARS AGO. At about the same time, my friend is telling the guy she’s talking to that it’s funny that they decided to visit our city on that particular weekend, because this is a student end-of-midterm party, and he answers, “I know. That’s kind of why we came here.” Someone else asks my friend if she’s going to finish her drink, and she says no, but he can have it if he wants. The drink ‘accidentally’ gets spilled in the process, and she’s signalling me to get the fuck out of there, so I take the opportunity to drag her to the bathroom. I start to notice that she’s acting really fucked up – she can usually drink a ton more than I can, and she’d only had one drink of her own and maybe a third (probably less than that, actually) of the one that guy bought for her. She says she thinks the drink they gave her was drugged, and then she gets sick. I ended up staying the night at her place to keep an eye on her, but I didn’t think to take her to the hospital or anything, so I guess we’ll never know what exactly happened…”
Of course if you’re like me you don’t engage in risky behaviours like drinking with strangers, in which case it doesn’t really matter much whether you’re male or female, but then again I’m not like normal people. Most males probably significantly underestimate how risky some of their behaviours – behaviours they would not ever even think of as ‘particularly risky’ – are when a female engages in them. Note that even males who fall into the “I can’t imagine you raising your voice” category (a female friend said this about me in a conversation earlier today) are likely to be affected by the behaviours of the (type of) males described in the link; once a female has been through situations like the ones described there, she’s less likely to give males the benefit of the doubt and more likely to misinterpret behaviour and the motivations driving it. Reading this stuff has made me believe that the behaviour of ‘overcautious’ females may be better justified and less ‘irrational’ than males tend to think it is.
vi. I haven’t commented on the new DSM-5 – let’s just say I’ve had better things to do. Here’s one take on it (“It’s arcane, contradictory and talks about invisible entities which no-one can really prove. Yes folks, the new psychiatric bible has been finalised.”). The most ‘relevant’ change to me is the fact that they’ll remove the Asperger Syndrome diagnosis, and instead merge it with other autism spectrum disorders. If you’re asking me what I think about that, the answer is that I don’t really care.
vii. Cheetahs on the Edge (via Ed Yong). A must-see:
“Using a Phantom camera filming at 1200 frames per second while zooming beside a sprinting cheetah, the team captured every nuance of the cat’s movement as it reached top speeds of 60+ miles per hour.
The extraordinary footage that follows is a compilation of multiple runs by five cheetahs during three days of filming.”
i. Temporal view of the costs and benefits of self-deception, by Chance, Norton, Gino, and Ariely. The abstract:
“Researchers have documented many cases in which individuals rationalize their regrettable actions. Four experiments examine situations in which people go beyond merely explaining away their misconduct to actively deceiving themselves. We find that those who exploit opportunities to cheat on tests are likely to engage in self-deception, inferring that their elevated performance is a sign of intelligence. This short-term psychological benefit of self-deception, however, can come with longer-term costs: when predicting future performance, participants expect to perform equally well—a lack of awareness that persists even when these inflated expectations prove costly. We show that although people expect to cheat, they do not foresee self-deception, and that factors that reinforce the benefits of cheating enhance self-deception. More broadly, the findings of these experiments offer evidence that debates about the relative costs and benefits of self-deception are informed by adopting a temporal view that assesses the cumulative impact of self-deception over time.”
A bit more from the paper:
“People often rationalize their questionable behavior in an effort to maintain a positive view of themselves. We show that, beyond merely sweeping transgressions under the psychological rug, people can use the positive outcomes resulting from negative behavior to enhance their opinions of themselves—a mistake that can prove costly in the long run. We capture this form of self-deception in a series of laboratory experiments in which we give some people the opportunity to perform well on an initial test by allowing them access to the answers. We then examine whether the participants accurately attribute their inflated scores to having seen the answers, or whether they deceive themselves into believing that their high scores reflect new-found intelligence, and therefore expect to perform similarly well on future tests without the answer key.
Previous theorists have modeled self-deception after interpersonal deception, proposing that self-deception—one part of the self deceiving another part of the self—evolved in the service of deceiving others, since a lie can be harder to detect if the liar believes it to be true (1, 2). This interpersonal account reflects the calculated nature of lying; the liar is assumed to balance the immediate advantages of deceit against the risk of subsequent exposure. For example, people frequently lie in matchmaking contexts by exaggerating their own physical attributes, and though such deception might initially prove beneficial in convincing an attractive prospect to meet for coffee, the ensuing disenchantment during that rendezvous demonstrates the risks (3, 4). Thus, the benefits of deceiving others (e.g., getting a date, getting a job) often accrue in the short term, and the costs of deception (e.g., rejection, punishment) accrue over time.
The relative costs and benefits of self-deception, however, are less clear, and have spurred a theoretical debate across disciplines (5–10). [...]
As we had expected, social recognition exacerbated self-deception: those who were commended for their answers-aided performance were even more likely to inflate their beliefs about their subsequent performance. The fact that social recognition, which so often accompanies self-deception in the real world, enhances self-deception has troubling implications for the prevalence and magnitude of self-deception in everyday life.”
ii. Nonverbal Communication, by Albert Mehrabian. Some time ago I decided that I wanted to know more about this stuff, but I haven’t really gotten around to it until now. It’s old stuff, but it’s quite interesting. Some quotes:
“The work of Condon and Ogston (1966, 1967) has dealt with the synchronous relations of a speaker’s verbal cues to his own and his addressee’s nonverbal behaviors. One implication of their work is the existence of a kind of coactive regulation of communicator-addressee behaviors which is an intrinsic part of social interaction and which is certainly not exhausted through a consideration of speech alone. Kendon (1967a) recognized these and other functions that are also served by implicit behaviors, particularly eye contact. He noted that looking at another person helps in getting information about how that person is behaving (that is, to monitor), in regulating the initiation and termination of speech, and in conveying emotionality or intimacy. With regard to the regulatory function, Kendon’s (1967a) findings showed that when the speaker and his listener are about to change roles, the speaker looks in the direction of his listener as he stops talking, and his listener in turn looks away as he starts speaking. Further, when speech is fluent, the speaker looks more in the direction of his listener than when his speech is disrupted with errors and hesitations. Looking away during these awkward moments implies recognition by the speaker that he has less to say, and is demanding less attention from his listener. It also provides the speaker with some relief to organize his thoughts.
The concept of regulation has also been studied by Scheflen (1964, 1965). According to him, a communicator may use changes in posture, eye contact, or position to indicate that (1) he is about to make a new point, (2) he is assuming an attitude relative to several points being made by himself or his addressee, or (3) he wishes to temporarily remove himself from the communication situation, as would be the case if he were to select a great distance from the addressee or begin to turn his back on him. There are many interesting aspects of this regulative function of nonverbal cues that have been dealt with only informally. [...]
One of the first attempts for a more general characterization of the referents of implicit behavior and, therefore, possibly of the behaviors themselves, was made by Schlosberg (1954). He suggested a three-dimensional framework involving pleasantness-unpleasantness, sleep-tension, and attention-rejection. Any feeling could be assigned a value on each of these three dimensions, and different feelings would correspond to different points in this three-dimensional space. This shift away from the study of isolated feelings and their corresponding nonverbal cues and toward a characterization of the general referents of nonverbal behavior on a limited set of dimensions was seen as beneficial. It was hoped that it could aid in the identification of large classes of interrelated nonverbal behaviors.
Recent factor-analytic work by Williams and Sundene (1965) and Osgood (1966) provided further impetus for characterizing the referents of implicit behavior in terms of a limited set of dimensions. Williams and Sundene (1965) found that facial, vocal, or facial-vocal cues can be categorized primarily in terms of three orthogonal factors: general evaluation, social control, and activity.
For facial expression of emotion, Osgood (1966) suggested the following dimensions as primary referents: pleasantness (joy and glee versus dread and anxiety), control (annoyance, disgust, contempt, scorn, and loathing versus dismay, bewilderment, surprise, amazement, and excitement), and activation (sullen anger, rage, disgust, scorn, and loathing versus despair, pity, dreamy sadness, boredom, quiet pleasure, complacency, and adoration). [...]
Scheflen (1964, 1965, 1966) provided detailed observations of an informal quality on the significance of postures and positions in interpersonal situations. Along similar lines, Kendon (1967a) and Exline and his colleagues explored the many-faceted significance of eye contact with, or observation of, another [...] These investigations consistently found, among same-sexed pairs of communicators, that females generally had more eye contact with each other than did males; also, members of both sexes had less eye contact with one another when the interaction between them was aversive [...] In generally positive exchanges, males had a tendency to decrease their eye contact over a period of time, whereas females tended to increase it (Exline and Winters, 1965). [...]
extensive data provided by Kendon (1967a) showed that observation of another person during a social exchange varied from about 30 per cent to 70 per cent, and that corresponding figures for eye contact ranged from 10 per cent to 40 per cent. [...]
Physical proximity, touching, eye contact, a forward lean rather than a reclining position, and an orientation of the torso toward rather than away from an addressee have all been found to communicate a more positive attitude toward him. A second set of cues that indicates postural relaxation includes asymmetrical placement of the limbs, a sideways lean and/or reclining position by the seated communicator, and specific relaxation measures of the hands or neck. This second set of cues relates primarily to status differences between the communicator and his addressee: there is more relaxation with an addressee of lower status, and less relaxation with one of higher status. [...]
In sum, the findings from studies of posture and position and subtle variations in verbal statements [...] show that immediacy cues primarily denote evaluation, and postural relaxation cues denote status or potency in a relationship. It is interesting to note a weaker effect: less relaxation of one’s posture also conveys a more positive attitude toward another. One way to interpret this overlap of the referential significance of less relaxation and more immediacy in communicating a more positive feeling is in terms of the implied positive connotations of higher status in our culture. A respectful attitude (that is, when one conveys that the other is of higher status) does indeed have implied positive connotations. Therefore it is not surprising that the communication of respect and of positive attitude exhibits some similarity in the nonverbal cues that they require. However, whereas the communication of liking is more heavily weighted by variations in immediacy, that of respect is weighted more by variations in relaxation.”
I should probably note here that while it makes a lot of sense to be skeptical of some of the reported findings in the book, simply getting an awareness of some of the key variables and some proposed dynamics may actually be helpful. I don’t know how deficient I am in these areas because I haven’t really given body language and similar stuff much thought; I assume most people haven’t/don’t, but I may be mistaken.
iii. A friend let me know about this resource and I thought I should share it here. It’s a collection of free online courses/lectures provided by Yale University.
iv. Prevalence, Heritability, and Prospective Risk Factors for Anorexia Nervosa. It’s a pretty neat setup: “During a 4-year period ending in 2002, all living, contactable, interviewable, and consenting twins in the Swedish Twin Registry (N = 31 406) born between January 1, 1935, and December 31, 1958, underwent screening for a range of disorders, including AN. Information collected systematically in 1972 to 1973, before the onset of AN, was used to examine prospective risk factors for AN.”
“Results: The overall prevalence of AN was 1.20% and 0.29% for female and male participants, respectively. The prevalence of AN in both sexes was greater among those born after 1945. Individuals with lifetime AN reported lower body mass index, greater physical activity, and better health satisfaction than those without lifetime AN. [...]
This study represents, to our knowledge, the largest twin study conducted to date of individuals with rigorously diagnosed AN. Our results confirm and extend the findings of previous studies on prevalence, risk factors, and heritability.
Consistent with several studies, the lifetime prevalence of AN identified by all sources was 1.20% in female participants and 0.29% in male participants, reflecting the typically observed disproportionate sex ratio. Similarly, our data show a clear increase in prevalence of DSM-IV AN (broadly and narrowly defined) with historical time in Swedish twins. The increase was apparent for both sexes. Hoek and van Hoeken3 also reported a consistent increase in prevalence, with a leveling out of the trajectory around the 1970s. Future studies in younger STR participants will allow verification of this observation.
Several observed differences between individuals with and without AN were expected, ie, more frequent endorsement of symptoms of eating disorders. Other differences are noteworthy. Consistent with previous observations, individuals with lifetime AN reported lower BMIs at the time of interview than did individuals with no history of AN. Although this could be partially accounted for by the presence of currently symptomatic individuals in the sample, our results remained unchanged when we excluded individuals likely to have current AN (ie, current BMI, ≤17.5). Previous studies have shown that, even after recovery, individuals with a history of AN have a low BMI.59 Although perhaps obvious, a history of AN appears to offer protection against becoming overweight. The protective effect also holds for obesity (BMI, ≥30), although there were too few individuals in the sample with histories of AN who had become obese for meaningful analyses. Despite the obvious nature of this observation, the mechanism whereby protection against overweight is afforded is not immediately clear. Those with a history of AN reported greater current exercise and a perception of being in better physical health. One possible interpretation of this pattern of findings is that individuals with a history of AN continue to display subthreshold symptoms of AN (ie, excessive exercise and caloric restriction) that contribute to their low BMIs. Alternatively, symptoms that were pathologic during acute phases of AN, such as excessive exercise and decreased caloric intake, may resolve over time into healthy behaviors, such as consistent exercise patterns and a healthful diet, that result in better weight control and self-rated health.
Regardless of which of these hypotheses is true, another intriguing difference is that individuals with lifetime AN report a lower age at highest BMI, although the magnitude of the highest lifetime BMI does not differ in those with and without a history of AN. Those with AN report their highest lifetime BMIs early in their fourth decade of life on average, whereas those without AN report their highest BMIs in the middle of their fifth decade of life (close to the age at interview). On a population level, adults tend to gain on average 2.25 kg (5 lb) per decade until reaching their eighth decade of life.60 Although more detailed data are necessary to make definitive statements about different weight trajectories, our results suggest not only that individuals with AN may maintain low BMIs but also that they may not follow the typical adult weight gain trajectories. These data are particularly intriguing in light of recent reports of AN being associated with reduced risk of certain cancers61-62 and protective against mortality due to diseases of the circulatory system.63-64 Energy intake is closely related to fat intake and obesity, both of which have also been related to cancer development65-66 and both of which are reduced in AN. Further detailed studies of the weight trajectories and health of individuals with histories of AN are required to explicate the nature and magnitude of these intriguing findings.
Of the variables assessed in 1972 to 1973, neuroticism emerged as the only significant prospective predictor of AN. This is notable because there have been few truly prospective risk factor studies of AN.”
v. The music is a bit much for me towards the end, but this is just an awesome video. I think I’d really have liked to know that guy:
vi. Political Sorting in Social Relationships: Evidence from an Online Dating Community, by Huber and Malhotra.
I found these data surprising (and I’m skeptical about the latter finding):
“Among paid content, online dating is the third largest driver of Internet traffic behind music and games (Jupiter Research 2011). A substantial number of marriages also result from interactions started online. For instance, a Harris Interactive study conducted in 2007 found that 2% of U.S. marriages could be traced back to relationships formed on eHarmony.com, a single online dating site (Bialik 2009).”
Anyway I’ll just post some data/results below and leave out the discussion (click to view tables in full size). Note that there are a lot of significant results here:
The last few figures are also interesting (people really care about that black/white thing when they date (online)…), but you can go have a look for yourself. As I’ve already mentioned there are a lot of significant results – they had a huge amount of data to work with (170,413 men and 132,081 women).
“Illusory superiority is a cognitive bias that causes people to overestimate their positive qualities and abilities and to underestimate their negative qualities, relative to others. This is evident in a variety of areas including intelligence, performance on tasks or tests, and the possession of desirable characteristics or personality traits. It is one of many positive illusions relating to the self, and is a phenomenon studied in social psychology.”
(wikipedia). Some data from the article as well as some of the sources and related material. Unless a link is provided, the quote/data is from the wiki article:
“When more than 90 percent of faculty members rate themselves as above-average teachers, and two-thirds rate themselves among the top quarter, the outlook for much improvement in teaching seems less than promising.” (link)
“social feedback tends to be incredibly misleading. Social psychologist David Sears has studied what he calls the “person-positivity bias”—people’s tendency to evaluate other people positively in the absence of any good reason not to.7 In an examination of student evaluations of their professors at UCLA, comprising literally hundreds of thousands of ratings, Sears found that the average was 7.22 on a nine-point scale. This is well above the midpoint of five, which was designated “average” on the evaluation forms.” (link)
“One of the first studies that found the effect of illusory superiority was carried out in 1976 by the College Board in the USA. A survey was attached to the SAT exams (taken by approximately one million students per year), asking the students to rate themselves relative to the median of the sample (rather than the average peer) on a number of vague positive characteristics. In ratings of leadership ability, 70% of the students put themselves above the median. In ability to get on well with others, 85% put themselves above the median, and 25% rated themselves in the top 1%.”
Self-esteem may have an important modifying role here:
“Extending the better than average effect, 3 studies examined self-, friend, and peer comparisons of personal attributes. Participants rated themselves as better off than friends, who they rated as superior to generalized peers. The exception was in direct comparisons, where the self and friends were not strongly differentiated on unambiguous negative attributes. Self-esteem and construal played moderating roles, with persons with high self-esteem (HSEs) exploiting both ambiguous positive and ambiguous negative traits to favor themselves. Persons lower in self-esteem exploited ambiguous positive traits in their favor but did not exploit ambiguous negative traits. Across self-esteem level, ratings of friends versus peers were exaggerated when attributes were ambiguous. HSEs seemed to take advantage of ambiguity more consistently to present favorable self-views; people with low self-esteem used ambiguity to favor their friends but were reluctant to minimize their own faults.” (link)
And similar findings are reported here:
“Three investigations are reported that examined the relation between self-appraisals and appraisals of others. In Experiment 1, subjects rated a series of valenced trait adjectives according to how well the traits described the self and others. Individuals displayed a pronounced “self-other bias,” such that positive attributes were rated as more descriptive of self than of others, whereas negative attributes were rated as less descriptive of self than of others. Furthermore, in contrast to C. R. Rogers’s (1951) assertion that high self-esteem is associated with a comparable regard for others, the tendency for individuals to evaluate the self in more favorable terms than they evaluated people in general was particularly pronounced among those with high self-esteem. These findings were replicated and extended in Experiment 2, where it also was found that self-evaluations were more favorable than were evaluations of a friend and that individuals with high self-esteem were most likely to appraise their friends more positively than they appraised the average person. The findings of Experiment 3 revealed that the tendency for those with high self-esteem to judge themselves and their friends more favorably than they assessed most other people was not restricted to only those individuals showing a high need for social approval.” (link)
Age matters too, at least in some contexts: “People generally evaluate their own attributes and abilities more favorably than those of an average peer. The current study explored whether age moderates this better-than-average effect. We asked young (n = 87), middle-aged (n = 75), and older adults (n = 77) to evaluate themselves and an average peer on a variety of trait and ability dimensions. On most dimensions, a better-than-average effect was observed for young, middle-aged, and older adults. However, on dimensions for which older individuals have clear deficiencies (i.e., athleticism, physical attractiveness), a better-than-average effect was observed for young and middle-aged adults, while a worse-than-average effect was observed for older adults. We argue that egocentrism accounts for these age differences in comparative self-evaluations.” (link)
This paper, which was one of the few I was able to find a full version of online, reports some more details about the MBA-study, among other things. Data and a few observations from that paper: “87 percent of Stanford MBA students recently rated their academic performance to be in the top two quartiles (“It’s Academic” 2000). Compared with their peers, 90 percent of these students also believed that they were either average or above average in terms of quantitative abilities; only 10 percent judged themselves to be below average. [...]
“In virtually any population, the majority of individuals have fewer friends than do their own friends (Feld 1991). We found, however, that 41.7 percent of those who responded to the survey claimed to have more friends than their own friends; this figure is almost three times greater than the proportion who reported having fewer friends than their own friends (16.1%). Again, the mean response (3.33) differed significantly from the neutral midpoint of the scale [...]
“we found that self-enhancement was greater in self versus friend comparisons than in self versus “typical other” comparisons. This finding supports Tesser’s model of self-evaluation maintenance, which holds that people may be more threatened by the success of friends than by that of strangers under conditions of high personal relevance (Tesser 1988; Tesser and Campbell 1982; Tesser et al. 1989). The central message in this line of researchis that we feel the need “to keep up with the Joneses” precisely because they are our neighbors.”
Back to the wiki: “Researchers have also found the effects of illusory superiority in studies into relationship satisfaction. For example, one study found that participants perceived their own relationships as better than others’ relationships on average, but thought that the majority of people were happy with their relationships. Also, this study found evidence that the higher the participants rated their own relationship happiness, the more superior they believed their relationship was.” (link to abstract)
Most people consider themselves to be more skilled at driving a vehicle than are other people: “In [previous] studies subjects were asked to judge how safely they drove in comparison with the average driver, vaguely defined as drivers in general. Typically, the results showed that around 70-80% of the subjects were reported to put themselves in the safer half of the distribution. [...] In the US group 88% and in the Swedish group 77% believed themselves to be safer than the median driver.
The medians for the distributions of skill judgments fall in the interval 61-70% for the US group and between 51-60% for the Swedish group. Of the US sample 46.3% regard themselves among the most skillful 20%. The corresponding number in the Swedish group was only 15.5%. In the US sample 93% believed themselves to be more skillful drivers than the median driver and 69% of the Swedish drivers shared this belief in relation to their comparison group.
In summary, there was a strong tendency to believe oneself as safer and more skillful than the average driver. In addition, there seemed to be a stronger tendency to believe oneself as safer than and more skillful than the average person.” (link)
Some critical remarks here: “There is a further problem with attributing self-enhancement bias to all people who rate themselves “better off than most.” Ranking oneself relative to “most others” on a broadly construed dimension is inherently problematic. If people are asked to rank themselves relative to others on happiness, for example, Jeff might rank himself highly because of his ability as a baseball player, Jackie might rank herself highly because of her musical talents, and John might rank himself highly because of the money he has accumulated. Because these are important and defining characteristics of one’s self-concept, they represent appropriate choices on which to compare the self with others. It is thus conceivable that a majority of people can be better off than most when the dimension to be rated is vaguely defined and people are given the latitude to rank themselves on self-selected, often idiosyncratic categories. It has been demonstrated that when a dimension is clearly and precisely defined, thereby limiting private interpretations, the better-off-than-most effect diminishes ( Dunning, Meyerowitz, & Holzberg, 1989).” [I'm pretty sure I've touched upon this before, but I don't consider it a strong point of criticism; the fact of the matter is that if it's in any way possible for people to do this, they will, when asked to compare themselves with others, pick variables and interpretations of the questions which make them look good. In real life people always have some leeway, so what would happen if they couldn't manipulate the comparison to make themselves look better is somewhat of a moot point. Be that as it may, the main result of this paper is intriguing:]
“In the longitudinal studies, self-enhancement was associated with poor social skills and psychological maladjustment 5 years before and 5 years after the assessment of self-enhancement. In the laboratory study, individuals who exhibited a tendency to self-enhance displayed behaviors, independently judged, that seemed detrimental to positive social interaction. These results indicate there are negative short-term and long-term consequences for individuals who self-enhance and, contrary to some prior formulations, imply that accurate appraisals of self and of the social environment may be essential elements of mental health. [...] It seems abundantly clear from the present data that self-enhancement, far from serving as an aid to interpersonal or psychological adjustment, is part of a pattern of self-perception and behavior that must be viewed as unhealthy overall.” (“Since [this paper was published], further research has both undermined that conclusion and offered new evidence associating illusory superiority with negative effects on the individual.” (quote from the wiki related to this paper).)
“Take home messages:
- You can watch evolution in progress in quickly-reproducing organisms, like malaria
- Over 100,000 Americans die of infections that were easily treated 30 years ago due to the evolution of resistance (twice the number of people who will die in car crashes).
- In an arms race between us and infectious diseases, we lose.
- We need to understand the evolutionary forces unleashed by medicine before we can manage infectious disease
- We need to ask, “Will (this drug) STAY safe, and CONTINUE to work”, not just if it is safe and whether it will work.
- The Lancet (a high impact medical journal) rejected an evolutionary paper addressing malaria because, “a good understanding of evolutionary biology is beyond most of our readers.””
Point 3 is one I try to remember to bring up every time I find myself in a discussion about what the future development of medicine will look like. Unless you disagree with that one, it’s very hard to be an optimist about the future of medicine.
ii. I discussed this subject briefly yesterday, and I later started thinking about whether I’d actually blogged this (pdf) publication (in Danish, sorry), PISA København 2010, which deals with the educational achievements of Danish children in Copenhagen who left the 9th grade in 2010. I don’t think I have (I couldn’t find anything in the archives), so I decided to add a link here as well as a few observations from the paper:
“Opdeles eleverne efter andelen af indvandrerelever på deres skoles niende klassetrin, finder man generelt, at jo større andel af indvandrere, des lavere gennemsnitlig læsetestscore.” (a (loose) translation: ‘If the students are distributed according to the proportion of immigrant-pupils in the 9th grade, a general finding is that the larger the share of immigrants, the lower the average reading test score’).
Immigrant groups perform worse than non-immigrant groups, and the proportion of immigrants also affects the performance of the non-immigrant pupils (negatively) for some, though not all, specifications. The ‘Danish’ pupils enrolled in schools where the proportion of immigrant pupils exceeds 50% do significantly worse than do Danish pupils who are enrolled in schools where the immigrant pupil proportion is below 25% (p.11). Immigrant pupils also do better in schools with less than 25% immigrants than they do in schools where the proportion of immigrant pupils exceeds that number (p.11).
A table from the report (p.31), click to view full size:
The above table contains some numbers related to PISA’s reading test, with a special focus on the proportion of pupils in the sample who are functionally illiterate, corresponding to a reading performance of less than level 2 on the PISA scale (which is described in more detail in the Appendix, p. 82-83 – I will not go into details here unless asked). In 2010, 14% of ‘Danish’ pupils and 42% of ‘immigrant pupils’ from schools in Copenhagen were functionally illiterate judging from the PISA reading test. There’s a big gender gap – 17% of the girls and 30% of the boys were functionally illiterate. The difference between the performances of first (44%) and second (41%) generation immigrant pupils is not statistically significant. Almost half of all 9th grade immigrant pupils in the public school system – 48% of first generation immigrant pupils in public schools and 46% of second generation immigrant pupils in public schools – were functionally illiterate.
There’s a lot of hidden variation in the immigrant numbers, and not all immigrant groups do equally badly. It’s worth keeping in mind that these results are averages; given the not-insignificant heterogeneity in the immigrant sample, some immigrant groups surely do even worse than these numbers might imply. If you look at the school level, some of the numbers probably get much worse. The Rockwool Foundation found in 2007 that 64% of pupils of Arab origin in the 9th grade were functionally illiterate (Danish link). In the PISA report they don’t go into much detail, but they do note that pupils of Lebanese (/Palestinian), Iraqi and Turkish origin do worse than do pupils of Pakistani origin (also from p. 11).
iii. Another paper, Moral Hypocrisy, Power and Social Preferences, by Rustichini and Villeval (via Robin Hanson):
“Abstract: We show with a laboratory experiment that individuals adjust their moral principles to the situation and to their actions, just as much as they adjust their actions to their principles. We first elicit the individuals’ principles regarding the fairness and unfairness of allocations in three different scenarios (a Dictator game, an Ultimatum game, and a Trust game). One week later, the same individuals are invited to play those same games with monetary compensation. Finally in the same session we elicit again their principles regarding the fairness and unfairness of allocations in the same three scenarios.
Our results show that individuals adjust abstract norms to fit the game, their role and the choices they made. First, norms that appear abstract and universal take into account the bargaining power of the two sides. The strong side bends the norm in its favor and the weak side agrees: Stated fairness is a compromise with power. Second, in most situations, individuals adjust the range of fair shares after playing the game for real money compared with their initial statement. Third, the discrepancy between hypothetical and real behavior is larger in games where real choices have no strategic consequence (Dictator game and second mover in Trust game) than in those where they do (Ultimatum game). Finally the adjustment of principles to actions is mainly the fact of individuals who behave more selfishly and who have a stronger bargaining power.
The moral hypocrisy displayed (measured by the discrepancy between statements and actions chosen followed by an adjustment of principles to actions) appears produced by the attempt, not necessarily conscious, to strike a balance between self-image and immediate convenience.”
iv. False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant, by Simmons, Nelson and Simonsohn. A pretty neat paper:
“In this article, we accomplish two things. First, we show that despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.”
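The kind of “flexibility” they have in mind is easy to demonstrate with a quick simulation of my own (not one of the simulations from the paper): below, the null hypothesis is true by construction, yet merely allowing one round of optional stopping – peek at the data, and if the result isn’t significant collect 10 more observations per group and test again – pushes the false-positive rate above the nominal 5%.

```python
# A sketch of one "researcher degree of freedom": optional stopping.
# Both samples come from the same distribution, so every "significant" result
# is a false positive. Known unit variance lets us use a plain two-sample
# z-test (|z| > 1.96 for a two-sided test at the .05 level).
import math
import random

random.seed(0)

def significant(a, b):
    # Two-sample z-test with known variance 1.
    z = (sum(a) / len(a) - sum(b) / len(b)) / math.sqrt(1 / len(a) + 1 / len(b))
    return abs(z) > 1.96

def one_experiment(optional_stopping):
    a = [random.gauss(0, 1) for _ in range(20)]
    b = [random.gauss(0, 1) for _ in range(20)]  # same distribution as a
    if significant(a, b):
        return True
    if optional_stopping:
        # Peek at the data, add 10 more observations per group, test again.
        a += [random.gauss(0, 1) for _ in range(10)]
        b += [random.gauss(0, 1) for _ in range(10)]
        return significant(a, b)
    return False

n_sims = 5000
naive = sum(one_experiment(False) for _ in range(n_sims)) / n_sims
flexible = sum(one_experiment(True) for _ in range(n_sims)) / n_sims
print(f"fixed-n false-positive rate:    {naive:.3f}")
print(f"with one optional-stopping peek: {flexible:.3f}")
```

One extra peek already inflates the error rate noticeably, and the paper’s point is that researchers typically combine several such degrees of freedom (multiple outcome measures, covariates, dropping conditions), which compounds the inflation far beyond this.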
v. Cognitive Sophistication Does Not Attenuate the Bias Blind Spot, by West, Meserve & Stanovich:
“The so-called bias blind spot arises when people report that thinking biases are more prevalent in others than in themselves. Bias turns out to be relatively easy to recognize in the behaviors of others, but often difficult to detect in one’s own judgments. Most previous research on the bias blind spot has focused on bias in the social domain. In 2 studies, we found replicable bias blind spots with respect to many of the classic cognitive biases studied in the heuristics and biases literature (e.g., Tversky & Kahneman, 1974). Further, we found that none of these bias blind spots were attenuated by measures of cognitive sophistication such as cognitive ability or thinking dispositions related to bias. If anything, a larger bias blind spot was associated with higher cognitive ability. Additional analyses indicated that being free of the bias blind spot does not help a person avoid the actual classic cognitive biases. We discuss these findings in terms of a generic dual-process theory of cognition.”
I’ll just repeat part of that abstract: “none of these bias blind spots were attenuated by measures of cognitive sophistication such as cognitive ability or thinking dispositions related to bias. If anything, a larger bias blind spot was associated with higher cognitive ability.”
A few other remarks from the paper (but do read all of it if you find the result interesting):
“the bias blind spot joins a small group of other effects such as myside bias and noncausal base-rate neglect (Stanovich & West, 2008b; Toplak & Stanovich, 2003) in being unmitigated by increases in intelligence. That cognitive sophistication does not mitigate the bias blind spot is consistent with the idea that the mechanisms that cause the bias are quite fundamental and not easily controlled strategically — that they reflect what is termed Type 1 processing in dual-process theory (Evans, 2008; Evans & Stanovich, in press). Two of the theoretical explanations of the effect considered by Pronin (2007)—naive realism and defaulting to introspection—posit the bias as emanating from cognitive mechanisms that are evolutionarily and computationally basic. Much research on the bias blind spot describes the asymmetry in bias detection in self compared to others as being spawned by a belief in naive realism—the idea that one’s perception of the world is objective and thus would be mirrored by others who are open-minded and unbiased in their views (Griffin & Ross, 1991; Pronin et al., 2002; Ross & Ward, 1996). Naive realism is developmentally primitive (Forguson & Gopnik, 1988; Gabennesch, 1990) and thus likely to be ubiquitous and operative in much of our basic information processing.
[rereading this, it reminded me of this quote, from a recent lesswrong article: "if you aren't treating humans more like animals than most people are, then you're modeling humans poorly. You are not an agenty homunculus "corrupted" by heuristics and biases. You just are heuristics and biases. And you respond to reinforcement, because most of your motivation systems still work like the motivation systems of other animals."]
It is likewise with self-assessment based on introspective information, rather than behavioral information (Pronin & Kugler, 2007). The bias blind spot arises, on this view, because we rely on behavioral information for evaluations of others, but on introspection for evaluations of ourselves. The biases of others are easily detected in their overt behaviors, but when we introspect we will largely fail to detect the unconscious processes that are the sources of our own biases (Ehrlinger et al., 2005; Kahneman, 2011; Pronin et al., 2004; Wilson, 2002). When we fail to detect evidence of bias, we are apt to decide no bias has occurred and that our decision-making process was indeed objective and reasonable. This asymmetry in bias assessment information has as its source a ubiquitous and pervasive processing tendency— introspective reliance — that again is developmentally basic (Dennett, 1991; Sterelny, 2003).”
vi. Via Gwern, a meta-analysis on depression and exercise. There seems to be a short-term positive effect, but “there is little evidence of a long-term beneficial effect of exercise in patients with clinical depression.”
From the paper:
“Because we began by putting forward a theoretically derived hypothesis and calling its viability into question on the basis of experimental data, it behooves us to listen carefully to what that data has been trying to tell us and to draw together plausibly the various strands of evidence. The most parsimonious inductive explanation for our cumulative findings, we contend, is that automatic attitudes are asymmetrically malleable. That is, like credit-card debt and excess calories, they are easier to acquire than they are to cast aside. Thus, when people construe an object for the first time, their conscious fondness or antipathy for it is swiftly supplemented by an automatic positive or negative reaction. However, once people have acquired an attitude toward the object, attempts to subsequently undo it are differentially successful at different levels of the mind and lead its automatic component to lag behind its conscious one. Thus, Devine’s (1989) key prediction—that automatic attitudes will be generally be [sic] harder to shift than their self-reported counterparts — may be correct after all, not under the boundary conditions that we initially proposed but under a new set of boundary conditions that our data have subsequently suggested. [...]
We contend that automatic attitudes operate like rapidly established perceptual defaults: although they can initially be engendered by conscious cognition, they later become relatively resilient to its influence.”
So, there might exist a variety of perhaps even non-overlapping reasons why one might be interested in stuff like this. I’m interested because I believe that some of the automatic attitudes I have implicitly come under the influence of are attitudes which do not make me happy, which is why I feel that I at the very least should try to understand them better. Understanding might make it easier for me to successfully challenge them. Though I’m not optimistic about that. I should specify that the automatic attitudes I have in mind here are perhaps of a somewhat different kind than the ones described in the study; but it doesn’t seem like a lot of stuff is written about how to overcome biological imperatives, and you need to take what you can get.
Human males my age – not only human males my age, but also human males my age – are ‘supposed to’ look for a mate to have children with, and if they can’t find one they are supposed to work towards gathering power and resources so that once someone is there to be found, they can compete more successfully with the other available males in the bidding war that will ensue, and perhaps win the right to have offspring. The male brain has not yet caught on to the fact that contraception has changed everything, in a way that means that power and resources no longer matter all that much when it comes to reproductive success. As Kanazawa put it in this paper; “men’s wealth still translates into their greater reproductive success had it not been for modern contraception, which men’s brain, adapted to the ancestral environment, has difficulty comprehending.”
To the Paleolithic brain, sex = offspring. The whole ‘offspring’-part is why sex feels good. Most (/non-ignorant?) males (/and females) know that the reason why sex feels good is that sex is nature’s (/your genes’) way of tricking you into having offspring. Just as the reason why chocolate cookies taste good is that they contain a lot of fats and sugars, i.e. calories; and calories are good if you want to avoid starving to death, a risk our ancestors spent a lot more time worrying about than we do. But whereas people are quite open about how it’s probably a bad idea to eat too many cookies, because it will make you fat and unhealthy, and thus people do not eat all that many chocolate cookies, there are, to put it bluntly, far fewer people who seem to be open about drawing the conclusion that partnership and children are not worth it and that they ‘refuse to be slaves of their biology’. At least in that area of life…
I have this strange feeling that a lot of male (/and female) behaviour today might look completely crazy to someone who’s not as invested in the underlying ideals of the Paleolithic Era as are (all?) (/fe)males today. For a male, it looks like this: ‘The way to be happy/the good life is to find a fecund-looking female, court her and then have sex with her a lot, have babies and provide for them, die.’ A slightly more elaborate version would also include ‘convince your partner on an ongoing basis that you’re the best male available (by doing all kinds of weird things that signal to the female that you are there for the long haul, even if you’re not – and by golly, the modern economy/-world has certainly increased the number of insane-looking jump-through-the-hoops signals a (self-identified?) high-quality female can demand of her partner..)’, as well as ‘try to cheat on her as often as you can get away with – so that you can have more babies – but try your best to hide the cheating from her so as not to incur significant switching costs.’
The bidding wars these days in the partnership setting relate far more to the quality of the offspring than to the number of offspring. The Paleolithic fecundity markers are more or less completely out of whack with reality today. Today it is mostly preferences – which are to a very large degree driven by socioeconomic factors, religion, culture and societal norms more broadly – and not biological factors (waist-hip ratio etc.) which decide how many children a female is likely to/willing to have. Kanazawa (see above) found that resource access is pretty much irrelevant too. However the lives of most males and females continue to follow the age-old recipe, to some degree. To be happy you need to find a mate and have children. For a male, in order to get the best possible female you need access to resources, you need power. So you need money, which means that you need to work hard, both to obtain access to resources and incidentally also to actually convince the high-quality female that you’re the most suitable partner available. It’s not that these ideals seem completely true to everybody; it’s more that when you defend a different version of the good life, my impression is that you most often will have a hard time making that defense sound credible, even to yourself. People often reject some of the defining characteristics of the traditional partnership equation, like the idea that a partnership necessarily needs to involve children, that it makes sense to look for ‘the one’, that romantic relationships need to involve members of both genders, or perhaps that a monogamous relationship is the best way to deal with the romantic stuff in your life; but how many people openly reject the idea of having a relationship as a major life goal in favour of the alternative in the (‘semi’…, see my remarks below regarding the commitment issues here)-long run, for no other reason than that they think they will probably end up happier in the long run if they do?
Surely only a person who has no chance in the dating market would do such a thing, right?
I assume the standard narrative will not work for me. It seems like too much hard work that you just know you’re only undertaking because your Stone Age brain is trying to trick you into undertaking it, just like it’s trying to trick you into eating too many chocolate cookies – and with not too dissimilar consequences. I will probably not be willing to work hard enough to find a long-term partner who would not reject me in favour of someone more suitable, given the amount of competition. And if I do find someone, I will still have major problems trusting her, because I’ll assume that if she follows the standard narrative here, she’ll also follow the Paleolithic recipe later on. Which tells me that she’ll be more likely than not to leave me when I start getting really sick. Yeah, I may not get really sick and a potential partner may not leave even if I do, but in expected terms this needs to be taken into account; as does my loss aversion at that point.
So why was I reading the paper again? Because it seems to me at this point that the smartest thing for me to do would be to rewire my brain somehow, to make it like stuff it currently does not like as much as would be optimal, and to dislike stuff it currently seems to enjoy thinking about. To let go of a lot of the counterproductive narratives which were never about people like me in the first place. I’m perfectly well aware that this is all about rationalization, and my Paleolithic mind has views about that stuff too. Given what I’ve previously said about the Stoics, naturally I’m not very optimistic about this whole endeavour. But it seems worth trying. Maybe my mind can actually outsmart my Paleolithic mind. In the eyes of most females, I probably won’t be proper partner material for some time (because of ‘resources, power’) anyway – at least not for the kind of partner my Stone Age brain is trying to convince me I’d like to have. I know about the assortative-mating aspects of the college/university experience, but I also know that that part of the university experience is probably not likely to be relevant for me. Either way, I hope that I can obtain a state of mind such that my period of thinking about dating and similar stuff is over – at least for the time being. The only way not to lose the bidding war is not to play or think about playing.
Incidentally, I ought to post a few remarks here about how this post relates to my commitment to change: I was writing this and publishing it here at least in part to more efficiently commit myself to this change. I know how strong ‘the opposition’ (‘the Paleolithic mind’ and all its friends and allies…) is, and I might give up on this idea before long. But writing this here cannot hurt my chances much, and I’ve been thinking along these lines for a while now. I’ve found that it’s much easier to (knowingly) ‘rationalize’ not looking for a partner than it is to actually be perfectly okay with not doing it. And if it turns out to be impossible to obtain that mind state, it seems suboptimal in most scenarios to not be dating. I’m not trying to commit myself to not dating/finding a girlfriend; I’m trying to commit myself to thinking that I can be perfectly happy even if I don’t. It’s the thoughts in my head, not the behaviour they engender, which are central here. Interestingly enough, if I’m successful it also probably means that long-run credible commitment to this state of mind is impossible (if preferences such as these can actually be changed over time, such changes can also be reversed later on), which should if anything make commitment in the short run easier, rather than harder, to achieve.
Another one of Paul Graham’s essays. A very, very good read, so I’ve quoted extensively from the essay below:
“Let’s start with a test: Do you have any opinions that you would be reluctant to express in front of a group of your peers?
If the answer is no, you might want to stop and think about that. If everything you believe is something you’re supposed to believe, could that possibly be a coincidence? Odds are it isn’t. Odds are you just think whatever you’re told. [...]
What can’t we say? One way to find these ideas is simply to look at things people do say, and get in trouble for. 
Of course, we’re not just looking for things we can’t say. We’re looking for things we can’t say that are true, or at least have enough chance of being true that the question should remain open. But many of the things people get in trouble for saying probably do make it over this second, lower threshold. No one gets in trouble for saying that 2 + 2 is 5, or that people in Pittsburgh are ten feet tall. Such obviously false statements might be treated as jokes, or at worst as evidence of insanity, but they are not likely to make anyone mad. The statements that make people mad are the ones they worry might be believed. I suspect the statements that make people maddest are those they worry might be true. [...]
In every period of history, there seem to have been labels that got applied to statements to shoot them down before anyone had a chance to ask if they were true or not. “Blasphemy”, “sacrilege”, and “heresy” were such labels for a good part of western history, as in more recent times “indecent”, “improper”, and “unamerican” have been. [...]
We have such labels today, of course, quite a lot of them, from the all-purpose “inappropriate” to the dreaded “divisive.” In any period, it should be easy to figure out what such labels are, simply by looking at what people call ideas they disagree with besides untrue. When a politician says his opponent is mistaken, that’s a straightforward criticism, but when he attacks a statement as “divisive” or “racially insensitive” instead of arguing that it’s false, we should start paying attention. [...]
Moral fashions more often seem to be created deliberately. When there’s something we can’t say, it’s often because some group doesn’t want us to.
The prohibition will be strongest when the group is nervous. [...] To launch a taboo, a group has to be poised halfway between weakness and power. A confident group doesn’t need taboos to protect it. It’s not considered improper to make disparaging remarks about Americans, or the English. And yet a group has to be powerful enough to enforce a taboo. [...]
I suspect the biggest source of moral taboos will turn out to be power struggles in which one side only barely has the upper hand. That’s where you’ll find a group powerful enough to enforce taboos, but weak enough to need them.
Most struggles, whatever they’re really about, will be cast as struggles between competing ideas. The English Reformation was at bottom a struggle for wealth and power, but it ended up being cast as a struggle to preserve the souls of Englishmen from the corrupting influence of Rome. It’s easier to get people to fight for an idea. And whichever side wins, their ideas will also be considered to have triumphed, as if God wanted to signal his agreement by selecting that side as the victor.
We often like to think of World War II as a triumph of freedom over totalitarianism. We conveniently forget that the Soviet Union was also one of the winners.
I’m not saying that struggles are never about ideas, just that they will always be made to seem to be about ideas, whether they are or not. [...]
To do good work you need a brain that can go anywhere. And you especially need a brain that’s in the habit of going where it’s not supposed to.
Great work tends to grow out of ideas that others have overlooked, and no idea is so overlooked as one that’s unthinkable. Natural selection, for example. It’s so simple. Why didn’t anyone think of it before? Well, that is all too obvious. Darwin himself was careful to tiptoe around the implications of his theory. He wanted to spend his time thinking about biology, not arguing with people who accused him of being an atheist. [...]
When you find something you can’t say, what do you do with it? My advice is, don’t say it. Or at least, pick your battles.
Suppose in the future there is a movement to ban the color yellow. Proposals to paint anything yellow are denounced as “yellowist”, as is anyone suspected of liking the color. People who like orange are tolerated but viewed with suspicion. Suppose you realize there is nothing wrong with yellow. If you go around saying this, you’ll be denounced as a yellowist too, and you’ll find yourself having a lot of arguments with anti-yellowists. If your aim in life is to rehabilitate the color yellow, that may be what you want. But if you’re mostly interested in other questions, being labelled as a yellowist will just be a distraction. Argue with idiots, and you become an idiot.
The most important thing is to be able to think what you want, not to say what you want. And if you feel you have to say everything you think, it may inhibit you from thinking improper thoughts. I think it’s better to follow the opposite policy. Draw a sharp line between your thoughts and your speech. Inside your head, anything is allowed. Within my head I make a point of encouraging the most outrageous thoughts I can imagine. But, as in a secret society, nothing that happens within the building should be told to outsiders. The first rule of Fight Club is, you do not talk about Fight Club. [...]
The trouble with keeping your thoughts secret, though, is that you lose the advantages of discussion. Talking about an idea leads to more ideas. So the optimal plan, if you can manage it, is to have a few trusted friends you can speak openly to. This is not just a way to develop ideas; it’s also a good rule of thumb for choosing friends. The people you can say heretical things to without getting jumped on are also the most interesting to know. [...]
Who thinks they’re not open-minded? Our hypothetical prim miss from the suburbs thinks she’s open-minded. Hasn’t she been taught to be? Ask anyone, and they’ll say the same thing: they’re pretty open-minded, though they draw the line at things that are really wrong. (Some tribes may avoid “wrong” as judgemental, and may instead use a more neutral sounding euphemism like “negative” or “destructive”.)
When people are bad at math, they know it, because they get the wrong answers on tests. But when people are bad at open-mindedness they don’t know it. In fact they tend to think the opposite. [...]
To see fashion in your own time, though, requires a conscious effort. Without time to give you distance, you have to create distance yourself. Instead of being part of the mob, stand as far away from it as you can and watch what it’s doing. And pay especially close attention whenever an idea is being suppressed. Web filters for children and employees often ban sites containing pornography, violence, and hate speech. What counts as pornography and violence? And what, exactly, is “hate speech?” This sounds like a phrase out of 1984.
Labels like that are probably the biggest external clue. If a statement is false, that’s the worst thing you can say about it. You don’t need to say that it’s heretical. And if it isn’t false, it shouldn’t be suppressed. So when you see statements being attacked as x-ist or y-ic (substitute your current values of x and y), whether in 1630 or 2030, that’s a sure sign that something is wrong. When you hear such labels being used, ask why.
Especially if you hear yourself using them. It’s not just the mob you need to learn to watch from a distance. You need to be able to watch your own thoughts from a distance. That’s not a radical idea, by the way; it’s the main difference between children and adults. When a child gets angry because he’s tired, he doesn’t know what’s happening. An adult can distance himself enough from the situation to say “never mind, I’m just tired.” I don’t see why one couldn’t, by a similar process, learn to recognize and discount the effects of moral fashions.
You have to take that extra step if you want to think clearly. But it’s harder, because now you’re working against social customs instead of with them. Everyone encourages you to grow up to the point where you can discount your own bad moods. Few encourage you to continue to the point where you can discount society’s bad moods.
How can you see the wave, when you’re the water? Always be questioning. That’s the only defence. What can’t you say? And why?”
No, they don’t need to be consistent with each other. And no, truth is not a 0-1 variable. Anyway…:
i. I got to where I am now because of my hard work (if you’ve done well). Even if I had worked very hard, it wouldn’t have made any difference (if you have not done well).
ii. People who talk badly about others behind their backs with me never engage with others in the same kinds of conversations about me.
iii. If data do not support my view of the world, it’s completely okay to disregard the data. But it’s not okay for other people to do that, unless they reach the same conclusions I do and/or we disregard the same data.
iv. I don’t care nearly as much about status and related matters as other people do.
v. ‘I don’t much care how she looks, because she’s just a wonderful person’ (male, about his partner). ‘I would love him just the same if his income were half of what it is today’ (female, about her partner).
vi. I’m a good person. What I did was justified. (If morality were based on what people thought about themselves, there’d be pretty much no evil.) Most of the time people also hold this belief, at least implicitly: I’m a better/more deserving person than [other people].
vii. When I answer a question related to ethics, political principles and similar matters, I do not base my answer on what people I care about would like me to answer. If the question is a political one relating to distributional tradeoffs, I don’t choose my answer based on what benefits me (rather than society) the most.
viii. Ageing and dying are things that happen to other people. Divorce and cancer as well.
ix. If you don’t know much about another person, you can nevertheless infer a lot of stuff about that person from the things that you do know.
x. I am less judgmental and prejudiced than the people I compare myself with.
xi. If people don’t understand me, they are the ones with a problem – not me.
xii. Admitting your faults is a sign of weakness and should be avoided. If a person does not try to hide one of his/her faults, I am allowed to think that I’m a better person than s/he is.
xiii. ‘My life will be much better/easier/simpler when I [...]‘. [... = am 18. ... finally move away from my parents. ... find a girlfriend/boyfriend. ... get married. ... have a child. ... find a better job. ... retire...] (related link)
xiv. The more time and emotion I have invested in X, the less likely I am to be wrong about X.
Some of them might just be me projecting.
Perhaps some of these already apply to you, but probably not very many of them. I know I’ve mentioned a few of them before, but not too many of them. Try to imagine what your life would be like/how it would be different if you were:
i. A foster child/an orphan.
ii. Unable to read.
iii. Able to read, but had never learned how to use the internet. Perhaps you also don’t know how to speak English.
iv. Of the opposite gender.
v. The child of Muslim parents living in the Middle East.
vi. Deaf/blind/dumb (pick one, or any combination…).
vii. (/had been) Sexually abused by your parents when you were a child.
viii. The child of parents with a severe genetic disorder (Fanconi anemia, Huntington’s).
ix. Born somewhere where toilet paper is considered a luxury.
x. The child of multimillionaires living in the United States.
xi. The child of Chinese rice farmers living in the 7th century BC.
xii. Addicted to an illicit substance/alcohol (in some places alcohol is an illicit substance…)/smoking.
xiii. Born with only one arm.
xiv. Living in a country where there had been a civil war within the last couple of decades.
xv. An only child because your big brother/sister committed suicide while you were very young.
xvi. Married to a partner you no longer love.
xvii. Very rich because you’d just won the lottery.
xviii. Recently divorced after 20 years of marriage.
xix. Living in a country where one-third of all children die before the age of 5.
xx. 15 centimeters taller/shorter than you are now.
xxi. Travelling around the world, working as a circus artist.
xxii. 25 years older/younger than you are now.
xxiii. Forced (either by circumstances or the government) to work doing something you hate.
xxiv. Sometimes hearing or seeing things which are not real, because of mental illness.
xxv. A highly social individual who loves to hang out with people all the time and dislikes being alone.
xxvi. Extremely conceited about your own abilities.
xxvii. A homosexual/an asexual/unable to achieve orgasm.
xxviii. Able to read the minds of others (/fly/move things with your mind/…).
xxix. Unable to distrust people with whom you’d never interacted in the past.
xxx. One of those people who’ve never even heard about the concept of ‘cognitive biases’.
xxxi. A sincere believer in God/Yahweh/Allah/…
xxxii. Unable to see colours (only black/white).
xxxiii. Sitting on death row, about to be executed.
xxxiv. A person for whom conventional measures of status (money/power/…) are of the very highest importance.
xxxv. 15 kilograms leaner/heavier than you are now.
xxxvi. 15 IQ-points smarter/dumber than you are now.
xxxvii. Unable to form new memories/remember anything from your life that happened before some recent traumatic event.
Given some of the scenarios, you have to combine them anyway – but for added fun, try to combine some of them deliberately (e.g. ‘a one-armed, orphaned, illiterate, alcoholic, recently divorced, middle-aged circus artist’). It should not be too hard for you to come up with a less depressing combination.
“The planning fallacy refers to a prediction phenomenon, all too familiar to many, wherein people underestimate the time it will take to complete a future task, despite knowledge that previous tasks have generally taken longer than planned. In this chapter, we review theory and research on the planning fallacy, with an emphasis on a programmatic series of investigations that we have conducted on this topic. We first outline a definition of the planning fallacy, explicate controversies and complexities surrounding its definition, and summarize empirical research documenting the scope and generality of the phenomenon. We then explore the origins of the planning fallacy, beginning with the classic inside–outside cognitive model developed by Kahneman and Tversky [Kahneman, D., & Tversky, A. (1979). Intuitive prediction: biases and corrective procedures. TIMS Studies in Management Science, 12, 313–327]. Finally, we develop an extended inside–outside model that integrates empirical research examining cognitive, motivational, social, and behavioral processes underlying the planning fallacy.”
From The Planning Fallacy: Cognitive, Motivational, and Social Origins by Buehler et al. A few snips of interest from the paper:
“3.1. The inside versus outside view
Given the prevalence of optimistic predictions, and ample empirical evidence of the planning fallacy, we now turn to examining the psychological mechanisms that underlie people’s optimistic forecasts. In particular, how do people segregate their general theories about their predictions (i.e., that they are usually unrealistically optimistic) from their specific expectations for an upcoming task? Kahneman and Tversky (1979) explained the prediction failure of the curriculum development team through the inside versus outside analysis of the planning fallacy. This analysis builds upon a perceptual metaphor of how people view a planned project. In the curriculum development example, the group of authors focused on the specific qualities of the current task, and seemed to look inside their representation of the developing project to assess its difficulty. The group of authors failed, however, to look outside of the specific project to evaluate the relevant distribution of comparable projects. Even when they asked for information about the outside viewpoint, they neglected to incorporate it in their predictions or even to moderate their confidence. An inside or internal view of a task focuses on singular information: specific aspects of the target task that might lead to longer or shorter completion times. An outside or external view of the task focuses on distributional information: how the current task fits into the set of related tasks. Thus, the two general approaches to prediction differ primarily in whether individuals treat the target task as a unique case or as an instance of a category or ensemble of similar problems. [...]
We suggest that people often make attributions that diminish the relevance of past experiences to their current task. People are probably most inclined to deny the significance of their personal history when they dislike its implications (e.g., that a project will take longer than they hope). If they are reminded of a past episode that could challenge their optimistic plans, they may invoke attributions that render the experience uninformative for the present forecast. This analysis is consistent with evidence that individuals are inclined to explain away negative personal outcomes (for reviews, see Miller & Ross, 1975; Taylor & Brown, 1988). People’s use of others’ experiences are presumably restricted by the same two factors: a focus on the future reduces the salience of others’ experiences, and the tendency to attribute others’ outcomes to their dispositions (Gilbert & Malone, 1995) limits the inferential value of others’ experiences. Furthermore, our understanding of other people’s experiences is typically associated with uncertainty about what actually happened; consequently, we can readily cast doubt on the generalizability of those experiences. To quote Douglas Adams, ‘‘Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.’’ (Adams & Carwardine, 1991, p. 116) In sum, we note three particular impediments to using the outside perspective in estimating task completion times: the forward nature of prediction which elicits a focus on future scenarios, the elusive definition of ‘‘similar’’ experiences, and attributional processes that diminish the relevance of the past to the present.
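The inside/outside distinction above can be made concrete with a small numerical sketch. All numbers and names below are hypothetical illustrations (not from the paper): the inside view builds an estimate from the steps of the current plan, while the outside view ignores the plan and consults the distribution of completion times for comparable past projects.

```python
import statistics

# Hypothetical completion times (in days) of comparable past projects.
past_durations = [12, 18, 9, 25, 14, 30, 11]

# Inside view: sum the durations of the steps in the current plan.
# This focuses on singular information about the target task.
planned_steps = {"draft": 3, "revise": 2, "review": 1}
inside_estimate = sum(planned_steps.values())  # optimistic: 6 days

# Outside view: treat the task as one instance of a reference class
# and use distributional information about that class.
outside_estimate = statistics.median(past_durations)  # 14 days

print(f"inside view:  {inside_estimate} days")
print(f"outside view: {outside_estimate} days")
```

The gap between the two numbers is the point: plan-based scenarios rarely price in the many ways a project can go wrong, whereas the reference-class distribution already contains those mishaps implicitly.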
3.3. Optimistic plans
People’s completion estimates are likely to be overly optimistic if their forecasts are based exclusively on plan-based, future scenarios. A problem with the scenario approach is that people generally fail to appreciate the vast number of ways in which the future may unfold (Arkes et al., 1988; Fischhoff et al., 1978; Hoch, 1985; Shaklee & Fischhoff, 1982). For instance, expert auto mechanics typically consider only a small subset of the possible things that can go wrong with a car, and hence underestimate the probability of a breakdown (Fischhoff et al., 1978). Similarly, when individuals imagine the future, they often fail to entertain alternatives to their favored scenario and do not consider the implications of the uncertainty inherent in every detail of a constructed scenario (Griffin et al., 1990; Hoch, 1985). When individuals are asked to predict based on ‘‘best guess’’ scenarios, their forecasts are generally indistinguishable from those generated by ‘‘best-case’’ scenarios (Griffin et al., 1990; Newby-Clark et al., 2000). The act of scenario construction itself may lead people to exaggerate the likelihood of the scenario unfolding as envisioned. Individuals instructed to imagine hypothetical outcomes for events ranging from football games to presidential elections subsequently regard these imagined events as more likely (for reviews, see Gregory & Duran, 2001; Koehler, 1991). Focusing on the target event (the successful completion of a set of plans) may lead a predictor to ignore or underweight the chances that some other event will occur. Even when a particular scenario is relatively probable, a priori, chance will still usually favor the whole set of possible alternative events because there are so many (Dawes, 1988; Kahneman & Lovallo, 1993).”
The paper has a lot more stuff and details.
Of course I read the tvtropes article a long time ago, but this presentation goes into a lot more detail. Some of you might find it worth watching (/listening to while you’re doing other stuff…); I did:
I think a lot of people have some incorrect ideas about what ‘rationality’ actually is and what ‘behaving (/more) rationally’ implies in a real-world setting. The above presentation tries to correct some probably quite common misconceptions. Incidentally, I think that the subset of people who would gain the most from watching the presentation is probably the subset of people who are the most skeptical about watching a presentation like this.
Does high self-esteem cause better performance, interpersonal success, happiness, or healthier lifestyles?
“Summary — Self-esteem has become a household word. Teachers, parents, therapists, and others have focused efforts on boosting self-esteem, on the assumption that high self-esteem will cause many positive outcomes and benefits — an assumption that is critically evaluated in this review.
Appraisal of the effects of self-esteem is complicated by several factors. Because many people with high self-esteem exaggerate their successes and good traits, we emphasize objective measures of outcomes. High self-esteem is also a heterogeneous category, encompassing people who frankly accept their good qualities along with narcissistic, defensive, and conceited individuals.
The modest correlations between self-esteem and school performance do not indicate that high self-esteem leads to good performance. Instead, high self-esteem is partly the result of good school performance. Efforts to boost the self-esteem of pupils have not been shown to improve academic performance and may sometimes be counterproductive. Job performance in adults is sometimes related to self-esteem, although the correlations vary widely, and the direction of causality has not been established. Occupational success may boost self-esteem rather than the reverse. Alternatively, self-esteem may be helpful only in some job contexts. Laboratory studies have generally failed to find that self-esteem causes good task performance, with the important exception that high self-esteem facilitates persistence after failure.
People high in self-esteem claim to be more likable and attractive, to have better relationships, and to make better impressions on others than people with low self-esteem, but objective measures disconfirm most of these beliefs. Narcissists are charming at first but tend to alienate others eventually. Self-esteem has not been shown to predict the quality or duration of relationships.
High self-esteem makes people more willing to speak up in groups and to criticize the group’s approach. Leadership does not stem directly from self-esteem, but self-esteem may have indirect effects. Relative to people with low self-esteem, those with high self-esteem show stronger in-group favoritism, which may increase prejudice and discrimination.
Neither high nor low self-esteem is a direct cause of violence. Narcissism leads to increased aggression in retaliation for wounded pride. Low self-esteem may contribute to externalizing behavior and delinquency, although some studies have found that there are no effects or that the effect of self-esteem vanishes when other variables are controlled. The highest and lowest rates of cheating and bullying are found in different subcategories of high self-esteem.
Self-esteem has a strong relation to happiness. Although the research has not clearly established causation, we are persuaded that high self-esteem does lead to greater happiness. Low self-esteem is more likely than high to lead to depression under some circumstances. Some studies support the buffer hypothesis, which is that high self-esteem mitigates the effects of stress, but other studies come to the opposite conclusion, indicating that the negative effects of low self-esteem are mainly felt in good times. Still others find that high self-esteem leads to happier outcomes regardless of stress or other circumstances.
High self-esteem does not prevent children from smoking, drinking, taking drugs, or engaging in early sex. If anything, high self-esteem fosters experimentation, which may increase early sexual activity or drinking, but in general effects of self-esteem are negligible. One important exception is that high self-esteem reduces the chances of bulimia in females.
Overall, the benefits of high self-esteem fall into two categories: enhanced initiative and pleasant feelings. We have not found evidence that boosting self-esteem (by therapeutic interventions or school programs) causes benefits. Our findings do not support continued widespread efforts to boost self-esteem in the hope that it will by itself foster improved outcomes. In view of the heterogeneity of high self-esteem, indiscriminate praise might just as easily promote narcissism, with its less desirable consequences. Instead, we recommend using praise to boost self-esteem as a reward for socially desirable behavior and self-improvement.” [my emphasis]
Here’s the link. A bit more from the paper:
“The role of self-esteem in romantic relationships has received fairly little attention. In particular, little is known about whether self-esteem predicts the durability of romantic relationships. One study with a very small sample (N=30) found that couples with low self-esteem were more likely than couples with high self-esteem to break up over a 1-month period (S.S. Hendrick, Hendrick, & Adler, 1988). Data on love styles and self-esteem support this finding, showing that low self-esteem is related to feelings of manic love, which is characterized by extreme feelings of both joy and anguish over the love object (W.K. Campbell, Foster, & Finkel, 2002). High self-esteem is related to passionate, erotic love, which is marked by the escalation of erotic feelings for the love object. These findings are consistent with other studies showing that, compared with people with high self-esteem, those with low self-esteem experience more instances of unrequited love (Dion & Dion, 1975) and more intense feelings of love for others (C. Hendrick & Hendrick, 1986).
Several findings indicate that relationship behavior differs as a function of self-esteem. Murray, Rose, Bellavia, Holmes, and Kusche (2002) found that people low in self-esteem engage in a variety of potentially destructive behaviors. They tend to distrust their partners’ expressions of love and support, and so they act as though they are constantly expecting their partners to reject and abandon them. Thus far, however, these patterns have not translated into any evidence that the relationships are actually more likely to dissolve.
Thus, despite the relationship problems caused by low self-esteem, relationships are no more likely to break up if a partner has low self-esteem than if a partner has high self-esteem. Possibly the reason for this is that high self-esteem leads to relationship problems, too. Rusbult, Morrow, and Johnson (1987) examined four types of responses to problems within close relationships, and found that self-esteem produced the largest difference in the active-destructive (“exit”) category of responses. People with high self-esteem were significantly more likely than others to respond to problems and conflicts by deciding to leave the relationship, seeking other partners, and engaging in other behaviors that would actively contribute to the deterioration of the relationship. These results were based on responses to hypothetical scenarios, which share many of the drawbacks of self-report measures. However, as the authors noted, it seems unlikely that their findings can be attributed to a simple response bias because people with high self-esteem were admitting to more undesirable, rather than desirable, behaviors.
Shackelford (2001) found that self-esteem was intertwined with a variety of patterns in marriage, although he did not provide evidence as to whether high self-esteem affects the durability of marriages. Spouses showed similar levels of self-esteem, with global self-esteem of spouses correlating at .23 and physical self-esteem (including self-rated attractiveness) correlating fairly strongly at .53. Significantly, Shackelford regarded self-esteem as an outcome rather than a cause of marital interactions, although his data were correlational. Wives’ fidelity was the strongest predictor of husbands’ self-esteem. This might indicate that men with high self-esteem cause their wives to remain faithful, or—as Shackelford speculated—that cuckolded husbands experience a loss of self-esteem.
Most important, women complained more about husbands with low than with high self-esteem. Low self-esteem men were derided by their wives as jealous, possessive, inconsiderate, moody, prone to abuse alcohol, and emotionally constricted. Again, the direction of causality is difficult to determine. Possibly, husbands’ low self-esteem elicits negative perceptions among wives. Conversely, being disrespected or despised by his wife may lower a man’s self-esteem. Yet another possibility is that having a variety of bad traits leads both to low self-esteem and to being disrespected by one’s wife. Meanwhile, the self-esteem of wives was unrelated to their husbands’ complaints about them, except that husbands who criticized or insulted their wives’ appearance were generally married to wives with low self-esteem, and indeed Shackelford (2001) found that this was the most consistent predictor of low self-esteem among wives.” (pp.18-19)
This is interesting stuff, worth pondering. Incidentally, my level of self-esteem is higher than it has been for a long time. That’s not saying much, but even so…
I have made this letter longer than usual, because I lack the time to make it short (Pascal)
I read Paul Graham’s essay some time ago, but I don’t think I’ve ever linked to it here. Since I recently had an excuse to read it again, I figured I might as well put up a link. It’s a very US-centric piece, but worth reading. A few passages from the post:
“And that, I think, is the root of the problem. Nerds serve two masters. They want to be popular, certainly, but they want even more to be smart. And popularity is not something you can do in your spare time, not in the fiercely competitive environment of an American secondary school. [...]
For example, teenage kids pay a great deal of attention to clothes. They don’t consciously dress to be popular. They dress to look good. But to who? To the other kids. Other kids’ opinions become their definition of right, not just for clothes, but for almost everything they do, right down to the way they walk. And so every effort they make to do things “right” is also, consciously or not, an effort to be more popular.
Nerds don’t realize this. They don’t realize that it takes work to be popular. In general, people outside some very demanding field don’t realize the extent to which success depends on constant (though often unconscious) effort. For example, most people seem to consider the ability to draw as some kind of innate quality, like being tall. In fact, most people who “can draw” like drawing, and have spent many hours doing it; that’s why they’re good at it. Likewise, popular isn’t just something you are or you aren’t, but something you make yourself.
The main reason nerds are unpopular is that they have other things to think about. Their attention is drawn to books or the natural world, not fashions and parties.
But I think the main reason other kids persecute nerds is that it’s part of the mechanism of popularity. Popularity is only partially about individual attractiveness. It’s much more about alliances. To become more popular, you need to be constantly doing things that bring you close to other popular people, and nothing brings people closer than a common enemy.
Like a politician who wants to distract voters from bad times at home, you can create an enemy if there isn’t a real one. By singling out and persecuting a nerd, a group of kids from higher in the hierarchy create bonds between themselves. Attacking an outsider makes them all insiders. This is why the worst cases of bullying happen with groups.”
I think he overstates the case here:
“In almost any group of people you’ll find hierarchy. When groups of adults form in the real world, it’s generally for some common purpose, and the leaders end up being those who are best at it. The problem with most schools is, they have no purpose. But hierarchy there must be. And so the kids make one out of nothing.
We have a phrase to describe what happens when rankings have to be created without any meaningful criteria. We say that the situation degenerates into a popularity contest. And that’s exactly what happens in most American schools.”
However I couldn’t really say for sure because I’ve never set foot in an American high school. I don’t think this is the case in Denmark.
A funny thing about reading the essay was that while it would be easy for me to use my own experiences to affirm the theory of the ‘unpopular nerd’ in the 7-8th grade, a period where I was periodically bullied in school, I find that it does not very well match the (Danish) high-school experience. I btw. didn’t much think of myself as a nerd before the age of, what, 20? Nerds were other people, people much more strange than me – in my self-narrative, I didn’t get bullied because I was a nerd but because those other kids were jerks. I wasn’t really all that different from anyone else (so I told myself). I was told sometimes that I was a nerd in high school, but I shrugged it off because it didn’t matter, because I wasn’t. I didn’t have much clue when it came to the social dynamics of the high school environment, but I don’t think I was ever unpopular. I don’t think I was all that popular either – if so I didn’t notice – it was just that I didn’t pay much attention to that kind of stuff (about this part Graham is right). But an important observation here is that I was allowed not to care by the others.
Graham’s treatment of status as a unidimensional variable is of course a gross simplification of the actual dynamics. One thing I’d add related to the ‘important observation’ above is that whereas status might not be too complex to be semi-reliably measured on a unidimensional scale, it should indeed surprise us a great deal if people who did not do all that well on such a scale would care much about their ordering on such a scale, at least if given any choice in the matter. Any sort of aggregate popularity function would have to be constructed by aggregating stuff that in many cases has little to nothing to do with each other and we should expect people, especially people at the lower ends of the status spectrum, to actually only really care much about the status markers on which they do well. Everyone wants be think that s/he is better than other people, more deserving, so most people just pick a narrative that makes this (…idea? …delusion?) come true, which is one major reason why most people care about but a few dimensions of the social hierarchy. The flip side of the ‘nerds don’t care about being (conventionally) popular’ is that ‘there’s a lot of stuff non-nerds also don’t care about which makes them less popular among nerds’ (and/or other sub-groups) – so why do you care so much about the popularity functions of non-nerds?
Graham spends some time on that one, on why nerds care about the opinions of non-nerds. You have a real problem when you can find no dimensions where you do better than others, or at least no dimensions that many other people care about in the status game. One thing Graham isn’t explicit about though (have thought about?) is that status – like safety (…and money, and…) – is something that people will very often start to care a lot about when they don’t really have much of it. This angle is not really explored in his piece and I find it quite important: A very unpopular nerd probably is more status-conscious than the higher-status bully who makes his life miserable, because he’s forced to confront this aspect of his existence all the time; the bully isn’t. As a general rule, bullied people spend orders of magnitude more time thinking about bullying and related status-stuff than do bullies. I think Graham is missing that part of the equation – being unpopular makes you status-conscious, just like being poor makes you care more about money (maybe I should write a post about that one too? It seems to me that a lot of people with abundant resources are unaware of the fact that part of the reason why they don’t much ‘care about money’ is due to the fact that they have a lot of it, which is precisely what enables them to not care). Anyway, moving along that diagonal; perhaps the people who are (conventionally) popular in the eyes of people who are not don’t really know that they are popular? Perhaps it doesn’t even necessarily take a lot of work to become (conventionally) popular? That would be unfair, but that doesn’t make it wrong. Perhaps some popular high schoolers are just likeable people who do not need to actually do much to stay popular? Perhaps some of them – many of them? – are quite smart and could have become ‘nerds’ but instead decided not to?
Update: I decided to just get rid of four paragraphs because I didn’t much like how they turned out. If you were wondering about that Pascal quote in the beginning, the post used to be a lot longer. It turned out I did have the time to use the mouse to select those passages and delete them.
- 180 grader
- alfred brendel
- Arthur Conan Doyle
- Bent Jensen
- Bill Bryson
- Bill Watterson
- Claude Berri
- current affairs
- Dan Simmons
- David Copperfield
- david lynch
- den kolde krig
- Dinu Lipatti
- Douglas Adams
- economic history
- Edward Grieg
- Eliezer Yudkowsky
- Ezra Levant
- Filippo Pacini
- financial regulation
- Flemming Rose
- foreign aid
- Franz Kafka
- freedom of speech
- Friedrich von Flotow
- Fyodor Dostoevsky
- Game theory
- Garry Kasparov
- George Carlin
- george enescu
- global warming
- Grahame Clark
- harry potter
- health care
- isaac asimov
- Jane Austen
- John Stuart Mill
- Jon Stewart
- Joseph Heller
- karl popper
- Khan Academy
- knowledge sharing
- Leland Yeager
- Marcel Pagnol
- Maria João Pires
- Mark Twain
- Martin Amis
- Martin Paldam
- mikhail gorbatjov
- Mikkel Plum
- Morten Uhrskov Jensen
- Muzio Clementi
- Nikolai Medtner
- North Korea
- nuclear proliferation
- nuclear weapons
- Ole Vagn Christensen
- Oscar Wilde
- Pascal's Wager
- Paul Graham
- people are strange
- public choice
- rambling nonsense
- random stuff
- Richard Dawkins
- Rowan Atkinson
- Saudi Arabia
- science fiction
- Sun Tzu
- Terry Pratchett
- The Art of War
- Thomas Hobbes
- Thomas More
- walter gieseking
- William Easterly