i. “Temperate, sincere, and intelligent inquiry and discussion are only to be dreaded by the advocates of error. The truth need not fear them…” (Benjamin Rush)

ii. “No man likes to have his intelligence or good faith questioned, especially if he has doubts about it himself.” (Henry Adams)

iii. “What one knows is, in youth, of little moment; they know enough who know how to learn.” (-ll-)

iv. “No one means all he says, and yet very few say all they mean, for words are slippery and thought is viscous.” (-ll-)

v. “It is a testament to our naïveté about culture that we think that we can change it by simply declaring new values. Such declarations usually produce only cynicism.” (Peter Senge)

vi. “We may be through with the past, but the past is not through with us.” (Bergen Evans)

vii. “No person’s gain in wisdom is diminished by anyone else’s gain.” (Charles Reich)

viii. “Old age is like learning a new profession. And not one of your own choosing.” (Jacques Barzun)

ix. “Children are like wet cement. Whatever falls on them makes an impression.” (Haim Ginott)

x. “All things are to be examined and called into question. There are no limits set to thought.” (Edith Hamilton)

xi. “It has always seemed strange to me that in our endless discussions about education so little stress is laid on the pleasure of becoming an educated person, the enormous interest it adds to life. To be able to be caught up into the world of thought — that is to be educated.” (-ll-)

xii. “Wisdom is knowing what to do next. Virtue is doing it.” (David Starr Jordan)

xiii. “The whole thing that makes a mathematician’s life worthwhile is that he gets the grudging admiration of three or four colleagues.” (Donald Knuth)

xiv. “It is less than honest to give one’s own religion the benefit of every possible doubt while imposing unsympathetic readings on other religions. Yet this is what practically all religious people do.” (Walter Kaufmann)

xv. “All of us have so much more time than we use well. How many hours in a life are spent in a way of which one might be proud, looking back?” (-ll-)

xvi. “Man is to be held only by the slightest chains; with the idea that he can break them at pleasure, he submits to them in sport.” (Maria Edgeworth)

xvii. “Surely it is much more generous to forgive and remember, than to forgive and forget.” (-ll-)

xviii. “If error is corrected whenever it is recognized as such, the path of error is the path of truth.” (Hans Reichenbach)

xix. “The facts are indispensable; they are not sufficient. To solve a problem it is necessary to think. It is necessary to think even to decide what facts to collect.” (Robert Hutchins)

xx. “The art of teaching consists in large part of interesting people in things that ought to interest them, but do not.” (-ll-)

November 30, 2014 | quotes

Delusion and Self-Deception – Affective and Motivational Influences on Belief Formation

I almost gave up on this book after having read 10-15 pages, but in the end I decided to read on, and I now believe that was the right decision; some of this stuff is quite interesting.

Some observations from the book below.

“Do delusion and self-deception involve departures from the procedural norms of belief formation? Self-deception — at least, everyday self-deception — need involve no such departure from the procedural norms of belief formation. There is overwhelming evidence that normal human beings have a systematically distorted self-conception […] Drivers tend to believe that their driving abilities are above average […], teachers tend to believe that their teaching abilities are above average […], and most of us believe that we are less prone to self-serving biases than others are […] [See also this previous post of mine…]. Having an overly positive self-image seems to be part of the functional profile of the human doxastic system; indeed, one might even argue that having an accurate self-conception is an indication of doxastic malfunction…”

“On the face of things, it seems obvious that delusions involve departures — typically, quite radical departures — from the procedural norms of human belief formation. Delusions stand out as exotic specimens in the garden of belief, as examples of what happens precisely when the mechanisms of belief formation break down. In support of this point, it is of some note that the DSM characterization of delusion includes a special exemption for religious beliefs. This exemption appears to be ad hoc from the perspective of the epistemic account of delusions […and I believe it’s fair to say that far from all people agree this exception should be made…], but it is perfectly appropriate in the context of the procedural norms account, because — certain commentators to the contrary — there is no reason to suppose that religious belief as such is indicative of doxastic malfunction.[2] Delusions, by contrast, do seem to be symptomatic of doxastic malfunction [my italics, US].”

“Affect and motivation are typically contrasted with cognition and perception; the former constitute “hot cognition” and the latter constitute “cold cognition.” Although intuitively compelling, the contrast between hot and cold cognition is difficult to spell out with any precision. […] Hot cognition has traditionally been regarded as an enemy of rationality […] But to say that affect can derail belief formation is not to say that it always derails belief formation. It is now widely granted that affect contributes to belief formation and cognition more generally in a number of positive ways […] Judged against epistemic norms there is not much to recommend motivated reasoning, for epistemic justification is constitutively tied to avoiding falsehood and detecting truth. But there are other norms against which to judge belief formation. For example, one might evaluate the mechanisms of belief fixation in terms of how well they enhance the agent’s well-being, reproductive fitness, or some such property. Arguably, one is generally better off believing that the world is how it is rather than how one wants it to be, but there are some domains — perhaps quite a number of domains — in which false but motivated belief brings significant benefit at little cost. A life governed by indiscriminately motivated belief might be nasty, brutish, and short, but suitably filtered motivational effects on belief might be expected to increase the agent’s well-being in any number of ways.”

“The judgments that make up the most pivotal points in our lives are seldom made dispassionately. When we await news from our beloved regarding her deliberations on a proposal of marriage, news from our doctor regarding the results of a medical test, or (perhaps as important to many of us) news from a journal editor regarding the fate of our latest manuscript, we do not approach that information with the cold detachment of a computer awaiting its next input. When we process information about valued aspects of self like our attractiveness, health, or intelligence, we almost always have clear preferences for what we want that information to hold. Rather than being indifferent to whether information suggests that we are loved or spurned, healthy or ill, published or one step closer to perishing, our processing of self-relevant information is usually accompanied by strong hopes and fears — hope that the information will favor the judgment conclusion we want to reach and fear that it will not.
Given the ubiquity of motivational forces as concomitants of important real-world judgments, it seems strange that documenting their role in judgment processes has been one of the thorniest problems in the history of experimental psychology. Terms such as “denial” and “wishful thinking” are mainstays of the contemporary vernacular and evidence for their role in everyday judgment would likely seem so obvious to the average person as to defy the need for empirical confirmation. At a formal scientific level, however, the simple proposition that what people believe can be affected by what they want to believe has proven to be a surprisingly controversial idea. […] the view I will present in this chapter is that people often come to believe what they want to believe (and disbelieve what they want not to believe) because of a quite reasonable tendency to think more deeply about negative information than positive information. By conceiving of motivation as affecting the quantity rather than the quality of cognitive processing, much of the mystery surrounding motivated reasoning is removed, and it can be understood as simply another example of the pervasive tendency in human thought to allocate cognitive resources strategically.”

“space precludes a nuanced treatment of the large corpus of empirical findings regarding self-serving bias, but the essence of the phenomenon, demonstrated across a number of studies, is that individuals receiving success feedback tend to report more internal and less external attributions for the causes of the feedback than do individuals receiving identically structured failure feedback”

“in comparison to the cognitive view, which explained errors and biases as unintentional miscues of imperfect but essentially functional information-processing strategies, motivational phenomena like “defensiveness” and “self-enhancement” implied a less benign view of people as intentionally distorting reality to serve their own egocentric purposes. […] [this] idea […] posed a significant challenge to most people’s intuition and thus dampened many researchers’ enthusiasm for motivational accounts of judgmental bias. […] At the theoretical level, the maturing field of social cognition has witnessed a gradual breakdown of the artificial barrier that originally existed between motivational and cognitive processes […]. Against this backdrop, a number of theories were generated during the late 1980s that attempted to specify how motivational forces might enter into and perturb the generic information-processing sequence […]. The key insight in this regard was the simple idea (absent in almost all early treatments of motivated bias) that if motivational factors are to affect cognitive outcomes, they must do so by affecting some aspect of cognitive process.
Together, these empirical and theoretical advances ushered in a new era of research on motivated bias, allowing researchers to move beyond the first-generation question of determining whether motivational forces affect cognitive processes to more interesting second-generation questions focused on distinguishing between different accounts of how this influence occurs”

“The prototypical phenomenon in the motivated reasoning literature is the pervasive tendency for individuals to accept more readily the validity of information consistent with a preferred judgment conclusion (preference-consistent information) than that of information inconsistent with a preferred judgment conclusion (preference-inconsistent information). Both perceptual defense and the self-serving attributional bias can be framed as examples of this general phenomenon, and similar effects have been found to occur whether the flattering or threatening information concerns one’s intelligence […], professional competence […], personality […], social sensitivity […], or vulnerability to future illness […].
But why does this differential acceptance occur? How does the processing of preference-consistent information differ from that of preference-inconsistent information? Most treatments of motivated reasoning suggest, either explicitly or implicitly, that the difference lies in the kind of processing people apply to the two types of information. […] According to this view, then, the desire to reach a specific judgment conclusion affects the quality of information processing: People approach preference-consistent and preference-inconsistent information with different processing goals and then use a biased set of cognitive operations to pursue those goals actively. […] a large body of research in social cognition [shows] that negative information and negative affective states produce more systematic, detail-oriented cognitive processing than do positive information and positive affective states […] The most common explanation for this asymmetry is an adaptive one. Negative stimuli are more likely than positive ones to require an immediate behavioral response (to avoid loss or harm). As such, negative stimuli tend to evoke a “mobilization” response that includes a narrowing and focusing of attention and an increase in detail-oriented cognitive analysis […] This body of work suggests that the key difference in the processing of preference-consistent and preference-inconsistent information may not lie in the kind of processing each receives, but rather in the intensity of that processing. That is, rather than actively working to construct justifications for preference-consistent information […], information we want to believe may often be accepted unthinkingly at face value. In contrast, because information inconsistent with a preferred judgment conclusion is more likely to initiate an effortful cognitive appraisal, alternative explanations for the unwanted information are likely to be considered, generating uncertainty regarding the validity of the information.
Ditto and Lopez (1992) referred to this view of motivated reasoning as the quantity of processing (QOP) view to highlight the contention that it is the amount or intensity of cognitive processing that most clearly differentiates the treatment of preference-consistent and preference-inconsistent information rather than the direction or intended goal of that processing.”

“As a motivational theory, the QOP model predicts that people will respond more skeptically to preference-inconsistent than preference-consistent information even when the consistency of the two types of information with prior expectations is equivalent. […] the QOP view does not deny that factors such as the consistency of information with prior expectations affect how effortfully that information is processed. For example, an individual who discovers that she is holding a multimillion-dollar lottery ticket is initially likely to respond quite skeptically, checking and rechecking the number on her ticket against the number on the television screen in an attempt to confirm that this highly unexpected windfall is actually true.
What the QOP view does suggest, however, is that the consistency of information with an individual’s expectations and the consistency of information with an individual’s preferences have analogous but independent effects on intensity of cognitive processing. People should be prompted to think deeply about events that they do not expect and those they do not want. In fact, the reason that the roles of expectation and motivation (i.e., positive vs. negative outcome) have historically been so difficult to disentangle is that both factors are typically posited to have identical effects on judgment. At an empirical level, this means that any attempt to confirm the QOP model (or any other motivational model for that matter) must take care to mimic the approach used in the example to rule out differential expectations as a plausible alternative for any putatively preference-based effects. […] the QOP model assume that merely thinking more intensely about a piece of information leads to a greater likelihood of considering multiple explanations for it. This assumption seems particularly noncontroversial. The guiding presupposition of the entire attributional perspective in psychology is that almost all human events are causally ambiguous, and thus people must infer why things occur from very limited observational data […]. Stated more simply, given a little motivation, people can generate multiple plausible explanations for virtually any piece of information. […] if negative affect indeed promotes more intensive cognitive analysis than does positive affect, it is almost inevitable that people will be more likely to consider multiple explanations for unwanted outcomes than wanted ones.”

“The QOP model does not require that people convince themselves of the inaccuracy of undesirable information. Instead, it predicts that people will be more uncertain about the validity of preference-inconsistent than preference-consistent information because of their greater likelihood of entertaining the possibility that unwanted information might be explainable in more than one way […] Because people adopt this more skeptical stance toward preference-inconsistent than preference-consistent information, it should simply require more (or better) information to convince someone of something he or she does not want to believe than of something he or she does. […] At the empirical level, there seems solid support for the predictions of the QOP model of motivated reasoning. Both of the key predictions of the model—that people are more likely to spontaneously question the validity of preference-inconsistent than preference-consistent information and that people are more sensitive to the quality of preference-inconsistent than preference-consistent information—have been confirmed by experimental research. This research has taken care to rule out nonmotivational explanations for the observed effects, and the findings are equally difficult to explain based on competing conceptualizations of how motivation alters cognitive processing.” [I’d prefer to see more studies before concluding anything, but their findings certainly seem to support the model. It would go too far to cover the findings in detail here, but I’d note that a few of the studies done on this topic are actually quite clever – US]

“People frequently believe things that they would rather not believe. I would rather be taller, more athletic, and have a better head of hair, but I do not believe that I possess any of these characteristics because the data simply will not let me. In some of the earliest work on motivated cognition, no lesser figures than Bruner […], Festinger […], and Heider […] all suggest that what we ultimately see and believe is not solely what we wish to see and believe, but rather represents a compromise between our wishes and the objective stimulus information provided by sense and reason. As such, any analysis of motivated reasoning must account for both sides of the resistance–sensitivity coin.
Central to the QOP view is an image of people as fundamentally adaptive information processors. Whereas qualitative treatments of motivated reasoning portray people as intentionally pursuing the goal of reaching a desired conclusion, the QOP view sees the reluctance of people to acknowledge the validity of unwanted information as an unintentional by-product of a quite reasonable strategy of directing detail-oriented cognitive processing toward potentially threatening environmental stimuli”

“A datum’s vividness for us often is a function of such things as its concreteness and its sensory, temporal, or spatial proximity. Vivid data are more likely to be recognized, attended to, and recalled than pallid data. Consequently, vivid data tend to have a disproportional influence on the formation and retention of beliefs. […] Although sources of biased belief [such as vividness of information] apparently can function independently of motivation, they also may be triggered and sustained by desires in the production of motivationally biased beliefs.[3] For example, desires can enhance the vividness or salience of data.”

“At least some delusional patients show considerable appreciation of the implausibility of their delusional beliefs […] Capgras delusion patients can be…able to appreciate that they are making an extraordinary claim. If you ask “what would you think if I told you my wife had been replaced by an impostor,” you will often get answers to the effect that it would be unbelievable, absurd, an indication that you had gone mad.” [Yet they continue to believe it’s true in their own case. This stuff is just weird.] […] Some patients in whom overt face recognition is impaired (patients with prosopagnosia) nevertheless show autonomic responses that distinguish familiar from unfamiliar faces […]. Ellis and Young propose that the reverse dissociation of impairments is found in Capgras patients: Overt face recognition is intact but the normal autonomic response to a familiar face (such as the face of a spouse) is absent. Ellis and colleagues […] provide evidence supporting this proposal. In patients with the Capgras delusion, skin conductance response (a measure of autonomic response) does not distinguish between familiar faces (famous faces or family faces) and unfamiliar faces. Ellis and Young (1990) thus suggest that the Capgras patient has an experience of seeing a face that looks just like the spouse, but without the affective response that would normally be an integral part of that experience. The delusion then arises as the patient tries to explain this peculiar experience.”


November 30, 2014 | books, Psychology

The Changing Nature of Pain Complaints over the Lifespan

“I was annoyed by the poor quality of the coverage throughout, but I read on because occasionally some observations were included which made reading the rest semi-worthwhile. However after 113 pages I’d had enough. This is a terrible book.”

This is what I wrote on goodreads – I of course gave the book one star. I had actually talked myself into not covering the book here at all due to the low quality of the coverage, but I later changed my mind. The main reason is probably that it’d be easier for me to justify having spent time on the book if I at least try to retain some of the knowledge it contains – I generally remember the stuff I blog a lot better than the stuff I don’t.

Below I have added some observations from the book.

“Fordyce’s (1983) most significant contribution to the psychology of pain is his operant conditioning model which proposed that pain behaviors are maintained, and even enhanced by verbal reinforcement by the intimates of the patient. This proposition has been tested with RAP patients and their families and to date has some degree of validation […] [One study in this context on a related note found that] parents exhibiting and engaging in pain behaviors were more likely to reinforce illness behaviors in their children. […] [According to learning theory,] there is a distinction between the initial physiological reaction to a painful stimulus and the resulting behaviors. Respondent pain behaviors are conceptualized as actions that result from actual nociception. Operant pain behaviors are actions that develop when the pain experience is linked to forms of reinforcement such as receiving pain medication or attention, or being allowed to avoid unpleasant situations. The association between pain and reinforcement increases the likelihood of the persistence of pain behaviors, which often become separated from the original painful stimulus. If respondent pain behaviors last long enough, learning will occur and behaviors may then be controlled by the operant pain behaviors. In chronic pain both respondent and operant pain behaviors are likely to coexist”

“In summary, among pain prevalence rates in children and adolescents for establishing possible patterns for pain that can be associated with life transition, headache is the most researched pain condition. A number of good epidemiological studies establish prevalence rates during a 1-year interval for nonspecific headaches as somewhere between 70 and 80%. With specific migraine headaches, the prevalence rates across 1 year intervals are less uniformly established in the literature. There is greater variability reported, but most results generally fall between 3 and 20%, with more studies reporting findings in the lower range of figures.” [I have a friend who’s never had a headache (the skeptic would say: ‘claims to never have had…’), so I found these numbers interesting] […] There is also evidence that adolescents who experience frequent headache have more daily stress (Brattberg, 1994), such as a mother having pain, not getting enough sleep, loneliness, and fear of being bullied by other students in school. […] Of all the biological factors associated with patterns of headache, there appears to be only one variable that is consistently reported: A gender variable is present, with females reporting higher prevalence rates. This is true for both nonspecific headaches and migraines. […] The literature that exists, which is very limited, suggests that children from families where chronic pain is present tend to develop pain problems themselves; this is especially the case when the chronic pain is headache”

One observation I wrote down in the margin here is that headaches are generally impossible for a parent to accurately observe or test for in a child, which may not be a coincidence; it’s easier to lie about a headache than about a fever. The authors do not mention this aspect, which is in a way not surprising; they seem to me generally very uncritical of the data they have to work with, though they do admittedly observe elsewhere in the coverage that: “Since pain is strictly an individual experience, sometimes with limited medical explanations for its frequency, severity, and duration, it is a phenomenon that is almost exclusively dependent on self-reports”.

“[Longitudinal studies tend] to confirm a generally held belief that the family plays an intricate role in JRA [juvenile rheumatoid arthritis]. The family impacts on the child and vice versa. Families with resilience, those who adapt to the changing demands of living with a young person with JRA, those who show psychological family resources, or families that are “healthy” are best suited to provide a nurturing environment and enable their children to cope successfully with their disease.” [Or, put another way, if you live in a dysfunctional family environment, this may well have negative health consequences and affect physiological responses to pain in a negative manner].

“Thomas (1994) found that the distribution of pain complaints in young adults was not random across individuals but instead was focused frequently within some families and not present or minimally so in other families. […] Payne and Norfleet (1986) reported that 78% of family members of chronic pain patients also reported pain. […] Apley (1975) also found that clinical pain in one sibling was associated with six times the frequency of pain in other siblings compared to controls. Family influences appear to be critical in the distribution of clinical pain behaviors in young adults [also in the nonclinical population]. Yet, it is still not clear what mechanisms are responsible for a skewed distribution of pain complaints within family units rather than having pain complaints randomly distributed on an individual basis in young adults.”

“The pain literature has investigated several possible explanations that could account for biasing factors in distribution of pain complaints in young adults. Possible explanations include such widely divergent transmission mechanisms as reinforcement and modeling to genetic predispositions for the development of pain behaviors […] Attempts to explain systematic patterns of location for pain complaints by age, family, or gender have focused most often on learning theory. This theory asserts that reinforcement patterns of family members for pain behaviors are the primary mechanism to influence individuals’ pain complaints […]. For example, reinforcement of pain behaviors by parents at an early age could influence lifetime pain experiences of individuals […], such that parents who positively respond to their children when they express illness behaviors may predispose their children to chronic pain problems in the future […]. Vicarious learning by observing family models is another possible explanation that might influence distribution of pain complaints, causing them to be focused within families. This process could also be responsible for transmitting physiological responses, which are associated with the experience of pain, from parents to children […] Alternately, the skewed distribution of pain reports within families may indicate the possibility that certain pain experiences are inherited […] Another possible genetic explanation has been that particular physical conditions are inherited” [I’d conclude from the coverage that there’s not nearly enough data and methodologically valid research presented in the book to even attempt to draw a conclusion about which explanation(s) might be the most important one(s).]

“The proposition that somatic pain in young adults can be an expression of childhood neglect and abuse has been proposed […] Berger, Knutson, Mehm, and Perkins (1988) showed that their sample, which consisted primarily of middle-class young adults, had experienced a wide range of physical discipline during childhood. Over 12% of their sample described being injured by the discipline of their parents and identified the specific injuries. It is interesting to note that even though 12.1% of respondents identified themselves as being injured by parents, fewer than 3% of respondents labeled themselves as having been physically abused as a child. This suggests a disparity between what one considers to have been abuse toward oneself and professional or public criteria for abuse. Of those who received broken bones, only 43% classified themselves as being physically abused. Also, only 35-38% of those receiving burns, cuts, dental injuries, or head injuries classified themselves as physically abused.” (Wow…)

“theoretical explanations for pain must include psychological and social context factors, since pain can be experienced in the absence of organic cause, such as with somatoform disorders. Another psychological factor affecting pain reporting is attention […]. Thus, inclusion of psychological factors as contextual variables surrounding pain increase explanatory power for theories attempting to account for variability in pain reporting across age groups as well as individuals. At the same time, psychological context variables also increase the complexity in understanding pain. […] The longer an individual experiences pain and there is some systematic sign of increased autonomic nervous system activity, the greater the likelihood that the pain experience will be influenced by psychological and social factors. For example, in people with chronic pain due to organic etiology, it is estimated that 20 to 40% experience major depression and another 40% have mood disorders […]. Thus, the comorbidity of chronic pain and depression is of continuing research interest. It is currently hypothesized that the relationship is bidirectional. Thus, high levels of depression may result in high levels of reported pain; conversely, high levels of chronicity of pain may lead to development of a major depression”


November 28, 2014 | books, medicine, Psychology

An Introduction to the Theory of Knowledge

“The theory of knowledge, or epistemology, is one of the main areas of philosophy. […] This book is intended to introduce the reader to some of the main problems in epistemology and to some proposed solutions. It is primarily intended for students taking their first course in the theory of knowledge, but it should also be useful to the generally educated reader interested in learning something about epistemology. I do not assume that the reader has an extensive background in philosophy.”

I’ve read Lemos’ book. It’s always bothersome to blog philosophy, and I’ve been uncertain how best to blog this one. In the end I decided to add some links covering much of the material dealt with in the book, along with a few comments about the book itself. I haven’t quoted much from it, frankly because life’s too short for that. I didn’t rate the book, but would have given it either one star or two if I could make up my mind. If you read all the links below, I think you’ll have a pretty good idea of what kind of stuff is covered in this book. No, I haven’t read the articles behind the links, but from a brief skim they seem to deal with many of the same topics and specific issues encountered in the book’s coverage. I’ve read about many of the topics covered in the book before, but generally in much less detail.

Okay, links first – you should note that most of these links are not to wikipedia articles, but rather to articles from the Stanford Encyclopedia of Philosophy, which I’ve talked about before, as that site has much better coverage of the relevant topics than does wikipedia:

Is Justified True Belief Knowledge? (‘The Gettier problem’).
Foundationalist Theories of Epistemic Justification.
The Coherence Theory of Truth (/or perhaps better: ‘…of justification’).
Virtue Epistemology.
Inference to the best explanation (/abduction).
Internalist vs. Externalist Conceptions of Epistemic Justification.
A Priori Justification and Knowledge.
The Analytic/Synthetic Distinction.
Naturalized Epistemology.

A quote from the book:

“There are many forms of naturalized epistemology [NE] and it is hard to say exactly what it is. The various forms have different views about the relations between natural science and traditional epistemology. In its most radical forms, naturalized epistemology holds that traditional epistemology should be abandoned or at least replaced by some empirical science, such as psychology. Other less radical forms of naturalized epistemology don’t call for the abandonment of traditional epistemology but hold that the empirical sciences, especially psychology, can solve or help to resolve many of the problems confronting traditional epistemology. […] In general, proponents of naturalized epistemology stress the importance of the natural sciences for epistemological inquiry. […] Instead of focusing on the justification of our beliefs, [Quine, one of the proponents of NE, thinks] we should rather be seeking a scientific explanation of how we get those beliefs. Instead of being concerned with the normative or evaluative status of beliefs we would be concerned with a descriptive inquiry about the psychological processes that produce them. […] Traditional epistemology is concerned with normative or evaluative concepts such as justification, reasonableness, and knowledge. It asks, for example, how do our sensory experiences justify our beliefs about the external world. In contrast, Quine seems to propose that we set aside these normative or evaluative questions, and ask how our sensory experiences cause or bring about our beliefs. Traditional epistemology and the sort of inquiry Quine advocates are thus concerned with different relations between sensory experience and belief.”

This quote is from the last chapter. The word ‘science’ is mentioned exactly once in the first 182 pages of this book. This is perhaps the easiest way for me to express how irrelevant I think many of the thoughts included in the book are to anything ‘real’ or ‘useful’. My impression is that these people are using ill-defined concepts to talk about other ill-defined concepts in order to solve theoretical problems of limited relevance to anyone. Even the chapters and approaches that make some sense are far too lacking in detail to be informative enough to be interesting; the author uses a lot of pages to say little, and he frequently repeats himself.

The author frequently states that ‘it’s obvious that we know X’, where X is some specific thing he considers it obvious that we know – even though the whole book is basically about how we can justify knowledge claims in the first place, which makes it far from obvious whether and how we actually know what he claims we know. When he came up with specific examples of things we obviously knew, I often thought along the way: ‘…we do? How? You have not defined your terms clearly enough for that claim even to be evaluated.’

Throughout most of the book the author takes it as given that what people claim to know, and supposedly feel justified in knowing, is relevant to how beliefs are optimally justified – or whatever it is he and his colleagues are hoping to do – even though an inquiry like this one really ought to address whether that is even true. It’s not that the problem isn’t addressed at all, but I would certainly disagree that it is addressed remotely satisfactorily. This is a real problem because sometimes the knowledge beliefs people are claimed to hold are used as arguments for particular approaches to justifying beliefs (‘it’s common for people to believe X (…think themselves justified in believing X), so it must be good and proper to believe X (…)’), without those belief claims being at all closely scrutinized. There’s (almost) no science included in this book on how often people are wrong, what they’re likely to be wrong about, in which situations they’re most likely to be wrong, in which direction they’re likely to be wrong, or anything like that – though reliabilism does step somewhat closer to such things than the other approaches do. The idea of explicitly including such material in epistemological research is briefly addressed in the last chapter, as implied in the quote mentioning Quine above (see also below).

The first nine chapters – and, it seems, most of epistemology – deal almost exclusively with justification (and arguments), not with the question of why people hold the beliefs they do. What you get is a lot of arguments about why X (or Y, or Z…) is clearly the best way to justify beliefs in a specific manner dissimilar from the other approaches available, and why Y (…or X, or Z) is clearly inferior; some approaches are directly related, one being in some sense ‘the opposite’, along some relevant dimension, of another. The arguments are mostly based on very simple logic and/or examples of various kinds meant to illustrate aspects which are potentially problematic, or not, for a given account. There is pretty much zero science to test which of the methodological approaches are more likely to yield accurate beliefs, insofar as we can even test that – though the literature on reliabilism, an approach in which justification depends on whether the processes causing the beliefs are reliable and so likely to yield accurate beliefs, might have something on this (it is not included in the book).

Some of the claims in the book I have no idea how they even justify making in the first place, which makes it awkward to criticize the ideas presented – especially as some of the most problematic assumptions are implemented implicitly, in some sense before the analysis even begins; you’re supposed to agree on this part for any of what follows to make sense, and if you don’t agree, or would prefer to understand some of the implications of agreeing before moving on, then you’re in trouble. There seem to me to be a huge number of assumptions hidden all over the place in this book, and it’s really annoying that they are not addressed; I have a distinct impression that some of those hidden assumptions are either stupid, wrong, or some combination of the two.

Most of the work in this book – and, judging from the coverage, most of epistemology – deals with the question of how best to justify believing things while completely ignoring all data about how people actually go about forming the beliefs they hold. I frankly find this approach incredibly stupid. But then again this may just reflect the fact that I’m much more interested in the latter question than in the former, and some people will find the approach perfectly reasonable. I’m actually quite uncertain about who ignores what in the main chapters, because it seems to me that a reliabilist approach not informed by actual knowledge about belief formation is completely meaningless, yet the author seems to claim later on that the only people who do not deliberately ignore such data are those belonging to the various schools of the naturalized epistemology branch. I’m not curious enough to find out what’s going on here, because I don’t really care.

Although, as mentioned, I’m not completely certain about the details, it does seem that many epistemologists find it reasonable to ignore where people’s beliefs come from. It’s worth pointing out that, from what I’ve gathered so far, in other areas of research people have often found that when you answer the types of questions I’m most interested in here – questions about where beliefs come from – you also automatically tend to answer some of the questions the ‘let’s not use science’ crowd likes to ask (such as how justifiable various approaches really are); either that, or you demonstrate that some of those questions don’t make sense. It seems to me that the more you know about how beliefs are actually formed, the easier it gets to replace theoretical models with actual variables of interest; and the more you know about why people hold the beliefs they do, the easier it may be to evaluate specific approaches to judging them, because as you proceed you gradually substitute actual knowledge for judgmentalism. It doesn’t make sense to me to fault people for using an approach to belief evaluation which is less likely to yield accurate estimates of what the world is really like than a competing approach, demonstrated in a theoretical framework to be more accurate, if the optimal framework derived from ‘pure epistemology’ is based on an infeasible model of belief formation. If you’re not justified in believing that flying elephants are real when you’ve been drinking a lot of alcohol, then it might be a good idea to address whether or not you’ve been drinking alcohol when evaluating the beliefs you hold. If you’re more likely to become religious if your parents were religious, and if the religious beliefs you hold seem to be socially mediated to some extent, then that likewise seems like relevant information when evaluating how to justify religious beliefs.
Stuff you can explain (using data) – including beliefs and belief-formation processes, whether or not the beliefs in question are ‘moral beliefs’ – may be easier to justify, or not justify, as the case may be, than stuff you can’t explain. These remarks pertain not only to epistemology but also to other branches of philosophy, like moral philosophy. Of course I’m aware that some people from this field might object that you can’t know that you actually know what I implied we might get to know from data (like alcohol making people more likely to observe flying elephants) – this stuff is complicated. My point is that I think a lot of it is needlessly complicated, and/or perhaps that people are looking at ‘the wrong complications’.

Should we justify our beliefs based on whether they agree with other beliefs we have? How can we say, before we’ve figured out to what extent we actually do that? If humans don’t do that kind of thing, why would you ask such a question in the first place? And if you can’t figure out the extent to which they do, the same question applies – why should we care about the answer? Yet according to the book, coherentists don’t seem to care much about how people form beliefs; they mostly care about how people justify them. Analogous things seem to be going on in other contexts. Do note that it’s possible, at least in theory, to obtain data both on which beliefs people hold and on how they justify holding them (you can just ask people – though there are other ways to approach such questions as well, and you needn’t always make do with the lies and confused feedback people might come up with when asked). Regardless of whether you think epistemologists should concern themselves only with the question of justification (many philosophers seem to hold this view) or whether you’d like them also to address which beliefs people actually hold (and how they come to hold them), there is both a ‘descriptive justificationalism’ and a ‘normative justificationalism’ (the latter, judging from the coverage, is just classical epistemology), and if you’re doing only one of them you’re probably missing out on some relevant stuff. Instead of having all those arguments about the proper way to think about these things, why not at least try to address the descriptive part – find some data and figure out how people justify believing the things they do? At first I thought this was the approach the book calls reliabilism, but now I really am not sure what that is about.
Anyway, collecting data and starting to figure out how people justify their beliefs would seem to me a necessary starting point for any analysis of this sort – the sort of thing these people should have done a long time ago. There are lots of claims in the book about what people may be justified in believing and how they should go about justifying it, but there’s not much data, and this field could really use some. What if people tend to use some approaches (e.g. coherence) to justify some types of beliefs, but other approaches (reliabilism) to justify others? How is that not potentially relevant? Do some of the main claims of specific theories even make sense in light of scientific discoveries made over the years? Here’s a related quote from the Stanford epistemology article:

“According to an extreme version of naturalistic epistemology, the project of traditional epistemology, pursued in an a priori fashion from the philosopher’s armchair, is completely misguided. The “fruits” of such activity are demonstrably false theories such as foundationalism, as well as endless and arcane debates in the attempt to tackle questions to which there are no answers.” (my emphasis).

While reading the book I had the impression that foundationalism might just be ‘complicated bullshit’, but I found it really hard to figure out exactly what these people were trying to argue, so I decided to withhold judgment. I’m still not sure exactly what they’re arguing, nor for that matter do I understand why they’d ever think it a good idea to approach these sorts of questions in the proposed manner, but it’s safe to say that the proponents haven’t exactly convinced me that this framework is the right one. Maybe it goes without saying, but I am of course somewhat sympathetic to naturalistic epistemological approaches.

One of the main problems I have with this book it shares with some other philosophical works I’ve encountered: people in this field seem to have a tendency to evaluate ideas and arguments not by how well they explain data, but mainly by how internally consistent and logically coherent the various theories are. It is considered very relevant whether a given theory can handle all potential counterexamples and counterarguments; if you can find a clever case showing that the theory doesn’t work in some specific context, because of some implication of what’s already been assumed and/or some contrived example, then you’re golden. But very few people go out, pick up data, and look at how the theories relate to it, because data is not the currency of philosophy. If you write a philosophical text, you’d better have an argument ready to explain an elephant carrying around a radio playing Tchaikovsky (an actual example from one of the chapters, included to illustrate a problem with a specific theory). Nobody knows whether the elephant is relevant, because nobody ever seems to bother to look at the data and figure out how often people actually encounter elephants carrying radios playing Tchaikovsky. I find this frustrating.

I have added one more quote from the book’s last chapter below, as well as some related remarks:

“The limited naturalist holds that defining or giving an analysis of central epistemic concepts such as knowledge, justification, or evidence is a properly philosophical activity. There are also normative questions and issues that are appropriate topics for philosophical investigation. Thus, it is the business of philosophy to discover what makes beliefs justified or reasonable, to discover criteria for justified belief. […] So far this sounds very much like traditional epistemology. But now suppose that we want to know, for example, whether a belief is an instance of knowledge. In order to know whether it was we would need to know whether it met our standard. We would need to know whether it did in fact come from a cognitive process with the appropriate degree of reliability. Presumably, empirical psychology would be relevant to telling us whether our beliefs did in fact meet that standard. Empirical psychology could identify what cognitive processes did in fact produce our beliefs and tell us whether those processes met the requisite standard of reliability. So, according to this view, empirical psychology can be relevant to whether some belief of ours counts as knowledge.”

The above seems, judging from the book, to be ‘as far’ as most epistemologists want to go at the moment. It’s very curious to me that they seem to think cognitive processes are the only thing that may matter, and that cognitive science and psychology are all you really need in order to evaluate beliefs and belief-formation processes. I wonder if these people have ever considered the problem that different sources may not be equally reliable in terms of telling you about the world – reliability being rather relevant both to belief formation and to evaluating the knowledge one possesses (the Daily Mail vs. the New England Journal of Medicine)? Or that sources of different reliability may be mixed up with each other in non-trivial ways, and yet you’re still supposed to come up with some idea of what to think about X? Have they perhaps even heard of statistical analysis?

It seems to me that a lot of scientists these days are working really hard to do exactly the sort of work epistemologists claim to be trying to do, yet repeatedly fail at – or, if you’re more gracious to the work being done in that field, the scientists are doing complementary work of some importance. Are you better justified in trusting a scientific report than a random newspaper article, and in which cases might you not be? Might there be a systematic way to approach the question of which types of evidence are best when making judgments? Might there even be a natural hierarchical ordering of the scientific evidence available to us (prospective studies > retrospective studies, all else equal? A meta-analysis of prospective studies > one prospective study, all else equal?), which could help promote accurate belief formation (and belief-formation strategies)? This field could be very broad. Perhaps it really is, and some people are working on these sorts of questions. But you wouldn’t know it from this book, and I’m not sure the people addressing such far more relevant questions go by the name of epistemologists.
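To make the point about source reliability concrete, here’s a minimal Bayesian sketch. This is my own toy example, not anything from the book, and the function name and all the probabilities are made up purely for illustration:

```python
# Toy illustration: Bayes' rule makes 'source reliability' a first-class
# quantity when deciding what a report should do to your beliefs.
# All numbers below are invented for the sake of the example.

def update(prior, p_report_if_true, p_report_if_false):
    """Posterior probability of a claim after one source reports it."""
    numerator = p_report_if_true * prior
    denominator = numerator + p_report_if_false * (1 - prior)
    return numerator / denominator

prior = 0.01  # initial credence in some surprising medical claim

# A careful journal rarely publishes false claims; a tabloid does so often.
posterior_journal = update(prior, 0.9, 0.05)
posterior_tabloid = update(prior, 0.9, 0.60)

# The very same report moves belief far more when the source is reliable.
print(posterior_journal > posterior_tabloid)
```

The point of the sketch is simply that ‘how much should this report change my mind?’ has a quantitative answer once you model how often a source reports falsehoods – which is the kind of question the book never gets near.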

November 27, 2014 Posted by | books, philosophy

The Voyage of the Beagle (I)

“”You care for nothing but shooting, dogs, and rat-catching,” Dr. Darwin had said, “and you will be a disgrace to yourself and all your family.””

The above quote is included in the introduction to the book, by David Quammen. I somehow really like that quote.

I generally don’t blog fiction unless I have a good excuse, but I didn’t seem to have many problems coming up with excuses for covering this book here. When I first read the book some years ago I read a version of it I did not personally own, in Danish – later on I bought an English-language copy for myself, and lately I have been rereading it. It’s a nice book. The world was a very different place 180 years ago, and a book like this gives you a lot of details which help you understand just how different. The book is interesting not only for the many glimpses it gives you into the lives of people living in the 19th century, but of course also from a history of science point of view; this relates not only to how the trip helped Darwin develop his ideas about natural selection but also to many other aspects, some more related to his ‘big idea’ than others. Darwin’s geological observations, made more than a century before the idea of plate tectonics was accepted by the geological community, are for example fascinating, but he also shares ideas about e.g. local weather patterns and meteorological phenomena encountered along the way, and about things like animal breeding. It should be emphasized that this is a travel book, not a scientific treatise. I’m reasonably certain that even if Darwin had never come up with the idea for which he is now known, and had not included any biology-related observations, there would still be many parts of this book very much worth reading for other reasons; Darwin experienced a lot of interesting things during this trip, and it would have made for a very good story even if they’d skipped the Galapagos Islands and returned home to Britain after the visit to Chile. Though it’s a good thing for biology that they didn’t.

Many of his observations are interesting not because they tell you what a young man like Darwin might have thought about, say, weather patterns in 1835, but because they give you some wonderful insights into stuff like prevailing social mores and dynamics in South American societies around the 1830s; as mentioned, the world was a very different place back then. Just how different should be clear from some of the quotes below, and part of what I sometimes found really interesting about the account was how Darwin would not comment on or question specific things he observed – things someone born in the late 20th century might well have thought about in a very different manner. Darwin took a lot of stuff for granted, just like someone born in the late 20th century does, but the things he took for granted were sometimes very different from the things we might take for granted. It’s however noteworthy in this context that although there’s a big gap, both in terms of knowledge and presumably also in terms of values, between Darwin and the modern reader, there often seems to be if anything a bigger gap between him and some of the people he encounters along the way; sometimes you sort of think it would be easier for you, the modern reader, to understand ‘where Darwin was coming from’ than it must have been for some of the people whom he met during his travels. There’s more than one good reason to read this book.

I’ve added some quotes from the book below. I should point out that Darwin writes in a manner some might find boring, in part because he tends to go into a lot of detail when talking about some specific thing he considers interesting (which you may find tedious if you don’t share the interest), but there’s some really nice stuff hidden in there, and the book is very much worth reading even if you don’t enjoy all of it equally.

“As it was growing dark we passed under one of the massive, bare, and steep hills of granite which are so common in this country. This spot is notorious from having been, for a long time, the residence of some runaway slaves, who, by cultivating a little ground near the top, contrived to eke out a subsistence. At length they were discovered, and a party of soldiers being sent, the whole were seized with the exception of one old woman, who, sooner than again be led into slavery, dashed herself to pieces from the summit of the mountain. In a Roman matron this would have been called the noble love of freedom: in a poor negress it is mere brutal obstinacy. We continued riding for some hours. [No more comments about this event – three sentences later he talks about fireflies…]”

“On such fazêndas as these, I have no doubt the slaves pass happy and contented lives. On Saturday and Sunday they work for themselves, and in this fertile climate the labour of two days is sufficient to support a man and his family for the whole week.”

“While staying at this estate, I was very nearly being an eyewitness to one of those atrocious acts which can only take place in a slave country. Owing to a quarrel and a lawsuit, the owner was on the point of taking all the women and children from the male slaves, and selling them separately at the public auction at Rio. Interest, and not any feeling of compassion, prevented this act. Indeed, I do not believe the inhumanity of separating thirty families, who had lived together for many years, even occurred to the owner.”

“I first visited the forest in which these Plenariae [some worms] were found, in company with an old Portuguese priest who took me out to hunt with him. The sport consisted in turning into the cover a few dogs, and then patiently waiting to fire at any animal which might appear.”

“I may mention, as a proof of how cheap everything is in this country, that I paid only two dollars a day, or eight shillings, for two men, together with a troop of about a dozen riding-horses. My companions were well armed with pistols and sabres; a precaution which I thought rather unnecessary: but the first piece of news we heard was, that, the day before, a traveller from Monte Video had been found dead on the road, with his throat cut. This happened close to a cross, the record of a former murder.
On the first night we slept at a retired little country-house; and there I soon found out that I possessed two or three articles, especially a pocket compass, which created unbounded astonishment. In every house I was asked to show the compass, and by its aid, together with a map, to point out the direction of various places. It excited the liveliest admiration that I, a perfect stranger, should know the road (for direction and road are synonymous in this open country) to places where I had never been. […] this retired part of the country is seldom visited by foreigners. I was asked whether the earth or sun moved; whether it was hotter or colder to the north; where Spain was, and many other such questions. The greater number of inhabitants had an indistinct idea that England, London, and North America, were different names for the same place; but the better informed well knew that London and North America were separate countries close together, and that England was a large town in London!”

“On approaching the house of a stranger, it is usual to follow several little points of etiquette: riding up slowly to the door, the salutation of Ave Maria is given, and until somebody comes out and asks you to alight, it is not customary even to get off your horse […] Having entered the house, some general conversation is kept up for a few minutes, till permission is asked to pass the night there. This is granted as a matter of course. The stranger then takes his meals with the family, and a room is assigned him, where with the horsecloths belonging to his recado (or saddle of the Pampas) he makes his bed. It is curious how similar circumstances produce such similar results in manners. At the Cape of Good Hope the same hospitality, and very nearly the same points of etiquette, are universally observed.”

“To the northward of the Rio Negro, between it and the inhabited country near Buenos Ayres, the Spaniards have only one small settlement, recently established at Bahia Blanca. The distance in a straight line to Buenos Ayres is very nearly five hundred British miles. The wandering tribes of horse Indians, which have always occupied the greater part of this country, having of late much harassed the outlying estancias, the government at Buenos Ayres equipped some time since an army under the command of General Rosas for the purpose of exterminating them. The troops were now encamped on the banks of the Colorado; a river lying about eighty miles northward of the Rio Negro. […] It was supposed that General Rosas had about six hundred Indian allies. […] The duty of the women is to load and unload the horses; to make the tents for the night; in short to be, like the wives of all savages, useful slaves. The men fight, hunt, take care of the horses, and make the riding gear. […] While changing horses at the Guardia several people questioned us much about the army, – I never saw anything like the enthusiasm for Rosas, and for the success of “the most just of all wars, because against barbarians.””

“A few days afterwards I saw another troop of these banditti-like soldiers start on an expedition against a tribe of Indians at the small Salinas, who had been betrayed by a prisoner cacique. The Spaniard who brought the orders of this expedition was a very intelligent man. He gave me an account of the last engagement at which he was present. Some Indians, who had been taken prisoners, gave information of a tribe living north of the Colorado. Two hundred soldiers were sent; and they first discovered the Indians by a cloud of dust from their horses’ feet, as they chanced to be travelling. The country was mountainous and wild, and it must have been far in the interior, for the Cordillera was in sight. The Indians, men, women, and children, were about one hundred and ten in number, and they were nearly all taken or killed, for the soldiers sabre every man. The Indians are now so terrified that they offer no resistance in a body, but each flies, neglecting even his wife and children; but when overtaken, like wild animals, they fight against any number to the last moment. […] This is a dark picture; but how much more shocking is the unquestionable fact, that all the women who appear above twenty years old are massacred in cold blood! When I exclaimed that this appeared rather inhuman, he answered, “Why, what can be done? They breed so!”
Every one here is fully convinced that this is the most just war, because it is against barbarians. Who would believe in this age that such atrocities could be committed in a Christian civilized country? The children of the Indians are saved, to be sold or given away as servants, or rather slaves for as long time as the owners can make them believe themselves slaves; but I believe in their treatment there is little to complain of.”

“We passed a train of waggons and a troop of beasts on their road to Mendoza. The distance is about 580 geographical miles, and the journey is generally performed in fifty days.”

“a niata bull and cow invariably produce niata calves. A niata bull with a common cow, or the reverse cross, produces offspring having an intermediate character, but with the niata characters strongly displayed: according to Señor Muniz, there is the clearest evidence, contrary to the common belief of agriculturalists in analogous cases, that the niata cow when crossed with a common bull transmits her peculiarities more strongly than the niata bull when crossed with a common cow. When the pasture is tolerably long, the niata cattle feed with the tongue and palate as well as common cattle, but during the great droughts, when so many animals perish, the niata breed is under a great disadvantage, and would be exterminated if not attended to; for the common cattle, like horses, are able just to keep alive, by browsing with their lips on twigs of trees and reeds; this the niata cannot so well do, as their lips do not join, and hence they are found to perish before the common cattle. This strikes me as a good illustration of how little we are able to judge from the ordinary habits of life, on what circumstances, occurring only at long intervals, the rarity or extinction of a species may be determined.”

“Police and justice are quite inefficient. If a man who is poor commits murder and is taken, he will be imprisoned, and perhaps even shot; but if he is rich and has friends, he may rely on it no very severe consequences will ensue. […] A traveller has no protection besides his fire-arms; and the constant habit of carrying them is the main check to more frequent robberies. […] Nearly every public officer can be bribed. The head man in the post-office sold forged government franks.”

“The results of all the attempts to colonize this side of America south of 41°, has been miserable. Port Famine expresses by its name the lingering and extreme sufferings of several hundred wretched people, of whom one alone survived to relate their misfortunes. At St. Joseph’s Bay, on the coast of Patagonia, a small settlement was made; but during one Sunday the Indians made an attack and massacred the whole party, excepting two men, who remained captives during many years. At the Rio Negro I conversed with one of these men, now in extreme old age.”

“Every animal in a state of nature regularly breeds; yet in a species long established, any great increase in numbers is obviously impossible, and must be checked by some means. We are, nevertheless, seldom able with certainty to tell in any given species, at what period of life, or at what period of the year, or whether only at long intervals, the check falls; or, again, what is the precise nature of the check. Hence probably it is, that we feel so little surprise at one, of two species closely allied in habits, being rare and the other abundant in the same district; or, again, that one should be abundant in one district, and another, filling the same place in the economy of nature, should be abundant in a neighbouring district, differing very little in its conditions. If asked how this is, one immediately replies that it is determined by some slight difference, in climate, food, or the number of enemies: yet how rarely, if ever, we can point out the precise cause and manner of action of the check! We are, therefore, driven to the conclusion, that causes generally quite inappreciable by us, determine whether a given species shall be abundant or scanty in numbers. […] To admit that species generally become rare before they become extinct – to feel no surprise at the comparative rarity of one species with another, and yet to call in some extraordinary agent and to marvel greatly when a species ceases to exist, appears to me much the same as to admit that sickness in the individual is the prelude to death – but when the sick man dies to wonder, and to believe that he died through violence.”

November 26, 2014 Posted by | biology, books

Introduction to Meta Analysis (III)


This will be my last post about the book. Below I have included some observations from the last 100 pages.

“A central theme in this volume is the fact that we usually prefer to work with effect sizes, rather than p-values. […] While we would argue that researchers should shift their focus to effect sizes even when working entirely with primary studies, the shift is absolutely critical when our goal is to synthesize data from multiple studies. A narrative reviewer who works with p-values (or with reports that were based on p-values) and uses these as the basis for a synthesis, is facing an impossible task. Where people tend to misinterpret a single p-value, the problem is much worse when they need to compare a series of p-values. […] the p-value is often misinterpreted. Because researchers care about the effect size, they tend to take whatever information they have and press it into service as an indicator of effect size. A statistically significant p-value is assumed to reflect a clinically important effect, and a nonsignificant p-value is assumed to reflect a trivial (or zero) effect. However, these interpretations are not necessarily correct. […] The narrative review typically works with p-values (or with conclusions that are based on p-values), and therefore lends itself to […] mistakes. p-values that differ are assumed to reflect different effect sizes but may not […], p-values that are the same are assumed to reflect similar effect sizes but may not […], and a more significant p-value is assumed to reflect a larger effect size when it may actually be based on a smaller effect size […]. By contrast, the meta-analysis works with effect sizes. As such it not only focuses on the question of interest (what is the size of the effect) but allows us to compare the effect size from study to study.”
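
To make the point about p-values concealing effect sizes concrete, here's a small stdlib-only Python sketch of my own (the numbers are invented, not from the book): two normally distributed estimates with the same z-score – and hence the same p-value – can correspond to effects that differ by an order of magnitude.

```python
from math import erf, sqrt

def two_sided_p(effect, se):
    """Two-sided p-value for a normally distributed estimate."""
    z = abs(effect / se)
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# Toy numbers (mine, not the book's): a small study with a large effect
# and a large study with a small effect can yield identical p-values.
p_small = two_sided_p(effect=0.80, se=0.40)   # small n -> large standard error
p_large = two_sided_p(effect=0.08, se=0.04)   # large n -> small standard error
print(p_small, p_large)  # same p (~0.0455) twice; the effects differ 10-fold
```

Comparing the two p-values tells you nothing about the 10-fold difference in effect size, which is exactly why a synthesis based on p-values alone is hopeless.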

“To compute the summary effect in a meta-analysis we compute an effect size for each study and then combine these effect sizes, rather than pooling the data directly. […] This approach allows us to study the dispersion of effects before proceeding to the summary effect. For a random-effects model this approach also allows us to incorporate the between-studies dispersion into the weights. There is one additional reason for using this approach […]. The reason is to ensure that each effect size is based on the comparison of a group with its own control group, and thus avoid a problem known as Simpson’s paradox. In some cases, particularly when we are working with observational studies, this is a critically important feature. […] The term paradox refers to the fact that one group can do better in every one of the included studies, but still do worse when the raw data are pooled. The problem is not limited to studies that use proportions, but can exist also in studies that use means or other indices. The problem exists only when the base rate (or mean) varies from study to study and the proportion of participants from each group varies as well. For this reason, the problem is generally limited to observational studies, although it can exist in randomized trials when allocation ratios vary from study to study.” [See the wiki article for more]
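
Here's a small numerical illustration of the paradox (my own toy counts, adapted from the classic kidney-stone example rather than taken from the book): the treatment does better in each study separately, yet worse when the raw data are pooled.

```python
# (successes, total) for treatment and control in two hypothetical studies.
studies = [
    {"treat": (81, 87),   "ctrl": (234, 270)},   # study 1: mostly mild cases
    {"treat": (192, 263), "ctrl": (55, 80)},     # study 2: mostly severe cases
]

def rate(successes, total):
    return successes / total

# Within each study the treatment has the higher success rate...
for st in studies:
    assert rate(*st["treat"]) > rate(*st["ctrl"])

# ...but pooling the raw data reverses the comparison (Simpson's paradox),
# because the base rates and the group sizes both vary across studies.
# This is why a meta-analysis combines per-study effect sizes instead.
t_s = sum(st["treat"][0] for st in studies); t_n = sum(st["treat"][1] for st in studies)
c_s = sum(st["ctrl"][0] for st in studies);  c_n = sum(st["ctrl"][1] for st in studies)
print(rate(t_s, t_n), rate(c_s, c_n))  # pooled: treatment ~0.78 < control ~0.83
```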

“When studies are addressing the same outcome, measured in the same way, using the same approach to analysis, but presenting results in different ways, then the only obstacles to meta-analysis are practical. If sufficient information is available to estimate the effect size of interest, then a meta-analysis is possible. […]
When studies are addressing the same outcome, measured in the same way, but using different approaches to analysis, then the possibility of a meta-analysis depends on both statistical and practical considerations. One important point is that all studies in a meta-analysis must use essentially the same index of treatment effect. For example, we cannot combine a risk difference with a risk ratio. Rather, we would need to use the summary data to compute the same index for all studies.
There are some indices that are similar, if not exactly the same, and judgments are required as to whether it is acceptable to combine them. One example is odds ratios and risk ratios. When the event is rare, then these are approximately equal and can readily be combined. As the event gets more common the two diverge and should not be combined. Other indices that are similar to risk ratios are hazard ratios and rate ratios. Some people decide these are similar enough to combine; others do not. The judgment of the meta-analyst in the context of the aims of the meta-analysis will be required to make such decisions on a case by case basis.
When studies are addressing the same outcome measured in different ways, or different outcomes altogether, then the suitability of a meta-analysis depends mainly on substantive considerations. The researcher will have to decide whether a combined analysis would have a meaningful interpretation. […] There is a useful class of indices that are, perhaps surprisingly, combinable under some simple transformations. In particular, formulas are available to convert standardized mean differences, odds ratios and correlations to a common metric [I should note that the book covers these data transformations, but I decided early on not to talk about that kind of stuff in my posts because it’s highly technical and difficult to blog] […] These kinds of conversions require some assumptions about the underlying nature of the data, and violations of these assumptions can have an impact on the validity of the process. […] A report should state the computational model used in the analysis and explain why this model was selected. A common mistake is to use the fixed-effect model on the basis that there is no evidence of heterogeneity. As [already] explained […], the decision to use one model or the other should depend on the nature of the studies, and not on the significance of this test [because the test will often have low power anyway]. […] The report of a meta-analysis should generally include a forest plot.”
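
The odds-ratio/risk-ratio point above is easy to check numerically. A quick sketch (the probabilities are my own toy numbers, not the book's): when the event is rare the two indices nearly coincide, but for a common event they diverge badly.

```python
def risk_ratio(p_treat, p_ctrl):
    """Ratio of event probabilities."""
    return p_treat / p_ctrl

def odds_ratio(p_treat, p_ctrl):
    """Ratio of the corresponding odds p/(1-p)."""
    return (p_treat / (1 - p_treat)) / (p_ctrl / (1 - p_ctrl))

# Rare event: OR and RR are nearly identical and can reasonably be combined.
print(risk_ratio(0.02, 0.01), odds_ratio(0.02, 0.01))  # 2.0 vs ~2.02
# Common event: they diverge and should not be treated as the same index.
print(risk_ratio(0.60, 0.30), odds_ratio(0.60, 0.30))  # 2.0 vs 3.5
```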

“The issues addressed by a sensitivity analysis for a systematic review are similar to those that might be addressed by a sensitivity analysis for a primary study. That is, the focus is on the extent to which the results are (or are not) robust to assumptions and decisions that were made when carrying out the synthesis. The kinds of issues that need to be included in a sensitivity analysis will vary from one synthesis to the next. […] One kind of sensitivity analysis is concerned with the impact of decisions that lead to different data being used in the analysis. A common example is to ask how results might have changed if different study inclusion rules had been used. […] Another kind of sensitivity analysis is concerned with the impact of the statistical methods used […] For example one might ask whether the conclusions would have been different if a different effect size measure had been used […] Alternatively, one might ask whether the conclusions would be the same if fixed-effect versus random-effects methods had been used. […] Yet another kind of sensitivity analysis is concerned with how we addressed missing data […] A very important form of missing data is the missing data on effect sizes that may result from incomplete reporting or selective reporting of statistical results within studies. When data are selectively reported in a way that is related to the magnitude of the effect size (e.g., when results are only reported when they are statistically significant), such missing data can have biasing effects similar to publication bias on entire studies. In either case, we need to ask how the results would have changed if we had dealt with missing data in another way.”

“A cumulative meta-analysis is a meta-analysis that is performed first with one study, then with two studies, and so on, until all relevant studies have been included in the analysis. As such, a cumulative analysis is not a different analytic method than a standard analysis, but simply a mechanism for displaying a series of separate analyses in one table or plot. When the series are sorted into a sequence based on some factor, the display shows how our estimate of the effect size (and its precision) shifts as a function of this factor. When the studies are sorted chronologically, the display shows how the evidence accumulated, and how the conclusions may have shifted, over a period of time.”

“While cumulative analyses are most often used to display the pattern of the evidence over time, the same technique can be used for other purposes as well. Rather than sort the data chronologically, we can sort it by any variable, and then display the pattern of effect sizes. For example, assume that we have 100 studies that looked at the impact of homeopathic medicines, and we think that the effect is related to the quality of the blinding process. We anticipate that studies with complete blinding will show no effect, those with lower quality blinding will show a minor effect, those that blind only some people will show a larger effect, and so on. We could sort the studies based on the quality of the blinding (from high to low), and then perform a cumulative analysis. […] Similarly, we could use cumulative analyses to display the possible impact of publication bias. […] large studies are assumed to be unbiased, but the smaller studies may tend to over-estimate the effect size. We could perform a cumulative analysis, entering the larger studies at the top and adding the smaller studies at the bottom. If the effect was initially small when the large (nonbiased) studies were included, and then increased as the smaller studies were added, we would indeed be concerned that the effect size was related to sample size. A benefit of the cumulative analysis is that it displays not only if there is a shift in effect size, but also the magnitude of the shift. […] It is important to recognize that cumulative meta-analysis is a mechanism for display, rather than analysis. […] These kinds of displays are compelling and can serve an important function. However, if our goal is actually to examine the relationship between a factor and effect size, then the appropriate analysis is a meta-regression”
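
For the curious, the mechanics of a cumulative analysis are simple to sketch. Below is a minimal fixed-effect (inverse-variance) version of my own – the effect sizes and variances are invented for illustration, and a real analysis would also report confidence intervals (and, for a random-effects model, the between-studies variance):

```python
# Each study contributes an (effect size, variance) pair, pre-sorted by the
# factor of interest (e.g. publication year, or sample size from large to small).
studies = [
    (0.10, 0.01),
    (0.30, 0.04),
    (0.50, 0.09),
]

def cumulative_fixed_effect(studies):
    """Fixed-effect summary after 1 study, after 2 studies, and so on."""
    results = []
    for k in range(1, len(studies) + 1):
        subset = studies[:k]
        weights = [1 / v for (_, v) in subset]            # inverse-variance weights
        pooled = sum(w * e for (e, _), w in zip(subset, weights)) / sum(weights)
        results.append(pooled)
    return results

print(cumulative_fixed_effect(studies))  # shows how the estimate shifts as studies enter
```

Sorting the input differently (chronologically, by blinding quality, by sample size) is all it takes to produce the different displays described above; the analysis itself is unchanged.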

“John C. Bailar, in an editorial for the New England Journal of Medicine (Bailar, 1997), [wrote] that mistakes […] are common in meta-analysis. He argues that a meta-analysis is inherently so complicated that mistakes by the persons performing the analysis are all but inevitable. He also argues that journal editors are unlikely to uncover all of these mistakes. […] The specific points made by Bailar about problems with meta-analysis are entirely reasonable. He is correct that many meta-analyses contain errors, some of them important ones. His list of potential (and common) problems can serve as a bullet list of mistakes to avoid when performing a meta-analysis. However, the mistakes cited by Bailar are flaws in the application of the method, rather than problems with the method itself. Many primary studies suffer from flaws in the design, analyses, and conclusions. In fact, some serious kinds of problems are endemic in the literature. The response of the research community is to locate these flaws, consider their impact for the study in question, and (hopefully) take steps to avoid similar mistakes in the future. In the case of meta-analysis, as in the case of primary studies, we cannot condemn a method because some people have used that method improperly. […] In his editorial Bailar concludes that, until such time as the quality of meta-analyses is improved, he would prefer to work with the traditional narrative reviews […] We disagree with the conclusion that narrative reviews are preferable to systematic reviews, and that meta-analyses should be avoided. The narrative review suffers from every one of the problems cited for the systematic review. The only difference is that, in the narrative review, these problems are less obvious. […] the key advantage of the systematic approach of a meta-analysis is that all steps are clearly described so that the process is transparent.”

November 21, 2014 Posted by | books, statistics

Open Thread

It’s been a long time since I had one of these. Questions? Comments? Random observations?

I hate posting posts devoid of content, so here’s some random stuff:


If you think the stuff above is all fun and games, I should note that the topic of chirality, which is one of the things talked about in the lecture above, was actually covered in some detail in Gale’s book, which is hardly a book that spends a great deal of time talking about esoteric mathematical concepts. On a related note, the main reason why I have not blogged that book is that I lost all the notes and highlights I’d made in the first 200 pages of the book when my computer broke down, and I just can’t face reading that book again simply in order to blog it. It’s a good book with interesting stuff, and I may decide to blog it later, but I don’t feel like doing it at the moment; without highlights and notes it’s a real pain to blog a book, and right now it’s just not worth it to reread it. Rereading books can be fun – I’ve incidentally been rereading Darwin lately and may decide to blog that book soon; I imagine I might also choose to reread some of Asimov’s books before long – but it’s not much fun if you’re finding yourself having to do it simply because the computer deleted your work.

ii. Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors.

Here’s the abstract:

“Statistical power analysis provides the conventional approach to assess error rates when designing a research study. However, power analysis is flawed in that a narrow emphasis on statistical significance is placed as the primary focus of study design. In noisy, small-sample settings, statistically significant results can often be misleading. To help researchers address this problem in the context of their own studies, we recommend design calculations in which (a) the probability of an estimate being in the wrong direction (Type S [sign] error) and (b) the factor by which the magnitude of an effect might be overestimated (Type M [magnitude] error or exaggeration ratio) are estimated. We illustrate with examples from recent published research and discuss the largest challenge in a design calculation: coming up with reasonable estimates of plausible effect sizes based on external information.”

If a study has low power, you can get into a lot of trouble. Some problems are well known, others probably aren’t. A bit more from the paper:

“design calculations can reveal three problems:
1. Most obvious, a study with low power is unlikely to “succeed” in the sense of yielding a statistically significant result.
2. It is quite possible for a result to be significant at the 5% level — with a 95% confidence interval that entirely excludes zero — and for there to be a high chance, sometimes 40% or more, that this interval is on the wrong side of zero. Even sophisticated users of statistics can be unaware of this point — that the probability of a Type S error is not the same as the p value or significance level.[3]
3. Using statistical significance as a screener can lead researchers to drastically overestimate the magnitude of an effect (Button et al., 2013).

Design analysis can provide a clue about the importance of these problems in any particular case.”

“Statistics textbooks commonly give the advice that statistical significance is not the same as practical significance, often with examples in which an effect is clearly demonstrated but is very small […]. In many studies in psychology and medicine, however, the problem is the opposite: an estimate that is statistically significant but with such a large uncertainty that it provides essentially no information about the phenomenon of interest. […] There is a range of evidence to demonstrate that it remains the case that too many small studies are done and preferentially published when “significant.” We suggest that one reason for the continuing lack of real movement on this problem is the historic focus on power as a lever for ensuring statistical significance, with inadequate attention being paid to the difficulties of interpreting statistical significance in underpowered studies. There is a common misconception that if you happen to obtain statistical significance with low power, then you have achieved a particularly impressive feat, obtaining scientific success under difficult conditions.
However, that is incorrect if the goal is scientific understanding rather than (say) publication in a top journal. In fact, statistically significant results in a noisy setting are highly likely to be in the wrong direction and invariably overestimate the absolute values of any actual effect sizes, often by a substantial factor.”
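
The paper's design calculations are easy to approximate by simulation. Here's a Monte Carlo sketch of my own (this is not Gelman and Carlin's code, and the true effect and standard error below are invented to represent a noisy, underpowered setting):

```python
import random

def retrodesign_sim(true_effect, se, sims=100_000, seed=0):
    """Simulate the design calculations: power, the Type S (sign) error rate,
    and the exaggeration ratio (Type M error) among significant results."""
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided 5% threshold for a normally distributed estimate
    sig, wrong_sign, abs_sum = 0, 0, 0.0
    for _ in range(sims):
        est = rng.gauss(true_effect, se)          # one hypothetical replication
        if abs(est) > z_crit * se:                # "statistically significant"
            sig += 1
            wrong_sign += est * true_effect < 0   # significant but wrong sign
            abs_sum += abs(est)
    return {
        "power": sig / sims,
        "type_s": wrong_sign / sig,                        # P(wrong sign | significant)
        "exaggeration": (abs_sum / sig) / abs(true_effect) # E(|est| | significant) / |true|
    }

# A noisy, underpowered setting: true effect 1, standard error 3.
print(retrodesign_sim(true_effect=1.0, se=3.0))
```

With these toy inputs, power is only around 6%, roughly one in six significant results has the wrong sign, and the significant estimates overstate the true effect roughly sevenfold – which is the paper's point about noisy settings in miniature.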

iii. I’m sure most people who might be interested in following the match are already well aware that Anand and Carlsen are currently competing for the world chess championship, and I’m not going to talk about that match here. However I do want to mention to people interested in improving their chess that I recently came across this site, and that I quite like it. It only deals with endgames, but endgames are really important. If you don’t know much about endgames you may find the videos available here, here and here to be helpful.

iv. A link: Cross Validated: “Cross Validated is a question and answer site for people interested in statistics, machine learning, data analysis, data mining, and data visualization.”

A friend recently told me about this resource. I knew about the existence of StackExchange, but I haven’t really spent much time there. These days I mostly stick to books and a few sites I already know about; I rarely look for new interesting stuff online. This also means that you should not automatically assume that I already know about X when you’re considering whether to tell me about X in an Open Thread.

November 18, 2014 Posted by | Chess, Lectures, mathematics, Open Thread, papers, statistics

Female Infidelity and Paternal Uncertainty – Evolutionary Perspectives on Male Anti-Cuckoldry Tactics

I finished this book yesterday. Below I have posted my goodreads review:

“A couple of chapters were really nice, but the authors repeat themselves *a lot* throughout the book and some chapters are really weak. I was probably at three stars after approximately 100 pages, but the book in my opinion lost steam after that. A couple of chapters are in my opinion really poor – basically they’re just a jumble of data-poor theorizing which is most likely just plain wrong. A main hypothesis presented in one of the chapters is frankly blatantly at odds with a lot of other evidence, some of which is even covered earlier in the same work, but the authors don’t even mention this in the coverage.

I don’t regret reading the book, but it’s not that great.”

Let’s say you have a book where Hrdy’s idea that it has long been in the interest of human females to confuse paternity by various means, e.g. through extra-pair copulations, because such behaviour reduces the risk of infanticide (I’ve talked about these things before here on the blog – if you’re unfamiliar with this work and haven’t read my posts on the topics, see for example this post) is covered, and where various other reasons why females may choose to engage in extra-pair copulations (e.g. ‘genetic benefits’) are also covered.

Let’s say that in another, problematic, chapter of said book, a theory is proposed that ‘unfamiliar sperm’ (sperm from an individual the female has not had regular sex with before) leading to pregnancy is more likely to lead to preeclampsia, a pregnancy complication which, untreated, will often lead to the abortion of the fetus. Let’s say the authors claim in that problematic chapter that the reason why females are more likely to develop preeclampsia in the case of a pregnancy involving unfamiliar sperm is that such a pregnancy is likely to be the result of rape, and that the physiological mechanism leading to the pregnancy complication is an evolved strategy on the part of the female, aimed at enabling her to exercise (post-copulatory) mate choice and reduce the negative fitness consequences of the rape.

Let’s say the authors of the preeclampsia chapter/theory don’t talk at all about e.g. genetic benefits derived from extra-pair copulations which are not caused by rape but are engaged in willingly by the female because it’s in her reproductive interests to engage in them, and that the presumably common evolutionary female strategy of finding a semi-decent provider male as a long-term partner while also occasionally sleeping around with high-quality males (and low-quality males – but only when not fertile (e.g. when pregnant…)) and having their children without the provider male knowing about it is not even mentioned.
The authors of the chapter also seem to assume that getting a child by a male with unfamiliar sperm is always a bad idea.

Yeah, the above is what happened in this book, and it’s part of why it only gets two stars. These people are way too busy theorizing, and that specific theory is really poor – or at least the coverage of it was, as the authors don’t address the obvious issues which people who had read the other chapters would have no trouble spotting. Kappeler et al. is a much better book, and it turns out that there was much less new stuff in this book than I’d thought – a lot of ‘the good stuff’ is also covered there.

It doesn’t help that many of the authors systematically overestimate the extra-pair paternity rate by relying on samples/studies which are obviously deeply suspect due to selection bias. Not all of them go overboard and claim the number is 10% or something like that, but many of them do – ‘the number is between 1-30%, with best estimates around 10%’ is a conclusion drawn in at least a couple of chapters. This is wrong. Only one contributor talking about these numbers comes to the conclusion that the average number is likely to have been less than 5% in an evolutionary context (“Only very tentative conclusions about typical EPP [extra-pair paternity] rates throughout recent human history (e.g. in the past 50 000 years) can be drawn […] It seems reasonable to suggest that rates have typically been less than 10% and perhaps in most cases less than 5%. It also seems reasonable to suggest that they have probably also been variable across time and place, with some populations characterized by rates of 10% or higher.”). An idea worth mentioning in this context is that human behaviour can easily have been dramatically impacted by things which rarely happen now, because the reason why those things are rare may well be that a lot of behaviour is aimed at making sure they are rare and stay rare – this idea should be well known to people familiar with Hrdy’s thesis, and it seems to me also to apply to cuckoldry. Cuckoldry may happen relatively infrequently, but perhaps the reason for this is that human males with female partners are really careful not to allow their partners to sleep around quite as much as their genetic code might like them to. I mentioned in the coverage of Kappeler et al. that female sexual preferences change over the course of the menstrual cycle – they also talk about this in this book, but a related observation also made in the book is that males seem to be more vigilant and to intensify their mate guarding when their partner is ovulating. There’s probably a lot of stuff which goes on ‘behind the scenes’ that we humans are not aware of. Human behaviour is really complicated.

All these things said, there’s some really nice stuff in the book as well. The basic idea behind much of the coverage is that whereas females always know that their children are their children, males can never know for sure – and in a context where males may derive a fitness benefit from contributing to their offspring and a fitness loss by contributing to another male’s child, this uncertainty is highly relevant for how they might choose to behave in many contexts related to partnership dynamics. Many different aspects of the behaviour of human males are to some extent directed towards minimizing the risk of getting cuckolded and/or the risk of a partner in whom they have invested leaving them. They may choose to hide the female partner from competitors e.g. by monopolizing her time or by using violence to keep her from interacting with male competitors, they may signal to competitors that she is taken and/or perhaps that it may be costly to try to have sex with her (threats to other males, violence directed towards the competitor rather than the partner), they may try to isolate her socially by badmouthing her to potential competitors (e.g. male friends and acquaintances). On a more positive note, males may also choose to do ‘nice things’ to keep the partner from leaving them, like ‘giving in to sexual requests’ and ‘performing sexual favors to keep her around’ (in at least one study, “men partnered to women who [were] more likely to be sexually unfaithful [were] also more likely to perform sexual inducements to retain their partners” – but before women reading this conclude that their incentives may look rather different from what they thought they did, it’s probably worth noting that the risk of abuse also goes up when the male thinks the partner might be unfaithful (see below)).
If the first anti-cuckold approach, the mate-guarding strategy of trying to keep her from having sex with others, fails, then the male has additional options – one conceptualization in the book splits the strategy choices up into three groups: mate-guarding strategies, intra-vaginal strategies and post-partum strategies (in another chapter they distinguish among “preventative tactics, designed to minimize female infidelity; sperm-competition tactics, designed to minimize conception in the event of female infidelity; and differential paternal investment” – but the overall picture is reasonably similar). Intra-vaginal strategies relate to sperm competition – for example, the observation that a male may try to minimize the risk of being cuckolded after having been separated from the partner by having sex with the partner soon after they meet up again. A male may also increase the amount of sperm deposited during intercourse in such a context, compared to normal, and ‘sexual mechanics’ may also change as a function of cuckoldry risk (deeper thrusts and longer duration of sex if they’ve been separated for a while). There are five chapters on this stuff in the book, but I’ve limited my coverage of it because I don’t think it’s particularly interesting. Post-partum strategies naturally relate to strategies employed after the child has been born. Here the father may observe the child and try to figure out whether it looks like him/his family, and then adjust investment in the child based on how certain he is that he’s actually the father:

“There is growing evidence that human males are […] affected by […] evolutionary pressures to invest in offspring as a function of paternal certainty”, and “Burch and Gallup (2000) have shown that males spend less time with, invest fewer resources in, and are more likely to abuse ostensibly unrelated children than children they assume to be their genetic offspring. They also found that the less a male thinks a child (unrelated or genetic) looks like him, the worse he treats the child and the worse he views the relationship with that child.”

It’s worth mentioning that dividing the strategy set up into three, and exactly three, overall categories seems to me slightly artificial, also because some relevant behaviours may not fit very well into any of them; to take an example, “There is growing evidence that males who question their partner’s fidelity show an increase in spouse abuse during pregnancy, and the abuse is often directed toward the female’s abdomen” – this behavioural pattern relates to none of the three strategy categories mentioned, but also seems ‘relevant’. In general it’s important to observe that employment of a specific type of tactic does not necessarily preclude the employment of other tactics as well – as pointed out in the book:

“A male’s best strategy is to prevent female infidelity and, if he is unsuccessful in preventing female infidelity, he would benefit by attempting to prevent conception by a rival male. If he is unsuccessful in preventing conception by a rival male, he would benefit by adjusting paternal effort according to available paternity cues. The performance of one tactic does not necessitate the neglect of another tactic; indeed, a reproductively wise strategy would be to perform all three categories of anti-cuckoldry tactics”

There’s a lot of food for thought in the book. I’ve included some more detailed observations from the book below – in particular I’ve added some stuff closely related to what I believe people might normally term ‘red flags’ or similar in a relationship context. I’d say that enough research has been done on this kind of stuff for it to make a lot of sense for women to read some of it – in light of the evidence, there are certain types of male behaviours which should most definitely be considered strong warning signs that it may be a bad idea to engage with the individual in question. (I was annoyed that the book only dealt with male abuse, as there are quite a few female abusers as well, but I can’t really fault the authors for limiting coverage to male behaviours).

“Paternal investment in humans and many other species is facultatively expressed: it often benefits offspring but is not always necessary for their survival and thus the quantity and quality of human paternal investment often varies with proximate conditions […] The facultative expression of male parenting reflects the […] cost–benefit trade-offs as these relate to the current social and ecological contexts in which the male is situated. The degree of male investment (1) increases with increases in the likelihood that investment will be provided to his own offspring (i.e. paternity certainty), (2) increases when investment increases the survival and later reproductive prospects of offspring, and (3) decreases when there are opportunities to mate with multiple females. […] the conditional benefits of paternal investment in these species results in simultaneous cost–benefit trade-offs in females. Sometimes it is in the females’ best interest (e.g. when paired with an unhealthy male) to cuckold their partner and mate with higher-quality males […] As a result, women must balance the costs of reduced paternal investment or male retaliation against the benefits of cuckoldry; that is, having their children sired by a more fit man while having their social partner assist in the rearing of these children.”

“In several large but unrepresentative samples, 20–25% of adult women reported having had at least one extra-pair sexual relationship during their marriage […] Using a nationally representative sample in the USA, Wiederman (1997) found that 12% of adult women reported at least one extra-pair sexual relationship during their marriage, and about 2% reported such a relationship during the past 12 months; Treas and Giesen (2000) found similar percentages for another nationally representative sample. These may be underestimates, given that people are reluctant to admit to extra-pair relationships. In any case, the results indicate that some women develop simultaneous and multiple opposite-sex relationships, many of which become sexual and are unknown to their social partner […] The dynamics of these extra-pair relationships are likely to involve a mix of implicit (i.e. unconscious) and explicit (i.e. conscious) psychological processes (e.g. attention to symmetric facial features) and social strategies. […] the finding that attraction to extra-pair partners is influenced by hormonal fluctuations points to the importance of implicit mechanisms. […] The emerging picture is one in which women appear to have an evolved sensitivity to the proximate cues of men’s fitness, a sensitivity that largely operates automatically and implicitly and peaks around the time women ovulate. The implicit operation of these mechanisms enables women to assess the fitness of potential extra-pair partners without a full awareness that they are doing so. In this way, women are psychologically and socially attentive to the relationship with their primary partner and most of the time have no explicit motive to cuckold this partner. If their social partners monitor for indications of attraction to extra-pair men, which they often do […], then these cues are only emitted during a short time frame. 
Moreover, given that attraction to a potential extra-pair partner is influenced by hormonal mechanisms, often combined with some level of pre-existing and non-sexual emotional intimacy with the extra-pair male […], many of these women may have no intention of an extra-pair sexual relationship before it is initiated. Under these conditions, the dynamics of cuckoldry may involve some level of self deception on women’s part, a mechanism that facilitates their ability to keep the extra-pair relationship hidden from their social partners. […] As with women, men’s anti-cuckoldry biases almost certainly involve a mix of implicit processes and explicit behavioral strategies that can be directed toward their mates, toward potential rivals, and toward the evaluation of the likely paternity of children born to their partners”

“Males have evolved psychological adaptations that produce mate guarding and jealousy […] to reduce or to prevent a mate from being inseminated by another male. Recent evidence suggests that males maximize the utility of their mate-guarding strategies by implementing them at ovulation, a key reproductive time in a female’s menstrual cycle […]. Further, jealousy appears to fluctuate with a man’s mate value and, hence, risk of cuckoldry. Brown and Moore (2003), for example, found that males who were less symmetrical were significantly more jealous. These and other data suggest that jealousy has evolved as a means by which males can attempt to deter extra-pair copulations […] When triggered, jealousy often results in a variety of behavioral responses, including male-on-female aggression […], divorce […], the monitoring and attempted control of the social and sexual behavior of their partners […], enhancement of their attractiveness as a mate […], and the monitoring of and aggression toward actual or perceived sexual rivals […]. In total, these behaviors encompass tactics that function to ensure, through coercion or enticement, that their reproductive investment and that of their mate is directed toward the man’s biological children. […] One of the more common behavioral responses to relationship jealousy is mate guarding. For men this involves reducing their partner’s opportunity to mate with other men.”

“Cuckoldry is a reproductive cost inflicted on a man by a woman’s sexual infidelity or temporary defection from her regular long-term relationship. Ancestral men also would have incurred reproductive costs by a long-term partner’s permanent defection from the relationship. These costs include loss of the time, effort, and resources the man has spent attracting his partner, the potential misdirection of his resources to a rival’s offspring, and the loss of his mate’s investment in offspring he may have had with her in the future […] Expressions of male sexual jealousy historically may have been functional in deterring rivals from mate poaching […] and deterring a mate from a sexual infidelity or outright departure from the relationship […] Buss (1988) categorized the behavioral output of jealousy into different ‘‘mate-retention’’ tactics, ranging from vigilance over a partner’s whereabouts to violence against rivals […] Performance of these tactics is assessed by the Mate Retention Inventory (MRI[)] […] Buss’s taxonomy (1988) partitioned the tactics into two general categories: intersexual manipulations and intrasexual manipulations. Intersexual manipulations include behaviors directed toward one’s partner, and intrasexual manipulations include behaviors directed toward same-sex rivals. Intersexual manipulations include direct guarding, negative inducements, and positive inducements. Intrasexual manipulations include public signals of possession. […] Unfortunately, little is known about which specific acts and tactics of men’s mate-retention efforts are linked with violence. The primary exception is the study by Wilson, Johnson, and Daly (1995), which identified several predictors of partner violence – notably, verbal derogation of the mate and attempts at sequestration such as limiting access to family, friends, and income.”

“Tactics within the direct guarding category of the MRI include vigilance, concealment of mate, and monopolization of time. An exemplary act for each tactic is, respectively, ‘‘He dropped by unexpectedly to see what she was doing,’’ ‘‘He refused to introduce her to his same-sex friends,’’ and ‘‘He monopolized her time at the social gathering.’’ Each of these tactics implicates what Wilson and Daly (1992) term ‘‘male sexual proprietariness,’’ which refers to the sense of entitlement men sometimes feel that they have over their partners […] Wilson et al. (1995) demonstrated that violence against women is linked closely to their partners’ autonomy-limiting behaviors. Women who affirmed items such as ‘‘He is jealous and doesn’t want you to talk to other men,’’ were more than twice as likely to have experienced serious violence by their partners.” [What was the base rate? I find myself asking. But it’s still relevant knowledge.] […] Not all mate-retention tactics are expected to predict positively violence toward partners. Some of these tactics include behaviors that are not in conflict with a romantic partner’s interests and, indeed, may be encouraged and welcomed by a partner […] Holding his partner’s hand in public, for example, may signal to a woman her partner’s commitment and devotion to her. […] Tactics within the public signals of possession category include verbal possession signals (e.g. ‘‘He mentioned to other males that she was taken’’), physical possession signals (e.g. ‘‘He held her hand when other guys were around’’), and possessive ornamentation (e.g. ‘‘He hung up a picture of her so others would know she was taken’’).”

“The current studies examined how mate-retention tactics are related to violence in romantic relationships, using the reports of independent samples of several hundred men and women in committed, romantic relationships […], and using the reports of 107 married men and women […] With few exceptions, we found the same pattern of results using three independent samples. Moreover, these samples were not just independent, but provided different perspectives (the male perpetrator’s, the female victim’s, and a combination of the two) on the same behaviors – men’s mate-retention behaviors and men’s violence against their partners. We identified overlap between the best predictors of violence across the studies. For example, men’s use of emotional manipulation, monopolization of time, and punish mate’s infidelity threat are among the best predictors of female-directed violence, according to independent reports provided by men and women, and according to reports provided by husbands and their wives. The three perspectives also converged on which tactics are the weakest predictors of relationship violence. For example, love and care and resource display are among the weakest predictors of female-directed violence. […] The tactic of emotional manipulation was the highest-ranking predictor of violence in romantic relationships in study 1, and the second highest-ranking predictor in studies 2 and 3. The items that comprise the emotional manipulation tactic include, ‘‘He told her he would ‘die’ if she ever left,’’ and ‘‘He pleaded that he could not live without her.’’ Such acts seem far removed from those that might presage violence. […] Monopolization of time also ranked as a strong predictor of violence across the three studies. 
Example acts included in this tactic are ‘‘He spent all his free time with her so that she could not meet anyone else’’ and ‘‘He would not let her go out without him.’’ […] The acts ‘‘Dropped by unexpectedly to see what my partner was doing’’ and ‘‘Called to make sure my partner was where she said she would be’’ are the third and fifth highest-ranking predictors of violence, respectively. These acts are included in the tactic of vigilance, which is the highest-ranking tactic-level predictor of violence in study 3. Given that (1) two of the top five act-level predictors of violence are acts of vigilance, (2) the numerically best tactic-level predictor of violence is vigilance, and (3) seven of the nine acts included within the vigilance tactic are correlated significantly with violence […], a man’s vigilance over his partner’s whereabouts is likely to be a key signal of his partner-directed violence. […] Wilson et al. (1995) found that 40% of women who affirmed the statement ‘‘He insists on knowing who you are with and where you are at all times’’ reported experiencing serious violence at the hands of their husbands.”

“Relative to women’s reports of their partners’ behavior, men self-reported more frequent use of intersexual negative inducements, positive inducements, and controlling behavior. Although not anticipated, the sex difference in reported frequency of controlling behaviors is not surprising upon examination of the acts included in the CBI [Controlling Behavior Index]. More than half of the acts do not require the woman’s physical presence or knowledge, for example ‘‘Deliberately keep her short of money’’ and ‘‘Check her movements.’’ In addition, such acts might be more effective if the woman is not aware of their occurrence. […] Increased effort devoted to mate retention is predicted to occur when the adaptive problems it was designed to solve are most likely to be encountered – when a mate is particularly desirable, when there exist mate poachers, when there is a mate-value discrepancy, and when the partner displays cues to infidelity or defection”

“Although sometimes referred to as marital rape, spouse rape, or wife rape, we use the term forced in-pair copulation (FIPC) to refer to the forceful act of sexual intercourse by a man against his partner’s will. […] FIPC is not performed randomly […] FIPC reliably occurs immediately after extra-pair copulations, intrusions by rival males, and female absence in many species of waterfowl […] and other avian species […] FIPC in humans often follow[s] accusations of female infidelity”

November 16, 2014 Posted by | anthropology, biology, books, evolution, Psychology | Leave a comment


i. “While we stop to think, we often miss our opportunity.” (Publilius Syrus)

ii. “The civility which money will purchase, is rarely extended to those who have none.” (Charles Dickens)

iii. “Grief can take care of itself; but to get the full value of a joy you must have somebody to divide it with.” (Mark Twain)

iv. “Long books, when read, are usually overpraised, because the reader wants to convince others and himself that he has not wasted his time.” (E. M. Forster)

v. “I do not envy people who think they have a complete explanation of the world, for the simple reason that they are obviously wrong.” (Salman Rushdie)

vi. “To be conscious that you are ignorant of the facts is a great step to knowledge.” (Benjamin Disraeli)

vii. “To hear, one must be silent.” (Ursula K. Le Guin)

viii. “The danger in trying to do good is that the mind comes to confuse the intent of goodness with the act of doing things well.” (-ll-)

ix. “Things don’t have purposes, as if the universe were a machine, where every part has a useful function. What’s the function of a galaxy? I don’t know if our life has a purpose and I don’t see that it matters.” (-ll-)

x. “In the land of the blind, the one-eyed man will poke out his eye to fit in.” (Caitlín R. Kiernan)

xi. “The greatest happiness you can have is knowing that you do not necessarily require happiness.” (William Saroyan)

xii. “An original idea. That can’t be too hard. The library must be full of them.” (Stephen Fry)

xiii. “It is a cliché that most clichés are true, but then like most clichés, that cliché is untrue.” (-ll-)

xiv. “Of what use is freedom of speech to those who fear to offend?” (Roger Ebert)

xv. “The assumption that anything true is knowable is the grandfather of paradoxes.” (William Poundstone)

xvi. “Approved attributes and their relation to face make every man his own jailer; this is a fundamental social constraint even though each man may like his cell.” (Erving Goffman)

xvii. “There may be no good reason for things to be the way they are.” (Alain de Botton)

xviii. “It is striking how much more seriously we are likely to be taken after we have been dead a few centuries.” (-ll-)

xix. “Deciding to avoid other people does not necessarily equate with having no desire whatsoever for company; it may simply reflect a dissatisfaction with what — or who — is available.” (-ll-)

xx.  “We are able to breathe, drink, and eat in comfort because millions of organisms and hundreds of processes are operating to maintain a liveable environment, but we tend to take nature’s services for granted because we don’t pay money for most of them.” (Eugene Odum)



November 15, 2014 Posted by | quotes | Leave a comment

Self-Esteem (II)

Here’s my first post about the book. I was disappointed by some of the chapters in the second half of the book, and I think a few of them were quite poor. I have been wondering what to cover from the second half, in part because some of the authors seem to proceed as if e.g. the work of these authors does not exist (key quote: Our findings do not support continued widespread efforts to boost self-esteem in the hope that it will by itself foster improved outcomes). I was thinking this about the authors of the last chapter, on ‘Changing self-esteem through competence and worthiness training’, in particular; their basic argument seems to be that since CWT (Competence and Worthiness Training) has been shown to improve self-esteem, ‘good things will follow’ for people who make use of such programs. Never mind the fact that the causal pathways between self-esteem and life outcomes are incredibly unclear, never mind that self-esteem is not the relevant outcome measure (and studies with good outcome measures do not exist), and never mind that effect persistence over time is unknown – to take but three of many problems with the research. They argue/conclude in the chapter that CWT is ’empirically validated’, an observation which almost made me laugh. I’m slightly puzzled that whereas doctors contributing to Springer publications and similar are always supposed to disclose conflicts of interest in the publications, no similar demands are made in the context of the psychological literature; these people obviously make money off of these things, and yet they’re the ones evaluating the few poor studies that have been done, often by themselves, while pretending to be unbiased observers with no financial interest in whether the methods are ‘validated’ or not. Oh well.

Some chapters are poor – ‘data-poor and theory-rich’ might not be a bad way to describe them; note that the ‘data-poor’ part relates both to the low amounts of data and to the use of data of questionable quality. I’m thinking specifically about the use of measures of ‘implicit self-esteem’ in chapter 6: the authors seem confused about the pattern of results and seem to have a hard time making sense of them (they keep having to invent new ad-hoc explanations for why ‘this makes sense in context’), but I don’t think the results are necessarily that confusing. The variables probably aren’t measuring what the authors think they’re measuring, not even close, and the two different types of measures probably aren’t remotely measuring anything similar (I have a really hard time figuring out why anyone would ever think that they do), so it makes good sense that the findings are all over the place. Chapter 8, on ‘Self-esteem as an interpersonal signal’, was however really great, and I thought I should share some observations from that chapter here – I have done so below. Interestingly, in light of the material included in that chapter, people who read the first post about the book would do well to forget my personal comments there about me having low self-esteem; interpersonal outcomes seem likely to be better if you think the people with whom you interact have high self-esteem (there are exceptions, but none of them seem relevant in this context), whether or not that’s actually true. Of course the level of ‘interaction’ going on here on the blog is very low, but even so… (I may be making a type of mistake similar to the one the authors of the last chapter make, by making unwarranted assumptions, but anyway…)

Before moving on, I should perhaps point out that I just finished the short Springer publication Appointment Planning in Outpatient Clinics and Diagnostic Facilities. I’m not going to blog this book separately, as there frankly isn’t enough material in it to justify an entire blog post, but I thought I might as well add a few remarks here first. The book contains a good introduction to some basic queueing theory, and quite a few important concepts are covered which people working with these kinds of things ought to know about (also, if you’ve ever had discussions about waiting lists and how ‘it’s terrible that people have to wait so long’ and ‘something has to be done‘, the discussion would have had a higher quality if you’d read this book first). Some chapters of the book are quite technical – here are a few illustrative/relevant links dealing with material covered in the book: Pollaczek–Khinchine formula, Little’s Law, the Erlang C formula, the Erlang B formula, Laplace–Stieltjes transform. The main thing I took away from this book was that this stuff is a lot more complicated than I’d thought. I’m not sure how much the average nurse would get out of this book, but I’m also not sure how much influence the average nurse has on planning decisions such as those described in this book – little, I hope. Sometimes a book contains a few really important observations and you sort of want to recommend it based simply on those observations, because a lot of people would benefit from knowing exactly those things; this book is like that, as planners on many different decision-making levels would benefit from knowing the ‘golden rules’ included in section 7.1. When things go wrong due to mismanagement and very long waiting lists develop, it’s obvious that, however you look at it, this would probably not have happened if people had paid more attention to those aspects.
An observation which is critical to include in the coverage of a book like this is that it may be quite difficult for an outside observer (e.g. a person visiting a health clinic) to evaluate the optimality of scheduling procedures, except in very obvious cases of inefficiently long queues. Especially in the case of excess capacity, most outsiders do not know enough to evaluate these systems fairly; what may look like excess capacity to the outsider may well be a necessary buffer included in the planning schedule to keep waiting times from exploding at other points in time, and it’s really hard to tell those apart if you don’t have access to the relevant data. Even if you do, things can be complicated (see the links above).
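To give a sense of how simple some of these formulas are to compute once you know them: here’s a minimal sketch of the Erlang B blocking probability (not code from the book – this is just my own illustration of one of the linked formulas, using the standard recursive formulation; the names erlang_b, E and m are mine). It gives the fraction of arrivals that find all servers busy in a loss system with m servers and an offered load of E erlangs:

```python
def erlang_b(E: float, m: int) -> float:
    """Erlang B blocking probability for an M/M/m/m loss system.

    Uses the numerically stable recursion
      B(E, 0) = 1
      B(E, k) = E*B(E, k-1) / (k + E*B(E, k-1))
    which avoids the large factorials in the closed-form expression.
    """
    b = 1.0
    for k in range(1, m + 1):
        b = E * b / (k + E * b)
    return b

# With an offered load of 2 erlangs and only 2 lines,
# roughly 40% of arrivals are blocked:
print(erlang_b(2.0, 2))  # ~0.4
```

Little’s Law is even simpler – the average number of customers in the system equals the arrival rate times the average time spent in the system – but the point the book drives home is that reasoning correctly about real scheduling systems from these building blocks is much harder than the individual formulas suggest.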

Okay, back to the self-esteem text – some observations from the second half of the book below…

“low self-esteem is listed as either a diagnostic criterion or associated feature of at least 24 mental disorders in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR). Low self-esteem and an insufficient ability to experience self-relevant positive emotions such as pride is particularly strongly linked to depression, to such a degree that some even suggest conceptualizing self-esteem and depression as opposing end points of a bipolar continuum […] The phenomenology of low self-esteem – feeling incompetent and unworthy, unfit for life – inevitably translates into experiencing existence as frightening and futile. This turns life for the person lacking in self-esteem into a chronic emergency: that person is psychologically in a constant state of danger, surrounded by a feeling of impending disaster and a sense of helplessness. Suffering from low self-esteem thus involves having one’s consciousness ruled by fear, which sabotages clarity and efficiency (Branden, 1985). The main goal for such a person is to keep the anxieties, insecurities, and self-doubts at bay, at whatever cost that may come. On the other hand, a person with a satisfying degree of self-respect, whose central motivation is not fear, can afford to rejoice in being alive, and view existence as a more exciting than threatening affair.” [from chapter 7, on ‘Existential perspective on self-esteem’ – I didn’t particularly like that chapter and I’m not sure to what extent I agree with the observations included, but I thought I should add the above to illustrate what kind of material is also included in the book.]

“Although past research has emphasized how social environments are internalized to shape self-views, researchers are increasingly interested in how self-views are externalized to shape one’s social environment. From the externalized perspective, people will use information about another’s self-esteem as a gauge of that person’s worth […] self-esteem serves a “status-signaling” function that complements the status-tracking function […] From this perspective, self-esteem influences one’s self-presentational behavior, which in turn influences how others view the self. This status-signaling system in humans should work much like the status-signaling models developed in non-human animals [Aureli et al. and Kappeler et al. are examples of places to go if you’re interested in knowing more about this stuff] […] Ultimately, these status signals have important evolutionary outcomes, such as access to mates and consequent reproductive success. In essence, self-esteem signals important status-related information to others in one’s social world. […] the basic notion here is that conveying high (or low) self-esteem provides social information to others.”

“In an effort to understand their social world, people form lay theories about the world around them. These lay theories consist of information about how characteristics covary within individuals […] Research on the status-signaling function of self-esteem […] and on self-esteem stereotypes […] report a consistent positive bias in the impressions formed about high self-esteem individuals and a consistent negative bias about those with low self-esteem. In several studies conducted by Cameron and her colleagues […], when Canadian and American participants were asked to rate how the average person would describe a high self-esteem individual, they universally reported that higher self-esteem people were attractive, intelligent, warm, competent, emotionally stable, extraverted, open to experience, conscientious, and agreeable. Basically, on all characteristics in the rating list, high self-esteem people were described as superior. […] Whereas people sing the praises of high self-esteem, low self-esteem is viewed as a “fatal flaw.” In the same set of studies, Cameron and her colleagues […] found that participants attributed negative characteristics to low self-esteem individuals. Across all of the characteristics assessed, low self-esteem people were seen as inferior. They were described as less attractive, less intelligent, less warm, less competent, less sociable, and so forth. The only time that the stereotypes of low self-esteem individuals were rated as “more” than the group of high self-esteem individuals was on negative characteristics, such as experiencing more negative moods and possessing more interpersonally disadvantageous characteristics (e.g., jealousy). […] low self-esteem individuals were seen just as negatively as welfare recipients and mentally ill people on most characteristics […] All cultures do not view self-esteem in the same way. […] There is some evidence to suggest that East Asian cultures link high self-esteem with more negative qualities”

“Zeigler-Hill and his colleagues […] presented participants with a single target, identified as low self-esteem or high self-esteem, and asked for their evaluations of the target. Whether the target was identified as low self-esteem by an explicit label (Study 3), a self-deprecating slogan on a T-shirt (Study 4), or their email address (Study 5, e.g., sadeyes@), participants rated an opposite-sex low self-esteem target as less romantically desirable than a high self-esteem target […]. However, ascribing negative characteristics to low self-esteem individuals is not just limited to decisions about an opposite-sex target. Zeigler-Hill and colleagues demonstrated that, regardless of match or mismatch of perceiver-target gender, when people thought a target had lower self-esteem they were more likely to ascribe negative traits to him or her, such as being lower in conscientiousness […] Overall, people are apt to assume that people with low self-esteem possess negative characteristics, whereas those with high self-esteem possess positive characteristics. Such assumptions are made at the group level […] and at the individual level […] According to Cameron and colleagues […], fewer than 1% of the sample ascribed any positive characteristics to people with low self-esteem when asked to give open-ended descriptions. Furthermore, on the overwhelming majority of characteristics assessed, low self-esteem individuals were rated more negatively than high self-esteem individuals”

“Although for the most part it is low self-esteem that people associate with negative qualities, there is a dark side to being labeled as having high self-esteem. People who are believed to have high self-esteem are seen as more narcissistic […], self-absorbed, and egotistical […] than those believed to possess low self-esteem. Moreover, the benefits of being seen as high self-esteem may be moderated by gender. When rating an opposite-sex target, men were often more positive toward female targets with moderate self-esteem than those with high self-esteem”

“Not only might perceptions of others’ self-esteem influence interactions among relative strangers, but they may also be particularly important in close relationships. Ample evidence demonstrates that a friend or partner’s self-esteem can have actual relational consequences […]. Relationships involving low self-esteem people tend to be less satisfying and less committed […], due at least in part to low self-esteem people’s tendency to engage in defensive, self-protective behavior and their enhanced expectations of rejection […]. Mounting evidence suggests that people can intuit these disadvantages, and thus use self-esteem as an interpersonal signal. […] Research by MacGregor and Holmes (2007) suggests that people expect to be less satisfied in a romantic relationship with a low self-esteem partner than a high self-esteem partner, directly blaming low self-esteem individuals for relationship mishaps […] it appears that people use self-esteem as a signal to indicate desirability as a mate: People report themselves as less likely to date or have sex with those explicitly labeled as having “low self-esteem” compared to those labeled as having “high self-esteem” […] Even when considering friendships, low self-esteem individuals are rated less socially appealing […] In general, it appears that low self-esteem individuals are viewed as less-than-ideal relationship partners.”

“Despite people’s explicit aversion to forming social bonds with low self-esteem individuals, those with low self-esteem do form close relationships. Nevertheless, even these established relationships may suffer when one person detects another’s low self-esteem. For example, people believe that interactions with low self-esteem friends or family members are more exhausting and require more work than interactions with high self-esteem friends and family […]. In the context of romantic relationships, Lemay and Dudley’s (2011) findings confirm the notion that relationships with low self-esteem individuals require extra relationship maintenance (or “work”) as people attempt to “regulate” their romantic partner’s insecurities. Specifically, participants who detected their partner’s low self-esteem tended to exaggerate affection for their partner and conceal negative sentiments, likely in an effort to maintain harmony in their relationship. Unfortunately, this inauthenticity was actually associated with decreased relationship satisfaction for the regulating partner over time. […] MacGregor and colleagues […] have explored a different type of communication in close relationships. Their focus was on capitalization, which is the disclosure of positive personal experiences to others […]. In two experiments […], participants who were led to believe that their close other had low self-esteem capitalized less positively (i.e., enthusiastically) compared to control participants. […] Moreover, in a study involving friend dyads, participants reported capitalizing less frequently with their friend to the extent they perceived him or her as having low self-esteem […] low self-esteem individuals are actually no less responsive to others’ capitalization attempts than are high self-esteem partners. 
Despite this fact, MacGregor and Holmes (2011) found that people are reluctant to capitalize with low self-esteem individuals precisely because they expect them to be less responsive than high self-esteem partners. Thus people appear to be holding back from low self-esteem individuals unnecessarily. Nevertheless, the consequences may be very real given that capitalization is a process associated with personal and interpersonal benefits”

“Cameron (2010) asked participants to indicate how much they tried to conceal or reveal their self-feelings and insecurities with significant others (best friends, romantic partners, and parents). Those with lower self-esteem reported attempting to conceal their insecurities and self-doubts to a greater degree than those with higher self-esteem. Thus, even in close relationships, low self-esteem individuals appear to see the benefit of hiding their self-esteem. Cameron, Hole, and Cornelius (2012) further investigated whether concealing self-esteem was linked with relational benefits for those with low self-esteem. In several studies, participants were asked to report their own self-esteem and then to provide their “self-esteem image”, or what level of self-esteem they thought they had conveyed to their significant others. Participants then indicated their relationship quality (e.g., satisfaction, commitment, trust). Across all studies and across all relationship types studied (friends, romantic partners, and parents), people reporting a higher self-esteem image, regardless of their own self-esteem level, reported greater relationship quality. […] both low and high self-esteem individuals benefit from believing that a high self-esteem image has been conveyed, though this experience may feel “inauthentic” for low self-esteem people. […] both low and high self-esteem individuals may hope to be seen as they truly are by their close others. […] In a recent meta-analysis, Kwang and Swann (2010) proposed that individuals desire verification unless there is a high risk for rejection. Thus, those with negative self-views may desire to be viewed positively, but only if being seen negatively jeopardizes their relationship. From this perspective, romantic partners should signal high self-esteem during courtship, job applicants should signal high self-esteem to potential bosses, and politicians should signal high self-esteem to their voters. 
Once the relationship has been cemented (and the potential for rejection has been reduced), however, people should desire to be seen as they are. Importantly, the results of the meta-analysis supported this proposal. While this boundary condition has shed some light on this debate, more research is needed to understand fully under what contexts people are motivated to communicate either positive or negative self-views.”

“it appears that people’s judgments of others’ self-esteem are partly well informed, yet also based on inaccurate stereotypes about characteristics not actually linked to self-esteem. […] Traits that do not readily manifest in behavior, or are low in observability, should be more difficult to detect accurately (see Funder & Dobroth, 1987). Self-esteem is one of these “low-observability” traits […] Although the operationalization of accuracy is tricky […], it does appear that people are somewhat accurate in their impressions of self-esteem […] research from various laboratories indicates that both friends […] and romantic partners […] are fairly accurate in judging each other’s self-esteem. […] However, people may also use information that has nothing to do with the appearances or behaviors of the target. Instead, people may make judgments about another’s personality traits based on how they perceive their own traits […] people tend to project their own characteristics onto others […] People’s ratings of others’ self-esteem tend to be correlated with their own, be it for friends or romantic partners”

November 12, 2014 | books, economics, health care, Psychology

Introduction to Meta Analysis (II)

You can read my first post about the book here. Some parts of the book are fairly technical, so I decided to skip some chapters in the coverage below, simply because I could see no good way to cover that stuff on a wordpress blog (which, as already mentioned many times, is not ideal for math coverage) without spending a lot more time on it than I wanted to. If you’re a new reader and/or you don’t know what a meta-analysis is, I highly recommend you read my first post about the book before moving on to the coverage below (and/or you can watch this brief video on the topic).

Below I have added some more quotes and observations from the book.

“In primary studies we use regression, or multiple regression, to assess the relationship between one or more covariates (moderators) and a dependent variable. Essentially the same approach can be used with meta-analysis, except that the covariates are at the level of the study rather than the level of the subject, and the dependent variable is the effect size in the studies rather than subject scores. We use the term meta-regression to refer to these procedures when they are used in a meta-analysis.
The differences that we need to address as we move from primary studies to meta-analysis for regression are similar to those we needed to address as we moved from primary studies to meta-analysis for subgroup analyses. These include the need to assign a weight to each study and the need to select the appropriate model (fixed versus random effects). Also, as was true for subgroup analyses, the R2 index, which is used to quantify the proportion of variance explained by the covariates, must be modified for use in meta-analysis.
With these modifications, however, the full arsenal of procedures that fall under the heading of multiple regression becomes available to the meta-analyst. […] As is true in primary studies, where we need an appropriately large ratio of subjects to covariates in order for the analysis to be meaningful, in meta-analysis we need an appropriately large ratio of studies to covariates. Therefore, the use of meta-regression, especially with multiple covariates, is not a recommended option when the number of studies is small.”
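
To make the weighting idea concrete, here’s a minimal sketch (mine, not the book’s) of a fixed-effect meta-regression in Python. All the numbers are invented: per-study effect sizes, within-study variances, and a hypothetical study-level covariate (“dose”); each study is weighted by the inverse of its variance, which is exactly the weighted-least-squares setup the quote describes.

```python
import numpy as np

# Hypothetical per-study data: effect sizes, within-study variances,
# and one study-level covariate (e.g., mean dose used in the study).
y = np.array([0.20, 0.35, 0.48, 0.30, 0.55])    # effect size per study
v = np.array([0.04, 0.02, 0.03, 0.05, 0.02])    # within-study variance
dose = np.array([10, 20, 30, 15, 35], dtype=float)

# Fixed-effect meta-regression: weighted least squares with
# inverse-variance weights (each study weighted by 1 / v_i).
X = np.column_stack([np.ones_like(dose), dose])  # intercept + covariate
W = np.diag(1.0 / v)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta)  # [intercept, slope]: estimated effect at dose 0, and change per unit dose
```

With more covariates one simply adds columns to `X`, which is also why the study-to-covariate ratio matters: with only a handful of studies the fit becomes meaningless.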

“Power depends on the size of the effect and the precision with which we measure the effect. For subgroup analysis this means that power will increase as the difference between (or among) subgroup means increases, and/or the standard error within subgroups decreases. For meta-regression this means that power will increase as the magnitude of the relationship between the covariate and effect size increases, and/or the precision of the estimate increases. In both cases, a key factor driving the precision of the estimate will be the total number of individual subjects across all studies and (for random effects) the total number of studies. […] While there is a general perception that power for testing the main effect is consistently high in meta-analysis, this perception is not correct […] and certainly does not extend to tests of subgroup differences or to meta-regression. […] Statistical power for detecting a difference among subgroups, or for detecting the relationship between a covariate and effect size, is often low [and] failure to obtain a statistically significant difference among subgroups should never be interpreted as evidence that the effect is the same across subgroups. Similarly, failure to obtain a statistically significant effect for a covariate should never be interpreted as evidence that there is no relationship between the covariate and the effect size.”

“When we have effect sizes for more than one outcome (or time-point) within a study, based on the same participants, the information for the different effects is not independent and we need to take account of this in the analysis. […] When we are working with different outcomes at a single point in time, the plausible range of correlations [between outcomes] will depend on the similarity of the outcomes. When we are working with the same outcome at multiple time-points, the plausible range of correlations will depend on such factors as the time elapsed between assessments and the stability of the relative scores over this time period. […] Researchers who do not know the correlation between outcomes sometimes fall back on either of two ‘default’ positions. Some will include both [outcome variables] in the analysis and treat them as independent. Others would use the average of the [variances of the two outcomes]. It is instructive, therefore, to consider the practical impact of these choices. […] In effect, […] researchers who adopt either of these positions as a way of bypassing the need to specify a correlation, are actually adopting a correlation, albeit implicitly. And, the correlation that they adopt falls at either extreme of the possible range (either zero or 1.0). The first approach is almost certain to underestimate the variance and overestimate the precision. The second approach is almost certain to overestimate the variance and underestimate the precision.” [A good example of a more general point in the context of statistical/mathematical modelling: Sometimes it’s really hard not to make assumptions, and trying to get around such problems by ‘ignoring them’ may sometimes lead to the implicit adoption of assumptions which are highly questionable as well.]
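
The arithmetic behind that last point is simple. The variance of the average of two effect sizes with within-study variances V1 and V2 and correlation r is (V1 + V2 + 2·r·√(V1·V2))/4, so treating the outcomes as independent amounts to assuming r = 0, while averaging the variances amounts to assuming r = 1. A quick illustration with made-up numbers:

```python
import math

# Hypothetical within-study variances for two outcomes measured
# on the same participants.
v1, v2 = 0.10, 0.10

def var_of_mean(v1, v2, r):
    """Variance of the average of two effect sizes with correlation r."""
    return (v1 + v2 + 2 * r * math.sqrt(v1 * v2)) / 4

for r in (0.0, 0.5, 1.0):
    print(r, var_of_mean(v1, v2, r))
# r = 0.0 -> 0.05  (treating the outcomes as independent: variance understated)
# r = 1.0 -> 0.10  (averaging the two variances: variance overstated)
```

Any plausible correlation between the outcomes lies somewhere in between, which is the book’s point: refusing to specify r just means implicitly picking one of the two extremes.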

“Vote counting is the name used to describe the idea of seeing how many studies yielded a significant result, and how many did not. […] narrative reviewers often resort to [vote counting] […] In some cases this process has been formalized, such that one actually counts the number of significant and non-significant p-values and picks the winner. In some variants, the reviewer would look for a clear majority rather than a simple majority. […] One might think that summarizing p-values through a vote-counting procedure would yield a more accurate decision than any one of the single significance tests being summarized. This is not generally the case, however. In fact, Hedges and Olkin (1980) showed that the power of vote-counting considered as a statistical decision procedure can not only be lower than that of the studies on which it is based, the power of vote counting can tend toward zero as the number of studies increases. […] the idea of vote counting is fundamentally flawed and the variants on this process are equally flawed (and perhaps even more dangerous, since the basic flaw is less obvious when hidden behind a more complicated algorithm or is one step removed from the p-value). […] The logic of vote counting says that a significant finding is evidence that an effect exists, while a non-significant finding is evidence that an effect is absent. While the first statement is true, the second is not. While a nonsignificant finding could be due to the fact that the true effect is nil, it can also be due simply to low statistical power. Put simply, the p-value reported for any study is a function of the observed effect size and the sample size. Even if the observed effect is substantial, the p-value will not be significant unless the sample size is adequate. In other words, as most of us learned in our first statistics course, the absence of a statistically significant effect is not evidence that an effect is absent.”

“While the term vote counting is associated with narrative reviews it can also be applied to the single study, where a significant p-value is taken as evidence that an effect exists, and a nonsignificant p-value is taken as evidence that an effect does not exist. Numerous surveys in a wide variety of substantive fields have repeatedly documented the ubiquitous nature of this mistake. […] When we are working with a single study and we have a nonsignificant result we don’t have any way of knowing whether or not the effect is real. The nonsignificant p-value could reflect either the fact that the true effect is nil or the fact that our study had low power. While we caution against accepting the former (that the true effect is nil) we cannot rule it out. By contrast, when we use meta-analysis to synthesize the data from a series of studies we can often identify the true effect. And in many cases (for example if the true effect is substantial and is consistent across studies) we can assert that the nonsignificant p-value in the separate studies was due to low power rather than the absence of an effect. […] vote counting is never a valid approach.”
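
Hedges and Olkin’s result is easy to check numerically. If each study independently comes up significant with probability equal to its (low) power, then the chance that a majority of studies is significant is just a binomial tail probability, and it shrinks as the number of studies grows. A small illustration (the per-study power of 0.30 is a made-up but realistic value for underpowered primary studies):

```python
import math

def binom_tail_majority(k, p):
    """P(more than half of k independent studies are significant),
    where each study is significant with probability p (its power)."""
    return sum(math.comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

power_per_study = 0.30  # hypothetical low per-study power
for k in (5, 15, 45):
    print(k, round(binom_tail_majority(k, power_per_study), 4))
```

Whenever per-study power is below 50%, adding more studies makes a vote-counting “majority rule” less likely to declare an effect, not more: precisely the pathological behavior the quote describes.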

“The fact that a meta-analysis will often [but not always] have high power is important because […] primary studies often suffer from low power. While researchers are encouraged to design studies with power of at least 80%, this goal is often elusive. Many studies in medicine, psychology, education and an array of other fields have power substantially lower than 80% to detect large effects, and substantially lower than 50% to detect smaller effects that are still important enough to be of theoretical or practical importance. By contrast, a meta-analysis based on multiple studies will have a higher total sample size than any of the separate studies and the increase in power can be substantial. The problem of low power in the primary studies is especially acute when looking for adverse events. The problem here is that studies to test new drugs are powered to find a treatment effect for the drug, and do not have adequate power to detect side effects (which have a much lower event rate, and therefore lower power).”

“Assuming a nontrivial effect size, power is primarily a function of the precision […] When we are working with a fixed-effect analysis, precision for the summary effect is always higher than it is for any of the included studies. Under the fixed-effect analysis precision is largely determined by the total sample size […], and it follows that the total sample size will be higher across studies than within studies. […] in a random-effects meta-analysis, power depends on within-study error and between-studies variation […if you don’t recall the difference between fixed-effects models and random effects models, see the previous post]. If the effect sizes are reasonably consistent from study to study, and/or if the analysis includes a substantial number of studies, then the second of these will tend to be small, and power will be driven by the cumulative sample size. In this case the meta-analysis will tend to have higher power than any of the included studies. […] However, if the effect size varies substantially from study to study, and the analysis includes only a few studies, then this second aspect will limit the potential power of the meta-analysis. In this case, power could be limited to some low value even if the analysis includes tens of thousands of persons. […] The Cochrane Database of Systematic Reviews is a database of systematic reviews, primarily of randomized trials, for medical interventions in all areas of healthcare, and currently includes over 3000 reviews. In this database, the median number of trials included in a review is six. When a review includes only six studies, power to detect even a moderately large effect, let alone a small one, can be well under 80%. While the median number of studies in a review differs by the field of research, in almost any field we do find some reviews based on a small number of studies, and so we cannot simply assume that power is high. 
[…] Even when power to test the main effect is high, many meta-analyses are not concerned with the main effect at all, but are performed solely to assess the impact of covariates (or moderator variables). […] The question to be addressed is not whether the treatment works, but whether one variant of the treatment is more effective than another variant. The test of a moderator variable in a meta-analysis is akin to the test of an interaction in a primary study, and both suffer from the same factors that tend to decrease power. First, the effect size is actually the difference between the two effect sizes and so is almost invariably smaller than the main effect size. Second, the sample size within groups is (by definition) smaller than the total sample size. Therefore, power for testing the moderator will often be very low (Hedges and Pigott, 2004).”

“It is important to understand that the fixed-effect model and random-effects model address different hypotheses, and that they use different estimates of the variance because they make different assumptions about the nature of the distribution of effects across studies […]. Researchers sometimes remark that power is lower under the random-effects model than for the fixed-effect model. While this statement may be true, it misses the larger point: it is not meaningful to compare power for fixed- and random-effects analyses since the two values of power are not addressing the same question. […] Many meta-analyses include a test of homogeneity, which asks whether or not the between-studies dispersion is more than would be expected by chance. The test of significance is […] based on Q, the sum of the squared deviations of each study’s effect size estimate (Yi) from the summary effect (M), with each deviation weighted by the inverse of that study’s variance. […] Power for this test depends on three factors. The larger the ratio of between-studies to within-studies variance, the larger the number of studies, and the more liberal the criterion for significance, the higher the power.”
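
The Q statistic described above is straightforward to compute by hand. A toy example with invented effect sizes and within-study variances for four studies:

```python
# Hypothetical effect sizes and within-study variances for k = 4 studies.
effects = [0.10, 0.30, 0.35, 0.60]
variances = [0.03, 0.02, 0.04, 0.03]

weights = [1 / v for v in variances]  # inverse-variance weights
# Fixed-effect summary M: the weighted mean of the effect sizes.
M = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
# Q: weighted sum of squared deviations of each study from M.
Q = sum(w * (y - M) ** 2 for w, y in zip(weights, effects))
df = len(effects) - 1

print(round(M, 4), round(Q, 4), df)
# Under homogeneity, Q approximately follows a chi-square distribution
# with df = k - 1, so E[Q] = df; a Q far above df suggests real dispersion.
```

This also makes the power point visible: with only a few studies, df is tiny and Q has to be quite large before the homogeneity test rejects, even when the true dispersion is substantial.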

“While a meta-analysis will yield a mathematically accurate synthesis of the studies included in the analysis, if these studies are a biased sample of all relevant studies, then the mean effect computed by the meta-analysis will reflect this bias. Several lines of evidence show that studies that report relatively high effect sizes are more likely to be published than studies that report lower effect sizes. Since published studies are more likely to find their way into a meta-analysis, any bias in the literature is likely to be reflected in the meta-analysis as well. This issue is generally known as publication bias. The problem of publication bias is not unique to systematic reviews. It affects the researcher who writes a narrative review and even the clinician who is searching a database for primary papers. […] Other factors that can lead to an upward bias in effect size and are included under the umbrella of publication bias are the following. Language bias (English-language databases and journals are more likely to be searched, which leads to an oversampling of statistically significant studies) […]; availability bias (selective inclusion of studies that are easily accessible to the researcher); cost bias (selective inclusion of studies that are available free or at low cost); familiarity bias (selective inclusion of studies only from one’s own discipline); duplication bias (studies with statistically significant results are more likely to be published more than once […]) and citation bias (whereby studies with statistically significant results are more likely to be cited by others and therefore easier to identify […]). 
[…] If persons performing a systematic review were able to locate studies that had been published in the grey literature (any literature produced in electronic or print format that is not controlled by commercial publishers, such as technical reports and similar sources), then the fact that the studies with higher effects are more likely to be published in the more mainstream publications would not be a problem for meta-analysis. In fact, though, this is not usually the case.
While a systematic review should include a thorough search for all relevant studies, the actual amount of grey/unpublished literature included, and the types, varies considerably across meta-analyses.”

“In sum, it is possible that the studies in a meta-analysis may overestimate the true effect size because they are based on a biased sample of the target population of studies. But how do we deal with this concern? The only true test for publication bias is to compare effects in the published studies formally with effects in the unpublished studies. This requires access to the unpublished studies, and if we had that we would no longer be concerned. Nevertheless, the best approach would be for the reviewer to perform a truly comprehensive search of the literature, in hopes of minimizing the bias. In fact, there is evidence that this approach is somewhat effective. Cochrane reviews tend to include more studies and to report a smaller effect size than similar reviews published in medical journals. Serious efforts to find unpublished, and difficult to find studies, typical of Cochrane reviews, may therefore reduce some of the effects of publication bias. Despite the increased resources that are needed to locate and retrieve data from sources such as dissertations, theses, conference papers, government and technical reports and the like, it is generally indefensible to conduct a synthesis that categorically excludes these types of research reports. Potential benefits and costs of grey literature searches must be balanced against each other.”

“Since we cannot be certain that we have avoided bias, researchers have developed methods intended to assess its potential impact on any given meta-analysis. These methods address the following questions:
*Is there evidence of any bias?
*Is it possible that the entire effect is an artifact of bias?
*How much of an impact might the bias have? […]
Methods developed to address publication bias require us to make many assumptions, including the assumption that the pattern of results is due to bias, and that this bias follows a certain model. […] In order to gauge the impact of publication bias we need a model that tells us which studies are likely to be missing. The model that is generally used […] makes the following assumptions: (a) Large studies are likely to be published regardless of statistical significance because these involve large commitments of time and resources. (b) Moderately sized studies are at risk for being lost, but with a moderate sample size even modest effects will be significant, and so only some studies are lost here. (c) Small studies are at greatest risk for being lost. Because of the small sample size, only the largest effects are likely to be significant, with the small and moderate effects likely to be unpublished.
The combined result of these three items is that we expect the bias to increase as the sample size goes down, and the methods described […] are all based on this model. […] [One problem is however that] when there is clear evidence of asymmetry, we cannot assume that this reflects publication bias. The effect size may be larger in small studies because we retrieved a biased sample of the smaller studies, but it is also possible that the effect size really is larger in smaller studies for entirely unrelated reasons. For example, the small studies may have been performed using patients who were quite ill, and therefore more likely to benefit from the drug (as is sometimes the case in early trials of a new compound). Or, the small studies may have been performed with better (or worse) quality control than the larger ones. Sterne et al. (2001) use the term small-study effect to describe a pattern where the effect is larger in small studies, and to highlight the fact that the mechanism for this effect is not known.”
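
That three-part model is easy to caricature in a simulation. All the numbers here are invented: a true effect of 0.2, studies of varying size, and a crude publication rule under which large studies are always published while small ones are published mostly only when significant. The pooled mean of the “published” studies then ends up above the true effect:

```python
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.2  # hypothetical true standardized mean difference

published = []
for _ in range(2000):
    n = random.choice([20, 50, 100, 400])  # per-group sample size
    se = (2 / n) ** 0.5                    # rough SE of a standardized mean difference
    d = random.gauss(TRUE_EFFECT, se)      # observed effect in one study
    significant = abs(d / se) > 1.96
    # Crude publication model: big studies always published; smaller
    # studies published when significant, otherwise only 20% of the time.
    if n >= 400 or significant or random.random() < 0.2:
        published.append(d)

print(round(statistics.mean(published), 3))  # noticeably above the true 0.2
```

Because only the small studies with the luckiest (largest) observed effects clear the significance bar, the bias is concentrated among the small studies: exactly the sample-size gradient that funnel-plot methods look for, and exactly why asymmetry alone cannot prove the mechanism is publication bias.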

“It is almost always important to include an assessment of publication bias in relation to a meta-analysis. It will either assure the reviewer that the results are robust, or alert them that the results are suspect.”



November 10, 2014 | books, statistics

Self-esteem (I)

I’m currently reading this book. I’ve written about this kind of stuff before here on the blog, so there are some observations from the book which I’ve decided not to repeat here even if it’s stuff that’s nice to know – instead I refer to these posts on the topic (I should perhaps clarify that a few of the observations made in those posts are observations I’d have liked the authors to include in the book as well, though they decided not to…). It’s worth mentioning that many other psychology-related posts in the archives also deal with stuff covered in the book, though the focus has often been different – one example would be this post, but there are lots of others as well.

I like the book and it’s certainly worth reading, even if it’s at times somewhat speculative and I have disagreed with a few of the authors about the interpretation of the research results they’ve presented. Reading a book like this one may easily make you question what you know about yourself and how you think about yourself, and this is probably a very good thing for me to do. People reading along here may not know this, but I have quite low self-esteem. Along the way, however, I got curious about where I’d actually score on the metrics usually applied in the literature, so I decided to have a go at the Rosenberg self-esteem scale before publishing this post. You should check it out if you’re the least bit interested; there are only 10 questions and it shouldn’t take you very long to answer them. My score was 6. It’s best thought of as a state estimate, not a trait estimate, but I doubt I’d ever score anywhere near 15 if not on illegal drugs or severely intoxicated by alcohol. It’s perhaps noteworthy in the context of this test that “average scores for most self-esteem instruments are well above the midpoint of their response scales (more than one standard deviation in many cases[)]”.

The book is not a self-help book, it’s a research book dealing with (parts of) the psychological literature written on the topic (a lot of stuff has been written on this topic: “Self-esteem is clearly one of the most popular topics in modern psychology, with more than 35,000 publications on the subject of this construct”). That said, a few tentative conclusions about how ‘healthy self-esteem’ may be different from ‘unhealthy self-esteem’ can be drawn from the literature – or anyway have been drawn from the literature by some of the authors of the book, whether or not they should have been drawn – and I’ve included some of these in the post below. An important general point in that context is that self-esteem is a complex trait; “there is far more to self-esteem than simply whether global self-esteem is high or low”. I’ve added a few comments about some key moderating variables below.

The book has a lot of good stuff and unlike a few of the books I’ve recently read it’s relatively easy to blog, in the sense that a somewhat high proportion of the total content would be stuff I could justify including in a post like this. If you like what I’ve included in the post below, I think it’s quite likely you’ll like the book.

With those things out of the way, below some observations from the first half of the book.

“self-esteem is generally considered to be the evaluative aspect of self-knowledge that reflects the extent to which people like themselves and believe they are competent […]. High self-esteem refers to a highly favorable view of the self, whereas low self-esteem refers to evaluations of the self that are either uncertain or outright negative […] self-esteem reflects perception rather than reality.
Self-esteem is considered to be a relatively enduring characteristic that possesses both motivational and cognitive components […]. Individuals tend to show a desire for high levels of self-esteem and engage in a variety of strategies to maintain or enhance their feelings of self-worth […] Individuals with different levels of self-esteem tend to adopt different strategies to regulate their feelings of self-worth, such that those with high self-esteem are more likely to focus their efforts on further increasing their feelings of self-worth (i.e., self-enhancement), whereas those with low self-esteem are primarily concerned with not losing the limited self-esteem resources they already possess (i.e., self-protection[)] […] In contrast to the self-enhancing tendencies exhibited by those with high self-esteem, individuals with low levels of self-esteem are more likely to employ self-protective strategies characterized by a reluctance to call attention to themselves, attempts to prevent their bad qualities from being noticed, and an aversion to risk. In essence, individuals with low self-esteem tend to behave in a manner that is generally cautious and conservative […] the risks taken by individuals with low self-esteem appear to have a greater potential cost for them than for those with high self-esteem because those with low self-esteem lack the evaluative resources necessary to buffer themselves from the self-esteem threats that accompany negative experiences such as failure and rejection.”

“According to the sociometer model, self-esteem has a status-tracking property such that the feelings of self-worth possessed by an individual depend on the level of relational value that the individual believes he or she possesses […] In essence, the sociometer model suggests that self-esteem is analogous to a gauge that tracks gains in perceived relational value (accompanied by increases in self-esteem) as well as losses in perceived value (accompanied by decreases in self-esteem). […] Although the sociometer model has been extremely influential, it may provide only a partial representation of the way this information is transferred between the individual and the social environment. That is, status-tracking models of self-esteem have focused exclusively on the influence that perceived standing has on feelings of self-worth […] without addressing the possibility that self-esteem also influences how others perceive the individual […] The status-signaling model of self-esteem […] provides a complement to the sociometer model by addressing the possibility that self-esteem influences how individuals present themselves to others and alters how those individuals are perceived by their social environment. […] The existing data has supported this basic idea”

“A wide array of studies have shown clear and consistent evidence that individuals who report more positive feelings of self-worth are also more emotionally stable and less prone to psychological distress than those who do not feel as good about themselves […] There is little debate that self-esteem is positively associated with outcomes such as self-reported happiness […] and overall life satisfaction […] Although there is a clear link between low self-esteem and psychopathology, the reason for this connection [however] remains unclear.”

“The model of evaluative self-organization measures the distribution of positively and negatively valenced self-beliefs across self-aspects (i.e., contexts). This model highlights individual differences in the organization of positive and negative beliefs into same- or mixed-valenced self-aspects, labeled compartmentalization and integration, respectively […] the basic model outlines two types of self-organizations: Evaluative compartmentalization, wherein individuals separate their positive and negative self-beliefs into distinct self-aspects, and evaluative integration, wherein individuals intermix positive and negative self-beliefs in each of their multiple self-aspects […] Compartmentalized selves are highly differentiated. […] This suggests that compartmentalized individuals have [relatively high] affect intensity […] with evaluative integration, there appears to be lower affective intensity. Trait self-esteem is gauged less heavily on affect, but more by cognitive features. Integratives possibly weigh their positive and negative beliefs by more objective standards, such as overall social position. Moreover, state self-esteem is often consistent with trait self-esteem because the situation does not often change the qualities of self-beliefs […] A second important feature of evaluative self-organization is differential importance […]. Some selves are considered subjectively more important than others and, naturally, these important selves weigh heavily in self-esteem judgments.”

“Individuals whose positive self-aspects are more important than their negatives (differential importance) are referred to as positively compartmentalized or positively integrative; and those whose negative selves are most important are referred to as negatively compartmentalized or negative integrative. […] Both negatively compartmentalized and integrative individuals feel as though acceptance from others is beyond their control, but their reactions and approaches to life may differ dramatically. […] negatively compartmentalized individuals strive to obtain belongingness in much the same way as their positively compartmentalized counterparts, but they fail in their efforts to live up to contingencies. This likely makes social acceptance particularly desirable and the inability to achieve it all the more frustrating, culminating in a despairing form of low self-esteem (a judgment they might arrive at reluctantly). On the other hand, negative integratives’ response to rejection seems more accepting, as if they can simply conclude that they are not worthy of acceptance and concede that their needs are unlikely ever to be met: a defeated form of low self-esteem.”

“People with HSE [high self-esteem] vs LSE [low self-esteem] have very different ways of orienting to their social worlds and regulating feelings of safety and security in response to self-esteem threats. HSE people’s self-confidence and interpersonal security motivates them to strive for positive end states (e.g., positive affect, social rewards) more than avoiding negative end states (e.g., loss of self-esteem, rejection). For example, following self-esteem threats, HSEs are quicker to access their strengths relative to weaknesses, are likely to dismiss the validity of negative feedback, derogate out-group members, make self-serving biases, and express increased zeal about value-laden opinions and ideologies […] In relational contexts, HSEs often show approach-motivated responses by drawing closer to their relationship partners following threat. […] In close relationships, LSEs [on the other hand] typically adopt avoidance strategies following threat, such as distancing from their romantic partner and devaluing their relationship […] When LSEs fail, they feel ashamed and humiliated […], generalize the failure to other aspects of themselves […], have difficulty accessing positive thoughts about themselves […], are less likely to show self-serving biases […], and are thought to possess fewer positive aspects of their self-image with which to affirm themselves […] Such findings support the idea that people who feel relationally devalued prioritize self-protection goals over self-enhancement goals or relationship-promotion goals, especially under conditions of heightened threat or perceived risk […] [various] findings support the idea that whereas HSEs regulate their responses to threat by defending and maintaining their favorable self-views, LSEs regulate their emotional reactions by withdrawing from the situation […] to avoid further loss of self-esteem. 
[…] Overall, then, these studies suggest that interpersonally motivated responses – to draw closer or distance from others following threat – depends on one’s global level of self-esteem, the degree to which self-worth is invested in a domain, and whether or not one experiences a threat to that domain.”

“We define consistency in this chapter as rank-order stability, which is typically assessed using test-retest correlations […] The degree of rank-order stability is an important consideration when evaluating whether the construct of self-esteem is more state- or trait-like […]. Psychological traits such as the Big Five typically exhibit high stability over time, whereas mood and other states tend to exhibit lower levels of stability […]. Although debate persists […], we believe that the evidence now supports the conclusion that self-esteem is best conceptualized as a stable trait. Most notably, Trzesniewski and colleagues (2003) examined the rank-order stability of self-esteem using data from 50 published articles (N = 29,839). They found that test-retest correlations are moderate in magnitude and comparable to those found for personality traits […] The rank-order stability of self-esteem showed a robust curvilinear trend [as a function of age] […] Recent studies have replicated the curvilinear trend for the Big Five personality domains […], suggesting that the pattern observed for self-esteem may reflect a more general developmental process. […] The increasing consistency of self-esteem from childhood to mid-life conforms well to the cumulative continuity principle […], which states that psychological traits become more consistent as individuals mature into adulthood. […] After decades of contentious debate […], research accumulating over the past several years suggests that there are reliable age differences in self-esteem across the life span […]. A broad generalization is that levels of self-esteem decline from childhood to adolescence, increase during the transition to adulthood, reach a peak sometime in middle adulthood, and decrease in old age […] there is […] a biological component to self-esteem that is being increasingly recognized. Twin studies indicated that genetic factors account for about 40% of the observed variability in self-esteem (Neiss, Sedikides, & Stevenson, 2002). The relatively high heritability of self-esteem […] approaches that found for basic personality traits”

“Perceptions by others may shape self-esteem and these reflected appraisals have long been implicated in self-esteem development […] perceiving one’s partner as supportive and loving leads to greater self-esteem over time, and perceiving a partner to view one less positively leads to diminished self-esteem over time. Similarly, being viewed as competent and liked by peers may promote self-esteem, whereas being viewed as incompetent and disliked by peers may diminish self-esteem […]. One complicated issue […] is that self-esteem may actually shape how individuals perceive the world […]. Individuals with low self-esteem may perceive peer rejection and negativity even when peers do not actually harbor such perspectives. These self-perceptions – whether true or not – may reinforce levels of self-esteem. […] An emerging body of evidence suggests that people with low self-esteem elicit particular responses from the social environment. […] For example, people report being more interested in voting for presidential candidates that are perceived as having higher self-esteem and people perceived as having higher self-esteem are thought to make more desirable relationship partners, particularly when the high self-esteem target is male […] In many cases, the environmental stimuli evoked by self-esteem seems to follow the “corresponsive principle” […] of personality development, the idea that life experiences accentuate the characteristics that were initially responsible for the environmental experiences in the first place. For instance, when low self-esteem invites victimization, it is likely that peer victimization will further depress self-esteem. Similarly, Holmes and Wood (2009) found that individuals with low self-esteem are less disclosing in interpersonal settings, which tends to hamper the development of close relationships. 
As they note, “the avoidance of risk is self-defeating, resulting in lost social opportunities, the very lack of close connection that [individuals with low self-esteem] fear, and the perpetuation of their low self-esteem””

“differences in traits motivate individuals to select certain situations over others. These processes will also facilitate continuity by the corresponsive principle […] The idea that self-esteem is related to the kinds of environments individuals select for themselves is also consistent with Swann’s self-verification theory, which proposes that individuals are motivated to confirm their pre-existing self-views […]. That is, individuals with low self-esteem seek contexts that confirm and maintain their low self-regard whereas individuals with high self-esteem seek contexts that promote their high self-regard. This process might explain why individuals with low self-esteem prefer certain kinds of relationships that can involve negative feedback […]. An important caveat is that individuals with low self-esteem will prefer negative feedback in relationship contexts with a low risk of rejection. The gist is that a romantic partner can be negative but not rejecting. Nonetheless, the upshot of self-verification motives is that they tend to promote the consistency of self-views.”

“Direct measures of self-esteem do a good job of assessing one’s overall level of self-esteem (i.e., global self-evaluation) but individuals with both secure and fragile forms of high self-esteem will report feeling good about themselves and they should score equally high on these measures of self-esteem. Consequently, researchers have begun considering factors beyond self-esteem level in order to identify individuals with the fragile form of high self-esteem. The three main approaches have been to consider whether high self-esteem is contingent, unstable, or accompanied by low implicit self-esteem […] Contingent self-esteem refers to self-evaluations that depend on meeting standards of performance, approval, or acceptance in order to be maintained. This is a fragile form of high self-esteem because individuals only feel good about themselves when they are able to meet these standards […] Deci and Ryan […] argued in their self-determination theory that some people regulate their behavior and goals based on introjected standards – most typically they internalize significant others’ conditional standards of approval – which causes them to develop contingent self-esteem. These individuals become preoccupied with meeting externally derived standards or expectations in order to maintain their feelings of self-worth. As a result, their self-esteem is continually “on the line.” […] Tellingly, drops in self-esteem that result from failures in contingent domains are often greater in magnitude than boosts that result from successes […]. This asymmetry may contribute to the overall fragility of contingent self-esteem.
As a result of their positive self-views being continuously on the line, individuals with contingent high self-esteem may go to great lengths to guard against threatening information by engaging in practices such as blaming others for their failures, derogating people who criticize them, or distorting or denying information that reflects poorly on them […] Taken together, this evidence suggests that contingent high self-esteem is fragile and related to defensiveness.”

“Another indicator of fragile self-esteem is self-esteem instability […] Unstable high self-esteem is considered to be a form of fragile high self-esteem because these fluctuations in feelings of self-worth suggest that the positive attitudes these individuals hold about themselves are vulnerable to challenges or threats. Contingent self-esteem can contribute to self-esteem instability. […] Some self-esteem contingencies are more likely to induce self-esteem instability than others. For example, those contingencies of self-worth that have been identified by Crocker and her colleagues […] as external […] appear to be more closely associated with self-esteem instability than those contingencies that are internal […] individuals with unstable high self-esteem are more self-aggrandizing and indicate they are more likely to boast to their friends about their successes than individuals with stable high self-esteem […] When they actually do perform well (e.g., on an exam), individuals with unstable high self-esteem are more likely to claim that they did so in spite of performance-inhibiting factors […] The final major indicator of fragile self-esteem is high self-esteem that is accompanied by low levels of implicit self-esteem […]. A number of longstanding theories suggest that some individuals with high self-esteem are defensive, aggressive, and boastful because they harbor negative self-feelings at less conscious levels […] We can now offer a provisional answer to the question of what constitutes “healthy” self-esteem: it is high self-esteem that is relatively non-contingent, stable, and accompanied by high implicit self-esteem. […] The key to healthy self-esteem may […] be to take one’s focus away from self-esteem. 
Self-determination theory posits that secure (or “true”) high self-esteem arises from acting in accordance with one’s authentic values and interests rather than regulating behavior around self-esteem concerns […] Somewhat ironically, the most effective way to cultivate healthy self-esteem may be to worry less about having high self-esteem.”

November 6, 2014 Posted by | books, Psychology

Elementary Set Theory


(Illustration: an SMBC comic; see the original post for links.)

I finished this book earlier today. This is a really hard book to blog, and I ended up concluding that I should just cover the material here by adding some links to relevant wiki articles dealing with stuff also covered in the book. It’s presumably far from obvious to someone who has not read the book how the different links are related to each other, but adding details about this as well would mean rewriting the book. Consider the articles samples of the kind of stuff covered in the book.

It should be noted that the book has very few illustrations and consists of little but formulas, theorems, definitions, and examples. Lots of proofs, naturally. I must admit I found the second half of the book unnecessarily hard to follow because, unlike the first half, it included no images or illustrations at all (like the ones in the various wiki articles below) – you don’t need an illustration on every second page to follow this stuff, but in my opinion you do occasionally need one to help you imagine what’s actually going on; the material is quite abstract enough to begin with without removing all ‘visual aids’. Some of the stuff covered in the book was review, but a lot of it was new to me, and I did have a few ‘huh, I never really realized you could think about [X] that way’-experiences along the way. Part of what made me want to read the book was that although I’ve encountered some of the concepts introduced along the way – to give an example, the axiom of choice and various properties of sets, such as countability, came up during a micro course last year – I’ve never really read a book about this material, so it has felt somewhat disconnected and confusing because I’ve been missing the big picture. It was of course too much to expect that this book would give me the big picture, but reading it has certainly helped a bit.
Incidentally, I did not find it too surprising that the material in this book overlaps a bit with some stuff I’ve previously encountered in micro. In a book like this one there are a lot of thoughts and ideas about how to set up a mathematical system: you start out at a very simple level and then gradually add more details when you encounter problems or situations which cannot be handled with the tools/theorems/axioms already at your disposal. This seems to me conceptually similar to the approach to modelling preferences and behaviour applied in e.g. Mas-Colell et al.‘s microeconomics textbook, so it makes sense that some similar questions and considerations pop up. A related point is that some of the problems the two analytical approaches are trying to solve are really quite similar: a microeconomist wants to know how best to define preferences and then use the relevant definitions to compare people’s preferences, e.g. by trying to order them in various ways, and he wants to understand which ways of mathematically manipulating the terms involved are permissible and which are not – activities which, at least from a certain point of view, are not that different from those of set theorists such as the authors of this book.

I decided not to rate the book, in part because I don’t think it’s really fair to rate a math book when you’re not going to have a go at the exercises. I wanted an overview; I don’t care enough about this stuff to dive in deep and start proving things on my own. As mentioned, I was annoyed by the lack of illustrations in the second part, and occasionally it was really infuriating to encounter the standard ‘…and so it’s obvious that…’ comments which you can find in most math textbooks, but I don’t know how much these things should really matter. Another reason why I decided not to rate the book is that I’ve been suffering from some pretty bad noise pollution over the last few days, which has meant that I’ve occasionally had a really hard time concentrating on the mathematics (‘if I’d just had some peace and quiet I might have liked the book much better than I did, and I might have done some of the exercises as well’ is my working hypothesis).

Okay, as mentioned I’ve added some relevant links below – check them out if you want to know more about what this book is about and what kind of mathematics is covered in it. I haven’t read all of those articles in full, but I have read about these topics in the book. I should note, in case it’s not clear from the links, that some of this stuff is quite weird.

Set theory.
Bijection, injection and surjection.
Indexed family.
Power set.
Empty set.
Family of sets.
Axiom of choice.
Successor function.
Axiom of infinity.
Peano axioms.
Equivalence class.
Equivalence relation.
Schröder–Bernstein theorem.
Cantor’s theorem.
Cantor’s diagonal argument.
Order theory.
Transfinite induction.
Zorn’s Lemma.
Ordinal number.
Cardinal number.
Cantor’s continuum hypothesis.
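
Since the articles above are fairly abstract, here is a small illustrative sketch of my own (not taken from the book) of two of the listed ideas – the power set and Cantor’s theorem via the diagonal argument – worked out for a finite set:

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, as frozensets."""
    elems = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1))]

S = {0, 1, 2}
P = power_set(S)
assert len(P) == 2 ** len(S)  # |P(S)| = 2^|S|, so P(S) is strictly bigger

# Cantor's theorem: no f: S -> P(S) is surjective. The diagonal set
# D = {x in S : x not in f(x)} can never equal f(x) for any x, since
# x's membership in D disagrees with x's membership in f(x) by construction.
def diagonal(f, s):
    return frozenset(x for x in s if x not in f(x))

f = {0: frozenset(), 1: frozenset({0, 1}), 2: frozenset({1, 2})}
D = diagonal(lambda x: f[x], S)
assert all(f[x] != D for x in S)  # D is missed by this particular f
```

The same diagonal construction, applied to infinite sets, is what shows that the real numbers are uncountable.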


November 4, 2014 Posted by | books, mathematics

Chlamydia and gonorrhea…

Below are some observations from Holmes et al.‘s chapters about the sexually transmitted bacterial infections chlamydia and gonorrhea. A few of these chapters covered some really complicated stuff, but I’ve tried to keep the coverage reasonably readable by avoiding many of the technical details. I’ve also tried to make the excerpts easier to read by adding relevant links and brief explanations of specific terms in brackets where that seemed helpful.

“Since the early 1970s, Chlamydia trachomatis has been recognized as a genital pathogen responsible for an increasing variety of clinical syndromes, many closely resembling infections caused by Neisseria gonorrhoeae […]. Because many practitioners have lacked access to facilities for laboratory testing for chlamydia, these infections often have been diagnosed and treated without benefit of microbiological confirmation. Newer, molecular diagnostic tests have in part now addressed this problem […] Unfortunately, many chlamydial infections, particularly in women, are difficult to diagnose clinically and elude detection because they produce few or no symptoms and because the symptoms and signs they do produce are nonspecific. […] chlamydial infections tend to follow a fairly self-limited acute course, resolving into a low-grade persistent infection which may last for years. […] The disease process and clinical manifestations of chlamydial infections probably represent the combined effects of tissue damage from chlamydial replication and inflammatory responses to chlamydiae and the necrotic material from destroyed host cells. There is an abundant immune response to chlamydial infection (in terms of circulating antibodies or cell-mediated responses), and there is evidence that chlamydial diseases are diseases of immunopathology. […] A common pathologic end point of chlamydial infection is scarring of the affected mucous membranes. This is what ultimately leads to blindness in trachoma and to infertility and ectopic pregnancy after acute salpingitis. There is epidemiologic evidence that repeated infection results in higher rates of sequelae.”

“The prevalence of chlamydial urethral infection has been assessed in populations of men attending general medical clinics, STD clinics, adolescent medicine clinics, and student health centers and ranges from 3–5% of asymptomatic men seen in general medical settings to 15–20% of all men seen in STD clinics. […] The overall incidence of C. trachomatis infection in men has not been well defined, since in most countries these infections are not officially reported, are not microbiologically confirmed, and often may be asymptomatic, thus escaping detection. […] The prevalence of chlamydial infection has been studied in pregnant women, in women attending gynecology or family planning clinics, in women attending STD clinics, in college students, and in women attending general medicine or family practice clinics in school-based clinics and more recently in population-based studies. Prevalence of infection in these studies has ranged widely from 3% in asymptomatic women in community-based surveys to over 20% in women seen in STD clinics.[31–53] During pregnancy, 3–7% of women generally have been chlamydia positive […] Several studies in the United States indicate that approximately 5% of neonates acquire chlamydial infection perinatally, yet antibody prevalence in later childhood before onset of sexual activity may exceed 20%.”

“Clinically, chlamydia-positive and chlamydia-negative NGU [Non-Gonococcal Urethritis] cannot be differentiated on the basis of signs or symptoms.[76] Both usually present after a 7–21-day incubation period with dysuria and mild-to-moderate whitish or clear urethral discharge. Examination reveals no abnormalities other than the discharge in most cases […] Clinical recognition of chlamydial cervicitis depends on a high index of suspicion and a careful cervical examination. There are no genital symptoms that are specifically correlated with chlamydial cervical infection. […] Although urethral symptoms may develop in some women with chlamydial infection, the majority of female STD clinic patients with urethral chlamydial infection do not have dysuria or frequency. […] the majority of women with chlamydial infection cannot be distinguished from uninfected women either by clinical examination or by […] simple tests and thus require the use of specific diagnostic testing. […] Since many chlamydial infections are asymptomatic, it has become clear that effective control must involve periodic testing of individuals at risk.[168] As the cost of extensive screening may be prohibitive, various approaches to defining target populations at increased risk of infection have been evaluated. One strategy has been to designate patients attending specific high prevalence clinic populations for universal testing. Such clinics would include STD, juvenile detention, and some family planning clinics. This approach, however, fails to account for the majority of asymptomatic infections, since attendees at high prevalence clinics often attend because of symptoms or suspicion of infection. Consequently, selective screening criteria have been developed for use in various clinical settings.[204–208] Among women, young age (generally, <24 years) is a critical risk factor for chlamydial infection in almost all studies. 
[…] The practical implementation of screening programs in settings with low-to-moderate chlamydia prevalence requires that the prevalence at which selective screening becomes cost effective relative to universal screening must be defined. Toward this end, a number of investigators have undertaken cost-effectiveness analyses. Most of these analyses have concluded that universal screening is preferred in settings with chlamydia prevalence above 3–7%.”

If you’re a woman who’s decided not to have children and so aren’t terribly worried about infertility, it should be emphasized that untreated chlamydia can cause other really unpleasant stuff as well, like chronic pelvic pain from pelvic inflammatory disease, or ectopic pregnancy, which may be life-threatening. This is the sort of infection you’ll want to get treated even if you’re not bothered by symptoms.

“Neisseria gonorrhoeae (gonococci) is the etiologic agent of gonorrhea and its related clinical syndromes (urethritis, cervicitis, salpingitis, bacteremia, arthritis, and others). It is closely related to Neisseria meningitidis (meningococci), the etiologic agent of one form of bacterial meningitis, and relatively closely to Neisseria lactamica, an occasional human pathogen. The genus Neisseria includes a variety of other relatively or completely nonpathogenic organisms that are principally important because of their occasional diagnostic confusion with gonococci and meningococci. […] Many dozens of specific serovars have been defined […] By a combination of auxotyping and serotyping […] gonococci can be divided into over 70 different strains; the number may turn out to be much larger.”

“Humans are the only natural host for gonococci. Gonococci survive only a short time outside the human body. Although gonococci can be cultured from a dried environment such as a toilet seat up to 24 hours after being artificially inoculated in large numbers onto such a surface, there is virtually no evidence that natural transmission occurs from toilet seats or similar objects. Gonorrhea is a classic example of an infection spread by contact: immediate physical contact with the mucosal surfaces of an infected person, usually a sexual partner, is required for transmission. […] Infection most often remains localized to initial sites of inoculation. Ascending genital infections (salpingitis, epididymitis) and bacteremia, however, are relatively common and account for most of the serious morbidity due to gonorrhea.”

“Consideration of clinical manifestations of gonorrhea suggests many facets of the pathogenesis of the infection. Since gonococci persist in the male urethra despite hydrodynamic forces that would tend to wash the organisms from the mucosal surface, they must be able to adhere effectively to mucosal surfaces. Similarly, since gonococci survive in the urethra despite close attachment to large numbers of neutrophils, they must have mechanisms that help them to survive interactions with polymorphonuclear neutrophils. Since some gonococci are able to invade and persist in the bloodstream for many days at least, they must be able to evade killing by normal defense mechanisms of plasma […] Invasion of the bloodstream also implies that gonococci are able to invade mucosal barriers in order to gain access to the bloodstream. Repeated reinfections of the same patient by one strain strongly suggest that gonococci are able to change surface antigens frequently and/or to escape local immune mechanisms […] The considerable tissue damage of fallopian tubes consequent to gonococcal salpingitis suggests that gonococci make at least one tissue toxin or gonococci trigger an immune response that results in damage to host tissues.[127] There is evidence to support many of these inferences. […] Since the mid-1960s, knowledge of the molecular basis of gonococcal–host interactions and of gonococcal epidemiology has increased to the point where it is amongst the best described of all microbial pathogens. […] Studies of pathogenesis are [however] complicated by the absence of a suitable animal model. A variety of animal models have been developed, each of which has certain utility, but no animal model faithfully reproduces the full spectrum of naturally acquired disease of humans.”

“Gonococci are inherently quite sensitive to antimicrobial agents, compared with many other gram-negative bacteria. However, there has been a gradual selection for antibiotic-resistant mutants in clinical practice over the past several decades […] The consequence of these events has been to make penicillin and tetracycline therapy ineffective in most areas. Antibiotics such as spectinomycin, ciprofloxacin, and ceftriaxone generally are effective but more expensive than penicillin G and tetracycline. Resistance to ciprofloxacin emerged in SE Asia and Africa in the past decade and has spread gradually throughout much of the world […] Streptomycin (Str) is not frequently used for therapy of gonorrhea at present, but many gonococci exhibit high-level resistance to Str. […] Resistance to fluoroquinolones is increasing, and now has become a general problem in many areas of the world.”

“The efficiency of gonorrhea transmission depends on anatomic sites infected and exposed as well as the number of exposures. The risk of acquiring urethral infection for a man following a single episode of vaginal intercourse with an infected woman is estimated to be 20%, rising to an estimated 60–80% following four exposures.[16] The prevalence of infection in women named as secondary sexual contacts of men with gonococcal urethritis has been reported to be 50–90%,[16,17] but no published studies have carefully controlled for number of exposures. It is likely that the single-exposure transmission rate from male to female is higher than that from female to male […] Previous reports saying that 80% of women with gonorrhea were asymptomatic were most often based on studies of women who were examined in screening surveys or referred to STD clinics because of sexual contact with infected men.[23] Symptomatic infected women who sought medical attention were thus often excluded from such surveys. However […] more than 75% of women with gonorrhea attending acute care facilities such as hospital emergency rooms are symptomatic.[24] The true proportion of infected women who remain asymptomatic undoubtedly lies between these extremes […] Asymptomatic infections occur in men as well as women […] Asymptomatically infected males and females contribute disproportionately to gonorrhea transmission, because symptomatic individuals are more likely to cease sexual activity and seek medical care.”
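
The numbers quoted above hang together arithmetically: if one assumes (my assumption, not the chapter's) that each exposure carries an independent 20% transmission risk, the cumulative risk after four exposures lands right around the low end of the quoted 60–80% range:

```python
def cumulative_risk(p_single, n_exposures):
    """Probability of at least one transmission in n independent exposures:
    1 minus the probability of escaping infection every time."""
    return 1 - (1 - p_single) ** n_exposures

p = 0.20  # estimated per-exposure risk for men, from the chapter
risk4 = cumulative_risk(p, 4)
print(round(risk4, 4))  # -> 0.5904, close to the quoted 60-80% after four exposures
```

That the observed four-exposure figure sits at or above the independence prediction is consistent with the chapter's suggestion that per-exposure infectivity is not uniform across partnerships.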

“the incidence of asymptomatic urethral gonococcal infection in the general population also has been estimated at approximately 1–3%.[27] The prevalence of asymptomatic infection may be much higher, approaching 5% in some studies, because untreated asymptomatic infections may persist for considerable periods. […] The prevalence of gonorrhea within communities tends to be dynamic, fluctuating over time, and influenced by a number of interactive factors. Mathematical models for gonorrhea within communities suggest that gonorrhea prevalence is sustained not only through continued transmission by asymptomatically infected patients but also by “core group” transmitters who are more likely than members of the general population to become infected and transmit gonorrhea to their sex partners. […] At present, gonorrhea prevention and control efforts are heavily invested in the concept of vigorous pursuit and treatment of infected core-group members and asymptomatically infected individuals.”
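
A toy version of the kind of mathematical model alluded to above might look as follows. This is my own illustrative sketch, not one of the models referenced in the chapter, and every parameter is invented; the point it demonstrates is the core-group effect: a small, highly sexually active group can keep an infection endemic even when transmission in the general population is too weak to be self-sustaining on its own.

```python
def simulate(steps=4000, dt=0.05):
    """Two-group SIS (susceptible-infected-susceptible) model with
    activity-weighted proportional mixing. All parameters invented."""
    n = [0.02, 0.98]   # population fractions: core group, general population
    c = [20.0, 1.0]    # partner-change rates (core group is far more active)
    beta = 0.2         # transmission probability per contact
    gamma = 1.0        # recovery/treatment rate
    i = [0.01, 0.001]  # initial infected fractions within each group
    activity = sum(cg * ng for cg, ng in zip(c, n))
    for _ in range(steps):
        # chance that a randomly chosen contact is with an infective
        pi = sum(cg * ng * ig for cg, ng, ig in zip(c, n, i)) / activity
        for g in range(2):
            di = beta * c[g] * pi * (1 - i[g]) - gamma * i[g]
            i[g] += dt * di
    return i

i_core, i_gen = simulate()
# For the general group beta*c < gamma, so that group alone could not
# sustain the infection -- yet its prevalence stays positive because the
# core group continually reseeds it.
print(round(i_core, 3), round(i_gen, 3))
```

In this toy run prevalence settles at a high endemic level in the core group and a low but nonzero level in the general population, which is the intuition behind targeting control efforts at core-group members.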

“Relatively large numbers (>50) of gonococcal A/S [auxotype/serotype] classes usually are present in most communities simultaneously […] and new strains can be detected over time. The distribution of isolates within A/S classes tends to be uneven, with a few A/S classes contributing disproportionately to the total number of isolates. These predominant A/S classes generally persist within communities for months or years. […] Interviews of the patients infected by [a specific] strain early in [an] outbreak identified one infected female who acknowledged over 100 different sexual partners over the preceding 2 months, suggesting that she may have played an important role in the introduction and establishment of this gonococcal strain in the community. Thus the Proto/IB-3 strain may have become common in Seattle not because of specific biologic factors but because of its chance of transmission to members of a core population by a high-frequency transmitter.” [100+ partners over a 2 month period! I was completely dumbstruck when I read that.]

“clinical gonorrhea is manifested by a broad spectrum of clinical presentations including asymptomatic and symptomatic local infections, local complicated infections, and systemic dissemination. […] Acute anterior urethritis is the most common manifestation of gonococcal infection in men. The incubation period ranges from 1 to 14 days or even longer; however, the majority of men develop symptoms within 2–5 days […] The predominant symptoms are urethral discharge or dysuria [pain on urination]. […] Without treatment, the usual course of gonococcal urethritis is spontaneous resolution over a period of several weeks, and before the development of effective antimicrobial therapy, 95% of untreated patients became asymptomatic within 6 months.[43] […] The incubation period for urogenital gonorrhea in women is less certain and probably more variable than in men, but most who develop local symptoms apparently do so within 10 days of infection.[51,52] The most common symptoms are those of most lower genital tract infections in women […] and include increased vaginal discharge, dysuria, intermenstrual uterine bleeding, and menorrhagia [abnormally heavy and prolonged menstrual period], each of which may occur alone or in combination and may range in intensity from minimal to severe. […] The clinical assessment of women for gonorrhea is often confounded […] by the nonspecificity of these signs and symptoms and by the high prevalence of coexisting cervical or vaginal infections with Chlamydia trachomatis, Trichomonas vaginalis, Candida albicans, herpes simplex virus, and a variety of other organisms […] Among coinfecting agents for patients with gonorrhea in the United States, C. trachomatis [chlamydia] is preeminent. Up to 10–20% of men and 20–30% of women with acute urogenital gonorrhea are coinfected with C. trachomatis.[10,46,76,139–141] In addition, substantial numbers of women with acute gonococcal infection have simultaneous T. vaginalis infections.”

“Among patients with gonorrhea, pharyngeal infection occurs in 3–7% of heterosexual men, 10–20% of heterosexual women, and 10–25% of homosexually active men. […] Gonococcal infection is transmitted to the pharynx by orogenital sexual contact and is more efficiently acquired by fellatio than by cunnilingus.[63]”

“In men, the most common local complication of gonococcal urethritis is epididymitis […], a syndrome that occurred in up to 20% of infected patients prior to the availability of modern antimicrobial therapy. […] Postinflammatory urethral strictures were common complications of untreated gonorrhea in the preantibiotic era but are now rare […] In acute PID [pelvic inflammatory disease], the clinical syndrome, comprised primarily of salpingitis and frequently including endometritis, tubo-ovarian abscess, or pelvic peritonitis, is the most common complication of gonorrhea in women, occurring in an estimated 10–20% of those with acute gonococcal infection.[75,76] PID is the most common of all complications of gonorrhea, as well as the most important in terms of public-health impact, because of both its acute manifestations and its long-term sequelae (infertility, ectopic pregnancy, and chronic pelvic pain).”

“A major impediment to use of culture for gonorrhea diagnosis in many clinical settings is the time, expense, and logistical limitations such as specimen transport to laboratories for testing, a process that may take several days and result in temperature variation or other circumstances that can jeopardize culture viability.[111] In recent years, reliable nonculture assays for gonorrhea detection have become available and are being used increasingly. […] recently, nucleic acid amplification tests (NAATs) for gonorrhea diagnosis have become widely available.[116,117] Assays based on polymerase chain reaction (PCR), transcription-mediated amplification (TMA), and other nucleic acid amplification technologies have been developed. As a group, commercially available NAATs are more sensitive than culture for gonorrhea diagnosis and specificities are nearly as high as for culture. […] Emerging data suggest that most currently available NAATs are substantially more sensitive for gonorrhea detection than conventional culture.”
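[Sensitivity and specificity figures like these matter most once you account for prevalence. A small Bayes'-rule sketch makes the point; the assay characteristics below are hypothetical round numbers, not figures from the book. It shows why even a highly specific test produces proportionally more false positives when used to screen low-prevalence populations.]

```python
# Illustrative only: how test sensitivity and specificity translate into
# predictive values at different prevalences. All numbers are hypothetical.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test applied at a given prevalence."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)   # P(infected | positive test)
    npv = tn / (tn + fn)   # P(not infected | negative test)
    return ppv, npv

# A hypothetical NAAT-like assay: 95% sensitive, 99.5% specific.
# Compare a low-prevalence screening population with a high-prevalence
# clinic population.
for prev in (0.03, 0.20):
    ppv, npv = predictive_values(0.95, 0.995, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

[At 3% prevalence a positive result is noticeably less trustworthy than at 20% prevalence, even though the test itself is unchanged, which is why "specificities nearly as high as for culture" is not a throwaway remark.]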

“Prior to the mid-1930s, when sulfanilamide was introduced, gonorrhea therapy involved local genital irrigation with antiseptic solutions such as silver nitrate […] By 1944 […] many gonococci had become sulfanilamide resistant […] Fortunately, in 1943 the first reports of the near 100% utility of penicillin for gonorrhea therapy were published,[127] and by the end of World War II, as penicillin became available to the general public, it quickly became the therapy of choice. Since then, continuing development of antimicrobial resistance by N. gonorrhoeae[128,129] led to regular revisions of recommended gonorrhea therapy. From the 1950s until the mid-1970s, gradually increasing chromosomal penicillin resistance led to periodic increases in the amount of penicillin required for reliable therapy. […] by the late 1980s, penicillins and tetracyclines were no longer recommended for gonorrhea therapy.
In addition to resistance to penicillin, tetracyclines, and erythromycin, in 1987, clinically significant chromosomally mediated resistance to spectinomycin — another drug recommended for gonorrhea therapy — was described in U.S. military personnel in Korea.[132] In Korea, because of the high prevalence of PPNG [Penicillinase-Producing Neisseria gonorrhoeae, i.e. penicillin-resistant strains], in 1981, spectinomycin had been adopted as the drug of choice for gonorrhea therapy. By 1983, however, spectinomycin treatment failures were beginning to occur in patients with gonorrhea […] Following recognition of the outbreak of spectinomycin-resistant gonococci in Korea, ceftriaxone became the drug of choice for treatment of gonorrhea in U.S. military personnel in that country.[132] […] Beginning in 1993, fluoroquinolone antibiotics were recommended for therapy of uncomplicated gonorrhea in the United States […] [However] in 2007 the CDC opted to no longer recommend fluoroquinolone antibiotics for therapy of uncomplicated gonorrhea. This change meant that ceftriaxone and other cephalosporin antibiotics had become the sole class of antibiotics recommended as first-line therapy for gonorrhea. […] For over two decades, ceftriaxone — a third-generation cephalosporin — has been the most reliable single-dose regimen used for gonorrhea worldwide. […] there are currently few well-studied therapeutic alternatives to ceftriaxone for gonorrhea treatment.”

November 1, 2014 Posted by | books, medicine