Econstudentlog

Stuff

i. Sample Size in Psychological Research Over the Past 30 Years.

“Summary. —The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force’s final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.”

I unfortunately can’t find an ungated copy of this paper online, but here’s a little more stuff from the paper:

“Cohen (1962) concluded, “Increased sample size is likely to prove the most effective general prescription for improving power” (p. 153), but there is little evidence that the field has taken note. After reviewing the literature, Holmes (1979) reported finding only two studies that examined sample sizes directly. One study reported the number of articles published about single-subject samples (Dukes, 1965), and the other examined sample sizes reported in two British journals, finding that every reported study had N ≤ 25 (Cochrane & Duffy, 1974).
Holmes (1979, 1983) himself examined sample sizes in four APA journals in 1955 and 1977, and reported median sample sizes for the total study and each of the comparison groups. His general conclusions were that sample size had not changed significantly between 1955 and 1977, and that the typical sample size in psychology did not seem large […] the purpose of the present study was to examine sample sizes reported in the same four journals examined by Holmes (1979, 1983), but in more recent volumes. Two additional data collections were undertaken, one in 1995 (about the time the Task Force was formed), and the other in 2006 […]

[Table 1: sample-size summary statistics by year; image not reproduced here]

So yeah, the median sample size was 32 in 1995 and 40 in 2006. A quarter of the published studies had n ≤ 14 in 1995 and n ≤ 18 in 2006, and the sample size that occurred most often (the mode) was n = 8 in 1995 and n = 16 in 2006.
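To get a feel for what those numbers imply, here's a minimal power-calculation sketch using the statsmodels power module. Note the assumptions, which are mine rather than the paper's: I treat the median total n as split evenly between two comparison groups in a two-sample t-test, and I use Cohen's conventional small/medium/large effect sizes.

    # A rough illustration (not from the paper): power of a two-sample
    # t-test at the median sample sizes quoted above, assuming (for
    # illustration only) that the total n is split evenly between two
    # groups. d = 0.2/0.5/0.8 are Cohen's conventional effect sizes.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    for year, total_n in [(1995, 32), (2006, 40)]:
        per_group = total_n // 2
        for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
            power = analysis.solve_power(effect_size=d, nobs1=per_group,
                                         alpha=0.05, ratio=1.0)
            print(f"{year}: n = {per_group} per group, {label} effect "
                  f"(d = {d}): power ~ {power:.2f}")

    # Per-group n needed to reach the conventional 80% power at d = 0.5:
    n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
    print(f"n per group for 80% power at d = 0.5: {n_needed:.0f}")

Under these assumptions, a medium effect gives power of roughly 0.3 at the 1995 and 2006 medians, and you'd need on the order of 64 subjects per group to reach 80% power; that gap is presumably the sort of thing the paper has in mind when it calls typical samples "dramatically inadequate".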

“Our modeling showed that sample size depends on the field. Smaller samples are needed in experimental settings, presumably because sufficient control of extraneous variation is in place, and standard errors tend to be smaller. (Higher cost per participant may also be a factor, due to sophisticated measurement equipment or laboratory controls.) However some fields, such as applied and developmental psychology, depend much more on quasi-experimental research because of their greater emphasis on comparisons of naturally occurring groups and ecological validity. Such research designs result in more variation in the data, and larger samples are necessary to gain feasible standard errors. (Lower cost per participant may also be a factor, because of the availability of institutional archival data.) […]

We found that overall, the relatively small sample sizes found by Holmes did not increase significantly over the next 29 years. However, there was significant variability in the change in sample size over time by field, with increases from 1977 to 2006 appearing in the Journal of Abnormal Psychology and Developmental Psychology, and no change in Experimental Psychology or Applied Psychology (which actually showed a slight decrease for individual sample size).
The third hypothesis was that sample sizes remained unchanged after the Task Force report in 1999. A change would have been reflected in a significant difference in sample size between 1995 and 2006, but none was found. This result is not surprising, given previous research on power (e.g., Cohen, 1962; Sedlmeier & Gigerenzer, 1989; Rossi, 1990; Maddock & Rossi, 2001; Maxwell, 2004) and Holmes’ own studies on sample size (Holmes, 1979, 1983; Holmes, et al., 1981). However, it is troubling, especially when one considers the increased use of sophisticated multivariate analyses and statistical modeling techniques during this time that would require the employment of larger sample sizes (Merenda, 2007; Rodgers, 2010).”

Here’s a link to one of the ungated power studies mentioned in the paper.

ii. Old pictures. Lots of old pictures.

iii. BookOs.

iv. “What [would happen] if I took a swim in a typical spent nuclear fuel pool? Would I need to dive to actually experience a fatal amount of radiation? How long could I stay safely at the surface?”

And now you know.

There’s a little background stuff on the subject here.

v. For some reason this picture touched me deeply:

Mongolian woman condemned to die of starvation (retouched). Via Wikipedia.

vi. “Facebook killed TV.” – from this Paul Graham essay on Why TV Lost.

vii. The End of History Illusion.

“We measured the personalities, values, and preferences of more than 19,000 people who ranged in age from 18 to 68 and asked them to report how much they had changed in the past decade and/or to predict how much they would change in the next decade. Young people, middle-aged people, and older people all believed they had changed a lot in the past but would change relatively little in the future. People, it seems, regard the present as a watershed moment at which they have finally become the person they will be for the rest of their lives. This “end of history illusion” had practical consequences, leading people to overpay for future opportunities to indulge their current preferences.”

Unfortunately I’ve not been able to find an ungated link, but here’s a bit more from the concluding remarks of the paper:

“Across six studies of more than 19,000 participants, we found consistent evidence to indicate that people underestimate how much they will change in the future, and that doing so can lead to suboptimal decisions. Although these data cannot tell us what causes the end of history illusion, two possibilities seem likely. First, most people believe that their personalities are attractive, their values admirable, and their preferences wise (10); and having reached that exalted state, they may be reluctant to entertain the possibility of change. People also like to believe that they know themselves well (11), and the possibility of future change may threaten that belief. In short, people are motivated to think well of themselves and to feel secure in that understanding, and the end of history illusion may help them accomplish these goals.

Second, there is at least one important difference between the cognitive processes that allow people to look forward and backward in time (12). Prospection is a constructive process, retrospection is a reconstructive process, and constructing new things is typically more difficult than reconstructing old ones (13, 14). The reason this matters is that people often draw inferences from the ease with which they can remember or imagine (15, 16). If people find it difficult to imagine the ways in which their traits, values, or preferences will change in the future, they may assume that such changes are unlikely. In short, people may confuse the difficulty of imagining personal change with the unlikelihood of change itself.

Although the magnitude of this end of history illusion in some of our studies was greater for younger people than for older people, it was nonetheless evident at every stage of adult life that we could analyze. Both teenagers and grandparents seem to believe that the pace of personal change has slowed to a crawl and that they have recently become the people they will remain. History, it seems, is always ending today.”

February 8, 2013 - Posted in Psychology, random stuff, statistics, studies

4 Comments

  1. > vii. The End of History Illusion.

    Has been heavily criticized. I don’t understand the criticism but apparently a lot of people have found it persuasive.

    Comment by gwern | February 8, 2013

    • Any links?

      I only skimmed the paper, but given that the conclusions were not too surprising or counterintuitive to me and given the large n I did not bother to look too closely at the methodology (‘cohort effects may be a problem here and a longitudinal design would have been more convincing, but it doesn’t seem too crazy anyway…’).

      Update: This is probably one of the critical articles you're thinking of.

      Comment by US | February 8, 2013

      • /shrug. The criticism was discussed in a lot of places, including LW IIRC.

        Comment by gwern | February 8, 2013

      • I see. I rarely read LW these days.

        No need to go hunt down links if you don’t have any at hand – if critics point to the problems with the study design and things like the stuff mentioned in the BPS link, they’re not really telling me anything new. It’s one study anyway; if it fails to replicate with a more convincing setup it probably wasn’t real. And given that it’s presumably some of the first work that’s been done on this, it makes a lot of sense they didn’t start out with a prospective cohort study (that kind of stuff costs a lot of money).

        Comment by US | February 8, 2013

