
Introduction to Meta Analysis (III)

[xkcd: "Meta-Analysis"]

This will be my last post about the book. Below I have included some observations from the last 100 pages.

“A central theme in this volume is the fact that we usually prefer to work with effect sizes, rather than p-values. […] While we would argue that researchers should shift their focus to effect sizes even when working entirely with primary studies, the shift is absolutely critical when our goal is to synthesize data from multiple studies. A narrative reviewer who works with p-values (or with reports that were based on p-values) and uses these as the basis for a synthesis, is facing an impossible task. Where people tend to misinterpret a single p-value, the problem is much worse when they need to compare a series of p-values. […] the p-value is often misinterpreted. Because researchers care about the effect size, they tend to take whatever information they have and press it into service as an indicator of effect size. A statistically significant p-value is assumed to reflect a clinically important effect, and a nonsignificant p-value is assumed to reflect a trivial (or zero) effect. However, these interpretations are not necessarily correct. […] The narrative review typically works with p-values (or with conclusions that are based on p-values), and therefore lends itself to […] mistakes. p-values that differ are assumed to reflect different effect sizes but may not […], p-values that are the same are assumed to reflect similar effect sizes but may not […], and a more significant p-value is assumed to reflect a larger effect size when it may actually be based on a smaller effect size […]. By contrast, the meta-analysis works with effect sizes. As such it not only focuses on the question of interest (what is the size of the effect) but allows us to compare the effect size from study to study.”
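
To see how misleading p-value comparisons can get, here is a small Python sketch with invented numbers: a trivial effect in a large study comes out "highly significant" while a much larger effect in a small study does not. The variance formula for the standardized mean difference is the usual large-sample approximation; the study sizes and effects are made up for illustration.

```python
import math

def p_value_from_d(d, n1, n2):
    """Two-sided p-value for a standardized mean difference d,
    using the usual large-sample variance approximation for d."""
    v = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
    z = abs(d) / math.sqrt(v)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Study A: trivial effect (d = 0.10) in a huge sample -> "highly significant"
print(p_value_from_d(0.10, 2000, 2000))  # ~0.002
# Study B: large effect (d = 0.50) in a small sample -> "nonsignificant"
print(p_value_from_d(0.50, 30, 30))      # ~0.06
```

A reader ranking these two studies by p-value would order their effect sizes exactly backwards.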

“To compute the summary effect in a meta-analysis we compute an effect size for each study and then combine these effect sizes, rather than pooling the data directly. […] This approach allows us to study the dispersion of effects before proceeding to the summary effect. For a random-effects model this approach also allows us to incorporate the between-studies dispersion into the weights. There is one additional reason for using this approach […]. The reason is to ensure that each effect size is based on the comparison of a group with its own control group, and thus avoid a problem known as Simpson’s paradox. In some cases, particularly when we are working with observational studies, this is a critically important feature. […] The term paradox refers to the fact that one group can do better in every one of the included studies, but still do worse when the raw data are pooled. The problem is not limited to studies that use proportions, but can exist also in studies that use means or other indices. The problem exists only when the base rate (or mean) varies from study to study and the proportion of participants from each group varies as well. For this reason, the problem is generally limited to observational studies, although it can exist in randomized trials when allocation ratios vary from study to study.” [See the wiki article for more]
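
The paradox is easy to reproduce. The sketch below uses the classic kidney-stone numbers (treating the two strata as two "studies"): treatment A has the higher success rate in both studies, yet the lower success rate once the raw data are pooled, because A received a disproportionate share of the hard cases.

```python
# (successes, total) for treatments A and B in two studies/strata
studies = [
    {"A": (81, 87),   "B": (234, 270)},  # study 1: A 93% vs B 87%
    {"A": (192, 263), "B": (55, 80)},    # study 2: A 73% vs B 69%
]

for i, s in enumerate(studies, 1):
    print(f"study {i}: " + "  ".join(f"{g}={s[g][0]/s[g][1]:.1%}" for g in ("A", "B")))

# Naive pooling of the raw data reverses the ordering:
for g in ("A", "B"):
    succ = sum(s[g][0] for s in studies)
    tot = sum(s[g][1] for s in studies)
    print(f"pooled {g}: {succ}/{tot} = {succ/tot:.1%}")
# A wins in every study (93.1% vs 86.7%, 73.0% vs 68.8%)
# but loses after pooling (78.0% vs 82.6%).
```

Computing an effect size within each study and only then combining, as a meta-analysis does, keeps every comparison anchored to its own control group and avoids this trap.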

“When studies are addressing the same outcome, measured in the same way, using the same approach to analysis, but presenting results in different ways, then the only obstacles to meta-analysis are practical. If sufficient information is available to estimate the effect size of interest, then a meta-analysis is possible. […]
When studies are addressing the same outcome, measured in the same way, but using different approaches to analysis, then the possibility of a meta-analysis depends on both statistical and practical considerations. One important point is that all studies in a meta-analysis must use essentially the same index of treatment effect. For example, we cannot combine a risk difference with a risk ratio. Rather, we would need to use the summary data to compute the same index for all studies.
There are some indices that are similar, if not exactly the same, and judgments are required as to whether it is acceptable to combine them. One example is odds ratios and risk ratios. When the event is rare, then these are approximately equal and can readily be combined. As the event gets more common the two diverge and should not be combined [a small numerical illustration of this divergence follows below the quote]. Other indices that are similar to risk ratios are hazard ratios and rate ratios. Some people decide these are similar enough to combine; others do not. The judgment of the meta-analyst in the context of the aims of the meta-analysis will be required to make such decisions on a case by case basis.
When studies are addressing the same outcome measured in different ways, or different outcomes altogether, then the suitability of a meta-analysis depends mainly on substantive considerations. The researcher will have to decide whether a combined analysis would have a meaningful interpretation. […] There is a useful class of indices that are, perhaps surprisingly, combinable under some simple transformations. In particular, formulas are available to convert standardized mean differences, odds ratios and correlations to a common metric [I should note that the book covers these data transformations, but I decided early on not to talk about that kind of stuff in my posts because it’s highly technical and difficult to blog] […] These kinds of conversions require some assumptions about the underlying nature of the data, and violations of these assumptions can have an impact on the validity of the process. […] A report should state the computational model used in the analysis and explain why this model was selected. A common mistake is to use the fixed-effect model on the basis that there is no evidence of heterogeneity. As [already] explained […], the decision to use one model or the other should depend on the nature of the studies, and not on the significance of this test [because the test will often have low power anyway]. […] The report of a meta-analysis should generally include a forest plot.”
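
Two of the points in this passage are easy to check numerically: the claim that odds ratios and risk ratios roughly coincide for rare events, and the existence of conversion formulas to a common metric. The sketch below uses invented 2×2 tables; the d = ln(OR)·√3/π conversion assumes an underlying logistic distribution, and r = d/√(d² + 4) assumes equal group sizes, so the results are approximations.

```python
import math

def risk_ratio(e1, n1, e2, n2):
    """Risk ratio from events/total in two groups."""
    return (e1 / n1) / (e2 / n2)

def odds_ratio(e1, n1, e2, n2):
    """Odds ratio from events/total in two groups."""
    return (e1 * (n2 - e2)) / (e2 * (n1 - e1))

# Rare event (1% vs 2%): OR and RR nearly coincide.
print(risk_ratio(10, 1000, 20, 1000), odds_ratio(10, 1000, 20, 1000))
# -> 0.500 vs ~0.495

# Common event (40% vs 60%): the two diverge and should not be mixed.
print(risk_ratio(400, 1000, 600, 1000), odds_ratio(400, 1000, 600, 1000))
# -> ~0.667 vs ~0.444

# Converting a log odds ratio to a standardized mean difference
# (assumes an underlying logistic distribution), then to a correlation
# (assumes equal group sizes, so a = 4):
d = math.log(2.5) * math.sqrt(3) / math.pi
r = d / math.sqrt(d ** 2 + 4)
print(round(d, 3), round(r, 3))  # -> 0.505, 0.245
```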

“The issues addressed by a sensitivity analysis for a systematic review are similar to those that might be addressed by a sensitivity analysis for a primary study. That is, the focus is on the extent to which the results are (or are not) robust to assumptions and decisions that were made when carrying out the synthesis. The kinds of issues that need to be included in a sensitivity analysis will vary from one synthesis to the next. […] One kind of sensitivity analysis is concerned with the impact of decisions that lead to different data being used in the analysis. A common example of sensitivity analysis is to ask how results might have changed if different study inclusion rules had been used. […] Another kind of sensitivity analysis is concerned with the impact of the statistical methods used […] For example one might ask whether the conclusions would have been different if a different effect size measure had been used […] Alternatively, one might ask whether the conclusions would be the same if fixed-effect versus random-effects methods had been used. […] Yet another kind of sensitivity analysis is concerned with how we addressed missing data […] A very important form of missing data is the missing data on effect sizes that may result from incomplete reporting or selective reporting of statistical results within studies. When data are selectively reported in a way that is related to the magnitude of the effect size (e.g., when results are only reported when they are statistically significant), such missing data can have biasing effects similar to publication bias on entire studies. In either case, we need to ask how the results would have changed if we had dealt with missing data in another way.”
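
The fixed-effect versus random-effects check mentioned here is simple to script. Below is a minimal sketch using the standard inverse-variance fixed-effect estimator and the DerSimonian-Laird random-effects estimator; the effect sizes and variances are invented for illustration.

```python
import math

def fixed_effect(y, v):
    """Inverse-variance weighted fixed-effect summary: estimate and variance."""
    w = [1 / vi for vi in v]
    m = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return m, 1 / sum(w)

def random_effects_dl(y, v):
    """DerSimonian-Laird random-effects summary: estimate, variance, tau^2."""
    w = [1 / vi for vi in v]
    m_fe, _ = fixed_effect(y, v)
    q = sum(wi * (yi - m_fe) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    w_star = [1 / (vi + tau2) for vi in v]
    m = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    return m, 1 / sum(w_star), tau2

# Hypothetical standardized mean differences and their variances:
y = [0.10, 0.30, 0.35, 0.65, 0.45]
v = [0.010, 0.030, 0.025, 0.070, 0.050]

m_fe, v_fe = fixed_effect(y, v)
m_re, v_re, tau2 = random_effects_dl(y, v)
print(f"fixed:  {m_fe:.3f} (95% CI +/-{1.96 * math.sqrt(v_fe):.3f})")
print(f"random: {m_re:.3f} (95% CI +/-{1.96 * math.sqrt(v_re):.3f}), tau^2 = {tau2:.3f}")
# If the two summaries (or their intervals) differ materially,
# the conclusions are sensitive to the choice of model.
```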

“A cumulative meta-analysis is a meta-analysis that is performed first with one study, then with two studies, and so on, until all relevant studies have been included in the analysis. As such, a cumulative analysis is not a different analytic method than a standard analysis, but simply a mechanism for displaying a series of separate analyses in one table or plot. When the series are sorted into a sequence based on some factor, the display shows how our estimate of the effect size (and its precision) shifts as a function of this factor. When the studies are sorted chronologically, the display shows how the evidence accumulated, and how the conclusions may have shifted, over a period of time.”

“While cumulative analyses are most often used to display the pattern of the evidence over time, the same technique can be used for other purposes as well. Rather than sort the data chronologically, we can sort it by any variable, and then display the pattern of effect sizes. For example, assume that we have 100 studies that looked at the impact of homeopathic medicines, and we think that the effect is related to the quality of the blinding process. We anticipate that studies with complete blinding will show no effect, those with lower quality blinding will show a minor effect, those that blind only some people will show a larger effect, and so on. We could sort the studies based on the quality of the blinding (from high to low), and then perform a cumulative analysis. […] Similarly, we could use cumulative analyses to display the possible impact of publication bias. […] large studies are assumed to be unbiased, but the smaller studies may tend to over-estimate the effect size. We could perform a cumulative analysis, entering the larger studies at the top and adding the smaller studies at the bottom. If the effect was initially small when the large (nonbiased) studies were included, and then increased as the smaller studies were added, we would indeed be concerned that the effect size was related to sample size. A benefit of the cumulative analysis is that it displays not only if there is a shift in effect size, but also the magnitude of the shift. […] It is important to recognize that cumulative meta-analysis is a mechanism for display, rather than analysis. […] These kinds of displays are compelling and can serve an important function. However, if our goal is actually to examine the relationship between a factor and effect size, then the appropriate analysis is a meta-regression”
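
As a rough sketch of the publication-bias version of this display (all numbers invented): sort the studies from largest to smallest, recompute the summary after each addition, and watch whether the estimate drifts as the small studies enter.

```python
import math

def fe_summary(y, v):
    """Inverse-variance fixed-effect summary: estimate and standard error."""
    w = [1 / vi for vi in v]
    m = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return m, math.sqrt(1 / sum(w))

# Hypothetical studies as (n, effect size, variance);
# the smaller studies report larger effects.
studies = [(500, 0.10, 0.008), (400, 0.12, 0.010), (150, 0.25, 0.027),
           (80, 0.35, 0.051), (40, 0.55, 0.105)]

studies.sort(key=lambda s: s[0], reverse=True)  # enter largest studies first
for k in range(1, len(studies) + 1):
    y = [s[1] for s in studies[:k]]
    v = [s[2] for s in studies[:k]]
    m, se = fe_summary(y, v)
    print(f"after {k} studies (smallest n = {studies[k-1][0]}): "
          f"d = {m:.3f} +/- {1.96 * se:.3f}")
```

Here the cumulative estimate climbs steadily as the smaller studies are added, which is exactly the pattern that would prompt concern, and, per the quote above, a follow-up meta-regression rather than conclusions drawn from the display itself.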

“John C. Bailar, in an editorial for the New England Journal of Medicine (Bailar, 1997), [wrote] that mistakes […] are common in meta-analysis. He argues that a meta-analysis is inherently so complicated that mistakes by the persons performing the analysis are all but inevitable. He also argues that journal editors are unlikely to uncover all of these mistakes. […] The specific points made by Bailar about problems with meta-analysis are entirely reasonable. He is correct that many meta-analyses contain errors, some of them important ones. His list of potential (and common) problems can serve as a bullet list of mistakes to avoid when performing a meta-analysis. However, the mistakes cited by Bailar are flaws in the application of the method, rather than problems with the method itself. Many primary studies suffer from flaws in the design, analyses, and conclusions. In fact, some serious kinds of problems are endemic in the literature. The response of the research community is to locate these flaws, consider their impact for the study in question, and (hopefully) take steps to avoid similar mistakes in the future. In the case of meta-analysis, as in the case of primary studies, we cannot condemn a method because some people have used that method improperly. […] In his editorial Bailar concludes that, until such time as the quality of meta-analyses is improved, he would prefer to work with the traditional narrative reviews […] We disagree with the conclusion that narrative reviews are preferable to systematic reviews, and that meta-analyses should be avoided. The narrative review suffers from every one of the problems cited for the systematic review. The only difference is that, in the narrative review, these problems are less obvious. […] the key advantage of the systematic approach of a meta-analysis is that all steps are clearly described so that the process is transparent.”

November 21, 2014 - Posted in Books, Statistics
