Econstudentlog

Prevention of Late-Life Depression (II)

Some more observations from the book:

“In contrast to depression in childhood and youth, when genetic and developmental vulnerabilities play a significant role in the development of depression, the development of late-life depression is largely attributed to interactions with acquired factors, especially medical illness [17, 18]. An analysis of the WHO World Health Survey indicated that the prevalence of depression among medical patients ranged from 9.3 to 23.0 %, significantly higher than that in individuals without medical conditions [19]. Wells et al. [20] found in the Epidemiologic Catchment Area Study that the risk of developing lifetime psychiatric disorders among individuals with at least one medical condition was 27.9 % higher than among those without medical conditions. […] Depression and disability mutually reinforce the risk of each other, and adversely affect disease progression and prognosis [21, 25]. […] disability caused by medical conditions serves as a risk factor for depression [26]. When people lose their normal sensory, motor, cognitive, social, or executive functions, especially in a short period of time, they can become very frustrated or depressed. Inability to perform daily tasks as before decreases self-esteem, reduces independence, increases the level of psychological stress, and creates a sense of hopelessness. On the other hand, depression increases the risk for disability. Negative interpretation, attention bias, and learned helplessness of depressed persons may increase risky health behaviors that exacerbate physical disorders or disability. Meanwhile, depression-related cognitive impairment also affects role performance and leads to functional disability [25]. For example, Egede [27] found in the 1999 National Health Interview Survey that the risk of having functional disability among patients with the comorbidity of diabetes and depression was approximately 2.5–5 times higher than among those with either depression or diabetes alone. […] A leading cause of disability among medical patients is pain and pain-related fears […] Although a large proportion of pain complaints can be attributed to physiological changes from physical disorders, psychological factors (e.g., attention, interpretation, and coping skills) play an important role in the perception of pain […] Bair et al. [31] indicated in a literature review that the prevalence of pain was higher among depressed patients than non-depressed patients, and the prevalence of major depression was also higher among pain patients compared to those without pain complaints.”

“Alcohol use has more serious adverse health effects on older adults than on other age groups, since aging-related physiological changes (e.g. reduced liver detoxification and renal clearance) affect alcohol metabolism, increase the blood concentration of alcohol, and magnify negative consequences. More importantly, alcohol interacts with a variety of frequently prescribed medications, potentially influencing both treatment effects and adverse effects. […] Due to age-related changes in pharmacokinetics and pharmacodynamics, older adults are a population vulnerable to […] adverse drug effects. […] Adverse drug events are frequently due to failure to adjust dosage or to account for drug–drug interactions in older adults [64]. […] Loneliness […] is considered an independent risk factor for depression [46, 47], and has been demonstrated to be associated with low physical activity, increased cardiovascular risks, hyperactivity of the hypothalamic-pituitary-adrenal axis, and activation of the immune response [for details, see Cacioppo & Patrick’s book on these topics – US] […] Hopelessness is a key feature of major depression [54], and also an independent risk factor for suicidal ideation […] Hopelessness reduces expectations for the future, and negatively affects judgment in medical and behavioral decision-making, leading to problems such as non-adherence to medical regimens or engagement in unhealthy behaviors.”

“Co-occurring depression and medical conditions are associated with more functional impairment and mortality than expected from the severity of the medical condition alone. For example, depression accompanying diabetes confers increased functional impairment [27], complications of diabetes [65, 66], and mortality [67–71]. Frasure-Smith and colleagues highlighted the prognostic importance of depression among persons who had sustained a myocardial infarction (MI), finding that depression was a significant predictor of mortality at both 6 and 18 months post MI [72, 73]. Subsequent follow-up studies have borne out the increased risk conferred by depression on the mortality of patients with cardiovascular disease [10, 74, 75]. Over the course of a 2-year follow-up interval, depression contributed as much to mortality as did myocardial infarction or diabetes, with the population attributable fraction of mortality due to depression approximately 13 % (similar to the attributable risk associated with heart attack at 11 % and diabetes at 9 %) [76]. […] Although the bidirectional relationship between physical disorders and depression has been well known, there are still relatively few randomized controlled trials on preventing depression among medically ill patients. […] Rates of attrition [in post-stroke depression prevention trials have been observed to be] high […] Stroke, acute coronary syndrome, cancer, and other conditions impose a variety of treatment burdens on patients, so that additional interventions without direct or immediate clinical effects may not be acceptable [95]. So even with good participation rates, lack of adherence to the intervention might limit effects.”

“Late-life depression (LLD) is a heterogeneous disease, with multiple risk factors, etiologies, and clinical features. It has been recognized for many years that there is a significant relationship between the presence of depression and cerebrovascular disease in older adults [1, 2]. This subtype of LLD was eventually termed “vascular depression.” […] There have been a multitude of studies associating white matter abnormalities with depression in older adults using MRI technology to visualize lesions, or what appear as hyperintensities in the white matter on T2-weighted scans. A systematic review concluded that white matter hyperintensities (WMH) are more common and severe among older adults with depression compared to their non-depressed peers [9]. […] WMHs are associated with older age [13] and cerebrovascular risk factors, including diabetes, heart disease, and hypertension [14–17]. White matter lesion severity and the extent of WMH volume have been related to the severity of depression in late life [18, 19]. For example, among 639 older, community-dwelling adults, white matter lesion (WML) severity was found to predict depressive episodes and symptoms over a 3-year period [19]. […] Another way of investigating white matter integrity is with diffusion tensor imaging (DTI), which measures the diffusion of water in tissues and allows for indirect evidence of the microstructure of white matter, most commonly represented as fractional anisotropy (FA) and mean diffusivity (MD). DTI may be more sensitive to white matter pathology than is quantification of WMH […] A number of studies have found lower FA in widespread regions among individuals with LLD relative to controls [34, 36, 37]. […] lower FA has been associated with poorer performance on measures of cognitive functioning among patients with LLD [35, 38–40] and with measures of cerebrovascular risk severity. […] It is important to recognize that FA reflects the organization of fiber tracts, including fiber density, axonal diameter, and myelination in white matter. Thus, lower FA can result from multiple pathophysiological sources [42, 43]. […] Together, the aforementioned studies provide support for the vascular depression hypothesis. They demonstrate that white matter integrity is reduced in patients with LLD relative to controls, is somewhat specific to regions important for cognitive and emotional functioning, and is associated with cognitive functioning and depression severity. […] There is now a wealth of evidence to support the association between vascular pathology and depression in older age. While the etiology of depression in older age is multifactorial, from the epidemiological, neuroimaging, behavioral, and genetic evidence available, we can conclude that vascular depression represents one important subtype of LLD. The mechanisms underlying the relationship between vascular pathology and depression are likely multifactorial, and may include disrupted connections between key neural regions, reduced perfusion of blood to key brain regions integral to affective and cognitive processing, and inflammatory processes.”

“Cognitive changes associated with depression have been the focus of research for decades. Results have been inconsistent, likely as a result of methodological differences in how depression is diagnosed and how cognitive functioning is measured, as well as the effects of potential subtypes and the severity of depression […], though deficits in executive functioning, learning and memory, and attention have been associated with depression in most studies [75]. In older adults, additional confounding factors include the potential presence of primary degenerative disorders, such as Alzheimer’s disease, which can pose a challenge to differential diagnosis in its early stages. […] LLD with cognitive dysfunction has been shown to result in greater disability than depressive symptoms alone [6], and MCI [mild cognitive impairment, US] with co-occurring LLD has been shown to double the risk of developing Alzheimer’s disease (AD) compared to MCI alone [86]. The conversion from MCI to AD also appears to occur earlier in patients with co-occurring depressive symptoms, as demonstrated by Modrego & Ferrandez [86] in their prospective cohort study of 114 outpatients diagnosed with amnestic MCI. […] Given accruing evidence for abnormal functioning of a number of cortical and subcortical networks in geriatric depression, of particular interest is whether these abnormalities are a reflection of the actively depressed state, or whether they may persist following successful resolution of symptoms. To date, studies have investigated this question through either longitudinal investigation of adults with geriatric depression, or comparison of depressed elders who are actively depressed versus those who have achieved symptom remission. Encouragingly, successful treatment has been reliably associated with normalization of some aspects of disrupted network functioning. For example, successful antidepressant treatment is associated with reduction of the elevated cerebral glucose metabolism observed during depressed states (e.g., [71–74]), with greater symptom reduction associated with greater metabolic change […] Taken together, these studies suggest that although a subset of the functional abnormalities observed during the LLD state may resolve with successful treatment, other abnormalities persist and may be tied to damage to the structural connectivity in important affective and cognitive networks. […] studies suggest a chronic decrement in cognitive functioning associated with LLD that is not adequately addressed through improvement of depressive symptoms alone.”

“A review of the literature on evidence-based treatments for LLD found that about 50 % of patients improved on antidepressants, but that the number needed to treat (NNT) was quite high (NNT = 8, [139]) and placebo effects were significant [140]. Additionally, no difference was demonstrated in the effectiveness of one antidepressant drug class over another […], and in one-third of patients, depression was resistant to monotherapy [140]. The addition of medications or switching within or between drug classes appears to result in improved treatment response for these patients [140, 141]. A meta-analysis of patient-level variables demonstrated that duration of depressive symptoms and baseline depression severity significantly predict response to antidepressant treatment in LLD, with chronically depressed older patients with moderate-to-severe symptoms at baseline experiencing more improvement in symptoms than mildly and acutely depressed patients [142]. Pharmacological treatment response appears to range from incomplete to poor in LLD with co-occurring cognitive impairment.”

“[C]ompared to other formulations of prevention, such as primary, secondary, or tertiary — in which interventions are targeted at the level of disease/stage of disease — the IOM conceptual framework involves interventions that are targeted at the level of risk in the population [2]. […] [S]elective prevention studies have an important “numbers” advantage — similar to that of indicated prevention trials: the relatively high incidence of depression among persons with key risk markers enables investigators to test interventions with strong statistical power, even with somewhat modest sample sizes. This fact was illustrated by Schoevers and colleagues [3], who were able to account for nearly 50 % of the total risk of late-life depression with consideration of only a handful of factors. Indeed, research, largely generated by groups in the Netherlands and the USA, has identified selective prevention as one of the most efficient approaches to late-life depression prevention: these groups have estimated that targeting persons at high risk for depression — based on risk markers such as medical comorbidity, low social support, or physical/functional disability — can yield theoretical numbers needed to treat (NNTs) of approximately 5–7 in primary care settings [4–7]. […] compared to the findings from selective prevention trials targeting older persons with general health/medical problems, […] trials targeting older persons based on sociodemographic risk factors have been more mixed and did not reveal as consistent a pattern of benefits for selective prevention of depression.”

“Few of the studies in the existing literature that involve interventions to prevent depression and/or reduce depressive symptoms in older populations have included economic evaluations [13]. The identification of cost-effective interventions to provide to groups at high risk for depression is an important public health goal, as such treatments may avert or reduce a significant amount of the disease burden. […] A study by Katon and colleagues [8] showed that elderly patients with either subsyndromal or major depression had significantly higher medical costs during the previous 6 months than those without depression; total healthcare costs were $1,045 to $1,700 greater, and total outpatient/ambulatory costs ranged from $763 to $979 more, on average. Depressed patients had greater usage of health resources in every category of care examined, including those that are not mental health-related, such as emergency department visits. No difference in excess costs was found between patients with a DSM-IV depressive disorder and those with depressive symptoms only, however, as mean total costs were 51 % higher in the subthreshold depression group (95 % CI = 1.39–1.66) and 49 % higher in the MDD/dysthymia group (95 % CI = 1.28–1.72) than in the nondepressed group [8]. In a similar study, the usage of various types of health services by primary care patients in the Netherlands was assessed, and average costs were determined to be 1,403 more in depressed individuals versus control patients [21]. Study investigators once again observed that patients with depression had greater utilization of both non-mental and mental healthcare services than controls.”

“In order for routine depression screening in the elderly to be cost-effective […] appropriate follow-up measures must be taken with those who screen positive, including a diagnostic interview and/or referral to a mental health professional [this – the necessity/requirement of proper follow-up following screens in order for screening to be cost-effective – is incidentally a standard result in screening contexts, see also Juth & Munthe’s book – US] [23, 25]. For example, subsequent steps may include initiation of psychotherapy or antidepressant treatment. Thus, one reason that the USPSTF does not recommend screening for depression in settings where proper mental health resources do not exist is that the evidence suggests that outcomes are unlikely to improve without effective follow-up care […] as per the USPSTF suggestion, Medicare will only cover the screening when the appropriate supports for proper diagnosis and treatment are available […] In order to determine which interventions to prevent and treat depression should be provided to those who screen positive for depressive symptoms and to high-risk populations in general, cost-effectiveness analyses must be completed for a variety of different treatments and preventive measures. […] questions remain regarding whether annual versus other intervals of screening are most cost-effective. With respect to preventive interventions, the evidence to date suggests that these are cost-effective in settings where those at the highest risk are targeted.”


February 19, 2018 Posted by | Books, Cardiology, Diabetes, Health Economics, Neurology, Pharmacology, Psychiatry, Psychology | Leave a comment

Prevention of Late-Life Depression (I)

“Late-life depression is a common and highly disabling condition and is also associated with higher health care utilization and overall costs. The presence of depression may complicate the course and treatment of comorbid major medical conditions that are also highly prevalent among older adults — including diabetes, hypertension, and heart disease. Furthermore, a considerable body of evidence has demonstrated that, for older persons, residual symptoms and functional impairment due to depression are common — even when appropriate depression therapies are being used. Finally, the worldwide phenomenon of a rapidly expanding older adult population means that unprecedented numbers of seniors — and the providers who care for them — will be facing the challenge of late-life depression. For these reasons, effective prevention of late-life depression will be a critical strategy to lower overall burden and cost from this disorder. […] This textbook will illustrate the imperative for preventing late-life depression, introduce a broad range of approaches and key elements involved in achieving effective prevention, and provide detailed examples of applications of late-life depression prevention strategies.”

I gave the book two stars on goodreads. There are 11 chapters in the book, written by 22 different contributors/authors, so of course there’s a lot of variation in the quality of the material included; the two star rating was an overall assessment of the quality of the material, and the last two chapters – but in particular chapter 10 – did a really good job convincing me that the book did not deserve a 3rd star (if you decide to read the book, I advise you to skip chapter 10). In general I think many of the authors are way too focused on statistical significance and much too hesitant to report actual effect sizes, which are much more interesting. Gender is mentioned repeatedly throughout the coverage as an important variable, to the extent that people who do not read the book carefully might think this is one of the most important variables at play; but when you look at actual effect sizes, you get reported ORs of ~1.4 for this variable, compared to e.g. ORs in the ~8–9 range for the bereavement variable (see below). You can quibble about population attributable fraction and so on here, but if the effect size is that small it’s unlikely to be all that useful in terms of directing prevention efforts/resource allocation (especially considering that women make up the majority of the total population in these older age groups anyway, as they have higher life expectancy than their male counterparts).
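To make the trade-off concrete, here is a minimal sketch of the PAF arithmetic (the exposure prevalences are my own illustrative assumptions, not figures from the book): a common exposure with a small OR can produce a PAF in the same ballpark as a rare exposure with a large OR, even though the small, high-risk group is the far more efficient target for intervention.

```python
# Illustrative only: the prevalences below are assumptions, not from the book.
# Levin's formula for the population attributable fraction (PAF), using the
# OR as a stand-in for the relative risk (a rough approximation unless the
# outcome is rare).

def paf(exposure_prevalence: float, rr: float) -> float:
    """Fraction of cases in the population attributable to the exposure."""
    return exposure_prevalence * (rr - 1) / (1 + exposure_prevalence * (rr - 1))

# Female gender: OR ~1.4, and women are a majority (assume ~55 %) of the older population.
print(f"gender:      PAF = {paf(0.55, 1.4):.1%}")  # ~18 %
# Recent bereavement: OR ~8.8, but only a small share (assume ~2 %) are recently bereaved.
print(f"bereavement: PAF = {paf(0.02, 8.8):.1%}")  # ~13 %
```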

Anyway, below I’ve added some quotes and observations from the first few chapters of the book.

“Meta-analyses of more than 30 randomized trials conducted in high-income countries show that the incidence of new depressive and anxiety disorders can be reduced by 25–50 % over 1–2 years, compared to usual care, through the use of learning-based psychotherapies (such as interpersonal psychotherapy, cognitive behavioral therapy, and problem-solving therapy) […] The case for depression prevention is compelling and represents the key rationale for this volume: (1) Major depression is both prevalent and disabling, typically running a relapsing or chronic course. […] (2) Major depression is often comorbid with other chronic conditions like diabetes, amplifying the disability associated with these conditions and worsening family caregiver burden. (3) Depression is associated with worse physical health outcomes, partly mediated through poor treatment adherence, and it is associated with excess mortality after myocardial infarction, stroke, and cancer. It is also the major risk factor for suicide across the life span and particularly in old age. (4) Available treatments are only partially effective in reducing symptom burden, sustaining remission, and averting years lived with disability.”

“[M]any people suffering from depression do not receive any care and approximately a third of those receiving care do not respond to current treatments. The risk of recurrence is high, also in older persons: half of those who have experienced a major depression will experience one or even more recurrences [4]. […] Depression increases the risk of death: among people suffering from depression the risk of dying is 1.65 times higher than among people without a depression [7], with a dose-response relation between severity and duration of depression and the resulting excess mortality [8]. In adults, the average length of a depressive episode is 8 months, but in 20 % of people the depression lasts longer than 2 years [9]. […] It has been estimated that in Australia […] 60 % of people with an affective disorder receive treatment, and using guidelines and standards only 34 % receive effective treatment [14]. This translates into preventing 15 % of Years Lived with Disability [15], a measure of disease burden [14], and stresses the need for prevention [16]. Primary health care providers frequently do not recognize depression, in particular among the elderly. Older people may present their depressive symptoms differently from younger adults, with more emphasis on physical complaints [17, 18]. Adequate diagnosis of late-life depression can also be hampered by comorbid conditions such as Parkinson’s disease and dementia that may have similar symptoms, or by the fact that elderly people as well as care workers may assume that “feeling down” is part of becoming older [17, 18]. […] Many people suffering from depression do not seek professional help or are not identified as depressed [21]. Almost 14 % of elderly people in community-type living arrangements suffer from a severe depression requiring clinical attention [22] and more than 50 % of those have a chronic course [4, 23]. Smit et al. reported an incidence of 6.1 % of chronic or recurrent depression among a sample of 2,200 elderly people (ages 55–85) [21].”

“Prevention differs from intervention and treatment as it is aimed at general population groups who vary in risk level for mental health problems such as late-life depression. The Institute of Medicine (IOM) has introduced a prevention framework, which provides a useful model for comprehending the different objectives of the interventions [29]. The overall goal of prevention programs is reducing risk factors and enhancing protective factors.
The IOM framework distinguishes three types of prevention interventions: (1) universal preventive interventions, (2) selective preventive interventions, and (3) indicated preventive interventions. Universal preventive interventions are targeted at the general population, regardless of risk status or the presence of symptoms. Selective preventive interventions serve those sub-populations who have a significantly higher than average risk of a disorder, either imminently or over a lifetime. Indicated preventive interventions target identified individuals with minimal but detectable signs or symptoms suggesting a disorder. This type of prevention consists of early recognition of and early intervention in disease to prevent deterioration [30]. For each of the three types of interventions, the goal is to reduce the number of new cases. The goal of treatment, on the other hand, is to reduce prevalence or the total number of cases. By reducing incidence you also reduce prevalence [5]. […] prevention research differs from treatment research in various ways. One of the most important differences is the fact that participants in treatment studies already meet the criteria for the illness being studied, such as depression. The intervention is targeted at achieving improvement or remission of the specific condition more quickly than if no intervention had taken place. In prevention research, the participants do not meet the specific criteria for the illness being studied, and the overall goal of the intervention is to ensure that clinical illness develops at a lower rate than in a comparison group [5].”

“A couple of risk factors [for depression] occur more frequently among the elderly than among young adults. The loss of a loved one or the loss of a social role (e.g., employment), decrease of social support and network, and the increasing chance of isolation occur more frequently among the elderly. Many elderly also suffer from physical diseases: 64 % of the elderly aged 65–74 have a chronic disease [36] […]. It is important to note that depression often co-occurs with other disorders such as physical illness and other mental health problems (comorbidity). Losing a spouse can have significant mental health effects. Almost half of all widows and widowers meet the criteria for depression according to the DSM-IV during the first year after the loss [37]. Depression after loss of a loved one is normal in times of mourning. However, when depressive symptoms persist over a longer period of time it is possible that a depression is developing. Zisook and Shuchter found that a year after the loss of a spouse 16 % of widows and widowers met the criteria for depression, compared to 4 % of those who did not lose their spouse [38]. […] People with a chronic physical disease are also at a higher risk of developing a depression. An estimated 12–36 % of those with a chronic physical illness also suffer from clinical depression [40]. […] around 25 % of cancer patients suffer from depression [40]. […] Depression is relatively common among elderly residing in hospitals and retirement- and nursing homes. An estimated 6–11 % of residents have a depressive illness, and around 30 % have depressive symptoms [41]. […] Loneliness is common among the elderly. Among those of 60 years or older, 43 % reported being lonely in a study conducted by Perissinotto et al. […] Loneliness is often associated with physical and mental complaints; apart from depression, it also increases the chance of developing dementia and excess mortality [43].”

“From the public health perspective it is important to know what the potential health benefits would be if the harmful effect of certain risk factors could be removed. What health benefits would arise, and at which efforts and costs? To measure this, the population attributable fraction (PAF) can be used. The PAF is expressed as a percentage and indicates the decrease in incidence (the number of new cases) that would result if the harmful effects of the targeted risk factors were fully taken away. For public health it would be more effective to design an intervention targeted at a risk factor with a high PAF than at one with a low PAF. […] An intervention needs to be efficacious in order to be implemented; this means that it has to show a statistically significant difference from placebo or another treatment. Secondly, it needs to be effective; it needs to prove its benefits also in real-life (“everyday care”) circumstances. Thirdly, it needs to be efficient. The measure to address this is the Number Needed to Treat (NNT). The NNT expresses how many people need to be treated to prevent the onset of one new case with the disorder; the lower the number, the more efficient the intervention [45]. To summarize, an indicated preventive intervention would ideally be targeted at a relatively small group of people with a high absolute chance of developing the disease, and a risk profile that is responsible for a high PAF. Furthermore, there needs to be an intervention that is both effective and efficient. […] a more detailed and specific description of the target group results in a higher absolute risk, a lower NNT, and also a lower PAF. This is helpful in determining the costs and benefits of interventions aiming at more specific or broader subgroups in the population. […] Unfortunately, very large samples are required to demonstrate reductions in universal or selective interventions [46]. […] If the incidence rate is higher in the target population, which is usually the case in selective and even more so in indicated prevention, the number of participants needed to prove an effect is much smaller [5]. This shows that, even though universal interventions may be effective, their effects are harder to prove than those of indicated prevention. […] Indicated and selective prevention appear to be the most successful in preventing depression to date; however, more research needs to be conducted in larger samples to determine which prevention method is really most effective.”
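The NNT arithmetic here is easy to make concrete. A minimal sketch, with incidence and effect figures invented purely for illustration (recall the theoretical NNTs of ~5–7 for selective prevention quoted earlier): since the NNT is the reciprocal of the absolute risk reduction, the same relative effect yields a much lower NNT in a high-incidence target group.

```python
# Illustrative numbers only, chosen to show the mechanics, not taken from the book.
# NNT = 1 / ARR, where the absolute risk reduction (ARR) is the difference in
# incidence between the comparison group and the group receiving the intervention.

def nnt(baseline_incidence: float, relative_risk_reduction: float) -> float:
    arr = baseline_incidence * relative_risk_reduction
    return 1 / arr

rrr = 0.25  # assume the intervention prevents 25 % of new depression cases

# Universal prevention: low incidence in the general older population.
print(f"universal  (2 % incidence): NNT = {nnt(0.02, rrr):.0f}")  # 200
# Selective/indicated prevention: high incidence in a high-risk group.
print(f"selective (20 % incidence): NNT = {nnt(0.20, rrr):.0f}")  # 20
```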

“Groffen et al. [6] recently conducted an investigation among a sample of 4,809 participants from the Reykjavik Study (aged 66–93 years). Similar to the findings presented by Vink and colleagues [3], education level was related to depression risk: participants with lower education levels were more likely to report depressed mood in late-life than those with a college education (odds ratio [OR] = 1.87, 95 % confidence interval [CI] = 1.35–2.58). […] Results from a meta-analysis by Lorant and colleagues [8] showed that lower SES individuals had greater odds of developing depression than those in the highest SES group (OR = 1.24, p = 0.004); however, the studies involved in this review did not focus on older populations. […] Cole and Dendukuri [10] performed a meta-analysis of studies involving middle-aged and older adult community residents, and determined that female gender was a risk factor for depression in this population (pooled OR = 1.4, 95 % CI = 1.2–1.8), but that old age was not. Blazer and colleagues [11] found a significant positive association between older age and depressive symptoms in a sample consisting of community-dwelling older adults; however, when potential confounders such as physical disability, cognitive impairment, and gender were included in the analysis, the relationship between chronological age and depressive symptoms was reversed (p < 0.01). A study by Schoevers and colleagues [14] had similar results […] these findings suggest that the higher incidence of depression observed among the oldest-old may be explained by other relevant factors. By contrast, the association of female gender with increased risk of late-life depression has been a highly consistent finding.”

“In an examination of marital bereavement, Turvey et al. [16] analyzed data from 5,449 participants aged ≥70 years […] recently bereaved participants had nearly nine times the odds of developing syndromal depression as married participants (OR = 8.8, 95 % CI = 5.1–14.9, p < 0.0001), and they also had significantly higher risk of depressive symptoms 2 years after the spousal loss. […] Caregiving burden is well-recognized as a predisposing factor for depression among older adults [18]. Many older persons are coping with physically and emotionally challenging caregiving roles (e.g., caring for a spouse/partner with a serious illness or with cognitive or physical decline). Additionally, many caregivers experience elements of grief, as they mourn the loss of relationship with or the decline of valued attributes of their care recipients. […] Concepts of social isolation have also been examined with regard to late-life depression risk. For example, among 892 participants aged 65 years […], Gureje et al. [13] found that women with a poor social network and rural residential status were more likely to develop major depressive disorder […] Harlow and colleagues [21] assessed the association between social network and depressive symptoms in a study involving both married and recently widowed women between the ages of 65 and 75 years; they found that the number of friends at baseline had an inverse association with CES-D (Centers for Epidemiologic Studies Depression Scale) score after 1 month (p < 0.05) and 12 months (p = 0.06) of follow-up. In a study that explicitly addressed the concept of loneliness, Jaremka et al. [22] related this factor to late-life depression; importantly, loneliness has been validated as a distinct construct, distinguishable from depression among older adults. Among 229 participants (mean age = 70 years) in a cohort of older adults caring for a spouse with dementia, loneliness (as measured by the NYU scale) significantly predicted incident depression (p < 0.001). Finally, social support has been identified as important to late-life depression risk. For example, Cui and colleagues [23] found that low perceived social support significantly predicted worsening depression status over a 2-year period among 392 primary care patients aged 65 years and above.”

“Saunders and colleagues [26] reported […] findings with alcohol drinking behavior as the predictor. Among 701 community-dwelling adults aged 65 years and above, the authors found a significant association between prior heavy alcohol consumption and late-life depression among men: compared to those who were not heavy drinkers, men with a history of heavy drinking had nearly fourfold higher odds of being diagnosed with depression (OR = 3.7, 95 % CI = 1.3–10.4, p < 0.05). […] Almeida et al. found that obese men were more likely than non-obese (body mass index [BMI] < 30) men to develop depression (HR = 1.31, 95 % CI = 1.05–1.64). Consistent with these results, presence of the metabolic syndrome was also found to increase risk of incident depression (HR = 2.37, 95 % CI = 1.60–3.51). Finally, leisure-time activities are also important to study with regard to late-life depression risk, as these too are readily modifiable behaviors. For example, Magnil et al. [30] examined such activities among a sample of 302 primary care patients aged ≥60 years. The authors observed that those who lacked leisure activities had an increased risk of developing depressive symptoms over the 2-year study period (OR = 12, 95 % CI = 1.1–136, p = 0.041). […] an important future direction in addressing social and behavioral risk factors in late-life depression is to make more progress in trials that aim to alter those risk factors that are actually modifiable.”

February 17, 2018 Posted by | Books, Epidemiology, Health Economics, Medicine, Psychiatry, Psychology, Statistics | Leave a comment

Depression (II)

I have added some more quotes from the last half of the book as well as some more links to relevant topics below.

“The early drugs used in psychiatry were sedatives, as calming a patient was probably the only treatment that was feasible and available. Also, it made it easier to manage large numbers of individuals with small numbers of staff at the asylum. Morphine, hyoscine, chloral, and later bromide were all used in this way. […] Insulin coma therapy came into vogue in the 1930s following the work of Manfred Sakel […] Sakel initially proposed this treatment as a cure for schizophrenia, but its use gradually spread to mood disorders to the extent that asylums in Britain opened so-called insulin units. […] Recovery from the coma required administration of glucose, but complications were common and death rates ranged from 1–10 per cent. Insulin coma therapy was initially viewed as having tremendous benefits, but later re-examinations have highlighted that the results could also be explained by a placebo effect associated with the dramatic nature of the process or, tragically, by the fact that deprivation of glucose supplies to the brain may have reduced the person’s reactivity by inducing permanent damage.”

“[S]ome respected scientists and many scientific journals remain ambivalent about the empirical evidence for the benefits of psychological therapies. Part of the reticence appears to result from the lack of very large-scale clinical trials of therapies (compared to international, multi-centre studies of medication). However, a problem for therapy research is that there is no large-scale funding from big business for therapy trials […] It is hard to implement optimum levels of quality control in research studies of therapies. A tablet can have the same ingredients and be prescribed in almost exactly the same way in different treatment centres and different countries. If a patient does not respond to this treatment, the first thing we can do is check whether they received the right medication in the correct dose for a sufficient period of time. This is much more difficult to achieve with psychotherapy and fuels concerns about how therapy is delivered and potential biases related to researcher allegiance (i.e. clinical centres that invent a therapy show better outcomes than those that did not) and generalizability (our ability to replicate the therapy model exactly in a different place with different therapists). […] Overall, the ease of prescribing a tablet, the more traditional evidence base for the benefits of medication, and the lack of availability of trained therapists in some regions mean that therapy still plays second fiddle to medications in the majority of treatment guidelines for depression. […] The mainstay of treatments offered to individuals with depression has changed little in the last thirty to forty years. Antidepressants are the first-line intervention recommended in most clinical guidelines.”

“[W]hilst some cases of mild–moderate depression can benefit from antidepressants (e.g. chronic mild depression of several years’ duration can often respond to medication), it is repeatedly shown that the only group who consistently benefit from antidepressants are those with severe depression. The problem is that in the real world, most antidepressants are actually prescribed for less severe cases, that is, the group least likely to benefit; which is part of the reason why the argument about whether antidepressants work is not going to go away any time soon.”

“The economic argument for therapy can only be sustained if it is shown that the long-term outcome of depression (fewer relapses and better quality of life) is improved by receiving therapy instead of medication or by receiving both therapy and medication. Despite claims about how therapies such as CBT, behavioural activation, IPT, or family therapy may work, the reality is that many of the elements included in these therapies are the same as elements described in all the other effective therapies (sometimes referred to as empirically supported therapies). The shared elements include forming a positive working alliance with the depressed person, sharing the model and the plan for therapy with the patient from day one, and helping the patient engage in active problem-solving, etc. Given the degree of overlap, it is hard to make a real case for using one empirically supported therapy instead of another. Also, there are few predictors (besides symptom severity and personal preference) that consistently show who will respond to one of these therapies rather than to medication. […] One of the reasons for some scepticism about the value of therapies for treating depression is that it has proved difficult to demonstrate exactly what mediates the benefits of these interventions. […] despite the enthusiasm for mindfulness, there were fewer than twenty high-quality research trials on its use in adults with depression by the end of 2015 and most of these studies had fewer than 100 participants. […] exercise improves the symptoms of depression compared to no treatment at all, but the currently available studies on this topic are less than ideal (with many problems in the design of the study or sample of participants included in the clinical trial). […] Exercise is likely to be a better option for those individuals whose mood improves from participating in the experience, rather than for someone who is so depressed that they feel further undermined by the process or feel guilty about ‘not trying hard enough’ when they attend the programme.”

“Research […] indicates that treatment is important and a study from the USA in 2005 showed that those who took the prescribed antidepressant medications had a 20 per cent lower rate of absenteeism than those who did not receive treatment for their depression. Absence from work is only one half of the depression–employment equation. In recent times, a new concept ‘presenteeism’ has been introduced to try to describe the problem of individuals who are attending their place of work but have reduced efficiency (usually because their functioning is impaired by illness). As might be imagined, presenteeism is a common issue in depression and a study in the USA in 2007 estimated that a depressed person will lose 5–8 hours of productive work every week because the symptoms they experience directly or indirectly impair their ability to complete work-related tasks. For example, depression was associated with reduced productivity (due to lack of concentration, slowed physical and mental functioning, loss of confidence), and impaired social functioning”.

“Health economists do not usually restrict their estimates of the cost of a disorder simply to the funds needed for treatment (i.e. the direct health and social care costs). A comprehensive economic assessment also takes into account the indirect costs. In depression these will include costs associated with employment issues (e.g. absenteeism and presenteeism; sickness benefits), costs incurred by the patient’s family or significant others (e.g. associated with time away from work to care for someone), and costs arising from premature death such as depression-related suicides (so-called mortality costs). […] Studies from around the world consistently demonstrate that the direct health care costs of depression are dwarfed by the indirect costs. […] Interestingly, absenteeism is usually estimated to be about one-quarter of the costs of presenteeism.”

Jakob Klaesi. António Egas Moniz. Walter Jackson Freeman II.
Electroconvulsive therapy.
Psychosurgery.
Vagal nerve stimulation.
Chlorpromazine. Imipramine. Tricyclic antidepressant. MAOIs. SSRIs. John Cade. Mogens Schou. Lithium carbonate.
Psychoanalysis. CBT.
Thomas Szasz.
Initial Severity and Antidepressant Benefits: A Meta-Analysis of Data Submitted to the Food and Drug Administration (Kirsch et al.).
Chronobiology. Chronobiotics. Melatonin.
Eric Kandel. BDNF.
The global burden of disease (Murray & Lopez) (the author discusses some of the data included in that publication).

January 8, 2018 Posted by | Books, Health Economics, Medicine, Pharmacology, Psychiatry, Psychology | Leave a comment

Depression (I)

Below I have added some quotes and links related to the first half of this book.

Quotes:

“One of the problems encountered in any discussion of depression is that the word is used to mean different things by different people. For many members of the public, the term depression is used to describe normal sadness. In clinical practice, the term depression can be used to describe negative mood states, which are symptoms that can occur in a range of illnesses (e.g. individuals with psychosis may also report depressed mood). However, the term depression can also be used to refer to a diagnosis. When employed in this way it is meant to indicate that a cluster of symptoms have all occurred together, with the most common changes being in mood, thoughts, feelings, and behaviours. Theoretically, all these symptoms need to be present to make a diagnosis of depressive disorder.”

“The absence of any laboratory tests in psychiatry means that the diagnosis of depression relies on clinical judgement and the recognition of patterns of symptoms. There are two main problems with this. First, the diagnosis represents an attempt to impose a ‘present/absent’ or ‘yes/no’ classification on a problem that, in reality, is dimensional and varies in duration and severity. Also, many symptoms are likely to show some degree of overlap with pre-existing personality traits. Taken together, this means there is an ongoing concern about the point at which depression or depressive symptoms should be regarded as a mental disorder, that is, where to situate the dividing line on a continuum from health to normal sadness to illness. Second, for many years, there was a lack of consistent agreement on what combination of symptoms and impaired functioning would benefit from clinical intervention. This lack of consensus on the threshold for treatment, or for deciding which treatment to use, is a major source of problems to this day. […] A careful inspection of the criteria for identifying a depressive disorder demonstrates that diagnosis is mainly reliant on the cross-sectional assessment of the way the person presents at that moment in time. It is also emphasized that the current presentation should represent a change from the person’s usual state, as this step helps to begin the process of differentiating illness episodes from long-standing personality traits. Clarifying the longitudinal history of any lifetime problems can also help to establish, for example, whether the person has previously experienced mania (in which case their diagnosis will be revised to bipolar disorder), or whether they have a history of chronic depression, with persistent symptoms that may be less severe but are nevertheless very debilitating (this is usually called dysthymia). In addition, it is important to assess whether the person has another mental or physical disorder as well, as these frequently co-occur with depression. […] In the absence of diagnostic tests, the current classifications still rely on expert consensus regarding symptom profiles.”

“In summary, for a classification system to have utility it needs to be reliable and valid. If a diagnosis is reliable, doctors will all make the same diagnosis when they interview patients who present with the same set of symptoms. If a diagnosis has predictive validity, it means that it is possible to forecast the future course of the illness in individuals with the same diagnosis and to anticipate their likely response to different treatments. For many decades, the lack of reliability so undermined the credibility of psychiatric diagnoses that most of the revisions of the classification systems between the 1950s and 2010 focused on improving diagnostic reliability. However, insufficient attention has been given to validity and until this is improved, the criteria used for diagnosing depressive disorders will continue to be regarded as somewhat arbitrary […]. Weaknesses in the systems for the diagnosis and classification of depression are frequently raised in discussions about the existence of depression as a separate entity and concerns about the rationale for treatment. It is notable that general medicine uses a similar approach to making decisions regarding the health–illness dimension. For example, levels of blood pressure exist on a continuum. However, when an individual’s blood pressure measurement reaches a predefined level, it is reported that the person now meets the criteria specified for the diagnosis of hypertension (high blood pressure). Depending on the degree of variation from the norm or average values for their age and gender, the person will be offered different interventions. […] This approach is widely accepted as a rational approach to managing this common physical health problem, yet a similar ‘stepped care’ approach to depression is often derided.”

“There are few differences in the nature of the symptoms experienced by men and women who are depressed, but there may be gender differences in how their distress is expressed or how they react to the symptoms. For example, men may be more likely to become withdrawn rather than to seek support from or confide in other people, they may become more outwardly hostile and have a greater tendency to use alcohol to try to cope with their symptoms. It is also clear that it may be more difficult for men to accept that they have a mental health problem and they are more likely to deny it, delay seeking help, or even to refuse help. […] becoming unemployed, retirement, and loss of a partner and change of social roles can all be risk factors for depression in men. In addition, chronic physical health problems or increasing disability may also act as a precipitant. The relationship between physical illness and depression is complex. When people are depressed they may subjectively report that their general health is worse than that of other people; likewise, people who are ill or in pain may react by becoming depressed. Certain medical problems such as an under-functioning thyroid gland (hypothyroidism) may produce symptoms that are virtually indistinguishable from depression. Overall, the rate of depression in individuals with a chronic physical disease is almost three times higher than in those without such problems.”

“A long-standing problem in gathering data about suicide is that many religions and cultures regard it as a sin or an illegal act. This has had several consequences. For example, coroners and other public officials often strive to avoid identifying suspicious deaths as a suicide, meaning that the actual rates of suicide may be under-reported.”

“In Beck’s [depression] model, it is proposed that an individual’s interpretations of events or experiences are encapsulated in automatic thoughts, which arise immediately following the event or even at the same time. […] Beck suggested that these automatic thoughts occur at a conscious level and can be accessible to the individual, although they may not be actively aware of them because they are not concentrating on them. The appraisals that occur in specific situations largely determine the person’s emotional and behavioural responses […] [I]n depression, the content of a person’s thinking is dominated by negative views of themselves, their world, and their future (the so-called negative cognitive triad). Beck’s theory suggests that the themes included in the automatic thoughts are generated via the activation of underlying cognitive structures, called dysfunctional beliefs (or cognitive schemata). All individuals develop a set of rules or ‘silent assumptions’ derived from early learning experiences. Whilst automatic thoughts are momentary, event-specific cognitions, the underlying beliefs operate across a variety of situations and are more permanent. Most of the underlying beliefs held by the average individual are quite adaptive and guide our attempts to act and react in a considered way. Individuals at risk of depression are hypothesized to hold beliefs that are maladaptive and can have an unhelpful influence on them. […] faulty information processing contributes to further deterioration in a person’s mood, which sets up a vicious cycle with more negative mood increasing the risk of negative interpretations of day-to-day life experiences and these negative cognitions worsening the depressed mood. Beck suggested that the underlying beliefs that render an individual vulnerable to depression may be broadly categorized into beliefs about being helpless or unlovable. […] Beliefs about ‘the self’ seem especially important in the maintenance of depression, particularly when connected with low or variable self-esteem.”

“[U]nidimensional models, such as the monoamine hypothesis or the social origins of depression model, are important building blocks for understanding depression. However, in reality there is no one cause and no single pathway to depression and […] multiple factors increase vulnerability to depression. Whether or not someone at risk of depression actually develops the disorder is partly dictated by whether they are exposed to certain types of life events, the perceived level of threat or distress associated with those events (which in turn is influenced by cognitive and emotional reactions and temperament), their ability to cope with these experiences (their resilience or adaptability under stress), and the functioning of their biological stress-sensitivity systems (including the thresholds for switching on their body’s stress responses).”

Some links:

Humorism. Marsilio Ficino. Thomas Willis. William Cullen. Philippe Pinel. Benjamin Rush. Emil Kraepelin. Karl Leonhard. Sigmund Freud.
Depression.
Relation between depression and sociodemographic factors.
Bipolar disorder.
Postnatal depression. Postpartum psychosis.
Epidemiology of suicide. Durkheim’s typology of suicide.
Suicide methods.
Reserpine.
Neuroendocrine hypothesis of depression. HPA (Hypothalamic–Pituitary–Adrenal) axis.
Cognitive behavioral therapy.
Coping responses.
Brown & Harris (1978).
5-HTTLPR.

January 5, 2018 Posted by | Books, Medicine, Psychiatry, Psychology | Leave a comment

Random stuff

I have almost stopped posting posts like these, which has resulted in the accumulation of a very large number of links and studies which I figured I might like to blog at some point. This post is mainly an attempt to deal with the backlog – I won’t cover the material in too much detail.

i. Do Bullies Have More Sex? The answer seems to be a qualified yes. A few quotes:

“Sexual behavior during adolescence is fairly widespread in Western cultures (Zimmer-Gembeck and Helfland 2008) with nearly two thirds of youth having had sexual intercourse by the age of 19 (Finer and Philbin 2013). […] Bullying behavior may aid in intrasexual competition and intersexual selection as a strategy when competing for mates. In line with this contention, bullying has been linked to having a higher number of dating and sexual partners (Dane et al. 2017; Volk et al. 2015). This may be one reason why adolescence coincides with a peak in antisocial or aggressive behaviors, such as bullying (Volk et al. 2006). However, not all adolescents benefit from bullying. Instead, bullying may only benefit adolescents with certain personality traits who are willing and able to leverage bullying as a strategy for engaging in sexual behavior with opposite-sex peers. Therefore, we used two independent cross-sectional samples of older and younger adolescents to determine which personality traits, if any, are associated with leveraging bullying into opportunities for sexual behavior.”

“…bullying by males signals the ability to provide good genes, material resources, and to protect offspring (Buss and Shackelford 1997; Volk et al. 2012) because bullying others is a way of displaying attractive qualities such as strength and dominance (Gallup et al. 2007; Reijntjes et al. 2013). As a result, this makes bullies attractive sexual partners to opposite-sex peers while simultaneously suppressing the sexual success of same-sex rivals (Gallup et al. 2011; Koh and Wong 2015; Zimmer-Gembeck et al. 2001). Females may denigrate other females, targeting their appearance and sexual promiscuity (Leenaars et al. 2008; Vaillancourt 2013), which are two qualities relating to male mate preferences. Consequently, derogating these qualities lowers a rival’s appeal as a mate and also intimidates or coerces rivals into withdrawing from intrasexual competition (Campbell 2013; Dane et al. 2017; Fisher and Cox 2009; Vaillancourt 2013). Thus, males may use direct forms of bullying (e.g., physical, verbal) to facilitate intersexual selection (i.e., appear attractive to females), while females may use relational bullying to facilitate intrasexual competition, by making rivals appear less attractive to males.”

The study relies on the use of self-report data, which I find very problematic – so I won’t go into the results here. I’m not quite clear on how those studies mentioned in the discussion ‘have found self-report data [to be] valid under conditions of confidentiality’ – and I remain skeptical. You’ll usually want data from independent observers (e.g. teacher or peer observations) when analyzing these kinds of things. Note in the context of the self-report data problem that if there’s a strong stigma associated with being bullied (there often is, or bullying wouldn’t work as well), asking people if they have been bullied is not much better than asking people if they’re bullying others.

ii. Some topical advice that some people might soon regret not having followed, from the wonderful Things I Learn From My Patients thread:

“If you are a teenage boy experimenting with fireworks, do not empty the gunpowder from a dozen fireworks and try to mix it in your mother’s blender. But if you do decide to do that, don’t hold the lid down with your other hand and stand right over it. This will result in the traumatic amputation of several fingers, burned and skinned forearms, glass shrapnel in your face, and a couple of badly scratched corneas as a start. You will spend months in rehab and never be able to use your left hand again.”

iii. I haven’t talked about the AlphaZero-Stockfish match, but I was of course aware of it and did read a bit about that stuff. Here’s a reddit thread where one of the Stockfish programmers answers questions about the match. A few quotes:

“Which of the two is stronger under ideal conditions is, to me, neither particularly interesting (they are so different that it’s kind of like comparing the maximum speeds of a fish and a bird) nor particularly important (since there is only one of them that you and I can download and run anyway). What is super interesting is that we have two such radically different ways to create a computer chess playing entity with superhuman abilities. […] I don’t think there is anything to learn from AlphaZero that is applicable to Stockfish. They are just too different, you can’t transfer ideas from one to the other.”

“Based on the 100 games played, AlphaZero seems to be about 100 Elo points stronger under the conditions they used. The current development version of Stockfish is something like 40 Elo points stronger than the version used in Google’s experiment. There is a version of Stockfish translated to hand-written x86-64 assembly language that’s about 15 Elo points stronger still. This adds up to roughly half the Elo difference between AlphaZero and Stockfish shown in Google’s experiment.”
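
The Elo arithmetic above is easy to check: Elo differences translate into expected scores via the standard logistic formula, and the quoted improvements (40 + 15 Elo) recover roughly half of the 100-point gap. A quick sketch of the numbers (mine, not the programmer’s):

```python
# Standard Elo formula: expected score against an opponent rated `diff` points lower.
def expected_score(diff):
    return 1 / (1 + 10 ** (-diff / 400))

match_gap = 100      # AlphaZero's estimated edge over the match version of Stockfish
recovered = 40 + 15  # dev version + hand-written assembly version, per the quote

print(round(expected_score(match_gap), 3))              # ~0.64 expected score per game
print(round(expected_score(match_gap - recovered), 3))  # ~0.56 against the stronger builds
```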

“It seems that Stockfish was playing with only 1 GB for transposition tables (the area of memory used to store data about the positions previously encountered in the search), which is way too little when running with 64 threads.” [I seem to recall a comp sci guy observing elsewhere that this was less than what was available to his smartphone version of Stockfish, but I didn’t bookmark that comment].

“The time control was a very artificial fixed 1 minute/move. That’s not how chess is traditionally played. Quite a lot of effort has gone into Stockfish’s time management. It’s pretty good at deciding when to move quickly, and when to spend a lot of time on a critical decision. In a fixed time per move game, it will often happen that the engine discovers that there is a problem with the move it wants to play just before the time is out. In a regular time control, it would then spend extra time analysing all alternative moves and trying to find a better one. When you force it to move after exactly one minute, it will play the move it already knows is bad. There is no doubt that this will cause it to lose many games it would otherwise have drawn.”

iv. Thrombolytics for Acute Ischemic Stroke – no benefit found.

“Thrombolysis has been rigorously studied in >60,000 patients for acute thrombotic myocardial infarction, and is proven to reduce mortality. It is theorized that thrombolysis may similarly benefit ischemic stroke patients, though a much smaller number (8120) has been studied in relevant, large scale, high quality trials thus far. […] There are 12 such trials [1–12]. Despite the temptation to pool these data, the studies are clinically heterogeneous. […] Data from multiple trials must be clinically and statistically homogeneous to be validly pooled [14]. Large thrombolytic studies demonstrate wide variations in anatomic stroke regions, small- versus large-vessel occlusion, clinical severity, age, vital sign parameters, stroke scale scores, and times of administration. […] Examining each study individually is therefore, in our opinion, both more valid and more instructive. […] Two of twelve studies suggest a benefit […] In comparison, twice as many studies showed harm and these were stopped early. This early stoppage means that the number of subjects in studies demonstrating harm would have included over 2400 subjects based on originally intended enrollments. Pooled analyses are therefore missing these phantom data, which would have further eroded any aggregate benefits. In their absence, any pooled analysis is biased toward benefit. Despite this, there remain five times as many trials showing harm or no benefit (n=10) as those concluding benefit (n=2), and 6675 subjects in trials demonstrating no benefit compared to 1445 subjects in trials concluding benefit.”
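
The statistical-homogeneity requirement the authors invoke has a standard formulation: Cochran’s Q tests whether trial effects differ more than sampling error would predict, and I² expresses the excess variation as a percentage. A minimal sketch with made-up effect estimates (purely illustrative – these are not the stroke-trial data):

```python
import numpy as np

def cochran_q_i2(effects, variances):
    """Cochran's Q and I^2 for a set of trial effect estimates
    (e.g. log odds ratios) and their squared standard errors."""
    w = 1 / np.asarray(variances)       # inverse-variance weights
    e = np.asarray(effects)
    pooled = np.sum(w * e) / np.sum(w)  # fixed-effect pooled estimate
    q = np.sum(w * (e - pooled) ** 2)   # Cochran's Q statistic
    i2 = max(0.0, (q - (len(e) - 1)) / q) * 100
    return q, i2

# Hypothetical, heterogeneous-looking trial results:
q, i2 = cochran_q_i2([0.2, -0.1, 0.5, -0.3], [0.01, 0.02, 0.015, 0.01])
print(round(q, 1), round(i2))  # a large Q and I^2 would argue against pooling
```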

“Thrombolytics for ischemic stroke may be harmful or beneficial. The answer remains elusive. We struggled therefore, debating between a ‘yellow’ or ‘red’ light for our recommendation. However, over 60,000 subjects in trials of thrombolytics for coronary thrombosis suggest a consistent beneficial effect across groups and subgroups, with no studies suggesting harm. This consistency was found despite a very small mortality benefit (2.5%), and a very narrow therapeutic window (1% major bleeding). In comparison, the variation in trial results of thrombolytics for stroke and the daunting but consistent adverse effect rate caused by ICH suggested to us that thrombolytics are dangerous unless further study exonerates their use.”

“There is a Cochrane review that pooled estimates of effect [17]. We do not endorse this choice because of clinical heterogeneity. However, we present the NNTs from the pooled analysis for the reader’s benefit. The Cochrane review suggested a 6% reduction in disability […] with thrombolytics. This would mean that 17 were treated for every 1 avoiding an unfavorable outcome. The review also noted a 1% increase in mortality (1 in 100 patients die because of thrombolytics) and a 5% increase in nonfatal intracranial hemorrhage (1 in 20), for a total of 6% harmed (1 in 17 suffers death or brain hemorrhage).”
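
The NNT figures quoted are just reciprocals of the absolute risk differences, which is easy to verify:

```python
# Number needed to treat (or harm) = 1 / absolute risk difference.
def nnt(risk_difference):
    return round(1 / risk_difference)

print(nnt(0.06))  # 17 treated per one patient avoiding an unfavorable outcome
print(nnt(0.01))  # 100: one extra death per 100 treated
print(nnt(0.05))  # 20: one extra nonfatal intracranial hemorrhage per 20
print(nnt(0.06))  # 17: one in 17 suffers death or brain hemorrhage
```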

v. Suicide attempts in Asperger Syndrome. An interesting finding: “Over 35% of individuals with AS reported that they had attempted suicide in the past.”

Related: Suicidal ideation and suicide plans or attempts in adults with Asperger’s syndrome attending a specialist diagnostic clinic: a clinical cohort study.

“374 adults (256 men and 118 women) were diagnosed with Asperger’s syndrome in the study period. 243 (66%) of 367 respondents self-reported suicidal ideation, 127 (35%) of 365 respondents self-reported plans or attempts at suicide, and 116 (31%) of 368 respondents self-reported depression. Adults with Asperger’s syndrome were significantly more likely to report lifetime experience of suicidal ideation than were individuals from a general UK population sample (odds ratio 9·6 [95% CI 7·6–11·9], p<0·0001), people with one, two, or more medical illnesses (p<0·0001), or people with psychotic illness (p=0·019). […] Lifetime experience of depression (p=0·787), suicidal ideation (p=0·164), and suicide plans or attempts (p=0·06) did not differ significantly between men and women […] Individuals who reported suicide plans or attempts had significantly higher Autism Spectrum Quotient scores than those who did not […] Empathy Quotient scores and ages did not differ between individuals who did or did not report suicide plans or attempts (table 4). Patients with self-reported depression or suicidal ideation did not have significantly higher Autism Spectrum Quotient scores, Empathy Quotient scores, or age than did those without depression or suicidal ideation”.

The fact that people with Asperger’s are more likely to be depressed and to contemplate suicide is consistent with previous observations that they’re also more likely to die from suicide – for example, a paper I blogged a while back found that in that particular study (a large Swedish population-based cohort study), people with ASD were more than 7 times as likely to die from suicide as the comparable controls.

Also related: Suicidal tendencies hard to spot in some people with autism.

This link has some great graphs and tables of suicide data from the US.

Also autism-related: Increased perception of loudness in autism. This is one of the ‘important ones’ for me personally – I am much more sound-sensitive than are most people.

vi. Early versus Delayed Invasive Intervention in Acute Coronary Syndromes.

“Earlier trials have shown that a routine invasive strategy improves outcomes in patients with acute coronary syndromes without ST-segment elevation. However, the optimal timing of such intervention remains uncertain. […] We randomly assigned 3031 patients with acute coronary syndromes to undergo either routine early intervention (coronary angiography ≤24 hours after randomization) or delayed intervention (coronary angiography ≥36 hours after randomization). The primary outcome was a composite of death, myocardial infarction, or stroke at 6 months. A prespecified secondary outcome was death, myocardial infarction, or refractory ischemia at 6 months. […] Early intervention did not differ greatly from delayed intervention in preventing the primary outcome, but it did reduce the rate of the composite secondary outcome of death, myocardial infarction, or refractory ischemia and was superior to delayed intervention in high-risk patients.”

vii. Some wikipedia links:

Behrens–Fisher problem.
Sailing ship tactics (I figured I had to read up on this if I were to get anything out of the Aubrey-Maturin books).
Anatomical terms of muscle.
Phatic expression (“a phatic expression […] is communication which serves a social function such as small talk and social pleasantries that don’t seek or offer any information of value.”)
Three-domain system.
Beringian wolf (featured).
Subdural hygroma.
Cayley graph.
Schur polynomial.
Solar neutrino problem.
Hadamard product (matrices).
True polar wander.
Newton’s cradle.

viii. Determinant versus permanent (mathematics – technical).

ix. Some years ago I wrote a few English-language posts about various statistical/demographic properties of immigrants living in Denmark, based on numbers from a publication by Statistics Denmark; I did it by translating the observations in that publication, which was only available in Danish. I briefly considered doing the same thing again when the 2017 data arrived, but decided against it – writing those posts took a lot of time back then, and it didn’t seem worth the effort. Danish readers might nevertheless be interested in having a look at the data, if they haven’t already: here’s a link to the publication Indvandrere i Danmark 2017.

x. A banter blitz session with grandmaster Peter Svidler, who recently became the first player to win the Russian Chess Championship 8 times. He’s currently in shared second place in the World Rapid Championship after 10 rounds, and is now in the top 10 on the live rating list in both classical and rapid – it seems he’s had a very decent year.

xi. I recently discovered Dr. Whitecoat’s blog. The patient encounters are often interesting.

December 28, 2017 Posted by | Astronomy, autism, Biology, Cardiology, Chess, Computer science, History, Mathematics, Medicine, Neurology, Physics, Psychiatry, Psychology, Random stuff, Statistics, Studies, Wikipedia, Zoology

Child psychology

I was not impressed with this book, but as mentioned in the short review it was ‘not completely devoid of observations of interest’.

Before I start my proper coverage of the book, here are some related ‘observations’ from a different book I recently read, Bellwether:

““First we’re all going to play a game. Bethany, it’s Brittany’s birthday.” She attempted a game involving balloons with pink Barbies on them and then gave up and let Brittany open her presents. “Open Sandy’s first,” Gina said, handing her the book.
“No, Caitlin, these are Brittany’s presents.”
Brittany ripped the paper off Toads and Diamonds and looked at it blankly.
“That was my favorite fairy tale when I was little,” I said. “It’s about a girl who meets a good fairy, only she doesn’t know it because the fairy’s in disguise—” but Brittany had already tossed it aside and was ripping open a Barbie doll in a glittery dress.
“Totally Hair Barbie!” she shrieked.
“Mine,” Peyton said, and made a grab that left Brittany holding nothing but Barbie’s arm.
“She broke Totally Hair Barbie!” Brittany wailed.
Peyton’s mother stood up and said calmly, “Peyton, I think you need a time-out.”
I thought Peyton needed a good swat, or at least to have Totally Hair Barbie taken away from her and given back to Brittany, but instead her mother led her to the door of Gina’s bedroom. “You can come out when you’re in control of your feelings,” she said to Peyton, who looked like she was in control to me.
“I can’t believe you’re still using time-outs,” Chelsea’s mother said. “Everybody’s using holding now.”
“Holding?” I asked.
“You hold the child immobile on your lap until the negative behavior stops. It produces a feeling of interceptive safety.”
“Really,” I said, looking toward the bedroom door. I would have hated trying to hold Peyton against her will.
“Holding’s been totally abandoned,” Lindsay’s mother said. “We use EE.”
“EE?” I said.
“Esteem Enhancement,” Lindsay’s mother said. “EE addresses the positive peripheral behavior no matter how negative the primary behavior is.”
“Positive peripheral behavior?” Gina said dubiously.
“When Peyton took the Barbie away from Brittany just now,” Lindsay’s mother said, obviously delighted to explain, “you would have said, ‘My, Peyton, what an assertive grip you have.’”

[A little while later, during the same party:]

“My, Peyton,” Lindsay’s mother said, “what a creative thing to do with your frozen yogurt.””

Okay, on to the coverage of the book. I haven’t covered it in much detail, but I have included some observations of interest below.

“[O]ptimal development of grammar (knowledge about language structure) and phonology (knowledge about the sound elements in words) depends on the brain experiencing sufficient linguistic input. So quantity of language matters. The quality of the language used with young children is also important. The easiest way to extend the quality of language is with interactions around books. […] Natural conversations, focused on real events in the here and now, are those which are critical for optimal development. Despite this evidence, just talking to young children is still not valued strongly in many environments. Some studies find that over 60 per cent of utterances to young children are ‘empty language’ — phrases such as ‘stop that’, ‘don’t go there’, and ‘leave that alone’. […] studies of children who experience high levels of such ‘restricted language’ reveal a negative impact on later cognitive, social, and academic development.”

“[Neural] plasticity is largely achieved by the brain growing connections between brain cells that are already there. Any environmental input will cause new connections to form. At the same time, connections that are not used much will be pruned. […] the consistency of what is experienced will be important in determining which connections are pruned and which are retained. […] Brains whose biology makes them less efficient in particular and measurable aspects of processing seem to be at risk in specific areas of development. For example, when auditory processing is less efficient, this can carry a risk of later language impairment.”

“Joint attention has […] been suggested to be the basis of ‘natural pedagogy’ — a social learning system for imparting cultural knowledge. Once attention is shared by adult and infant on an object, an interaction around that object can begin. That interaction usually passes knowledge from carer to child. This is an example of responsive contingency in action — the infant shows an interest in something, the carer responds, and there is an interaction which enables learning. Taking the child’s focus of attention as the starting point for the interaction is very important for effective learning. Of course, skilled carers can also engineer situations in which babies or children will become interested in certain objects. This is the basis of effective play-centred learning. Novel toys or objects are always interesting.”

“Some research suggests that the pitch and amplitude (loudness) of a baby’s cry has been developed by evolution to prompt immediate action by adults. Babies’ cries appear to be designed to be maximally stressful to hear.”

“[T]he important factors in becoming a ‘preferred attachment figure’ are proximity and consistency.”

“[A]dults modify their actions in important ways when they interact with infants. These modifications appear to facilitate learning. ‘Infant-directed action’ is characterized by greater enthusiasm, closer proximity to the infant, greater repetitiveness, and longer gaze to the face than interactions with another adult. Infant-directed action also uses simplified actions with more turn-taking. […] carers tend to use a special tone of voice to talk to babies. This is more sing-song and attention-grabbing than normal conversational speech, and is called ‘infant-directed speech’ [IDS] or ‘Parentese’. All adults and children naturally adopt this special tone when talking to a baby, and babies prefer to listen to Parentese. […] IDS […] heightens pitch, exaggerates the length of words, and uses extra stress, exaggerating the rhythmic or prosodic aspects of speech. […] the heightened prosody increases the salience of acoustic cues to where words begin and end. […] So as well as capturing attention, IDS is emphasizing key linguistic cues that help language acquisition. […] The infant brain seems to cope with the ‘learning problem’ of which sounds matter by initially being sensitive to all the sound elements used by the different world languages. Via acoustic learning during the first year of life, the brain then specializes in the sounds that matter for the particular languages that it is being exposed to.”

“While crawling makes it difficult to carry objects with you on your travels, learning to walk enables babies to carry things. Indeed, walking babies spend most of their time selecting objects and taking them to show their carer, spending on average 30–40 minutes per waking hour interacting with objects. […] Self-generated movement is seen as critical for child development. […] most falling is adaptive, as it helps infants to gain expertise. Indeed, studies show that newly walking infants fall on average 17 times per hour. From the perspective of child psychology, the importance of ‘motor milestones’ like crawling and walking is that they enable greater agency (self-initiated and self-chosen behaviour) on the part of the baby.”

“Statistical learning enables the brain to learn the statistical structure of any event or object. […] Statistical structure is learned in all sensory modalities simultaneously. For example, as the child learns about birds, the child will learn that light body weight, having feathers, having wings, having a beak, singing, and flying, all go together. Each bird that the child sees may be different, but each bird will share the features of flying, having feathers, having wings, and so on. […] The connections that form between the different brain cells that are activated by hearing, seeing, and feeling birds will be repeatedly strengthened for these shared features, thereby creating a multi-modal neural network for that particular concept. The development of this network will be dependent on everyday experiences, and the networks will be richer if the experiences are more varied. This principle of learning supports the use of multi-modal instruction and active experience in nursery and primary school. […] knowledge about concepts is distributed across the entire brain. It is not stored separately in a kind of conceptual ‘dictionary’ or distinct knowledge system. Multi-modal experiences strengthen learning across the whole brain. Accordingly, multisensory learning is the most effective kind of learning for young children.”
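
The co-occurrence principle described above is easy to caricature in code. A toy Hebbian-style sketch (my illustration, not the book’s): features that are repeatedly co-active end up with the strongest connections, so the shared ‘bird’ features become tightly linked while incidental ones stay weak.

```python
import numpy as np

features = ["feathers", "wings", "beak", "flies", "sings", "brown", "small"]
w = np.zeros((len(features), len(features)))  # pairwise association strengths

# Each 'encounter with a bird' activates the shared features plus some incidental ones.
encounters = [
    {"feathers", "wings", "beak", "flies", "brown"},
    {"feathers", "wings", "beak", "sings", "small"},
    {"feathers", "wings", "beak", "flies", "sings"},
]
for enc in encounters:
    x = np.array([f in enc for f in features], dtype=float)
    w += np.outer(x, x)  # Hebbian update: strengthen links between co-active features

print(w[features.index("feathers"), features.index("wings")])  # 3.0 - shared features
print(w[features.index("feathers"), features.index("brown")])  # 1.0 - incidental feature
```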

“Babies learn words most quickly when an adult both points to and names a new item.”

“…direct teaching of scientific reasoning skills helps children to reason logically independently of their pre-existing beliefs. This is more difficult than it sounds, as pre-existing beliefs exert strong effects. […] in many social situations we are advantaged if we reason on the basis of our pre-existing beliefs. This is one reason that stereotypes form”. [Do remember on a related note that stereotype accuracy is one of the largest and most replicable effects in all of social psychology – US].

“Some gestures have almost universal meaning, like waving goodbye. Babies begin using gestures like this quite early on. Between 10 and 18 months of age, gestures become frequent and are used extensively for communication. […] After around 18 months, the use of gesture starts declining, as vocalization becomes more and more dominant in communication. […] By [that time], most children are entering the two-word stage, when they become able to combine words. […] At this age, children often use a word that they know to refer to many different entities whose names are not yet known. They might use the word ‘bee’ for insects that are not bees, or the word ‘dog’ to refer to horses and cows. Experiments have shown that this is not a semantic confusion. Toddlers do not think that horses and cows are a type of dog. Rather, they have limited language capacities, and so they stretch their limited vocabularies to communicate as flexibly as possible. […] there is a lot of similarity across cultures at the two-word stage regarding which words are combined. Young children combine words to draw attention to objects (‘See doggie!’), to indicate ownership (‘My shoe’), to point out properties of objects (‘Big doggie’), to indicate plurality (‘Two cookie’), and to indicate recurrence (‘Other cookie’). […] It is only as children learn grammar that some divergence is found across languages. This is probably because different languages have different grammatical formats for combining words. […] grammatical learning emerges naturally from extensive language experience (of the utterances of others) and from language use (the novel utterances of the child, which are re-formulated by conversational partners if they are grammatically incorrect).”

“The social and communicative functions of language, and children’s understanding of them, are captured by pragmatics. […] pragmatic aspects of conversation include taking turns, and making sure that the other person has sufficient knowledge of the events being discussed to follow what you are saying. […] To learn about pragmatics, children need to go beyond the literal meaning of the words and make inferences about communicative intent. A conversation is successful when a child has recognized the type of social situation and applied the appropriate formula. […] Children with autism, who have difficulties with social cognition and in reading the mental states of others, find learning the pragmatics of conversation particularly difficult. […] Children with autism often show profound delays in social understanding and do not ‘get’ many social norms. These children may behave quite inappropriately in social settings […] Children with autism may also show very delayed understanding of emotions and of intentions. However, this does not make them anti-social, rather it makes them relatively ineffective at being pro-social.”

“When children have siblings, there are usually developmental advantages for social cognition and psychological understanding. […] Discussing the causes of disputes appears to be particularly important for developing social understanding. Young children need opportunities to ask questions, argue with explanations, and reflect on why other people behave in the way that they do. […] Families that do not talk about the intentions and emotions of others and that do not explicitly discuss social norms will create children with reduced social understanding.”

“[C]hildren, like adults, are more likely to act in pro-social ways to ingroup members. […] Social learning of cultural ‘ingroups’ appears to develop early in children as part of general socio-moral development. […] being loyal to one’s ‘ingroup’ is likely to make the child more popular with the other members of that group. Being in a group thus requires the development of knowledge about how to be loyal, about conforming to pressure and about showing ingroup bias. For example, children may need to make fine judgements about who is more popular within the group, so that they can favour friends who are more likely to be popular with the rest of the group. […] even children as young as 6 years will show more positive responding to the transgression of social rules by ingroup members compared to outgroup members, particularly if they have relatively well-developed understanding of emotions and intentions.”

“Good language skills improve memory, because children with better language skills are able to construct narratively coherent and extended, temporally organized representations of experienced events.”

“Once children begin reading, […] letter-sound knowledge and ‘phonemic awareness’ (the ability to divide words into the single sound elements represented by letters) become the most important predictors of reading development. […] phonemic awareness largely develops as a consequence of being taught to read and write. Research shows that illiterate adults do not have phonemic awareness. […] brain imaging shows that learning to read ‘re-maps’ phonology in the brain. We begin to hear words as sequences of ‘phonemes’ only after we learn to read.”

October 29, 2017 Posted by | Books, Language, Neurology, Psychology

A few diabetes papers of interest

i. Neurocognitive Functioning in Children and Adolescents at the Time of Type 1 Diabetes Diagnosis: Associations With Glycemic Control 1 Year After Diagnosis.

“Children and youth with type 1 diabetes are at risk for developing neurocognitive dysfunction, especially in the areas of psychomotor speed, attention/executive functioning, and visuomotor integration (1,2). Most research suggests that deficits emerge over time, perhaps in response to the cumulative effect of glycemic extremes (3–6). However, the idea that cognitive changes emerge gradually has been challenged (7–9). Ryan (9) argued that if diabetes has a cumulative effect on cognition, cognitive test performance should be positively correlated with illness duration. Yet he found comparable deficits in psychomotor speed (the most commonly noted area of deficit) in adolescents and young adults with illness duration ranging from 6 to 25 years. He therefore proposed a diathesis model in which cognitive declines in diabetes are especially likely to occur in more vulnerable patients, at crucial periods, in response to illness-related events (e.g., severe hyperglycemia) known to have an impact on the central nervous system (CNS) (8). This model accounts for the finding that cognitive deficits are more likely in children with early-onset diabetes, and for the accelerated cognitive aging seen in diabetic individuals later in life (7). A third hypothesized crucial period is the time leading up to diabetes diagnosis, during which severe fluctuations in blood glucose and persistent hyperglycemia often occur. Concurrent changes in blood-brain barrier permeability could result in a flood of glucose into the brain, with neurotoxic effects (9).”

“In the current study, we report neuropsychological test findings for children and adolescents tested within 3 days of diabetes diagnosis. The purpose of the study was to determine whether neurocognitive impairments are detectable at diagnosis, as predicted by the diathesis hypothesis. We hypothesized that performance on tests of psychomotor speed, visuomotor integration, and attention/executive functioning would be significantly below normative expectations, and that differences would be greater in children with earlier disease onset. We also predicted that diabetic ketoacidosis (DKA), a primary cause of diabetes-related neurological morbidity (12) and a likely proxy for severe peri-onset hyperglycemia, would be associated with poorer performance.”

“Charts were reviewed for 147 children/adolescents aged 5–18 years (mean = 10.4 ± 3.2 years) who completed a short neuropsychological screening during their inpatient hospitalization for new-onset type 1 diabetes, as part of a pilot clinical program intended to identify patients in need of further neuropsychological evaluation. Participants were patients at a large urban children’s hospital in the southwestern U.S. […] Compared with normative expectations, children/youth with type 1 diabetes performed significantly worse on GPD, GPN, VMI, and FAS (P < 0.0001 in all cases), with large decrements evident on all four measures (Fig. 1). A small but significant effect was also evident in DSB (P = 0.022). High incidence of impairment was evident on all neuropsychological tasks completed by older participants (aged 9–18 years) except DSF/DSB (Fig. 2).”

“Deficits in neurocognitive functioning were evident in children and adolescents within days of type 1 diabetes diagnosis. Participants performed >1 SD below normative expectations in bilateral psychomotor speed (GP) and 0.7–0.8 SDs below expected performance in visuomotor integration (VMI) and phonemic fluency (FAS). Incidence of impairment was much higher than normative expectations on all tasks except DSF/DSB. For example, >20% of youth were impaired in dominant hand fine-motor control, and >30% were impaired with their nondominant hand. These findings provide provisional support for Ryan’s hypothesis (7–9) that the peri-onset period may be a time of significant cognitive vulnerability.

Importantly, deficits were not evident on all measures. Performance on measures of attention/executive functioning (TMT-A, TMT-B, DSF, and DSB) was largely consistent with normative expectations, as was reading ability (WRAT-4), suggesting that the below-average performance in other areas was not likely due to malaise or fatigue. Depressive symptoms at diagnosis were associated with performance on TMT-B and FAS, but not on other measures. Thus, it seems unlikely that depressive symptoms accounted for the observed motor slowing.

Instead, the findings suggest that the visual-motor system may be especially vulnerable to early effects of type 1 diabetes. This interpretation is especially compelling given that psychomotor impairment is the most consistently reported long-term cognitive effect of type 1 diabetes. The sensitivity of the visual-motor system at diabetes diagnosis is consistent with a growing body of neuroimaging research implicating posterior white matter tracts and associated gray matter regions (particularly cuneus/precuneus) as areas of vulnerability in type 1 diabetes (30–32). These regions form part of the neural system responsible for integrating visual inputs with motor outputs, and in adults with type 1 diabetes, structural pathology in these regions is directly correlated to performance on GP [grooved pegboard test] (30,31). Arbelaez et al. (33) noted that these brain areas form part of the “default network” (34), a system engaged during internally focused cognition that has high resting glucose metabolism and may be especially vulnerable to glucose variability.”

“It should be noted that previous studies (e.g., Northam et al. [3]) have not found evidence of neurocognitive dysfunction around the time of diabetes diagnosis. This may be due to study differences in measures, outcomes, and/or time frame. We know of no other studies that completed neuropsychological testing within days of diagnosis. Given our time frame, it is possible that our findings reflect transient effects rather than more permanent changes in the CNS. Contrary to predictions, we found no association between DKA at diagnosis and neurocognitive performance […] However, even transient effects could be considered potential indicators of CNS vulnerability. Neurophysiological changes at the time of diagnosis have been shown to persist under certain circumstances or for some patients. […] [Some] findings suggest that some individuals may be particularly susceptible to the effects of glycemic extremes on neurocognitive function, consistent with a large body of research in developmental neuroscience indicating individual differences in neurobiological vulnerability to adverse events. Thus, although it is possible that the neurocognitive impairments observed in our study might resolve with euglycemia, deficits at diagnosis could still be considered a potential marker of CNS vulnerability to metabolic perturbations (both acute and chronic).”

“In summary, this study provides the first demonstration that type 1 diabetes–associated neurocognitive impairment can be detected at the time of diagnosis, supporting the possibility that deficits arise secondary to peri-onset effects. Whether these effects are transient markers of vulnerability or represent more persistent changes in CNS awaits further study.”

ii. Association Between Impaired Cardiovascular Autonomic Function and Hypoglycemia in Patients With Type 1 Diabetes.

“Cardiovascular autonomic neuropathy (CAN) is a chronic complication of diabetes and an independent predictor of cardiovascular disease (CVD) morbidity and mortality (1–3). The mechanisms of CAN are complex and not fully understood. It can be assessed by simple cardiovascular reflex tests (CARTs) and heart rate variability (HRV) studies that were shown to be sensitive, noninvasive, and reproducible (3,4).”

“HbA1c fails to capture information on the daily fluctuations in blood glucose levels, termed glycemic variability (GV). Recent observations have fostered the notion that GV, independent of HbA1c, may confer an additional risk for the development of micro- and macrovascular diabetes complications (8,9). […] the relationship between GV and chronic complications, specifically CAN, in patients with type 1 diabetes has not been systematically studied. In addition, limited data exist on the relationship between hypoglycemic components of the GV and measures of CAN among subjects with type 1 diabetes (11,12). Therefore, we have designed a prospective study to evaluate the impact and the possible sustained effects of GV on measures of cardiac autonomic function and other cardiovascular complications among subjects with type 1 diabetes […] In the present communication, we report cross-sectional analyses at baseline between indices of hypoglycemic stress on measures of cardiac autonomic function.”

“The following measures of CAN were predefined as outcomes of interests and analyzed: expiration-to-inspiration ratio (E:I), Valsalva ratio, 30:15 ratios, low-frequency (LF) power (0.04 to 0.15 Hz), high-frequency (HF) power (0.15 to 0.4 Hz), and LF/HF at rest and during CARTs. […] We found that LBGI [low blood glucose index] and AUC [area under the curve] hypoglycemia were associated with reduced LF and HF power of HRV [heart rate variability], suggesting an impaired autonomic function, which was independent of glucose control as assessed by the HbA1c.”
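
For readers unfamiliar with these HRV measures: LF and HF power are the power of the (evenly resampled) heart-beat interval series integrated over the frequency bands quoted above. A rough sketch of the computation (a generic illustration, not the study’s actual pipeline):

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf_power(rr_ms, fs=4.0):
    """LF/HF power from a sequence of RR (beat-to-beat) intervals in milliseconds."""
    t = np.cumsum(rr_ms) / 1000.0                     # beat times in seconds
    grid = np.arange(t[0], t[-1], 1 / fs)             # RR intervals are unevenly spaced,
    rr_even = interp1d(t, rr_ms, kind="cubic")(grid)  # so resample onto a regular grid
    f, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(grid)))
    lf = np.trapz(psd[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
    hf = np.trapz(psd[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
    return lf, hf, lf / hf  # band powers and the LF/HF ratio
```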

“Our findings are in concordance with a recent report demonstrating attenuation of the baroreflex sensitivity and of the sympathetic response to various cardiovascular stressors after antecedent hypoglycemia among healthy subjects who were exposed to acute hypoglycemic stress (18). Similar associations […] were also reported in a small study of subjects with type 2 diabetes (19). […] higher GV and hypoglycemic stress may have an acute effect on modulating autonomic control with inducing a sympathetic/vagal imbalance and a blunting of the cardiac vagal control (18). The impairment in the normal counter-regulatory autonomic responses induced by hypoglycemia on the cardiovascular system could be important in healthy individuals but may be particularly detrimental in individuals with diabetes who have hitherto compromised cardiovascular function and/or subclinical CAN. In these individuals, hypoglycemia may also induce QT interval prolongation, increase plasma catecholamine levels, and lower serum potassium (19,20). In concert, these changes may lower the threshold for serious arrhythmia (19,20) and could result in an increased risk of cardiovascular events and sudden cardiac death. Conversely, the presence of CAN may increase the risk of hypoglycemia through hypoglycemia unawareness and subsequent impaired ability to restore euglycemia (21) through impaired sympathoadrenal response to hypoglycemia or delayed gastric emptying. […] A possible pathogenic role of GV/hypoglycemic stress on CAN development and progressions should be also considered. Prior studies in healthy and diabetic subjects have found that higher exposure to hypoglycemia reduces the counter-regulatory hormone (e.g., epinephrine, glucagon, and adrenocorticotropic hormone) and blunts autonomic nervous system responses to subsequent hypoglycemia (21). […] Our data […] suggest that wide glycemic fluctuations, particularly hypoglycemic stress, may increase the risk of CAN in patients with type 1 diabetes.”

“In summary, in this cohort of relatively young and uncomplicated patients with type 1 diabetes, GV and higher hypoglycemic stress were associated with impaired HRV reflective of sympathetic/parasympathetic dysfunction with potential important clinical consequences.”

iii. Elevated Levels of hs-CRP Are Associated With High Prevalence of Depression in Japanese Patients With Type 2 Diabetes: The Diabetes Distress and Care Registry at Tenri (DDCRT 6).

“In the last decade, several studies have been published that suggest a close association between diabetes and depression. Patients with diabetes have a high prevalence of depression (1) […] and a high prevalence of complications (3). In addition, depression is associated with mortality in these patients (4). […] Because of this strong association, several recent studies have suggested the possibility of a common biological pathway such as inflammation as an underlying mechanism of the association between depression and diabetes (5). […] Multiple mechanisms are involved in the association between diabetes and inflammation, including modulation of lipolysis, alteration of glucose uptake by adipose tissue, and an indirect mechanism involving an increase in free fatty acid levels blocking the insulin signaling pathway (10). Psychological stress can also cause inflammation via innervation of cytokine-producing cells and activation of the sympathetic nervous systems and adrenergic receptors on macrophages (11). Depression enhances the production of inflammatory cytokines (12–14). Overproduction of inflammatory cytokines may stimulate corticotropin-releasing hormone production, a mechanism that leads to hypothalamic-pituitary axis activity. Conversely, cytokines induce depressive-like behaviors; in studies where healthy participants were given endotoxin infusions to trigger cytokine release, the participants developed classic depressive symptoms (15). Based on this evidence, it could be hypothesized that inflammation is the common biological pathway underlying the association between diabetes and depression.”

“[F]ew studies have examined the clinical role of inflammation and depression as biological correlates in patients with diabetes. […] In this study, we hypothesized that high CRP [C-reactive protein] levels were associated with the high prevalence of depression in patients with diabetes and that this association may be modified by obesity or glycemic control. […] Patient data were derived from the second-year survey of a diabetes registry at Tenri Hospital, a regional tertiary care teaching hospital in Japan. […] 3,573 patients […] were included in the study. […] Overall, mean age, HbA1c level, and BMI were 66.0 years, 7.4% (57.8 mmol/mol), and 24.6 kg/m2, respectively. Patients with major depression tended to be relatively young […] and female […] with a high BMI […], high HbA1c levels […], and high hs-CRP levels […]; had more diabetic nephropathy […], required more insulin therapy […], and exercised less […]”.

“In conclusion, we observed that hs-CRP levels were associated with a high prevalence of major depression in patients with type 2 diabetes with a BMI of ≥25 kg/m2. […] In patients with a BMI of <25 kg/m2, no significant association was found between hs-CRP quintiles and major depression […] We did not observe a significant association between hs-CRP and major depression in either of HbA1c subgroups. […] Our results show that the association between hs-CRP and diabetes is valid even in an Asian population, but it might not be extended to nonobese subjects. […] several factors such as obesity and glycemic control may modify the association between inflammation and depression. […] Obesity is strongly associated with chronic inflammation.”

iv. A Novel Association Between Nondipping and Painful Diabetic Polyneuropathy.

“Sleep problems are common in painful diabetic polyneuropathy (PDPN) (1) and contribute to the effect of pain on quality of life. Nondipping (the absence of the nocturnal fall in blood pressure [BP]) is a recognized feature of diabetic cardiac autonomic neuropathy (CAN) and is attributed to the abnormal prevalence of nocturnal sympathetic activity (2). […] This study aimed to evaluate the relationship of the circadian pattern of BP with both neuropathic pain and pain-related sleep problems in PDPN […] Investigating the relationship between PDPN and BP circadian pattern, we found patients with PDPN exhibited impaired nocturnal decrease in BP compared with those without neuropathy, as well as higher nocturnal systolic BP than both those without DPN and with painless DPN. […] in multivariate analysis including comorbidities and most potential confounders, neuropathic pain was an independent determinant of ∆ in BP and nocturnal systolic BP.”

“PDPN could behave as a marker for the presence and severity of CAN. […] PDPN should increasingly be regarded as a condition of high cardiovascular risk.”

v. Reduced Testing Frequency for Glycated Hemoglobin, HbA1c, Is Associated With Deteriorating Diabetes Control.

I think a potentially important take-away from this paper – one the authors don’t really discuss – concerns missing data. When you’re analyzing time series data in which HbA1c is available at the individual level at some base frequency, and you encounter individuals for whom HbA1c is unobserved for some periods, or is observed less frequently than you’d expect, such (implicit) missing values may not be missing at random (for more on these topics see e.g. this post). More specifically, in light of the findings of this paper I think it would make a lot of sense to default to the assumption that missing values indicate worse-than-average metabolic control during the unobserved part of the time series, especially when doing time-to-event analyses in contexts where the values are missing for an extended period of time.
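
To make the suggestion concrete, here’s a minimal sketch (hypothetical column names and a hypothetical 180-day threshold – nothing from the paper) of how one might flag suspicious gaps in a long-format test data set:

```python
import pandas as pd

# Hypothetical long-format data: one row per HbA1c test.
df = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "test_date": pd.to_datetime(["2015-01-10", "2015-04-12", "2016-03-01",
                                 "2015-02-01", "2015-05-05"]),
    "hba1c": [7.1, 7.0, 8.2, 6.5, 6.6],
})

df = df.sort_values(["patient_id", "test_date"])
gap_days = df.groupby("patient_id")["test_date"].diff().dt.days

# Flag intervals much longer than roughly quarterly testing; under the
# missing-not-at-random assumption discussed above, such periods would be
# treated as likely worse-than-average control, not as ignorable missingness.
df["long_gap_before_test"] = gap_days > 180
print(df)
```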

The authors of the paper treat metabolic control as an outcome to be explained by the testing frequency. That’s one way to approach these things, but it’s not the only one: it’s worth keeping in mind that some patients make a conscious decision not to show up for their appointments/tests, so the testing frequency is not fully determined by the medical staff, although the staff of course have an important impact on this variable.

Some observations from the paper:

“We examined repeat HbA1c tests (400,497 tests in 79,409 patients, 2008–2011) processed by three U.K. clinical laboratories. We examined the relationship between retest interval and 1) percentage change in HbA1c and 2) proportion of cases showing a significant HbA1c rise. The effect of demographic factors on these findings was also explored. […] Figure 1 shows the relationship between repeat requesting interval (categorized in 1-month intervals) and percentage change in HbA1c concentration in the total data set. From 2 months onward, there was a direct relationship between retesting interval and control. A testing frequency of >6 months was associated with deterioration in control. The optimum testing frequency in order to maximize the downward trajectory in HbA1c between two tests was approximately four times per year. Our data also indicate that testing more frequently than 2 months has no benefit over testing every 2–4 months. Relative to the 2–3 month category, all other categories demonstrated statistically higher mean change in HbA1c (all P < 0.001). […] similar patterns were observed for each of the three centers, with the optimum interval to improvement in overall control at ∼3 months across all centers.”

“[I]n patients with poor control, the pattern was similar to that seen in the total group, except that 1) there was generally a more marked decrease or more modest increase in change of HbA1c concentration throughout and, consequently, 2) a downward trajectory in HbA1c was observed when the interval between tests was up to 8 months, rather than the 6 months as seen in the total group. In patients with a starting HbA1c of <6% (<42 mmol/mol), there was a generally linear relationship between interval and increase in HbA1c, with all intervals demonstrating an upward change in mean HbA1c. The intermediate group showed a similar pattern as those with a starting HbA1c of <6% (<42 mmol/mol), but with a steeper slope.”

“In order to examine the potential link between monitoring frequency and the risk of major deterioration in control, we then assessed the relationship between testing interval and proportion of patients demonstrating an increase in HbA1c beyond the normal biological and analytical variation in HbA1c […] Using this definition of significant increase as a ≥9.9% rise in subsequent HbA1c, our data show that the proportion of patients showing this magnitude of rise increased month to month, with increasing intervals between tests for each of the three centers. […] testing at 2–3-monthly intervals would, at a population level, result in a marked reduction in the proportion of cases demonstrating a significant increase compared with annual testing […] irrespective of the baseline HbA1c, there was a generally linear relationship between interval and the proportion demonstrating a significant increase in HbA1c, though the slope of this relationship increased with rising initial HbA1c.”

“Previous data from our and other groups on requesting patterns indicated that relatively few patients in general practice were tested annually (5,6). […] Our data indicate that for a HbA1c retest interval of more than 2 months, there was a direct relationship between retesting interval and control […], with a retest frequency of greater than 6 months being associated with deterioration in control. The data showed that for diabetic patients as a whole, the optimum repeat testing interval should be four times per year, particularly in those with poorer diabetes control (starting HbA1c >7% [≥53 mmol/mol]). […] The optimum retest interval across the three centers was similar, suggesting that our findings may be unrelated to clinical laboratory factors, local policies/protocols on testing, or patient demographics.”
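
For the statistically curious, the paper’s core computation is simple to replicate in outline. Continuing with the hypothetical df from the sketch above (again my illustration, not the authors’ code), bin consecutive-test intervals into months and track both the mean percentage change in HbA1c and the proportion of ≥9.9% rises per bin:

```python
import numpy as np
import pandas as pd

# Reusing the hypothetical long-format df (patient_id, test_date, hba1c) from above.
df = df.sort_values(["patient_id", "test_date"])
g = df.groupby("patient_id")
interval_months = g["test_date"].diff().dt.days / 30.44  # interval between consecutive tests
pct_change = g["hba1c"].pct_change() * 100               # % change vs the previous test

pairs = pd.DataFrame({
    "interval": pd.cut(interval_months, bins=np.arange(0, 25)),  # 1-month bins
    "pct_change": pct_change,
    "significant_rise": pct_change >= 9.9,  # threshold used in the paper
}).dropna(subset=["interval"])

print(pairs.groupby("interval", observed=True)
           .agg(mean_change=("pct_change", "mean"),
                prop_rise=("significant_rise", "mean")))
```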

It’s worth mentioning that there are notable cross-country differences in how often people with diabetes have their HbA1c measured – I’m unsure whether standards have changed since then, but at least in Denmark, a specific treatment goal of the Danish Regions a few years ago was that 95% of diabetics should have had their HbA1c measured within the last year (here’s a relevant link to some stuff I wrote about related topics a while back).

October 2, 2017 Posted by | Cardiology, Diabetes, Immunology, Medicine, Neurology, Psychology, Statistics, Studies

The Biology of Moral Systems (III)

This will be my last post about the book. It’s an important work which deserves to be read by far more people than have already read it. I have added some quotes and observations from the last chapters of the book below.

“If egoism, as self-interest in the biologists’ sense, is the reason for the promotion of ethical behavior, then, paradoxically, it is expected that everyone will constantly promote the notion that egoism is not a suitable theory of action, and, a fortiori, that he himself is not an egoist. Most of all he must present this appearance to his closest associates because it is in his best interests to do so – except, perhaps, to his closest relatives, to whom his egoism may often be displayed in cooperative ventures from which some distant- or non-relative suffers. Indeed, it may be arguable that it will be in the egoist’s best interest not to know (consciously) or to admit to himself that he is an egoist because of the value to himself of being able to convince others he is not.”

“The function of [societal] punishments and rewards, I have suggested, is to manipulate the behavior of participating individuals, restricting individual efforts to serve their own interests at others’ expense so as to promote harmony and unity within the group. The function of harmony and unity […] is to allow the group to compete against hostile forces, especially other human groups. It is apparent that success of the group may serve the interests of all individuals in the group; but it is also apparent that group success can be achieved with different patterns of individual success differentials within the group. So […] it is in the interests of those who are differentially successful to promote both unity and the rules so that group success will occur without necessitating changes deleterious to them. Similarly, it may be in the interests of those individuals who are relatively unsuccessful to promote dissatisfaction with existing rules and the notion that group success would be more likely if the rules were altered to favor them. […] the rules of morality and law alike seem not to be designed explicitly to allow people to live in harmony within societies but to enable societies to be sufficiently united to deter their enemies. Within-society harmony is the means not the end. […] extreme within-group altruism seems to correlate with and be historically related to between-group strife.”

“There are often few or no legitimate or rational expectations of reciprocity or “fairness” between social groups (especially warring or competing groups such as tribes or nations). Perhaps partly as a consequence, lying, deceit, or otherwise nasty or even heinous acts committed against enemies may sometimes not be regarded as immoral by others within the group of those who commit them. They may even be regarded as highly moral if they seem dramatically to serve the interests of the group whose members commit them.”

“Two major assumptions, made universally or most of the time by philosophers, […] are responsible for the confusion that prevents philosophers from making sense out of morality […]. These assumptions are the following: 1. That proximate and ultimate mechanisms or causes have the same kind of significance and can be considered together as if they were members of the same class of causes; this is a failure to understand that proximate causes are evolved because of ultimate causes, and therefore may be expected to serve them, while the reverse is not true. Thus, pleasure is a proximate mechanism that in the usual environments of history is expected to impel us toward behavior that will contribute to our reproductive success. Contrarily, acts leading to reproductive success are not proximate mechanisms that evolved because they served the ultimate function of bringing us pleasure. 2. That morality inevitably involves some self-sacrifice. This assumption involves at least three elements: a. Failure to consider altruism as benefits to the actor. […] b. Failure to comprehend all avenues of indirect reciprocity within groups. c. Failure to take into account both within-group and between-group benefits.”

“If morality means true sacrifice of one’s own interests, and those of his family, then it seems to me that we could not have evolved to be moral. If morality requires ethical consistency, whereby one does not do socially what he would not advocate and assist all others also to do, then, again, it seems to me that we could not have evolved to be moral. […] humans are not really moral at all, in the sense of “true sacrifice” given above, but […] the concept of morality is useful to them. […] If it is so, then we might imagine that, in the sense and to the extent that they are anthropomorphized, the concepts of saints and angels, as well as that of God, were also created because of their usefulness to us. […] I think there have been far fewer […] truly self-sacrificing individuals than might be supposed, and most cases that might be brought forward are likely instead to be illustrations of the complexity and indirectness of reciprocity, especially the social value of appearing more altruistic than one is. […] I think that […] the concept of God must be viewed as originally generated and maintained for the purpose – now seen by many as immoral – of furthering the interests of one group of humans at the expense of one or more other groups. […] Gods are inventions originally developed to extend the notion that some have greater rights than others to design and enforce rules, and that some are more destined to be leaders, others to be followers. This notion, in turn, arose out of prior asymmetries in both power and judgment […] It works when (because) leaders are (have been) valuable, especially in the context of intergroup competition.”

“We try to move moral issues in the direction of involving no conflict of interest, always, I suggest, by seeking universal agreement with our own point of view.”

“Moral and legal systems are commonly distinguished by those, like moral philosophers, who study them formally. I believe, however, that the distinction between them is usually poorly drawn, and based on a failure to realize that moral as well as legal behavior occurs as a result of probable and possible punishments and rewards. […] we often internalize the rules of law as well as the rules of morality – and perhaps by the same process […] It would seem that the rules of law are simply a specialized, derived aspect of what in earlier societies would have been a part of moral rules. On the other hand, law covers only a fraction of the situations in which morality is involved […] Law […] seems to be little more than ethics written down.”

“Anyone who reads the literature on dispute settlement within different societies […] will quickly understand that genetic relatedness counts: it allows for one-way flows of benefits and alliances. Long-term association also counts; it allows for reliability and also correlates with genetic relatedness. […] The larger the social group, the more fluid its membership; and the more attenuated the social interactions of its membership, the more they are forced to rely on formal law”.

“[I]ndividuals have separate interests. They join forces (live in groups; become social) when they share certain interests that can be better realized for all by close proximity or some forms of cooperation. Typically, however, the overlaps of interests rarely are completely congruent with those of either other individuals or the rest of the group. This means that, even during those times when individual interests within a group are most broadly overlapping, we may expect individuals to temper their cooperation with efforts to realize their own interests, and we may also expect them to have evolved to be adept at using others, or at thwarting the interests of others, to serve themselves (and their relatives). […] When the interests of all are most nearly congruent, it is essentially always due to a threat shared equally. Such threats almost always have to be external (or else they are less likely to affect everyone equally […] External threats to societies are typically other societies. Maintenance of such threats can yield situations in which everyone benefits from rigid, hierarchical, quasi-military, despotic government. Liberties afforded leaders – even elaborate perquisites of dictators – may be tolerated because such threats are ever-present […] Extrinsic threats, and the governments they produce, can yield inflexibilities of political structures that can persist across even lengthy intervals during which the threats are absent. Some societies have been able to structure their defenses against external threats as separate units (armies) within society, and to keep them separate. These rigidly hierarchical, totalitarian, and dictatorial subunits rise and fall in size and influence according to the importance of the external threat. […] Discussion of liberty and equality in democracies closely parallels discussions of morality and moral systems. In either case, adding a perspective from evolutionary biology seems to me to have potential for clarification.”

“It is indeed common, if not universal, to regard moral behavior as a kind of altruism that necessarily yields the altruist less than he gives, and to see egoism as either the opposite of morality or the source of immorality; but […] this view is usually based on an incomplete understanding of nepotism, reciprocity, and the significance of within-group unity for between-group competition. […] My view of moral systems in the real world, however, is that they are systems in which costs and benefits of specific actions are manipulated so as to produce reasonably harmonious associations in which everyone nevertheless pursues his own (in evolutionary terms) self-interest. I do not expect that moral and ethical arguments can ever be finally resolved. Compromises and contracts, then, are (at least currently) the only real solutions to actual conflicts of interest. This is why moral and ethical decisions must arise out of decisions of the collective of affected individuals; there is no single source of right and wrong.

I would also argue against the notion that rationality can be easily employed to produce a world of humans that self-sacrifice in favor of other humans, not to say nonhuman animals, plants, and inanimate objects. Declarations of such intentions may themselves often be the acts of self-interested persons developing, consciously or not, a socially self-benefiting view of themselves as extreme altruists. In this connection it is not irrelevant that the more dissimilar a species or object is to one’s self the less likely it is to provide a competitive threat by seeking the same resources. Accordingly, we should not be surprised to find humans who are highly benevolent toward other species or inanimate objects (some of which may serve them uncomplainingly), yet relatively hostile and noncooperative with fellow humans. As Darwin (1871) noted with respect to dogs, we have selected our domestic animals to return our altruism with interest.”

“It is not easy to discover precisely what historical differences have shaped current male-female differences. If, however, humans are in a general way similar to other highly parental organisms that live in social groups […] then we can hypothesize as follows: for men much of sexual activity has had as a main (ultimate) significance the initiating of pregnancies. It would follow that when a man avoids copulation it is likely to be because (1) there is no likelihood of pregnancy or (2) the costs entailed (venereal disease, danger from competition with other males, lowered status if the event becomes public, or an undesirable commitment) are too great in comparison with the probability that pregnancy will be induced. The man himself may be judging costs against the benefits of immediate sensory pleasures, such as orgasms (i.e., rather than thinking about pregnancy he may say that he was simply uninterested), but I am assuming that selection has tuned such expectations in terms of their probability of leading to actual reproduction […]. For women, I hypothesize, sexual activity per se has been more concerned with the securing of resources (again, I am speaking of ultimate and not necessarily conscious concerns) […]. Ordinarily, when women avoid or resist copulation, I speculate further, the disinterest, aversion, or inhibition may be traceable eventually to one (or more) of three causes: (1) there is no promise of commitment (of resources), (2) there is a likelihood of undesirable commitment (e.g., to a man with inadequate resources), or (3) there is a risk of loss of interest by a man with greater resources than the one involved […] A man behaving so as to avoid pregnancies, and who derives from an evolutionary background of avoiding pregnancies, should be expected to favor copulation with women who are for age or other reasons incapable of pregnancy. A man derived from an evolutionary process in which securing of pregnancies typically was favored, may be expected to be most interested sexually in women most likely to become pregnant and near the height of the reproductive probability curve […] This means that men should usually be expected to anticipate the greatest sexual pleasure with young, healthy, intelligent women who show promise of providing superior parental care. […] In sexual competition, the alternatives of a man without resources are to present himself as a resource (i.e., as a mimic of one with resources or as one able and likely to secure resources because of his personal attributes […]), to obtain sex by force (rape), or to secure resources through a woman (e.g., allow himself to be kept by a relatively undesired woman, perhaps as a vehicle to secure liaisons with other women). […] in nonhuman species of higher animals, control of the essential resources of parenthood by females correlates with lack of parental behavior by males, promiscuous polygyny, and absence of long-term pair bonds. There is some evidence of parallel trends within human societies (cf. Flinn, 1981).” [It’s of some note that quite a few good books have been written on these topics since Alexander first published his book, so there are many places to look for detailed coverage of topics like these if you’re curious to know more – I can recommend both Kappeler & van Schaik (a must-read book on sexual selection, in my opinion) & Bobby Low. I didn’t think too highly of Miller or Meston & Buss, but those are a few other books on these topics which I’ve read – US].

“The reason that evolutionary knowledge has no moral content is [that] morality is a matter of whose interests one should, by conscious and willful behavior, serve, and how much; evolutionary knowledge contains no messages on this issue. The most it can do is provide information about the reasons for current conditions and predict some consequences of alternative courses of action. […] If some biologists and nonbiologists make unfounded assertions into conclusions, or develop pernicious and fallible arguments, then those assertions and arguments should be exposed for what they are. The reason for doing this, however, is not […should not be..? – US] to prevent or discourage any and all analyses of human activities, but to enable us to get on with a proper sort of analysis. Those who malign without being specific; who attack people rather than ideas; who gratuitously translate hypotheses into conclusions and then refer to them as “explanations,” “stories,” or “just-so-stories”; who parade the worst examples of argument and investigation with the apparent purpose of making all efforts at human self-analysis seem silly and trivial, I see as dangerously close to being ideologues at least as worrisome as those they malign. I cannot avoid the impression that their purpose is not to enlighten, but to play upon the uneasiness of those for whom the approach of evolutionary biology is alien and disquieting, perhaps for political rather than scientific purposes. It is more than a little ironic that the argument of politics rather than science is their own chief accusation with respect to scientists seeking to analyze human behavior in evolutionary terms (e.g. Gould and Lewontin, 1979 […]).”

“[C]urrent selective theory indicates that natural selection has never operated to prevent species extinction. Instead it operates by saving the genetic materials of those individuals or families that outreproduce others. Whether species become extinct or not (and most have) is an incidental or accidental effect of natural selection. An inference from this is that the members of no species are equipped, as a direct result of their evolutionary history, with traits designed explicitly to prevent extinction when that possibility looms. […] Humans are no exception: unless their comprehension of the likelihood of extinction is so clear and real that they perceive the threat to themselves as individuals, and to their loved ones, they cannot be expected to take the collective action that will be necessary to reduce the risk of extinction.”

“In examining ourselves […] we are forced to use the attributes we wish to analyze to carry out the analysis, while resisting certain aspects of the analysis. At the very same time, we pretend that we are not resisting at all but are instead giving perfectly legitimate objections; and we use our realization that others will resist the analysis, for reasons as arcane as our own, to enlist their support in our resistance. And they very likely will give it. […] If arguments such as those made here have any validity it follows that a problem faced by everyone, in respect to morality, is that of discovering how to subvert or reduce some aspects of individual selfishness that evidently derive from our history of genetic individuality.”

“Essentially everyone thinks of himself as well-meaning, but from my viewpoint a society of well-meaning people who understand themselves and their history very well is a better milieu than a society of well-meaning people who do not.”

September 22, 2017 Posted by | Anthropology, Biology, Books, Evolutionary biology, Genetics, Philosophy, Psychology, Religion |

Depression and Heart Disease (II)

Below I have added some more observations from the book, which I gave four stars on goodreads.

“A meta-analysis of twin (and family) studies estimated the heritability of adult MDD around 40% [16] and this estimate is strikingly stable across different countries [17, 18]. If measurement error due to unreliability is taken into account by analysing MDD assessed on two occasions, heritability estimates increase to 66% [19]. Twin studies in children further show that there is already a large genetic contribution to depressive symptoms in youth, with heritability estimates varying between 50% and 80% [20–22]. […] Cardiovascular research in twin samples has suggested a clear-cut genetic contribution to hypertension (h2 = 61%) [30], fatal stroke (h2 = 32%) [31] and CAD (h2 = 57% in males and 38% in females) [32]. […] A very important, and perhaps underestimated, source of pleiotropy in the association of MDD and CAD are the major behavioural risk factors for CAD: smoking and physical inactivity. These factors are sometimes considered ‘environmental’, but twin studies have shown that such behaviours have a strong genetic component [33–35]. Heritability estimates for [many] established risk factors [for CAD – e.g. BMI, smoking, physical inactivity – US] are 50% or higher in most adult twin samples and these estimates remain remarkably similar across the adult life span [41–43].”

“The crucial question is whether the genetic factors underlying MDD also play a role in CAD and CAD risk factors. To test for an overlap in the genetic factors, a bivariate extension of the structural equation model for twin data can be used [57]. […] If the depressive symptoms in a twin predict the IL-6 level in his/her co-twin, this can only be explained by an underlying factor that affects both depression and IL-6 levels and is shared by members of a family. If the prediction is much stronger in MZ than in DZ twins, this signals that the underlying factor is their shared genetic make-up, rather than their shared (family) environment. […] It is important to note clearly here that genetic correlations do not prove the existence of pleiotropy, because genes that influence MDD may, through causal effects of MDD on CAD risk, also become ‘CAD genes’. The absence of a genetic correlation, however, can be used to falsify the existence of genetic pleiotropy. For instance, the hypothesis that genetic pleiotropy explains part of the association between depressive symptoms and IL-6 requires the genetic correlation between these traits to be significantly different from zero. [Furthermore,] the genetic correlation should have a positive value. A negative genetic correlation would signal that genes that increase the risk for depression decrease the risk for higher IL-6 levels, which would go against the genetic pleiotropy hypothesis. […] Su et al. [26] […] tested pleiotropy as a possible source of the association of depressive symptoms with Il-6 in 188 twin pairs of the Vietnam Era Twin (VET) Registry. The genetic correlation between depressive symptoms and IL-6 was found to be positive and significant (RA = 0.22, p = 0.046)”
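
The twin logic here can be made concrete with a Falconer-style back-of-the-envelope calculation. This is only a sketch with invented correlation values – the quoted studies fit proper structural equation models – but it shows how comparing MZ and DZ cross-twin cross-trait correlations yields a genetic correlation:

```python
# Back-of-the-envelope Falconer-style calculations illustrating the twin
# logic described above. All correlations below are invented for
# illustration; the quoted studies fit maximum-likelihood structural
# equation models rather than these simple difference formulas.

# Univariate: heritability from same-trait twin correlations,
# h2 = 2 * (rMZ - rDZ).
r_mz, r_dz = 0.45, 0.25
h2 = 2 * (r_mz - r_dz)  # 0.40

# Bivariate: cross-twin cross-trait (CTCT) correlations, e.g. depressive
# symptoms in twin 1 vs. IL-6 in twin 2. Under an additive-genetic model
# the genetic part of the cross-trait covariance is a1*a2*rA in MZ pairs
# and 0.5*a1*a2*rA in DZ pairs, so a1*a2*rA = 2*(CTCT_MZ - CTCT_DZ);
# dividing by a1*a2 = sqrt(h2_1 * h2_2) yields the genetic correlation rA.
ctct_mz, ctct_dz = 0.10, 0.05
h2_dep, h2_il6 = 0.40, 0.35  # assumed heritabilities of the two traits

r_a = 2 * (ctct_mz - ctct_dz) / (h2_dep * h2_il6) ** 0.5
print(f"h2 = {h2:.2f}, rA = {r_a:.2f}")  # h2 = 0.40, rA = 0.27
```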

“For the association between MDD and physical inactivity, the dominant hypothesis has not been that MDD causes a reduction in regular exercise, but instead that regular exercise may act as a protective factor against mood disorders. […] we used the twin method to perform a rigorous test of this popular hypothesis [on] 8558 twins and their family members using their longitudinal data across 2-, 4-, 7-, 9- and 11-year follow-up periods. In spite of sufficient statistical power, we found only the genetic correlation to be significant (ranging between −0.16 and −0.44 for different symptom scales and different time-lags). The environmental correlations were essentially zero. This means that the environmental factors that cause a person to take up exercise do not cause lower anxiety or depressive symptoms in that person, currently or at any future time point. In contrast, the genetic factors that cause a person to take up exercise also cause lower anxiety or depressive symptoms in that person, at the present and all future time points. This pattern of results falsifies the causal hypothesis and leaves genetic pleiotropy as the most likely source for the association between exercise and lower levels of anxiety and depressive symptoms in the population at large. […] Taken together, [the] studies support the idea that genetic pleiotropy may be a factor contributing to the increased risk for CAD in subjects suffering from MDD or reporting high counts of depressive symptoms. The absence of environmental correlations in the presence of significant genetic correlations for a number of the CAD risk factors (CFR, cholesterol, inflammation and regular exercise) suggests that pleiotropy is the sole reason for the association between MDD and these CAD risk factors, whereas for other CAD risk factors (e.g. smoking) and CAD incidence itself, pleiotropy may coexist with causal effects.”

“By far the most tested polymorphism in psychiatric genetics is a 43-base pair insertion or deletion in the promoter region of the serotonin transporter gene (5HTT, renamed SLC6A4). About 55% of Caucasians carry a long allele (L) with 16 repeat units. The short allele (S, with 14 repeat units) of this length polymorphism repeat (LPR) reduces transcriptional efficiency, resulting in decreased serotonin transporter expression and function [83]. Because serotonin plays a key role in one of the major theories of MDD [84], and because the most prescribed antidepressants act directly on this transporter, 5HTT is an obvious candidate gene for this disorder. […] The wealth of studies attempting to associate the 5HTTLPR to MDD or related personality traits tells a revealing story about the fate of most candidate genes in psychiatric genetics. Many conflicting findings have been reported, and the two largest studies failed to link the 5HTTLPR to depressive symptoms or clinical MDD [85, 86]. Even at the level of reviews and meta-analyses, conflicting conclusions have been drawn about the role of this polymorphism in the development of MDD [87, 88]. The initially promising explanation for discrepant findings – potential interactive effects of the 5HTTLPR and stressful life events [89] – did not survive meta-analysis [90].”

“Across the board, overlooking the wealth of candidate gene studies on MDD, one is inclined to conclude that this approach has failed to unambiguously identify genetic variants involved in MDD […]. Hope is now focused on the newer GWA [genome wide association] approach. […] At the time of writing, only two GWA studies had been published on MDD [81, 95]. […] In theory, the strategy to identify potential pleiotropic genes in the MDD–CAD relationship is extremely straightforward. We simply select the genes that occur in the lists of confirmed genes from the GWA studies for both traits. In practice, this is hard to do, because genetics in psychiatry is clearly lagging behind genetics in cardiology and diabetes medicine. […] What is shown by the reviewed twin studies is that some genetic variants may influence MDD and CAD risk factors. This can occur through one of three mechanisms: (a) the genetic variants that increase the risk for MDD become part of the heritability of CAD through a causal effect of MDD on CAD risk factors (causality); (b) the genetic variants that increase the risk for CAD become part of the heritability of MDD through a direct causal effect of CAD on MDD (reverse causality); (c) the genetic variants influence shared risk factors that independently increase the risk for MDD as well as CAD (pleiotropy). I suggest that to fully explain the MDD–CAD association we need to be willing to be open to the possibility that these three mechanisms co-exist. Even in the presence of true pleiotropic effects, MDD may influence CAD risk factors, and having CAD in turn may worsen the course of MDD.”

“Patients with depression are more likely to exhibit several unhealthy behaviours or avoid other health-promoting ones than those without depression. […] Patients with depression are more likely to have sleep disturbances [6]. […] sleep deprivation has been linked with obesity, diabetes and the metabolic syndrome [13]. […] Physical inactivity and depression display a complex, bidirectional relationship. Depression leads to physical inactivity and physical inactivity exacerbates depression [19]. […] smoking rates among those with depression are about twice that of the general population [29]. […] Poor attention to self-care is often a problem among those with major depressive disorder. In the most severe cases, those with depression may become inattentive to their personal hygiene. One aspect of this relationship that deserves special attention with respect to cardiovascular disease is the association of depression and periodontal disease. […] depression is associated with poor adherence to medical treatment regimens in many chronic illnesses, including heart disease. […] There is some evidence that among patients with an acute coronary syndrome, improvement in depression is associated with improvement in adherence. […] Individuals with depression are often socially withdrawn or isolated. It has been shown that patients with heart disease who are depressed have less social support [64], and that social isolation or poor social support is associated with increased mortality in heart disease patients [65–68]. […] [C]linicians who make recommendations to patients recovering from a heart attack should be aware that low levels of social support and social isolation are particularly common among depressed individuals and that high levels of social support appear to protect patients from some of the negative effects of depression [78].”

“Self-efficacy describes an individual’s self-confidence in his/her ability to accomplish a particular task or behaviour. Self-efficacy is an important construct to consider when one examines the psychological mechanisms linking depression and heart disease, since it influences an individual’s engagement in behaviour and lifestyle changes that may be critical to improving cardiovascular risk. Many studies on individuals with chronic illness show that depression is often associated with low self-efficacy [95–97]. […] Low self-efficacy is associated with poor adherence behaviour in patients with heart failure [101]. […] Much of the interest in self-efficacy comes from the fact that it is modifiable. Self-efficacy-enhancing interventions have been shown to improve cardiac patients’ self-efficacy and thereby improve cardiac health outcomes [102]. […] One problem with targeting self-efficacy in depressed heart disease patients is [however] that depressive symptoms reduce the effects of self-efficacy-enhancing interventions [105, 106].”

“Taken together, [the] SADHART and ENRICHD [studies] suggest, but do not prove, that antidepressant drug therapy in general, and SSRI treatment in particular, improve cardiovascular outcomes in depressed post-acute coronary syndrome (ACS) patients. […] even large epidemiological studies of depression and antidepressant treatment are not usually informative, because they confound the effects of depression and antidepressant treatment. […] However, there is one Finnish cohort study in which all subjects […] were followed up through a nationwide computerised database [17]. The purpose of this study was not to examine the relationship between depression and cardiac mortality, but rather to look at the relationship between antidepressant use and suicide. […] unexpectedly, ‘antidepressant use, and especially SSRI use, was associated with a marked reduction in total mortality (−49%, p < 0.001), mostly attributable to a decrease in cardiovascular deaths’. The study involved 15 390 patients with a mean follow-up of 3.4 years […] One of the marked differences between the SSRIs and the earlier tricyclic antidepressants is that the SSRIs do not cause cardiac death in overdose as the tricyclics do [41]. There has been literature that suggested that tricyclics even at therapeutic doses could be cardiotoxic and more problematic than SSRIs [42, 43]. What has been surprising is that both in the clinical trial data from ENRICHD and the epidemiological data from Finland, tricyclic treatment has also been associated with a decreased risk of mortality. […] Given that SSRI treatment of depression in the post-ACS period is safe, effective in reducing depressed mood, able to improve health behaviours and may reduce subsequent cardiac morbidity and mortality, it would seem obvious that treating depression is strongly indicated. However, the vast majority of post-ACS patients will not see a psychiatrically trained professional and many cases are not identified [33].”

“That depression is associated with cardiovascular morbidity and mortality is no longer open to question. Similarly, there is no question that the risk of morbidity and mortality increases with increasing severity of depression. Questions remain about the mechanisms that underlie this association, whether all types of depression carry the same degree of risk and to what degree treating depression reduces that risk. There is no question that the benefits of treating depression associated with coronary artery disease far outweigh the risks.”

“Two competing trends are emerging in research on psychotherapy for depression in cardiac patients. First, the few rigorous RCTs that have been conducted so far have shown that even the most efficacious of the current generation of interventions produce relatively modest outcomes. […] Second, there is a growing recognition that, even if an intervention is highly efficacious, it may be difficult to translate into clinical practice if it requires intensive or extensive contacts with a highly trained, experienced, clinically sophisticated psychotherapist. It can even be difficult to implement such interventions in the setting of carefully controlled, randomised efficacy trials. Consequently, there are efforts to develop simpler, more efficient interventions that can be delivered by a wider variety of interventionists. […] Although much more work remains to be done in this area, enough is already known about psychotherapy for comorbid depression in heart disease to suggest that a higher priority should be placed on translation of this research into clinical practice. In many cases, cardiac patients do not receive any treatment for their depression.”

August 14, 2017 Posted by | Books, Cardiology, Diabetes, Genetics, Medicine, Pharmacology, Psychiatry, Psychology |

Depression and Heart Disease (I)

I’m currently reading this book. It’s a great book, with lots of interesting observations.

Below I’ve added some quotes from the book.

“Frasure-Smith et al. [1] demonstrated that patients diagnosed with depression post MI [myocardial infarction, US] were more than five times more likely to die from cardiac causes by 6 months than those without major depression. At 18 months, cardiac mortality had reached 20% in patients with major depression, compared with only 3% in non-depressed patients [5]. Recent work has confirmed and extended these findings. A meta-analysis of 22 studies of post-MI subjects found that post-MI depression was associated with a 2.0–2.5-fold increased risk of negative cardiovascular outcomes [6]. Another meta-analysis examining 20 studies of subjects with MI, coronary artery bypass graft (CABG), angioplasty or angiographically documented CAD found a twofold increased risk of death among depressed compared with non-depressed patients [7]. Though studies included in these meta-analyses had substantial methodological variability, the overall results were quite similar [8].”

“Blumenthal et al. [31] published the largest cohort study (N = 817) to date on depression in patients undergoing CABG and measured depression scores, using the CES-D, before and at 6 months after CABG. Of those patients, 26% had minor depression (CES-D score 16–26) and 12% had moderate to severe depression (CES-D score ≥27). Over a mean follow-up of 5.2 years, the risk of death, compared with those without depression, was 2.4 (HR adjusted; 95% CI 1.4, 4.0) in patients with moderate to severe depression and 2.2 (95% CI 1.2, 4.2) in those whose depression persisted from baseline to follow-up at 6 months. This is one of the few studies that found a dose response (in terms of severity and duration) between depression and death in CABG in particular and in CAD in general.”

“Of the patients with known CAD but no recent MI, 12–23% have major depressive disorder by DSM-III or DSM-IV criteria [20, 21]. Two studies have examined the prognostic association of depression in patients whose CAD was confirmed by angiography. […] In [Carney et al.], a diagnosis of major depression by DSM-III criteria was the best predictor of cardiac events (MI, bypass surgery or death) at 1 year, more potent than other clinical risk factors such as impaired left ventricular function, severity of coronary disease and smoking among the 52 patients. The relative risk of a cardiac event was 2.2 times higher in patients with major depression than those with no depression.[…] Barefoot et al. [23] provided a larger sample size and longer follow-up duration in their study of 1250 patients who had undergone their first angiogram. […] Compared with non-depressed patients, those who were moderately to severely depressed had 69% higher odds of cardiac death and 78% higher odds of all-cause mortality. The mildly depressed had a 38% higher risk of cardiac death and a 57% higher risk of all-cause mortality than non-depressed patients.”

“Ford et al. [43] prospectively followed all male medical students who entered the Johns Hopkins Medical School from 1948 to 1964. At entry, the participants completed questionnaires about their personal and family history, health status and health behaviour, and underwent a standard medical examination. The cohort was then followed after graduation by mailed, annual questionnaires. The incidence of depression in this study was based on the mailed surveys […] 1190 participants [were included in the] analysis. The cumulative incidence of clinical depression in this population at 40 years of follow-up was 12%, with no evidence of a temporal change in the incidence. […] In unadjusted analysis, clinical depression was associated with an almost twofold higher risk of subsequent CAD. This association remained after adjustment for time-dependent covariates […]. The relative risk ratio for CAD development with versus without clinical depression was 2.12 (95% CI 1.24, 3.63), as was their relative risk ratio for future MI (95% CI 1.11, 4.06), after adjustment for age, baseline serum cholesterol level, parental MI, physical activity, time-dependent smoking, hypertension and diabetes. The median time from the first episode of clinical depression to first CAD event was 15 years, with a range of 1–44 years.”

“In the Women’s Ischaemia Syndrome Evaluation (WISE) study, 505 women referred for coronary angiography were followed for a mean of 4.9 years and completed the BDI [46]. Significantly increased mortality and cardiovascular events were found among women with elevated BDI scores, even after adjustment for age, cholesterol, stenosis score on angiography, smoking, diabetes, education, hypertension and body mass index (RR 3.1; 95% CI 1.5, 6.3). […] Further compelling evidence comes from a meta-analysis of 28 studies comprising almost 80 000 subjects [47], which demonstrated that, despite heterogeneity and differences in study quality, depression was consistently associated with increased risk of cardiovascular diseases in general, including stroke.”

“The preponderance of evidence strongly suggests that depression is a risk factor for CAD [coronary artery disease, US] development. […] In summary, it is fair to conclude that depression plays a significant role in CAD development, independent of conventional risk factors, and its adverse impact endures over time. The impact of depression on the risk of MI is probably similar to that of smoking [52]. […] Results of longitudinal cohort studies suggest that depression occurs before the onset of clinically significant CAD […] Recent brain imaging studies have indicated that lesions resulting from cerebrovascular insufficiency may lead to clinical depression [54, 55]. Depression may be a clinical manifestation of atherosclerotic lesions in certain areas of the brain that cause circulatory deficits. The depression then exacerbates the onset of CAD. The exact aetiological mechanism of depression and CAD development remains to be clarified.”

“Rutledge et al. [65] conducted a meta-analysis in 2006 in order to better understand the prevalence of depression among patients with CHF and the magnitude of the relationship between depression and clinical outcomes in the CHF population. They found that clinically significant depression was present in 21.5% of CHF patients, varying by the use of questionnaires versus diagnostic interview (33.6% and 19.3%, respectively). The combined results suggested higher rates of death and secondary events (RR 2.1; 95% CI 1.7, 2.6), and trends toward increased health care use and higher rates of hospitalisation and emergency room visits among depressed patients.”

“In the past 15 years, evidence has been provided that physically healthy subjects who suffer from depression are at increased risk for cardiovascular morbidity and mortality [1, 2], and that the occurrence of depression in patients with either unstable angina [3] or myocardial infarction (MI) [4] increases the risk for subsequent cardiac death. Moreover, epidemiological studies have proved that cardiovascular disease is a risk factor for depression, since the prevalence of depression in individuals with a recent MI or with coronary artery disease (CAD) or congestive heart failure has been found to be significantly higher than in the general population [5, 6]. […] findings suggest a bidirectional association between depression and cardiovascular disease. The pathophysiological mechanisms underlying this association are, at present, largely unclear, but several candidate mechanisms have been proposed.”

“Autonomic nervous system dysregulation is one of the most plausible candidate mechanisms underlying the relationship between depression and ischaemic heart disease, since changes of autonomic tone have been detected in both depression and cardiovascular disease [7], and autonomic imbalance […] has been found to lower the threshold for ventricular tachycardia, ventricular fibrillation and sudden cardiac death in patients with CAD [8, 9]. […] Imbalance between prothrombotic and antithrombotic mechanisms and endothelial dysfunction have [also] been suggested to contribute to the increased risk of cardiac events in both medically well patients with depression and depressed patients with CAD. Depression has been consistently associated with enhanced platelet activation […] evidence has accumulated that selective serotonin reuptake inhibitors (SSRIs) reduce platelet hyperreactivity and hyperaggregation of depressed patients [39, 40] and reduce the release of the platelet/endothelial biomarkers β-thromboglobulin, P-selectin and E-selectin in depressed patients with acute CAD [41]. This may explain the efficacy of SSRIs in reducing the risk of mortality in depressed patients with CAD [42–44].”

“[S]everal studies have shown that reduced endothelium-dependent flow-mediated vasodilatation […] occurs in depressed adults with or without CAD [48–50]. Atherosclerosis with subsequent plaque rupture and thrombosis is the main determinant of ischaemic cardiovascular events, and atherosclerosis itself is now recognised to be fundamentally an inflammatory disease [56]. Since activation of inflammatory processes is common to both depression and cardiovascular disease, it would be reasonable to argue that the link between depression and ischaemic heart disease might be mediated by inflammation. Evidence has been provided that major depression is associated with a significant increase in circulating levels of both pro-inflammatory cytokines, such as IL-6 and TNF-α, and inflammatory acute phase proteins, especially the C-reactive protein (CRP) [57, 58], and that antidepressant treatment is able to normalise CRP levels irrespective of whether or not patients are clinically improved [59]. […] Vaccarino et al. [79] assessed specifically whether inflammation is the mechanism linking depression to ischaemic cardiac events and found that, in women with suspected coronary ischaemia, depression was associated with increased circulating levels of CRP and IL-6 and was a strong predictor of ischaemic cardiac events”

“Major depression has been consistently associated with hyperactivity of the HPA axis, with a consequent overstimulation of the sympathetic nervous system, which in turn results in increased circulating catecholamine levels and enhanced serum cortisol concentrations [68–70]. This may cause an imbalance in sympathetic and parasympathetic activity, which results in elevated heart rate and blood pressure, reduced HRV [heart rate variability], disruption of ventricular electrophysiology with increased risk of ventricular arrhythmias as well as an increased risk of atherosclerotic plaque rupture and acute coronary thrombosis. […] In addition, glucocorticoids mobilise free fatty acids, causing endothelial inflammation and excessive clotting, and are associated with hypertension, hypercholesterolaemia and glucose dysregulation [88, 89], which are risk factors for CAD.”

“Most of the literature on [the] comorbidity [between major depressive disorder (MDD) and coronary artery disease (CAD), US] has tended to favour the hypothesis of a causal effect of MDD on CAD, but reversed causality has also been suggested to contribute. Patients with severe CAD at baseline, and consequently a worse prognosis, may simply be more prone to report mood disturbances than less severely ill patients. Furthermore, in pre-morbid populations, incipient atherosclerosis in cerebral vessels may cause depressive symptoms before the onset of actual cardiac or cerebrovascular events, a variant of reverse causality known as the ‘vascular depression’ hypothesis [2]. To resolve causality, comorbidity between MDD and CAD has been addressed in longitudinal designs. Most prospective studies reported that clinical depression or depressive symptoms at baseline predicted higher incidence of heart disease at follow-up [1], which seems to favour the hypothesis of causal effects of MDD. We need to remind ourselves, however […] [that] [p]rospective associations do not necessarily equate causation. Higher incidence of CAD in depressed individuals may reflect the operation of common underlying factors on MDD and CAD that become manifest in mental health at an earlier stage than in cardiac health. […] [T]he association between MDD and CAD may be due to underlying genetic factors that lead to increased symptoms of anxiety and depression, but may also independently influence the atherosclerotic process. This phenomenon, where low-level biological variation has effects on multiple complex traits at the organ and behavioural level, is called genetic ‘pleiotropy’. If present in a time-lagged form, that is if genetic effects on MDD risk precede effects of the same genetic variants on CAD risk, this phenomenon can cause longitudinal correlations that mimic a causal effect of MDD.”


August 12, 2017 Posted by | Books, Cardiology, Genetics, Medicine, Neurology, Pharmacology, Psychiatry, Psychology |

A few diabetes papers of interest

i. Clinically Relevant Cognitive Impairment in Middle-Aged Adults With Childhood-Onset Type 1 Diabetes.

“Modest cognitive dysfunction is consistently reported in children and young adults with type 1 diabetes (T1D) (1). Mental efficiency, psychomotor speed, executive functioning, and intelligence quotient appear to be most affected (2); studies report effect sizes between 0.2 and 0.5 (small to modest) in children and adolescents (3) and between 0.4 and 0.8 (modest to large) in adults (2). Whether effect sizes continue to increase as those with T1D age, however, remains unknown.

A key issue not yet addressed is whether aging individuals with T1D have an increased risk of manifesting “clinically relevant cognitive impairment,” defined by comparing individual cognitive test scores to demographically appropriate normative means, as opposed to the more commonly investigated “cognitive dysfunction,” or between-group differences in cognitive test scores. Unlike the extensive literature examining cognitive impairment in type 2 diabetes, we know of only one prior study examining cognitive impairment in T1D (4). This early study reported a higher rate of clinically relevant cognitive impairment among children (10–18 years of age) diagnosed before compared with after age 6 years (24% vs. 6%, respectively) or a non-T1D cohort (6%).”

“This study tests the hypothesis that childhood-onset T1D is associated with an increased risk of developing clinically relevant cognitive impairment detectable by middle age. We compared cognitive test results between adults with and without T1D and used demographically appropriate published norms (10–12) to determine whether participants met criteria for impairment for each test; aging and dementia studies have selected a score ≥1.5 SD worse than the norm on that test, corresponding to performance at or below the seventh percentile (13).”
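
As an aside, the correspondence between the 1.5 SD cutoff and ‘the seventh percentile’ is just a property of the normal curve, easy to verify (assuming normally distributed test norms):

```python
# A score 1.5 SD below the normative mean lands at roughly the 7th
# percentile of a normal distribution.
from statistics import NormalDist

print(f"{NormalDist().cdf(-1.5):.1%}")  # 6.7%, i.e. at/below the 7th percentile
```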

“During 2010–2013, 97 adults diagnosed with T1D and aged <18 years (age and duration 49 ± 7 and 41 ± 6 years, respectively; 51% female) and 138 similarly aged adults without T1D (age 49 ± 7 years; 55% female) completed extensive neuropsychological testing. Biomedical data on participants with T1D were collected periodically since 1986–1988.  […] The prevalence of clinically relevant cognitive impairment was five times higher among participants with than without T1D (28% vs. 5%; P < 0.0001), independent of education, age, or blood pressure. Effect sizes were large (Cohen d 0.6–0.9; P < 0.0001) for psychomotor speed and visuoconstruction tasks and were modest (d 0.3–0.6; P < 0.05) for measures of executive function. Among participants with T1D, prevalent cognitive impairment was related to 14-year average A1c >7.5% (58 mmol/mol) (odds ratio [OR] 3.0; P = 0.009), proliferative retinopathy (OR 2.8; P = 0.01), and distal symmetric polyneuropathy (OR 2.6; P = 0.03) measured 5 years earlier; higher BMI (OR 1.1; P = 0.03); and ankle-brachial index ≥1.3 (OR 4.2; P = 0.01) measured 20 years earlier, independent of education.”

“Having T1D was the only factor significantly associated with the between-group difference in clinically relevant cognitive impairment in our sample. Traditional risk factors for age-related cognitive impairment, in particular older age and high blood pressure (24), were not related to the between-group difference we observed. […] Similar to previous studies of younger adults with T1D (14,26), we found no relationship between the number of severe hypoglycemic episodes and cognitive impairment. Rather, we found that chronic hyperglycemia, via its associated vascular and metabolic changes, may have triggered structural changes in the brain that disrupt normal cognitive function.”

Just to be absolutely clear about these results: The type 1 diabetics they recruited in this study were on average not yet fifty years old, yet more than one in four of them were cognitively impaired to a clinically relevant degree. This is a huge effect. As they note later in the paper:

“Unlike previous reports of mild/modest cognitive dysfunction in young adults with T1D (1,2), we detected clinically relevant cognitive impairment in 28% of our middle-aged participants with T1D. This prevalence rate in our T1D cohort is comparable to the prevalence of mild cognitive impairment typically reported among community-dwelling adults aged 85 years and older (29%) (20).”

The type 1 diabetics included in the study had had diabetes for roughly a decade more than I have. And the number of cognitively impaired individuals in that sample corresponds roughly to what you find when you test random 85+ year-olds. Having type 1 diabetes is not good for your brain.

ii. Comment on Nunley et al. Clinically Relevant Cognitive Impairment in Middle-Aged Adults With Childhood-Onset Type 1 Diabetes.

This one is a short comment on the above paper. Below I’ve quoted ‘the meat’ of the comment:

“While the […] study provides us with important insights regarding cognitive impairment in adults with type 1 diabetes, we regret that depression has not been taken into account. A systematic review and meta-analysis published in 2014 identified significant objective cognitive impairment in adults and adolescents with depression regarding executive functioning, memory, and attention relative to control subjects (2). Moreover, depression is two times more common in adults with diabetes compared with those without this condition, regardless of type of diabetes (3). There is even evidence that the co-occurrence of diabetes and depression leads to additional health risks such as increased mortality and dementia (3,4); this might well apply to cognitive impairment as well. Furthermore, in people with diabetes, the presence of depression has been associated with the development of diabetes complications, such as retinopathy, and higher HbA1c values (3). These are exactly the diabetes-specific correlates that Nunley et al. (1) found.”

“We believe it is a missed opportunity that Nunley et al. (1) mainly focused on biological variables, such as hyperglycemia and microvascular disease, and did not take into account an emotional disorder widely represented among people with diabetes and closely linked to cognitive impairment. Even though severe or chronic cases of depression are likely to have been excluded in the group without type 1 diabetes based on exclusion criteria (1), data on the presence of depression (either measured through a diagnostic interview or by using a validated screening questionnaire) could have helped to interpret the present findings. […] Determining the role of depression in the relationship between cognitive impairment and type 1 diabetes is of significant importance. Treatment of depression might improve cognitive impairment both directly by alleviating cognitive depression symptoms and indirectly by improving treatment nonadherence and glycemic control, consequently lowering the risk of developing complications.”

iii. Prevalence of Diabetes and Diabetic Nephropathy in a Large U.S. Commercially Insured Pediatric Population, 2002–2013.

“[W]e identified 96,171 pediatric patients with diabetes and 3,161 pediatric patients with diabetic nephropathy during 2002–2013. We estimated prevalence of pediatric diabetes overall, by diabetes type, age, and sex, and prevalence of pediatric diabetic nephropathy overall, by age, sex, and diabetes type.”

“Although type 1 diabetes accounts for a majority of childhood and adolescent diabetes, type 2 diabetes is becoming more common with the increasing rate of childhood obesity and it is estimated that up to 45% of all new patients with diabetes in this age-group have type 2 diabetes (1,2). With the rising prevalence of diabetes in children, a rise in diabetes-related complications, such as nephropathy, is anticipated. Moreover, data suggest that the development of clinical macrovascular complications, neuropathy, and nephropathy may be especially rapid among patients with young-onset type 2 diabetes (age of onset <40 years) (3–6). However, the natural history of young patients with type 2 diabetes and resulting complications has not been well studied.”

I’m always interested in the identification mechanisms applied in papers like this one, and I’m a little confused about the high number of patients without prescriptions (almost one-third of patients). I assume these patients do take (or are given) prescription drugs, but obtain them through channels invisible to the researchers – perhaps the prescriptions for the antidiabetic drugs are issued in the parents’ names, which the researchers would not have access to – but this is a bit unclear. The mechanism they employ in the paper is not perfect (no mechanism is), but it probably works:

“Patients who had one or more prescription(s) for insulin and no prescriptions for another antidiabetes medication were classified as having type 1 diabetes, while those who filled prescriptions for noninsulin antidiabetes medications were considered to have type 2 diabetes.”
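
The rule is simple enough to express directly; here is a minimal sketch (the input representation – a list of drug-class names per patient – is my own invention, not the paper’s data format):

```python
# Toy implementation of the quoted classification rule. The input format
# (a list of drug-class strings per patient) is invented here; the study
# itself worked with pharmacy claims records.
def classify_diabetes_type(prescriptions):
    """Insulin-only -> type 1; any noninsulin antidiabetes drug -> type 2."""
    has_insulin = "insulin" in prescriptions
    has_noninsulin = any(p != "insulin" for p in prescriptions)
    if has_noninsulin:
        return "type 2"
    if has_insulin:
        return "type 1"
    return "unclassified"  # e.g. the ~1/3 of patients with no prescriptions

print(classify_diabetes_type(["insulin"]))               # type 1
print(classify_diabetes_type(["insulin", "metformin"]))  # type 2
print(classify_diabetes_type([]))                        # unclassified
```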

When covering limitations of the paper, they observe incidentally in this context that:

“Klingensmith et al. (31) recently reported that in the initial month after diagnosis of type 2 diabetes around 30% of patients were treated with insulin only. Thus, we may have misclassified a small proportion of type 2 cases as type 1 diabetes or vice versa. Despite this, we found that 9% of patients had onset of type 2 diabetes at age <10 years, consistent with the findings of Klingensmith et al. (8%), but higher than reported by the SEARCH for Diabetes in Youth study (<3%) (31,32).”

Some more observations from the paper:

“There were 149,223 patients aged <18 years at first diagnosis of diabetes in the CCE database from 2002 through 2013. […] Type 1 diabetes accounted for a majority of the pediatric patients with diabetes (79%). Among these, 53% were male and 53% were aged 12 to <18 years at onset, while among patients with type 2 diabetes, 60% were female and 79% were aged 12 to <18 years at onset.”

“The overall annual prevalence of all diabetes increased from 1.86 to 2.82 per 1,000 during years 2002–2013; it increased on average by 9.5% per year from 2002 to 2006 and slowly increased by 0.6% after that […] The prevalence of type 1 diabetes increased from 1.48 to 2.32 per 1,000 during the study period (average increase of 8.5% per year from 2002 to 2006 and 1.4% after that; both P values <0.05). The prevalence of type 2 diabetes increased from 0.38 to 0.67 per 1,000 during 2002 through 2006 (average increase of 13.3% per year; P < 0.05) and then dropped from 0.56 to 0.49 per 1,000 during 2007 through 2013 (average decrease of 2.7% per year; P < 0.05). […] Prevalence of any diabetes increased by age, with the highest prevalence in patients aged 12 to <18 years (ranging from 3.47 to 5.71 per 1,000 from 2002 through 2013). […] The annual prevalence of diabetes increased over the study period mainly because of increases in type 1 diabetes.”
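
For readers wondering how the reported ‘average increase per year’ figures relate to the start- and end-point prevalences, a geometric average growth rate is one natural way to connect them – a sketch, as I don’t know exactly how the authors computed their averages:

```python
# Geometric average annual growth rate implied by two prevalence figures.
# Whether the paper computed its per-year averages exactly this way (as
# opposed to, say, regression on log rates) is an assumption on my part.
def avg_annual_growth(start, end, years):
    return (end / start) ** (1 / years) - 1

# Overall diabetes prevalence per 1,000: 1.86 (2002) -> 2.82 (2013).
print(f"{avg_annual_growth(1.86, 2.82, 11):.1%} per year")  # ~3.9% per year
```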

“Dabelea et al. (8) reported, based on data from the SEARCH for Diabetes in Youth study, that the annual prevalence of type 1 diabetes increased from 1.48 to 1.93 per 1,000 and from 0.34 to 0.46 per 1,000 for type 2 diabetes from 2001 to 2009 in U.S. youth. In our study, the annual prevalence of type 1 diabetes was 1.48 per 1,000 in 2002 and 2.10 per 1,000 in 2009, which is close to their reported prevalence.”

“We identified 3,161 diabetic nephropathy cases. Among these, 1,509 cases (47.7%) were of specific diabetic nephropathy and 2,253 (71.3%) were classified as probable cases. […] The annual prevalence of diabetic nephropathy in pediatric patients with diabetes increased from 1.16 to 3.44% between 2002 and 2013; it increased by on average 25.7% per year from 2002 to 2005 and slowly increased by 4.6% after that (both P values <0.05).”

Do note that the relationship between nephropathy prevalence and diabetes prevalence is complicated; you cannot easily explain an increase in the prevalence of nephropathy over time simply by referring to an increased prevalence of diabetes during the same period. Doing so would in fact be quite wrong, in part but not only on account of the data structure employed in this study. One problem which is probably easy to understand is that if more children got diabetes but the same proportion of those new diabetics got nephropathy, the diabetes prevalence would go up while the diabetic nephropathy prevalence would remain fixed; when you calculate the diabetic nephropathy prevalence, you implicitly condition on diabetes status.

But this just scratches the surface of the issues you encounter when you try to link these variables. There is an age pattern to diabetes risk, with risk (incidence) increasing with age up to a point, after which it falls – in most samples I’ve seen, peak incidence in pediatric populations is well below the age of 18. Diabetes prevalence, however, increases monotonically with age as long as the age-specific death rate of diabetics is lower than the age-specific incidence, because diabetes is chronic. On top of that, the nephropathy-related variables display diabetes-related duration-dependence: although nephropathy risk also increases with age when you look at that variable in isolation, the age-risk relationship is confounded by diabetes duration – a type 1 diabetic at the age of 12 who has had diabetes for 10 years has a higher risk of nephropathy than a 16-year-old who developed diabetes the year before.

When a newly diagnosed pediatric patient is included in the diabetes sample here, this will actually decrease the nephropathy prevalence in the short run, though not in the long run, assuming no changes in diabetes treatment outcomes over time. This is because the probability that a newly diagnosed child has diabetes-related kidney problems is zero, so he or she will only contribute to the denominator during the first years of illness (the situation in the middle-aged type 2 context is different; there you do sometimes have newly diagnosed patients who have already developed complications). This is one reason why it would be quite wrong to say that increased diabetes prevalence in this sample is the reason why diabetic nephropathy is increasing as well. Unless the time period you look at is very long (e.g. a setting where you follow all individuals with a diagnosis until the age of 18), an increasing prevalence of one condition may well be expected to have a negative impact on the estimated risk of associated conditions, if those associated conditions display duration-dependence (which all major diabetes complications do). A second factor supporting a default assumption of increasing diabetes incidence leading to a decreasing rate of diabetes-related complications is that treatment options have tended to improve over time; especially if you take a long view (look back 30–40 years), the increase in treatment options and improved medical technology have led to improved metabolic control and better outcomes.
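
The denominator point above is easy to see with a toy example (all numbers invented):

```python
# Toy illustration of the conditioning point above: adding newly
# diagnosed, complication-free children raises diabetes prevalence while
# mechanically lowering nephropathy prevalence *among diabetics*.
# All numbers are invented.
diabetics, nephropathy_cases = 1000, 30
print(f"before: {nephropathy_cases / diabetics:.2%}")  # before: 3.00%

diabetics += 200  # newly diagnosed children; none have nephropathy yet
print(f"after:  {nephropathy_cases / diabetics:.2%}")  # after:  2.50%
# Only as the new cohort accumulates diabetes duration can it begin to
# contribute to the numerator as well.
```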

That both variables grew over time might be taken to indicate both that more children got diabetes and that a larger proportion of this increased number of children with diabetes developed kidney problems, but this stuff is a lot more complicated than it might look. It is in particular important to keep in mind that, say, the 2005 sample and the 2010 sample do not include the same individuals, although there will of course be some overlap; in age-stratified samples like this you always have some level of implicit continuous replacement, with newly diagnosed patients entering and replacing the 18-year-olds who leave the sample. As long as prevalence is constant over time, associated outcome variables may be reasonably easy to interpret, but when you have dynamic samples as well as increasing prevalence it gets difficult to say much with any degree of certainty unless you crunch the numbers in a lot of detail (and it might be difficult even then). A factor I didn’t mention above, but which is of course also relevant, is that you need to be careful about how to interpret prevalence rates for complications with high mortality rates (and late-stage diabetic nephropathy is indeed a complication with high mortality); in such a situation, improvements in treatment outcomes may have large effects on prevalence rates but no effect on incidence. Increased prevalence is not always bad news; sometimes it is good news indeed. Gleevec substantially increased the prevalence of CML.

In terms of the prevalence-outcomes (/complication risk) connection, there are also in my opinion reasons to assume that there may be multiple causal pathways between prevalence and outcomes. For example, a very low prevalence of a condition in a given area may mean that fewer specialists are educated to take care of these patients than would be the case in an area with a higher prevalence, which may translate into a more poorly developed care infrastructure. Greatly increasing prevalence may on the other hand lead to a lower level of care for all patients with the illness, not just the newly diagnosed ones, due to binding budget constraints and care rationing. And why might prevalence change in the first place? Might changes sometimes reflect changing diagnostic practices rather than changes in the true prevalence? If so, you might not be comparing apples to apples when comparing the evolving complication rates. There are in my opinion many reasons to believe that the relationship between chronic conditions and their complication rates is far from simple to model.

All this said, kidney problems in children with diabetes are still rare compared to the numbers you see in adult samples with longer diabetes duration. It’s also worth distinguishing between microalbuminuria and overt nephropathy; children rarely proceed to develop diabetes-related kidney failure, although poor metabolic control may mean that they do develop this complication later, in early adulthood. As they note in the paper:

“It has been reported that overt diabetic nephropathy and kidney failure caused by either type 1 or type 2 diabetes are uncommon during childhood or adolescence (24). In this study, the annual prevalence of diabetic nephropathy for all cases ranged from 1.16 to 3.44% in pediatric patients with diabetes and was extremely low in the whole pediatric population (range 2.15 to 9.70 per 100,000), confirming that diabetic nephropathy is a very uncommon condition in youth aged <18 years. We observed that the prevalence of diabetic nephropathy increased in both specific and unspecific cases before 2006, with a leveling off of the specific nephropathy cases after 2005, while the unspecific cases continued to increase.”

iv. Adherence to Oral Glucose-Lowering Therapies and Associations With 1-Year HbA1c: A Retrospective Cohort Analysis in a Large Primary Care Database.

“Between a third and a half of medicines prescribed for type 2 diabetes (T2DM), a condition in which multiple medications are used to control cardiovascular risk factors and blood glucose (1,2), are not taken as prescribed (3–6). However, estimates vary widely depending on the population being studied and the way in which adherence to recommended treatment is defined.”

“A number of previous studies have used retrospective databases of electronic health records to examine factors that might predict adherence. A recent large cohort database examined overall adherence to oral therapy for T2DM, taking into account changes of therapy. It concluded that overall adherence was 69%, with individuals newly started on treatment being significantly less likely to adhere (19).”

“The impact of continuing to take glucose-lowering medicines intermittently, but not as recommended, is unknown. Medication possession (expressed as a ratio of actual possession to expected possession), derived from prescribing records, has been identified as a valid adherence measure for people with diabetes (7). Previous studies have been limited to small populations in managed-care systems in the U.S. and focused on metformin and sulfonylurea oral glucose-lowering treatments (8,9). Further studies need to be carried out in larger groups of people that are more representative of the general population.”

The Clinical Practice Research Database (CPRD) is a long established repository of routine clinical data from more than 13 million patients registered with primary care services in England. […] The Genetics of Diabetes and Audit Research Tayside Study (GoDARTS) database is derived from integrated health records in Scotland with primary care, pharmacy, and hospital data on 9,400 patients with diabetes. […] We conducted a retrospective cohort study using [these databases] to examine the prevalence of nonadherence to treatment for type 2 diabetes and investigate its potential impact on HbA1c reduction stratified by type of glucose-lowering medication.”

“In CPRD and GoDARTS, 13% and 15% of patients, respectively, were nonadherent. Proportions of nonadherent patients varied by the oral glucose-lowering treatment prescribed (range 8.6% [thiazolidinedione] to 18.8% [metformin]). Nonadherent, compared with adherent, patients had a smaller HbA1c reduction (0.4% [4.4 mmol/mol] and 0.46% [5.0 mmol/mol] for CPRD and GoDARTS, respectively). Difference in HbA1c response for adherent compared with nonadherent patients varied by drug (range 0.38% [4.1 mmol/mol] to 0.75% [8.2 mmol/mol] lower in adherent group). Decreasing levels of adherence were consistently associated with a smaller reduction in HbA1c.”

“These findings show an association between adherence to oral glucose-lowering treatment, measured by the proportion of medication obtained on prescription over 1 year, and the corresponding decrement in HbA1c, in a population of patients newly starting treatment and continuing to collect prescriptions. The association is consistent across all commonly used oral glucose-lowering therapies, and the findings are consistent between the two data sets examined, CPRD and GoDARTS. Nonadherent patients, taking on average <80% of the intended medication, had about half the expected reduction in HbA1c. […] Reduced medication adherence for commonly used glucose-lowering therapies among patients persisting with treatment is associated with smaller HbA1c reductions compared with those taking treatment as recommended. Differences observed in HbA1c responses to glucose-lowering treatments may be explained in part by their intermittent use.”

“Low medication adherence is related to increased mortality (20). The mean difference in HbA1c between patients with MPR <80% and ≥80% is between 0.37% and 0.55% (4 mmol/mol and 6 mmol/mol), equivalent to up to a 10% reduction in death or an 18% reduction in diabetes complications (21).”
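
Incidentally, the adherence measure used here, the medication possession ratio (MPR), is simple to compute from prescribing records: days of medication supplied divided by days in the observation window, with adherence conventionally defined as MPR ≥ 0.8 (the <80% cutoff mentioned above). A minimal sketch with invented refill data; the paper’s exact operationalization may of course differ in details:

```python
from datetime import date

# Hypothetical refill records: (dispensing date, days' supply per refill).
refills = [
    (date(2015, 1, 1), 56),
    (date(2015, 3, 10), 56),
    (date(2015, 6, 2), 56),
    (date(2015, 9, 20), 56),
    (date(2015, 11, 15), 56),
]

window_start, window_end = date(2015, 1, 1), date(2015, 12, 31)
days_observed = (window_end - window_start).days + 1
days_supplied = sum(supply for _, supply in refills)

mpr = days_supplied / days_observed   # actual / expected possession
print(f"MPR = {mpr:.2f} -> "
      f"{'adherent' if mpr >= 0.8 else 'nonadherent'} (cutoff 0.8)")
```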

v. Health Care Transition in Young Adults With Type 1 Diabetes: Perspectives of Adult Endocrinologists in the U.S.

“Empiric data are limited on best practices in transition care, especially in the U.S. (10,13–16). Prior research, largely from the patient perspective, has highlighted challenges in the transition process, including gaps in care (13,17–19); suboptimal pediatric transition preparation (13,20); increased post-transition hospitalizations (21); and patient dissatisfaction with the transition experience (13,17–19). […] Young adults with type 1 diabetes transitioning from pediatric to adult care are at risk for adverse outcomes. Our objective was to describe experiences, resources, and barriers reported by a national sample of adult endocrinologists receiving and caring for young adults with type 1 diabetes.”

“We received responses from 536 of 4,214 endocrinologists (response rate 13%); 418 surveys met the eligibility criteria. Respondents (57% male, 79% Caucasian) represented 47 states; 64% had been practicing >10 years and 42% worked at an academic center. Only 36% of respondents reported often/always reviewing pediatric records and 11% reported receiving summaries for transitioning young adults with type 1 diabetes, although >70% felt that these activities were important for patient care.”

“A number of studies document deficiencies in provider hand-offs across other chronic conditions and point to the broader relevance of our findings. For example, in two studies of inflammatory bowel disease, adult gastroenterologists reported inadequacies in young adult transition preparation (31) and infrequent receipt of medical histories from pediatric providers (32). In a study of adult specialists caring for young adults with a variety of chronic diseases (33), more than half reported that they had no contact with the pediatric specialists.

Importantly, more than half of the endocrinologists in our study reported a need for increased access to mental health referrals for young adult patients with type 1 diabetes, particularly in nonacademic settings. Report of barriers to care was highest for patient scenarios involving mental health issues, and endocrinologists without easy access to mental health referrals were significantly more likely to report barriers to diabetes management for young adults with psychiatric comorbidities such as depression, substance abuse, and eating disorders.”

“Prior research (34,35) has uncovered the lack of mental health resources in diabetes care. In the large cross-national Diabetes Attitudes, Wishes and Needs (DAWN) study (36) […] diabetes providers often reported not having the resources to manage mental health problems; half of specialist diabetes physicians felt unable to provide psychiatric support for patients and one-third did not have ready access to outside expertise in emotional or psychiatric matters. Our results, which resonate with the DAWN findings, are particularly concerning in light of the vulnerability of young adults with type 1 diabetes for adverse medical and mental health outcomes (4,34,37,38). […] In a recent report from the Mental Health Issues of Diabetes conference (35), which focused on type 1 diabetes, a major observation included the lack of trained mental health professionals, both in academic centers and the community, who are knowledgeable about the mental health issues germane to diabetes.”

August 3, 2017 Posted by | Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Pharmacology, Psychiatry, Psychology, Statistics, Studies | Leave a comment

Beyond Significance Testing (III)

There are many ways to misinterpret significance tests, and this book spends quite a bit of time and effort on these kinds of issues. I decided to include in this post quite a few quotes from chapter 4 of the book, which deals with these topics in some detail. I also included some notes on effect sizes.

“[P] < .05 means that the likelihood of the data or results even more extreme given random sampling under the null hypothesis is < .05, assuming that all distributional requirements of the test statistic are satisfied and there are no other sources of error variance. […] the odds-against-chance fallacy […] [is] the false belief that p indicates the probability that a result happened by sampling error; thus, p < .05 says that there is less than a 5% likelihood that a particular finding is due to chance. There is a related misconception I call the filter myth, which says that p values sort results into two categories, those that are a result of “chance” (H0 not rejected) and others that are due to “real” effects (H0 rejected). These beliefs are wrong […] When p is calculated, it is already assumed that H0 is true, so the probability that sampling error is the only explanation is already taken to be 1.00. It is thus illogical to view p as measuring the likelihood of sampling error. […] There is no such thing as a statistical technique that determines the probability that various causal factors, including sampling error, acted on a particular result.

Most psychology students and professors may endorse the local Type I error fallacy [which is] the mistaken belief that p < .05 given α = .05 means that the likelihood that the decision just taken to reject H0 is a type I error is less than 5%. […] p values from statistical tests are conditional probabilities of data, so they do not apply to any specific decision to reject H0. This is because any particular decision to do so is either right or wrong, so no probability is associated with it (other than 0 or 1.0). Only with sufficient replication could one determine whether a decision to reject H0 in a particular study was correct. […] the valid research hypothesis fallacy […] refers to the false belief that the probability that H1 is true is > .95, given p < .05. The complement of p is a probability, but 1 – p is just the probability of getting a result even less extreme under H0 than the one actually found. This fallacy is endorsed by most psychology students and professors”.

“[S]everal different false conclusions may be reached after deciding to reject or fail to reject H0. […] the magnitude fallacy is the false belief that low p values indicate large effects. […] p values are confounded measures of effect size and sample size […]. Thus, effects of trivial magnitude need only a large enough sample to be statistically significant. […] the zero fallacy […] is the mistaken belief that the failure to reject a nil hypothesis means that the population effect size is zero. Maybe it is, but you cannot tell based on a result in one sample, especially if power is low. […] The equivalence fallacy occurs when the failure to reject H0: µ1 = µ2 is interpreted as saying that the populations are equivalent. This is wrong because even if µ1 = µ2, distributions can differ in other ways, such as variability or distribution shape.”

“[T]he reification fallacy is the faulty belief that failure to replicate a result is the failure to make the same decision about H0 across studies […]. In this view, a result is not considered replicated if H0 is rejected in the first study but not in the second study. This sophism ignores sample size, effect size, and power across different studies. […] The sanctification fallacy refers to dichotomous thinking about continuous p values. […] Differences between results that are “significant” versus “not significant” by close margins, such as p = .03 versus p = .07 when α = .05, are themselves often not statistically significant. That is, relatively large changes in p can correspond to small, nonsignificant changes in the underlying variable (Gelman & Stern, 2006). […] Classical parametric statistical tests are not robust against outliers or violations of distributional assumptions, especially in small, unrepresentative samples. But many researchers believe just the opposite, which is the robustness fallacy. […] most researchers do not provide evidence about whether distributional or other assumptions are met”.

“Many [of the above] fallacies involve wishful thinking about things that researchers really want to know. These include the probability that H0 or H1 is true, the likelihood of replication, and the chance that a particular decision to reject H0 is wrong. Alas, statistical tests tell us only the conditional probability of the data. […] But there is [however] a method that can tell us what we want to know. It is not a statistical technique; rather, it is good, old-fashioned replication, which is also the best way to deal with the problem of sampling error. […] Statistical significance provides even in the best case nothing more than low-level support for the existence of an effect, relation, or difference. That best case occurs when researchers estimate a priori power, specify the correct construct definitions and operationalizations, work with random or at least representative samples, analyze highly reliable scores in distributions that respect test assumptions, control other major sources of imprecision besides sampling error, and test plausible null hypotheses. In this idyllic scenario, p values from statistical tests may be reasonably accurate and potentially meaningful, if they are not misinterpreted. […] The capability of significance tests to address the dichotomous question of whether effects, relations, or differences are greater than expected levels of sampling error may be useful in some new research areas. Due to the many limitations of statistical tests, this period of usefulness should be brief. Given evidence that an effect exists, the next steps should involve estimation of its magnitude and evaluation of its substantive significance, both of which are beyond what significance testing can tell us. […] It should be a hallmark of a maturing research area that significance testing is not the primary inference method.”

“[An] effect size [is] a quantitative reflection of the magnitude of some phenomenon used for the sake of addressing a specific research question. In this sense, an effect size is a statistic (in samples) or parameter (in populations) with a purpose, that of quantifying a phenomenon of interest. More specific definitions may depend on study design. […] cause size refers to the independent variable and specifically to the amount of change in it that produces a given effect on the dependent variable. A related idea is that of causal efficacy, or the ratio of effect size to the size of its cause. The greater the causal efficacy, the more that a given change on an independent variable results in proportionally bigger changes on the dependent variable. The idea of cause size is most relevant when the factor is experimental and its levels are quantitative. […] An effect size measure […] is a named expression that maps data, statistics, or parameters onto a quantity that represents the magnitude of the phenomenon of interest. This expression connects dimensions or generalized units that are abstractions of variables of interest with a specific operationalization of those units.”

“A good effect size measure has the [following properties:] […] 1. Its scale (metric) should be appropriate for the research question. […] 2. It should be independent of sample size. […] 3. As a point estimate, an effect size should have good statistical properties; that is, it should be unbiased, consistent […], and efficient […]. 4. The effect size [should be] reported with a confidence interval. […] Not all effect size measures […] have all the properties just listed. But it is possible to report multiple effect sizes that address the same question in order to improve the communication of the results.” 

“Examples of outcomes with meaningful metrics include salaries in dollars and post-treatment survival time in years. Means or contrasts for variables with meaningful units are unstandardized effect sizes that can be directly interpreted. […] In medical research, physical measurements with meaningful metrics are often available. […] But in psychological research there are typically no “natural” units for abstract, nonphysical constructs such as intelligence, scholastic achievement, or self-concept. […] Therefore, metrics in psychological research are often arbitrary instead of meaningful. An example is the total score for a set of true-false items. Because responses can be coded with any two different numbers, the total is arbitrary. Standard scores such as percentiles and normal deviates are arbitrary, too […] Standardized effect sizes can be computed for results expressed in arbitrary metrics. Such effect sizes can also be directly compared across studies where outcomes have different scales. This is because standardized effect sizes are based on units that have a common meaning regardless of the original metric.”

“1. It is better to report unstandardized effect sizes for outcomes with meaningful metrics. This is because the original scale is lost when results are standardized. 2. Unstandardized effect sizes are best for comparing results across different samples measured on the same outcomes. […] 3. Standardized effect sizes are better for comparing conceptually similar results based on different units of measure. […] 4. Standardized effect sizes are affected by the corresponding unstandardized effect sizes plus characteristics of the study, including its design […], whether factors are fixed or random, the extent of error variance, and sample base rates. This means that standardized effect sizes are less directly comparable over studies that differ in their designs or samples. […] 5. There is no such thing as T-shirt effect sizes (Lenth, 2006–2009) that classify standardized effect sizes as “small,” “medium,” or “large” and apply over all research areas. This is because what is considered a large effect in one area may be seen as small or trivial in another. […] 6. There is usually no way to directly translate standardized effect sizes into implications for substantive significance. […] It is standardized effect sizes from sets of related studies that are analyzed in most meta-analyses.”

July 16, 2017 Posted by | Books, Psychology, Statistics | Leave a comment

The Personality Puzzle (IV)

Below I have added a few quotes from the last 100 pages of the book. This will be my last post about the book.

“Carol Dweck and her colleagues claim that two […] kinds of goals are […] important […]. One kind she calls judgment goals. Judgment, in this context, refers to seeking to judge or validate an attribute in oneself. For example, you might have the goal of convincing yourself that you are smart, beautiful, or popular. The other kind she calls development goals. A development goal is the desire to actually improve oneself, to become smarter, more beautiful, or more popular. […] From the perspective of Dweck’s theory, these two kinds of goals are important in many areas of life because they produce different reactions to failure, and everybody fails sometimes. A person with a development goal will respond to failure with what Dweck calls a mastery-oriented pattern, in which she tries even harder the next time. […] In contrast, a person with a judgment goal responds to failure with what Dweck calls the helpless pattern: Rather than try harder, this individual simply concludes, “I can’t do it,” and gives up. Of course, that only guarantees more failure in the future. […] Dweck believes [the goals] originate in different kinds of implicit theories about the nature of the world […] Some people hold what Dweck calls entity theories, and believe that personal qualities such as intelligence and ability are unchangeable, leading them to respond helplessly to any indication that they do not have what it takes. Other people hold incremental theories, believing that intelligence and ability can change with time and experience. Their goals, therefore, involve not only proving their competence but increasing it.”

(I should probably add here that any sort of empirical validation of those theories and their consequences is, aside from a brief discussion of the results of a few (likely weak, low-powered) studies, completely absent from the book; even so, this kind of stuff might be worth having in mind, which was why I included this quote in my coverage – US).

“A large amount of research suggests that low self-esteem […] is correlated with outcomes such as dissatisfaction with life, hopelessness, and depression […] as well as loneliness […] Declines in self-esteem also appear to cause outcomes including depression, lower satisfaction with relationships, and lower satisfaction with one’s career […] Your self-esteem tends to suffer when you have failed in the eyes of your social group […] This drop in self-esteem may be a warning about possible rejection or even social ostracism — which, for our distant ancestors, could literally be fatal — and motivate you to restore your reputation. High self-esteem, by contrast, may indicate success and acceptance. Attempts to bolster self-esteem can backfire. […] People who self-enhance — who think they are better than the other people who know them think they are — can run into problems in relations with others, mental health, and adjustment […] Narcissism is associated with high self-esteem that is brittle and unstable because it is unrealistic […], and unstable self-esteem may be worse than low self-esteem […] The bottom line is that promoting psychological health requires something more complex than simply trying to make everybody feel better about themselves […]. The best way to raise self-esteem is through accomplishments that increase it legitimately […]. The most important aspect of your opinion of yourself is not whether it is good or bad, but the degree to which it is accurate.”

“An old theory suggested that if you repeated something over and over in your mind, such rehearsal was sufficient to move the information into long-term memory (LTM), or permanent memory storage. Later research showed that this idea is not quite correct. The best way to get information into LTM, it turns out, is not just to repeat it, but to really think about it (a process called elaboration). The longer and more complex the processing that a piece of information receives, the more likely it is to get transferred into LTM”.

“Concerning mental health, aspects of personality can become so extreme as to cause serious problems. When this happens, psychologists begin to speak of personality disorders […] Personality disorders have five general characteristics. They are (1) unusual and, (2) by definition, tend to cause problems. In addition, most but not quite all personality disorders (3) affect social relations and (4) are stable over time. Finally, (5) in some cases, the person who has a personality disorder may see it not as a disorder at all, but a basic part of who he or she is. […] personality disorders can be ego-syntonic, which means the people who have them do not think anything is wrong. People who suffer from other kinds of mental disorder generally experience their symptoms of confusion, depression, or anxiety as ego-dystonic afflictions of which they would like to be cured. For a surprising number of people with personality disorders, in contrast, their symptoms feel like normal and even valued aspects of who they are. Individuals with the attributes of the antisocial or narcissistic personality disorders, in particular, typically do not think they have a problem.”

[One side-note: It’s important to be aware that not everyone who displays unusual behavioral patterns which cause them problems suffers from a personality disorder. Other categorization schemes also exist: autism, for example, is not categorized as a personality disorder, but is rather considered a (neuro)developmental disorder. Funder does not go into this kind of stuff in his book, but I thought it might be worth mentioning here – US]

“Some people are more honest than others, but when deceit and manipulation become core aspects of an individual’s way of dealing with the world, he may be diagnosed with antisocial personality disorder. […] People with this disorder are impulsive, and engage in risky behaviors […] They typically are irritable, aggressive, and irresponsible. The damage they do to others bothers them not one whit; they rationalize […] that life is unfair; the world is full of suckers; and if you don’t take what you want whenever you can, then you are a sucker too. […] A wide variety of negative outcomes may accompany this disorder […] Antisocial personality disorder is sometimes confused with the trait of psychopathy […] but it’s importantly different […] Psychopaths are emotionally cold, they disregard social norms, and they are manipulative and often cunning. Most psychopaths meet the criteria for antisocial personality disorder, but the reverse is not true.”

“From day to day with different people, and over time with the same people, most individuals feel and act pretty consistently. […] Predictability makes it possible to deal with others in a reasonable way, and gives each of us a sense of individual identity. But some people are less consistent than others […] borderline personality disorder […] is characterized by unstable and confused behavior, a poor sense of identity, and patterns of self-harm […] Their chaotic thoughts, emotions, and behaviors make persons suffering from this disorder very difficult for others to “read” […] Borderline personality disorder (BPD) entails so many problems for the affected person that nobody doubts that it is, at the very least, on the “borderline” with severe psychopathology. Its hallmark is emotional instability. […] All of the personality disorders are rather mixed bags of indicators, and BPD may be the most mixed of all. It is difficult to find a coherent, common thread among its characteristics […] Some psychologists […] have suggested that this [personality disorder] category is too diffuse and should be abandoned.”

“[T]he modern research literature on personality disorders has come close to consensus about one conclusion: There is no sharp dividing line between psychopathology and normal variation (L. A. Clark & Watson, 1999a; Furr & Funder, 1998; Hong & Paunonen, 2011; Krueger & Eaton, 2010; Krueger & Tackett, 2003; B. P. O’Connor, 2002; Trull & Durrett, 2005).”

“Accurate self-knowledge has long been considered a hallmark of mental health […] The process for gaining accurate self-knowledge is outlined by the Realistic Accuracy Model […] according to RAM, one can gain accurate knowledge of anyone’s personality through a four-stage process. First, the person must do something relevant to the trait being judged; second, the information must be available to the judge; third, the judge must detect this information; and fourth, the judge must utilize the information correctly. This model was initially developed to explain the accuracy of judgments of other people. In an important sense, though, you are just one of the people you happen to know, and, to some degree, you come to know yourself the same way you find out about anybody else — by observing what you do and trying to draw appropriate conclusions”.

“[P]ersonality is not just something you have; it is also something you do. The unique aspects of what you do comprise the procedural self, and your knowledge of this self typically takes the form of procedural knowledge. […] The procedural self is made up of the behaviors through which you express who you think you are, generally without knowing you are doing so […]. Like riding a bicycle, the working of the procedural self is automatic and not very accessible to conscious awareness.”

July 14, 2017 Posted by | Books, Psychology | Leave a comment

Beyond Significance Testing (II)

I have added some more quotes and observations from the book below.

“The least squares estimators M and s2 are not robust against the effects of extreme scores. […] Conventional methods to construct confidence intervals rely on sample standard deviations to estimate standard errors. These methods also rely on critical values in central test distributions, such as t and z, that assume normality or homoscedasticity […] Such distributional assumptions are not always plausible. […] One option to deal with outliers is to apply transformations, which convert original scores with a mathematical operation to new ones that may be more normally distributed. The effect of applying a monotonic transformation is to compress one part of the distribution more than another, thereby changing its shape but not the rank order of the scores. […] It can be difficult to find a transformation that works in a particular data set. Some distributions can be so severely nonnormal that basically no transformation will work. […] An alternative that also deals with departures from distributional assumptions is robust estimation. Robust (resistant) estimators are generally less affected than least squares estimators by outliers or nonnormality.”

“An estimator’s quantitative robustness can be described by its finite-sample breakdown point (BP), or the smallest proportion of scores that when made arbitrarily very large or small renders the statistic meaningless. The lower the value of BP, the less robust the estimator. For both M and s2, BP = 0, the lowest possible value. This is because the value of either statistic can be distorted by a single outlier, and the ratio 1/N approaches zero as sample size increases. In contrast, BP = .50 for the median because its value is not distorted by arbitrarily extreme scores unless they make up at least half the sample. But the median is not an optimal estimator because its value is determined by a single score, the one at the 50th percentile. In this sense, all the other scores are discarded by the median. A compromise between the sample mean and the median is the trimmed mean. A trimmed mean Mtr is calculated by (a) ordering the scores from lowest to highest, (b) deleting the same proportion of the most extreme scores from each tail of the distribution, and then (c) calculating the average of the scores that remain. […] A common practice is to trim 20% of the scores from each tail of the distribution when calculating trimmed estimators. This proportion tends to maintain the robustness of trimmed means while minimizing their standard errors when sampling from symmetrical distributions […] For 20% trimmed means, BP = .20, which says they are robust against arbitrarily extreme scores unless such outliers make up at least 20% of the sample.”
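
To make the trimming procedure concrete, here’s a small sketch using scipy’s trim_mean, which implements exactly the order/trim/average steps described above (the scores are invented, with one wild outlier):

```python
import numpy as np
from scipy import stats

scores = np.array([12, 14, 15, 15, 16, 17, 18, 19, 20, 250])  # one gross outlier

print(np.mean(scores))               # ~39.6 -- dragged up by the outlier (BP = 0)
print(np.median(scores))             # 16.5  -- unaffected (BP = .50)
print(stats.trim_mean(scores, 0.2))  # 20% trimmed mean: drops the 2 lowest
                                     # and 2 highest scores, then averages
```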

“The standard H0 is both a point hypothesis and a nil hypothesis. A point hypothesis specifies the numerical value of a parameter or the difference between two or more parameters, and a nil hypothesis states that this value is zero. The latter is usually a prediction that an effect, difference, or association is zero. […] Nil hypotheses as default explanations may be fine in new research areas when it is unknown whether effects exist at all. But they are less suitable in established areas when it is known that some effect is probably not zero. […] Nil hypotheses are tested much more often than non-nil hypotheses even when the former are implausible. […] If a nil hypothesis is implausible, estimated probabilities of data will be too low. This means that risk for Type I error is basically zero and a Type II error is the only possible kind when H0 is known in advance to be false.”

“Too many researchers treat the conventional levels of α, either .05 or .01, as golden rules. If other levels of α are specified, they tend to be even lower […]. Sanctification of .05 as the highest “acceptable” level is problematic. […] Instead of blindly accepting either .05 or .01, one does better to […] [s]pecify a level of α that reflects the desired relative seriousness (DRS) of Type I error versus Type II error. […] researchers should not rely on a mechanical ritual (i.e., automatically specify .05 or .01) to control risk for Type I error that ignores the consequences of Type II error.”

“Although p and α are derived in the same theoretical sampling distribution, p does not estimate the conditional probability of a Type I error […]. This is because p is based on a range of results under H0, but α has nothing to do with actual results and is supposed to be specified before any data are collected. Confusion between p and α is widespread […] To differentiate the two, Gigerenzer (1993) referred to p as the exact level of significance. If p = .032 and α = .05, H0 is rejected at the .05 level, but .032 is not the long-run probability of Type I error, which is .05 for this example. The exact level of significance is the conditional probability of the data (or any result even more extreme) assuming H0 is true, given all other assumptions about sampling, distributions, and scores. […] Because p values are estimated assuming that H0 is true, they do not somehow measure the likelihood that H0 is correct. […] The false belief that p is the probability that H0 is true, or the inverse probability error […] is widespread.”

“Probabilities from significance tests say little about effect size. This is because essentially any test statistic (TS) can be expressed as the product TS = ES × f(N) […] where ES is an effect size and f(N) is a function of sample size. This equation explains how it is possible that (a) trivial effects can be statistically significant in large samples or (b) large effects may not be statistically significant in small samples. So p is a confounded measure of effect size and sample size.”
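
For the independent-samples t test with two groups of n cases each, that decomposition takes the form t = d·√(n/2), where the effect size is Cohen’s d. A quick sketch of the implication (numbers are arbitrary): hold a trivial effect fixed and p will eventually drop below any threshold as n grows.

```python
import numpy as np
from scipy import stats

d = 0.1  # a trivial standardized mean difference, held fixed

for n in (20, 200, 2000, 20000):       # cases per group
    t = d * np.sqrt(n / 2)             # TS = ES * f(N)
    df = 2 * n - 2
    p = 2 * stats.t.sf(abs(t), df)     # two-tailed p value
    print(f"n = {n:>6}: t = {t:5.2f}, p = {p:.2g}")
```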

“Power is the probability of getting statistical significance over many random replications when H1 is true. It varies directly with sample size and the magnitude of the population effect size. […] This combination leads to the greatest power: a large population effect size, a large sample, a higher level of α […], a within-subjects design, a parametric test rather than a nonparametric test (e.g., t instead of Mann–Whitney), and very reliable scores. […] Power .80 is generally desirable, but an even higher standard may be needed if consequences of Type II error are severe. […] Reviews from the 1970s and 1980s indicated that the typical power of behavioral science research is only about .50 […] and there is little evidence that power is any higher in more recent studies […] Ellis (2010) estimated that < 10% of studies have samples sufficiently large to detect smaller population effect sizes. Increasing sample size would address low power, but the number of additional cases necessary to reach even nominal power when studying smaller effects may be so great as to be practically impossible […] Too few researchers, generally < 20% (Osborne, 2008), bother to report prospective power despite admonitions to do so […] The concept of power does not stand without significance testing. As statistical tests play a smaller role in the analysis, the relevance of power also declines. If significance tests are not used, power is irrelevant. Cumming (2012) described an alternative called precision for research planning, where the researcher specifies a target margin of error for estimating the parameter of interest. […] The advantage over power analysis is that researchers must consider both effect size and precision in study planning.”
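
For illustration, statsmodels (a Python library) can solve for the per-group sample size needed to reach power .80 in a two-sample t test; the effect sizes below are just the generic ‘small/medium/large’ benchmark values, not figures from the book:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):  # generic small / medium / large Cohen's d
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                             alternative='two-sided')
    print(f"d = {d}: about {n:.0f} cases per group for power .80")
```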

“Classical nonparametric tests are alternatives to the parametric t and F tests for means (e.g., the Mann–Whitney test is the nonparametric analogue to the t test). Nonparametric tests generally work by converting the original scores to ranks. They also make fewer assumptions about the distributions of those ranks than do parametric tests applied to the original scores. Nonparametric tests date to the 1950s–1960s, and they share some limitations. One is that they are not generally robust against heteroscedasticity, and another is that their application is typically limited to single-factor designs […] Modern robust tests are an alternative. They are generally more flexible than nonparametric tests and can be applied in designs with multiple factors. […] At the end of the day, robust statistical tests are subject to many of the same limitations as other statistical tests. For example, they assume random sampling albeit from population distributions that may be nonnormal or heteroscedastic; they also assume that sampling error is the only source of error variance. Alternative tests, such as the Welch–James and Yuen–Welch versions of a robust t test, do not always yield the same p value for the same data, and it is not always clear which alternative is best (Wilcox, 2003).”
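
To make the pairing concrete, here’s a small sketch on invented skewed data: scipy’s ttest_ind with equal_var=False gives the plain Welch test (recent scipy versions, if I’m not mistaken, also accept a trim= argument implementing Yuen’s trimmed t test), and mannwhitneyu is the rank-based analogue mentioned above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.lognormal(mean=0.0, sigma=1.0, size=40)  # skewed, heteroscedastic data
b = rng.lognormal(mean=0.5, sigma=1.5, size=40)

# Welch t test: does not assume equal variances across groups.
t, p_t = stats.ttest_ind(a, b, equal_var=False)
# Mann-Whitney: rank-based nonparametric analogue of the t test.
u, p_u = stats.mannwhitneyu(a, b, alternative='two-sided')

print(f"Welch t:      p = {p_t:.3f}")
print(f"Mann-Whitney: p = {p_u:.3f}")
```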

July 11, 2017 Posted by | Books, Psychology, Statistics | Leave a comment

The Personality Puzzle (III)

I have added some more quotes and observations from the book below.

“Across many, many traits, the average correlation across MZ twins is about .60, and across DZ twins it is about .40, when adjusted for age and gender […] This means that, according to twin studies, the average heritability of many traits is about .40, which is interpreted to mean that 40 percent of phenotypic (behavioral) variance is accounted for by genetic variance. The heritabilities of the Big Five traits are a bit higher; according to one comprehensive summary they range from .42, for agreeableness, to .57, for openness (Bouchard, 2004). […] behavioral genetic analyses and the statistics they produce refer to groups or populations, not individuals. […] when research concludes that a personality trait is, say, 50 percent heritable, this does not mean that half of the extent to which an individual expresses that trait is determined genetically. Instead, it means that 50 percent of the degree to which the trait varies across the population can be attributed to genetic variation. […] Because heritability is the proportion of variation due to genetic influences, if there is no variation, then the heritability must approach zero. […] Heritability statistics are not the nature-nurture ratio; a biologically determined trait can have a zero heritability.”
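
The arithmetic linking those twin correlations to the ~.40 heritability figure is, I believe, Falconer’s classic estimate, h² = 2(rMZ − rDZ): MZ twins share essentially all segregating genes while DZ twins share half on average, so doubling the correlation gap isolates the genetic share of the variance. Using the averages quoted above (and the simple ACE decomposition):

```python
# Falconer's estimate of heritability from twin correlations:
#   h^2 = 2 * (r_MZ - r_DZ)
r_mz, r_dz = 0.60, 0.40   # average twin correlations quoted above

h2 = 2 * (r_mz - r_dz)    # share of phenotypic variance attributed to genes
c2 = r_mz - h2            # shared-environment share (ACE model)
e2 = 1 - r_mz             # nonshared environment plus measurement error

print(f"h^2 = {h2:.2f}, c^2 = {c2:.2f}, e^2 = {e2:.2f}")
```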

“The environment can […] affect heritability […]. For example, when every child receives adequate nutrition, variance in height is genetically controlled. […] But in an environment where some are well fed while others go hungry, variance in height will fall more under the control of the environment. Well-fed children will grow near the maximum of their genetic potential while poorly fed children will grow closer to their genetic minimum, and the height of the parents will not matter so much; the heritability coefficient for height will be much closer to 0. […] A trait that is adaptive in one situation may be harmful in another […] the same environments that promote good outcomes for some people can promote bad outcomes for others, and vice versa […] More generally, the same circumstances might be experienced as stressful, enjoyable, or boring, depending on the genetic predispositions of the individuals involved; these variations in experience can lead to very different behaviors and, over time, to the development of different personality traits.”

“Mihaly Csikszentmihalyi [argued] that the best way a person can spend time is in autotelic activities, those that are enjoyable for their own sake. The subjective experience of an autotelic activity — the enjoyment itself — is what Csikszentmihalyi calls flow.
Flow is not the same thing as joy, happiness, or other, more familiar terms for subjective well-being. Rather, the experience of flow is characterized by tremendous concentration, total lack of distractibility, and thoughts concerning only the activity at hand. […] Losing track of time is one sign of experiencing flow. According to Csikszentmihalyi, flow arises when the challenges an activity presents are well matched with your skills. If an activity is too difficult or too confusing, you will experience anxiety, worry, and frustration. If the activity is too easy, you will experience boredom and (again) anxiety. But when skills and challenges are balanced, you experience flow. […] Csikszentmihalyi thinks that the secret for enhancing your quality of life is to spend as much time in flow as possible. Achieving flow entails becoming good at something you find worthwhile and enjoyable. […] Even in the best of circumstances [however], flow seems to describe a rather solitary kind of happiness. […] The drawback with flow is that somebody experiencing it can be difficult to interact with”. [I really did not like most of the stuff included in the part of the book from which this quote is taken, but I did find Csikszentmihalyi’s flow concept quite interesting.]

“About 80 percent of the participants in psychological research come from countries that are Western, Educated, Industrialized, Rich, and Democratic — ”WEIRD” in other words — although only 12 percent of the world’s population live there (Henrich et al., 2010).”

“If an animal or a person performs a behavior, and the behavior is followed by a good result — a reinforcement — the behavior becomes more likely. If the behavior is followed by a punishment, it becomes less likely. […] the results of operant conditioning are not necessarily logical. It can increase the frequency of any behavior, regardless of its real connection with the consequences that follow.”

“A punishment is an aversive consequence that follows an act in order to stop it and prevent its repetition. […] Many people believe the only way to stop or prevent somebody from doing something is punishment. […] You can [however] use reward for this purpose too. All you have to do is find a response that is incompatible with the one you are trying to get rid of, and reward that incompatible response instead. Reward a child for reading instead of punishing him for watching television. […] punishment works well when it is done right. The only problem is, it is almost never done right. […] One way to see how punishment works, or fails to work, is to examine the rules for applying it correctly. The classic behaviorist analysis says that five principles are most important […] 1. Availability of Alternatives: An alternative response to the behavior that is being punished must be available. This alternative response must not be punished and should be rewarded. […] 2. Behavioral and Situational Specificity: Be clear about exactly what behavior you are punishing and the circumstances under which it will and will not be punished. […] 3. Timing and Consistency: To be effective, a punishment needs to be applied immediately after the behavior you wish to prevent, every time that behavior occurs. Otherwise, the person (or animal) being punished may not understand which behavior is forbidden. […] 4. Conditioning Secondary Punishing Stimuli: One can lessen the actual use of punishment by conditioning secondary stimuli to it [such as, e.g., verbal warnings] […] 5. Avoiding Mixed Messages: […] Sometimes, after punishing a child, the parent feels so guilty that she picks the child up for a cuddle. This is a mistake. The child might start to misbehave just to get the cuddle that follows the punishment. Punish if you must punish, but do not mix your message. A variant on this problem occurs when the child learns to play one parent against the other. For example, after the father punishes the child, the child goes to the mother for sympathy, or vice versa. This can produce the same counterproductive result.”

“Punishment will backfire unless all of the guidelines [above] are followed. Usually, they are not. A punisher has to be extremely careful, for several reasons. […] The first and perhaps most important danger of punishment is that it creates emotion. […] powerful emotions are not conducive to clear thinking. […] Punishment [also] tends to vary with the punisher’s mood, which is one reason it is rarely applied consistently. […] Punishment [furthermore] [m]otivates [c]oncealment: The prospective punishee has good reasons to conceal behavior that might be punished. […] Rewards have the reverse effect. When workers anticipate rewards for good work instead of punishment for bad work, they are naturally motivated to bring to the boss’s attention everything they are doing, in case it merits reward.”

“Gordon Allport observed years ago [that] [“]For some the world is a hostile place where men are evil and dangerous; for others it is a stage for fun and frolic. It may appear as a place to do one’s duty grimly; or a pasture for cultivating friendship and love.[“] […] people with different traits see the world differently. This perception affects how they react to the events in their lives which, in turn, affects what they do. […] People [also] differ in the emotions they experience, the emotions they want to experience, how strongly they experience emotions, how frequently their emotions change, and how well they understand and control their emotions.”

July 9, 2017 Posted by | Books, Genetics, Psychology | Leave a comment

Beyond Significance Testing (I)

“This book introduces readers to the principles and practice of statistics reform in the behavioral sciences. It (a) reviews the now even larger literature about shortcomings of significance testing; (b) explains why these criticisms have sufficient merit to justify major changes in the ways researchers analyze their data and report the results; (c) helps readers acquire new skills concerning interval estimation and effect size estimation; and (d) reviews alternative ways to test hypotheses, including Bayesian estimation. […] I assume that the reader has had undergraduate courses in statistics that covered at least the basics of regression and factorial analysis of variance. […] This book is suitable as a textbook for an introductory course in behavioral science statistics at the graduate level.”

I’m currently reading this book. I have so far read 8 out of the 10 chapters included, and I’m currently sort of hovering between a 3- and a 4-star goodreads rating; some parts of the book are really great, but there are also a few aspects I don’t like. Some parts of the coverage are rather technical, and I’m still debating to what extent I should cover the technical stuff in detail later here on the blog; there are quite a few equations included in the book, and I find it annoying to cover math using the wordpress format of this blog. For now I’ll start out with a reasonably non-technical post with some quotes and key ideas from the first parts of the book.

“In studies of intervention outcomes, a statistically significant difference between treated and untreated cases […] has nothing to do with whether treatment leads to any tangible benefits in the real world. In the context of diagnostic criteria, clinical significance concerns whether treated cases can no longer be distinguished from control cases not meeting the same criteria. For example, does treatment typically prompt a return to normal levels of functioning? A treatment effect can be statistically significant yet trivial in terms of its clinical significance, and clinically meaningful results are not always statistically significant. Accordingly, the proper response to claims of statistical significance in any context should be “so what?” — or, more pointedly, “who cares?” — without more information.”

“There are free computer tools for estimating power, but most researchers — probably at least 80% (e.g., Ellis, 2010) — ignore the power of their analyses. […] Ignoring power is regrettable because the median power of published nonexperimental studies is only about .50 (e.g., Maxwell, 2004). This implies a 50% chance of correctly rejecting the null hypothesis based on the data. In this case the researcher may as well not collect any data but instead just toss a coin to decide whether or not to reject the null hypothesis. […] A consequence of low power is that the research literature is often difficult to interpret. Specifically, if there is a real effect but power is only .50, about half the studies will yield statistically significant results and the rest will yield no statistically significant findings. If all these studies were somehow published, the number of positive and negative results would be roughly equal. In an old-fashioned, narrative review, the research literature would appear to be ambiguous, given this balance. It may be concluded that “more research is needed,” but any new results will just reinforce the original ambiguity, if power remains low.”

“Statistical tests of a treatment effect that is actually clinically significant may fail to reject the null hypothesis of no difference when power is low. If the researcher in this case ignored whether the observed effect size is clinically significant, a potentially beneficial treatment may be overlooked. This is exactly what was found by Freiman, Chalmers, Smith, and Kuebler (1978), who reviewed 71 randomized clinical trials of mainly heart- and cancer-related treatments with “negative” results (i.e., not statistically significant). They found that if the authors of 50 of the 71 trials had considered the power of their tests along with the observed effect sizes, those authors should have concluded just the opposite, or that the treatments resulted in clinically meaningful improvements.”

“Even if researchers avoided the kinds of mistakes just described, there are grounds to suspect that p values from statistical tests are simply incorrect in most studies: 1. They (p values) are estimated in theoretical sampling distributions that assume random sampling from known populations. Very few samples in behavioral research are random samples. Instead, most are convenience samples collected under conditions that have little resemblance to true random sampling. […] 2. Results of more quantitative reviews suggest that, due to assumption violations, there are few actual data sets in which significance testing gives accurate results […] 3. Probabilities from statistical tests (p values) generally assume that all other sources of error besides sampling error are nil. This includes measurement error […] Other sources of error arise from failure to control for extraneous sources of variance or from flawed operational definitions of hypothetical constructs. It is absurd to assume in most studies that there is no error variance besides sampling error. Instead it is more practical to expect that sampling error makes up the small part of all possible kinds of error when the number of cases is reasonably large (Ziliak & McCloskey, 2008).”

“The p values from statistical tests do not tell researchers what they want to know, which often concerns whether the data support a particular hypothesis. This is because p values merely estimate the conditional probability of the data under a statistical hypothesis — the null hypothesis — that in most studies is an implausible, straw man argument. In fact, p values do not directly “test” any hypothesis at all, but they are often misinterpreted as though they describe hypotheses instead of data. Although p values ultimately provide a yes-or-no answer (i.e., reject or fail to reject the null hypothesis), the question — p < α?, where α is the criterion level of statistical significance, usually .05 or .01 — is typically uninteresting. The yes-or-no answer to this question says nothing about scientific relevance, clinical significance, or effect size. […] determining clinical significance is not just a matter of statistics; it also requires strong knowledge about the subject matter.”

“[M]any null hypotheses have little if any scientific value. For example, Anderson et al. (2000) reviewed null hypotheses tested in several hundred empirical studies published from 1978 to 1998 in two environmental sciences journals. They found many implausible null hypotheses that specified things such as equal survival probabilities for juvenile and adult members of a species or that growth rates did not differ across species, among other assumptions known to be false before collecting data. I am unaware of a similar survey of null hypotheses in the behavioral sciences, but I would be surprised if the results would be very different.”

“Hoekstra, Finch, Kiers, and Johnson (2006) examined a total of 266 articles published in Psychonomic Bulletin & Review during 2002–2004. Results of significance tests were reported in about 97% of the articles, but confidence intervals were reported in only about 6%. Sadly, p values were misinterpreted in about 60% of surveyed articles. Fidler, Burgman, Cumming, Buttrose, and Thomason (2006) sampled 200 articles published in two different biology journals. Results of significance testing were reported in 92% of articles published during 2001–2002, but this rate dropped to 78% in 2005. There were also corresponding increases in the reporting of confidence intervals, but power was estimated in only 8% and p values were misinterpreted in 63%. […] Sun, Pan, and Wang (2010) reviewed a total of 1,243 works published in 14 different psychology and education journals during 2005–2007. The percentage of articles reporting effect sizes was 49%, and 57% of these authors interpreted their effect sizes.”

“It is a myth that the larger the sample, the more closely it approximates a normal distribution. This idea probably stems from a misunderstanding of the central limit theorem, which applies to certain group statistics such as means. […] This theorem justifies approximating distributions of random means with normal curves, but it does not apply to distributions of scores in individual samples. […] larger samples do not generally have more normal distributions than smaller samples. If the population distribution is, say, positively skewed, this shape will tend to show up in the distributions of random samples that are either smaller or larger.”
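
The point is easy to check by simulation (parameters arbitrary): individual samples from a skewed population stay skewed no matter how large they get, while the distribution of sample means approaches normality.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

for n in (20, 200, 2000):
    # One positively skewed sample (exponential population, skewness 2).
    sample = rng.exponential(scale=1.0, size=n)
    # The distribution of 5,000 random sample means, each based on n cases.
    means = rng.exponential(scale=1.0, size=(5000, n)).mean(axis=1)
    print(f"n = {n:>4}: sample skew = {stats.skew(sample):5.2f}, "
          f"skew of 5,000 sample means = {stats.skew(means):5.2f}")
```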

“A standard error is the standard deviation in a sampling distribution, the probability distribution of a statistic across all random samples drawn from the same population(s) and with each sample based on the same number of cases. It estimates the amount of sampling error in standard deviation units. The square of a standard error is the error variance. […] Variability of the sampling distributions […] decreases as the sample size increases. […] The standard error sM, which estimates variability of the group statistic M, is often confused with the standard deviation s, which measures variability at the case level. This confusion is a source of misinterpretation of both statistical tests and confidence intervals […] Note that the standard error sM itself has a standard error (as do standard errors for all other kinds of statistics). This is because the value of sM varies over random samples. This explains why one should not overinterpret a confidence interval or p value from a significance test based on a single sample.”
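
The remark that sM itself has a standard error is likewise easy to check by simulation (parameters arbitrary): draw many random samples, compute sM = s/√N in each, and observe how much the estimate bounces around.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 25
samples = rng.normal(loc=100, scale=15, size=(10000, N))  # 10,000 random samples

s_m = samples.std(axis=1, ddof=1) / np.sqrt(N)  # estimated standard error, per sample

print(f"mean s_M estimate:  {s_m.mean():.2f} (true value: 15/sqrt(25) = 3.00)")
print(f"SD of s_M estimates: {s_m.std(ddof=1):.2f}  <- s_M varies across samples")
```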

“Standard errors estimate sampling error under random sampling. What they measure when sampling is not random may not be clear. […] Standard errors also ignore […] other sources of error [:] 1. Measurement error [which] refers to the difference between an observed score X and the true score on the underlying construct. […] Measurement error reduces absolute effect sizes and the power of statistical tests. […] 2. Construct definition error [which] involves problems with how hypothetical constructs are defined or operationalized. […] 3. Specification error [which] refers to the omission from a regression equation of at least one predictor that covaries with the measured (included) predictors. […] 4. Treatment implementation error occurs when an intervention does not follow prescribed procedures. […] Gosset used the term real error to refer to all types of error besides sampling error […]. In reasonably large samples, the impact of real error may be greater than that of sampling error.”

“The technique of bootstrapping […] is a computer-based method of resampling that recombines the cases in a data set in different ways to estimate statistical precision, with fewer assumptions than traditional methods about population distributions. Perhaps the best known form is nonparametric bootstrapping, which generally makes no assumptions other than that the distribution in the sample reflects the basic shape of that in the population. It treats your data file as a pseudo-population in that cases are randomly selected with replacement to generate other data sets, usually of the same size as the original. […] The technique of nonparametric bootstrapping seems well suited for interval estimation when the researcher is either unwilling or unable to make a lot of assumptions about population distributions. […] potential limitations of nonparametric bootstrapping: 1. Nonparametric bootstrapping simulates random sampling, but true random sampling is rarely used in practice. […] 2. […] If the shape of the sample distribution is very different compared with that in the population, results of nonparametric bootstrapping may have poor external validity. 3. The “population” from which bootstrapped samples are drawn is merely the original data file. If this data set is small or the observations are not independent, resampling from it will not somehow fix these problems. In fact, resampling can magnify the effects of unusual features in a small data set […] 4. Results of bootstrap analyses are probably quite biased in small samples, but this is true of many traditional methods, too. […] [In] parametric bootstrapping […] the researcher specifies the numerical and distributional properties of a theoretical probability density function, and then the computer randomly samples from that distribution. When repeated many times by the computer, values of statistics in these synthesized samples vary randomly about the parameters specified by the researcher, which simulates sampling error.”
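
Here’s a minimal nonparametric bootstrap along the lines described above, computing a 95% percentile confidence interval for a 20% trimmed mean (data invented; recent versions of scipy also ship a ready-made scipy.stats.bootstrap for this kind of thing):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.lognormal(mean=0.0, sigma=1.0, size=60)  # an invented skewed sample

# Resample cases with replacement from the original data (the "pseudo-population"),
# recomputing the statistic of interest in each bootstrapped sample.
boot = np.array([
    stats.trim_mean(rng.choice(data, size=data.size, replace=True), 0.2)
    for _ in range(5000)
])

lo, hi = np.percentile(boot, [2.5, 97.5])  # percentile bootstrap interval
print(f"20% trimmed mean = {stats.trim_mean(data, 0.2):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```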

July 9, 2017 Posted by | Books, Psychology, Statistics | Leave a comment

A few SSC comments

I recently left a few comments in an open thread on SSC, and I figured it might make sense to crosspost some of them here on the blog. I haven’t reposted all my contributions to the debate; rather, I’ve quoted some specific comments and observations which might be of interest, and I’ve added some additional remarks which relate to the topics discussed. Here’s the main link (scroll down to get to my comments).

“One thing worth keeping in mind when evaluating pre-modern medical characterizations of diabetes and its natural history is that, especially to the extent that one is interested in type 1, survivorship bias is a major problem lurking in the background. Prognostic estimates of untreated type 1 based on historical accounts of how long people could live with the disease before insulin are not in my opinion likely to be all that reliable, because the type of patients that would be recognized as (type 1) diabetics back then would tend to mainly be people who had the milder forms, because they were the only ones who lived long enough to reach a ‘doctor’; and the longer they lived, and the milder the sub-type, the more likely they were to be studied/’diagnosed’. I was a 2-year-old boy who became unwell on a Tuesday and was hospitalized three days later. Avicenna would have been unlikely to have encountered me; I’d have died before he saw me. (Similar lines of reasoning might lead to an argument that the incidence of diseases like type 1 diabetes may also today be underestimated in developing countries with poorly developed health care systems.)”

Douglas Knight mentioned during our exchange that medical men of the far past might have been more likely to attend to patients with acute illnesses than to patients with chronic conditions, a point I didn’t discuss in any detail during the exchange. I did however think it important to note here that information exchange was significantly slower, and transportation costs were much higher, in the past than they are today. All else equal, this should make such a bias less relevant. Avicenna and his colleagues couldn’t take a taxi, or learn by phone that X was sick. He might have preferentially attended to the acute cases he learned about, but given high transportation costs and inefficient communication channels he might often never have arrived in time, or at all. A particular problem here is that there are no good data on the unobserved cases, because the only cases we know about today are the ones people like him have told us about.

Some more comments:

“One thing I was considering adding to my remarks about survivorship bias is that it is not in my opinion unlikely that what you might term the nature of the disease has changed over the centuries; indeed it might still be changing today. Globally the incidence of type 1 has been increasing for decades and nobody seems to know why, though there’s consensus about an environmental trigger playing a major role. Maybe incidence is not the only thing that’s changed; maybe e.g. the time course of the ‘average case’ has also changed? Maybe due to secondary factors; better nutritional status now equals slower progression of beta cell failure than was the case in the past? Or perhaps the other way around: less exposure today to the bacterial agents the immune system has, throughout evolutionary time, been used to dealing with means that the autoimmune process is accelerated now, compared to the far past when standards of hygiene were different. Who knows? […] Maybe survivorship bias wasn’t that big of a deal, but I think one should be very cautious about which assumptions one might implicitly be making along the way when addressing questions of this nature. Some relevant questions will definitely remain unknowable, because the good data required to answer them will never be obtainable.”

I should perhaps interpose here that even if survivorship bias ‘wasn’t that big of a deal’, it’s still a big problem in the analytical setting, because it is perfectly plausible to assume that it might have been a big deal. These kinds of problems magnify our error bars and reduce confidence in our conclusions, regardless of the extent to which the biases actually played a role. When you know the exact sign and magnitude of a given moderating effect you can try to correct for it, but this is very difficult to do when a large range of moderator effect sizes might be considered plausible. It might also be worth mentioning explicitly that biases such as the survivorship bias mentioned can of course impact a lot of things besides the prognostic estimates; for example, if a lot of cases never came to the attention of the medical people because those people were unavailable (due to distance, cost, lack of information, etc.) to the people who were sick, incidence and prevalence will also implicitly be underestimated. And so on. Back to the comments:

“Once you had me thinking that it might have been harder [for people in the past] to distinguish [between type 1 and type 2 diabetes] than […] it is today, I started wondering about this, and the comments below relate to this topic. An idea that came to mind in relation to the type 1/type 2 distinction and the ability of people in the past to make this distinction: I’ve worked on various identification problems present in the diabetes context before, and I know that people even today make misdiagnoses and e.g. categorize type 1 diabetics as type 2. I asked a diabetes nurse working in the local endocrinology unit about this at one point, and she told me they had actually had a patient not long before then who had been admitted a short while after having been diagnosed with type 2. It turned out he was type 1, so the treatment failed. Misdiagnoses happen for multiple reasons; one is that obese people also sometimes develop type 1, and in an acute onset setting the weight loss is not likely to be very significant. Patient history should in such a case provide the doctor with the necessary clues, but if the guy making the diagnosis is a stressed-out GP who’s currently treating a lot of obese patients for type 2, mistakes happen. Pre-scientific method, this sort of individual would have been inconvenient to encounter, because a ‘counter-example’ like that, supposedly demonstrating that the obese/thin (/young/old, acute/protracted…) distinction was ‘invalid’, might have carried a lot more weight than it hopefully would today in the age of statistical analysis. A similar problem would be some of the end-stage individuals: a type 1 pre-insulin would be unlikely to live long enough to develop long-term complications of the disease, but would instead die of DKA. The problem is that some untreated type 2 patients also die of DKA, though the degree of ketosis varies in type 2 patients. DKA in type 2 could e.g. be triggered by a superimposed cardiovascular event or an infection increasing metabolic demands to an extent that can no longer be met by the organism, and so might well present just as acutely as it would in a classic acute-onset type 1 case. Assume the opposite bias you mention is playing a role, i.e. that the ‘doctor’ in the past is more likely to see patients in such a life-threatening setting than in the earlier stages. He observes a 55-year-old fat guy dying in a very similar manner to the way a 12-year-old girl died a few months back (very characteristic symptoms: fruity-smelling breath, Kussmaul respiration, polyuria and polydipsia…). What does he conclude? Are these different diseases?”

Making the doctor’s decision problem even harder is of course the fact that type 2 diabetes even today often goes undiagnosed until complications arise. Some type 2 patients get their diagnosis only after they have had their first heart attack as a result of the illness. So the hypothetical obese middle-aged guy presenting with DKA might not have been known by anyone to be ‘a potentially different kind of diabetic’.

‘The Nybbler’ asked this question in the thread: “Wouldn’t reduced selection pressure be a major reason for increase of Type I diabetes? Used to be if you had it, chance of surviving to reproduce was close to nil.”

I’ll mention here that I’ve encountered this kind of theorizing before, but that I’ve never really addressed it – especially the second part – explicitly, though I’ve sometimes felt like doing so. I figured this post might be a decent place to at least scratch the surface. The idea that there are more type 1 diabetics now than there used to be because type 1 diabetics used to die of their disease and now they don’t (…and so now they are able to transmit their faulty genes to subsequent generations, leading to more diabetic individuals over time) sounds sort of reasonable if you don’t know very much about diabetes, but it sounds less reasonable to people who do. Genes matter, and changed selection pressures have probably played a role, but I find it hard to believe this particular mechanism is a major factor. I have included both of my replies to ‘Nybbler’ below:

First comment:

“I’m not a geneticist and this is sort-of-kind-of near the boundary area of where I feel comfortable providing answers (given that others may be more qualified to evaluate questions like this than I am). However a few observations which might be relevant are the following:

i) Although I’ll later go on to say that vertical transmission is low, I first have to point out that some people who developed type 1 diabetes in the past did in fact have offspring, though there’s no doubt about the condition being fitness-reducing to a very large degree. The median age of diagnosis of type 1 is somewhere in the teenage years (…today. Was it the same way 1000 years ago, or has the age profile changed over time? This again relates to questions asked elsewhere in this discussion…), but people above the age of 30 get type 1 too.

ii) Although type 1 displays some level of familial clustering, most cases of type 1 are not the result of diabetics having had children who then proceed to inherit their parents’ disease. To the extent that reduced selection is a driver of increased incidence, the main cause would be broad selection effects pertaining to immune system functioning in general in the total population at risk (i.e. children in general, including many children with what might be termed suboptimal immune system functioning, being more likely to survive and later develop type 1 diabetes), not effects derived from vertical transmission of the disease (from parent to child). Roughly 90% of newly diagnosed type 1 diabetics in population studies have a negative family history of the disease, and on average only 2% of the children of type 1 diabetic mothers, and 5% of the children of type 1 diabetic fathers, go on to develop type 1 diabetes themselves.

iii) Vertical transmission has been low even in modern times. On top of the quite low transmission rates mentioned above, until well into the 1980s or 1990s many type 1 diabetic females were explicitly advised by their medical care providers not to have children, not because of the genetic risk of disease transmission but because pregnancy outcomes were likely to be poor; and many of those who disregarded the advice gave birth to offspring who were at a severe fitness disadvantage from the start. Poorly controlled diabetes during pregnancy leads to a very high risk of birth defects and/or miscarriage, and may pose health risks to the mother as well through e.g. an increased risk of preeclampsia (relevant link). It is only very recently that we’ve developed the knowledge and medical technology required to make pregnancy a reasonably safe option for female diabetics. You still had some diabetic females who gave birth before developing diabetes, as in the far past, and the situation was different for males, but either way I feel reasonably confident claiming that if you look for genetic causes of increasing incidence, vertical transmission should not be the main factor to consider.

iv) You need to be careful when evaluating questions like these to keep a distinction between questions relating to drivers of incidence and questions relating to drivers of prevalence at the back of your mind. These two sets of questions are not equivalent.

v) If people are interested in knowing more about the potential causes of the increased incidence of type 1 diabetes, here’s a relevant review paper.”

I followed up with a second comment a while later, because I figured a few points of interest might not have been sufficiently well addressed in my first comment:

“@Nybbler:

A few additional remarks.

i) “Temporal trends in chronic disease incidence rates are almost certainly environmentally induced. If one observes a 50% increase in the incidence of a disorder over 20 yr, it is most likely the result of changes in the environment because the gene pool cannot change that rapidly. Type 1 diabetes is a very dynamic disease. […] results clearly demonstrate that the incidence of type 1 diabetes is rising, bringing with it a large public health problem. Moreover, these findings indicate that something in our environment is changing to trigger a disease response. […] With the exception of a possible role for viruses and infant nutrition, the specific environmental determinants that initiate or precipitate the onset of type 1 diabetes remain unclear.” (Type 1 Diabetes, Etiology and Treatment. Just to make it perfectly clear that although genes matter, environmental factors are the most likely causes of the rising levels of incidence we’ve seen in recent times.)

ii) Just as you need to always keep incidence and prevalence in mind when analyzing these things (for example low prevalence does not mean incidence is necessarily low, or was low in the past; low prevalence could also be a result of a combination of high incidence and high case mortality. I know from experience that even diabetes researchers tend to sometimes overlook stuff like this), you also need to keep the distinction between genotype and phenotype in mind. Given the increased importance of one or more environmental triggers in modern times, penetrance is likely to have changed over time. This means for example that ‘a diabetic genotype’ may have been less fitness reducing in the past than it is today, even if the associated ‘diabetic phenotype’ may on the other hand have been much more fitness reducing than it is now; people who developed diabetes died, but many of the people who might in the current environment be considered high-risk cases may not have been high risk in the far past, because the environmental trigger causing disease was absent, or rarely encountered. Assessing genetic risk for diabetes is complicated, and there’s no general formula for calculating this risk either in the type 1 or type 2 case; monogenic forms of diabetes do exist, but they account for a very small proportion of cases (1-5% of diabetes in young individuals) – most cases are polygenic and display variable levels of penetrance. Note incidentally that a story of environmental factors becoming more important over time is actually implicitly also, to the extent that diabetes is/has been fitness-reducing, a story of selection pressures against diabetic genotypes potentially increasing over time, rather than the opposite (which seems to be the default assumption when only taking into account stuff like the increased survival rates of type 1 diabetics over time). This stuff is complicated.”
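
To spell out the incidence/prevalence point from the comment above with a toy example: under the standard steady-state approximation, prevalence ≈ incidence rate × mean disease duration, so the same low prevalence is compatible with radically different disease regimes. All numbers below are made up:

```python
# prevalence ~= incidence rate * mean disease duration (steady state)
low_incidence_long_survival = 0.0001 * 50   # 1/10,000 per year, 50-year duration
high_incidence_high_mortality = 0.005 * 1   # 50/10,000 per year, death within a year

print(low_incidence_long_survival)    # 0.005 -> 0.5% prevalence
print(high_incidence_high_mortality)  # 0.005 -> the same 0.5% prevalence
```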

I wasn’t completely happy with my second comment (I wrote it relatively fast and didn’t have time to go over it in detail after I’d written it), so I figured it might make sense to add a few details here. One key idea here is of course that you need to distinguish between people who are ‘vulnerable’ to developing type 1 diabetes, and people who actually do develop the disease. If fewer people who today would be considered ‘vulnerable’ developed the disease in the past than is the case now, selection against the ‘vulnerable’ genotype would – all else equal – have been lower throughout evolutionary time than it is today.

All else is not equal because of insulin treatment. But a second key point is that when you’re interested in fitness effects, mortality is not the only variable of interest; many diabetic women who were alive because of insulin during the 20th century, but who were also being discouraged from having children, may well have left no offspring. Males who committed suicide or died from kidney failure in their twenties likely also didn’t leave many offspring. Another point related to the mortality variable is that although diabetes mortality might in the past have been approximated reasonably well by a simple binary outcome variable/process (no diabetes = alive, diabetes = dead), type 1 diabetes has also had large effects on mortality rates throughout the chunk of history during which insulin has been a treatment option; mortality rates 3 or 4 times higher than those of non-diabetics are common in population studies, and such mortality rates add up over time even if base rates are low, especially in a fitness context, since for most type 1 diabetics they are at play throughout the entire fertile period of the life history. Type 2 diabetes is diagnosed mainly in middle-aged individuals, many of whom have already completed their reproductive cycle, but type 1 diabetes is very different in that respect. Of course there are multiple indirect effects at play here as well, e.g. those of mate choice: which is the more attractive potential partner, the individual with diabetes or the one without? What if the diabetic also happens to be blind?
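
The point about modest mortality ratios adding up over the fertile period is easy to quantify. A small sketch, with a made-up base rate:

```python
# Probability of surviving a 30-year fertile window (say ages 15-45),
# given a constant annual mortality rate q; the base rate is made up.
base_q = 0.001  # illustrative annual mortality for young adults

for ratio in (1, 3, 4):  # mortality ratios as seen in population studies
    survival = (1 - ratio * base_q) ** 30
    print(ratio, round(survival, 3))
# ratio 1 -> ~0.970, ratio 3 -> ~0.914, ratio 4 -> ~0.887: a '3-4 times
# higher' mortality rate turns into a gap of several percentage points
# in the probability of surviving the whole reproductive period.
```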

A few other quotes from the comments:

“The majority of patients on insulin in the US are type 2 diabetics, and it is simply wrong that type 2 diabetics are not responsive to insulin treatment. They were likely found to be unresponsive in early trials because of errors of dosage, as they require higher doses of the drug to obtain the same effect as young patients diagnosed with type 1 (the primary group on insulin in the 1930s). However, insulin treatment is not the first-line option in the type 2 context because the condition can usually be treated with insulin-sensitizing agents for a while, until they fail (those drugs will on average fail in something like ~50% of subjects within five years of diagnosis, which – combined with the much higher prevalence of type 2 (an order of magnitude or more, depending on where you are) – is the reason why a majority of patients on insulin have type 2), and these agents tend to a) be more acceptable to the patients (a pill vs an injection) and b) have fewer/less severe side effects on average. Another factor which played a major role in delaying the necessary use of insulin in type 2 patients who could not be adequately controlled via other means was incidentally the fact that insulin causes weight gain, and the obesity–type 2 link was well known.”

“Type 1 is autoimmune, and most cases of type 2 are not, but some forms of type 2 seem to have an autoimmune component as well (“the overall autoantibody frequency in type 2 patients varies between 6% and 10%” – source) (these patients, who can be identified through genetic markers, will on average proceed to insulin dependence because of treatment failure in the context of insulin-sensitizing agents much sooner than is usually the case in patients with type 2). In general type 1 is caused by autoimmune beta cell destruction and type 2 mainly by insulin resistance, but combinations of the two are also possible […], and patients with type 1 can develop insulin resistance just as patients with type 2 can lose beta cells via multiple pathways. The major point here is that the sharp diagnostic distinction between type 1 and type 2 is a major simplification of what’s really going on, and it’s hiding a lot of heterogeneity in both samples. Some patients with type 1 will develop diabetes acutely or subacutely, within days or hours, whereas others will have elevated blood glucose levels for months before medical attention is received and a diagnosis is made (you can tell whether or not blood glucose has been elevated pre-diagnosis by looking at one of the key diagnostic variables, HbA1c, which is a measure of the average blood glucose over the entire lifetime of a red blood cell (~3-4 months) – in some newly diagnosed type 1s this variable is elevated, in others it is not). Some type 1 patients will develop other autoimmune conditions later on, whereas others will not, and some will be more likely to develop complications than others who have the same level of glycemic control.

Type 1 and type 2 diabetes are quite different conditions, but in many respects the diseases overlap significantly (for example they develop many of the same complications, for similar (pathophysiological) reasons) – yet they are both called diabetes. You don’t want to treat a type 2 diabetic with insulin if he can be treated with metformin, and treating a type 1 with metformin will not help – so different treatments are required.”
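
As an aside on the HbA1c remark in the quote above: the mapping from HbA1c to average blood glucose can be made explicit using the ADAG study regression. The example values in the sketch below are my own:

```python
def estimated_average_glucose(hba1c_percent: float) -> float:
    """Estimated average glucose in mg/dL from HbA1c in %, via the
    ADAG study regression: eAG = 28.7 * HbA1c - 46.7."""
    return 28.7 * hba1c_percent - 46.7

# A newly diagnosed patient with an HbA1c of 11% has averaged roughly
# 269 mg/dL (~15 mmol/L) over the preceding months, suggesting a
# protracted onset; an HbA1c of ~5% is consistent with acute onset.
print(estimated_average_glucose(11.0))  # ~269 mg/dL
print(estimated_average_glucose(5.0))   # ~97 mg/dL
```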

“In terms of whether it’s ideal to have one autistic diagnostic group or two (…or three, or…) [this question was a starting point for the debate from which I quote, but I decided not to go much into this topic here], I maintain that to a significant extent the answer to that question relates to what the diagnosis is supposed to accomplish. If it makes sense for researchers to be able to distinguish, which it probably does, but it is not necessary for support organizers/providers to know the subtype in order to provide aid, then you might end up with one ‘official’ category and two (or more) ‘research categories’. I would be fine with that (but again I don’t find this discussion interesting). Again a parallel might be made to diabetes research: endocrinologists are well aware that there’s a huge amount of variation in both the type 1 and type 2 samples, to the extent that it’s sort of silly to even categorize these illnesses using the same name, but they do it anyway for reasons which are sort of obvious. If you’re a type 1 diabetic with an HLA mutation which made you vulnerable to diabetes and you developed diabetes at the age of 5, well, we’ll start you on insulin, try to help you achieve good metabolic control, and screen you regularly for complications. If on the other hand you’re an adult guy who due to a very different genetic vulnerability developed type 1 diabetes at the age of 30 (and later on Graves’ disease at the age of 40, due to the same mutation), well, we’ll start you on insulin, try to help you achieve good metabolic control, and screen you regularly for complications. The only thing type 1 diabetics have in common is the fact that their beta cells die due to some autoimmune process. But the illness could easily be conceived of instead as literally hundreds of different diseases. Currently the distinctions between the different disease-relevant pathophysiological processes don’t matter very much in the treatment context, but they might at some point in the future, and if that happens the differences will start to become more important. People might at that point start to talk about type 1a diabetes, which might be the sort you can delay or stop with gene therapy, and type 1b, which you can’t delay or stop (…yet). Lumping ‘different’ groups together into one diagnostic category is bad if it makes you overlook variation which is important, and this may be a problem in the autism context today, but regardless of the sizes of the diagnostic groups you’ll usually still end up with lots of residual (‘unexplained’) variation.”

I can’t recall to what extent I’ve discussed this last topic – whether type 1 diabetes is best modeled as one illness or many – but it’s an important topic to keep at the back of your mind when you’re reading the diabetes literature. I’m assuming that in some contexts the subgroup heterogeneities, e.g. in terms of treatment response, will be much more important than in others, so you probably need specific subject matter knowledge to make any sort of informed decision about the extent to which potential unobserved heterogeneities may be important in a specific setting. But even if you don’t have that knowledge, ‘a healthy skepticism’, derived from keeping in mind the potential for these factors to play a role, is likely to be more useful than the alternative. In that context I think the (poor, but understandable) standard practice of lumping together type 1 and type 2 diabetics in studies may lead many people familiar with the differences between the two conditions to think along the lines that as long as you know the type, you’re good to go – ‘at least this study only looked at type 1 individuals, not like those crappy studies which do not distinguish between type 1 and type 2, so I can definitely trust these results to apply to the subgroup of type 1 diabetics in which I’m interested’ – and I think this tendency, to the extent that it exists, is unfortunate.

July 8, 2017 Posted by | autism, Diabetes, Epidemiology, Genetics, Medicine, Psychology | Leave a comment

The Personality Puzzle (II)

I have added some more quotes and observations from the book below. Some of the stuff covered in this post is very closely related to material I’ve previously covered on the blog, e.g. here and here, but I didn’t mind reviewing this stuff here. If you’re already familiar with Funder’s RAM model of personality judgment you can probably skip the last half of the post without missing out on anything.

“[T]he trait approach [of personality psychology] focuses exclusively on individual differences. It does not attempt to measure how dominant, sociable, or nervous anybody is in an absolute sense; there is no zero point on any dominance scale or on any measure of any other trait. Instead, the trait approach seeks to measure the degree to which a person might be more or less dominant, sociable, or nervous than someone else. (Technically, therefore, trait measurements are made on ordinal rather than ratio scales.) […] Research shows that the stability of the differences between people increases with age […] According to one major summary of the literature, the correlation coefficient reflecting consistency of individual differences in personality is .31 across childhood, .54 during the college years, and .74 between the ages of 50 and 70 […] The main reason personality becomes more stable during the transition from child to adult to senior citizen seems to be that one’s environment also gets more stable with age […] According to one major review, longitudinal data show that, on average, people tend to become more socially dominant, agreeable, conscientious, and emotionally stable (lower on neuroticism) over time […] [However] people differ from each other in the degree to which they have developed a consistent personality […] Several studies suggest that the consistency of personality is associated with maturity and general mental health […] More-consistent people appear to be less neurotic, more controlled, more mature, and more positive in their relations with others (Donnellan, Conger, & Burzette, 2007; Roberts, Caspi, & Moffitt, 2001; Sherman, Nave, & Funder, 2010).”

“Despite the evidence for the malleability of personality […], it would be a mistake to conclude that change is easy. […] most people like their personalities pretty much the way they are, and do not see any reason for drastic change […] Acting in a way contrary to one’s traits takes effort and can be exhausting […] Second, people have a tendency to blame negative experiences and failures on external forces rather than recognizing the role of their own personality. […] Third, people generally like their lives to be consistent and predictable […] Change requires learning new skills, going new places, meeting new people, and acting in unaccustomed ways. That can make it uncomfortable. […] personality change has both a downside and an upside. […] people tend to like others who are “judgeable,” who are easy to understand, predict, and relate to. But when they don’t know what to expect or how to predict what a person will do, they are more likely to avoid that person. […] Moreover, if one’s personality is constantly changing, then it will be difficult to choose consistent goals that can be pursued over the long term.”

“There is no doubt that people change their behavior from one situation to the next. This obvious fact has sometimes led to the misunderstanding that personality consistency somehow means “acting the same way all the time.” But that’s not what it means at all. […] It is individual differences in behavior that are maintained across situations, not how much a behavior is performed. […] as the effect of the situation gets stronger, the effect of the person tends to get weaker, and vice versa. […] any fair reading of the research literature makes one thing abundantly clear: When it comes to personality, one size does not fit all. People really do act differently from each other. Even when they are all in the same situation, some individuals will be more sociable, nervous, talkative, or active than others. And when the situation changes, those differences will still be there […] the evidence is overwhelming that people are psychologically different from one another, that personality traits exist, that people’s impressions of each other’s personalities are based on reality more than cognitive error, and that personality traits affect important life outcomes […] it is […] important to put the relative role of personality traits and situations into perspective. Situational variables are relevant to how people will act under specific circumstances. Personality traits are better for describing how people act in general […] A sad legacy of the person-situation debate is that many psychologists became used to thinking of the person and the situation as opposing forces […] It is much more accurate to see persons and situations as constantly interacting to produce behavior together. […] Persons and situations interact in three major ways […] First, the effect of a personality variable may depend on the situation, or vice versa. […] Certain types of people go to or find themselves in different types of situations. This is the second kind of person-situation interaction. […] The third kind of interaction stems from the way people change situations by virtue of what they do in them”.

“Shy people are often lonely and may deeply wish to have friends and normal social interactions, but are so fearful of the process of social involvement that they become isolated. In some cases, they won’t ask for help when they need it, even when someone who could easily solve their problem is nearby […]. Because shy people spend a lot of time by themselves, they deny themselves the opportunity to develop normal social skills. When they do venture out, they are so out of practice they may not know how to act. […] A particular problem for shy people is that, typically, others do not perceive them as shy. Instead, to most observers, they seem cold and aloof. […] shy people generally are not cold and aloof, or at least they do not mean to be. But that is frequently how they are perceived. That perception, in turn, affects the lives of shy people in important negative ways and is part of a cycle that perpetuates shyness […] the judgments of others are an important part of the social world and can have a significant effect on personality and life. […] Judgments of others can also affect you through “self-fulfilling prophecies,” more technically known as expectancy effects. These effects can affect both intellectual performance and social behavior.”

“Because people constantly make personality judgments, and because these judgments are consequential, it would seem important to know when and to what degree these judgments are accurate. […] [One relevant] method is called convergent validation. […] Convergent validation is achieved by assembling diverse pieces of information […] that “converge” on a common conclusion […] The more items of diverse information that converge, the more confident the conclusion […] For personality judgments, the two primary converging criteria are interjudge agreement and behavioral prediction. […] psychological research can evaluate personality judgments by asking two questions […] (1) Do the judgments agree with one another? (2) Can they predict behavior? To the degree the answers are Yes, the judgments are probably accurate.”

“In general, judges [of personality] will reach more accurate conclusions if the behaviors they observe are closely related to the traits they are judging. […] A moderator of accuracy […] is a variable that changes the correlation between a judgment and its criterion. Research on accuracy has focused primarily on four potential moderators: properties (1) of the judge, (2) of the target (the person who is judged), (3) of the trait that is judged, and (4) of the information on which the judgment is based. […] Do people know whether they are good judges of personality? The answer appears to be both no and yes […]. No, because people who describe themselves as good judges, in general, are no better than those who rate themselves as poorer in judgmental ability. But the answer is yes, in another sense. When asked which among several acquaintances they can judge most accurately, most people are mostly correct. In other words, we can tell the difference between people who we can and cannot judge accurately. […] Does making an extra effort to be accurate help? Research results so far are mixed.”

“When it comes to accurate judgment, who is being judged might be even more important than who is doing the judging. […] People differ quite a lot in how accurately they can be judged. […] “Judgable” people are those about whom others reach agreement most easily, because they are the ones whose behavior is most predictable from judgments of their personalities […] The behavior of judgable people is organized coherently; even acquaintances who know them in separate settings describe essentially the same person. Furthermore, the behavior of such people is consistent; what they do in the future can be predicted from what they have done in the past. […] Theorists have long postulated that it is psychologically healthy to conceal as little as possible from those around you […]. If you exhibit a psychological façade that produces large discrepancies between the person “inside” and the person you display “outside,” you may feel isolated from the people around you, which can lead to unhappiness, hostility, and depression. Acting in a way that is contrary to your real personality takes effort, and can be psychologically tiring […]. Evidence even suggests that concealing your emotions may be harmful to physical health”.

“All traits are not created equal — some are much easier to judge accurately than others. For example, more easily observed traits, such as “talkativeness,” “sociability,” and other traits related to extraversion, are judged with much higher levels of interjudge agreement than are less visible traits, such as cognitive and ruminative styles and habits […] To find out about less visible, more internal traits like beliefs or tendencies to worry, self-reports […] are more informative […] [M]ore information is usually better, especially when judging certain traits. […] Quantity is not the only important variable concerning information. […] it can be far more informative to observe a person in a weak situation, in which different people do different things, than in a strong situation, in which social norms restrict what people do […] The best situation for judging someone’s personality is one that brings out the trait you want to judge. To evaluate a person’s approach toward his work, the best thing to do is to observe him working. To evaluate a person’s sociability, observations at a party would be more informative […] The accurate judgment of personality, then, depends on both the quantity and the quality of the information on which it is based. More information is generally better, but it is just as important for the information to be relevant to the traits that one is trying to judge.”

“In order to get from an attribute of an individual’s personality to an accurate judgment of that trait, four things must happen […]. First, the person being judged must do something relevant; that is, informative about the trait to be judged. Second, this information must be available to a judge. Third, this judge must detect this information. Fourth and finally, the judge must utilize this information correctly. […] If the process fails at any step — the person in question never does something relevant, or does it out of sight of the judge, or the judge doesn’t notice, or the judge makes an incorrect interpretation — accurate personality judgment will fail. […] Traditionally, efforts to improve accuracy have focused on attempts to get judges to think better, to use good logic and avoid inferential errors. These efforts are worthwhile, but they address only one stage — utilization — out of the four stages of accurate personality judgment. Improvement could be sought at the other stages as well […] Becoming a better judge of personality […] involves much more than “thinking better.” You should also try to create an interpersonal environment where other people can be themselves and where they feel free to let you know what is really going on.”

July 5, 2017 Posted by | Books, Psychology | Leave a comment

A few diabetes papers of interest

i. An Inverse Relationship Between Age of Type 2 Diabetes Onset and Complication Risk and Mortality: The Impact of Youth-Onset Type 2 Diabetes.

“This study compared the prevalence of complications in 354 patients with T2DM diagnosed between 15 and 30 years of age (T2DM15–30) with that in a duration-matched cohort of 1,062 patients diagnosed between 40 and 50 years (T2DM40–50). It also examined standardized mortality ratios (SMRs) according to diabetes age of onset in 15,238 patients covering a wider age-of-onset range.”

“After matching for duration, despite their younger age, T2DM15–30 had more severe albuminuria (P = 0.004) and neuropathy scores (P = 0.003). T2DM15–30 were as commonly affected by metabolic syndrome factors as T2DM40–50 but less frequently treated for hypertension and dyslipidemia (P < 0.0001). An inverse relationship between age of diabetes onset and SMR was seen, which was the highest for T2DM15–30 (3.4 [95% CI 2.7–4.2]). SMR plots adjusting for duration show that for those with T2DM15–30, SMR is the highest at any chronological age, with a peak SMR of more than 6 in early midlife. In contrast, mortality for older-onset groups approximates that of the background population.”

“Young people with type 2 diabetes are likely to be obese, with a clustering of unfavorable cardiometabolic risk factors all present at a very early age (3,4). In adolescents with type 2 diabetes, a 10–30% prevalence of hypertension and an 18–54% prevalence of dyslipidemia have been found, much greater than would be expected in a population of comparable age (4).”

CONCLUSIONS The negative effect of diabetes on morbidity and mortality is greatest for those diagnosed at a young age compared with T2DM of usual onset.”

It’s important to keep base rates in mind when interpreting the reported SMRs, but either way this is interesting.
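
To illustrate the base-rate point: an SMR is a ratio, so the absolute excess mortality it implies depends entirely on the background rate it multiplies. A toy calculation (the background rates are illustrative only, not taken from the paper):

```python
def excess_deaths_per_100k(base_rate: float, smr: float) -> float:
    """Absolute annual excess deaths per 100,000 implied by an SMR."""
    return (smr - 1) * base_rate * 100_000

# A high SMR on a young-adult base rate vs. a modest SMR on a
# middle-aged base rate (background rates are illustrative only):
print(excess_deaths_per_100k(0.0008, 3.4))  # ~192 excess deaths/100k/yr
print(excess_deaths_per_100k(0.008, 1.4))   # ~320 excess deaths/100k/yr
# The lower SMR can represent the larger absolute excess mortality.
```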

ii. Effects of Sleep Deprivation on Hypoglycemia-Induced Cognitive Impairment and Recovery in Adults With Type 1 Diabetes.

OBJECTIVE To ascertain whether hypoglycemia in association with sleep deprivation causes greater cognitive dysfunction than hypoglycemia alone and protracts cognitive recovery after normoglycemia is restored.”

CONCLUSIONS Hypoglycemia per se produced a significant decrement in cognitive function; coexisting sleep deprivation did not have an additive effect. However, after restoration of normoglycemia, preceding sleep deprivation was associated with persistence of hypoglycemic symptoms and greater and more prolonged cognitive dysfunction during the recovery period. […] In the current study of young adults with type 1 diabetes, the impairment of cognitive function that was associated with hypoglycemia was not exacerbated by sleep deprivation. […] One possible explanation is that hypoglycemia per se exerts a ceiling effect on the degree of cognitive dysfunction as is possible to demonstrate with conventional tests.”

iii. Intensive Diabetes Treatment and Cardiovascular Outcomes in Type 1 Diabetes: The DCCT/EDIC Study 30-Year Follow-up.

“The DCCT randomly assigned 1,441 patients with type 1 diabetes to intensive versus conventional therapy for a mean of 6.5 years, after which 93% were subsequently monitored during the observational Epidemiology of Diabetes Interventions and Complications (EDIC) study. Cardiovascular disease (nonfatal myocardial infarction and stroke, cardiovascular death, confirmed angina, congestive heart failure, and coronary artery revascularization) was adjudicated using standardized measures.”

“During 30 years of follow-up in DCCT and EDIC, 149 cardiovascular disease events occurred in 82 former intensive treatment group subjects versus 217 events in 102 former conventional treatment group subjects. Intensive therapy reduced the incidence of any cardiovascular disease by 30% (95% CI 7, 48; P = 0.016), and the incidence of major cardiovascular events (nonfatal myocardial infarction, stroke, or cardiovascular death) by 32% (95% CI −3, 56; P = 0.07). The lower HbA1c levels during the DCCT/EDIC statistically account for all of the observed treatment effect on cardiovascular disease risk.”

CONCLUSIONS Intensive diabetes therapy during the DCCT (6.5 years) has long-term beneficial effects on the incidence of cardiovascular disease in type 1 diabetes that persist for up to 30 years.”

I was of course immediately thinking that perhaps they had not considered whether this might just be the result of the HbA1c differences achieved during the trial being maintained long-term (during follow-up) – in which case they would not so much have been measuring the effect of the ‘metabolic memory’ component as just measuring standard population outcome differences resulting from long-term HbA1c differences. But they (of course) had thought about that, and that’s not what’s going on here, which is what makes the study particularly interesting:

“Mean HbA1c during the average 6.5 years of DCCT intensive therapy was ∼2% (20 mmol/mol) lower than that during conventional therapy (7.2 vs. 9.1% [55.6 vs. 75.9 mmol/mol], P < 0.001). Subsequently during EDIC, HbA1c differences between the treatment groups dissipated. At year 11 of EDIC follow-up and most recently at 19–20 years of EDIC follow-up, there was only a trivial difference between the original intensive and conventional treatment groups in the mean level of HbA1c […]”

They do admittedly find a statistically significant difference between the (weighted) long-term HbA1c values of the two groups, but that difference is certainly nowhere near large enough to explain the observed clinical differences in outcomes. Another argument in favour of the view that what’s driving these differences is metabolic memory is the observation that the difference in outcomes between the treatment and control groups is smaller now than it was ten years ago (my default expectation would probably, if anything, be for the outcomes of the two groups to converge long-term if the samples were properly randomized to start with, but this is not the only plausible model, and it sort of depends on how you model the risk function, as they also discuss in the paper):

“[T]he risk reduction of any CVD with intensive therapy through 2013 is now less than that reported previously through 2004 (30% [P = 0.016] vs. 47% [P = 0.005]), and likewise, the risk reduction per 10% lower mean HbA1c through 2013 was also somewhat lower than previously reported but still highly statistically significant (17% [P = 0.0001] vs. 20% [P = 0.001]).”

iv. Commonly Measured Clinical Variables Are Not Associated With Burden of Complications in Long-standing Type 1 Diabetes: Results From the Canadian Study of Longevity in Diabetes.

“The Canadian Study of Longevity in Diabetes actively recruited 325 individuals who had T1D for 50 or more years (5). Subjects completed a questionnaire, and recent laboratory tests and eye reports were provided by primary care physicians and eye specialists, respectively. […] The 325 participants were 65.5 ± 8.5 years old with diagnosis at age 10 years (interquartile range [IQR] 6.0, 16) and duration of 54.9 ± 6.4 years.”

“In univariable analyses, the following were significantly associated with a greater burden of complications: presence of hypertension, statin, aspirin and ACE inhibitor or ARB use, higher Problem Areas in Diabetes (PAID) and Geriatric Depression Scale (GDS) scores, and higher levels of triglycerides and HbA1c. The following were significantly associated with a lower burden of complications: current physical activity, higher quality of life, and higher HDL cholesterol.”

“In the multivariable analysis, a higher PAID score was associated with a greater burden of complications (risk ratio [RR] 1.15 [95% CI 1.06–1.25] for each 10-point-higher score). Aspirin and statin use were also associated with a greater burden of complications (RR 1.24 [95% CI 1.01–1.52] and RR 1.34 [95% CI 1.05–1.70], respectively) (Table 1), whereas HbA1c was not.”

“Our findings indicate that in individuals with long-standing T1D, burden of complications is largely not associated with historical characteristics or simple objective measurements, as associations with statistical significance likely reflect reverse causality. Notably, HbA1c was not associated with burden of complications […]. This further confirms that other unmeasured variables such as genetic, metabolic, or physiologic characteristics may best identify mechanisms and biomarkers of complications in long-standing T1D.”

v. Cardiovascular Risk Factor Targets and Cardiovascular Disease Event Risk in Diabetes: A Pooling Project of the Atherosclerosis Risk in Communities Study, Multi-Ethnic Study of Atherosclerosis, and Jackson Heart Study.

“Controlling cardiovascular disease (CVD) risk factors in diabetes mellitus (DM) reduces the number of CVD events, but the effects of multifactorial risk factor control are not well quantified. We examined whether being at targets for blood pressure (BP), LDL cholesterol (LDL-C), and glycated hemoglobin (HbA1c) together are associated with lower risks for CVD events in U.S. adults with DM. […] We studied 2,018 adults, 28–86 years of age with DM but without known CVD, from the Atherosclerosis Risk in Communities (ARIC) study, Multi-Ethnic Study of Atherosclerosis (MESA), and Jackson Heart Study (JHS). Cox regression examined coronary heart disease (CHD) and CVD events over a mean 11-year follow-up in those individuals at BP, LDL-C, and HbA1c target levels, and by the number of controlled risk factors.”

“Of 2,018 DM subjects (43% male, 55% African American), 41.8%, 32.1%, and 41.9% were at target levels for BP, LDL-C, and HbA1c, respectively; 41.1%, 26.5%, and 7.2% were at target levels for any one, two, or all three factors, respectively. Being at BP, LDL-C, or HbA1c target levels related to 17%, 33%, and 37% lower CVD risks and 17%, 41%, and 36% lower CHD risks, respectively (P < 0.05 to P < 0.0001, except for BP in CHD risk); those subjects with one, two, or all three risk factors at target levels (vs. none) had incrementally lower adjusted risks of CVD events of 36%, 52%, and 62%, respectively, and incrementally lower adjusted risks of CHD events of 41%, 56%, and 60%, respectively (P < 0.001 to P < 0.0001). Propensity score adjustment showed similar findings.”

“In our pooled analysis of subjects with DM in three large-scale U.S. prospective studies, the more factors among HbA1c, BP, and LDL-C that were at goal levels, the lower are the observed CHD and CVD risks (∼60% lower when all three factors were at goal levels compared with none). However, fewer than one-tenth of our subjects were at goal levels for all three factors. These findings underscore the value of achieving target or lower levels of these modifiable risk factors, especially in combination, among persons with DM for the future prevention of CHD and CVD events.”

In some studies you see very low proportions of patients reaching the target values because the targets are stupid (to be perfectly frank about it). The HbA1c target applied in this study was a level <53.0 mmol/mol (7%), which is definitely not crazy if the majority of the individuals included were type 2, which they almost certainly were. You can argue about the BP goal, but it’s obvious here that the authors are perfectly aware of the contentiousness of this variable.

It’s incidentally noteworthy – and the authors do take note of it, of course – that one of the primary results of this study (~60% lower risk when all risk factors reach the target goal), which includes a large proportion of African Americans in the study sample, is almost identical to the results of the Danish Steno-2 clinical trial, which included only Danish white patients (and the results of which I have discussed here on the blog before). In the Steno study, the result was “a 57% reduction in CVD death and a 59% reduction in CVD events.”

vi. Illness Identity in Adolescents and Emerging Adults With Type 1 Diabetes: Introducing the Illness Identity Questionnaire.

“The current study examined the utility of a new self-report questionnaire, the Illness Identity Questionnaire (IIQ), which assesses the concept of illness identity, or the degree to which type 1 diabetes is integrated into one’s identity. Four illness identity dimensions (engulfment, rejection, acceptance, and enrichment) were validated in adolescents and emerging adults with type 1 diabetes. Associations with psychological and diabetes-specific functioning were assessed.”

“A sample of 575 adolescents and emerging adults (14–25 years of age) with type 1 diabetes completed questionnaires on illness identity, psychological functioning, diabetes-related problems, and treatment adherence. Physicians were contacted to collect HbA1c values from patients’ medical records. Confirmatory factor analysis (CFA) was conducted to validate the IIQ. Path analysis with structural equation modeling was used to examine associations between illness identity and psychological and diabetes-specific functioning.”

“The first two identity dimensions, engulfment and rejection, capture a lack of illness integration, or the degree to which having diabetes is not well integrated as part of one’s sense of self. Engulfment refers to the degree to which diabetes dominates a person’s identity. Individuals completely define themselves in terms of their diabetes, which invades all domains of life (9). Rejection refers to the degree to which diabetes is rejected as part of one’s identity and is viewed as a threat or as unacceptable to the self. […] Acceptance refers to the degree to which individuals accept diabetes as a part of their identity, besides other social roles and identity assets. […] Enrichment refers to the degree to which having diabetes results in positive life changes, benefits one’s identity, and enables one to grow as a person (12). […] These changes can manifest themselves in different ways, including an increased appreciation for life, a change of life priorities, and a more positive view of the self (14).”

“Previous quantitative research assessing similar constructs has suggested that the degree to which individuals integrate their illness into their identity may affect psychological and diabetes-specific functioning in patients. Diabetes intruding upon all domains of life (similar to engulfment) [has been] related to more depressive symptoms and more diabetes-related problems […] In contrast, acceptance has been related to fewer depressive symptoms and diabetes-related problems and to better glycemic control (6,15). Similarly, benefit finding has been related to fewer depressive symptoms and better treatment adherence (16). […] The current study introduces the IIQ in individuals with type 1 diabetes as a way to assess all four illness identity dimensions.”

“The Cronbach α was 0.90 for engulfment, 0.84 for rejection, 0.85 for acceptance, and 0.90 for enrichment. […] CFA indicated that the IIQ has a clear factor structure, meaningfully differentiating four illness identity dimensions. Rejection was related to worse treatment adherence and higher HbA1c values. Engulfment was related to less adaptive psychological functioning and more diabetes-related problems. Acceptance was related to more adaptive psychological functioning, fewer diabetes-related problems, and better treatment adherence. Enrichment was related to more adaptive psychological functioning. […] the concept of illness identity may help to clarify why certain adolescents and emerging adults with diabetes show difficulties in daily functioning, whereas others succeed in managing developmental and diabetes-specific challenges.”
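
For readers unfamiliar with the α values reported above: Cronbach’s α is a standard measure of internal consistency, i.e. of how strongly the items of a subscale hang together, and it is simple to compute. A minimal sketch with simulated data (the toy numbers are mine, not the study’s):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of sum score)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Toy subscale: 5 items all driven by one latent dimension plus noise
rng = np.random.default_rng(3)
latent = rng.normal(size=(500, 1))
scores = latent + rng.normal(scale=0.8, size=(500, 5))
print(round(cronbach_alpha(scores), 2))  # ~0.89, comparable to the IIQ scales
```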

June 30, 2017 Posted by | Cardiology, Diabetes, Medicine, Psychology, Studies | Leave a comment

The Personality Puzzle (I)

I don’t really like this book, a personality psychology introductory textbook by David Funder. I’ve read the first 400 pages (out of 700), but I’m still debating whether or not to finish it; it just isn’t very good. The level of coverage is low, it’s very fluffy, and the signal-to-noise ratio is nowhere near where I’d like it to be when I’m reading academic texts. Some parts of it frankly read like popular science. However, despite not thinking the book is all that great, I can’t justify not blogging it; stuff I don’t blog I tend to forget, and if I’m reading a mediocre textbook anyway I should at least try to pick out some of the decent stuff in there which keeps me reading, and try to make it easier for me to recall that stuff later. Some parts of the book, and some of the arguments/observations included in it, are in my opinion just plain silly or stupid, but I won’t go into those things in this post because I don’t really see what would be the point of doing that.

The main reason why I decided to give the book a go was that I liked Funder’s book Personality Judgment, which I read a few years ago and which deals with some topics also covered superficially in this text. That is a much better book, in my opinion, at least as far as I can remember (…though I have actually been starting to wonder whether it was really all that great, if it was written by the same guy who wrote this book…), if you’re interested in these matters. If you’re interested in a more ‘pure’ personality psychology text, a significantly better alternative is Leary et al.‘s Handbook of Individual Differences in Social Behavior. Because of the multi-author format it also includes some very poor chapters, but those tend to be somewhat easy to identify and skip to get to the good stuff if you’re so inclined, and the general coverage is at a much higher level than that of this book.

Below I have added some quotes and observations from the first 150 pages of the book.

“A theory that accounts for certain things extremely well will probably not explain everything else so well. And a theory that tries to explain almost everything […] would probably not provide the best explanation for any one thing. […] different [personality psychology] basic approaches address different sets of questions […] each basic approach usually just ignores the topics it is not good at explaining.”

Personality psychology tends to emphasize how individuals are different from one another. […] Other areas of psychology, by contrast, are more likely to treat people as if they were the same or nearly the same. Not only do the experimental subfields of psychology, such as cognitive and social psychology, tend to ignore how people are different from each other, but also the statistical analyses central to their research literally put individual differences into their “error” terms […] Although the emphasis of personality psychology often entails categorizing and labeling people, it also leads the field to be extraordinarily sensitive — more than any other area of psychology — to the fact that people really are different.”

“If you want to “look at” personality, what do you look at, exactly? Four different things. First, and perhaps most obviously, you can have the person describe herself. Personality psychologists often do exactly this. Second, you can ask people who know the person to describe her. Third, you can check on how the person is faring in life. And finally, you can observe what the person does and try to measure her behavior as directly and objectively as possible. These four types of clues can be called S [self-judgments], I [informants], L [life], and B [behavior] data […] The point of the four-way classification […] is not to place every kind of data neatly into one and only one category. Rather, the point is to illustrate the types of data that are relevant to personality and to show how they all have both advantages and disadvantages.”

“For cost-effectiveness, S data simply cannot be beat. […] According to one analysis, 70 percent of the articles in an important personality journal were based on self-report (Vazire, 2006).”

“I data are judgments by knowledgeable “informants” about general attributes of the individual’s personality. […] Usually, close acquaintanceship paired with common sense is enough to allow people to make judgments of each other’s attributes with impressive accuracy […]. Indeed, they may be more accurate than self-judgments, especially when the judgments concern traits that are extremely desirable or extremely undesirable […]. Only when the judgments are of a technical nature (e.g., the diagnosis of a mental disorder) does psychological education become relevant. Even then, acquaintances without professional training are typically well aware when someone has psychological problems […] psychologists often base their conclusions on contrived tests of one kind or another, or on observations in carefully constructed and controlled environments. Because I data derive from behaviors informants have seen in daily social interactions, they enjoy an extra chance of being relevant to aspects of personality that affect important life outcomes. […] I data reflect the opinions of people who interact with the person every day; they are the person’s reputation. […] personality judgments can [however] be [both] unfair as well as mistaken […] The most common problem that arises from letting people choose their own informants — the usual practice in research — may be the “letter of recommendation effect” […] research participants may tend to nominate informants who think well of them, leading to I data that provide a more positive picture than might have been obtained from more neutral parties.”

“L data […] are verifiable, concrete, real-life facts that may hold psychological significance. […] An advantage of using archival records is that they are not prone to the potential biases of self-report or the judgments of others. […] [However] L data have many causes, so trying to establish direct connections between specific attributes of personality and life outcomes is chancy. […] a psychologist can predict a particular outcome from psychological data only to the degree that the outcome is psychologically caused. L data often are psychologically caused only to a small degree.”

“The idea of B data is that participants are found, or put, in some sort of a situation, sometimes referred to as a testing situation, and then their behavior is directly observed. […] B data are expensive [and] are not used very often compared to the other types. Relatively few psychologists have the necessary resources.”

“Reliable data […] are measurements that reflect what you are trying to assess and are not affected by anything else. […] When trying to measure a stable attribute of personality—a trait rather than a state — the question of reliability reduces to this: Can you get the same result more than once? […] Validity is the degree to which a measurement actually reflects what one thinks or hopes it does. […] for a measure to be valid, it must be reliable. But a reliable measure is not necessarily valid. […] A measure that is reliable gives the same answer time after time. […] But even if a measure is the same time after time, that does not necessarily mean it is correct.”
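To illustrate the distinction, here’s a toy example of my own (a biased-scale analogy, not from the book): a bathroom scale that consistently reads five kilograms too high is perfectly reliable, yet not valid.

```python
import numpy as np

rng = np.random.default_rng(1)
true_weight = 70.0

# Reliable but not valid: a precise scale that always reads ~5 kg too high
biased = true_weight + 5.0 + rng.normal(scale=0.1, size=10)
# Not reliable (and hence not valid): right on average, wildly inconsistent
noisy = true_weight + rng.normal(scale=5.0, size=10)

print(f"biased scale: mean {biased.mean():.1f} kg, sd {biased.std(ddof=1):.2f}")
print(f"noisy scale:  mean {noisy.mean():.1f} kg, sd {noisy.std(ddof=1):.2f}")
```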

“[M]ost personality tests provide S data. […] Other personality tests yield B data. […] IQ tests […] yield B data. Imagine trying to assess intelligence using an S-data test, asking questions such as “Are you an intelligent person?” and “Are you good at math?” Researchers have actually tried this, but simply asking people whether they are smart turns out to be a poor way to measure intelligence”.

“The answer an individual gives to any one question might not be particularly informative […] a single answer will tend to be unreliable. But if a group of similar questions is asked, the average of the answers ought to be much more stable, or reliable, because random fluctuations tend to cancel each other out. For this reason, one way to make a personality test more reliable is simply to make it longer.”
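This averaging claim is easy to check by simulation, assuming the classical test-theory setup where each answer equals a true score plus independent noise (my sketch, not the book’s): the correlation between the test score and the underlying trait rises steadily as items are added.

```python
import numpy as np

rng = np.random.default_rng(2)
n_people = 1_000
true_trait = rng.normal(size=n_people)

for k in (1, 5, 20, 80):
    # each answer = true trait + independent random fluctuation
    items = true_trait[:, None] + rng.normal(scale=2.0, size=(n_people, k))
    test_score = items.mean(axis=1)  # fluctuations cancel out as k grows
    r = np.corrcoef(test_score, true_trait)[0, 1]
    print(f"{k:3d} items: r(test score, true trait) = {r:.2f}")
```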

“The factor analytic method of test construction is based on a statistical technique. Factor analysis identifies groups of things […] that seem to have something in common. […] To use factor analysis to construct a personality test, researchers begin with a long list of […] items […] The next step is to administer these items to a large number of participants. […] The analysis is based on calculating correlation coefficients between each item and every other item. Many items […] will not correlate highly with anything and can be dropped. But the items that do correlate with each other can be assembled into groups. […] The next steps are to consider what the items have in common, and then name the factor. […] Factor analysis has been used not only to construct tests, but also to decide how many fundamental traits exist […] Various analysts have come up with different answers.”

[The Big Five were derived from factor analyses.]
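To make the procedure concrete, here’s a minimal sketch using scikit-learn on simulated responses with a hypothetical two-factor structure; this only illustrates the method, it is not how any real inventory was constructed.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n = 500
# hypothetical latent traits; items 0-3 load on the first, items 4-7 on the second
f1, f2 = rng.normal(size=(2, n))
items = np.column_stack(
    [f1 + rng.normal(scale=0.6, size=n) for _ in range(4)]
    + [f2 + rng.normal(scale=0.6, size=n) for _ in range(4)]
)

fa = FactorAnalysis(n_components=2).fit(items)
# the loading matrix shows which items group together; the analyst then
# inspects what the grouped items have in common and names each factor
print(np.round(fa.components_, 2))
```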

“The empirical strategy of test construction is an attempt to allow reality to speak for itself. […] Like the factor analytic approach described earlier, the first step of the empirical approach is to gather lots of items. […] The second step, however, is quite different. For this step, you need to have a sample of participants who have already independently been divided into the groups you are interested in. Occupational groups and diagnostic categories are often used for this purpose. […] Then you are ready for the third step: administering your test to your participants. The fourth step is to compare the answers given by the different groups of participants. […] The basic assumption of the empirical approach […] is that certain kinds of people answer certain questions on personality inventories in distinctive ways. If you answer questions the same way as members of some occupational or diagnostic group did in the original derivation study, then you might belong to that group too. […] responses to empirically derived tests are difficult to fake. With a personality test of the straightforward, S-data variety, you can describe yourself the way you want to be seen, and that is indeed the score you will get. But because the items on empirically derived scales sometimes seem backward or absurd, it is difficult to know how to answer in such a way as to guarantee the score you want. This is often held up as one of the great advantages of the empirical approach […] [However] empirically derived tests are only as good as the criteria by which they are developed or against which they are cross-validated. […] the empirical correlates of item responses by which these tests are assembled are those found in one place, at one time, with one group of participants. If no attention is paid to item content, then there is no way to be confident that the test will work in a similar manner at another time, in another place, with different participants. […] A particular concern is that the empirical correlates of item response might change over time. The MMPI was developed decades ago and has undergone a major revision only once”.
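Here’s a sketch of the empirical item-selection logic with simulated answers and hypothetical groups (real instruments like the MMPI of course involve far more than this):

```python
import numpy as np

rng = np.random.default_rng(4)
n_items, n_per_group = 50, 300

# step 2: participants pre-sorted into a criterion group and a comparison group;
# simulate true/false answers where only items 0-4 genuinely discriminate
p_comparison = np.full(n_items, 0.5)
p_criterion = p_comparison.copy()
p_criterion[:5] += 0.3

answers_comparison = rng.random((n_per_group, n_items)) < p_comparison
answers_criterion = rng.random((n_per_group, n_items)) < p_criterion

# step 4: retain the items whose endorsement rates differ most between groups,
# paying no attention to what the items appear to mean
diff = answers_criterion.mean(axis=0) - answers_comparison.mean(axis=0)
retained = np.argsort(-np.abs(diff))[:5]
print("items retained for the scale:", sorted(retained.tolist()))
```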

“It is not correct, for example, that the significance level provides the probability that the substantive (non-null) hypothesis is true. […] the significance level gives the probability of getting the result one found if the null hypothesis were true. One statistical writer offered the following analogy (Dienes, 2011): The probability that a person is dead, given that a shark has bitten his head off, is 1.0. However, the probability that a person’s head was bitten off by a shark, given that he is dead, is much lower. The probability of the data given the hypothesis, and of the hypothesis given the data, is not the same thing. And the latter is what we really want to know. […] An effect size is more meaningful than a significance level. […] It is both facile and misleading to use the frequently taught method of squaring correlations if the intention is to evaluate effect size.”
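The shark point can be put in numbers. In the sketch below I assume, purely for illustration, that only 10 % of tested effects are real; the false-positive rate given the null is held at 5 % by construction, yet roughly half of all significant results turn out to come from true nulls.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_studies, n_per_group = 10_000, 30
effect_is_real = rng.random(n_studies) < 0.10  # assumed base rate of real effects

significant = np.empty(n_studies, dtype=bool)
for i in range(n_studies):
    mu = 0.5 if effect_is_real[i] else 0.0  # true standardized effect when real
    group_a = rng.normal(mu, 1.0, n_per_group)
    group_b = rng.normal(0.0, 1.0, n_per_group)
    significant[i] = stats.ttest_ind(group_a, group_b).pvalue < 0.05

# P(significant | null) is ~.05 by construction; P(null | significant) is much larger
p_null_given_sig = 1.0 - effect_is_real[significant].mean()
print(f"P(null is true | p < .05) = {p_null_given_sig:.2f}")
```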

June 30, 2017 Posted by | Books, Psychology, Statistics